
Security Digest: Be Careful What You Share with ChatGPT

April 4, 2023

Are you searching for a way to enhance your organization's security awareness training? Look no further than Haekka! Schedule a demo with us to discover how we can help you reduce costs by 75% while boosting employee satisfaction with our training by 81%.

Over the past few weeks, the launch of ChatGPT with GPT-4 capability, new ChatGPT APIs, and ChatGPT plugins (which connect ChatGPT to external services like Kayak and Instacart) has generated a surge in interest in, and use of, the service.

This growing adoption has sparked concerns about the security and privacy of the data employees share with ChatGPT. The security firm Cyberhaven reports that, across 1.6 million monitored workers, it has blocked 4.2% of ChatGPT requests because they risked exposing sensitive information. It estimates that just 1% of employees are responsible for 80% of the high-risk requests sent to ChatGPT.

The types of sensitive data users are potentially sharing with ChatGPT include:

👉 Source code: ChatGPT can write, analyze, explain, and summarize code.

👉 Confidential company information: One user allegedly shared their company's strategic plan with ChatGPT to create a presentation.

👉 Personally identifiable information (PII): ChatGPT is often used to draft email responses, which can contain PII such as names and contact details.

In some respects, ChatGPT is like any other third-party service. As with all third parties, it is crucial to limit the data you share with it so you don't breach company policies, procedures, or regulatory requirements. Exercise the same caution you would with Google or any other external service, and err on the side of sharing less information rather than more.
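To make the "share less" guidance concrete, here is a minimal sketch of a pre-submission redaction filter. It is a hypothetical illustration, not a real Haekka or OpenAI tool: the REDACTION_PATTERNS list, the redact function, and the placeholder tokens are all our own assumptions, and the three patterns below (email addresses, US Social Security numbers, long key-like strings) cover only the most obvious cases. A real deployment would rely on dedicated DLP tooling with far broader detection.

```python
import re

# Illustrative patterns only; real DLP coverage is much broader.
REDACTION_PATTERNS = [
    # Email addresses
    (re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}"), "[EMAIL]"),
    # US Social Security numbers (NNN-NN-NNNN)
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    # Long key-like tokens that often indicate API keys or secrets
    (re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"), "[SECRET]"),
]

def redact(text: str) -> str:
    """Replace sensitive-looking substrings with placeholder tokens
    before the text leaves the organization (e.g., is pasted into ChatGPT)."""
    for pattern, placeholder in REDACTION_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

if __name__ == "__main__":
    prompt = ("Summarize this ticket: jane.doe@example.com says her SSN "
              "123-45-6789 showed up in a log next to key "
              "sk_live_abcdefghijklmnopqrstuvwxyz123456.")
    print(redact(prompt))
    # Summarize this ticket: [EMAIL] says her SSN [SSN] showed up
    # in a log next to key [SECRET].
```

Whatever tooling you use, the idea is the same: strip or tokenize sensitive values before they ever leave your environment.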

However, unlike many existing software tools we use daily, ChatGPT presents unique potential vulnerabilities as a large language model (LLM). Because prompts may be retained and used to further train the model, there is a possibility that attackers could coax ChatGPT into revealing sensitive data that other users have shared, simply through the answers it returns to their queries.

🖥️ As an experiment, we sought ChatGPT's opinion on users sharing sensitive data with it. Here's what it had to say:

"Sharing sensitive company data with large language models (LLMs) like ChatGPT poses significant risks, as it could lead to data leaks, unauthorized access, and potential legal issues. As employees increasingly use AI services like ChatGPT for productivity purposes, the risk of exposing confidential information, intellectual property, and trade secrets grows.

In some cases, companies operating in regulated industries may face non-compliance with data privacy and security regulations, resulting in fines and reputational damage. Additionally, the loss of direct control over how the information is processed raises concerns about data misuse by third parties. To mitigate these risks, businesses should implement strict data management policies, access controls, and employee training programs that specifically address the use of AI chatbots and language models, ensuring the protection of their valuable information while benefiting from AI advancements."

----

Want to subscribe your team to weekly posts like this in Slack? Check out Haekka Streams. All Streams are 100% customizable by admins before they are sent to your team. We curate current scams and relevant security and privacy news, summarize it for you, pull out teachable moments, and measure engagement - all in Slack.

