Conversational AI has lately been the talk of the town for most knowledge workers worldwide. While this technology has existed for a few years, OpenAI’s ChatGPT, released this year, is leaps and bounds ahead of anything seen before.
This trailblazing web app has been used for everything from writing wedding speeches to helping people do their jobs more efficiently. One thing is certain: a shocking number of users are putting personal information into this new technology.
While this can be dismissed as people trying out a trendy new app, it is quickly landing on every cyber security expert’s radar as a major potential threat. After all, 72% of vulnerabilities are due to flaws in web application coding.
This article will review some of the recent cyber security scares linked to this conversational AI and put together a few ChatGPT cyber security best practices, so your workplace can use this new technology safely and efficiently.
Conversation Histories Revealed
ChatGPT requires users to create an account or sign in via Google or Microsoft. This requirement allows OpenAI to track past searches and, ideally, improve future responses based on them. For convenience, the interface also lets users revisit their searches from the past 30 days.
This feature recently led to a potentially devastating bug: users noticed that their search history contained entries from other, unrelated users. The bug was first spotted by users who saw prompts in languages they did not use. While it only revealed the title or question asked and not the result, bugs like this could lead to massive breaches of privacy.
This bug hasn’t led to any significant information disclosure, and OpenAI has since patched the issue. Still, it could have been serious had users entered personal information in their prompts.
User Email Addresses Revealed
ChatGPT 3.5 is the free version most people know and have used, but OpenAI also offers the more advanced ChatGPT 4 for a small monthly fee. However, the payment page for that upgrade has been the source of mix-ups that led to privacy violations.
Since users must create an account linked to an email address even for the free version, OpenAI prepopulates that address when a user clicks through to the payment page. However, several users have reported seeing an incorrect email address already entered when they arrived there.
While this is a relatively minor issue on its own, it could become a much larger one, since some users would rather their employers or colleagues not know about their ChatGPT usage.
Malicious Code Threat
From the very inception of the service, users have marveled at ChatGPT's capacity to write powerful, clean code that could be used in real-world applications. While this capability has brought massive upsides by making coding more accessible, it has also led to nefarious uses.
In the early days of this AI, users could ask it to code anything from a simple webpage to fully fledged malware and phishing messages. OpenAI has since put safeguards in place against such requests, but users still find ways around them to get the results they want.
ChatGPT Cyber Security Best Practices
ChatGPT is not a technology that should be maligned more than any other. It can lead to extremely powerful work efficiency improvements, but as with any other work application, it must be used carefully to remain a safe option.
No sensitive data
While OpenAI has assured the world it has fixed the issues related to displaying other users' prompt history, inputting sensitive data into a tool like ChatGPT is still not recommended.
Even if this tool had a stellar privacy record, inputting confidential business information would still be a bad idea: anything typed into a third-party service leaves your organization's control.
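One way to enforce this practice is to scrub prompts for obviously sensitive values before they ever leave the company network. The sketch below is purely illustrative (the `scrub_prompt` function and its patterns are hypothetical, not part of any real DLP product or OpenAI tooling), and a real deployment would rely on a proper data loss prevention tool rather than a handful of regexes:

```python
import re

# Hypothetical patterns for a few common kinds of sensitive data.
# A production deployment would use a dedicated DLP solution instead.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely sensitive values with placeholders before the
    prompt is sent to an external conversational AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(scrub_prompt("Email jane.doe@example.com about invoice 4512"))
# -> Email [REDACTED EMAIL] about invoice 4512
```

Even a crude filter like this makes the policy concrete for users: the placeholder in the output is a visible reminder that certain data should never reach the tool in the first place.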
Legal and regulatory risks
While ChatGPT can be a good starting point or research tool, it is ill-advised to have it execute legal tasks or tasks that might require a certain type of certification.
It might be tempting to cut corners or costs this way, but doing so could have devastating consequences, since conversational AIs are still far from being able to perform these tasks to a legally acceptable standard.
Cyber security awareness
As with any other cyber security concern, knowledge and proper employee education are the best defense. Instead of letting users take control and integrate conversational AIs in their jobs by themselves, put guidelines into place now.
It is also a good idea to run training on tools like ChatGPT so users learn to spot signs that the platform is behaving unusually or erratically.
ChatGPT’s Cyber Security Future
Conversational AI is bound to become a mainstay of any knowledge worker’s life. It is an extremely powerful tool that could simplify most jobs and bring a bit more quality of life to most people worldwide.
When used correctly, ChatGPT could dramatically increase company profitability across industries, but it could also become another vector for data breaches. Now is the time to implement strict usage policies for this technology while it is still gaining ground.
If your users rely on conversational AI, you must invest in cyber security awareness training to ensure they have the right tools to use it safely and efficiently.
Cyber Security Hub: Access Exclusive Cyber Security Content
Head over to our Cyber Security Hub today to gain access to great free resources to support your cyber security awareness program.