International Cybersecurity Research Made in Hamburg
 

ChatGPT, Bing Search, Bard & Friends – Part III

Jantje Silomon

23 May 2023


All Just Fun and Games?

While most have enjoyed playing around with the various “AI powered” toys (myself included!), there have also been a few headlines addressing the potential darker side of such systems, and not just them ‘going Terminator’ on users at times. But should we really be worried? Well, yes, no, maybe - and no, I’m not just trying to be annoying. There are different debates that need to take place, not least on risks TO the system vs those caused BY the system.


From Phishing to Poisoning

Privacy attacks on ML models are a well-known issue, with a large corpus of literature being available on phishing in its various forms, with the goal being access to private data. Examples here include confidential training data, such as used in the healthcare sector, other data sensitive info that might be of interest includes passwords, credit card numbers, or personal details. There are also prompt injection attacks going specifically after chat data, with one example being using an essentially hidden image that is a whole pixel small and thus ‘invisible’ to a user. Then of course there is also another scenario: bugs. ChatGPT, Bing, and similar systems are rapidly moving from being safely tucked away in a buggy (from yes, the wider public) to not only crawling around but racing only baby legs. Previous iterations have of course been not only tested but are being used but simply not on such a large scale. It is therefore not surprising that there are still a number of teething issues (to keep the child theme going!).  For example, a ChatGPT bug revealed users’ chat history, personal and billing data, which required a temporary take-down of the system in March… oopsie! Aside from trying to engineer your way into the systems given out information they should not, or influencing their responses, there are also other approaches to poking around. For example, rez0 posted on Twitter that he was hacking around the new ChatGPT API and found over 80 plugins that were meant to be secret – at least at the time. The API also showed how ChatGPT might use these plugins, which is rather interesting. A fix was quickly implemented but it’s another example of things not being quite right yet.

 Another rather popular attack involves data poisoning, with for example two more papers on the topic available here and here. Training data is manipulated or tampered with in order to produce ‘wrong’ outcomes – from simply inaccurate or undesirable to harmful. There are different ways to do this, such as injecting malicious data into the training set, editing or deleting it. The results (if an attack is successful) also vary based on the system at hand and the attackers end goal: recommendation or search systems can be skewed, users can be targeted with fake or malicious content, or security systems can be compromised. There is one additional element here that sometimes gets overlooked, namely low-quality data, which may not necessarily be intended to be malicious. This issue simply arises from the data volumes demand continuing to grow while reliable supervision is lacking. Although at times hard to detect, there are several methods to prevent this type of attack, including for example data validation or anomaly detection, amongst others. Looking specifically at ChatGPT and Bing Search, researchers have shown that poisoning attacks are possible for web-scale training sets – a cheap way being simply buying expired domain names. Especially models that rely on fine-tuning user-generated datasets, such as ChatGPT, seem more susceptible.

 Adversarial ML attacks are also not uncommon, which essentially use specifically crafted examples to fool a system into making false predictions – a fun example showing the potential risk involves a video of researchers at MIT 3D printing a turtle and it being misclassified as a rifle by a computer vision system! Now think of self-driving cars misclassifying things, though it would apparently be a lot easier to remove a stop-sign physically than getting the car to ignore it using an adversarial attack… but I digress! There are different types of attacks, and yes, the categories at times depend on whom you ask! One of the earlier taxonomies was published in a paper called “Can Machine Learning Be Secure?” by Barreno et al. in 2006 (PDF here) and splits attacks into two flavours – exploratory and causative. The former involves querying the system to extract information about its model, parameters, data and so on. If you try that with Bing (without a lot of creativity), you will quickly get a version of “I can’t share details about my system of how I work”, sometimes even leading to the chat being terminated. Or, you might get a polite “Well, I’m here to help you with your queries and interests, not to talk about myself. I’m sure you have many things you want to know or do, so let’s focus on that”, unlike earlier versions where it divulged a lot more following a prompt injection attack! Causative attacks on the other hand try to manipulate the system’s in- or output, seeking to degrade its performance or cause errors. In short, both can affect the integrity and availability of the system, and they can be targeted or indiscriminate.

 These types of attacks are nothing we need to panic about, after all, they are not a novelty as such and the paper above also suggested some defences. Of course, the ML landscape has changed since then (beyond simply bigger, better, faster, more) as have the potential attack avenues. We have to keep an eye on the developments given the scale and reach ChatGPT, Bing, and related systems have. It is no longer a small minority that plays, experiments, and uses these systems. Currently, the risk is certainly larger than it was just a year ago, but the increased popularity also means that there is yet more research on the topic. That includes defence of said systems, as well as many, many, MANY more eyes taking a look, even down to users simply reporting bizarre things happening. More importantly than trying to slap a quick-fix on the currently developments is to figure out where we want to be in five or ten years. This includes having yet more people understand the inner workings of such systems and gain a better understanding of potential pitfalls beyond Skynet scenarios.

 

Malware, Hacking, and the Dark Side

Aside from the attacks discussed above, there have also been reports of using ChatGPT or Bing providing step-by-step instructions to hack websites to creating polymorphic malware, including warnings from Europol. Cutting to the chase, I believe we should “Keep Calm and Carry On“– with or without a mug depicting that slogan. The reason is that while ChatGPT and similar systems can not only speed and improve the tools used by the dark side, they can equally help the cyber security community.

Consider for example the concern surrounding (generative) AI systems helping nefarious actors to innovate and develop new attack strategies, or helping them ‘code’ something up more quickly. Aside from a program that would need to be fine-tuned, the result is also prone to be riddled with errors. Furthermore, such systems can also help ‘the good guys’: from general coding to reverse engineering malware or supporting remediation. You might not want to start learning coding from ChatGPT though – as responses are good but not always accurate! ChatGPT’s also been experimentally enlisted to find and repair software bugs, performing not too shabbily in comparison to already existing automated program repair techniques. Or think along the lines of a dataset being trained on payloads to generate new ones – of course this is a threat but at the same time, this can also be done for penetration testing. ChatGPT could also help identify potential threats before they become major issues using its data analysing and pattern identification skills. I am not suggesting to simply ignore the issue or the misuse potential but to consider the overall picture in more detail.

 In my opinion, the bigger risk – at least in the short run – arises from these nefarious actors being able to generate better content, whether for phishing or disinformation purposes. Add greater quality on top and go beyond texts to voice and picture deepfakes, and well, a lot more people will likely find themselves in a pickle. We have this technology already in other forms but ChatGPT, Bing, etc. take it a step further and make it more accessible. While DeepL, Google Translate, and others often suffice for simple translations, they are not always spot on when it comes to cultural elements. For example, in Germany, very few emails would be sent starting with something akin to “I hope this email finds you well”, so you need a tad more than ‘just’ translation, which these systems can offer. Or worse, think about combining the data garnered from a large breach with an automated phishing writing service where you simply ask your friendly neighbourhood chatbot to write emails demanding outstanding payments to each person – or something along those lines. If you want to go down the horror scenario list, you can always keep going. That is the case with any tech transitioning to more widespread adoption, sadly a part of human nature.  And sure, it is something we should be concerned about but at the same time, ChatGPT & Co. can also simplify certain labour-intensive processes for security researchers, such as creating spam filter models (simple example here) or the manual sifting of logs and packet inspections. That said, maybe it is a good time to remind people again NOT to put sensitive data into such systems, whether personal or corporate. You really do not want to end up leaking your own secrets, unlike some! Various countries are also probing or banning ChatGPT, concerns often centring on privacy or copyright issues, the latter going both ways. In the US, the OpenAI CEO Sam Altman just recently testified on risks of the technology, with a full video available here.

Unsurprisingly, threat actors are also using the popularity of ChatGPT/other AI powered applications to create fakes to distribute malware and carry out other cyber attacks: most recently, the BatLoader campaign was used to impersonate ChatGPT and Midjourney, resulting in the delivery of Redline Stealer. Yet again, this is just an extension of the already existing schemes abusing software and apps, as for example seen with smartphone apps, albeit on a larger scale. So do not panic but do keep your eyes open.

 Now, all in all, I will end with elaborating on my ‘yes, no, maybe’ as to whether we should really be worried about ChatGPT & Friends from a cyber security perspective. Overall, I personally do not think so, assuming we do not stick our heads into the sand (on that note, ostriches do not actually stick their heads in the sand either despite the myth originating in ancient Rome!). Cyberspace seems to have an almost ingrained cat and mouse game between the good and the bad, and all in between. Similar tools and approaches are used across the board, and new developments are rarely an exception. So yes, there is always an increased risk when something new or improved comes out, particularly surrounding a hype. And yes, “AI” chatbots and searches will disturb the cybersecurity landscape – but far less than other areas I suspect.


<-- Read Part II here