Proof of Concept: Boosting Security and Taming AI 'Lies'
Troy Leach and Avani Desai on Risks of AI Hallucination and Misleading Outputs
Anna Delaney (annamadeline) • September 26, 2024
In the latest "Proof of Concept," Troy Leach of the Cloud Security Alliance and Avani Desai of Schellman discuss the risks of AI hallucinations. As AI models advance, hallucinations pose serious threats to security, especially when quick and accurate decision-making is essential.
Risks are growing as AI models develop more humanlike thinking and reasoning capabilities, Leach said. While AI may be able to understand what the prompter or software developer wants, it could provide the wrong answer "because it thinks it meets your objective better."
"Trust in AI systems … has to be built and maintained through a really rigorous process and continuous oversight," said Desai, pointing to the need for proactive strategies such as AI red teaming, to understand the vulnerabilities.
In this panel discussion, Anna Delaney, director, productions; Tom Field, vice president, editorial; Troy Leach, chief strategy officer, Cloud Security Alliance; and Avani Desai, CEO, Schellman, discussed:
- How AI hallucination could disrupt decision-making and cybersecurity operations;
- Strategies to detect and mitigate risks of "lying" AI models, including AI red teaming;
- The balance between leveraging "good AI" for threat detection and maintaining human oversight.
Leach has spent more than 25 years educating about and advocating for the advancement of responsible technology to improve the quality of living and parity for all. He sits on several advisory boards as an expert in information security and financial payments. Leach also founded a consulting practice that advises on the opportunities to leverage blockchain technology, zero trust methodology and various cloud services to create safe and trusted environments. Previously, he helped establish and lead the PCI Security Standards Council.
Desai has domestic and international experience in information security, operations, profit and loss, oversight, and marketing involving both startup and growth organizations. She has been featured in Forbes, CIO.com and The Wall Street Journal and is a sought-after speaker on a variety of emerging topics, including security, privacy, information security, technology trends and the rising number of young women involved in technology.
Don't miss our previous installments of "Proof of Concept", including the May 22 edition on ensuring AI compliance and security controls and the July 25 edition on how to outpace deepfake threats.
Anna Delaney: Hello. This is Proof of Concept, a talk show where we invite security leaders to discuss the cybersecurity and privacy challenges of today and tomorrow and how we can potentially solve them. We are your hosts. I'm Anna Delaney, director of productions at ISMG.
Tom Field: I'm Tom Field. I'm senior vice president of editorial at ISMG. Hello, Anna.
Delaney: Hello, and welcome back. Great to see you.
Field: It's been a while.
Delaney: Tom, I've been enjoying our recent Proof of Concept episodes, where we are diving into some of the more complex, nuanced questions around emerging technologies, especially large language models and their impact on decision-making, automation and cybersecurity. Since we started this program, it feels like every day there's a new headline about AI becoming smarter but also riskier, and we're seeing models produce more creative outputs, which can sometimes mean they're lying or misleading. And that's a big issue for cybersecurity. Don't you think, Tom?
Field: Indeed! Just today, I opened up the newspapers' websites and was reading about superintelligence, which is the latest AI concern, and then discussions about what superintelligence is and when it might be here. Sam Altman says it's some few thousand days away, and people second-guess what a few thousand days means. But look how far we've come in a year and a half of talking about generative AI - its growth, its impact, how it's changing everything about how we view cybersecurity from either side, good guys or bad guys.
Delaney: Yes, that ties in nicely with what we are going to focus on today. Two key areas - the risks of AI models lying, as I mentioned, as they approach human intelligence in however many thousands of days, and how this could undermine decision-making and cybersecurity; and how good AI is being used on the other side to boost defenses, combat emerging threats and deepfakes, and improve threat detection. It is worth mentioning some trending stories to help frame the conversation today. You mentioned OpenAI. OpenAI's GPT-4 and other models have shown a tendency to hallucinate, generating misleading information, which is an immediate concern for cybersecurity. The EU's AI Act is pushing for stronger regulations to address the growing risks of unchecked AI misinformation. Deepfake technology continues to evolve, posing new threats to fraud detection and misinformation, and at the same time, AI-powered autonomous defense tools are already making waves. But how do we balance innovation with the risk of over-relying on automation? Have I missed anything, Tom?
Field: The first thing is, you can't take the human out of the equation. We've experimented with this internally, trying to use gen AI to help refine some of our stories - say, taking a news story that's written for BankInfoSecurity and making it appropriate for HealthcareInfoSecurity. I've seen hallucinations firsthand. Gen AI makes things up - it makes up facts and makes up quotes. It's like turning loose a junior reporter who knows enough to be dangerous. You cannot take the human factor out of this, and that's what I've seen so far. As these technologies develop and get stronger and smarter, we're going to have to as well.
Delaney: Yes, for sure. With all of that in mind, we're thrilled to have two experts joining us again today who bring diverse perspectives from AI development, cybersecurity and regulation. Welcoming back, Avani Desai, CEO at Schellman, and Troy Leach, chief strategy officer at the Cloud Security Alliance. Hello to both of you. Thank you again for being here.
Field: Oh, welcome.
Avani Desai: Thanks, Anna and Tom. Good to be back.
Troy Leach: Yes, appreciate it.
Delaney: Tom, it's over to you.
Field: Okay, so I get to be a bad cop now and talk about the dark side of AI.
Delaney: You've always wanted to be the bad cop. Come on.
Field: Exactly. Troy, as AI models become more advanced, how concerned should we be about models generating false or misleading information, and what potential impact could this have on cybersecurity applications?
Desai: Yes, I can start and then pass it over to Troy. It's a great question. We're all grappling with the fact that AI continues to evolve so quickly. The potential of AI models to generate false or misleading information, as you mentioned earlier, can be seen all the time. It is very real, and it poses significant challenges, especially in cybersecurity. These models are only as good as the data they're trained on. If that data is flawed or biased, the model isn't going to have the proper guardrails, and the outputs can easily be inaccurate, deceptive and - what we worry about most - harmful. So in the world of cybersecurity, why is this a huge issue? Imagine a scenario where AI-powered systems are making critical decisions. Some of these decisions, which happen right now, include identifying threats and managing responses to attacks. If those systems generate false information, what we worry about is a delay in response, or probably even worse, something escalating unnecessarily. And look at a worst-case situation: adversaries can manipulate the AI to overlook real threats or generate false alerts, essentially turning AI's vulnerabilities against us. So, one of the key things we need to focus on is the data pipeline - the data that feeds the model and how that data is being handled. What are the proper controls in place to ensure unauthorized individuals can't get into the system? And Anna, you and I have talked about frameworks. You mentioned the EU AI Act, and there's ISO 42001, which is the only global certification right now around responsible AI; that's where they come into play, because you're going to have to focus on the importance of data management in a system. Data quality and preparation are going to be essential to ensure that the outputs are accurate and trustworthy. So, trust in an AI system is not a given. It's like trust between people - it has to be built and maintained, in this case through a rigorous process and continuous oversight.
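To make the data-pipeline controls Desai describes a bit more concrete, here is a minimal sketch of one narrow control: hashing every training-data file into a manifest at sign-off so that unauthorized changes to the data feeding a model are caught before retraining. The directory and file names are hypothetical, and this is only one piece of a broader data-governance program.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(data_dir: str, manifest_path: str = "train_manifest.json") -> None:
    """Record a hash for every training file; store it with the pipeline config at data sign-off."""
    manifest = {str(p): sha256_of(p) for p in sorted(Path(data_dir).rglob("*.csv"))}
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))

def verify_manifest(data_dir: str, manifest_path: str = "train_manifest.json") -> list[str]:
    """Return the files whose contents changed (or vanished) since the manifest was built."""
    manifest = json.loads(Path(manifest_path).read_text())
    tampered = []
    for name, expected in manifest.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != expected:
            tampered.append(name)
    return tampered

if __name__ == "__main__":
    build_manifest("training_data")              # hypothetical data directory; run once at sign-off
    changed = verify_manifest("training_data")   # run again before every retraining job
    if changed:
        raise SystemExit(f"Training data altered since sign-off: {changed}")
```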
Leach: Yes, and that's a great response. The trust is important. We need to bring the maturity we have with other technologies and have a zero trust methodology in place to always verify and constantly check that things are working as intended. Getting back to what Tom mentioned earlier about Sam Altman: the recent OpenAI o1 preview, which I've played around with, is probably the closest we've come to real reasoning and logic that is starting to mimic human behavior. A couple of research groups have looked at this, and one that stands out to me is Apollo Research. They identified that as models start to have human-like thinking and reasoning, and can also be self-teaching, the model will understand what the prompter or the developer wants, and even though it understands its answer may not be the correct one, it gives it anyway. That's different from a hallucination, which is, "Hey, I pulled something from Reddit and I'm giving you information that I've been trained on and regurgitated, thinking it's a fact." Instead of thinking something is a fact and presenting it as a fact, it's thinking beyond that and reasoning, "Well, I know what the truth is. I know what the right answer is, but I'm going to provide you this answer because it meets your objective better," and that's where the lying, as compared to hallucination, comes in. So, as Avani mentioned, there have to be guardrails in place. Here at CSA, we are building out an AI controls framework, very complementary to the ISO framework, and trying to identify how we cover not only the base models - the large language models, the frontier models - but also all of these AI applications, agents, oracles, all these dependent parts that make up the orchestration for security that Avani was referencing earlier. All of those parts have to be enabled to ensure that the input we get from a large language model is what we intend it to be.
Field: Troy, let's bring it home for our audience. How can security teams detect and mitigate the risk posed by models that may lie or intentionally mislead, especially when it comes to critical decision-making processes?
Leach: Yes, we're seeing some models built specifically for this type of protection - a guardrail on top of the AI that will look for sensitive information being disclosed. Is there some form of bias being presented that we wouldn't want to represent? Are we giving away corporate secrets or promoting our competitors? Things of that nature. So, we're starting to see those types of guardrails already. Part of it also is the fine-tuning - being able to control expectations. We love the creativity that gen AI produces, but for production work where you're expecting very narrow results, it means working with the temperature settings that govern how creative versus how predictable the output is, understanding how we fine-tune these models and how we build upon them, having our developers be very astute about what they expect from their own design, and then testing, retesting and always validating the model itself. To what you said earlier, Tom, we're always going to have to have that human element verifying that our expectations don't change, because developers get kind of giddy about this. But the surprise is that as the AI models learn, they are changing how they train themselves, and sometimes they start to go off course, and we have to be able to identify that and self-correct if it's in a critical area.
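As a rough illustration of the guardrail-plus-temperature idea Leach outlines, the sketch below pairs a low temperature setting (for narrow, repeatable output) with a simple output filter that withholds a response when it matches sensitive-information patterns. The blocked patterns, the stand-in model client and the chosen temperature are all illustrative assumptions, not any particular product's guardrail.

```python
import re

# Patterns an output guardrail might screen for before a response is released
# (illustrative only: credentials and a hypothetical internal domain).
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\b(api[_-]?key|password|secret)\s*[:=]\s*\S+"),
    re.compile(r"\b[A-Za-z0-9._%+-]+@internal\.example\.com\b"),
]

def guardrail_check(text: str) -> list[str]:
    """Return the guardrail patterns a model response violates."""
    return [p.pattern for p in BLOCKED_PATTERNS if p.search(text)]

def safe_generate(prompt: str, model_call, temperature: float = 0.2) -> str:
    """Call the model with a low temperature for narrow, repeatable output,
    then refuse to release the answer if the guardrail flags it.

    `model_call` is whatever client you use - any callable that accepts
    (prompt, temperature) and returns a string."""
    response = model_call(prompt, temperature=temperature)
    if guardrail_check(response):
        return "[response withheld: guardrail violation]"
    return response

if __name__ == "__main__":
    # Stand-in for a real LLM client so the sketch runs on its own.
    fake_llm = lambda prompt, temperature: "The rotation password: hunter2 should fix it."
    print(safe_generate("How do I reset the service account?", fake_llm))
```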
Desai: Yes, the only thing I would add is that teams need to take a dual approach - proactive and reactive. On the proactive side, one key tactic we're working on is AI-specific pen testing, which the industry is calling AI red teaming - simulating attacks on the AI model to see how it responds and then identifying vulnerabilities. When companies are using pre-trained models like GPT-4, we're not attacking the model itself; we're testing how the model integrates into specific use cases - how it interacts with and impacts the application it's supporting. But to me, red teaming is critical during the foundation model-building phase because, as Troy talked about, there are techniques such as model watermarking and drift monitoring that help detect when a model's behavior starts to deviate - call it hallucination or lying - from its intended purpose. That proactive approach is going to allow teams to catch early signs that a model is lying or generating misleading output. Then, on the reactive side, any security team is going to have to ensure continuous monitoring and logging, because you can't deploy an AI model and assume it's going to work perfectly forever. And with the constant change we're seeing, especially with GPT-4 and new LLMs coming out, you're going to have to ensure ongoing validation and recalibration. That's going to be especially important in the high-risk areas we talk about - healthcare, law enforcement, critical infrastructure - where the consequences of misleading information can be severe.
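Drift monitoring, which Desai mentions, can be implemented many ways; one common, simple approach - assumed here, not attributed to Schellman - is a population stability index (PSI) over categories of model output, flagging when the live distribution drifts from a validated baseline. The category labels, counts and the 0.25 threshold below are illustrative.

```python
import math
from collections import Counter

def distribution(labels: list[str], categories: list[str]) -> list[float]:
    """Share of each category, with a small floor so the log term stays defined."""
    counts = Counter(labels)
    total = max(len(labels), 1)
    return [max(counts.get(c, 0) / total, 1e-4) for c in categories]

def population_stability_index(baseline: list[str], live: list[str], categories: list[str]) -> float:
    """PSI across output categories: 0 means identical; above ~0.25 is commonly treated as major drift."""
    base = distribution(baseline, categories)
    curr = distribution(live, categories)
    return sum((c - b) * math.log(c / b) for b, c in zip(base, curr))

if __name__ == "__main__":
    # Hypothetical categories an alert-triage assistant might emit.
    cats = ["benign", "suspicious", "malicious", "refused"]
    baseline_week = ["benign"] * 800 + ["suspicious"] * 150 + ["malicious"] * 40 + ["refused"] * 10
    this_week = ["benign"] * 550 + ["suspicious"] * 250 + ["malicious"] * 150 + ["refused"] * 50

    psi = population_stability_index(baseline_week, this_week, cats)
    if psi > 0.25:
        print(f"PSI={psi:.3f}: model behavior has drifted; trigger revalidation and red teaming")
    else:
        print(f"PSI={psi:.3f}: within tolerance")
```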
Field: Avani, Troy, very insightful. I'm going to turn this back now to our good constable - Anna.
Delaney: Yes, time for the good cop. You both mentioned guardrails - it's worth doing another episode on what you mean by that and digging deeper into what you're seeing there. But for now, let's shift to what we're calling good AI. I'd love to explore how these technologies are being put to work in the real world, some of the challenges that come with them, and where they might be heading in the future. So, first question - maybe, Troy, you want to take this, and then Avani, I would love your perspective. What AI technologies or use cases are you seeing right now that are moving the needle in cybersecurity, and how well are they working in practice?
Leach: That's a great question, because people think this is very futuristic, but we are seeing it today. Earlier this year, the Cloud Security Alliance did a survey with Google looking at all of these good use cases, and we found that about 19% were already using the reasoning and logic of gen AI for rule creation - looking at a complex environment, zeroing in on what to trust and generating the rules for that. There's the creative problem-solving of AI, which Avani just talked about with red teaming - being able to generate new scenarios the team had never thought about. That's what it does. There's the synthesizing of large amounts of data. We see that as a way of finding the needle in the haystack - all of these possible anomalies that would have gone undetected by static, signature-based or other types of traditional monitoring. And then I'll say pattern identification as well, which goes to the regulatory conversation that was mentioned: How do we demonstrate that we adhere to all of these regulations, whether it's data sovereignty and data localization, or PCI or HIPAA, or whatever it might be for sensitive information? That's a way for us to look and say, "How do I need to modify this environment, because I'm unintentionally violating regulations?" Also - and this is where people's eyes light up - there's documentation, and regulation in general. If you're a good security professional, the last thing you love doing, for the most part, is documentation. So this is going to be an enabler for a lot of security professionals to provide to companies such as Schellman: here is our evidence that we are constantly and regularly documenting what is changing in the environment, and how we know the systems we oversee are in good health. Those are some of the areas, based on the survey and conversations with CISOs, where they're saying, "Yeah, there's practical use today. I'm not going to trust it fully. I'm not going to put it fully into work production and rely on that." In some cases, such as documentation, I'm told there's an improvement of 80%-90% over the former process. And Avani, with her long experience doing this type of work in the assessment world - I'd be curious whether she's seen the same things.
Desai: Yes, definitely. So, documentation, gathering evidence - as auditors, we always say that we want to have reasonable assurance. This is the first time we can use AI technologies to move toward absolute assurance, because AI is going to be able to look at everything. One of the most promising areas is anomaly detection. Typically, as an auditor, we go and select 25-40 samples, and we say, okay, based on these samples, things are looking good; you've passed this control. But what AI systems can now do is potentially monitor network traffic, look at patterns, find potential threats and find those one, two or three deficiencies across all of it. The ability to process vast amounts of data in real time and identify things much more effectively than a human can is important, and you can use it for incident monitoring as well as from an audit perspective. The other thing I want to talk about is AI-driven pen testing. I talked about AI red teaming before, and to clarify, AI red teaming is using offensive strategies against an AI model; here I'm talking about AI performing the pen test itself - simulating sophisticated attacks on a system and identifying weaknesses - which makes pen testing more accessible and scalable. However, that revolution in pen testing hasn't happened yet. What is happening, when I talk about AI-driven pen testing, is that AI is improving scanning tools. You're getting reduced false positives, which is important, because if you ask any security team, they spend a lot of time on false positives. And then the other thing - and this isn't going to be new to anyone listening - is predictive analytics. AI can now forecast potential breaches by looking at historical data patterns, and AI systems can also automate routine tasks that all organizations need and sometimes miss, such as patch management. And then, as Troy was saying, it frees up human teams to focus on more complex issues. That balance is amazing, because that's what we want: AI focused on automation and routine tasks, and humans focused on the complicated things. Those are a few things I find exciting right now.
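Desai's contrast between auditing a 25-40 item sample and scoring the full population can be sketched with a standard unsupervised detector. The example below uses scikit-learn's IsolationForest on synthetic access-log features; the features, threshold and data are toy assumptions, not any audit firm's actual tooling.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Toy "full population" of access-log events: [bytes transferred, hour of day, failed logins]
normal = np.column_stack([
    rng.normal(2_000, 500, size=5_000),   # typical transfer size
    rng.integers(8, 18, size=5_000),      # business hours
    rng.poisson(0.2, size=5_000),         # occasional failed login
])
odd = np.array([
    [250_000, 3, 0],   # huge transfer at 3 a.m.
    [1_800, 2, 14],    # burst of failed logins overnight
])
events = np.vstack([normal, odd])

# Score every event instead of pulling a 25-40 item audit sample.
detector = IsolationForest(contamination=0.001, random_state=0)
labels = detector.fit_predict(events)          # -1 marks outliers
flagged = np.where(labels == -1)[0]

print(f"{len(flagged)} of {len(events)} events flagged for human review: indices {flagged[:10]}")
```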
Leach: Yes, and I'll add one more thing - speed of recovery. Looking at the root cause and being able to dissect and understand the entire sequence of events. We see it all the time, and I read it in many of the articles that you publish about data breaches, where there are five to seven months of undiscovered activity by a criminal. I'm going back to root cause here - understanding how these things are happening and finding these anomalies - and the time from infiltration to discovery is going to be significantly reduced as a result of what Avani is describing.
Delaney: Then, Avani, looking ahead, do you see AI moving beyond a supportive role to a more autonomous one in cybersecurity? What do you think the pros and cons of that shift would be?
Desai: Yes, I definitely think AI is going to take on that role. But, as with anything like that, it's going to have its own set of challenges. Consider spam detection in email systems - a task that AI has been handling autonomously for years. We mostly let it run on its own; I don't even know when I last intervened. You intervene only when necessary. We're going to see more of that with AI handling repetitive tasks. I talked about scanning for vulnerabilities and patching systems. AI can scan, but I still don't think we're at the point today - and don't quote me, because next year I may say we are there - where we can fully trust it to automatically patch systems in production without human input. That leap requires a level of trust and assurance that we're still building. And with autonomy comes more risk. If an AI system makes a mistake, such as blocking legitimate traffic because it misidentifies it as a threat, that could disrupt business operations. And yes, a human can too, but with that, the question of accountability comes up: who's responsible if an autonomous AI system makes an error? This reminds me - I was talking to our pen testers, and one of them brought up a line from a 1979 IBM presentation, which says a computer can never be held accountable; therefore, a computer must never make a management decision. When he said that to me, I thought, oh gosh, this is a reminder that we have to tread carefully as AI becomes more autonomous. The World Economic Forum came out with an AI Governance Alliance that's working on global standards, such as ISO 42001, and the CSA is doing this as well. One of the things they're talking about in responsible AI development is a specific portion that addresses moving toward more autonomous systems - so, clear governance and transparency. And for now, we need human oversight, until we feel we can trust an AI system to do its own tasks without any type of oversight.
Delaney: Troy, your thoughts? Is there a balance to strike?
Leach: Yes, the areas where we'll see this most quickly are, as Avani pointed out, the parts of the workflow that are not directly impacted. I look at coding as a former programmer. There are a lot of models out there already that can take in open source code, which is a constant threat. First of all, AI is going to be able to finally accomplish a software bill of materials and have a catalog of what we have - I don't think we do that very well manually. It's also going to be able, even today, to take in code and see where there are vulnerabilities, even if there's no enumeration by some other organization saying that something is vulnerable. It will be able to self-heal applications and modify them. Sometimes these vulnerabilities happen when code that may not be flawed on its own is put together with other code, creating access problems or whatever the issue might be, and that's an area we'll see evolve more quickly. Going back to Google: in their fuzzing and their proactive assessment of different open source code, they've identified more than 10,000 vulnerabilities in the last 7-8 years using forms of machine learning and AI. At the same time, the autonomy can be dangerous, and we need to be very guarded. Avani did a great setup with the IBM line from 1979. Air Canada would love to use that, because recently - I don't know if it was this year or last year - they had a chatbot on their website, and it decided, in its own reasoning, that it was going to create a refund policy for airline passengers. The courts in Canada decided that Air Canada was responsible for what its AI chatbot was saying, and it had to honor the refund. So, we are getting into a space where we have to be careful about how we apply and launch these chatbots and other forms of gen AI, because there is an accountability now that we didn't think would exist 40-50 years ago.
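As a small illustration of the dependency-cataloging idea Leach raises - not the CSA's or Google's tooling - the sketch below reads a pinned Python requirements file and asks the public OSV.dev vulnerability API whether each exact package version has known advisories. The file name is a placeholder, and a real software bill of materials tool would cover far more ecosystems and metadata.

```python
import requests

OSV_URL = "https://api.osv.dev/v1/query"  # public OSV vulnerability database API

def parse_requirements(path: str) -> list[tuple[str, str]]:
    """Very small parser for 'name==version' lines; a real SBOM tool covers far more formats."""
    pins = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if "==" in line:
                name, version = line.split("==", 1)
                pins.append((name.strip(), version.strip()))
    return pins

def known_vulns(name: str, version: str) -> list[str]:
    """Return OSV advisory IDs recorded for this exact PyPI package version."""
    payload = {"version": version, "package": {"name": name, "ecosystem": "PyPI"}}
    resp = requests.post(OSV_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

if __name__ == "__main__":
    for pkg, ver in parse_requirements("requirements.txt"):  # hypothetical pinned dependency list
        ids = known_vulns(pkg, ver)
        status = ", ".join(ids) if ids else "no known advisories"
        print(f"{pkg}=={ver}: {status}")
```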
Delaney: Fascinating stuff. That's all the time we have for now, unfortunately. Avani and Troy, thank you so much as always for all the information and timely insight you've shared with us. It's been excellent. I hope we can do this again soon.
Leach: Absolutely.
Delaney: And thank you very much, Tom.
Field: Pleasure. We'll continue this one.
Delaney: Absolutely. And thank you so much for watching. Until next time.