Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development

Don't Let AI Frenzy Lead to Overlooking Security Risks

Successful AI Implementation Requires a Secure Foundation, Attention to Regulations
Google Cloud "Chaos Coordinator" John Stone speaks on Sept. 19, 2023, at Information Security Media Group's London Cybersecurity Summit (Image: ISMG)

The private sector's frenzy to incorporate generative artificial intelligence into products is leading companies to overlook basic security practices, a Google executive warned Tuesday.

"Everybody is talking about prompt injection or backporting models because it is so cool and hot. But most people are still struggling with the basics when it comes to security, and these basics continue to be wrong," said John Stone - whose title at Google Cloud is "chaos coordinator" - while speaking at Information Security Media Group's London Cybersecurity Summit.

Successful AI implementation requires a secure foundation, meaning that firms should focus on remediating vulnerabilities in the supply chain, source code, and larger IT infrastructure, Stone said.

"There are always new things to think about. But the older security risks are still going to happen. You still have infrastructure. You still have your software supply chain and source code to think about."

Andy Chakraborty, head of technology platforms at Santander U.K., told the audience that highly regulated sectors such as banking and finance must especially exercise caution when deploying AI solutions that are trained on public data sets.

"Depending on the industry, there might be huge regulatory concerns regarding applications of AI, especially in financial services, which process sensitive financial data and personal information," Chakraborty said.

Amid increased regulatory focus, especially the European Union's proposed AI Act, more organizations will pivot toward ChatGPT-style applications trained on private data, he said.

"That is more secure, plus you can train it with your own private data and keep it within your own ecosystem. So these private models are going to be the future."

For "safety and risk critical" businesses such as aerospace, the use of AI is currently limited to "decision-supporting" and not "decision-making," said Adam Wedgbury, head of enterprise security architecture at Airbus.

"Internally, it's very difficult to use AI at the moment for anything to do with engineering. Again, our dilemma is: Should it be security for AI or AI for security?"

About the Author

Akshaya Asokan

Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.
