Beyond the hype, AI is transforming cybersecurity by automating threat detection, streamlining incident response and predicting attacker behaviors. Organizations are increasingly deploying AI to protect their data, stay ahead of cybercriminals and build more resilient security systems.
A federal government IT modernization funding program is looking to invest in projects that will accelerate the adoption of artificial intelligence to improve efficiency and service delivery across government agencies. It will favor proposals with budgets under $6 million.
Large language models may boost the capabilities of novice hackers but are of little use to more experienced threat actors, concludes a British government evaluation. "There may be a limited number of tasks in which use of currently deployed LLMs could increase the capability of a novice."
Officials said the Artificial Intelligence Safety Institute Consortium will provide a "critical forum" for the public and private sectors as the federal government aims to use input from more than 200 stakeholders across civil society to develop AI safety and security standards.
With over 1 billion people across more than 50 countries - including the U.S., the U.K. and India - due to hold elections this year, one open question remains: How can nations combat adversaries who attempt to influence elections or otherwise interfere via physical, cyber or operational means?
In the latest weekly update, Joe Sullivan, CEO of Ukraine Friends, joins three editors at ISMG to discuss the challenges of being a CISO in 2024, growing threats from disinformation, vulnerabilities in MFA, AI's role in cybersecurity, and the obstacles to public-private information sharing.
This week, the U.S. banned AI robocalls, researchers discovered a Linux bootloader flaw, France investigated health sector hackings, the feds offered money for Hive information, Verizon disclosed an insider breach, Germany opened a cybersecurity center, and cyberattack victims reported high costs.
Entrust, a payment, identity and data security software and services provider, is in talks to acquire Onfido, a pioneer in cloud-based, AI-powered identity verification technology, for a reported $400 million. The combined offering is intended to help customers fight identity fraud.
The U.S. Department of Homeland Security is recruiting dozens of artificial intelligence experts to integrate AI capabilities into government work such as defending against cyberthreats and using AI-powered computer vision to assess damage after a disaster.
Hackers can use generative AI and deepfake technology to manipulate live conversations, IBM security researchers said. They used the "surprising and scarily easy" audio-jacking technique to intercept a speaker's audio, replace an authentic voice with a deepfake, and share fake bank account data.
The escalating adoption of generative AI has introduced concerns regarding data privacy, fake data and bias amplification. Ashley Casovan, managing director of the IAPP AI Governance Center, discusses the need to develop governance models and standardize AI systems.
Fraudsters used deepfake technology to trick an employee at a Hong Kong-based multinational company into transferring $25.57 million to their bank accounts. Hong Kong Police said Sunday that the fraudsters had created deepfake likenesses of top company executives in a video conference to fool the worker.
A U.K. parliamentary committee scrutinizing the artificial intelligence market urged the British competition regulator to closely monitor developers of foundation models and warned against regulatory capture. Already, the market is trending toward consolidation, said a House of Lords committee.
The Biden administration is contemplating updating, for the artificial intelligence age, the privacy guidance that federal agencies must follow before activating new systems or adding a new collection of personally identifiable information to existing information technology systems.
In the latest "Proof of Concept," Sam Curry of Zscaler and Heather West of Venable assess how vulnerable AI models are to potential attacks, offer practical measures to bolster the resilience of AI models and discuss how to address bias in training data and model predictions.