Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development, Video
How to Handle AI-Biased Results
Microsoft's Aditya Vasekar Shares Case Studies on Challenges of AI
Businesses and governments have been using artificial intelligence and machine learning for years, but little has been done to understand their biases. Aditya Vasekar, senior principal for product security at Microsoft, discussed AI bias challenges and how organizations can address them.
Part of the problem, Vasekar said, is tracking the various types of AI attacks within the organization's environment. "There are biased results being produced by AI and when it comes to liability to the organization, the data cannot be traced back," he said.
AI bias could also pose legal issues in the future. Attorneys and advocates are debating how bias cases can be litigated in court, he said. Most big tech companies, including Microsoft, have pledged to deliver "responsible AI."
In this video interview with Information Security Media Group at the C0c0n 2023 conference, Vasekar also discussed:
- The challenges of adopting AI solutions;
- What Microsoft is doing to promote responsible AI;
- How blockchain can help curb AI threats.
Vasekar leads product security at Microsoft, including security reviews and threat modeling for industry and enterprise IT domains. He also focuses on blockchain, artificial intelligence and other emerging technologies.