IBM, Nvidia, Others Commit to Develop 'Trustworthy' AI
White House Secures 8 Additional Commitments to AI Pledge

Adobe, IBM, Nvidia, Salesforce and four additional technology giants on Tuesday signed on to a White House-driven initiative for developing secure and trustworthy generative artificial intelligence models.
The voluntary pledge, drafted by the Biden administration, commits signatories to a slew of measures, including investing in AI model cybersecurity, red-teaming models for misuse and national security risks, and accepting vulnerability reports from third parties.
The companies also pledged to watermark AI-generated audio and visual material that is otherwise indistinguishable from organic content and to develop tools that identify content created within their own systems.
The new wave of companies joins an original set of tech heavyweights including Amazon, Google, Meta, OpenAI and OpenAI partner Microsoft, who signed on to the White House initiative in July.
"The president has been clear: Harness the benefits of AI, manage the risks, and move fast - very fast," White House chief of staff Jeffrey Zients said in a statement. "We are doing just that by partnering with the private sector and pulling every lever we have to get this done."
Cohere, Palantir, Scale and Stability also joined the ranks of voluntary signatories on Tuesday. The commitments are, at least for now, the closest approximation of targeted AI regulation in the United States. The federal government has said that existing laws against discrimination and bias generally apply to algorithmic processes.
Otherwise, the country trails far behind the European Union, which is close to finalizing continentwide rules on the deployment of AI systems, including bans on AI applications considered too risky for society, such as real-time facial recognition.
More than one bipartisan group in Congress has proposed legislation. Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., released a framework proposing a licensing regime and legal liability for AI firms for harms such as nonconsensual explicit deepfake imagery of real people or election interference.
Members of the Congressional Artificial Intelligence Caucus in late July introduced a bill in the House and the Senate that would establish the National Artificial Intelligence Research Resource in a bid to foster "safe, reliable, and trustworthy" AI model development at universities.