On Point: Risk Management Strategies for AI Tools
What to Do to Protect the Sensitive Data You Submit to Online AI Tools

Artificial intelligence tools are both a blessing and a curse for companies. They enable staff to work more efficiently and finish tasks more quickly, but they also allow an ever-increasing amount of sensitive data to walk out of the organization. Data moved to AI tools for analysis can create legal and contractual nightmares for companies.
ChatGPT and Google's Gemini operate under a freemium model: basic functionality is free, and more advanced features are locked behind a paywall. Like libreware, these online tools come with vague and user-unfriendly license agreements, and buried inside those agreements are clauses that grant the provider rights over the data you submit.
These are the top three issues usually found in these license or use agreements:
- The provider has the right to use your data for training and the improvement of its products. The provider can incorporate your company data into its training platforms without further permission or consideration. This can benefit the provider's products, and it can also potentially expose your company's sensitive data to the underlying algorithms.
- The provider has the right to use your data for data analytics and marketing. The risk here is that the data you submit could be aggregated with data from other users to reveal industry trends or allow competitors to glean insights into your strategies.
- The license agreement allows for third-party sharing. This might allow the provider to share your data with affiliates or third-party vendors, which would further expand the circle of those with access to your sensitive information.
Consequences of Submitting Sensitive Data to AI Tools
These license and use agreement clauses raise serious concerns about your contractual and legal compliance, particularly with data privacy regulations. The General Data Protection Regulation, for example, mandates data minimization: companies may collect and process only the minimum amount of data necessary for a specific purpose. Submitting company data to an AI tool for a nonessential task directly contradicts this principle.
If you submit personally identifiable information to an AI platform without express consent, your company can be at serious risk of a breach under the GDPR. For this type of violation, the regulation allows fines of up to 20 million euros or 4% of annual global turnover, whichever is higher.
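To put that ceiling in perspective, here is a minimal sketch of the "whichever is higher" calculation. The turnover figure is a hypothetical example, not a statutory threshold:

```python
# Minimal sketch of the higher-tier GDPR fine ceiling: the greater of a
# fixed 20 million euros or 4% of annual global turnover.
# The turnover figure below is a hypothetical example.

def gdpr_fine_ceiling(annual_global_turnover_eur: float) -> float:
    """Return the maximum possible fine under the higher-tier GDPR penalty."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company with 1 billion euros in annual turnover faces a ceiling of
# 40 million euros, since 4% of turnover exceeds the 20 million euro floor.
print(gdpr_fine_ceiling(1_000_000_000))  # 40000000.0
```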
The consequences of a data breach caused by providing sensitive information to an AI tool extend far beyond regulatory fines. Your company will suffer reputational damage, leading to customer churn and a decline in brand value. And new and existing clients may be hesitant to entrust sensitive information to a company with a history of leaks.
If customer data is compromised through an AI tool, the company also could face legal action from affected individuals and/or from regulatory bodies seeking some form of restitution for damages suffered.
Risk Management for AI Tools
Risk management controls need to be in place when sensitive data is submitted to AI tools. Good security hygiene practices are essential. They include:
- A data classification policy and associated awareness training: Educate your employees on data classification and the importance of identifying sensitive information. This will greatly reduce the risk of a compliance breach. Sending regular emails that contain easy-to-read information and guidance is an easy way to accomplish this, and a simple automated screen, such as the sketch after this list, can reinforce the training.
- A list of vetted AI providers: If AI is a business need, thoroughly research potential AI providers, scrutinize their license agreements and prioritize those with robust security practices and clear data usage policies. Make this list available to all staff and keep it updated.
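Technical guardrails can back up both of these controls. The sketch below is a hypothetical illustration, not a production filter: it flags a few obvious PII patterns, such as email addresses and card-like numbers, before text is pasted into an external AI tool. The patterns are assumptions you would tune to your own data classification policy:

```python
import re

# Hypothetical illustration: flag obvious PII before text leaves the
# organization. These patterns are examples, not an exhaustive classifier;
# tune them to your own data classification policy.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "phone": re.compile(r"\b\+?\d{1,3}[ -]?\(?\d{2,4}\)?[ -]?\d{3}[ -]?\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize this: contact jane.doe@example.com re: card 4111 1111 1111 1111."
hits = screen_for_pii(prompt)
if hits:
    print(f"Blocked: possible PII detected ({', '.join(hits)}). Review before submitting.")
else:
    print("No obvious PII found; proceed per policy.")
```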
You can also consider developing internal AI solutions. This is especially beneficial for high-risk tasks involving sensitive or personally identifiable data. In-house AI solutions let you keep control of your data entirely within your organization.
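As a rough illustration of the in-house approach, the sketch below sends a prompt to a self-hosted model behind an internal endpoint. The URL, model name, and response shape are hypothetical placeholders, since the details depend entirely on the inference server you deploy:

```python
import json
import urllib.request

# Hypothetical illustration of the in-house approach: the endpoint, model
# name and response shape are placeholders for whatever inference server
# you deploy. The prompt never leaves the corporate network.
INTERNAL_ENDPOINT = "https://ai.internal.example.com/v1/generate"  # placeholder

def ask_internal_model(prompt: str) -> str:
    """Send a prompt to a self-hosted model and return its reply."""
    payload = json.dumps({"model": "internal-llm", "prompt": prompt}).encode("utf-8")
    request = urllib.request.Request(
        INTERNAL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("text", "")

# Example: sensitive data stays inside the organization's own infrastructure.
print(ask_internal_model("Summarize Q3 churn figures for the board."))
```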
AI technology will only continue to grow and evolve, so the issue of data security is paramount. The EU's AI Act is a step in the right direction (see: EU Parliament Approves the Artificial Intelligence Act).
There is always a cost associated with tools, and AI is no exception. Ensure that you fully understand the risks associated with AI and other online tools, and implement robust business-specific mitigation strategies.
CyberEdBoard is ISMG's premier members-only community of senior-most executives and thought leaders in the fields of security, risk, privacy and IT. CyberEdBoard provides executives with a powerful, peer-driven collaborative ecosystem, private meetings and a library of resources to address complex challenges shared by thousands of CISOs and senior security leaders located in 65 countries.
Join the Community - CyberEdBoard.io.
Ian Keller has over three decades of experience in information security. Currently, he leverages his extensive knowledge and expertise to bridge the gap between corporate telecommunications intelligence and business communication, providing data-driven solutions for informed decision-making and enhancing product quality in line with ISO and best practices. Keller is a chief information security officer whose career has encompassed sectors including telecommunications, network security, financial services, consulting and healthcare. His expertise in customer security, identity and access management, information security, and security awareness has made him a sought-after speaker at international events.