The rapid adoption of artificial intelligence (AI) is unlocking unprecedented value for organizations that can effectively harness its potential. However, as AI continues to evolve, so does the need for robust risk management to ensure its ethical and transparent use. With oversight from industry, government, and international bodies on the horizon, maintaining trustworthy AI is critical to realizing its full value while ensuring safety and accountability. Here’s where to start on the journey toward building AI that can be trusted, valued, and guided responsibly.
8 Principles for Trustworthy AI
Organizations should establish an AI governance framework grounded in the concept of “Trustworthy AI”: a set of principles that guide people, processes, and technology throughout the development and deployment of AI. The core principles of trustworthy AI include:
- Accountability: The obligation and responsibility to ensure systems operate ethically, fairly, transparently, and compliantly (e.g., traceable actions, decisions, outcomes).
- Contestability: Ensuring system outputs and actions can be questioned and challenged.
- Explainability (XAI): The ability to describe AI’s output and decision-making.
- Fairness: Relatively equal treatment of individuals and groups (a measurement sketch follows this list).
- Reliability: Ensuring systems behave as expected (e.g., perform intended functions consistently and accurately, especially with unseen data).
- Robustness: Systems maintain functionality and perform accurately in a variety of circumstances (e.g., new environments, unseen data, against adversarial attacks).
- Safety: Minimizing potential harm to individuals, society, and the environment.
- Transparency: Ensuring information about the system is available to stakeholders.
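These principles only become actionable when they can be measured. Purely as an illustration, the Python sketch below computes a simple demographic parity gap for the fairness principle; the group labels, predictions, and review threshold are hypothetical, not prescribed values.

```python
# Minimal sketch: measuring one facet of fairness (demographic parity)
# on a batch of model predictions. Group labels, predictions, and the
# acceptable gap are illustrative assumptions, not prescribed values.

def selection_rate(predictions: list[int]) -> float:
    """Share of positive (1) predictions in a group."""
    return sum(predictions) / len(predictions) if predictions else 0.0

def demographic_parity_gap(preds_by_group: dict[str, list[int]]) -> float:
    """Absolute difference between the highest and lowest selection rates."""
    rates = [selection_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

if __name__ == "__main__":
    # Hypothetical binary predictions (1 = approved) split by group.
    preds = {
        "group_a": [1, 0, 1, 1, 0, 1],
        "group_b": [0, 0, 1, 0, 0, 1],
    }
    gap = demographic_parity_gap(preds)
    print(f"Demographic parity gap: {gap:.2f}")
    # A governance policy might flag gaps above an agreed threshold for review.
    if gap > 0.2:  # example threshold, set according to risk appetite
        print("Gap exceeds threshold; route for fairness review.")
```

Comparable spot checks can be defined for other principles (e.g., robustness tests on unseen data), with thresholds set by each organization’s risk appetite.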
It may not be possible to maximize every characteristic of trustworthy AI at once, so organizations need to determine and accept tradeoffs in a risk-based manner. Effectively balancing risk and implementing trustworthy AI is key to:
- Improved decision-making.
- Stronger competitive advantage.
- Preparation for regulatory compliance.
- Enhanced security and privacy.
- Mitigation of bias and harm.
- Sustainability and long-term viability.
Implementing Trustworthy AI
So where do you start? Organizations struggling to operationalize trustworthy AI or seeking a health check on their existing framework may benefit from a baseline risk assessment or audit. A few relevant assessment/audit types are program assessments, development workflow assessments, and model assessments.
Program assessments give organizations an enterprise-wide analysis of their AI governance culture. A formal assessment (e.g., performed against the NIST AI Risk Management Framework) will reveal overall program maturity, gaps, and recommendations for achieving trustworthy AI.
Development workflow assessments give organizations a targeted report on their AI development lifecycle. A formal assessment of that lifecycle will reveal strengths, weaknesses, and potential risks across the development workflow (e.g., plan, design, development, and deployment).
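As an illustration of what such an assessment might examine, the sketch below checks each lifecycle stage against a list of expected controls. The stage names follow the article; the individual checks and function names are hypothetical, not a formal standard.

```python
# Sketch of a stage-gate checklist for an AI development workflow review.
# Stage names follow the article (plan, design, development, deployment);
# the individual checks are illustrative assumptions only.

LIFECYCLE_CHECKS = {
    "plan":        ["use case risk classified", "stakeholders identified"],
    "design":      ["data sources documented", "fairness metrics chosen"],
    "development": ["bias testing recorded", "model card drafted"],
    "deployment":  ["human oversight defined", "monitoring and rollback in place"],
}

def assess_workflow(completed: dict[str, set[str]]) -> dict[str, list[str]]:
    """Return the checks still missing at each lifecycle stage."""
    return {
        stage: [c for c in checks if c not in completed.get(stage, set())]
        for stage, checks in LIFECYCLE_CHECKS.items()
    }

if __name__ == "__main__":
    # Hypothetical status gathered during an assessment interview.
    status = {
        "plan": {"use case risk classified"},
        "design": {"data sources documented", "fairness metrics chosen"},
        "development": set(),
        "deployment": {"human oversight defined"},
    }
    for stage, gaps in assess_workflow(status).items():
        print(f"{stage}: {'OK' if not gaps else 'missing -> ' + ', '.join(gaps)}")
```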
Model assessments, or “conformity assessments,” may be required for organizations developing or deploying high-risk systems subject to the EU AI Act. A model assessment may include verifying and/or demonstrating that a “high-risk AI system” complies with the Act’s requirements, including by evidencing:
- The organization’s risk management system.
- Implementation of effective data governance (e.g., bias mitigation).
- Maintenance of up-to-date technical documentation and logging (see the logging sketch after this list).
- Testing of systems for cybersecurity resiliency.
- Other requirements around human oversight and transparency.
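To make the documentation and logging requirement concrete, the minimal sketch below appends structured decision records to an audit log. The field names, file path, and record shape are illustrative assumptions, not a schema prescribed by the EU AI Act.

```python
# Minimal sketch of structured decision logging that could support the
# "technical documentation and logging" requirement. Field names and the
# record shape are illustrative assumptions, not an EU AI Act schema.
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    model_id: str                # identifier of the deployed model version
    input_summary: str           # non-sensitive summary of the input
    output: str                  # the system's decision or prediction
    reviewer: str | None = None  # human overseer, if any
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append one decision record as a JSON line for later audit."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

if __name__ == "__main__":
    # Hypothetical high-risk use case with human oversight in the loop.
    log_decision(DecisionRecord(
        model_id="credit-scoring-v3",
        input_summary="applicant features (hashed)",
        output="refer_to_human_review",
        reviewer="loan_officer_042",
    ))
```

Keeping records append-only and timestamped in this way supports the traceability expected by auditors, though the exact evidence required will depend on the system’s risk classification.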
Transforming AI Risk Into Opportunity
Few organizations will get every aspect of AI right on their own. As with many emerging technologies, the instinct to move quickly often outweighs caution. However, by adopting a thoughtful, strategic approach to AI, companies can mitigate risks, maximize value, and outpace competitors.
To start a conversation on building a foundation of trustworthy AI, contact CrossCountry Consulting.