
Understanding AI Policies and Governance


AI policies and governance are essential to ensure that AI technologies are developed and deployed responsibly.

Here is a breakdown of existing regulatory structures and the nuances that organizations leveraging AI tools should be aware of.

The EU AI Act 

The European Union (EU) has been a pioneer in this area with its comprehensive AI Act, which classifies AI systems based on their risk levels. This risk-based approach is crucial for managing the potential harms and benefits of AI. 

The EU AI Act categorizes AI systems into four risk levels: unacceptable risk, high risk, limited risk, and minimal risk. A simple illustration of how an organization might map its own systems to these tiers follows the list below. 

  • Unacceptable Risk: These AI systems are prohibited due to their potential to cause significant harm. Examples include deploying subliminal, manipulative, or deceptive techniques to distort behaviour, exploiting vulnerabilities related to age or disability, and social scoring based on personal traits. 
  • High Risk: These systems require stringent oversight and compliance measures. High-risk AI systems include those used in law enforcement, employment, and biometric identification. Providers of high-risk AI must establish a risk management system, ensure data governance, and design systems for accuracy, robustness, and cybersecurity. 
  • Limited Risk: These systems are subject to lighter transparency obligations. For example, developers and deployers of systems such as chatbots and deepfake generators must ensure that end-users are aware they are interacting with AI or AI-generated content. Other legislation (e.g., the GDPR) still applies. 
  • Minimal Risk: These systems are unregulated under the Act and include the majority of AI applications currently available on the EU single market, such as AI-enabled video games and spam filters. However, this is expected to change rapidly as generative AI proliferates. 
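
To make these tiers operational, an organization can keep an internal inventory that tags each AI system with its tier and the obligations that follow. The Python sketch below is a minimal, hypothetical illustration: the tier names reflect the Act, but the inventory structure, system names, and one-line obligation summaries are illustrative assumptions, not legal guidance.

    from enum import Enum

    class RiskTier(Enum):
        # The four risk tiers defined by the EU AI Act.
        UNACCEPTABLE = "unacceptable"  # prohibited outright
        HIGH = "high"                  # strict oversight and compliance duties
        LIMITED = "limited"            # transparency obligations
        MINIMAL = "minimal"            # currently unregulated

    # Hypothetical obligation summaries per tier, for internal triage only.
    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Do not build or deploy.",
        RiskTier.HIGH: "Risk management system, data governance, accuracy/robustness/cybersecurity.",
        RiskTier.LIMITED: "Disclose to end-users that they are interacting with AI.",
        RiskTier.MINIMAL: "No AI Act obligations today; monitor for regulatory change.",
    }

    # Example inventory entries (system names are illustrative).
    inventory = {
        "resume-screening-model": RiskTier.HIGH,
        "support-chatbot": RiskTier.LIMITED,
        "email-spam-filter": RiskTier.MINIMAL,
    }

    for system, tier in inventory.items():
        print(f"{system}: {tier.value} risk -> {OBLIGATIONS[tier]}")

Even a trivial mapping like this makes it clear which systems carry compliance duties before any detailed legal review begins.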

The Canadian Perspective: Voluntary Codes and Future Legislation 

While Canada does not yet have a comprehensive AI-specific legislative regime, it has introduced a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. This code is based on six principles: accountability, safety, fairness and equity, transparency, human oversight and monitoring, and validity and robustness. A simple checklist sketch built on these principles appears after the list below. 

Key Principles of the Canadian Voluntary Code 

  • Accountability: Organizations must implement an appropriate risk management framework and share information with other firms as needed to avoid gaps in accountability. 
  • Safety: AI systems should undergo comprehensive risk assessments to identify and mitigate potential adverse impacts.  
  • Fairness and Equity: The potential impacts on fairness and equity must be assessed and addressed throughout the AI system’s lifecycle.  
  • Transparency: Sufficient information should be published to allow consumers to make informed decisions and for experts to evaluate risks.  
  • Human Oversight and Monitoring: Continuous monitoring of AI systems is essential to identify and mitigate harmful uses or impacts. 
  • Validity and Robustness: AI systems must operate as intended, be secure against cyber-attacks, and perform reliably across various tasks and situations.  
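
One lightweight way to operationalize these six principles is a pre-deployment checklist that blocks release until each principle has a named owner and documented evidence. The Python sketch below is a hypothetical illustration: the principle names come from the Voluntary Code, but the checklist structure, field names, and gating rule are assumptions.

    from dataclasses import dataclass, field

    # The six principles of the Canadian Voluntary Code.
    PRINCIPLES = [
        "accountability",
        "safety",
        "fairness and equity",
        "transparency",
        "human oversight and monitoring",
        "validity and robustness",
    ]

    @dataclass
    class PrincipleReview:
        owner: str = ""     # person accountable for this principle
        evidence: str = ""  # link to or summary of supporting documentation

        def complete(self) -> bool:
            return bool(self.owner and self.evidence)

    @dataclass
    class DeploymentChecklist:
        system_name: str
        reviews: dict = field(
            default_factory=lambda: {p: PrincipleReview() for p in PRINCIPLES}
        )

        def ready_to_deploy(self) -> bool:
            # Release is gated on every principle having an owner and evidence.
            return all(r.complete() for r in self.reviews.values())

    checklist = DeploymentChecklist("support-chatbot")
    checklist.reviews["safety"] = PrincipleReview(
        owner="J. Doe", evidence="risk-assessment.pdf"
    )
    print(checklist.ready_to_deploy())  # False until all six principles are documented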

Developing Internal AI Policies 

Organizations must develop internal AI policies to ensure responsible AI deployment. These policies should be aligned with regulatory requirements and best practices. Here are some key considerations for developing internal AI policies: 

Risk Management and Compliance 

  • Establish a Risk Management Framework: Organizations should adopt a risk management framework that is proportionate to the nature and risk profile of their AI activities. This includes establishing policies, procedures, and training to ensure staff are familiar with their duties and the organization’s risk management practices.  
  • Conduct Comprehensive Risk Assessments: Organizations should perform thorough assessments of potential adverse impacts, including risks associated with inappropriate or malicious use of AI systems, as the scoring sketch following this list illustrates. 
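
As a rough illustration of how such an assessment might be recorded and triaged, the Python sketch below scores each identified harm by likelihood and impact and maps the score to an action. The scales, thresholds, and example harms are hypothetical assumptions, not values prescribed by any regulator.

    # Hypothetical likelihood x impact scoring for an AI risk register.
    # The 1-5 scales and the triage thresholds are illustrative assumptions.

    def risk_score(likelihood: int, impact: int) -> int:
        """Both inputs on a 1-5 scale; higher means more severe."""
        return likelihood * impact

    def triage(score: int) -> str:
        if score >= 15:
            return "mitigate before deployment"
        if score >= 8:
            return "mitigate on a defined timeline"
        return "accept and monitor"

    # Example register entries: (harm, likelihood, impact).
    register = [
        ("biased screening outcomes", 4, 5),
        ("malicious prompt-injection misuse", 3, 4),
        ("minor formatting errors in output", 4, 1),
    ]

    for harm, likelihood, impact in register:
        score = risk_score(likelihood, impact)
        print(f"{harm}: score {score} -> {triage(score)}")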

Transparency and Accountability 

  • Publish Plain-Language Descriptions: Organizations should provide clear and accessible information about how AI systems are being used, the types of output they generate, and the risk mitigation measures in place.  
  • Implement Human Oversight: Organizations should ensure that AI systems are designed to allow for human oversight and intervention. This is crucial for maintaining control over AI systems and mitigating potential risks; a minimal review-gate sketch follows this list. 
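
In practice, designing for human oversight often means a review gate in the decision path: outputs that are high-stakes, or whose confidence falls below a threshold, are routed to a person rather than actioned automatically. The Python sketch below shows one minimal, hypothetical pattern; the threshold value and function names are assumptions.

    # Hypothetical human-in-the-loop gate: hold low-confidence or
    # high-stakes AI outputs for human review instead of auto-acting.

    CONFIDENCE_THRESHOLD = 0.85  # illustrative value; tune per use case

    def queue_for_human_review(output: str) -> str:
        # A real system would create a review task and await a decision.
        return f"HELD FOR REVIEW: {output}"

    def act_automatically(output: str) -> str:
        return f"AUTO-APPROVED: {output}"

    def decide(ai_output: str, confidence: float, high_stakes: bool) -> str:
        if high_stakes or confidence < CONFIDENCE_THRESHOLD:
            return queue_for_human_review(ai_output)
        return act_automatically(ai_output)

    print(decide("approve loan application #123", confidence=0.92, high_stakes=True))
    print(decide("flag email as spam", confidence=0.99, high_stakes=False))

The point is structural: the system cannot take the high-stakes path without a person in the loop.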

Training and Education 

  • AI Literacy: Organizations need to train all employees on the technological and human dimensions of AI. This includes understanding the potential risks and benefits of AI and the ethical considerations involved in its deployment. 
  • Stakeholder Involvement: Organizations need to involve stakeholders from teams across the organization in the AI development process and ensure that each stakeholder understands both the organization's AI needs and the risks those needs introduce. 

The responsible and ethical deployment of AI in organizations is a multifaceted challenge that requires a comprehensive approach to governance and policy development. By understanding and implementing the principles outlined in the EU AI Act and the Canadian Voluntary Code, organizations can navigate the complexities of AI regulation and ensure that their AI systems are safe, fair, and transparent. 

Caravel has over 100 legal experts, including those equipped to help you navigate the nuances of AI policies and governance. To learn more about our services, connect with us today! 

