Welcome to our recap of Caravel Law’s recent Continuing Professional Development (CPD) event, “AI Policies & Governance”. This session brought together legal professionals to discuss the evolving landscape of AI regulation.
The session was moderated by Monica Goyal, Caravel Law’s VP of Legal Innovation, who guided the conversation with David Dunbar and Peter Torn. David is a senior lawyer who advises clients on the swiftly developing AI regulatory landscape in Canada and abroad. Peter is a senior corporate lawyer and an Artificial Intelligence Governance Professional (AIGP) through the IAPP who advises government organizations and private corporations on AI tool implementation.
The session kicked off with a helpful reminder that AI is ever-evolving terrain: things are always changing, especially when it comes to governance. David led us into the presentation with a discussion of the current regulatory landscape and an overview of the various approaches we have seen so far. Currently leading the charge on AI regulation is the EU.
EU Regulations and Risk Structure
The European Union (EU) is at the forefront of AI regulation with its EU Artificial Intelligence Act. The Act adopts a risk-based approach to AI governance, categorizing AI systems into four risk levels: unacceptable, high, limited, and minimal.
The unacceptable and high-risk categories were notable points in the presentation. Here is an overview of each:
1. Unacceptable Risk
AI systems that fall under the unacceptable risk category are outright prohibited. These include:
- Subliminal, manipulative, or deceptive techniques;
- Inferring emotions in workplaces and educational institutions;
- Exploiting vulnerable persons;
- Social scoring;
- Predicting the risk of criminal offending based solely on profiling;
- Compiling facial recognition databases; and
- Biometric categorization based on sensitive characteristics, and “real-time” remote biometric identification in publicly accessible spaces (with certain exceptions).
2. High Risk
High-risk AI systems are subject to stringent compliance requirements. These systems often impact safety and fundamental rights, covering areas such as:
- Biometric identification and critical infrastructure;
- Employment and education;
- Access to public and private services; and
- Law enforcement and immigration.
Providers of high-risk AI systems must establish a risk management system, implement data governance, and prepare technical documentation to demonstrate compliance. They must also design their systems for record-keeping, provide instructions for use, and allow for human oversight.
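For teams that keep an internal inventory of AI use cases, the tiered structure above lends itself to a simple triage model. The sketch below is a minimal, hypothetical illustration in Python, not legal advice: the tier names follow the Act and the high-risk obligations mirror the points above, while the UseCase record, the assignment of a tier, and the handling of limited and minimal risk are simplifying assumptions made for this example.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    """The four risk levels under the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # stringent compliance requirements
    LIMITED = "limited"            # lighter transparency obligations (simplified gloss)
    MINIMAL = "minimal"            # largely unregulated (simplified gloss)


# Provider obligations for high-risk systems, as summarized above.
HIGH_RISK_OBLIGATIONS = [
    "establish a risk management system",
    "conduct data governance",
    "prepare technical documentation",
    "design for record-keeping",
    "provide instructions for use",
    "allow for human oversight",
]


@dataclass
class UseCase:
    name: str
    tier: RiskTier  # assigned by a human legal/compliance assessment, not by code


def required_actions(use_case: UseCase) -> list[str]:
    """Map an assessed risk tier to the follow-up actions it implies."""
    if use_case.tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy: this practice is prohibited"]
    if use_case.tier is RiskTier.HIGH:
        return HIGH_RISK_OBLIGATIONS
    if use_case.tier is RiskTier.LIMITED:
        return ["meet applicable transparency obligations"]
    return ["no specific obligations; re-assess if the use changes"]


print(required_actions(UseCase("resume-screening tool", RiskTier.HIGH)))
```

The point of a structure like this is simply that the risk assessment (the human, legal judgment) happens first, and the obligations follow mechanically from the assigned tier.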
Canadian Regulations and AIDA
Canada is also making strides in AI governance. There is currently no comprehensive AI legislation in Canada, but existing policies and laws are being applied to AI technologies.
Voluntary Code of Practice:
Canada has introduced a Voluntary Code of Practice for the responsible development and management of advanced generative AI systems. This code focuses on:
- Safety
- Transparency
- Fairness & Equity
- Human Oversight & Monitoring
- Validity & Robustness
- Accountability
Artificial Intelligence and Data Act (AIDA)
AIDA is a bill, introduced as part of Bill C-27, currently under review in the House of Commons. AIDA aims to regulate AI systems based on their impact. Managers of high-impact systems under AIDA will be responsible for using systems that have been properly risk-assessed, keeping required records, establishing risk mitigation measures, and ensuring human oversight. They must also publish plain-language descriptions of how the system is used, the types of output it generates, and the risk mitigation measures in place.
[To learn more about AIDA, check out our blog post!]
AI Policies vs. AI Governance
AI policies and AI governance, while closely related, serve distinct roles in the responsible deployment and management of AI technologies. AI policies refer to the specific rules, guidelines, and best practices that organizations establish to ensure the ethical and effective use of AI. These policies typically cover areas such as data privacy, transparency, accountability, and risk management, providing a framework for how AI should be developed, deployed, and monitored.
AI governance, on the other hand, encompasses the broader oversight mechanisms and structures that ensure compliance with these policies and with regulatory requirements. It involves the establishment of roles and responsibilities, the implementation of risk management systems, and the continuous monitoring and auditing of AI systems to ensure they operate within acceptable ethical and legal boundaries.
AI Risk Model Frameworks
Effective AI governance requires robust risk management frameworks. Using AI risk model frameworks helps to ensure comprehensive risk management and adherence to ethical principles, enhancing trustworthiness and safety. The webinar highlighted several key frameworks that organizations can use to design their AI risk models.
1. ISO 31000:2018
ISO 31000:2018 provides guidelines for risk management, emphasizing principles such as inclusivity, dynamism, and the use of the best available information. It encourages continuous improvement and integration into organizational processes.
2. NIST AI RMF
The National Institute of Standards and Technology (NIST) AI Risk Management Framework focuses on trustworthiness, outlining seven characteristics of trustworthy AI: valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair (with harmful bias managed).
3. Council of Europe’s HUDERIA
The Council of Europe’s Human Rights, Democracy, and the Rule of Law Assurance Framework for AI Systems (HUDERIA) incorporates human rights into AI governance. It emphasizes human dignity, freedom, prevention of harm, non-discrimination, transparency, and data protection.
4. IEEE 7000-2021
The IEEE 7000-2021 standard guides organizations on considering ethical values throughout system design, ensuring that AI systems are developed with transparency, sustainability, privacy, fairness, and accountability.
5. ISO/IEC Guide 51:2014
ISO/IEC Guide 51:2014 provides guidance on incorporating safety aspects into standards; applied to AI, it aims to reduce the risks associated with AI products and systems throughout their lifecycle.
Best Practices for AI Policies
Implementing effective AI policies is crucial for managing AI risks. The webinar outlined several best practices:
1. Law-, Industry-, and Technology-Agnostic Frameworks
Ensure that your AI development framework is adaptable to various legal, industry, and technological contexts. This flexibility allows for intelligent self-management and responsiveness to evolving regulations.
2. Non-Prescriptive Approach
Adopt a non-prescriptive approach to AI governance, allowing for customization based on your organization’s specific needs and risk tolerances.
3. Risk-Centric Governance
Focus on risk-centric governance, ensuring that all AI-related activities are aligned with your organization’s risk management strategies.
4. Third-Party Risk Management
Create policies to manage third-party risks, ensuring end-to-end accountability. This includes developing contracts and auditing procedures to monitor third-party AI solutions (see the sketch after this list).
5. Stakeholder Engagement
Engage stakeholders early in the AI governance process to build common ground and rapport. This collaborative approach helps in identifying and mitigating potential risks effectively.
6. Training and Human Oversight
Invest in AI literacy training for all employees, covering both technological and human dimensions of AI. Ensure that there is human oversight in AI decision-making processes to maintain accountability and ethical standards.
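One way organizations make practices like risk-centric governance, third-party risk management, and human oversight concrete is by keeping a simple AI risk register. The sketch below is a minimal, hypothetical illustration in Python; the field names, the example vendor, and the example mitigations are assumptions chosen for this example, not a prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class AIRiskRegisterEntry:
    """One entry in a hypothetical organizational AI risk register."""
    system_name: str
    business_owner: str                 # accountable human owner (human oversight)
    risk_rating: str                    # per the organization's own risk framework
    vendor: str | None = None           # third-party supplier, if any
    mitigations: list[str] = field(default_factory=list)
    human_review_required: bool = True  # keep a human in the loop by default
    last_audit: str | None = None       # date of the most recent audit, if any


entry = AIRiskRegisterEntry(
    system_name="contract-review assistant",
    business_owner="Legal Operations",
    risk_rating="high",
    vendor="ExampleAI Inc. (hypothetical)",
    mitigations=[
        "output reviewed by a lawyer before use",
        "no confidential client data in prompts",
    ],
)
print(entry)
```

However a register is implemented, the design choice it reflects is the same one the practices above point to: every AI system has a named human owner, an assessed risk level, and documented mitigations that can be audited.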
By incorporating these insights and best practices, organizations can navigate the complexities of AI governance and ensure responsible and ethical use of AI technologies. Keep an eye on Caravel’s LinkedIn for future webinars and CPDs! To connect with an expert about AI governance and policies or to view the full webinar recording, reach out to our team today!