The explosion of artificial intelligence (AI) generated content over the last few months has been staggering. The growth has been so prolific that experts predict as much as 90% of online content will be synthetically generated – or created by an AI – by 2026.
At Caravel, we’ve been following the development of AI and its impact on the law for quite some time. Monica Goyal, our Director of Legal Innovation, has written several articles on the implications of artificial intelligence for lawyers for Canadian Lawyer magazine, including posts on legal AI tools like Harvey and how ChatGPT will affect the law.
And there are real legal, moral and ethical threats when it comes to AI technology. In fact, more than 1,400 tech leaders have called for a pause on training some sophisticated AI tools, citing unforeseen risks to our civilization.
Like most technological advancements experiencing lightning-fast growth, the law and regulations around AI are still murky, especially regarding images created using artificial intelligence.
A tool that enables you to generate an image from a simple user prompt – like OpenAI’s DALL-E 2 – seems like an incredibly cost-effective and time-saving option. But the regulation around the use of these images, and who owns them, is still unclear. Businesses leave themselves exposed when they use these tools without properly considering the legal implications for their organization.
The legality of using AI-generated imagery
AI imagery is not created in a vacuum. AI platforms have access to billions of images from across the internet. When you use an AI tool to create a new image, the output is actually a pixel-by-pixel rendering based on the images the system has been trained on and has categorized.
Earlier this year, Getty Images sued Stability AI for allegedly copying more than 12 million photographs from their collection without permission or compensation.
Getty claimed the popular artificial intelligence image creator was engaging in “brazen infringement of their intellectual property on a staggering scale” and that the AI company was effectively trying to start a competing business.
In Canada, it’s possible that a defence against copyright infringement – fair dealing – would not be successful in such a lawsuit. The fair dealing defence is an exception to copyright infringement. Whether the defence is successful depends on whether the copyrighted work is used for a permitted fair dealing purpose under the Canadian Copyright Act and whether the use is fair.
Assessing whether the dealing is fair depends on several factors, including the dealing’s effect on the market for the original work. If a competing image creator uses images in which a stock photography company owns copyright, it could result in lost market share for the stock photography company, in which case fair dealing may not be a successful defence.
The unauthorized use of images by generative AI may also violate moral rights, which are the rights of the image’s creator to be associated or not associated with the image, as well as the creator’s right to restrain alteration of the image they created.
Writing AI protection into law in Canada
In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. The goal of the proposed AIDA legislation is to ensure that the design, development and use of AI systems are safe, and to encourage the responsible adoption of AI technologies by Canadians and Canadian businesses.
However, many argue that the legislation as it currently stands is not strong enough. Part of the challenge regulators and politicians face is that they are attempting to write legislation for technology that continues to change and evolve daily.
If the AIDA, which is currently being studied by a parliamentary standing committee, becomes law, it is expected to come into force around 2025 and would be Canada’s first national regulatory scheme for AI systems.
How to protect your business from the risks of AI-generated imagery
In the meantime, AI will continue to grow, evolve, and significantly impact Canadian companies.
To help manage some of the risks AI poses to businesses, Caravel Senior Counsel Lorraine Fleck recommends creating and implementing an AI use policy. An AI use policy should outline which behaviour and actions are acceptable and unacceptable when using AI tools in the workplace, with explicit references to intellectual property and data protected by privacy law, such as personally identifiable information (PII).
Businesses should hold employee training sessions that explain the AI policy in detail and why it’s necessary. It’s also important to ensure all employees attend the training, sign a document afterward confirming, among other things, that they attended, and receive a certificate of completion. This will help your business demonstrate due diligence if an employee commits AI-related unauthorized use of intellectual property or PII.
Balancing AI’s benefits and risks
In many ways, artificial intelligence is like a knife: a tool that can be enormously valuable but also put to less positive uses. AI has beneficial applications, such as cancer diagnostics, supporting those with disabilities, and helping tackle climate change. But when used maliciously, it can be incredibly dangerous and harmful.
If you need help understanding the risks AI poses for your business, we can help. Caravel has a team of 85 qualified and experienced lawyers, including those specializing in the law and artificial intelligence. Get in touch with our team today to find out more.