AI and Copyright in Canada: authorship, liability, and TDMs

August 4, 2021
By Paul Horbal, Naomi Zener and Tamara Céline Winegust

In July, the Federal Government of Canada launched a Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things (discussed in our separate article, Consultation on Copyright, AI, and IoT now open). The consultation seeks public comment on the future of copyright law in Canada in the fields of artificial intelligence (AI) and Internet of Things (IoT) technologies. The comment period is open until September 17, 2021. Submissions may be e-mailed to copyright-consultation-droitdauteur@canada.ca.

This article focuses on the issues related to AI and copyright canvassed in the consultation paper.  

What exactly is AI, you ask? As the consultation paper acknowledges, coming up with a blanket definition is no easy task. One such definition is “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions designed to operate with varying levels of autonomy.” AI may be involved in creating data and making use of models for processing, validating, distributing, and monitoring information. Under the current Canadian Copyright Act, computer software code and programs are protected as literary works. However, a work created entirely by AI without human intervention—if such a thing is even possible—likely is not.

In the United States, entirely AI-created works are not copyright-protected—there must be a human author. Policy issues exist around the use of copyright-protected works to train AI and machine learning, as well as the use of AI to create literary, dramatic, musical, and artistic works. For example, when AI or a machine creates a work, who is the author? When such works are used and monetized, who earns the revenue? If AI infringes copyright in a work, who is liable? Further complicating the discussion is the lack of a concrete, universal definition of what constitutes AI—the label is applied to many different technologies, from general AI, narrow AI, and deep learning to algorithmic methods, automation, and computation that do not necessarily involve AI at all.

The consultation paper canvasses three areas of particular interest to the government at the intersection of copyright and AI: (1) authorship and ownership of works generated by AI; (2) copyright infringement and liability regarding AI; and (3) text and data mining (TDM), also known as “Big Data”.

Authorship and AI

One fundamental issue for which comment is sought is the determination of what constitutes authorship and ownership of works created by, or with the assistance of, AI.

This question is not easily answered. There is tension within the Copyright Act itself—it does not define “author”. However, aspects of the Act and the current jurisprudence seem to require that the author be a natural person (i.e., a living, breathing human). For example, the general term of protection is calculated as the life of the author plus a set number of years. Similarly, for copyright to arise at all under current Canadian jurisprudence, the work must be produced by an exercise of the author’s “skill and judgement”; it cannot be so trivial as to be a purely mechanical exercise or process. There must be some sentience and directed effort underlying the creation.

The consultation paper highlights some potential solutions. One approach could see authorship of AI-created works attributed to the human(s) who arranged for the creation of the work; such a strategy has been adopted in the UK, Ireland, and New Zealand. Another approach could be to clearly restrict copyright and authorship to human-produced works, with no copyright protection afforded where there is no human involvement. A final approach could be to create a new set of rights unique to AI-generated works.

AI and Liability

Whatever approach is ultimately adopted with respect to the assessment of “authorship” will also, no doubt, play into how liability for copyright infringement is assessed. Like authorship, assessing infringement by AI is a tricky matter, because it requires, in part, a determination of who (or what) created the infringing work or copy. Moreover, any consideration of AI-related infringement would need to determine whether such acts constitute direct infringement or secondary/contributory infringement. If the latter, a rights holder would also need to prove an underlying direct infringement.

In addition to the issue of liability, the consultation paper raises the foundational question of which activities by AI could constitute “infringement”. In general, infringement of a copyright-protected work requires the reproduction, performance, or publication of a “substantial part” of the work. It is not clear, for example, whether a substantial part of a work is used or reproduced when an AI is trained on it or generates a new work. Some types of AI synthesize existing information to create new kinds of information. If such an AI synthesizes the works of Renaissance masters, there is an open question whether a painting it creates would be considered a reproduction of the Mona Lisa. Likewise, there is arguably a difference if the AI is trained only on the works of da Vinci, as compared to the entire collection of the world’s museums. As with human-produced works, the manner in which an AI operates—its “creative process”—may be so opaque as to make it extremely difficult, if not impossible, to determine what the AI does with a copyright-protected work it is “trained” on when it produces new works.

Added to this is the issue of moral rights infringement. If the AI fails to give author attribution where it was reasonable to do so, or if it modifies the underlying work or associates it with a cause, organization, or institution without the author’s consent, such activity could give rise to moral rights claims.

TDM and Copyright

Further complicating the matters of authorship/ownership and infringement/liability is the application of text and data mining (TDM)—another area of interest highlighted by the consultation paper. TDM—sometimes called “Big Data”—encompasses different ways to analyze and synthesize information based on large amounts of machine-readable data, in a way that would be infeasible for humans. It is an application of AI that typically requires large volumes of data to function. That data is often drawn from existing content, such as books, letters, audio recordings, photographs, or videos.

We are already seeing AI and TDM used to “write” screenplays and (seemingly) reanimate the dead through voice mimicry and deepfakes. For now, there is still human direction: programmers or users give the AI and TDM instructions to work with, such that these technologies can be considered mere tools, albeit sophisticated ones. As they develop and become more human-like in both their “creativity” and their autonomy, and require less and less human intervention to function and “create”, determining authorship (if it even exists) could become more difficult.

Moreover, there is uncertainty as to which of the existing exceptions under the Copyright Act (e.g., fair dealing), if any, apply to TDM activity. Rightsholders already face practical challenges in enforcing their copyright and commercializing their works in the context of TDM, especially when such works are publicly available on the Internet. Likewise, innovators seeking to train their AI or run TDM often face questions about whether the copying inherent in training and using TDM technology runs afoul of copyright or, in some cases, technological protection measures (TPMs) that block their use of certain works. It is also uncertain whether the TPM circumvention exceptions under the Act are sufficient to balance the rights of copyright owners against the interests of innovators and users seeking to employ copyright-protected works in this manner.

Conclusion

The consultation paper canvasses the issues raised by, and different potential approaches to, regulating activities involving AI and copyright. It suggests that lawmakers are aware of the complexity surrounding the impact of new and developing technologies on existing legal frameworks and have an interest in crafting a forward-looking solution.

Whatever approach is ultimately adopted, however, will need to address fundamental philosophical questions underlying society’s relationship to AI specifically, and to technology in general: when the process of creation is indistinguishable from magic and technologically generated content rivals human-produced creative works, how do we (or can we? or should we?) separate the maker from the machine?
