Copyright & AI - A Threat to Creativity

Submission to the Government of Canada Consultation on Copyright in the Age of Generative Artificial Intelligence (Jan 15, 2024)

On October 12, 2023, the Government of Canada (Innovation, Science and Economic Development Canada) launched a public consultation to gather insights on potential copyright policy updates concerning artificial intelligence (AI), with submissions from interested parties due by January 15, 2024. The consultation follows the Government of Canada's Consultation on a Modern Copyright Framework for Artificial Intelligence and the Internet of Things, launched in July 2021, on adapting the Copyright Act for the digital age.

The purpose of the consultation was to collect both public opinion and technical evidence on AI's role and application in the current market, particularly its negative impact on creators and their copyright. The consultation focused on four key areas:

  1. Technical Evidence on AI: Soliciting feedback on the technical dimensions of AI to understand its capabilities and limitations.

  2. Text and Data Mining: Gathering perspectives on applying copyright laws to text and data mining (TDM) activities, assessing how current frameworks accommodate these processes.

  3. Authorship and Ownership of AI-generated Works: Seeking opinions on adapting copyright laws to content created with the assistance of AI or entirely generated by AI, focusing on how authorship and ownership should be defined.

  4. Infringement and Liability in the Context of AI: Exploring views on the copyright infringement and liability issues that arise with the use of AI, aiming to address the legal challenges presented by AI technologies.

The following is a summary of the submitted survey feedback, which also incorporates input from artists gathered through polls conducted at the digiArt Art+Tech conference, held from November 24 to 30, 2023.

Technical Aspects of AI

Many artists do not use AI in creating artwork; their works are solely the result of human effort. A smaller number of artists use AI to assist them in creating art, often unaware that they may be infringing the copyright of other artists and creators. Artists have discovered that their works were used in AI models and are deeply concerned, as these works were used without their permission.

The rise of generative AI tools like ChatGPT and DALL-E has sparked concern among artists regarding the use of their copyright-protected content to train AI models. These concerns centre on developers using their work without consent and without proper credit, attribution, or compensation. The Copyright Act provides that artists should be compensated when their copyright-protected works are used or reproduced, and the current exceptions and defences are sufficient to address use without permission.

Transparency and clarity concerning the use of copyright-protected works in AI models are crucial in the evolving landscape of AI and the creative industries. Artists feel that AI developers should maintain records of, and disclose, their use of copyright-protected works in accordance with the provisions of the Copyright Act. This would help ensure that artists receive proper compensation and credit. While users and AI developers may take a different view, the threat to creativity is real, and protecting and respecting copyright is the only way to foster creativity.

Text and Data Mining 

There is a need for increased transparency and well-defined guidelines for attribution and compensation when artists' works are used in text and data mining (TDM) activities. Generative AI models such as ChatGPT, GPT-4, and DALL-E are trained on public domain and copyright-protected works, including art and images found online, using text and data mining to write essays, compose music, generate code, and produce artwork. Where copyright-protected works are used without permission, this violates copyright law, yet certain industry players are pressing to amend the Copyright Act to provide a broader exception for such use. This would be harmful to artists and creators.

In the digiArt artist poll, many respondents expressed a need for more awareness regarding how to grant permission or monetize the use of their works in AI models. Therefore, mechanisms must be in place to simplify the process for artists who wish to provide licenses for their work, ensuring accessibility and ease of use.

While some artists have started offering licenses for text and data mining, they need help enforcing their rights and seeking proper remuneration. Platforms that use only the work of artists who have explicitly consented to participate, and who are compensated for that participation, offer a sound model for using copyright-protected works. Furthermore, an artist has the right to control how their work is ultimately used, and the Copyright Act provides that users must obtain explicit permission unless the use falls under one of the specified exceptions or defences.

The debate over opt-in versus opt-out consent models extends beyond privacy and touches on artists' fundamental control over their creations. Opt-in models align better with ethical standards, empowering artists to make conscious, informed decisions about how their work is used and compensated, consistent with the Copyright Act.

Collective societies emerge as a potential avenue for compensating rights holders for the use of their works in AI models. Collective societies can provide legal defence and advocacy, as well as simpler licensing and use. However, artists and creators should be given the final choice as to which business model for licensing their works they prefer: collective licensing or direct licensing and permission. Artists and creators should have complete autonomy and control over the fees and conditions for the use of their work. A variation of the Creative Commons licenses adapted to the use of artists' work in AI could help artists grant permissions and licenses more efficiently. It is imperative, however, that artists, creators, and copyright owners in the creative industries retain complete control over which option they prefer and how their work may be used. This ensures fair compensation and clear attribution.
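
To make the licensing idea concrete, the sketch below shows one hypothetical machine-readable permission record an artist or collective society could publish alongside a work, so that AI developers could check the terms before ingesting it. The field names, values, and workflow are invented purely for illustration; they do not correspond to any existing Creative Commons licence, collective society format, or industry standard.

```python
# Hypothetical sketch: a machine-readable permission record for a single work.
# All field names and values are invented for illustration; no existing
# standard, licence, or collective society's format is implied.
import json

permission_record = {
    "work_title": "Untitled No. 7",          # illustrative work
    "rights_holder": "Example Artist",       # could also be a collective society
    "ai_training_allowed": False,            # opt-in default: no use without consent
    "licensing_model": "direct",             # "direct" or "collective"
    "licensing_contact": "licensing@example.com",
    "conditions": {
        "attribution_required": True,
        "compensation": "per-use fee, negotiated directly or through a collective",
    },
}

# An AI developer's ingestion pipeline could check the record before using the work.
if not permission_record["ai_training_allowed"]:
    print("Permission required:", json.dumps(permission_record["conditions"], indent=2))
```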

As AI technology advances, lawsuits and court decisions will provide further insight and shape the future of copyright law concerning AI-generated content. A TDM exception under the Copyright Act in Canada is unnecessary; where use without permission is justified, it can fall under the current fair dealing exceptions, similar to the approach taken in the United States, which lacks an explicit TDM exception and relies on fair use. Canada should follow the lead of the US on these issues.

There is a need for support, including legal defence and advocacy for the creative industries, in navigating copyright in the AI era. Many corporations controlling AI have legal defence funds, making it difficult for individual artists or creators to protect their rights. These include OpenAI, IBM, Microsoft, Amazon, Getty Images, Shutterstock, and Adobe, which have committed to indemnifying generative AI customers against copyright and IP claims arising from the use of their services. OpenAI, for example, created Copyright Shield to cover the legal costs incurred by customers who use OpenAI's developer platform and its business tier, ChatGPT Enterprise. A collective approach ensures that artists' rights are safeguarded and provides a unified voice in negotiating fair compensation and protection.

Authorship and Ownership of Works Generated by AI

Under the Copyright Act and the moral rights provision, copyright owners possess the inherent right to attribution, credit, and exclusive permission to use their intellectual creations. This fundamental principle should extend seamlessly to AI-assisted and AI-generated works. Copyright owners must retain the unequivocal ability to grant authorization for utilizing their works within AI systems while also delineating the specific conditions and parameters governing such usage.

The government must play a role in ensuring clarity and transparency when AI models incorporate copyright-protected works with the appropriate permissions. This entails indicating whether a work is being licensed within an AI model and enforcing the obligation to provide due credit and attribution, contingent upon the artist or creator's permission.

The approach by the United Kingdom in providing a code of practice for the use of AI systems is a good one. A similar framework in Canada would enhance clarity for copyright owners and users, fostering a balanced environment where authorship and ownership of AI models are well-defined and respected. Clear guidelines are essential to address the intricacies of authorship and ownership in AI models.

Infringement and Liability regarding AI

The existing legal criteria for determining copyright infringement may prove inadequate in cases where the authorship of AI-generated works is unclear, particularly when multiple pre-existing works are utilized or amalgamated. To address this issue and enhance transparency, it is essential that when users create an output using AI, they are provided with information about which copyright-protected works contributed to the output.

It's worth noting that many artists are reluctant to embrace AI due to concerns about using copyrighted works without permission. This hesitancy reflects a broader sentiment within the artistic community, emphasizing the need for greater clarity in defining liability when AI-generated works potentially infringe upon copyright.

The US Copyright Office has provided guidance on whether AI-generated and AI-assisted works can obtain copyright registration, and the developing case law from various class actions in the creative industries may also provide direction; both are discussed in more detail below.

In addition to potential copyright violations from using copyright-protected materials without proper authorization, other legal risks in the use of AI in the creative industries include:

Lack of Transparency: The opacity of AI-generated creative processes can create challenges in identifying the origin of content, raising issues of transparency and attribution.

Privacy Violations: AI systems may collect and process personal data, risking privacy breaches and violations of data protection laws if not managed appropriately.

Ownership and Authorship Ambiguity: Determining ownership and authorship of AI-generated artworks or content can be complex, leading to disputes over intellectual property rights.

Ethical Concerns: The use of AI in creative works may raise ethical questions, such as the potential for bias in AI-generated content or the use of AI to replicate an artist's style without consent.

Data Security: The handling and storage of large datasets for AI training can pose security risks if not adequately protected, leading to data breaches and legal repercussions.

Regulatory Compliance: Compliance with evolving regulations related to AI, copyright, data privacy, and consumer protection is essential, with non-compliance carrying legal penalties.

Algorithmic Accountability: Lack of accountability in AI algorithms can result in unintended consequences, discrimination, or bias, leading to legal challenges and liabilities.

Licensing and Permissions: Using third-party datasets or copyrighted materials in AI applications requires proper licensing and permissions, with failure to do so risking legal action.

Liability for AI-generated Content: Determining liability for AI-generated content, especially in cases of harm or misinformation, poses legal challenges that need resolution.

These legal risks underscore the complexity of integrating AI into the creative industries, necessitating clear legal frameworks and responsible practices to mitigate potential legal issues.

Liability for Copyright Infringement of AI-generated Works - Guidance from the US Copyright Office and US Court Decisions

As noted above, the existing legal tests for infringement may prove inadequate where the authorship of an AI-generated work is unclear, particularly when several pre-existing works are used or merged. When a user creates an output, they should be advised which copyright-protected works were used in that output, for better clarity and transparency. Many artists refuse to use AI as a stance against the unauthorized use of others' works, especially since current AI models have been trained on copyright-protected works without permission. The rights of copyright owners must be protected and not diminished in any way under the Copyright Act.

There needs to be greater clarity on where liability lies when an AI-generated work infringes copyright. The US Copyright Office and developing US case law provide some guidance on these issues. The US Copyright Office's Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence (March 16, 2023) is extremely helpful and should serve as guidance for the Canadian government. According to the US Copyright Office, it is well established that copyright protects only "material that is the product of human creativity" and that, "most fundamentally, the term 'author,' which is used in both the Constitution and the Copyright Act, excludes non-humans."

It further notes that "a human may select or arrange AI-generated material in a sufficiently creative way that the resulting work as a whole constitutes an original work of authorship." In earlier US case law, a monkey could not register a copyright in photos it captured with a camera because the Copyright Act refers to an author's "children," "widow," "grandchildren," and "widower," terms that "all imply humanity and necessarily exclude animals." A similar line of reasoning can be applied to AI-generated works.

In February 2023, the US Copyright Office determined that a graphic novel comprising human-authored text combined with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright. The Office also notes that an artist can modify AI-generated works, provided such modifications meet the standard for copyright protection.

With respect to works containing AI-generated material, the Office will consider whether the AI contributions are the result of "mechanical reproduction" or instead of an author's "own original mental conception, to which [the author] gave visible form." The Office notes that this will depend on the circumstances, assessed case by case, "particularly how the AI tool operates and how it was used to create the final work."

According to the US Copyright Office, prompts themselves are not copyright-protectable, as the "traditional elements of authorship" are "determined and executed by the technology – not the human user." The Office explains that "users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist – they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output."

The US Copyright Office has also ruled out registering copyright with an AI, rather than a human, named as the author. In 2022, it rejected Stephen Thaler's application to register an image created by his "Creativity Machine" AI because the work "lacks the human authorship necessary to support a copyright claim."

In another case, the US Copyright Office initially registered Zarya of the Dawn, a graphic novel by Kris Kashtanova featuring artwork generated with Midjourney, on September 15, 2022, but later retracted the registration. The artwork was AI-assisted rather than created entirely by the AI: Kashtanova wrote the comic book story, created the layout, and made artistic choices to piece the images together. In February 2023, the US Copyright Office canceled the original certificate; the writing and other original elements remain protected, but the images do not. The Office noted that only human-made images and art are eligible for copyright protection, not those produced mechanically.

On January 13, 2023, Sarah Andersen, Kelly McKernan, and Karla Ortiz (in a proposed class action on behalf of other artists) brought a copyright infringement lawsuit against the developers of AI art-generation tools, including Stability AI (Stable Diffusion and DreamStudio), Midjourney, and DeviantArt.

OpenAI, which created DALL-E 2, was not named in the lawsuit because its training data has yet to be made public. The plaintiffs describe AI image generators as nothing more than "a 21st-century collage tool that remixes the copyrighted works of millions of artists whose work was used as training data." I agree that the current usage seriously threatens the livelihood of artists and creators in the creative industries.

The lawsuit claims that the images generated by the AI tools are an unlawful and infringing "derivative work" based on the billions of copyrighted images used to train the models.

Anti-AI-theft technologies such as Nightshade and Glaze are emerging as protective measures and should be explored, provided their use is lawful and ethical. Nightshade employs data poisoning, introducing subtly altered images into AI training datasets so that models learn incorrect associations, while Glaze adds subtle perturbations ("cloaks") to artwork so that models trained on scraped copies cannot accurately mimic the artist's style.
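
For illustration only, the sketch below shows one way a "cloaking" perturbation could work in principle: a small, bounded change to an image's pixels that shifts its representation inside a generic feature extractor while remaining visually negligible. This is not the Nightshade or Glaze algorithm; the choice of model (a pretrained ResNet-18 as a stand-in feature extractor), the step sizes, and the file names are assumptions made for the sketch.

```python
# Hypothetical illustration of a gradient-based "cloaking" perturbation, in the
# spirit of tools like Glaze. Real tools use far more sophisticated, targeted
# optimization; the model, parameters, and file names here are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image


def cloak_image(path_in: str, path_out: str, epsilon: float = 4 / 255, steps: int = 10) -> None:
    """Add a small, bounded perturbation that pushes the image's features away
    from their original values in a generic feature extractor."""
    # Stand-in feature extractor (assumption: real tools target style features).
    extractor = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    extractor.fc = torch.nn.Identity()
    extractor.eval()

    x = T.Compose([T.Resize((224, 224)), T.ToTensor()])(
        Image.open(path_in).convert("RGB")
    ).unsqueeze(0)

    with torch.no_grad():
        original_features = extractor(x)

    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Maximize the distance between perturbed and original features.
        loss = -torch.nn.functional.mse_loss(extractor(x + delta), original_features)
        loss.backward()
        with torch.no_grad():
            delta -= (epsilon / steps) * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)  # keep the change visually negligible
            delta.grad.zero_()

    T.ToPILImage()(torch.clamp(x + delta, 0, 1).squeeze(0)).save(path_out)


if __name__ == "__main__":
    cloak_image("artwork.png", "artwork_cloaked.png")  # hypothetical file names
```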

It is crucial to strike a balance where the rights of copyright owners are preserved and not diminished in any way under the Copyright Act. A comprehensive and evolving legal framework will help navigate the intricate terrain of copyright in the age of AI, ensuring that authorship, ownership, and protection remain robust and equitable.

The 2021 UNESCO Recommendation on the Ethics of Artificial Intelligence offers a comprehensive framework that underscores the importance of ethical considerations in the development and application of artificial intelligence. These guidelines emphasize essential principles such as Proportionality and the imperative to Do No Harm; ensuring Safety and Security; promoting Fairness and Non-discrimination; adhering to Sustainability; protecting the Right to Privacy and Data Protection; ensuring Human Oversight and Determination; enhancing Transparency and Explainability; upholding Responsibility and Accountability; fostering Awareness and Literacy; and advocating for Multi-stakeholder and Adaptive Governance.

Furthermore, the General Conference of the United Nations Educational, Scientific and Cultural Organization (UNESCO) advocates for adopting globally accepted ethical standards for AI technologies. These standards, which are to be in strict adherence to international law, especially human rights law, are envisaged to play a pivotal role in shaping AI-related norms worldwide.

In this context, the legal frameworks of neighbouring countries, in conjunction with international guidelines, present a compelling case for integration and adherence. The recommendation underscores the necessity of a collaborative approach to establishing and following ethical standards that not only respect human rights but also guide the responsible evolution of AI technologies globally.