By Shirsha Chakraborty, Vera Dvorakova, Martina Hrgovic, and Andy Peñafuerte III

The arrival of ChatGPT, a generative AI chatbot, has lately taken social media by storm. Suddenly, almost everyone was using it for all kinds of things, from drafting a school essay to asking for a joke about clowns.

But the rise of this technology has also been met with a great deal of scepticism and worry. If generative AI is here to stay, how do we make sure it’s used properly? Whose data does it use, and how? And why is it so difficult to regulate?

The principles of generative AI

This type of AI basically works like any other software: you give it a prompt (or a command) and it executes it. The trick is that these AI models are trained on massive datasets and learn to synthesise all that data into an answer to your prompt. They can be used to answer questions, sort complicated data or generate original images. ChatGPT specifically caused so much excitement because it replies to prompts in a way that seems natural and human-like.
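To make that prompt-and-response loop concrete, here is a minimal sketch of what querying such a model can look like in code. It assumes OpenAI’s official Python library and an API key; the model name is illustrative rather than prescriptive.

```python
# Minimal sketch: send a prompt to a generative model and print its reply.
# Assumes the official `openai` Python package (v1+) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is an example.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # example model; any chat-capable model works
    messages=[{"role": "user", "content": "Tell me a joke about clowns."}],
)

# The model synthesises patterns learned from its training data into a reply.
print(response.choices[0].message.content)
```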

It’s important not to confuse generative AI with regular AI. AI is an umbrella term for any technology that learns from patterns; conventional AI can recognise those patterns and make predictions, but it cannot produce new content. In contrast, generative AI produces or manipulates data to generate unique content. It can improve and speed up the work of creatives by coming up with new concepts in a fraction of the time.

But just like any new technology, it has prompted efforts to set legal boundaries for its use. In the media, ChatGPT has already been accused of undermining democracy and destroying education, among other things. The very nature of this AI, however, makes it fertile ground for copyright regulation because of the way it gathers, processes, and creates content. If generative AI uses the data available on the internet to produce information, how should we rethink our current intellectual property laws?

Before discussing the intricacies of AI regulation, let us dive into the history of copyright. The concept of copyright infringement is not new: it dates back as far as the 6th century BCE, to Greek, Roman and Jewish cultures. Scroll through the ThingLink below to trace the history of copyright through the centuries.

Issues with intellectual property

The main questions concerning generative AI and copyright are: what can be protected by copyright, and who can hold it?

The issues with data

Software such as ChatGPT or DALL-E is trained on data taken from the internet. So what’s the catch? There’s no mechanism for generative AI to credit its sources. 

So, how is it legal for a machine to use real people’s work? In the US, the fair use doctrine allows for limited use of copyrighted data for criticism, commentary, news reporting, teaching, and research. Similarly, the EU acknowledges that text and data mining can “benefit the research community and, in so doing, support innovation”. 

However, fair use only gets us so far. We need to consider the authorship of “new” content that generative AI creates by drawing on existing copyrighted material. This becomes a problem particularly for creatives, such as painters or digital artists. Three of them are now suing several generative AI organisations for collecting and using data, including original art, to train bots – without the authors’ consent.

Then there is the question of the products generated by this type of AI: you give a prompt and it provides a response. Can the result be considered your intellectual property? Not really. A substantial amount of human input must be proven for the output to be protected. In the US, the Copyright Act gives the copyright holder the exclusive right to create derivative works based on the protected material, and a derivative work can itself be copyrighted if it involves substantial originality.

As Daniel Gervais, Director of the Vanderbilt Intellectual Property Program, sums it up: originality implies human creative choices. He argues that machines cannot produce originality, and hence their output cannot violate the derivative work right. Annemarie Bridy, copyright counsel at Google, points out that work generated by AI doesn’t include the code used to produce it, which means it cannot be considered a derivative work.

The question of authorship

Whether the creator of a generative AI can automatically be considered the author of its output, even though they cannot predict it, is still debatable. Here we come back to the question of creative choices: if the output is not shaped by creative choices made by the machine’s programmer, protecting it would over-reward them.

Claiming copyright over the machine’s product as its user is trickier, because it depends on the type of machine. Traditionally, three types are distinguished:

Now, it all boils down to the big question: can the machine itself be considered the author of its work? If you ask Stephen Thaler, president and CEO of Imagination Engines, the AI program he created should be recognised as an inventor and hold the rights to its creations. If you ask the US courts, even after several lawsuits, it’s not happening.

The regulations now

Some states and supranational bodies are in the process of establishing regulations and clarifying guidelines on AI technologies, but most of these bills or frameworks have yet to be approved. Carnegie Endowment for International Peace fellows and global technology experts Matt O’Shaughnessy and Matt Sheehan say there are several approaches that many of these frameworks follow: 

Horizontal approach: The European Artificial Intelligence Act

Basically, horizontal regulation is an umbrella-like approach that tries to govern several areas at once, as the proposed European AI Act does. This regulatory framework defines AI technologies and sorts them into four “risk categories”, so the rules can match different contexts and keep up with new developments. Within each category, different bodies – courts, standards organisations, and developers – define the parameters.


This set-up comes with a risk, however: those bodies could interpret differently what AI should be allowed to do and how the regulations should be enforced. That adds a layer of complexity, given the varied social and national contexts of EU member states.

Vertical approach: China’s regulations

If horizontal regulation is the umbrella, vertical regulation is the spokes: it takes a top-down approach that targets specific applications and monitors how those technologies are deployed. One recent example is China’s regulation requiring companies to notify consumers whenever AI algorithms are used in online recommendation systems. Tools that follow the ideas of a horizontal approach can also be applied within this vertical regulation.

Since a vertical approach is targeted, it runs the risk of being outpaced by rapidly evolving technologies. In China’s case, AI regulations are so vaguely defined that they “shift power from technology companies to government regulators.” 

Combined approach: the United States’ guidance on AI

Both horizontal and vertical approaches have strengths and limitations, so combining them can maximise the reach of regulation. The United States currently has a disparate set of AI regulatory principles, including the Trump administration’s guidance focusing on “weak AI” (released in November 2020) and the Biden administration’s Blueprint for an AI Bill of Rights, both fundamentally horizontal approaches that aim to inform vertical-level regulators. A bipartisan framework on AI risk management, released in January 2023, is a non-binding guideline that applies some horizontal-level tools to support vertical regulation.

While governments scramble to find comprehensive regulations for generative AI, and creators fight legal battles over the ownership of AI-generated work, the technology keeps developing rapidly and lawmakers are trying to catch up with it. The new regulations will shape the future of generative AI software, setting boundaries on how it can be used and for what. And since regulations may limit the development of this new technology, we had better keep an eye on what the lawmakers come up with.