AI innovation and ownership: Navigating the legal landscape
If Gen AI were to invent a novel drug or an efficient chip-manufacturing process, could that invention be patented? And what if Gen AI created a script or a piece of art — could it be copyrighted?
So far, most jurisdictions have not allowed AI-generated inventions to be patented. Computer scientist Stephen Thaler applied in several jurisdictions around the world to have AI-created inventions patented — every jurisdiction denied the application except South Africa. Similarly, most jurisdictions have not allowed AI to be listed as the author or owner of copyrighted works.
Where it gets interesting is AI-assisted invention, where a human uses AI to help create an invention — can such a jointly made invention be patented? A good example is the drug halicin, potentially one of the most powerful antibiotic compounds ever discovered, which was identified by a machine learning model built and deployed by researchers at MIT. Intellectual property offices in different jurisdictions are issuing more guidance on this question.
Under the US case of Pannu v. Iolab Corp. (Fed. Cir. 1998), an inventor must contribute in some significant manner to the invention, make a contribution that is not insignificant in quality when measured against the full invention, and do more than merely explain well-known concepts or the current state of the art. Applied to AI-assisted inventions, these factors do not penalise the human for utilising AI; they still place the burden on the human inventor to prove his or her significant contribution to the invention. The US Patent and Trademark Office has since provided further guidance building on the Pannu factors and opened it for public comment earlier this year.
In the UK case of Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49, the UK Supreme Court likewise held that an AI system could not be named as an inventor, so the AI-generated invention could not be patented. The court further acknowledged that the patentability of AI-generated inventions raises policy issues regarding:
… the purpose of a patent system, the need to incentivise technical innovation and the provision of an appropriate monopoly in return for the making available to the public of new and non-obvious technical advances, and an explanation of how to put them into practice across the range of the monopoly sought.
While both the US and UK intellectual property offices have provided guidance, more AI-related intellectual property bills will come before Congress and Parliament, respectively. In the meantime, companies with a strong research and development budget centred on AI risk not receiving the protection, royalties and recognition that come with a standard registered patent. Companies that rely on AI for innovation should watch developments in the legal framework for AI intellectual property closely.
How are governments regulating AI?
European Union
The European Union has already put forth the EU Artificial Intelligence Act, which regulates general-purpose AI and classifies AI systems into a four-tier risk framework:
- Prohibited
- High-risk
- Limited-risk
- Minimal-risk
Like the distinction between processor and controller under the GDPR, the AI Act distinguishes between a Deployer (user) and a Provider (developer). The Act requires a Fundamental Rights Impact Assessment ("FRIA") for certain deployers of high-risk AI systems, to reduce the possibility of harm to the fundamental rights of individuals, alongside requirements for risk management systems, data governance, record-keeping and transparency, oversight, and cybersecurity. Limited-risk AI systems, by contrast, are subject only to transparency obligations and are encouraged to implement best practices and codes of conduct.
Mark Zuckerberg of Meta, along with other Silicon Valley tech leaders, has criticised the European regulatory landscape as “incoherent” and “unpredictable”, further predicting that Europe could fall behind as a result.
United Kingdom
In comparison to the EU, the UK has adopted a more pro-innovation approach to the regulation of AI, which is primarily outcomes-based and rests on five core principles:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
The new Labour government may seek to implement laws to promote the responsible use of AI. It is likely that the Starmer administration will develop AI regulation that treads a fine line between the US and EU approaches, potentially creating a regime that is an attractive stopgap for US AI companies seeking geographic access to the European market.
United States
The AI regulatory landscape in the US is beginning to parallel the data protection and privacy landscape: federal law is either absent or inadequate on the matter. In autumn 2023, President Biden signed Executive Order 14110, the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, which serves as an agenda for AI-related issues to be addressed. A change in the presidential administration and Congress following the 2024 elections could lead to more bills that provide stronger regulatory enforcement in the AI space.
In an effort to protect creatives and artists, Congress is also contemplating a Generative AI Copyright Disclosure Bill, which would require companies to disclose the material used to train their AI models. Some states, such as tech-heavy California, have already enacted legislation on disclosure of AI training data and on replacing teaching jobs with AI — and many states have applicable privacy laws that may also need to be taken into account.
However, even as new laws and policies emerge globally over the coming years, innovators should ensure that their use and development of AI does not breach intellectual property, data protection or antitrust laws.
CONDITIONS AND LIMITATIONS
This information is not intended to constitute any form of opinion or specific guidance, and recipients should not infer any opinion or specific guidance from its content, including but not limited to legal advice. Recipients should not rely exclusively on the information contained in the bulletin and should make decisions based on a full consideration of all available information. We make no warranties, express or implied, as to the accuracy, reliability or correctness of the information provided. We and our officers, employees or agents shall not be responsible for any loss whatsoever arising from the recipient’s reliance upon any information we provide, and we exclude liability for the statistical content to the fullest extent permitted by law.