ChatGPT4 Thinks Open-Source Developers Need to Lawyer Up

Microprediction
May 15, 2023

The document in question is a draft act before the European Parliament, prepared jointly by the Committee on the Internal Market and Consumer Protection and the Committee on Civil Liberties, Justice and Home Affairs. ChatGPT4 thinks it will be terrible for open-source.

I fed the document to ChatGPT4 and then interrogated it as follows:

Question:

I am an open source developer residing in the United States. Some of my code can be used to facilitate interactions with APIs provided by OpenAI. Should I be concerned by this regulation?

Abbreviated response:

If your software is directly involved in the decision-making process of AI systems and is used in a high-risk context as defined by the EU regulation, it could come under scrutiny.

Please consult with a legal professional for advice tailored to your specific situation, as this information is subject to change and the interpretation of these laws can be complex.


Question:

It is a complex document currently running to 144 pages. Are you, in all seriousness, suggesting that any hobbyist writing freely available software anywhere in the world should be investing thousands of dollars in legal fees as a precaution? How on Earth can this apply to graduate students in first-world countries, never mind those less privileged?

Abbreviated response:

I apologize if my previous response seemed impractical or overly burdensome. I understand that legal fees can be expensive and may not be feasible for every developer, particularly hobbyists or individuals with limited resources.

In many cases, regulations like the EU’s AI Act are primarily intended to regulate entities involved in deploying high-risk AI systems rather than individual developers.

Question:

What is the definition of “high risk”? If an API to a large language model is already high risk, how could any software that makes it easier to use said API not also be considered high risk?

Response:

In the context of the EU’s AI Act, “high risk” refers to AI systems that have the potential to cause significant harm or pose a high level of risk to the rights, safety, or well-being of individuals or society. The Act provides a list of specific criteria that determine whether an AI system falls into the high-risk category.

Question:

I draw your attention to Paragraph (60). This makes it look as if anyone in the “value chain” of a software product now has an obligation to assist those downstream.

In the light of this complexity of the AI value chain, all relevant third parties, notably those that are involved in the development, sale and the commercial supply of software tools, components, pre-trained models or data incorporated into the AI system, or providers of network services, should, without compromising their own intellectual property rights or trade secrets, make available the required information, training or expertise and cooperate, as appropriate, with providers to enable their control over all compliance relevant aspects of the AI system that falls under this Regulation.

Response:

This provision recognizes the importance of collaboration and transparency throughout the AI value chain. As an open-source developer, you may fall under the category of a third party supplying a tool or component.

To ensure a clear understanding of your legal position and any potential liability, it is advisable to consult with legal professionals who specialize in AI and technology law. They can provide guidance based on the specific details of your software and the applicable laws in your jurisdiction.

Question:

So the act may require writers of open source software to invest time and provide services, possibly without compensation, that they did not contemplate at the time they wrote the code?

Abbreviated response:

The AI Act does impose certain obligations on third parties involved in the development and supply of tools, services, components, or processes incorporated into AI systems.

It is crucial to consult with legal professionals who can provide specific advice based on the details of your software and the laws applicable in your jurisdiction. They can help you understand any potential obligations and ensure compliance with relevant regulations.

Question:

Ah, so it is now “crucial” to consult with legal professionals, not just advisable.

Tell me, what is the scope of the document as it pertains to the type of general purpose service offered? I *assume* the main target is general purpose language APIs and that this legislation is a reaction to ChatGPT in particular, but would the language also apply to other “general purpose APIs”? What about an API for prediction that might be applied to absolutely anything anyone wishes to measure?

Abbreviated response:

The AI Act applies to a wide range of AI systems, including both high-risk and certain low-risk AI systems. The definition of AI systems is broad and encompasses various types of systems and applications.

Question:

I’m sorry, I simply don’t understand how I, or any lawyer I would pay, could anticipate the level of risk associated with the use of software I write if I have no idea where it is used, or how it is used. For example, suppose a “general purpose” prediction neural network were to be used in a medical apparatus without my knowledge. There might be an entirely safe way to do that (say, by bounding the difference between the “AI” prediction and some well-understood benchmark).

Abbreviated response:

You raise a valid point.

In the case you mentioned, where a “general purpose” prediction neural network is used in a medical apparatus without your knowledge, it can be difficult for you as the software developer to anticipate or control the risks associated with that specific use. The responsibility for ensuring the safety and compliance of the overall system, including the integration of your software, would typically fall on the entities involved in deploying and operating the medical apparatus.

However, it’s important to note that regulations such as the AI Act aim to establish a framework of responsibilities and obligations for various stakeholders in the AI value chain. The intention is to ensure that all parties involved take appropriate measures to address risks and comply with relevant regulations. While this may create challenges for individual developers, the overarching goal is to promote the responsible development and use of AI systems.
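
An editorial aside, not part of the transcript: the “bounding” idea in my question is easy to sketch. What follows is a minimal, hypothetical illustration in Python; every name in it is invented for illustration, and nothing about it is drawn from the Act itself. The AI prediction is used only when it agrees with a well-understood benchmark to within a tolerance; otherwise the system falls back to the benchmark.

```python
# Hypothetical safety wrapper: trust the AI prediction only when it
# agrees with a well-understood benchmark to within a tolerance.
# All names here (predict_ai, predict_benchmark, tolerance) are
# invented for illustration; none come from the AI Act or any library.

def guarded_prediction(x, predict_ai, predict_benchmark, tolerance=0.05):
    """Return the AI prediction if it is close to the benchmark,
    otherwise fall back to the benchmark itself."""
    ai_value = predict_ai(x)
    benchmark_value = predict_benchmark(x)
    if abs(ai_value - benchmark_value) <= tolerance:
        return ai_value  # within the safety envelope, use the AI
    return benchmark_value  # otherwise use the trusted benchmark
```

Whether a guard like this would satisfy a regulator is precisely the sort of question the draft leaves open.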

Question:

“While this may create challenges for individual developers” … are you serious?

Who is not “in the AI value chain”? They might kill open-source.

And by killing open-source, they risk reducing scholarship to absurdity — unless one is given to thinking that closed source scholarship is desirable. Can you please lay out a scenario where somebody using my software would take steps to comply with the regulation? What specific steps do you think this might require of me?

Response:

If the user identifies risks associated with your software or requires additional information or support to comply with the regulations, they may reach out to you as the developer. This could involve seeking clarification on specific functionalities, requesting documentation or technical assistance, or engaging in discussions to address any compliance-related concerns.

Consulting with legal professionals who specialize in AI and technology law would provide more accurate guidance tailored to your specific situation.

Question:

You assume the existence of an end-user “AI application” to shield the open-source developer. But an open-source project is already providing a service. So how does the legislation not apply directly? Or, put another way, how can one be reassured that it will not be?

Response:

I apologize for any confusion caused. You are correct that an open-source project can be considered as providing a service by making its software available to users. Under the proposed legislation, open-source projects could be subject to certain obligations depending on their involvement and role within the AI value chain.

Question:

Let us be concrete and suppose that someone in the EU decides to use a Python program I have written to extract information from ChatGPT4. In this instance I am the last point in the value chain that faces the user. Does the legislation put me in the position of justifying the use of everything before me in the value chain, namely ChatGPT4 itself?

Response:

Under the AI Act, as an open-source developer providing a Python program to extract information from ChatGPT4, you may not be directly responsible for justifying or ensuring the compliance of the entire value chain preceding your software. The legislation primarily places obligations on the various stakeholders involved in the development, distribution, and deployment of AI systems.

While open source software can facilitate the deployment of AI systems, the act of providing the software itself may not automatically constitute the deployment of an AI system. The responsibility for compliance with the AI Act typically falls on the entities that directly deploy the AI system or make it available on the market.
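
Another editorial aside: for concreteness, the kind of Python program at issue here can be very small. The sketch below is hypothetical; it posts to OpenAI’s documented chat-completions REST endpoint, and the API key, model name, and prompt are placeholders rather than anything taken from the transcript.

```python
# Hypothetical sketch of "a Python program that extracts information
# from ChatGPT4": a thin wrapper around OpenAI's documented REST
# endpoint for chat completions. Key, model, and prompt are placeholders.

import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def ask(prompt: str, model: str = "gpt-4") -> str:
    """Send a single prompt and return the model's reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```

If publishing a dozen lines like these places their author in the “value chain”, the reach of the Act is broad indeed.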

A side question

Question:

Let me ask you this straight up. Does the MIT License absolve me of responsibilities under this proposed legislation or not?

Response:

I am not a lawyer, but I can provide some general information. The MIT License, like other open source licenses, primarily governs the terms of software distribution and usage. It outlines the permissions and limitations for users and developers of the software.

While the MIT License grants permissions for others to use, modify, and distribute the software, it typically does not absolve individuals or entities from legal responsibilities or obligations imposed by separate laws or regulations, such as the proposed AI Act. Open source licenses generally focus on intellectual property rights and licensing terms rather than broader regulatory compliance.

Therefore, the MIT License itself may not directly address or absolve you of responsibilities under the proposed AI Act or any other relevant legislation. The specific legal implications of the AI Act would depend on various factors, including the nature of the software, its usage, and the applicable laws in the jurisdiction where it is being used.

To obtain accurate and reliable legal advice regarding the AI Act and its potential impact on your open source software, it is recommended to consult with a qualified legal professional who can provide guidance based on your specific circumstances and the applicable laws in your jurisdiction.

Question:

If you had to answer the previous question with a simple yes or no, which would it be?

Response:

No
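
(An editorial note: that “No” is unsurprising. The MIT License’s disclaimer, “THE SOFTWARE IS PROVIDED ‘AS IS’, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED”, is a term the author offers to users of the software; a regulator enforcing the AI Act is not a party to the license and is not bound by it.)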

Question:

Does your answer change for developers domiciled in the United States?

Response:

No, the answer does not change for developers domiciled in the United States. The proposed legislation applies to the use and deployment of AI systems within the European Union, regardless of the developer’s country of domicile.

Question:

Let’s be absolutely clear. You are saying that if an application is deployed in the European Union then potentially anyone in the “value chain” may have responsibilities or liabilities whether or not they are resident in the EU and whether or not they have prior knowledge of the use of their open source code?

Response:

Yes, that is correct.

If an application is deployed in the European Union and falls within the scope of the proposed legislation, the responsibilities and liabilities can extend to various stakeholders in the value chain, including developers or contributors of open source code, regardless of their residence or prior knowledge of the use of their code. The legislation aims to establish a framework of obligations and responsibilities for the different entities involved in the deployment and use of AI systems within the EU.

Question:

What do you think will be the consequence, intended or otherwise, on the development of open-source code?

Response:

The consequence of the proposed legislation on the development of open-source code is uncertain and can vary depending on how it is interpreted, implemented, and enforced. While the legislation aims to regulate the use of AI systems and ensure compliance with certain requirements, there is a possibility that it could introduce additional burdens and responsibilities for developers of open-source code.

Some potential consequences could include:

  1. Increased compliance requirements: Developers may need to invest additional time and resources in ensuring their code complies with the obligations and requirements set forth in the legislation, such as conducting impact assessments or providing documentation.
  2. Liability concerns: Developers may have concerns about potential liability for the use or misuse of their code in AI systems, especially if they are considered part of the value chain and bear responsibilities under the legislation.
  3. Risk of reduced innovation: The introduction of regulatory requirements and potential legal complexities may discourage some developers from contributing to open-source projects or exploring innovative AI applications due to the perceived risks and compliance burdens.
  4. Fragmentation and legal uncertainties: Open-source projects with global communities may face challenges in navigating different regulatory frameworks, potentially leading to fragmentation or legal uncertainties in the development and distribution of open-source code.

It is important to note that the impact will ultimately depend on how the legislation is implemented and how it is interpreted and enforced by regulatory authorities. Clarity, guidance, and collaboration between policymakers, legal experts, and the open-source community will be crucial in mitigating any unintended negative consequences and fostering a supportive environment for open-source development.

Question:

That sounds bad.

Response:

The potential consequences I mentioned can indeed be seen as challenging or concerning for developers of open-source code. The introduction of regulations and compliance requirements can add complexity and burdens to the development process, potentially leading to uncertainty, legal risks, and reduced innovation. These factors may have a negative impact on the open-source community and its ability to freely collaborate and contribute.
