
The European Commission proposed the Artificial Intelligence Act (AIA) in April 2021 to establish a unified legal framework for the development and use of artificial intelligence (AI) systems within the European Union (EU).

The proposed regulation was designed to ensure that AI is developed and used in a way that respects fundamental rights and values, including privacy, human dignity, non-discrimination, and fairness. The legislation outlines a risk-based approach to AI regulation, imposing different requirements on different types of AI systems depending on the level of risk involved.

The AIA sorts AI systems into four levels of risk: unacceptable risk, high risk, limited risk, and minimal risk. High-risk AI systems are subject to rigorous obligations, including technical documentation, risk management systems, human oversight, transparency, and accountability, while unacceptable-risk AI systems are prohibited outright.

To maintain accountability, the regulation also mandates that an AI system's decision-making process be open to audit, and it requires that AI systems undergo the necessary testing and certification before deployment.

After weeks of intense negotiations, Members of the European Parliament have reached a tentative agreement on a new version of the Artificial Intelligence Act. This ambitious legislation was first drafted two years ago and aims to address a range of ethical and regulatory issues related to the use of artificial intelligence (AI) systems in various industries.


Legislators have suggested the term “GPAIS” (General Purpose AI System) to describe AI systems with multiple uses, such as generative AI models like ChatGPT.

Legislators are now debating whether all GPAIS will be classified as high risk, and what the implications would be for technology businesses seeking to incorporate AI into their products. The draft does not explicitly state what obligations the producers of such AI systems would face.


One of the key areas of focus for the AI Act is the regulation of general-purpose artificial intelligence systems (GPAIS), such as the popular chatbot ChatGPT developed by OpenAI. After much debate, lawmakers have agreed on a framework that will bring these systems within the scope of the regulation, ensuring that they meet stringent standards for data quality, transparency, human oversight, and accountability.

The AI Act also aims to address a range of ethical and implementation issues in various industries, including healthcare, education, banking, and energy. By introducing new regulations and guidelines, the legislation seeks to ensure that AI technology is used in a responsible and ethical manner, while also promoting innovation and growth in these sectors.

At the heart of the AI Act is a categorization system that assesses the potential harm AI technology poses to a person's health, safety, or fundamental rights. This framework includes four risk classifications, ranging from "unacceptable" to "minimal," and will form the basis for determining the level of regulation required for different types of AI systems.

Overall, the AI Act represents a significant step forward in the regulation of AI technology in the European Union, setting a framework that other jurisdictions are likely to watch closely.

AI systems that pose minimal risk, such as spam filters and video games, face few restrictions beyond transparency requirements. Systems deemed to pose an unacceptable risk, such as real-time biometric identification systems in public spaces and government social scoring, are banned outright.


The proposed Artificial Intelligence Act has generated a mixed response from business leaders, with some expressing support for the measure and others voicing concerns that the regulations may stifle innovation. One area of particular concern is the Act's explainability standards, which require AI algorithms to be transparent and explainable. While this is seen as a positive step towards ensuring accountability and the ethical use of AI, some businesses have countered that certain complex algorithms may be impossible to explain, even for the programmers who built them.

In addition to transparency concerns, some businesses have also expressed apprehension about the Act’s requirements for disclosure of trade secrets. They fear that mandatory transparency requirements may compel them to reveal sensitive information and trade secrets, potentially harming their competitive advantage in the market.

On the other hand, some lawmakers and consumer advocacy organizations have criticized the Act for not going far enough to address concerns related to AI systems. They argue that the regulations fail to adequately address issues such as bias, discrimination, and privacy violations that may arise from the use of AI technology.

One key feature of the Act is that it empowers the EU’s expert standard-setting organizations in certain industries to develop technical standards for AI technology. This move is seen as an important step towards ensuring consistency and clarity in the development and use of AI systems across different industries.

Overall, while the proposed Artificial Intelligence Act has been met with both praise and criticism, it is clear that the regulation of AI technology is a complex and challenging issue that requires careful consideration and a balanced approach. By introducing new regulations and guidelines for the development and use of AI systems, the Act seeks to promote innovation while also ensuring the ethical and responsible use of this powerful technology.


The industry is eagerly awaiting the approval of the Act, although there is no concrete timeframe for when this might occur. The Act is currently being debated by Parliamentarians, who are expected to reach a consensus on its contents soon. Once this happens, the proposal will enter trilogue negotiations between the key stakeholders: the European Commission, the Council of the European Union, and the European Parliament.

To ensure that affected parties have sufficient time to comply with the new regulations, a grace period of approximately two years will be provided once the final requirements have been established. This will allow businesses and organizations to make the necessary adjustments to their operations, policies, and procedures.

The primary objective of the Act is to strike a balance between promoting innovation and safeguarding the basic rights of EU citizens. To achieve this goal, the legislation will introduce a range of measures designed to address issues such as online harms, data privacy, and digital market competition.

Following its approval, the Act will undergo a period of review and comment to ensure that it is effective, fair, and in line with the EU’s broader policy objectives. It is expected that the legislation will come into effect in 2024, marking a significant milestone in the EU’s efforts to create a more secure and trustworthy digital environment.

Author’s Name: Suhana Roy
