
New rules and actions aimed at making Europe the world leader in artificial intelligence (AI)

The new rules will be directly applicable in all Member States. They follow a risk-based approach:

Unacceptable risk: AI systems considered a clear threat to the safety, livelihoods and rights of people will be prohibited.
High risk: AI technologies used in critical infrastructure (for example, transport); AI technologies used in education or vocational training, which can determine access to education and a person's professional career; AI technologies used in safety components of products (for example, AI applications in robot-assisted surgery); AI technologies used in employment, workforce management and access to self-employment (for example, CV-sorting software for recruitment procedures); AI technologies used in essential private and public services (for example, credit risk assessment); AI technologies used in law enforcement, which may interfere with people's fundamental rights; AI technologies used in the management of migration, asylum and border control; AI technologies used in the administration of justice and democratic processes.
Limited risk, i.e. AI systems subject to specific transparency obligations: when using AI systems such as chatbots, users should be aware that they are interacting with a machine so that they can make an informed decision on whether or not to continue.

Minimal risk: the legislative proposal allows the free use of applications such as AI-enabled video games or spam filters. The vast majority of AI systems fall into this category. The draft regulation does not foresee any intervention in this area, as these systems represent only minimal or no risk to citizens' rights or safety.

See also: the Communication on Fostering a European Approach to Artificial Intelligence and Shaping Europe's Digital Future.
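The four risk levels described above amount to a simple classification scheme. The sketch below is purely illustrative and not part of the regulation text: the tier names follow the proposal, but the example mapping, the social-scoring entry and the function name tier_for are assumptions added here for illustration.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers set out in the draft EU AI regulation."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # subject to strict obligations before market access
    LIMITED = "limited"            # transparency obligations only (e.g. chatbots)
    MINIMAL = "minimal"            # free use, no regulatory intervention foreseen


# Hypothetical, non-exhaustive mapping of use cases to tiers, loosely based on
# the examples given above; it does not reproduce the legal text.
EXAMPLE_USE_CASES = {
    "social scoring by public authorities": RiskTier.UNACCEPTABLE,  # assumed example
    "CV-sorting software for recruitment": RiskTier.HIGH,
    "credit risk assessment for essential services": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "AI-powered spam filter": RiskTier.MINIMAL,
    "AI-enabled video game": RiskTier.MINIMAL,
}


def tier_for(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier for a named use case."""
    return EXAMPLE_USE_CASES[use_case]


if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.value} risk")
```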

For more information, click here.