AI Act - European Regulation on Artificial Intelligence

The European Union's new regulation governs the use of artificial intelligence systems.

The regulation was published on July 12, 2024, and entered into force on August 1, 2024. Its individual provisions will gradually become applicable during 2025 and 2026.

The regulation classifies artificial intelligence systems by their level of risk: the higher a system's risk, the more obligations must be fulfilled.

Systems with unacceptable risk (for example, systems enabling real-time biometric identification of individuals or systems manipulating human decision-making) are prohibited. Narrow exceptions to this prohibition apply only to law enforcement authorities.

Systems with high risk (biometric identification systems that do not fall into the unacceptable category, systems for assessing and recruiting employees, or systems governing access to services, such as in banks or insurance companies) require a risk assessment before they are put into use and throughout their operation. If the assessment shows that a system poses too great a risk, it must be modified.

Systems with limited risk (for example, chatbots or voicebots and systems that generate text, audio, images, or video) are subject to transparency obligations: the user must be informed that they are interacting with an artificial intelligence system.

Systems with minimal risk (for example, spam filters) fall outside the regulation's scope, and nothing changes in their use.

If you are developing an artificial intelligence system or plan to deploy an existing one in your operations, the regulation requires, among other things, that you train your employees in AI literacy.

Are you unsure how the regulation will affect you? Contact us at info@stuchlikova.com or call +420 222 767 393, and we will help you properly implement the new rules.