According to the AI Act, AI system means “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
The AI Act applies to all systems that meet this definition.
The AI Act prescribes a risk-based approach to AI systems. The Act divides AI practices into the following categories:
- prohibited AI practices,
- high-risk AI systems,
- limited-risk AI systems, and
- minimal-risk AI systems.
The Act also imposes requirements on providers of general-purpose AI models, including, among other things, documentation obligations and a description of the training data used.
Furthermore, the AI Act requires providers and deployers of AI systems to take measures to ensure, to the best of their ability, a sufficient level of AI literacy among their staff and other persons operating or using AI systems on their behalf. These measures must take into account those persons' technical knowledge, experience, education, and training, the context in which the AI systems are to be used, and the persons or groups of persons on whom the AI systems are to be used.