In this article, we would like to discuss the most important questions regarding the EU AI Act:
  • What regulatory obligations arise for you as a customer and user?
  • What regulatory obligations do we (skillconomy) have and how do we fulfill them?
The most important thing first: As a customer of skillconomy, you are not an actor within the meaning of the EU AI Act! This means that you have no regulatory obligations arising from the EU AI Act!

General Information

The EU AI Act is the world’s first comprehensive regulation for the governance of artificial intelligence. It follows a risk-based approach and obliges developers as well as operators of AI systems to comply with transparent, fair, and safe frameworks—especially in sensitive application areas such as recruiting.

When does the EU AI Act apply?

Entry into force: August 1, 2024. Although the EU AI Act entered into force in summer 2024, its implementation is staged over several years. From August 2, 2026, the full scope of the EU AI Act will apply in all EU member states.

Implementation Timeline

Phase                                    Date
Ban on AI with unacceptable risks        February 2, 2025
Voluntary guidelines for developers      May 2, 2025
Regulations for General-Purpose AI       August 2, 2025
Transparency obligations                 August 2, 2026
High-risk requirements mandatory         August 2, 2026

Who is subject to the provisions of the EU AI Act?

The EU AI Act regulates, among other things, obligations and prohibitions for:
  • Providers of AI systems (Art. 3 Sec. 3)
  • Operators of AI systems (Art. 3 Sec. 4)
Both groups can comprise natural as well as legal persons. The term “operator” is broadly defined and includes anyone who uses AI systems under their own supervision for purposes that are not exclusively private. Furthermore, the EU AI Act distinguishes four risk categories for AI systems:
  • Unacceptable risk (Art. 5)
  • High risk (Art. 6 Sec. 2)
  • Limited risk
  • Minimal risk
The regulatory obligations under the EU AI Act essentially arise from the risk category of the AI system and the role of the respective actor. It is therefore crucial to classify the AI systems used and to determine who assumes which role(s).

Classification in the context of skillconomy

Here is an overview of the AI systems used at skillconomy, their respective risk classification, and the roles according to the EU AI Act for skillconomy and you and your company (customer).
AI System              Risk Category    Role(s) skillconomy    Role(s) Customer
Job posting capture    Minimal          Operator               None
Creation of jobsites   Minimal          Operator               None
Creation of longlist   High             Provider, Operator     None

Risk classification of skillconomy’s AI

The following explanations mainly refer to the AI system for creating longlists, i.e., the selection of candidates to be approached for a position. The EU AI Act explicitly names, among others, “AI systems intended to be used for the recruitment or selection of natural persons, in particular for targeted job advertising, …” (Annex III Section 4) as high-risk AI systems. Our sourcing AI thus falls into this category.

Role(s) of skillconomy and customers

The active sourcing approach of skillconomy aims to take over the entire process: from capturing requirements from job descriptions, through creating media for the candidate experience (messages, jobsites, application chat content), to selecting and contacting suitable candidates. Unlike many other providers, we do not provide tools that require or allow you to independently perform steps in active sourcing. The immediate results of the AI (the list of suitable candidates) are manually reviewed by our staff before any contact is made (“human-in-the-loop”).

Only candidates who actively apply for the position and explicitly instruct skillconomy to forward their data to you will be passed on to you. You therefore do not interact with our AI systems yourself, nor do AI-based processes take place under your direct or indirect supervision. You simply release a position and then receive applications. The use of AI for this purpose is carried out entirely by us and under our supervision.

This results in significant advantages for you as a customer regarding regulatory requirements from the EU AI Act: you are not an actor according to the definitions in Article 3, so the law does not apply to you as a customer of skillconomy. The regulatory obligations, including liability for failures, rest entirely with us, as we alone are the provider and operator of our AI systems. You are only subject to general due diligence obligations when transferring data to us and when handling the applications you receive from us.
As a customer (both as an individual and as a company), you are not an actor within the meaning of the EU AI Act! This means you have no regulatory obligations arising from the EU AI Act.
When selecting any AI systems in recruiting, you should always consider: What is your role under the EU AI Act? If you become an operator through your use, you are subject to extensive obligations such as establishing quality and risk management systems, comprehensive documentation, and technical and organizational measures.

What obligations does skillconomy have?

The EU AI Act imposes special requirements on us as the provider and operator of a high-risk AI system. We would like to provide the greatest possible transparency regarding what these are and how we ensure compliance. The following explanations refer to our AI system for creating longlists. We also take into account the regulatory requirements of the other AI systems we use. However, since these are much less stringent, we focus here on the high-risk AI system. In our role as provider and operator of a high-risk system, we (skillconomy) are subject to the following obligations:
Measure                                                  Status                 Entry into force
Establishment and implementation of quality management                          08/02/2026
Establishment and implementation of risk management                             08/02/2026
Use of high-quality datasets                                                    08/02/2026
Documentation of the AI system and its development                              08/02/2026
Ensuring human oversight                                                        08/02/2026
Ensuring intended use                                                           08/02/2026
Monitoring intended use                                                         08/02/2026
Guaranteeing robustness, accuracy, safety                by 12/31/25            08/02/2026
Monitoring of safety and robustness metrics                                     08/02/2026
Logging of system decisions and incidents                by 12/31/25            08/02/2026
Provision of information to users                                               08/02/2026
Training of employees                                                           02/02/2025
Information of candidates                                                       08/01/2026
Usage stop in case of incidents                                                 08/02/2026
Declaration of conformity                                by 12/31/25
CE marking                                               by 12/31/25
Registration in EU database                              as soon as possible

How does skillconomy fulfill its obligations?

We would like to briefly address each point below and provide transparency on how we specifically fulfill each obligation.

Establishment and implementation of quality management

Our quality management system (QMS) sets guidelines for the development and operation of our AI systems and defines processes and responsibilities for fulfilling and monitoring all regulatory requirements and the continuous improvement of the QMS. In particular, we have defined standards for internal security and compliance audits to be conducted every six months. These include the risk management system described in the following section, as well as a review of documentation practices, a review of management practices and processes, and technical and organizational measures. Responsible for the management and monitoring of QM processes is Lars Branscheid, Managing Director.

Establishment and implementation of risk management

Our risk management system (RMS) ensures that we address risks in accordance with the EU AI Act adequately and proactively. In addition, our RMS also addresses risks arising from other regulatory requirements. These are, in particular, the General Data Protection Regulation (GDPR) and the General Equal Treatment Act (AGG). Our RMS ensures a broad consideration of possible risks. In addition, we have defined three focus areas, each of which is specifically addressed due to its outstanding importance and is intended to identify risks regarding:
  • Protection of personal data in the provision of our services
  • Protection of personal data in the training of our AI models
  • Ensuring non-discrimination in our AI and non-AI processes
Through training of employees involved in AI development and the formalization of best practices, we ensure continuous consideration of possible risks. Furthermore, we consolidate findings in internal risk audits and document the seamless execution of RM processes. A risk audit is mandatory in the following cases:
  • Regularly once per quarter
  • Before commissioning new solutions or extensive features
  • Before implementing countermeasures to risks identified in previous audits
  • Before deploying models trained on modified or extended datasets
  • Before deploying models whose architecture has been modified
  • In case of changes or clarifications of regulatory requirements or their announcement
  • In case of incident reports by supervisors or users

Responsible for the management and monitoring of RM processes is Marc Branscheid, Managing Director, [email protected].

What specific measures does skillconomy take?

Beyond the requirements of the EU AI Act, we see ourselves as pioneers in the development and use of responsible AI systems. You can find out what this means in concrete and technical terms in the AI Ethics section.

Use of high-quality datasets

We make great efforts to ensure that
  • We have a sufficient amount of data available
  • The quality of the data is consistently ensured
We use only pseudonymized data. We analyze the distribution of features in the datasets and compare it with generally accessible statistics or, where these do not exist, with well-founded estimates. This ensures the representativeness of the data. In addition, we conduct expert manual analyses of extensive samples of the data to identify and address possible risks.
We invest heavily in building high-quality datasets for training and using our AI models. Our datasets contain no direct encoding of discriminatory characteristics such as gender, ethnic origin, or age.
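
The distribution check described above can be sketched as follows. This is an illustrative sketch only: the feature names, shares, and tolerance threshold are hypothetical and do not reflect skillconomy's actual datasets or internal tooling.

```python
# Hypothetical sketch: flag features whose share in a pseudonymized
# dataset deviates noticeably from a reference distribution (e.g.
# publicly available statistics). All values are illustrative.

def distribution_gaps(observed, reference, tolerance=0.05):
    """Return features whose observed share deviates from the
    reference share by more than `tolerance` (absolute)."""
    gaps = {}
    for feature, ref_share in reference.items():
        obs_share = observed.get(feature, 0.0)
        if abs(obs_share - ref_share) > tolerance:
            gaps[feature] = (obs_share, ref_share)
    return gaps

# Example: shares of seniority levels in a candidate pool vs. a reference
observed = {"junior": 0.55, "mid": 0.32, "senior": 0.13}
reference = {"junior": 0.40, "mid": 0.35, "senior": 0.25}
print(distribution_gaps(observed, reference))
# flags "junior" (over-represented) and "senior" (under-represented)
```

Features flagged this way would then be candidates for the manual sample analysis mentioned above.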

Documentation of the AI system and its development

We have created technical documentation for our AI models and log important changes. This includes a comprehensible justification for the motivation and assumptions behind a decision, as well as the results of the risk assessment according to our RMS. Documentation is carried out with complete version control.

Ensuring human oversight

The EU AI Act requires the possibility of human oversight and intervention in high-risk AI systems. We ensure this through our administration system (“backend”). Our employees can view and track the results of all AI processes here. Regarding intervention options, we go beyond the requirements of the EU AI Act and have implemented active approval processes at all relevant points. This means: For each position, the requirement profile, candidate experience media (messages, jobsites, chatbot), and the shortlist of candidates are carefully reviewed by an experienced recruiter. Only after active approval (which is also logged) are processes initiated in which errors could have a concrete impact (especially contacting candidates). With this mechanism, we not only ensure the requirements for human intervention but also establish an additional layer of security to minimize the risks of the AI system as much as possible.
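
The approval mechanism described above can be illustrated as a simple gate: downstream actions run only after an explicit, logged human approval. This is a minimal sketch under our own assumptions; the class and field names are hypothetical and do not describe skillconomy's actual backend.

```python
# Hypothetical sketch of a human-in-the-loop approval gate: an action
# (e.g. contacting candidates) only runs after a recruiter's explicit,
# logged approval. Names are illustrative.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalGate:
    item: str                        # e.g. "longlist for position #123"
    approved: bool = False
    log: list = field(default_factory=list)  # (reviewer, timestamp) entries

    def approve(self, reviewer: str) -> None:
        """Record an explicit, timestamped approval by a human reviewer."""
        self.approved = True
        self.log.append((reviewer, datetime.now(timezone.utc).isoformat()))

    def release(self, action) -> bool:
        """Run `action` only if a human has approved; otherwise block it."""
        if not self.approved:
            return False
        action()
        return True

gate = ApprovalGate("longlist for position #123")
gate.release(lambda: print("contacting candidates"))  # blocked, no approval yet
gate.approve("recruiter-1")
gate.release(lambda: print("contacting candidates"))  # runs after approval
```

The key design point is that the approval itself is logged, so every release of a downstream process remains traceable to a named reviewer.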

Ensuring and monitoring intended use

For two reasons, the risk of unintended use of our AI systems is very low:
  • The AI and user interfaces are technically limited to intended use.
  • The AI is used exclusively by trained skillconomy employees.
As part of QM audits, log data is reviewed. Indications of unintended use are a review criterion. In addition, our employees have the opportunity to anonymously report evidence of unintended use.

Guaranteeing and monitoring robustness, accuracy, and safety

This point refers, according to the EU AI Act, in particular to:
  • A definition of what accuracy means for a particular AI system and ensuring this generally, but also under adverse circumstances (e.g., faulty or malicious use).
  • Safety against system failures
  • Resilience against manipulation in use or through influence on training data
Since our AI system is operated exclusively by employees of our company and is not publicly accessible, we have a high degree of control over possible influences on the system. In addition, we ensure safety by implementing best practices throughout the entire value chain and extensive certified hosting within the EU (Amsterdam, Frankfurt). Monitoring of metrics for robustness, accuracy, and safety is still pending. Accuracy metrics on validation datasets are already collected and documented for model training. Since our models are not automatically retrained or tuned in production, we believe this requirement is already largely met.

Logging of system decisions and incidents

Our AI systems automatically generate and store log data, in particular:
  • Time and duration of use of the AI system
  • Input data and results of each use
  • Identification data of each user
  • Performance metrics of each use (execution times)
  • Version number of the AI models used
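
A log record covering the fields listed above might look like the following sketch. The field names and example values are hypothetical, not skillconomy's actual schema.

```python
# Hypothetical sketch of a structured log record for one use of an AI
# system, covering the fields listed above. Names are illustrative.

import json
from dataclasses import dataclass, asdict

@dataclass
class AIUsageLogRecord:
    started_at: str     # time of use (ISO 8601)
    duration_ms: int    # execution time of this use
    user_id: str        # identification data of the user
    input_ref: str      # reference to the input data
    output_ref: str     # reference to the result
    model_version: str  # version number of the AI model used

record = AIUsageLogRecord(
    started_at="2026-08-02T09:00:00Z",
    duration_ms=1250,
    user_id="user-42",
    input_ref="in-001",
    output_ref="out-001",
    model_version="longlist-v3.1",
)
print(json.dumps(asdict(record)))  # serialize for append-only storage
```

Storing such records append-only makes individual system decisions traceable after the fact, which is the purpose of the logging obligation.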

Provision of information and training of employees

We are subject to two requirements regarding the ability to handle our AI systems responsibly: On the one hand, we are obliged to provide information to users for this purpose. In addition, according to Art. 4, we are required to train our employees in relation to AI. In our particular case, employees and users are the same people, so we combine this in practice. Written materials are provided and workshops are held every six months.

Information of candidates selected by the AI system

If candidates are selected by our AI system or with its support, we inform them on the jobsite that this is the case, for what purpose they were selected, and that they have a right to information regarding an explanation of the decision-making process.

Usage stop in case of serious incidents

We have implemented a reporting system for serious incidents for our internal users, which allows anonymous reporting of incidents or suspected cases. Incidents are immediately forwarded to all managing directors so that they can take appropriate measures and, if necessary, initiate a usage stop. In the event of a usage stop, our system allows for the complete substitution of all AI processes with human-led processes, so that our service provision can be maintained—even if with potentially high personnel effort.

Declaration of conformity, CE marking, registration

The declaration of conformity and CE marking will be carried out. Registration in the designated EU database will be completed as soon as it becomes available.

Frequently Asked Questions

Do I, as a customer, have obligations under the EU AI Act?

No. You are not classified as an actor within the meaning of the EU AI Act. The use of the skillconomy service is entirely under our responsibility, so you have no regulatory obligations.

Why am I not considered an actor?

Because you have no influence on the operation of our AI systems, do not operate any tools, and do not interact with the AI yourself. You only receive the result: reviewed applications.

What role does skillconomy have under the EU AI Act?

skillconomy is both provider and operator of a high-risk AI system. We are therefore subject to extensive obligations, e.g., risk management, quality control, transparency, safety, and traceability.

Which AI systems does skillconomy use?

We use several AI systems, e.g., for creating longlists (high risk) or for capturing job postings (minimal risk). Only the longlisting system falls into the high-risk category according to the EU AI Act.

What measures has skillconomy implemented?

We have implemented, among other things, a risk management system, quality management, training, documentation, manual control mechanisms, and comprehensive logging. Compliance is regularly audited.

When does the EU AI Act apply?

The EU AI Act entered into force on August 1, 2024. Full implementation with all obligations will take place by August 2, 2026.

Do I need to take any action?

No, you do not need to take any action. We only recommend handling the applications you receive with care and complying with applicable data protection rules.