- What regulatory obligations arise for you as a customer and user?
- What regulatory obligations do we (skillconomy) have and how do we fulfill them?
The most important thing first: As a customer of skillconomy, you are not an actor within the meaning of the EU AI Act! This means that you have no regulatory obligations arising from the EU AI Act!
General Information
The EU AI Act is the world’s first comprehensive regulation for the governance of artificial intelligence. It follows a risk-based approach and obliges developers as well as operators of AI systems to comply with transparent, fair, and safe frameworks, especially in sensitive application areas such as recruiting.
When does the EU AI Act apply?
Entry into force: August 1, 2024
Although the EU AI Act already entered into force in summer 2024, its implementation will take place over several years. From August 2026, the full scope of the EU AI Act will apply in all EU member states.
Implementation Timeline
| Phase | Date |
|---|---|
| Ban on AI with unacceptable risks | February 2, 2025 |
| Voluntary guidelines for developers | May 2, 2025 |
| Regulations for General-Purpose AI | August 2, 2025 |
| Transparency obligations | August 2, 2026 |
| High-risk requirements mandatory | August 2, 2026 |
Who is subject to the provisions of the EU AI Act?
The EU AI Act regulates, among other things, obligations and prohibitions for:
- Providers of AI systems (Art. 3 Sec. 3)
- Operators of AI systems (Art. 3 Sec. 4)

The scope of these obligations depends on the risk category into which an AI system falls. The EU AI Act distinguishes four risk classes:
- Unacceptable risk (Art. 5)
- High risk (Art. 6 Sec. 2)
- Limited risk
- Minimal risk
Classification in the context of skillconomy
Here is an overview of the AI systems used at skillconomy, their respective risk classification, and the roles according to the EU AI Act for skillconomy and for you and your company (customer).

| AI System | Risk Category | Role(s) skillconomy | Role(s) Customer |
|---|---|---|---|
| Job posting capture | Minimal | Operator | None |
| Creation of jobsites | Minimal | Operator | None |
| Creation of longlist | High | Provider, Operator | None |
Risk classification of skillconomy’s AI
The following explanations refer mainly to the AI system for creating longlists, i.e., the selection of candidates to be approached for a position. The EU AI Act explicitly names, among others, “AI systems intended to be used for the recruitment or selection of natural persons, in particular for targeted job advertising, …” (Annex III Section 4) as high-risk AI systems. Our sourcing AI therefore falls into this category.
Role(s) of skillconomy and customers
skillconomy’s active sourcing approach aims to take over the entire process: capturing requirements from job descriptions, creating media for the candidate experience (messages, jobsites, application chat content), and selecting and contacting suitable candidates. Unlike many other providers, we do not provide tools that require or allow you to independently perform steps in active sourcing. The immediate results of the AI (the list of suitable candidates) are manually reviewed by our staff before any contact is made (“human-in-the-loop”). Only candidates who actively apply for the position and explicitly instruct skillconomy to forward their data to you are passed on to you.

You therefore do not interact with our AI systems yourself, nor do AI-based processes take place under your direct or indirect supervision. You simply release a position and then receive applications. The use of AI for this purpose is carried out entirely by us and under our supervision.

This results in significant advantages for you as a customer regarding the regulatory requirements of the EU AI Act: you are not an actor according to the definitions in Article 3, so the law does not apply to you as a customer of skillconomy. The regulatory obligations, including liability for failures, rest entirely with us, as we alone are the provider and operator of our AI systems. You are only subject to general due diligence obligations when transferring data to us and when handling the applications you receive from us.

As a customer (both as an individual and as a company), you are not an actor within the meaning of the EU AI Act. This means you have no regulatory obligations arising from the EU AI Act.
When selecting any AI systems in recruiting, you should always consider: What is your role under the EU AI Act? If you become an operator through your use, you are subject to extensive obligations such as establishing quality and risk management systems, comprehensive documentation, and technical and organizational measures.
What obligations does skillconomy have?
The EU AI Act imposes special requirements on us as the provider and operator of a high-risk AI system. We would like to provide the greatest possible transparency regarding what these are and how we ensure compliance. The following explanations refer to our AI system for creating longlists. We also take the regulatory requirements for our other AI systems into account; however, since these are much less stringent, we focus here on the high-risk AI system. In our role as provider and operator of a high-risk system, we (skillconomy) are subject to the following obligations:

| Measure | Status | Applies from |
|---|---|---|
| Establishment and implementation of quality management | ✅ | 08/02/2026 |
| Establishment and implementation of risk management | ✅ | 08/02/2026 |
| Use of high-quality datasets | ✅ | 08/02/2026 |
| Documentation of the AI system and its development | ✅ | 08/02/2026 |
| Ensuring human oversight | ✅ | 08/02/2026 |
| Ensuring intended use | ✅ | 08/02/2026 |
| Monitoring intended use | ✅ | 08/02/2026 |
| Guaranteeing robustness, accuracy, safety | by 12/31/25 | 08/02/2026 |
| Monitoring of safety and robustness metrics | ✅ | 08/02/2026 |
| Logging of system decisions and incidents | by 12/31/25 | 08/02/2026 |
| Provision of information to users | ✅ | 08/02/2026 |
| Training of employees | ✅ | 02/02/2025 |
| Information of candidates | ✅ | 08/01/2026 |
| Usage stop in case of incidents | ✅ | 08/02/2026 |
| Declaration of conformity | by 12/31/25 | — |
| CE marking | by 12/31/25 | — |
| Registration in EU database | as soon as possible | — |
How does skillconomy fulfill its obligations?
We would like to briefly address each point below and provide transparency on how we specifically fulfill each obligation.
Establishment and implementation of quality management
Our quality management system (QMS) sets guidelines for the development and operation of our AI systems and defines processes and responsibilities for fulfilling and monitoring all regulatory requirements and for the continuous improvement of the QMS. In particular, we have defined standards for internal security and compliance audits, which are conducted every six months. These cover the risk management system described in the following section as well as a review of documentation practices, management practices and processes, and technical and organizational measures. Lars Branscheid, Managing Director, is responsible for managing and monitoring the QM processes.
Establishment and implementation of risk management
Our risk management system (RMS) ensures that we address risks under the EU AI Act adequately and proactively. In addition, our RMS also covers risks arising from other regulatory requirements, in particular the General Data Protection Regulation (GDPR) and the General Equal Treatment Act (AGG). Our RMS ensures a broad consideration of possible risks. Beyond that, we have defined three focus areas that we address specifically because of their outstanding importance; they are intended to identify risks regarding:
- Protection of personal data in the provision of our services
- Protection of personal data in the training of our AI models
- Ensuring non-discrimination in our AI and non-AI processes
Risk assessments according to our RMS are carried out on the following occasions:
- Regularly, once per quarter
- Before commissioning new solutions or extensive features
- Before implementing countermeasures to risks identified in previous audits
- Before deploying models trained on modified or extended datasets
- Before deploying models whose architecture has been modified
- In case of changes or clarifications of regulatory requirements or their announcement
- In case of incident reports by supervisors or users
What specific measures does skillconomy take?
Beyond the requirements of the EU AI Act, we see ourselves as pioneers in the development and use of responsible AI systems. You can find out what this means in concrete and technical terms in the AI Ethics section.
Use of high-quality datasets
We make great efforts to ensure that:
- We have a sufficient amount of data available
- The quality of the data is consistently ensured
We invest heavily in building high-quality datasets for training and using our AI models. Our datasets contain no direct encoding of discriminatory characteristics such as gender, ethnic origin, or age.
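Purely as an illustration of what this can look like in practice, the following minimal Python sketch removes protected attributes from training records before they are used. The field names and helper functions (`prepare_training_record`, `assert_no_protected_attributes`) are assumptions for this example, not skillconomy’s actual pipeline.

```python
# Illustrative sketch only: field names and helpers are hypothetical,
# not skillconomy's actual data pipeline.

# Attributes that must not be encoded in the training data.
PROTECTED_ATTRIBUTES = {"gender", "date_of_birth", "nationality", "ethnic_origin"}

def prepare_training_record(raw_record: dict) -> dict:
    """Return a copy of the record with protected attributes removed."""
    return {key: value for key, value in raw_record.items()
            if key not in PROTECTED_ATTRIBUTES}

def assert_no_protected_attributes(records: list) -> None:
    """Fail fast if any protected attribute has slipped into the dataset."""
    for index, record in enumerate(records):
        leaked = PROTECTED_ATTRIBUTES & record.keys()
        if leaked:
            raise ValueError(f"Record {index} contains protected attributes: {leaked}")

if __name__ == "__main__":
    raw = [{"skills": ["Python", "SQL"], "experience_years": 5, "gender": "f"}]
    cleaned = [prepare_training_record(r) for r in raw]
    assert_no_protected_attributes(cleaned)
    print(cleaned)  # -> [{'skills': ['Python', 'SQL'], 'experience_years': 5}]
```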
Documentation of the AI system and its development
We have created technical documentation for our AI models and log important changes. This includes a comprehensible justification of the motivation and assumptions behind a decision, as well as the results of the risk assessment according to our RMS. Documentation is maintained under complete version control.
Ensuring human oversight
The EU AI Act requires the possibility of human oversight of and intervention in high-risk AI systems. We ensure this through our administration system (“backend”), in which our employees can view and track the results of all AI processes. Regarding intervention options, we go beyond the requirements of the EU AI Act and have implemented active approval processes at all relevant points. This means: for each position, the requirement profile, the candidate experience media (messages, jobsites, chatbot), and the shortlist of candidates are carefully reviewed by an experienced recruiter. Only after active approval (which is also logged) are processes initiated in which errors could have a concrete impact (especially contacting candidates). With this mechanism, we not only fulfill the requirements for human intervention but also establish an additional layer of security that minimizes the risks of the AI system as far as possible.
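To make the approval mechanism more tangible, here is a simplified Python sketch of a human-in-the-loop gate in front of candidate contact. All names (`ApprovalGate`, `contact_candidates`) are hypothetical and only illustrate the principle, not our actual backend.

```python
# Simplified, hypothetical sketch of a human-in-the-loop approval gate.
# Names and structure are assumptions, not skillconomy's actual backend.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ApprovalGate:
    """Blocks downstream actions until a recruiter has actively approved them."""
    approvals: list = field(default_factory=list)

    def approve(self, position_id: str, artifact: str, recruiter: str) -> None:
        # Every approval is logged with recruiter and timestamp.
        self.approvals.append({
            "position_id": position_id,
            "artifact": artifact,  # e.g. "requirement_profile", "longlist"
            "recruiter": recruiter,
            "approved_at": datetime.now(timezone.utc).isoformat(),
        })

    def is_approved(self, position_id: str, artifact: str) -> bool:
        return any(entry["position_id"] == position_id and entry["artifact"] == artifact
                   for entry in self.approvals)

def contact_candidates(gate: ApprovalGate, position_id: str, candidates: list) -> None:
    """Contacting candidates is only possible after an explicit, logged approval."""
    if not gate.is_approved(position_id, "longlist"):
        raise PermissionError(f"Longlist for {position_id} has not been approved yet.")
    for candidate in candidates:
        print(f"Contacting {candidate} for position {position_id}")
```

The key point of such a design is that the action with real-world impact (contacting candidates) is technically unreachable without a prior, logged human decision.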
Ensuring and monitoring intended use
For two reasons, the risk of unintended use of our AI systems is very low:
- The AI and user interfaces are technically limited to the intended use.
- The AI is used exclusively by trained skillconomy employees.
Guaranteeing and monitoring robustness, accuracy, and safety
According to the EU AI Act, this point refers in particular to the following aspects (a simplified, illustrative sketch of metric monitoring follows the list):
- A definition of what accuracy means for a particular AI system, and ensuring this accuracy both in general and under adverse circumstances (e.g., faulty or malicious use)
- Safety against system failures
- Resilience against manipulation in use or through influence on training data
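As a simplified illustration of what monitoring such a metric can look like, the sketch below checks a hypothetical acceptance-rate metric against a threshold. The metric, the threshold, and the function names are assumptions for this example, not our actual implementation.

```python
# Simplified illustration of monitoring an accuracy-related metric against a threshold.
# Metric, threshold, and names are assumptions, not skillconomy's actual implementation.

ACCURACY_THRESHOLD = 0.85  # hypothetical minimum share of AI suggestions confirmed by recruiters

def suggestion_acceptance_rate(reviewed_suggestions: list) -> float:
    """Share of AI-suggested candidates that a recruiter confirmed as suitable."""
    if not reviewed_suggestions:
        return 0.0
    confirmed = sum(1 for item in reviewed_suggestions if item["recruiter_confirmed"])
    return confirmed / len(reviewed_suggestions)

def check_accuracy(reviewed_suggestions: list) -> None:
    """Raise an alert if the metric falls below the defined threshold."""
    rate = suggestion_acceptance_rate(reviewed_suggestions)
    if rate < ACCURACY_THRESHOLD:
        # In a real system this would trigger the incident and risk-management process.
        raise RuntimeError(f"Acceptance rate {rate:.2f} is below the threshold of {ACCURACY_THRESHOLD}")
```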
Logging of system decisions and incidents
Our AI systems automatically generate and store log data, in particular the following (an illustrative sketch of such a log record follows the list):
- Time and duration of use of the AI system
- Input data and results of each use
- Identification data of each user
- Performance metrics of each use (execution times)
- Version number of the AI models used
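For illustration, a log entry covering these points could look roughly like the following sketch; the field names are assumptions, not our actual log format.

```python
# Illustrative sketch of a log record covering the points above.
# Field names are assumptions, not skillconomy's actual log format.
from dataclasses import dataclass, asdict
from datetime import datetime
import json

@dataclass
class AIUsageLogRecord:
    started_at: datetime      # time of use of the AI system
    duration_ms: int          # duration / execution time of the run
    user_id: str              # identification data of the user
    model_version: str        # version number of the AI model used
    input_reference: str      # reference to the stored input data
    result_reference: str     # reference to the stored results

    def to_json(self) -> str:
        record = asdict(self)
        record["started_at"] = self.started_at.isoformat()
        return json.dumps(record)
```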
Provision of information and training of employees
We are subject to two requirements regarding the ability to handle our AI systems responsibly: on the one hand, we are obliged to provide users with the information needed for this purpose; on the other hand, Art. 4 requires us to train our employees in relation to AI. In our particular case, employees and users are the same people, so we combine both in practice: written materials are provided and workshops are held every six months.
Information of candidates selected by the AI system
If candidates are selected by our AI system or with its support, we inform them on the jobsite that this is the case, for what purpose they were selected, and that they have a right to an explanation of the decision-making process.
Usage stop in case of serious incidents
We have implemented a reporting system for serious incidents for our internal users, which allows anonymous reporting of incidents or suspected cases. Incidents are immediately forwarded to all managing directors so that they can take appropriate measures and, if necessary, initiate a usage stop. In the event of a usage stop, our system allows all AI processes to be completely substituted by human-led processes, so that our service provision can be maintained, albeit with potentially high personnel effort.
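Conceptually, such a usage stop can be pictured as a simple switch in front of every AI process that routes to a human-led substitute when AI use is halted. The following sketch is purely illustrative; the names (`UsageStop`, `create_longlist`) are assumptions, not our actual system.

```python
# Hypothetical sketch of a usage-stop switch in front of an AI process.
# Names are illustrative; this is not skillconomy's actual implementation.

class UsageStop:
    """Global switch that managing directors can flip after a serious incident."""

    def __init__(self) -> None:
        self.ai_enabled = True

    def stop(self, reason: str) -> None:
        self.ai_enabled = False
        print(f"AI usage stopped: {reason}")

def ai_generated_longlist(position_id: str) -> list:
    """Placeholder for the normal AI-supported longlist creation."""
    return [f"ai-candidate-for-{position_id}"]

def manually_curated_longlist(position_id: str) -> list:
    """Placeholder for the human-led substitute process."""
    return [f"manually-selected-candidate-for-{position_id}"]

def create_longlist(position_id: str, usage_stop: UsageStop) -> list:
    # Every AI process checks the switch and falls back to the human-led process.
    if usage_stop.ai_enabled:
        return ai_generated_longlist(position_id)
    return manually_curated_longlist(position_id)
```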
Declaration of conformity, CE marking, registration
The declaration of conformity and CE marking will be carried out. Registration in the designated EU database will be completed as soon as it becomes available.
Frequently Asked Questions
Am I affected by the EU AI Act as a customer of skillconomy?
No. You are not classified as an actor within the meaning of the EU AI Act.
The use of the skillconomy service is entirely under our responsibility. You have no regulatory obligations.
Why am I not considered an 'operator' as a customer?
Because you have no influence on the operation of our AI systems, do not operate any tools, and do not interact with the AI yourself. You only receive the result: reviewed applications.
What responsibility does skillconomy have in the context of the EU AI Act?
skillconomy is both provider and operator of a high-risk AI system.
We are therefore subject to extensive obligations—e.g., risk management, quality control, transparency, safety, and traceability.
Which AI systems does skillconomy use and how are they classified?
We use several AI systems, e.g., for creating longlists (high risk) or for capturing job postings (minimal risk).
Only the longlisting system falls into the high-risk category according to the EU AI Act.
What does skillconomy do to meet the requirements?
We have implemented, among other things, a risk management system, quality management, training, documentation,
manual control mechanisms, and comprehensive logging.
Compliance is regularly audited.
When does the EU AI Act fully apply?
The EU AI Act came into force on August 1, 2024.
Full implementation with all obligations will take place by August 2, 2026.
Do I need to prepare or consider anything as a customer?
No, you do not need to take any action.
We only recommend handling the applications you receive with care and complying with applicable data protection rules.