The EU AI Act is no longer a distant prospect. Parts of the regulation are already in effect, and the major requirements hit in August 2026.
Yet I find that many Danish organizations are still operating in a kind of wait-and-see mode. "We're monitoring the situation." "We're waiting for the final guidelines." "It probably won't affect us."
That's a risky strategy. Because regulation is only half the story. The other half is about what your competitors are doing while you wait.
Here's what you actually need to know.
What's Already in Effect?
The EU AI Act entered into force on August 1, 2024. But implementation happens in phases. Here's the timeline:
February 2, 2025 (already in effect):
→ Prohibited AI practices are now illegal
→ AI literacy requirements apply to all organizations that use or provide AI systems
August 2, 2025:
→ Rules for General Purpose AI models (like GPT, Claude, Gemini) apply
→ Member states must have designated national supervisory authorities
→ The sanctions framework is established, but broad enforcement escalates from August 2, 2026
August 2, 2026:
→ The majority of rules come into effect
→ Requirements for high-risk AI systems apply broadly (note: high-risk AI integrated into regulated products has a later deadline in 2027)
→ Transparency requirements are activated
→ Each member state must have at least one AI regulatory sandbox
August 2, 2027:
→ Full implementation, including high-risk systems integrated into products
This means: If your organization uses AI today – and it probably does – the AI literacy requirement already applies.
The Four Risk Categories
The EU AI Act operates with a risk-based approach. Four levels:
1. Unacceptable Risk (Prohibited)
AI systems that pose a clear threat to people's safety, rights, or livelihood are completely banned. This includes:
→ Social scoring of citizens
→ Manipulation of vulnerable groups
→ Real-time biometric mass surveillance (with certain exceptions for law enforcement)
→ AI that exploits people's age, disability, or socioeconomic situation
2. High Risk
AI systems used in critical areas such as:
→ Recruitment and HR decisions
→ Credit assessment and financial services
→ Education (access, assessment)
→ Critical infrastructure
→ Health and medical devices
→ Law enforcement
These systems require extensive documentation, risk assessment, human oversight, and conformity assessment.
3. Limited Risk
AI systems with specific transparency requirements. Users must be informed that they are interacting with AI. This applies to chatbots and AI-generated content, for example.
4. Minimal Risk
No specific requirements. This covers the majority of AI systems in use today – spam filters, AI recommendations, simple automations.
AI Literacy: The Most Overlooked Requirement
Article 4 is perhaps the most underestimated part of the regulation.
It requires all organizations that use or provide AI systems to ensure "an adequate level of AI literacy" among their employees and other persons working with AI on the organization's behalf.
What Does This Mean Concretely?
The European Commission has clarified that there is no single correct answer. But your organization must at minimum:
→ Ensure general AI understanding among employees
→ Adapt training to the context in which AI systems are used
→ Consider which persons or groups are affected by the AI systems
→ Document what training has been conducted
No certification is required. But you should be able to document what your organization has done.
And note: The requirement also applies to "other persons" – e.g., contractors and suppliers who operate or use AI systems on the organization's behalf.
The broad enforcement wave comes from August 2, 2026. But several requirements (including AI literacy) already apply from February 2, 2025, and lack of AI literacy may be weighed negatively during supervision.
The Fine Structure
The EU AI Act has teeth. Maximum fines actually exceed those under the GDPR:
Prohibited AI practices: Up to €35 million or 7% of global annual turnover (whichever is higher)
Violation of high-risk system requirements: Up to €15 million or 3% of global annual turnover
Incorrect or misleading information to authorities: Up to €7.5 million or 1% of global annual turnover
SMEs get lower fines – the lower of the two amounts applies.
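The "whichever is higher" mechanics, and the SME exception, can be sketched in a few lines. This is an illustrative calculation only; the function name and signature are my own, and an actual fine is set by the supervisory authority, not a formula:

```python
def fine_ceiling(cap_eur: int, pct: float, turnover: int,
                 is_sme: bool = False) -> float:
    """Illustrative AI Act fine ceiling: the higher of a fixed cap and a
    percentage of global annual turnover; for SMEs, the lower applies."""
    pct_amount = turnover * pct / 100
    return min(cap_eur, pct_amount) if is_sme else max(cap_eur, pct_amount)

# Large company, EUR 2 billion global turnover, prohibited-practice tier:
print(fine_ceiling(35_000_000, 7, 2_000_000_000))  # 140000000.0

# SME, EUR 100 million turnover, same tier: the lower amount applies.
print(fine_ceiling(35_000_000, 7, 100_000_000, is_sme=True))  # 7000000.0
```

The asymmetry is the point: for large firms the turnover percentage dominates, while for SMEs the fixed cap usually does not apply because the percentage figure is lower.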
But fines are only the visible part. The real costs include:
→ Requirements to withdraw non-compliant systems from the market
→ Reputation damage
→ Loss of business partners who require compliance documentation
→ Cascading compliance issues across other regulations (GDPR, product safety, sector-specific requirements)
What Should Danish Organizations Do Now?
Here's my pragmatic prioritization:
1. Map Your AI Landscape
What AI systems do you use? Include everything – from ChatGPT to suppliers' embedded AI. Many organizations drastically underestimate how much AI is already in their processes.
2. Classify by Risk
For each system: Which risk category does it fall into? Pay particular attention to HR processes, customer service, credit assessment, and decision support.
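Steps 1 and 2 can be kept in something as simple as a structured inventory with a coarse first-pass classification. The mapping below is a minimal sketch, not a legal determination: the area names and category rules are my own illustrative assumptions, and borderline systems always need a proper assessment:

```python
from dataclasses import dataclass

# Coarse first-pass mapping from use area to AI Act risk category.
# Illustrative only -- real classification requires legal assessment.
HIGH_RISK_AREAS = {"recruitment", "credit_assessment", "education",
                   "critical_infrastructure", "health", "law_enforcement"}
TRANSPARENCY_AREAS = {"chatbot", "content_generation"}

@dataclass
class AISystem:
    name: str
    use_area: str

def classify(system: AISystem) -> str:
    """Return a provisional risk category for triage purposes."""
    if system.use_area in HIGH_RISK_AREAS:
        return "high"
    if system.use_area in TRANSPARENCY_AREAS:
        return "limited"
    return "minimal"

inventory = [
    AISystem("CV screening tool", "recruitment"),
    AISystem("Support chatbot", "chatbot"),
    AISystem("Spam filter", "email_filtering"),
]
for s in inventory:
    print(f"{s.name}: {classify(s)}")
```

Even a triage this crude tends to surface the systems that matter: anything landing in "high" goes to the front of the queue for documentation and human-oversight work.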
3. Start AI Literacy Now
Don't wait for August 2026. Establish a basic training program covering:
→ What is AI, and how does it work?
→ What AI systems do we use?
→ Opportunities and risks
→ Guidelines for responsible use
4. Establish Governance
Who is responsible for AI in your organization? It doesn't have to be a new role, but responsibility must be clearly assigned.
5. Document
Everything. Decisions, assessments, training, processes. If it's not documented, it doesn't exist for a supervisory authority.
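One hedged way to make step 5 concrete is a timestamped record per decision, assessment, or training activity. The AI Act does not prescribe a format; the field names below are illustrative assumptions about what an auditor would plausibly ask for:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComplianceRecord:
    """Illustrative log entry -- the point is evidencing what was done,
    by whom, and when, not this particular schema."""
    record_type: str   # e.g. "training", "risk_assessment", "decision"
    description: str
    responsible: str
    recorded_on: date = field(default_factory=date.today)

log = [
    ComplianceRecord("training", "Intro AI literacy session, all staff", "HR"),
    ComplianceRecord("risk_assessment",
                     "CV screening tool provisionally classified high-risk",
                     "CTO"),
]
for entry in log:
    print(f"{entry.recorded_on} [{entry.record_type}] {entry.description}")
```

A spreadsheet with the same columns works equally well; what matters is that the record exists before a supervisory authority asks for it.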
It's Not Just About Compliance
Here's my real message:
I've said it before: 95% of GenAI projects fail. Not because of the technology. Because of people, skills, and organization.
The EU AI Act's focus on AI literacy hits exactly that point. The regulation implicitly recognizes that the biggest risk isn't the algorithm – it's organizations implementing AI without understanding what they're doing.
That's my 10/20/70 rule in practice: 10% algorithms, 20% technology, 70% people.
Compliance is baseline. The interesting part is what you build on top of it.
Organizations that take AI literacy seriously – not as a checkbox, but as a strategic investment – will have a competitive advantage. They will be able to implement AI faster, safer, and with better results.
Those who wait and watch? They'll spend the next years catching up.
Resources
→ AI Act Service Desk (European Commission)
Does your organization have AI literacy under control? I facilitate workshops and training programs that prepare teams for both compliance and practical AI implementation. Contact me.