Key Takeaways
- All EU Member States must have operational AI sandboxes or participate in at least one sandbox by August 2, 2026
- Spain's National AI Sandbox (Royal Decree 817/2023) is already operational
- EUSAiR EU-wide pilots run from October 2025 to April 2026, with applications open for upcoming cohorts
- Participation is free for SMEs and startups under Article 58
- Sandboxes provide regulatory guidance before full market compliance requirements
What Is an AI Regulatory Sandbox?
An AI regulatory sandbox is a controlled environment where startups can develop, test, and validate innovative AI systems under regulatory supervision before facing full market compliance requirements. Think of it as a 'safe space' to work out compliance issues with direct regulator guidance.
Under Articles 57 and 58 of the EU AI Act, sandboxes serve four core purposes:
- Development and testing: Train and validate AI systems in controlled conditions
- Regulatory learning: Understand requirements with direct feedback from authorities
- Risk identification: Work with supervisors to identify and mitigate risks
- Market preparation: Build compliance readiness before full obligations apply
The AI Act Compliance Timeline: Where Sandboxes Fit
Understanding when different AI Act provisions apply helps you see how sandboxes can bridge the gap between innovation and compliance. The Act phases in over several years.
Full Implementation Timeline
| Date | Milestone | Sandbox Relevance |
|---|---|---|
| February 2, 2025 | Prohibited AI practices ban takes effect | Sandboxes can help assess if your AI falls into prohibited categories |
| August 2, 2025 | GPAI rules apply | Sandbox testing for GPAI models underway |
| August 2, 2026 | High-risk AI systems require full compliance; all sandboxes operational | Last chance to use sandbox before full requirements |
| August 2, 2027 | All remaining provisions apply | Post-sandbox, full market compliance required |
Why the Timeline Matters for Startups
If you're building AI systems now, you have a window:
- 2025: Evaluate your AI against prohibited practices and risk classifications
- 2025-2026: Use sandbox programs to test compliance approach
- 2026+: Deploy with confidence (or iterate based on sandbox learnings)
The sandboxes are designed to close the gap between "we're building innovative AI" and "we need to comply with comprehensive regulations we're still learning to interpret."
Current Sandbox Opportunities: Where to Apply
Three pathways currently exist for startup AI sandbox participation: national programs (Spain operational, others developing), EU-wide EUSAiR pilots, and planned multi-country sandboxes.
Spain: National AI Sandbox (Operational)
Spain moved early, establishing its National AI Sandbox under Royal Decree 817/2023 before the AI Act deadline. It's the most mature option currently available.
Key Details
- Launch: December 20, 2024 (first call)
- First cohort selected: April 3, 2025
- Testing period: April 2025 onwards
- Eligibility: Residence in Spain, a permanent establishment there, or a designated Spanish representative
- Cost: Free for participants
- Regulator: Spanish Digital Transformation Agency
How to Apply
- Monitor the Spanish Digital Transformation Agency website for cohort announcements
- Prepare documentation on your AI system, intended use, and risk assessment
- Designate a Spanish representative if not Spain-based
- Submit application during open call periods
EUSAiR: EU-Wide Pilot Program
The European Union Sandboxes for AI Regulation (EUSAiR) project coordinates cross-border sandbox pilots, offering participation regardless of which Member State you're based in.
Key Details
- Pilot Period: October 2025 - April 2026
- Cohort 2: January - March 2026 (applications open)
- Eligibility: AI providers of all sizes, including startups and SMEs
- Application: Submit short proposal on EUSAiR website
- Partners: Multiple national authorities, EDIHs, AI Factories
How to Apply
- Visit the EUSAiR project website (eusair-project.eu)
- Navigate to the "EUSAiR Pilots" page
- Submit a short proposal including description of your AI system, regulatory challenges, and preferred testing environment
- If selected, attend discussion meeting
- Join cohort upon approval
Other National Sandboxes (Developing)
Several countries are developing national sandboxes to meet the August 2026 deadline:
| Country | Status | Authority / Notes |
|---|---|---|
| Italy | Early development | Agenzia per la Cybersicurezza Nazionale involved |
| Lithuania | Announced | Innovation Agency tasked |
| Denmark | Operational | One of the earliest national sandboxes |
| Germany, France, Netherlands | Planning | Announcements expected 2025-2026 |
Recommendation: If Spain or EUSAiR aren't suitable, monitor your national authority's announcements. The EU AI Act Service Desk (ai-act-service-desk.ec.europa.eu) maintains updated national resources.
What You Get from Sandbox Participation
Sandbox participation provides direct regulatory guidance, risk assessment support, legal certainty documentation, and a compliance roadmap—all free for startups and SMEs.
Regulatory Guidance and Feedback
Instead of interpreting the AI Act alone, you work directly with competent authorities who:
- Review your risk classification
- Provide feedback on your conformity assessment approach
- Identify compliance gaps before they become problems
- Answer specific questions about your use case
Risk Identification and Mitigation
Article 57 specifically requires sandbox supervisors to provide guidance on:
- Identifying risks to fundamental rights
- Health and safety risks
- Environmental risks
- Risk mitigation strategies
For high-risk AI systems, this supervised risk assessment is invaluable.
Sandbox Plan and Exit Report
Every sandbox participant operates under a specific "sandbox plan" documenting objectives, testing conditions, expected outcomes, and timelines (implementation may vary by Member State). When you exit, you receive an exit report documenting your participation, findings, and compliance status—evidence of good faith compliance efforts.
Legal Certainty for Participants
Free Access for SMEs
Article 58(6) is explicit: "Access to the AI regulatory sandboxes shall be free of charge for SMEs, including start-ups." Compare this to the cost of private compliance consulting or legal advice.
Eligibility Requirements: Can Your Startup Participate?
Any AI provider—including prospective providers still developing systems—can apply for sandbox participation, with selection based on transparent criteria. There's no minimum company size or revenue requirement.
Who Can Apply
Article 58(2) establishes that sandboxes must be open to:
- AI providers: Companies that develop AI systems for market placement
- Prospective providers: Companies developing AI systems that they intend to place on the market
- SMEs and startups: Explicitly mentioned as target beneficiaries
- Authorized representatives: For non-EU companies with EU market intentions
Selection Criteria
The AI Act requires selection criteria to be "transparent and fair." Common factors include:
- Innovation level: Novel AI applications
- Market readiness: Reasonable path to commercialization
- Risk profile: Systems facing compliance challenges
- Societal benefit: Positive impact potential
- Compliance commitment: Genuine intent
- Resource adequacy: Ability to participate meaningfully
Application Timeline
Article 58(2)(a) requires authorities to inform applicants of their decision within three months. Plan accordingly:
- Applications for 2025 testing: by Q1-Q2 2025
- Applications for 2026 testing: by early 2026
- Allow 3 months for decision plus time for sandbox plan development
How to Prepare Your Sandbox Application
A strong sandbox application demonstrates your AI system, regulatory challenges, compliance approach, and capacity to participate meaningfully.
Step 1: Document Your AI System
Create clear documentation covering:
- System description (what it does, how it works, training data)
- Intended purpose
- Technical architecture
- Risk category assessment
- Current development stage
Step 2: Identify Your Regulatory Challenges
Be specific about:
- Which AI Act provisions are unclear
- Where you're unsure about risk classification
- Technical requirements you're struggling to interpret
- Conformity assessment questions
Step 3: Propose Your Testing Approach
Outline:
- Controlled testing parameters
- Real-world testing needs
- Metrics you'd measure
- Timeline expectations
- Resources you can commit
Step 4: Prepare Supporting Materials
Have ready:
- Company registration documents
- Technical team credentials
- Existing compliance documentation
- Data protection measures (GDPR compliance)
- Funding/runway information
Step 5: Submit and Follow Up
Submit through the designated portal, track application status, respond promptly to clarification requests, and prepare for discussion meeting if selected.
Sandbox Participation: What to Expect
Sandbox participation follows a structured process: agreed plan, supervised development, regular check-ins, and documented exit.
Phase 1: Sandbox Plan Development (2-4 weeks)
After selection, you work with authorities to develop your specific sandbox plan:
- Define testing objectives and scope
- Agree on conditions
- Establish timeline and milestones
- Document expected outcomes
- Set supervision arrangements
Phase 2: Active Testing (3-6 months typical)
During the testing period:
- Develop and iterate your AI system
- Conduct agreed testing
- Receive regular guidance from supervisors
- Document all activities and findings
- Flag any issues or risks identified
Key mindset: Treat authority reviews as collaboration, not audits.
Phase 3: Exit and Reporting (2-4 weeks)
When testing concludes:
- Compile participation records
- Submit findings to authorities
- Receive exit report with compliance status
- Get recommendations for remaining compliance work
- Transition to standard market compliance
Real-World Testing Conditions
Article 58(5) permits testing outside laboratory settings under sandbox supervision, with:
- Informed consent from subjects
- Safeguards against adverse effects
- Reversibility of AI decisions where feasible
- Compliance with existing law (especially GDPR)
Sandboxes vs. Full Compliance: Strategic Considerations
Sandboxes aren't right for every startup—they're most valuable when facing genuine regulatory uncertainty or building high-risk AI.
When Sandboxes Make Sense
- High-risk AI classification: Direct guidance on conformity assessment
- Uncertain risk classification: Authoritative determination from regulators
- Novel AI applications: No precedent to follow
- Cross-border deployment: Multi-jurisdiction coordination
- Limited compliance budget: Free access, expert guidance
When Standard Compliance May Be Better
- Clearly low-risk AI: Requirements are minimal and straightforward
- Established precedent exists: Others have solved same challenge
- Very early stage: Not ready to engage meaningfully
- Tight launch timeline: Sandbox adds time
- Strong in-house expertise: May not need external guidance
The Middle Path: Monitor and Decide
- Now: Monitor sandbox availability in your target markets
- Q2-Q3 2025: Assess your AI against risk categories
- Q3 2025: Decide whether sandbox or direct compliance is better
- Q4 2025 - Q1 2026: Apply if appropriate
- August 2026: Full compliance required regardless
Frequently Asked Questions
Are AI regulatory sandboxes free for startups?
Yes. Article 58(6) explicitly states that access shall be free of charge for SMEs, including startups. Exceptional costs may be recovered in some cases, but the baseline is free participation.
Can non-EU startups participate in EU AI sandboxes?
Yes, if you intend to place AI systems on the EU market. You'll need an authorized representative in the EU and may need to designate representatives in specific Member States.
Do I need a finished AI product to apply?
No. Sandboxes accept "prospective providers"—companies still developing AI systems. You need enough development progress to meaningfully test, but production readiness isn't required.
How long does sandbox participation last?
Typically 3-6 months, depending on the program and your testing needs. The sandbox plan you agree with authorities defines the timeline. Extensions may be possible for complex systems.
Does sandbox participation guarantee compliance?
No. Sandboxes provide guidance and a controlled environment for testing, but you still need to achieve actual compliance. The exit report documents your status and remaining work. However, participation demonstrates good faith compliance efforts.
What happens if my AI fails testing in the sandbox?
Sandbox testing is about learning and improving, not pass/fail certification. If issues emerge, you work with supervisors to address them. Serious risks to fundamental rights would halt participation, but normal compliance gaps are part of the process.
Can I sell my AI product while in the sandbox?
Generally no. Sandboxes are for pre-market testing. However, controlled real-world testing with specific users under sandbox supervision is permitted. Full commercial deployment waits until you've completed the sandbox and achieved standard compliance.
Which sandbox should I apply to: national or EUSAiR?
It depends on your situation and on the applicable national sandbox rules. If you're based in Spain with a Spanish market focus, the Spanish sandbox offers the most mature program. For cross-border needs, or if you're not Spain-based, EUSAiR provides pan-European coordination.
Will sandbox participation delay my market launch?
Possibly. Sandbox participation takes time (3-6 months typically). However, for complex high-risk AI, the alternative—attempting compliance independently—may take longer and carry more risk. Calculate the trade-off based on your specific situation.
What documentation do I receive from sandbox participation?
You receive an exit report documenting your participation, testing conducted, findings, compliance status, and recommendations. This serves as evidence of your compliance efforts and can be valuable in any future regulatory discussions.
Conclusion: Sandboxes as Strategic Compliance Tools
AI regulatory sandboxes represent a genuine opportunity for startups building innovative AI systems. They're not exemptions—they're supervised pathways that provide guidance, reduce uncertainty, and document your compliance efforts.
The strategic value is clearest for startups facing:
- High-risk classifications with complex conformity assessment requirements
- Regulatory uncertainty about how provisions apply to their specific systems
- Limited compliance resources that make expert guidance particularly valuable
- Cross-border intentions requiring multi-jurisdiction coordination
With Spain's sandbox already operational, EUSAiR pilots accepting applications, and all Member States required to have programs by August 2026, the infrastructure is rapidly developing.
The window between now and full compliance (August 2026) is your opportunity to use these resources. Apply early, engage meaningfully, and use sandbox participation to build compliance readiness before requirements become mandatory.
Reviewed by Outlex Legal Team
This content was reviewed by qualified legal professionals with experience advising European startups on compliance, contracts, and corporate matters. Outlex is backed by a major Portuguese law firm with expertise across EU jurisdictions.
Last updated: January 2025
Sources
- EU AI Act, Articles 57 and 58 (Official Journal of the European Union)
- Spanish Royal Decree 817/2023 - National AI Sandbox
- EUSAiR Project (European Union Sandboxes for AI Regulation)
- European Commission AI Act Implementation Timeline
- EU AI Act Service Desk - National Resources
- European Commission Draft Implementing Act on AI Regulatory Sandboxes (2024 Consultation)
Disclaimer: This article provides general information about EU AI Act regulatory sandboxes. It does not constitute legal advice. For advice specific to your situation, consult a qualified legal professional.