The EU AI Act uses a risk-based approach: obligations depend on what your AI system does and whether you're a provider or deployer in the EU. For many seed-stage startups, the compliance lift is light—often limited to avoiding prohibited uses and meeting a few transparency requirements—unless you're building a high-risk system or a general-purpose/foundation model.
This guide cuts through the enterprise-focused noise to show what applies now, what changes as you scale, and what to deprioritize unless your product crosses into high-risk territory.
Last Updated: December 2025
Table of Contents
- The Reality: Most Startups Aren't Building "High-Risk" AI
- Quick Decision Tree: What Category Are You In?
- The Four Risk Categories Explained
- If You're Using Third-Party AI (OpenAI, Claude, etc.)
- Timeline: When Does This Actually Apply?
- The Seed-Stage Compliance Checklist
- What Investors Are Actually Asking About AI Compliance
- Common Mistakes Seed-Stage Founders Make
- Frequently Asked Questions
- Next Steps: Your Action Plan
The Reality: Most Startups Aren't Building "High-Risk" AI
Here's the truth that enterprise-focused compliance guides won't tell you: approximately 80% of startups fall into the "minimal risk" or "limited risk" categories under the EU AI Act. The Act is risk-based—most obligations concentrate on prohibited and high-risk systems, while certain "limited-risk" use cases mainly trigger transparency duties (like informing users they're interacting with AI).
If you're building a SaaS product with AI features for productivity, analytics, or customer support, you may have a relatively light EU AI Act burden—unless your use case falls into a high-risk area (e.g., hiring, education, credit, essential services) or you provide a general-purpose AI model.
Why This Matters for Your Runway
Enterprise compliance frameworks can easily run into tens or hundreds of thousands of euros to implement fully. For a seed-stage startup with 18 months of runway, that's not just expensive—it's potentially fatal. The good news? You likely don't need most of it.
SME Provisions in the AI Act
According to EU Commission guidance, the AI Act explicitly includes provisions to support SMEs and startups:
- Priority access to AI regulatory sandboxes
- Simplified conformity assessments
- Tailored documentation requirements based on company size
- Reduced conformity assessment fees
The key is understanding where you actually fall in the risk classification—not where fear-driven consultants want to place you.
Quick Decision Tree: What Category Are You In?
Before diving into compliance requirements, answer these questions to determine your risk level. This takes about 2 minutes and could save you months of unnecessary work.
Step 1: Are You Deploying AI That Does Any of These?
🚫 Immediate Red Flags (Article 5 Prohibited Practices)
- Social scoring systems that rank people's trustworthiness
- Real-time biometric identification in public spaces (generally prohibited except narrow exceptions)
- AI that manipulates behavior through subliminal techniques
- AI that exploits vulnerabilities of specific groups (children, disabilities)
If YES to any: Stop here. These uses are banned in the EU as of February 2025. Pivot your use case.
If NO: Continue to Step 2.
Step 2: Does Your AI Make Decisions in These Domains?
High-Risk Categories (Annex III of the AI Act)
- Employment: CV screening, job candidate ranking, performance evaluation
- Credit/Financial Services: Creditworthiness assessment, loan approvals
- Education: Exam grading, student assessment, admissions decisions
- Healthcare: Medical diagnosis assistance, treatment recommendations
- Critical Infrastructure: Energy grid management, water supply systems
- Law Enforcement: Crime prediction, evidence evaluation
- Immigration: Visa applications, asylum claim assessment
If YES to any: You're likely building high-risk AI. You'll need a comprehensive compliance framework from August 2026.
If NO: Continue to Step 3.
Step 3: Does Your AI Interact Directly with Users?
Limited Risk (Article 50 Transparency Obligations)
- Chatbots that users interact with directly
- AI-generated content (images, text, audio, video)
- Emotion recognition systems
- Biometric categorization systems
If YES: You have transparency obligations that should be in place from August 2026. For instance, users must know they're interacting with AI.
If NO: You're in the minimal risk category with no additional AI-Act-specific requirements beyond general law.
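The three-step triage above can be sketched as a tiny classifier. This is purely illustrative—the keyword sets below are examples drawn from this guide, not a legal test, and real classification always needs human (and often legal) judgment.

```python
# Illustrative sketch of the three-step triage above. The category sets are
# examples from this guide, not a legal test -- always confirm with counsel.

PROHIBITED = {"social_scoring", "realtime_public_biometric_id",
              "subliminal_manipulation", "exploiting_vulnerable_groups"}
HIGH_RISK_DOMAINS = {"employment", "credit", "education", "healthcare",
                     "critical_infrastructure", "law_enforcement", "immigration"}
LIMITED_RISK_FEATURES = {"chatbot", "generated_content",
                         "emotion_recognition", "biometric_categorization"}

def classify_use_case(practices, domains, features):
    """Return the EU AI Act risk bucket for one documented use case."""
    if PROHIBITED & set(practices):
        return "prohibited"      # Article 5: banned, pivot required
    if HIGH_RISK_DOMAINS & set(domains):
        return "high-risk"       # Annex III: full compliance by Aug 2026
    if LIMITED_RISK_FEATURES & set(features):
        return "limited-risk"    # Article 50: transparency duties
    return "minimal-risk"        # no AI-Act-specific obligations

# Example: a customer-support chatbot outside any high-risk domain
print(classify_use_case([], ["customer_support"], ["chatbot"]))  # limited-risk
```

Running each of your AI features through a check like this once, and recording the result, is the "classify your use cases" step the rest of this guide builds on.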
The Four Risk Categories Explained
1. Unacceptable Risk (Prohibited)
Completely banned in the EU. No exceptions. If your product does this, you need to pivot.
2. High-Risk
Heavy compliance burden: conformity assessments, technical documentation, human oversight requirements, quality management systems. August 2026 deadline.
3. Limited Risk
Transparency duties only. Users must know they're interacting with AI. Relatively light burden for most startups.
4. Minimal Risk
No AI-Act-specific requirements. You still need to comply with GDPR, consumer protection, and other existing laws—but no new AI Act obligations. (See our GDPR compliance guide for data protection requirements.)
If You're Using Third-Party AI (OpenAI, Claude, etc.)
Here's something most guides miss: if you're integrating third-party AI services like OpenAI's GPT models, Anthropic's Claude, or Google's Gemini, the compliance burden shifts significantly.
The Provider vs. Deployer Distinction
The EU AI Act distinguishes between:
- AI Providers: Companies that develop and train AI models (like OpenAI, Anthropic, Google)
- AI Deployers: Companies that use AI models in their products (probably you)
Key insight: If you're using a third-party AI API, that provider is responsible for most of the heavy compliance work under GPAI (General Purpose AI) regulations that took effect August 2025. You will have lighter deployer/system-level duties depending on your use case.
Be aware: You can become a "provider" (with heavier duties) if you market the system under your own name or materially modify/repurpose it.
What Third-Party Providers Handle
- Maintain technical documentation
- Provide transparency reports
- Document copyright compliance for training data
- Implement safety testing and red-teaming
- Report serious incidents to the AI Office
What You Still Need to Handle
- Appropriate use: Using the AI for intended purposes
- User transparency: Telling users when AI is involved
- Human oversight: Ensuring humans can intervene when needed
- Data protection: GDPR compliance for your users' data
- Monitoring: Watching for unexpected AI behavior in your application
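The "user transparency" duty above is the one most products can implement in an afternoon. Here is a hypothetical sketch of a deployer-side wrapper that labels AI-generated messages before they reach the user; the function and field names are illustrative and would need adapting to your own chat stack.

```python
# Hypothetical sketch of a deployer-side transparency measure: AI-generated
# messages carry a disclosure the first time they appear in a session.
# Names and disclosure wording are illustrative, not prescribed by the Act.

from dataclasses import dataclass

@dataclass
class ChatMessage:
    text: str
    ai_generated: bool

AI_DISCLOSURE = "You are chatting with an AI assistant."

def render_for_user(message: ChatMessage, first_in_session: bool) -> str:
    """Prepend the AI disclosure to the first AI-generated message shown."""
    if message.ai_generated and first_in_session:
        return f"{AI_DISCLOSURE}\n{message.text}"
    return message.text

print(render_for_user(ChatMessage("How can I help?", True), first_in_session=True))
```

The design point is that the disclosure lives in your rendering layer, not in the model prompt—so it cannot be dropped by a model update or a prompt change.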
Timeline: When Does This Actually Apply?
The EU AI Act has a phased implementation. Here's what matters for seed-stage founders in 2025-2026:
What's Already in Effect
February 2, 2025 (already passed)
Prohibited AI practices banned, AI literacy requirements, governance framework establishment
(The governance framework itself—AI Office, national authorities—came fully online with the August 2025 milestone below.)
August 2, 2025 (already passed)
GPAI model rules apply, codes of practice, penalty regime enforceable
What's Coming
August 2, 2026 (your key deadline)
High-risk AI system requirements fully applicable, conformity assessments required, full enforcement begins
August 2, 2027
Extended deadline for high-risk AI in regulated products (medical devices, vehicles, etc.)
The Seed-Stage Compliance Checklist
Here's what to do now vs. what can wait:
Do Now (Before August 2026)
- Classify your AI use cases — Document whether each is prohibited, high-risk, limited risk, or minimal risk
- Avoid prohibited practices — If you're anywhere near these, pivot immediately
- Implement basic transparency — If users interact with AI, tell them
- Document your AI suppliers — Know which GPAI providers you depend on
- Review your data practices — Ensure GDPR compliance (this overlaps heavily with AI Act requirements)
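The "classify and document" items on this checklist boil down to keeping one record per AI use case, ideally in version control. A minimal sketch, assuming nothing more than a JSON-serializable register (all field names here are illustrative, not mandated by the Act):

```python
# Minimal sketch of an AI use-case register: one record per use case, kept in
# version control and updated as suppliers or classifications change.
# Field names are illustrative, not mandated by the Act.

import json

register = [
    {
        "use_case": "customer support chatbot",
        "provider": "third-party GPAI API",   # the supplier you depend on
        "risk_category": "limited-risk",      # your Article 50 classification
        "transparency_measure": "AI disclosure shown at session start",
        "reviewed": "2025-12-01",
    },
]

print(json.dumps(register, indent=2))
```

A one-page register like this doubles as your answer to the investor due-diligence questions in the next section.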
Can Wait Until Growth Stage
- Full conformity assessments (unless you're high-risk)
- Comprehensive technical documentation
- Quality management systems
- Third-party audits
What Investors Are Actually Asking About AI Compliance
In due diligence, expect these questions:
- "Have you classified your AI use cases under the EU AI Act?" — Have a one-page mapping ready
- "Are any of your AI applications high-risk?" — Know the answer definitively
- "What's your compliance timeline?" — Show you understand the August 2026 deadline
- "Who are your AI providers and what are their compliance commitments?" — Document your supply chain
- "What transparency measures do you have in place?" — Describe your user-facing disclosures
Investors aren't expecting full compliance at seed stage. They're checking that you understand the landscape and won't be blindsided.
Common Mistakes Seed-Stage Founders Make
1. Assuming "We're just using OpenAI's API, so we're fine"
You're still a deployer with obligations. Using a compliant provider doesn't eliminate your duties—it just reduces them.
2. Over-engineering compliance too early
Don't hire a €200K/year compliance team at seed stage. Classify your use cases, implement basic transparency, and document as you go.
3. Ignoring it completely
The other extreme is also wrong. August 2026 will arrive fast. Start documentation habits now.
4. Not distinguishing internal vs. external use
Internal AI tools can still be high-risk (especially HR/hiring tools). Don't assume internal = exempt.
5. Relying on generic US-focused AI compliance guides
The EU AI Act has specific requirements. US frameworks (like NIST AI RMF) are useful but not sufficient for EU compliance.
Frequently Asked Questions
Does the EU AI Act apply to startups outside the EU?
Yes. The AI Act can apply to companies outside the EU if they place AI systems on the EU market, put them into service in the EU, or if the output of the AI system is used in the EU—so a US startup with customers/users in Europe may be in scope.
What if I'm just using AI internally, not in products?
Yes, internal use can still trigger obligations—what matters is the use case, not whether it's "internal." Many internal tools are low-risk, but internal uses in high-risk areas—especially employment (hiring, promotion, performance management)—may fall into the high-risk regime.
Can I delay EU AI Act compliance until we raise Series A?
You can prioritize, but don't ignore. Basic classification and transparency measures should be done now. Comprehensive high-risk compliance can wait for growth stage—but build the documentation habit early.
What's the difference between AI provider and deployer under the EU AI Act?
A provider is typically the entity that places an AI system on the market or puts it into service under its name/trademark (even if it uses third-party models). A deployer uses an AI system under its control (e.g., using a system in its operations).
How do EU AI Act regulatory sandboxes help startups?
EU member states must establish regulatory sandboxes that give startups priority access to test AI in controlled environments with regulatory guidance. Check your national AI authority for sandbox applications.
What about UK AI regulation after Brexit?
The UK is pursuing a more principles-based approach (at least for now) compared to the EU's more prescriptive AI Act framework. If you operate in both markets, plan to track both regimes.
Who enforces the EU AI Act?
Each EU member state designates national competent authorities. The EU AI Office coordinates cross-border enforcement. Expect enforcement to start slowly after August 2026, focusing first on clear violations.
What EU AI Act penalties apply to startups?
The AI Act specifies maximum penalties (up to €35M or 7% of turnover for prohibited practices). For startups and SMEs, the lower of the two amounts applies. Regulators must also consider company size when determining actual fines.
How does the EU AI Act interact with sector-specific regulations?
For AI in regulated products (medical devices, vehicles, machinery), sector-specific rules apply with an extended timeline to August 2027. The AI Act adds requirements on top of existing sectoral regulation.
Where can I get official EU AI Act guidance?
Visit the European Commission AI Act pages and consult your national competent authority / market surveillance authority (once designated).
Key Takeaways
- 80% of seed-stage startups fall into minimal or limited risk categories with light compliance requirements
- If you're using third-party AI (OpenAI, Anthropic, Google), they handle most GPAI provider obligations
- August 2026 is your real deadline for most requirements (prohibited practices already apply)
- Document your AI use cases now—classification and mapping are your first and most important step
- Investors are asking about AI compliance in due diligence—be ready with answers
Reviewed by Outlex Legal Team
This guide was reviewed by qualified legal professionals with experience advising European startups on regulatory compliance. Outlex is backed by a major Portuguese law firm with expertise across EU jurisdictions.
Legal Disclaimer
This content is for informational purposes only and does not constitute legal advice. The EU AI Act is complex and still being interpreted—consult qualified legal counsel for your specific situation.
About Outlex
Outlex is the AI-powered legal OS for European startups. Our AI assistant Lexi can help you understand your EU AI Act obligations, draft transparency notices, and maintain compliance documentation—all with human lawyer oversight when you need it.
Ready to get your AI compliance sorted? See our pricing or learn how Outlex works.



