The EU AI Act in Greece
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. Its first provisions are already in force, and the full compliance framework for high-risk systems takes effect in August 2026. Whether you are building AI solutions or using them, the obligations apply to your business. Our Technology & Digital Governance team helps Greek businesses navigate this new regulatory landscape – from initial assessment through to full compliance and beyond.
EU AI Act Compliance for Greek businesses
The EU AI Act is not approaching – it is here. Prohibited AI practices have been banned since February 2025. The penalty regime is active. And on 2 August 2026, the full compliance framework for high-risk AI systems takes effect, bringing with it the transparency obligations, conformity assessments, and enforcement powers that will reshape how every Greek company develops or uses artificial intelligence.
This is the most significant piece of technology regulation since the GDPR. The difference is that the GDPR gave organisations two years of relative quiet before enforcement began. The AI Act does not offer the same luxury – its provisions are staggered, some are already live, and the August 2026 deadline will arrive faster than most compliance timelines can accommodate.
Our firm advises Greek businesses at every stage of this transition, from first assessment through to full regulatory readiness and ongoing compliance.
THE PROBLEM WE SOLVE
Most Greek companies are not building AI from scratch. They are buying it – integrating third-party tools into HR processes, customer service, logistics, marketing, and financial operations. Under the AI Act, this makes them “deployers,” and deployers carry their own distinct set of legal obligations that many businesses are unaware of.
At the same time, Greece’s growing technology sector – health-tech startups, fintech companies, shipping logistics platforms – includes an increasing number of “providers” who are developing AI solutions and placing them on the EU market, with an entirely separate and more demanding compliance burden.
The challenge for both groups is the same: understanding which of your AI systems fall into which risk category, what obligations attach to each, and how to build compliance into your operations without paralysing your business. That is precisely what we do.
OUR APPROACH
A Three-Phase Methodology
We have designed a structured compliance process that moves from diagnosis to implementation to sustained readiness.
Phase 1 – AI Inventory and Risk Classification
We conduct a comprehensive audit of every AI system your organisation uses or develops. This goes beyond IT – we work with your HR, marketing, operations, procurement, and finance teams to surface AI tools that may have been adopted informally or embedded in third-party software your teams use daily without recognising the AI component. Each system is then classified against the Act’s risk tiers: unacceptable, high, limited, or minimal. We also determine whether your organisation acts as a provider, a deployer, or both for each system, since the obligations differ significantly.
Phase 2 – Gap Analysis and Compliance Roadmap
With your AI inventory mapped, we identify precisely where your current practices fall short of the Act’s requirements. For high-risk systems, this means evaluating your risk management processes, data governance, technical documentation, human oversight mechanisms, and record-keeping. For limited-risk systems, we assess your transparency practices – whether customers and users are properly informed when they interact with AI. We then produce a prioritised compliance roadmap with clear timelines, responsibilities, and cost estimates, structured around the Act’s phased enforcement calendar.
Phase 3 – Implementation and Ongoing Support
We work alongside your internal teams and, where needed, your technology vendors to implement the roadmap. This includes drafting and reviewing AI governance policies, preparing conformity assessment documentation for high-risk systems, negotiating and amending vendor contracts to ensure AI providers are meeting their own obligations under the Act, establishing incident reporting and post-market monitoring procedures, and training your staff on AI literacy obligations – a standalone requirement under Article 4 of the Act that applies to all organisations.
THE GREEK REGULATORY CONTEXT
Greece is not starting from zero on AI governance. Law 4961/2022, which entered into force in January 2023, already introduced requirements for medium and large Greek enterprises, including the obligation to maintain a registry of AI systems in use and to adopt a data ethics policy. These existing Greek requirements run in parallel with the EU AI Act, and compliance with one does not automatically satisfy the other.
We advise clients on the interplay between these two frameworks, as well as on the intersections with the GDPR (particularly around automated decision-making under Article 22), sector-specific regulations in financial services and healthcare, and the emerging obligations under the EU’s broader digital regulatory package – including the Digital Services Act, the Data Act, and the forthcoming Digital Omnibus simplification proposals.
SECTOR-SPECIFIC EXPERIENCE
The AI Act does not affect every industry equally, and generic compliance advice misses the mark. We provide tailored guidance for the sectors where AI risk is concentrated.
In financial services, AI-driven credit scoring, fraud detection, and customer profiling all fall squarely within the high-risk category. We help banks, insurers, and fintech companies align their AI compliance with existing prudential and conduct-of-business obligations.
In healthcare and med-tech, AI used for diagnostics, treatment planning, or medical device functionality triggers some of the Act’s strictest requirements, intersecting with the Medical Devices Regulation and national bioethics guidelines.
In shipping and logistics, the use of AI for route optimisation, predictive maintenance, and supply chain management is widespread in Greece. While much of this falls into lower risk categories, the classification is not always straightforward, and we help clients navigate the boundary cases.
In recruitment and HR, any AI system that filters, ranks, or evaluates job applicants is classified as high-risk. This applies even if the tool was purchased off the shelf from a third-party vendor – the deployer obligations still fall on the company using it.
In technology and startups, Greek companies developing AI solutions for the EU market face the provider’s full compliance burden, including technical documentation, conformity assessments, post-market monitoring, and CE marking for high-risk systems. We help startups build compliance into their development process from the outset, rather than retrofitting it later at far greater cost.
THE GREEK ENFORCEMENT LANDSCAPE
Clients rightly want to know who will enforce the AI Act in Greece. In November 2024, the Ministry of Digital Governance published the list of authorities designated to supervise fundamental rights compliance for high-risk AI systems, including the Hellenic Data Protection Authority, the Greek Ombudsman, the Hellenic Authority for Communication Security and Privacy, and the National Commission for Human Rights. In 2025, the Ministry established a new Special Secretariat for Artificial Intelligence and Data Governance, signalling the government’s commitment to building the institutional infrastructure for enforcement.
The formal designation of Greece’s market surveillance authority and notifying authority – the bodies that will handle day-to-day supervision and conformity assessment – is still being finalised, as is the case in several other EU Member States. We monitor these developments continuously and advise clients on how evolving national enforcement structures affect their compliance obligations.
WHY EARLY COMPLIANCE IS A COMPETITIVE ADVANTAGE
The fines for non-compliance are significant: up to €35 million or 7% of global annual turnover for violations involving prohibited practices, up to €15 million or 3% for other breaches, and up to €7.5 million or 1% for providing incorrect information to authorities. But the real incentive for early action is strategic, not punitive. European consumers, B2B procurement departments, and institutional investors are increasingly scrutinising how companies use AI. Being able to demonstrate that your systems are safe, unbiased, transparent, and legally compliant – before a regulator asks you to prove it – is a meaningful differentiator. Compliance is not just a cost centre; it is a trust signal that opens doors.
GET STARTED
If you are unsure whether your AI systems fall within the scope of the Act, or if you know they do and need a clear path to compliance, contact our Technology & Digital Governance team. We offer an initial assessment to evaluate your AI exposure, classify your risk levels, and outline the steps you need to take before the August 2026 deadline.
Frequently Asked Questions
Does the AI Act apply to my Greek business even if we didn’t build the AI we use?
Yes. This is one of the most common misconceptions. The AI Act distinguishes between “providers” (those who develop or place AI systems on the market) and “deployers” (those who use AI systems in a professional capacity). If your company has purchased an AI-powered recruitment tool, a chatbot, a credit scoring platform, or any other AI software and uses it in its operations, you are a deployer. Deployers have their own set of legal obligations under the Act, including ensuring proper human oversight, using systems in accordance with the provider’s instructions, monitoring operations for risks, and maintaining logs where required. The fact that you did not build the technology does not exempt you from compliance.
What are the risk categories under the AI Act?
The Act classifies AI systems into four tiers based on the risk they pose to health, safety, and fundamental rights. Unacceptable-risk systems are banned outright – these include social scoring, manipulative behavioural techniques, and certain uses of real-time biometric identification. High-risk systems are permitted but subject to strict requirements including risk management, data governance, technical documentation, human oversight, and conformity assessments. This category covers AI used in areas such as recruitment, credit scoring, medical devices, critical infrastructure, law enforcement, and education. Limited-risk systems, such as chatbots and AI that generates or manipulates content, are subject to transparency obligations – users must be informed that they are interacting with AI. Minimal-risk systems, which make up the vast majority of AI applications, remain largely unregulated.
When do the obligations take effect?
The AI Act follows a staggered implementation timeline. Prohibitions on unacceptable-risk AI systems have been in effect since 2 February 2025. Obligations for providers of general-purpose AI models and the requirement for Member States to designate national competent authorities applied from 2 August 2025. The bulk of the Act’s provisions – including the full compliance framework for high-risk AI systems, transparency obligations for limited-risk systems, and the Commission’s enforcement powers – take effect on 2 August 2026. Remaining provisions, including obligations for high-risk AI embedded in regulated products and for general-purpose AI models already on the market before August 2025, apply from 2 August 2027.
My company uses AI to screen CVs. Is that considered high-risk?
Almost certainly yes. AI systems used in recruitment and selection – specifically those that filter, rank, or evaluate candidates – are explicitly listed as high-risk in Annex III of the AI Act. This applies regardless of whether the system was developed in-house or purchased from a third-party vendor. As the deployer, your company must ensure the system complies with the Act’s requirements for human oversight, risk management, data quality, and transparency. Depending on your organisation’s status – for example, if it is a public body or provides certain public or regulated services – you may also need to carry out a fundamental rights impact assessment under Article 27 before putting such a system into use.
We use a chatbot on our website. What do we need to do?
A chatbot falls into the limited-risk category. Your primary obligation is transparency: users must be clearly informed that they are interacting with an AI system rather than a human being, unless this is already obvious from the context. If your chatbot generates or manipulates text that could be mistaken for human-authored content, additional disclosure obligations may apply. While these requirements are less burdensome than those for high-risk systems, they are legally binding from August 2026 and should not be overlooked.
What is the “AI literacy” requirement and does it apply to my company?
Article 4 of the AI Act introduces an obligation that is often overlooked because it applies broadly, not just to high-risk systems. It requires that providers and deployers of AI systems take measures to ensure that their staff and other persons dealing with the operation and use of AI systems on their behalf have a sufficient level of AI literacy. This is not a suggestion – it is a legal requirement that has applied since 2 February 2025. What constitutes “sufficient” literacy depends on the context, including the technical knowledge of the persons involved, their experience, the level of education and training expected in their role, and the specific AI systems being used. In practical terms, most Greek businesses will need to develop or procure some form of AI training programme for relevant employees.
Do I need to worry about AI I use for internal purposes only, such as generating internal reports or summarising documents?
It depends on the nature and purpose of the AI system, not on whether it is used internally or externally. The AI Act’s scope is determined by the risk classification of the system and its intended purpose, not by the audience for its output. An AI system that summarises documents or generates reports for internal use will likely fall into the minimal-risk category and carry few obligations. However, if that same system is used to inform decisions about employees – for example, to assess performance, allocate tasks, or influence termination decisions – it may cross into the high-risk category. The key question is always: what decisions does this system influence, and who is affected by those decisions?
Our company is a startup developing an AI product. When should we start thinking about compliance?
Immediately. One of the most costly mistakes AI startups make is treating compliance as a post-launch problem. The AI Act imposes significant obligations on providers of high-risk AI systems, including preparing technical documentation, implementing a risk management system and quality management system, ensuring data governance, conducting conformity assessments, affixing CE marking, and establishing post-market monitoring. Retrofitting these requirements into a product that has already been designed and built is far more expensive and disruptive than building compliance into the development process from the outset. Beyond cost, early compliance is also a market signal – European B2B customers and investors are increasingly asking AI startups to demonstrate regulatory readiness as a condition of engagement.
Who will enforce the AI Act in Greece?
Enforcement operates at two levels – EU and national. At the EU level, the AI Office (established within the European Commission) oversees enforcement, particularly for general-purpose AI models. At the national level, each Member State must designate at least one market surveillance authority and one notifying authority. Greece has already designated the authorities responsible for supervising fundamental rights compliance in relation to high-risk AI systems, including the Hellenic Data Protection Authority, the Greek Ombudsman, the Hellenic Authority for Communication Security and Privacy, and the National Commission for Human Rights. The Ministry of Digital Governance has also established a Special Secretariat for Artificial Intelligence and Data Governance. The formal designation of Greece’s primary market surveillance authority – the body that will handle broader day-to-day enforcement – is still being finalised. Regardless, the obligations under the Act are directly applicable EU law and must be complied with irrespective of the status of national enforcement structures.
Where do I start?
The first step is understanding what you have. We recommend beginning with a structured AI inventory and risk classification exercise that maps every AI system your organisation uses or develops against the Act’s risk categories. This gives you a clear picture of your exposure and the specific obligations that attach to each system. From there, a gap analysis and compliance roadmap can be developed, prioritised around the enforcement timeline. Contact our Technology & Digital Governance team to schedule an initial assessment.
Contact us today for a free initial discussion