The EU AI Act is reshaping how organisations use and buy AI‑enabled services. Translation is a perfect example: even when you’re not running AI systems yourself, your supplier might be. That makes it essential to understand how the Act categorises different workflows and where responsibilities sit.
This guide breaks down what the Act means in practice for translation buyers, in plain language, without the legal complexity.
It’s the first in a series of four blogs, which will cover the following aspects in turn: the workflow model and its regulatory implications; buyer considerations and key questions to ask; governance, ethics and maturity; and risk tiers.
Translation workflows under the EU AI Act
Most translation providers now use a mix of human and AI‑supported steps. The Act doesn’t regulate “translation” as such; it regulates systems and how they’re used. That means different workflows fall under different risk levels.
Here are the four core workflows you’ll encounter:
HUM – Human‑Only
No machine translation, no LLMs (large language models), no ASR (automatic speech recognition).
Regulatory status: Outside AI Act scope.
MT+PE – Machine Translation + Human Post‑Editing
An MT (machine translation) engine creates a draft; a linguist corrects it.
Regulatory relevance: Requires human oversight and monitoring for errors. Input data must be lawfully accessible.
AI‑DRAFT+EDIT – LLM‑Assisted Drafting + Human Editing
An LLM proposes phrasings; a linguist rewrites and approves.
Regulatory relevance: Transparency is required when AI contributes meaningfully to content. Human reviewers must ensure the model is used correctly.
ASR+EDIT – Automated Speech Recognition + Human Correction
Audio is transcribed by an ASR system, then corrected by a human.
Regulatory relevance: ASR output must undergo full human correction; providers must monitor the model for errors or bias.
These workflow labels will appear increasingly often in proposals and SOWs (statements of work), because transparency obligations become mandatory from 2 August 2026.
What the Act expects from providers
Human oversight
Every AI‑generated draft must be reviewed by a qualified human, and that review must be meaningful, not a skim.
Correct use and monitoring
Providers must follow model instructions, check for problems and suspend tools if risks appear.
Transparency
Clients must be informed when AI has been used in a way that contributes to the final output.
Lawful data handling
Providers must ensure AI tools only process data they’re legally entitled to use.
What this means for you as a buyer
- You should always know which workflow is being used.
- You can expect clear disclosure and rationale behind workflow selection.
- You get stronger quality assurance through mandatory human review.
- You get more predictable and auditable translation processes.
A practical checklist
To help you establish whether AI is being used, and how, we’ve put together this handy checklist you can run through with your translation services provider to ensure transparency:
- Which workflow(s) will be used for this project?
- What human review steps are included?
- How does the provider monitor model behaviour?
- How is confidentiality protected when AI tools are involved?
- How will AI use be disclosed in deliverables?
Final takeaway
The EU AI Act doesn’t complicate translation; it clarifies it. With the right provider, the result is better quality, clearer expectations and safer use of AI‑assisted workflows. We’ve set out our position in our AI Policy statement, with the full AI Policy provided when you enter into a contract with us.
In the next article in the series, we look at EU AI Act transparency in translation and six key questions you should be asking your provider.