Following on from our first article on what the EU AI Act means for businesses using translation services, which explored workflows and responsibilities, we'll now look at buyer considerations and the questions worth asking.
The EU AI Act has made one thing unmistakably clear: if a provider uses AI to produce content, clients – meaning you – have the right to know. But transparency isn’t just a legal requirement – it’s a quality and risk‑management necessity.
If you’re buying translation services today, these are the five key questions that reveal whether a provider truly understands responsible AI use.
Abbreviations and terms explained
MT: machine translation > software systems that translate text automatically from one language into another, with or without human intervention at the initial drafting stage (e.g. neural MT engines).
PE: post-editing > human review and correction of machine‑translated output by a qualified linguist to ensure accuracy, fluency, terminology correctness and suitability for purpose.
AI: artificial intelligence > broadly refers to automated systems that generate or assist with language output, including machine translation engines and large language models used for drafting, rewriting or suggesting content.
ASR: automatic speech recognition > systems that convert spoken language (audio or video) into written text, which is then typically edited and corrected by a human linguist in professional translation or subtitling workflows.
1. What workflow will you use for my project?
Even a small project can use different workflows – MT+PE, AI‑DRAFT+EDIT, ASR+EDIT or purely human.
You don’t need the technical details, but you do need:
- The chosen workflow
- Why it was chosen
- What risks it introduces
- How those risks are mitigated
If a provider can’t explain this clearly, that’s a warning sign.
2. How does human oversight work in your process?
Ask directly:
- Who reviews the AI output?
- How is quality verified?
- What happens if the AI output is wrong?
Effective oversight should be structured, not improvised – for example, a documented QA process with defined review and sign‑off steps.
3. How do you show transparency in documents?
From August 2026, AI use must be disclosed. Look for:
- Workflow labels in proposals
- Notes in SOWs
- Clear disclosure in delivery documentation when required
Transparency should be visible – not hidden in footnotes.
4. How do you protect confidential data when AI tools are used?
Responsible providers will explain:
- Whether data is retained by AI tools
- What access controls exist
- Whether prompts or audio are submitted to cloud systems
- How they prevent unauthorised training or reuse
If confidentiality matters to you, this question is non‑negotiable.
5. Do you retain logs, and why?
Most translation‑related workflows aren’t high‑risk*, so extensive logging isn’t required.
But your provider should still understand:
- When logs might apply
- How long they are stored
- What information they contain
Procurement teams need this for due diligence.
*high risk: AI systems that are used in areas where failures could significantly affect people’s rights, safety or access to essential services (e.g. medical devices, recruitment screening, credit scoring).
Professional translation workflows – including MT, post‑editing, AI‑assisted drafting and ASR – do not normally fall into the high‑risk category. However, they are still subject to obligations around human oversight, transparency (from August 2026), correct use and monitoring, depending on how the AI contributes to the final content.
What “good transparency” looks like
- Clear workflow explanations on quotes
- Straightforward answers about AI involvement
- Documented oversight steps
- A public AI policy statement
- No defensiveness; no vagueness
The right questions shield your business from risk – and they quickly reveal whether a provider is using AI in their workflows confidently, competently and ethically. We explore all this in our next post on what to look out for in a translation provider’s AI policy, along with governance, ethics and maturity considerations.