Alconost Announces Public Availability of the MQM Tool API
ALEXANDRIA, Va., February 18, 2026 — Alconost today announced the public release of the MQM Tool API, extending its free web-based MQM Tool with automation capabilities for project orchestration, data import, progress tracking, and export of structured translation quality evaluation results.
The MQM Tool is designed for manual quality evaluation of translations — including human translations, machine translation (MT) outputs, LLM-generated content, and vendor-delivered localization — using the industry-standard MQM (Multidimensional Quality Metrics) framework.
While translation assessment inside the MQM Tool remains fully manual and linguist-driven, the new API enables organizations to automate project setup and results management — making it easier to coordinate distributed reviewers and scale structured translation quality evaluation across teams.
API documentation:
alconost.mt/mqm-tool/api-guide
Automating the Operational Side of Human Evaluation
The MQM Tool API does not automate linguistic judgment. Instead, it automates the administrative and technical processes surrounding evaluation workflows.
With the API, organizations can:
- Create MQM projects programmatically
- Upload large batches of source and target segments
- Manage distributed linguist assignments
- Monitor annotation progress in real time
- Export structured evaluation results automatically
This approach preserves expert-driven quality assessment while eliminating manual project setup and reporting overhead.
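Assuming a simple REST-style interface, the capabilities above could be driven from a thin client like the sketch below. The endpoint paths, field names, and request shapes here are illustrative assumptions, not the documented API; the official guide at alconost.mt/mqm-tool/api-guide defines the real interface.

```python
# Hypothetical sketch of an MQM Tool API client. Paths and payload
# fields are assumptions for illustration, not the documented schema.
from dataclasses import dataclass, field


@dataclass
class MQMRequest:
    """A request an orchestration script would send to the API."""
    method: str
    path: str
    payload: dict = field(default_factory=dict)


class MQMToolClient:
    """Builds requests for the four automated workflow steps."""

    def create_project(self, name: str, source_lang: str,
                       target_lang: str) -> MQMRequest:
        return MQMRequest("POST", "/projects",
                          {"name": name,
                           "source_lang": source_lang,
                           "target_lang": target_lang})

    def upload_segments(self, project_id: str,
                        segments: list[dict]) -> MQMRequest:
        return MQMRequest("POST", f"/projects/{project_id}/segments",
                          {"segments": segments})

    def get_progress(self, project_id: str) -> MQMRequest:
        return MQMRequest("GET", f"/projects/{project_id}/progress")

    def export_results(self, project_id: str) -> MQMRequest:
        return MQMRequest("GET", f"/projects/{project_id}/export")


client = MQMToolClient()
req = client.create_project("Release 3.2 LQA", "en", "de")
```

Separating request construction from transport, as above, keeps orchestration logic testable without network access; a real integration would hand each `MQMRequest` to an HTTP layer with authentication.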
Typical Workflow with the MQM Tool API
1. Automated Project Creation
Localization engineers or QA managers create projects via API, configure language pairs and evaluation settings, and upload translation segments (e.g., TSV or JSON). This step can be triggered from a TMS, MT evaluation pipeline, sampling system, or CI/CD workflow.
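As one concrete sketch of the upload step, a script might convert a two-column TSV of source/target pairs into a segment list for the API. The two-column layout and the segment field names are assumptions for illustration; the supported formats are specified in the API guide.

```python
# Illustrative helper: turn a source/target TSV into a segment list
# suitable for a batch upload call. Field names are hypothetical.
import csv
import io


def tsv_to_segments(tsv_text: str) -> list[dict]:
    """Parse tab-separated source/target rows into segment dicts."""
    reader = csv.reader(io.StringIO(tsv_text), delimiter="\t")
    return [
        {"id": i, "source": source, "target": target}
        for i, (source, target) in enumerate(reader, start=1)
    ]


sample = "Hello world\tHallo Welt\nGood morning\tGuten Morgen"
segments = tsv_to_segments(sample)
# segments[0] == {"id": 1, "source": "Hello world", "target": "Hallo Welt"}
```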
2. Manual Annotation by Linguists
Assigned linguists log into the MQM Tool web interface to review translations and annotate errors according to MQM categories and severity levels. All evaluation remains fully human-performed and structured.
3. Programmatic Progress Monitoring
Project owners retrieve project status and completion metrics via API, enabling efficient coordination of distributed reviewers across vendors or internal teams.
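A coordination script could aggregate such metrics like this. The shape of the status response (per-reviewer `assigned`/`annotated` counts) is a hypothetical assumption, not the documented payload:

```python
# Sketch of progress aggregation across distributed reviewers,
# assuming a hypothetical per-reviewer status payload.
def completion(status: dict) -> tuple[float, list[str]]:
    """Return overall completion fraction and reviewers under 50% done."""
    reviewers = status["reviewers"]
    done = sum(r["annotated"] for r in reviewers.values())
    assigned = sum(r["assigned"] for r in reviewers.values())
    lagging = [name for name, r in reviewers.items()
               if r["annotated"] < 0.5 * r["assigned"]]
    return done / assigned, lagging


status = {
    "reviewers": {
        "linguist_a": {"assigned": 200, "annotated": 180},
        "linguist_b": {"assigned": 200, "annotated": 60},
    },
}
rate, behind = completion(status)  # 240/400 done; linguist_b is behind
```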
4. Automated Export and Integration
Once evaluation is complete, structured annotation data and reports can be exported via API and integrated into dashboards, vendor scorecards, business intelligence tools, or machine translation benchmarking systems.
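Downstream processing of exported annotations might look like the sketch below, which computes a penalty score per 100 words using one common MQM severity weighting (minor = 1, major = 5, critical = 10). Both the weighting choice and the export record shape are assumptions for illustration; the actual export schema is defined in the API guide.

```python
# Example downstream computation on exported MQM annotations:
# a weighted penalty score normalized per 100 words. The severity
# weights below are one common convention, and the annotation record
# shape is hypothetical.
SEVERITY_WEIGHTS = {"neutral": 0, "minor": 1, "major": 5, "critical": 10}


def mqm_penalty_per_100_words(annotations: list[dict],
                              word_count: int) -> float:
    """Sum severity-weighted error penalties, normalized per 100 words."""
    penalty = sum(SEVERITY_WEIGHTS[a["severity"]] for a in annotations)
    return 100 * penalty / word_count


errors = [
    {"category": "Accuracy/Mistranslation", "severity": "major"},
    {"category": "Fluency/Grammar", "severity": "minor"},
]
score = mqm_penalty_per_100_words(errors, word_count=1200)
# 100 * (5 + 1) / 1200 = 0.5
```

A metric in this shape feeds naturally into vendor scorecards or MT benchmarking dashboards, since it is comparable across projects of different sizes.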
Designed for Scalable, Human-in-the-Loop Quality Programs
The MQM Tool API is particularly suited for:
- Enterprise localization teams running ongoing LQA programs
- Language service providers managing vendor quality audits
- AI and MT research teams benchmarking translation models
- Organizations implementing structured quality governance
"Manual MQM evaluation remains the gold standard for measuring translation quality. With the MQM Tool API, we're enabling teams to scale that gold standard efficiently across distributed reviewers and modern localization workflows."
— Alexander Murauski, CEO, Alconost
About the MQM Tool
The MQM Tool is a free web-based platform for structured manual translation quality evaluation based on the MQM framework. It enables systematic error annotation, standardized scoring, and detailed reporting across translation workflows.
Learn more and access the API guide: alconost.mt/mqm-tool/api-guide
About Alconost
Alconost is a language services company specializing in translation, localization, and quality evaluation. With a network of 3,500+ professional linguists covering 100+ language pairs, Alconost serves technology companies, game developers, and enterprises requiring high-quality multilingual content. The company offers on-demand MQM annotation services and custom dataset creation, with methodology aligned to Workshop on Machine Translation (WMT) standards.
MEDIA CONTACT
Alconost MQM Tool Team
mqm-tool@alconost.com
alconost.mt/mqm-tool
