Automated, AI-powered TAUS DQF-MQM LQA report
LQA (linguistic quality assurance) is a great way for translation buyers to evaluate the quality of a vendor's work. It is always desirable, but often difficult to justify due to its complexity: a proofreader must annotate every change made to the translator's work, which is time-consuming and affects the budget.
As LLM technology has advanced, this application experiments with whether a machine can annotate the changes introduced by the proofreader against the widely used TAUS DQF-MQM model.
The prerequisite for using the report is that a project, or at least one file, has already been translated and reviewed. This means a proofreader has gone through each translation and either approved it without changes or edited it into the final version.
When you start the application, it prompts you to select the translator or vendor whose work you are evaluating, then the target language and the proofreader who reviewed the translations. You can optionally limit the scope of the report to a few files or run it for the entire project.
The application then generates a post-edit distance report, which is sent to the LLM together with a prompt instructing it to annotate every change made by the reviewer.
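For illustration, a per-segment post-edit distance is typically a normalized edit distance between the translator's output and the proofreader's final version. The sketch below uses a character-level Levenshtein distance; the metric the app actually uses is not specified here, so treat this as an assumption:

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def post_edit_distance(translated: str, reviewed: str) -> float:
    # 0.0 means the reviewer approved the segment unchanged;
    # 1.0 means it was completely rewritten.
    if not translated and not reviewed:
        return 0.0
    return levenshtein(translated, reviewed) / max(len(translated), len(reviewed))
```

Segments with a non-zero distance are the ones worth sending to the LLM for annotation; unchanged segments need no error category.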
The finished report can then be downloaded as an XLSX spreadsheet for further analysis.
The app comes with 5 USD of free credit so you can start experimenting with the technology quickly. If you run out of free credit, you will be asked to provide your own API key for the OpenAI API.
The advanced settings also let you fine-tune the prompt and change the XLSX report template. You can use this to tailor the annotations to your requirements or to make the XLSX look exactly as you need it for your analysis, for example by introducing penalties for different types of errors.
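A penalty scheme of that kind usually assigns a weight per error severity and aggregates the weights into a quality score. The weights and formula below are illustrative assumptions (a common MQM-style convention), not something built into the app:

```python
# Illustrative severity weights; adjust them to your own quality bar.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def mqm_quality_score(errors: list[tuple[str, str]], word_count: int) -> float:
    """Aggregate annotated errors into a score out of 100.

    errors: (category, severity) pairs taken from the LQA report,
            e.g. ("Accuracy/Mistranslation", "major").
    Uses the common MQM-style formula 100 * (1 - penalty / word_count).
    """
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return 100 * (1 - penalty / word_count)
```

A formula like this could live in the XLSX template itself, so the downloaded report computes the score directly from the LLM's annotations.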
In the advanced settings, you can also select the LLM model used for the evaluation.