Comprehensive Guide to Language Quality Assurance (LQA) in Crowdin
Learn about Language Quality Assurance (LQA) in Crowdin, from setting up quality metrics to generating detailed reports for translation projects.
Linguistic Quality Assurance (LQA) in Crowdin
Added on 09/27/2024

Speaker 1: Hello and welcome to this video on Language Quality Assurance, or LQA. LQA is the process of evaluating the linguistic quality of translated content. It is done by following a set of predetermined procedures and guidelines, and the results are summarized in a detailed report covering the evaluated text. LQA is typically the final step in the localization process. Its purpose is to detect and fix any errors or issues that could affect a user's experience with an application, software, game, or website. LQA can also be used to evaluate the quality of a machine translation engine on specific content, or to assess the quality of a translation vendor's work. So now let's jump to the LQA app demonstration. I'm assuming that you already have a Crowdin project set up and that your files have already been translated. To get started, I will install the app from the Crowdin Marketplace. When installing the app, be sure to make it available to all users within your organization. You also have the option to limit the availability of this app to specific Crowdin projects. Now let's move on to the app itself. The first thing you'll want to do is define your quality metrics, also known as an LQA model. After installing the app, you will have several pre-configured LQA models. You can fine-tune an existing model, clone it, or create your own. Let's clone the most popular one, the TAUS DQF-MQM, and fine-tune it for our particular project. I'll check the Use penalties option, and for the purpose of this example, let's say I want to set the penalty multiplier for mistranslations to 2. As you can see, there are many configuration options available here. You can require proofreaders to leave comments when reporting errors, run multiple rounds of arbitration for identified errors, modify the pre-configured error categories or create your own, and so on. For now, let's save this model and move on. Next, I'll need to create an LQA project.
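To make the model configuration above more concrete, here is a minimal sketch of how a cloned LQA model with penalties and per-category multipliers could be represented as data. The class and field names are illustrative assumptions, not Crowdin's actual data model; only the "Use penalties" flag, the mistranslation multiplier of 2, and the MQM-style category names come from the demo.

```python
# Hypothetical representation of a cloned LQA model.
# Structure and field names are assumptions for illustration only.
from dataclasses import dataclass, field

@dataclass
class ErrorCategory:
    name: str
    penalty_multiplier: float = 1.0   # scales this category's penalty points
    require_comment: bool = False     # force proofreaders to leave a comment

@dataclass
class LQAModel:
    name: str
    use_penalties: bool = False
    arbitration_rounds: int = 1       # rounds of arbitration for disputed errors
    categories: list = field(default_factory=list)

# Clone of the TAUS DQF-MQM model, tuned as in the demo:
model = LQAModel(
    name="TAUS DQF-MQM (project copy)",
    use_penalties=True,
    categories=[
        ErrorCategory("Mistranslation", penalty_multiplier=2.0),
        ErrorCategory("Terminology"),
        ErrorCategory("Fluency", require_comment=True),
    ],
)
```

This mirrors the demo's two tweaks: penalties are enabled, and mistranslations count double.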
The app will prompt me to select the Crowdin project for which I want to run an LQA and the type of LQA I will be conducting. Rating LQAs are the simplest option: they do not use an LQA model to evaluate translations. This type of LQA is often used for quick evaluation of machine translation engines. The Model type, on the other hand, is used for advanced linguistic quality assurance. When selecting the Model LQA project type, you have the option to track the time spent by proofreaders during the LQA process and include it in the final report. Now I can invite my linguists to work on linguistic quality assurance. Let me show you what the work will look like for the linguists. Linguists typically review all the segments and correct translations as necessary. After improving a translation, a linguist uses the LQA window to describe the types of problems that were addressed. Crowdin automatically selects the original and corrected translations and displays the differences. A linguist can click on the differences to quickly report one or more issues. That's it: linguists perform regular proofreading and document any issues they encounter along the way. Now let's get to the final step, report generation. Go back to the LQA app in the organization menu and click the Reports tab. To download a report of the LQA results, follow the prompts in the app and click Download XLSX. This report provides detailed information about the quality of your translations. It also includes a log spreadsheet with information about every identified issue and any comments from the proofreader. The TAUS DQF-MQM model allows you to calculate an overall quality score and a final verdict on the quality of your translations. To do this, you need to enter the error allowance per thousand words and the total word count for the project. Since the LQA app does not know the scope of your LQA exercise, these two fields are not filled in automatically.
If you have conducted an LQA for an entire project, you can obtain the total word count from your Crowdin project; alternatively, you can enter the word count for the specific files you were evaluating. Let me put in some sample values here. Alright, it now shows my LQA result. That's it. Now you know what LQA is in Crowdin and its role in the localization process. If you need any assistance, please don't hesitate to contact our 24/7 technical support team. Thanks for watching.
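To show how the error allowance and word count feed into a final score, here is a minimal sketch of an MQM-style pass/fail calculation: weighted penalty points are normalized per thousand words and compared against the allowance. The severity weights, function signature, and exact formula are assumptions for illustration, not Crowdin's actual report logic.

```python
# Illustrative MQM-style quality calculation; the weights and formula
# are assumptions, not Crowdin's actual implementation.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def lqa_verdict(errors, word_count, allowance_per_1000):
    """errors: list of (category, severity, count, category_multiplier).

    Returns (penalty points per 1000 words, passed?).
    """
    penalty = sum(
        count * SEVERITY_WEIGHTS[severity] * multiplier
        for _category, severity, count, multiplier in errors
    )
    # Normalize to penalty points per thousand words of evaluated text.
    per_1000 = penalty / word_count * 1000
    return round(per_1000, 2), per_1000 <= allowance_per_1000

# Example: the demo's mistranslation multiplier of 2 in action.
errors = [
    ("Mistranslation", "major", 2, 2.0),  # 2 * 5 * 2.0 = 20 points
    ("Punctuation", "minor", 3, 1.0),     # 3 * 1 * 1.0 = 3 points
]
score, passed = lqa_verdict(errors, word_count=4000, allowance_per_1000=10)
# 23 points over 4000 words -> 5.75 per 1000, within the allowance of 10.
```

With these sample numbers the text passes; doubling the error counts or halving the allowance would flip the verdict, which is exactly the kind of threshold the report's "error allowance per thousand words" field controls.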
