DevTalk is a webinar series for L10n solution-seekers and solution-providers. It is hosted and moderated from a purely user/client perspective, so solutions are presented based on users' interests and practical use cases. You can find more details about the format here.
Participants can submit questions, use cases, or suggestions for the guest to prepare for in advance of the event.
For the “pilot” edition of DevTalk, I have invited Adam Bittlingmayer to talk about estimating post-editing effort, comparing engines, and training or building your own.
ModelFront correlates with human evaluation and beats BLEU on accuracy and convenience – no reference translations required.
Automated evaluation is a game-changer for everyone from linguists to machine translation researchers.
Do you need to translate at quality and scale?
ModelFront catches critical errors in machine translation to let you balance machine scale and human quality.
Hybrid translation can be orders of magnitude faster than traditional post-editing.
Checking final human translations is almost as much work as translation itself.
ModelFront makes it super easy to sort by risk for quick and targeted final validation.
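The idea of sorting by risk can be sketched in a few lines. This is a hypothetical illustration, not ModelFront's actual API: it assumes each translated segment already carries a numeric risk score from some quality-prediction step, and that segments above a chosen threshold are routed to human validation.

```python
# Hypothetical sketch: given segments with quality-risk scores
# (e.g. from an MT quality-prediction step), review only the riskiest.
segments = [
    {"source": "Hello", "target": "Hallo", "risk": 0.05},
    {"source": "Cancel order", "target": "Bestellung stornieren", "risk": 0.10},
    {"source": "Dosage: 5 mg", "target": "Dosierung: 50 mg", "risk": 0.92},
]

# Sort by risk, highest first, so reviewers see likely errors early.
by_risk = sorted(segments, key=lambda s: s["risk"], reverse=True)

# Route only segments above a chosen threshold to human validation;
# the 0.5 cutoff is an arbitrary example value.
needs_review = [s for s in by_risk if s["risk"] > 0.5]
print([s["source"] for s in needs_review])
```

Everything below the threshold ships with machine quality only, which is how targeted validation stays orders of magnitude cheaper than reviewing every segment.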
Better training data is key to better machine translation.