YouTube Tests a New Process to Crowd-Source Feedback on Automated Caption Accuracy


YouTube is experimenting with a new method to improve the accuracy of its automated captions by allowing users to suggest updates and revisions to the captions displayed. Because automated captions can contain errors, viewers would be able to correct mistakes as they watch.

YouTube states that suggested edits won’t be visible to the creator or the larger YouTube audience; they will only be surfaced to the person who submitted them. It’s crowd-sourced editing, somewhat akin to the downvote feature being tested on TikTok and Twitter’s Birdwatch program.

Users provide feedback, which is then fed into the larger feedback and assessment loop for content, in the hope of leveraging the wisdom of the crowd to improve each stage of the process.

Read More: YouTube’s Testing a New Process to Crowd-Source Feedback on Automated Caption Accuracy
