The harmful, upsetting effects of non-inclusive content and hate speech online are a growing problem for everyone. Global brands continue to seek reliable solutions to identify and monitor content that is offensive or that alienates particular racial, social, and sexual groups.
With some 500 million tweets and Facebook stories shared every day, the sheer volume of online, offline, and in-app mobile content in multiple languages leaves brands struggling to monitor, understand, and act on harmful content of every origin.
Artificial Intelligence (AI) is increasingly used to manage large datasets of multilingual content, both user-generated content (UGC) and company-authored, branded content. But AI systems are far from perfect. For a start, many algorithms fail to flag hateful or offensive speech while accurately recognizing its context. There is also the challenge of understanding multilingual and local cultural elements to determine whether content is offensive or harmful in a given locale and language. Finally, a synonym or paraphrase can disguise an offensive expression, which calls for a different set of algorithms that can identify semantic similarities.
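To make the semantic-similarity idea concrete, here is a minimal, hypothetical sketch of the scoring step such a system might use. Production systems typically compare learned sentence embeddings; this toy version uses simple bag-of-words vectors (which only capture word overlap, not true paraphrases) purely to illustrate how a similarity score between a new post and a known offensive phrase could be computed.

```python
from collections import Counter
from math import sqrt

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings.

    A real moderation pipeline would replace these word-count vectors
    with learned sentence embeddings to catch paraphrases.
    """
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)  # shared-word overlap
    norm = sqrt(sum(c * c for c in va.values())) * sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

# Flag a post if it scores above a (hypothetical) threshold
# against any phrase on a known-offensive list.
def is_similar_to_blocklist(post: str, blocklist: list[str], threshold: float = 0.5) -> bool:
    return any(cosine_similarity(post, phrase) >= threshold for phrase in blocklist)
```

The threshold and blocklist here are illustrative assumptions, not part of any specific product described in the webinar.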
Join us to hear from our panel of industry experts, led by the Welocalize AI, Quality, and Diversity teams, who will share insights on how AI can become a more reliable solution for brands to identify and remove such content, protecting both their brand and their global audiences.
The webinar will run approximately 45 minutes, plus 15 minutes of Q&A. Can't make the live event? Don't worry: anyone who registers will automatically receive a link to the recording.