"Experts will design, train, oversee, and evaluate our AI systems, measuring performance and making the most complex, high‑impact decisions. For example, people will continue to play a key role in how we make the highest risk and most critical decisions, such as appeals of account disablement or reports to law enforcement."
"A little more than a year after ditching third-party fact checkers and rolling back much of its proactive content moderation, the company says it will further "transform" its approach by drastically reducing the number of human moderators in favor of AI-based systems. The company says the change will happen "over the next few years," and will allow the company to catch more issues faster than its current approach."
"The company has been testing LLM-based systems for content moderation for a while and says that early tests have had "promising" results. Another advantage is that its AI can handle languages used by "98% of people online," compared with the 80 languages currently supported by its moderation capabilities."
Meta is transforming its content moderation approach by drastically reducing its workforce of human moderators and shifting toward AI-based systems over the next few years. The company claims this transition will let it catch more issues faster and broaden language coverage: its AI can handle languages used by 98% of people online, compared with the 80 languages its moderation capabilities currently support. Humans will remain involved in the highest-risk decisions, such as appeals of account disablement and reports to law enforcement, while experts will primarily design, train, oversee, and evaluate the AI systems. Meta says early tests of LLM-based moderation have shown promising results. However, the shift could alter how users perceive policy enforcement, since many already believe AI systems make excessive mistakes and prevent appeals from reaching human reviewers.
Read at Engadget