Google has outlined a four-part strategy to tackle extremist content on YouTube. The strategy centers on expanding and improving its use of AI and human review systems to find extremist videos among the millions posted to the site.
In an op-ed in the Financial Times, Google explained how it will deal with the issue. The article comes in the wake of efforts by some European countries to implement regulations that would force companies to deal with extremism or face consequences.
The four steps include increasing the use of machine learning tech, bringing in more independent human experts, taking a tougher stance on controversial videos, and expanding its counter-radicalization efforts.
Machine learning remains imperfect, which is why YouTube will also rely on human reviewers and user reports.
"Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern," wrote Kent Walker, Google's general counsel.