As we approach the 2020 election in the United States, content moderation on social media platforms is taking center stage. From speech issues on Facebook and Twitter to YouTube videos and TikTok brigades, the current election season is being reshaped by curation concerns: what’s allowed online, what’s not, what gets upranked or downranked, and who decides.
Trust in content is another major challenge as conspiracies, misinformation, and disinformation go viral. With billions of pieces of content posted every day, what balance should be struck between automated and human moderation? Are AI and machine learning to blame when companies miss content they promised to remove, or should we look to human content moderators and the executives in their boardrooms?