WASHINGTON — On Jan. 6, a violent mob of insurrectionists stormed the U.S. Capitol in an attempt to overturn our nation’s 2020 presidential election. The assault, which resulted in the deaths of five people, was fueled by a constant stream of disinformation and hate speech that Donald Trump and other bad actors flooded across social media platforms before, during and after the election. Despite their civic integrity and content moderation policies, platforms were slow or unwilling to take action to limit the spread of content designed to disrupt our democracy.

This failure is inherently tied to platforms’ business models and practices that incentivize the proliferation of harmful speech. Content that generates the most engagement on social media tends to be disinformation, hate speech and conspiracy theories. Platforms have implemented business models designed to maximize user engagement and to prioritize their profits over combating harmful content.

While the First Amendment limits our government from regulating speech, there are legislative and regulatory tools at its disposal that can rein in the social media business practices bad actors exploit to spread and amplify speech that interferes with our democracy.

The core component of every major social media platform’s business model is to collect as much user data as possible, including characteristics such as age, gender, location, income and political views. Platforms then share relevant data points with advertisers for targeted advertising.
It should come as no surprise that disinformation agents exploit social media platforms’ data-collection practices and targeted advertising capabilities to micro-target harmful content.