Section 230 of the Communications Decency Act is a shield that protects online platforms from liability for damage caused by dark free speech (DFS) that users post on them. The algorithms that Facebook, Twitter and the like use tend to amplify DFS by ranking it prominently in feeds and search results. Lies, hate, outrage, crackpottery and the like pop up and then spread much farther and faster than truth, calm and reason.
This reflects how the human mind generally works. Humans are built to feel and react or decide quickly, not to think slowly before reacting or deciding. Feeling and deciding fast is usually easier, and a lot more fun, than doing it slowly.
The New York Times reports that Congress is wrestling with how to deal with the problem of algorithms spreading DFS and the damage they cause. The NYT writes:
Lawmakers have spent years investigating how hate speech, misinformation and bullying on social media sites can lead to real-world harm. Increasingly, they have pointed a finger at the algorithms powering sites like Facebook and Twitter, the software that decides what content users will see and when they see it.
Some lawmakers from both parties argue that when social media sites boost the performance of hateful or violent posts, the sites become accomplices. And they have proposed bills to strip the companies of a legal shield that allows them to fend off lawsuits over most content posted by their users, in cases when the platform amplified a harmful post’s reach.
The House Energy and Commerce Committee discussed several of the proposals at a hearing on Wednesday. The hearing also included testimony from Frances Haugen, the former Facebook employee who recently leaked a trove of revealing internal documents from the company.
Removing the legal shield, known as Section 230, would mean a sea change for the internet, because it has long enabled the vast scale of social media websites. Ms. Haugen has said she supports changing Section 230, which is a part of the Communications Decency Act, so that it no longer covers certain decisions made by algorithms at tech platforms. But what, exactly, counts as algorithmic amplification? And what, exactly, is the definition of harmful? The proposals offer far different answers to these crucial questions. And how they answer them may determine whether the courts find the bills constitutional.
The congressional attempt to rein in DFS is so complex that it may not be possible. Some proposed laws define the behavior they want to regulate in general terms. One proposal exposes a platform to lawsuits if it “promotes” algorithmic spread of public health misinformation. Social media platforms would be safe if their algorithms promote content in a “neutral” way, for example, by ranking posts in chronological order.
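To make that distinction concrete, here is a minimal sketch of the two ranking approaches. The post fields and the scoring weights are hypothetical; real platform rankers are vastly more complex, but the basic contrast the bills draw looks roughly like this:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    text: str
    posted_at: datetime
    likes: int
    shares: int

def chronological_feed(posts: list[Post]) -> list[Post]:
    # "Neutral" delivery under some proposals: newest first,
    # with no judgment about which content performs well.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def engagement_feed(posts: list[Post]) -> list[Post]:
    # Engagement-weighted delivery: posts that provoke reactions
    # rise to the top. This is the amplification the bills would
    # expose to lawsuits. The 2x weight on shares is an invented,
    # purely illustrative choice.
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)
```

Under the chronological feed, an outrage-bait post sinks as newer posts arrive; under the engagement feed, it climbs as long as people keep reacting to it.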
Other proposed legislation tries to be more specific. One proposal defines dangerous amplification as doing anything to “rank, order, promote, recommend, amplify or similarly alter the delivery or display of information.” Think about how that might be implemented and enforced.
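The trouble is that almost any code path on a platform matches one of those verbs. Two deliberately benign, hypothetical examples:

```python
def remove_spam(posts: list[str]) -> list[str]:
    # Even this simple spam filter "alters the delivery or display
    # of information," so it arguably matches the proposed definition.
    return [p for p in posts if "buy now" not in p.lower()]

def dedupe(posts: list[str]) -> list[str]:
    # So does dropping duplicate posts: what is displayed, and in
    # what order, changes as a result.
    seen: set[str] = set()
    unique = []
    for p in posts:
        if p not in seen:
            seen.add(p)
            unique.append(p)
    return unique
```

A definition that sweeps in spam filtering and deduplication alongside engagement ranking would leave courts, not engineers, deciding which alterations count.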
The NYT points out that companies already use people's personal information to target DFS content at those inclined to receive it, e.g., conspiracy theory believers who want crackpottery and lies from QAnon. Contemplated exemptions from liability for DFS-caused damage include (i) sites with five million or fewer monthly users, and (ii) posts a user finds through a search, even if the algorithm ranks bad content above more honest content. The concern with exemptions like these is negative unintended consequences.
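In the simplest terms, that targeting works by matching inferred user interests against post topics. The sketch below is purely hypothetical; real systems use learned models rather than simple tag overlap, but the effect is the same, so a user profiled as receptive to conspiracy content is shown more of it:

```python
def rank_for_user(posts: list[tuple[str, set[str]]],
                  user_interests: set[str]) -> list[tuple[str, set[str]]]:
    # posts: (text, topic-tags) pairs; user_interests: tags inferred
    # from the user's history. More topic overlap means a higher rank.
    return sorted(posts, key=lambda p: len(p[1] & user_interests), reverse=True)
```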
Most of the proposals the NYT discussed come from Democrats in Congress. Given how critically necessary the free flow of DFS is to the Republican Party, Christian nationalism and laissez-faire capitalists, it is hard to imagine any regulation of DFS passing out of Congress. We are probably locked into the current status quo for a long time to come.
Free speech absolutists generally argue that more speech is better and that people will figure out for themselves what is true and what isn't. From that point of view, there is no reason to even try to regulate any speech, dark or honest. Clearly, that line of reasoning is false. Tens of millions of adult Americans are deceived, bamboozled and manipulated by partisan political lies and crackpottery all the time. Being deceived may be the rule, not the exception.
One expert commented: “The issue becomes: Can the government directly ban algorithmic amplification? It’s going to be hard, especially if you’re trying to say you can’t amplify certain types of speech.” At least the damage that DFS causes is on the minds of some people. That is a lot better than nearly everyone treating all speech, dark or honest, as equal.
Questions:
1. Does the social cost-benefit calculus indicate that it is better to try to limit DFS, knowing that some honest speech will be collateral damage and some online sources might go out of business? Or is there enough value inherent in DFS that it should just be left alone, even if that means the end of democracy and the rule of law as we now know them?
2. Is it possible to regulate DFS without violating free speech law?
3. Compared to purveyors of honest speech, how much power does the combination of DFS on social media and the algorithms that promote it transfer to people and interests who routinely rely on DFS, e.g., Russia, ExxonMobil, the GOP, kleptocrats, dictators, etc.?