Big Tech Needs To Be Regulated. Here Are 4 Ways to Curb Disinformation and Protect Our Privacy

Recent months have seen mounting evidence that the algorithmic spread of hate speech, disinformation, and conspiracy theories by major internet platforms has undermined America’s response to the COVID-19 pandemic. It has also increased political polarization and helped enable white supremacist organizations. As the Big Tech titans appear before Congress, there are increasing calls for regulation of Facebook, YouTube, and others. And, finally, some of the advertisers on whom internet platforms depend for revenue are voicing concern. StopHateForProfit.org, a campaign organized by the civil rights organizations NAACP, ADL, Color of Change, Free Press, and Common Sense Media, has attracted more than 1,100 marketers who are pausing advertising on Facebook for a month or more to protest the amplification of hate on that platform.

Demands from policy makers for change began nearly a decade ago, when the Federal Trade Commission entered into a consent decree with Facebook designed to prevent the platform from sharing user data with third parties without prior consent. As we learned with the Cambridge Analytica scandal, Facebook paid lip service to that consent decree, following a pattern of “apologize, promise to do better, return to business as usual” that persists to this day. Other platforms, especially Google and Twitter, have also resisted calls to change business models partly responsible for the amplification of hate speech, disinformation, and conspiracy theories.

The nation’s current period of self-reflection has broadened the coalition calling for change, adding civil rights organizations and a growing number of consumers. Politicians and regulators at the state and federal level are responding. Regulation, which has faced an uphill battle, now appears likely. The next step must be to consider the options and trade-offs.

There are at least four areas that need regulation: safety, privacy, competition, and honesty. Only by coordinating action across all four will policy makers have any hope of reducing the harm from internet platforms.

Safety: The top priority for regulation relates to the safety of new technologies. There are two aspects of safety that require attention: product development and business models. Until the past decade, technology products generally empowered the people who used them. Safety was not an issue. Today the technology that enabled internet platforms to become dominant poses risks to society. At the same time, the idealism of Silicon Valley gave way to a Machiavellian aggressiveness. As a result, business practices that had been harmless in prior eras became dangerous. For example, the technology industry generally ships products as soon as they function – what is known as a minimum viable product – and leaves quality (and damage) control to end users. This philosophy worked well at small scale for products with limited functionality. It even worked at Google and Facebook in their early days, but not now. Catastrophic failures in new categories like facial recognition and artificial intelligence have exposed the danger of releasing new technology with no safeguards. Racial and gender bias, for instance, has been found in both facial recognition and AI products, including products for law enforcement. As the country learned with medicine and chemicals, some industries are too important to operate without supervision. Like new medicines, new technologies should be required to demonstrate safety and efficacy (as well as freedom from bias) before coming to market. Like companies that create or use chemicals, internet platforms should be financially accountable for any harm their products cause. Personal liability for executives and engineers will be important to change incentives.

The second category of safety regulation for internet platforms relates to business models. Harmful content is unusually profitable. Facebook, Instagram, YouTube, and Twitter monetize through advertising, the value of which depends on user attention. Platforms use algorithms to amplify content that maximizes user engagement. Hate speech, disinformation, and conspiracy theories are particularly engaging – they trigger our fight or flight instinct, which forces us to pay attention – so the algorithms amplify them more than most content. Other platform tools, such as Facebook Groups and the recommendation engines of each platform, increase engagement with harmful content.
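To make that mechanism concrete, here is a minimal sketch in Python of an engagement-ranked feed. Every name and weight in it is hypothetical; it illustrates the incentive, not any platform's actual code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    timestamp: float  # seconds since epoch

def engagement_score(post: Post) -> float:
    # Hypothetical weights: shares and comments hold attention longer
    # than likes, so they count for more.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # The platform's choice: order by predicted engagement, not by time.
    # Content that provokes the strongest reactions rises to the top,
    # regardless of its accuracy or its harm.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because outrage reliably generates comments and shares, a scorer like this amplifies it automatically; no one at the platform has to hand-pick any individual post.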

Platforms have no economic incentive to reduce harmful content. They are protected from liability by Section 230 of the Communications Decency Act of 1996, which courts have interpreted as providing blanket immunity for harm caused by third party content. Until a few months ago, Section 230 was untouchable in Congress, but that is no longer the case. Republican Senator Josh Hawley has introduced a bill that would remove some kinds of political content from the safe harbor. Former Vice President Joe Biden has called for eliminating Section 230. Neither proposal is optimal.

A better approach would be to change incentives while retaining the positive attributes of Section 230, including the protection it offers to startups. Policy makers should target algorithmic amplification, as it is the reason harmful content pervades the mainstream instead of just the corners of the internet. They can do so by eliminating the safe harbor of Section 230 whenever a platform chooses to treat different pieces of content differently. Reverse chronological order, which was the original organizational framework for newsfeeds, would remain a safe harbor, as would other frameworks that treat all content the same. But algorithmic amplification would not enjoy Section 230 protection because it is a choice of the platform, rather than the user. This change would not force platforms to behave differently, but would give them an incentive to do so. In combination with a guaranteed right for consumers to pursue litigation in cases of harm and personal liability for executives, it would be an important step on the path to a safer ecosystem for internet platforms.
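The line the proposal draws can be made concrete with a hypothetical sketch; the function names are illustrative, not drawn from any statute or codebase. The first ordering treats every post identically and would keep the safe harbor, while the second reflects a choice made by the platform and would not.

```python
def chronological_feed(posts):
    # Neutral framework: every post is treated the same, newest first.
    # Under the proposal, this ordering would keep the Section 230
    # safe harbor.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def amplified_feed(posts, predicted_engagement):
    # The platform treats pieces of content differently, boosting what
    # its model predicts will hold attention. Under the proposal, harm
    # flowing from that choice would no longer be immune from suit.
    return sorted(posts, key=predicted_engagement, reverse=True)
```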

Privacy: Privacy has been on the radar of policy makers since the Cambridge Analytica scandal. The data economy evolved over decades, and government largely took a hands-off approach. Corporations asserted ownership of any data they touched, as well as the right to use or transfer it without restriction. Smartphones and the internet enabled the tracking of every human activity, making it possible to capture a complete digital representation – what the activist Tristan Harris calls a data voodoo doll – of every consumer. Credit card purchases, financial records, employment history, cell phone location, medical tests and prescriptions, browsing history, and social media activity all create data that is deeply personal and available in a data marketplace. Marketers and internet platforms use this data to understand, predict, and manipulate our behavior. This is not just about advertising. Marketers and platforms also limit the choices available to us, without us even being aware. Worse yet, data about other people can be used to manipulate us and exploit our vulnerabilities. For example, marketers can use data from others to predict that a woman is pregnant before she knows it herself.

Policy makers understand that the status quo leaves consumers vulnerable to manipulation, but have struggled to find an effective solution. Europe’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) took first steps, but both place the burden on consumers to “opt out” of data usage. This is burdensome, as consumers are not aware of many of the corporations that hold and exploit their data. A better solution would be to shift the burden to corporations with an “opt in” requirement, where every corporation that holds data about us would be required to get our permission prior to every use or transfer.
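What “opt in” would mean in practice can be sketched in a few lines of code. The class and method names below are hypothetical; the point is that consent is recorded per purpose, and any use or transfer without a prior grant simply fails.

```python
class ConsentRequired(Exception):
    """Raised when data would be used without a prior opt-in."""

class DataRecord:
    def __init__(self, owner: str, payload: dict):
        self.owner = owner
        self.payload = payload
        self.consents: set[str] = set()  # purposes the owner opted into

    def grant(self, purpose: str) -> None:
        # Consent is an affirmative act by the data's owner ("opt in"),
        # not a default the owner must discover and revoke ("opt out").
        self.consents.add(purpose)

    def use(self, purpose: str) -> dict:
        # Every use or transfer is checked against a prior,
        # purpose-specific grant; absent one, the operation fails.
        if purpose not in self.consents:
            raise ConsentRequired(f"{self.owner} has not opted in to: {purpose}")
        return self.payload

record = DataRecord("alice", {"zip": "94110"})
record.grant("ad_measurement")
record.use("ad_measurement")    # permitted: explicit prior consent
record.use("resale_to_broker")  # raises ConsentRequired
```

The default is refusal: the burden sits with the corporation to ask, not with the consumer to discover and object.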

Brittany Kaiser, Andrew Yang, and others argue that Google, Facebook, and their peers should pay us for the right to exploit our data. That idea sounds wonderful, but will likely prove disappointing. The platforms are opaque, so it may be impossible to verify the value they derive from our data. They will set a low price and we will be stuck with it. Worse still, this model fails to capture the exploitation of data outside of targeted advertising. Whether they admit it or not, internet platforms should love “own your data,” because it would bless their harmful business practices in exchange for token payments.

Harvard professor Shoshana Zuboff has argued that personal data should be treated like bodily organs, as a human right, rather than an asset to be bought or sold. She makes a compelling case that no corporation should be allowed to use data voodoo dolls to manipulate our choices. The challenge for regulators will be to enable uses of our data that benefit consumers, while eliminating those that do not. For this reason, “opt in” may be the most practical path for privacy regulation. In combination with changes to Section 230, opt in would begin to reduce the spread of harmful content. But it will not be enough.

Competition: Competition – or the lack thereof – has prevented the creation of viable alternatives to the attention-based platforms. Google, Facebook, and Amazon have exploited scale and network effects to crush competitors, while also undermining the autonomy of users, suppliers, and communities. Without viable competitors (or regulation), internet platforms have no incentive to eliminate their harmful behaviors, and consumers have no better place to go.

Investors and others seem to fear antitrust action, which suggests they are not aware of the pro-growth history of antitrust in tech. Beginning with the 1956 Justice Department consent decree with AT&T, which separated the computer industry from telecom and put the transistor into the public domain, every major wave of technology can trace its roots to an antitrust action. The federal government and state attorneys general have initiated a range of antitrust investigations against Google, Facebook, Amazon, and Apple.

Most of the public discussion about antitrust revolves around breaking up the platforms, but that is just one part of the antitrust solution, and, ideally, the final piece. Antitrust policy makers should revive the principles of the Sherman, Clayton, and Federal Trade Commission Acts, forcing platforms to choose between operating marketplaces and participating in them, and to end anticompetitive behavior towards suppliers, advertisers, and users. If all we do is break up Google, Facebook, and Amazon, the undesirable behaviors of internet platforms will continue, but distributed among a dozen players, rather than three.

Securities Law: The fourth opportunity for regulation, honesty, arises under securities law. At issue is revenue recognition, particularly with respect to advertising networks. The internet platforms, but especially Google and Facebook, are opaque. Unlike traditional advertising platforms, they do not allow marketers and agencies to audit their numbers, which cover both their own platforms and the networks that place ads on other internet sites. Every marketer knows that user counts, ad views, and video views on the internet are overstated – and have been for at least a decade – but no one has hard data to prove it. If user counts and ad views are overstated, then the same must be true of revenues. Under securities law, the knowing overstatement of revenues, when material over a period of years, may give rise to felony violations and, potentially, jail time for executives. This issue will likely apply primarily to operators of advertising networks, including Google and Facebook. A securities law investigation would change incentives for these companies.
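A toy calculation shows why inflated metrics imply inflated revenues. The numbers below are invented for illustration; the sketch assumes the standard practice of billing advertisers per thousand impressions, so impressions that never happened flow directly into recognized revenue.

```python
# Invented numbers, for illustration only.
actual_views = 1_000_000                 # impressions that really occurred
overstatement = 0.25                     # assumed 25% inflation of the count
reported_views = actual_views * (1 + overstatement)

cpm = 5.00                               # dollars billed per 1,000 impressions
actual_revenue = actual_views / 1000 * cpm        # $5,000
reported_revenue = reported_views / 1000 * cpm    # $6,250

# The $1,250 gap is revenue recognized on impressions that never happened.
print(reported_revenue - actual_revenue)
```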

We should be able to enjoy the good aspects of internet platforms with far fewer harms. The platforms have failed at self-regulation. The future of our democracy, public health, privacy, and competition in our economy depends on thoughtful and comprehensive regulatory intervention. Collectively, the proposals above are only a first step; the scale and impact of internet platforms, and their ability to adapt around regulation, guarantee the need for further steps. The platforms will fight every step of the way, but their stonewalling over the past four years has cost them the moral high ground. It is our turn now.
