Towards a Public Service Internet

by Mary Ann Badavi

Following the January 6 insurrection at the U.S. Capitol, steps are finally being taken to address political disinformation and the role of tech companies in spreading it. But algorithms have fanned the flames of disinformation on almost every topic that touches daily life: from climate change to COVID-19 to immigration. And even though we see it happening right in front of us, companies like Facebook have no reason to stop it. Their powerful computing systems are built to gather data on our behavior, analyze that data, and use it to predict what we will do next. They’ve built algorithms that figure out which content performs best and push that content to the top of our news feeds. That content drives clicks, clicks drive ads, and ads drive money. The business model of social media churns on, no matter who storms the seat of government.
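To make that loop concrete, here is a deliberately simplified sketch of an engagement-ranked feed. The post fields, weights, and scores are all invented for illustration; this is not Facebook’s actual system, only the general shape of one.

```python
# Illustrative sketch of an engagement-ranked feed; not any company's real code.
# Each post gets a predicted-engagement score, and the feed simply sorts by it,
# so whatever provokes the most clicks and reactions rises to the top.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's guess at click-through rate
    predicted_shares: float    # model's guess at share rate
    predicted_comments: float  # model's guess at comment rate

def engagement_score(post: Post) -> float:
    # Invented weights: shares and comments count more than clicks
    # because they keep users on the platform longer.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_shares
            + 2.0 * post.predicted_comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed optimizes for engagement, not accuracy: a false but
    # outrage-inducing post with high predicted shares outranks a
    # sober correction with low ones.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, accurate report", 0.02, 0.001, 0.002),
    Post("Outrage-bait falsehood", 0.08, 0.030, 0.050),
])
print([p.text for p in feed])  # the falsehood comes first
```

Nothing in the scoring function asks whether a post is true; that omission, not any single weight, is the point.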

Congress has a lot to catch up on: though lawmakers are becoming more technologically savvy, Big Tech has obscured its inner workings on purpose—because it simply doesn’t want to change them.1 But by looking to existing laws on the regulation of other industries, we can imagine a new internet—one that deprioritizes misinformation and is made in the public interest.

Will targeted ads be the end of democracy?

We tend to think of social media as a free service; that’s what made it such a miracle at first. All that content, information, and connection with friends, and you don’t have to pay a dime! In 2018, Republican Senator Orrin Hatch infamously asked Mark Zuckerberg during a congressional hearing how Facebook could provide its services for free. Zuckerberg’s terse response will go down in history: “Senator, we run ads.”2

To Zuckerberg’s credit, he was telling the truth: according to its annual financial report, 97.9% of Facebook’s revenue in 2020 came from advertising. What makes this particular kind of advertising special is that it’s based on your behavior—and on social media, there are a lot of data points on how you behave. It’s legal and valuable for Facebook to track all of the Couch to 5k groups you’ve joined, or for Google to see that you need directions to the nearest pet store. They then place you in a user interest group based on your behavioral data—you’re the type of person who is interested in running and owns a cat—and sell automated access to that group to the hundreds of thousands of companies that make up the adtech world. The sole responsibility of these companies is to programmatically send your information to as many businesses as possible in order to show you a relevant ad in real time.3
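As a toy model of that pipeline, the sketch below turns a behavior log into audience segments and runs a simplified real-time auction over them. All names, categories, and prices are invented; real systems involve far more parties and machinery (see the real-time bidding guide in note 3).

```python
# Toy model of behavioral segmentation and programmatic ad sale;
# all names, categories, and prices here are invented for illustration.

# Invented mapping from observed behaviors to sellable audience segments.
BEHAVIOR_TO_SEGMENT = {
    "joined_couch_to_5k_group": "interest:running",
    "searched_directions_pet_store": "interest:pet_owner",
    "read_maternity_article": "life_stage:expecting_parent",
}

def segments_for(observed_behaviors: list[str]) -> set[str]:
    """Collapse a raw behavior log into audience segments."""
    return {BEHAVIOR_TO_SEGMENT[b] for b in observed_behaviors
            if b in BEHAVIOR_TO_SEGMENT}

def run_auction(user_segments: set[str],
                bids: dict[str, tuple[set[str], float]]) -> str:
    """Simplified real-time bidding: each advertiser names the segments
    it wants and a price; the highest matching bid wins the ad slot."""
    matching = {adv: price for adv, (wanted, price) in bids.items()
                if wanted & user_segments}
    return max(matching, key=matching.get) if matching else "house_ad"

user = segments_for(["joined_couch_to_5k_group",
                     "searched_directions_pet_store"])
winner = run_auction(user, {
    "running_shoe_brand": ({"interest:running"}, 2.50),
    "cat_food_brand": ({"interest:pet_owner"}, 1.75),
})
print(winner)  # -> running_shoe_brand
```

The auction runs in the milliseconds it takes a page to load, and the user never sees the segment labels being sold.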

There are some safeguards on the kinds of behavioral data that companies can collect and sell, but fewer than you might think. According to the Interactive Advertising Bureau (IAB), the adtech industry’s trade body, advertisers can target behavioral information about everything from your household income to whether you’re looking for bail bonds, maternity clothes, or mental health services. The IAB’s audience taxonomy contains over 1,500 kinds of personal behaviors that can be targeted—all so the services you use can remain free, and the companies behind them can make as much money as possible.

At first glance, these behaviors seem relatively innocuous—and there are those who argue that users prefer getting ads that are specific to their desires. But what happens when targeted ads come into contact with our political beliefs? Well, we know what happened on January 6th in Washington, DC. In response, Facebook imposed a moratorium on political ads, which it lifted in March 2021.4 Defenders of political ads have argued that a ban blocks not only dangerous political disinformation but also good actors trying to get the truth out.5 But the problem isn’t political ads themselves: it’s that behavioral advertising allows people who are already at risk of falling for misinformation to be targeted even further. If you’ve already fallen prey to misinformation spread by your friends or news organizations, the advertising content you see will just reinforce your beliefs; it’s a perfect storm of influencing tactics in which you’re being fed lies from multiple avenues. In their 2018 report “Weaponizing the Digital Influence Machine,” the research organization Data & Society wrote that “like beauty product marketers, political advertisers – those aiming to influence political discourse, sentiments around public issues, or political behaviors from voting to attending marches or calling representatives – are able to sift through data streams to identify prime points of vulnerability.”

On the internet, our truest selves are completely exposed. Targeted political ads prey on that. And unless we make a change, it’s going to keep getting worse.

What laws exist to protect us?

Last May, Congresswoman Anna Eshoo (D-CA) introduced the Banning Microtargeted Political Ads Act to address this exact issue. The legislation would ban “online platforms, including social media, ad networks, and streaming services, from targeting political ads based on the demographic or behavioral data of users.” Targeting by broad geographic area would still be allowed, but the bill would be a huge change from the current state of play. It hasn’t made any progress, however, since its introduction last year.

Eshoo, along with Congressman Tom Malinowski (D-NJ), has also introduced the Protecting Americans from Dangerous Algorithms Act, which would amend Section 230 of the Communications Decency Act to hold large platforms accountable if their algorithms were used to spread content that led to violence and extremism. The bill is well-intentioned: it’s limited in scope, leaves the rest of Section 230 intact, and is at least a first step toward holding Big Tech accountable for its algorithms. One problem, of course, is that it would be hard to prove whether content was amplified by algorithms: first, because nearly everything on these platforms is; second, because Big Tech doesn’t share that data. To really work, the bill would have to include mandates on data and algorithmic transparency.

In the European Union, some of these protections already exist under the General Data Protection Regulation (GDPR). The law prohibits companies from processing user data without unambiguous, revocable consent for legitimate and limited use, which is why you now get those pop-ups about cookies every time you visit a website. A number of technology policy experts in the UK have filed regulatory complaints arguing that adtech’s real-time bidding system makes any reasonable form of user consent impossible. Even beyond the actual content of the GDPR, there are concerns about whether the groundbreaking regulation is achieving its goal of protecting fundamental data rights; two years after the law took effect, internal reviews found that enforcement isn’t as strict as it should be.6 With resources scarce, few regulators have backgrounds in technology; the EU simply doesn’t have enough people to track all of the violations of the law.

Regulating tech is hard, and the world’s governments are only now starting to realize that it needs to be done. But some legislators are looking beyond outright technology regulation to reimagine the internet, broken as it is today.

Antitrust experts across government and academia have long called for regulatory models from other sectors, including finance, to be applied to the technology industry. In her lauded 2019 paper “The Antitrust Case Against Facebook,” former advertising executive Dina Srinivasan argues that Facebook’s free model actively harms consumers by using personalized ads against them. Her ideas have influenced state litigators across the nation, including the antitrust lawsuit against Facebook led by New York Attorney General Letitia James.

Other legislators are looking to a Depression-era banking law to rein in Big Tech. Congressman David Cicilline (D-RI), chair of the House Antitrust Subcommittee, is pushing for a “Glass-Steagall Act for the Internet” that would break core parts of tech companies into separate entities, much as the 1933 law separated commercial and investment banking. Cicilline has argued that this would give smaller tech companies more of a fighting chance, and would prevent companies like Facebook and Google from double-dipping by feeding their collected behavioral data into their ad businesses. But competition policy experts have criticized this approach, claiming it would harm the core ways we use the internet to search for and purchase products.7

Still others argue that the core business models that drive the internet are broken beyond repair and need to be rethought completely. Zephyr Teachout, an antitrust expert who ran for New York governor against Andrew Cuomo in 2014, has made the case for applying public utility laws to Big Tech. In a paper co-authored with legal scholar K. Sabeel Rahman, she argues that this approach has been applied to everything “from water to electricity to telecommunications,” and ought to be applied to the internet as well.8

In our modern age, the internet is as ubiquitous as all three of those public utilities. But how do we make it work like one?

What would a new internet look like?

Legislation and adjudication are not the keys to ending disinformation, predatory online advertising, or the monopolization of internet services; they are simply attempts to stem the deluge of harmful information we’re seeing today. A public service internet would have to be the effort of a broad coalition—journalists, technologists, citizens, educators, and yes, legislators too.

Our education system needs to prioritize media literacy and the ability to discern fact from fiction. Our journalists must continue exposing harmful networks when they find them, in responsible ways that don’t hand those networks a larger audience. Our technologists need incentives to act in the public interest by deprioritizing designs that make easy targets for bad actors: microtargeted ads, algorithms that reward strong emotions. And our legislators need to not just pass laws, but hold us all accountable to them.

It’s easy to think that a public service internet will never happen. After all, the internet began as a public service,9 only to be overtaken by predatory practices, privacy infringements, and more. Governments are only now taking a closer look at these issues, thirty years too late. People are being radicalized faster than we can track them. We are on the edge of a new era, one that is potentially more dangerous than ever.

But I actually believe we can make this new era a better one. More people than ever are confronting the ways in which click-driven content harms people. My parents, immigrants in their 70s, look for multiple sources and ask me to fact-check the news articles they read. I’ve taken part in crowdsourced efforts to spot misinformation, run by everyone from small election-protection nonprofits to companies like Twitter. I’ve listened to dozens of academics speak about the problems of algorithmic injustice and radicalization. And perhaps most surprisingly, I’ve heard lawmakers on both sides of the aisle express a real desire to regulate these companies, even if they disagree on exactly how to do it.

People around the world are demanding a better internet, and that’s how I know it’s possible. We can get there if we keep pushing legislators and technology giants to prioritize our well-being and protect us from harm, make sure technologists are aware of the pitfalls of the products they create, and prioritize business models that provide real services instead of manipulative ones.

Throughout the past year of isolation, the internet has been an invaluable lifeline to our friends and family, as well as a way to see outside our small quarantined worlds. It’s provided education, news, resources, and communities. Simply put: the internet of 2021 already is a public service. If we work together, we can actually treat it as such.


Mary Ann Badavi is a second year MFA Design & Technology student who works at the intersection of data, ethics, and civic design. She believes in the power of technologists to inform policy and social change.

  1. Ovide, Shira. “Congress Doesn’t Get Big Tech. By Design.” The New York Times. The New York Times, July 29, 2020. https://www.nytimes.com/2020/07/29/technology/congress-big-tech.html

  2. Burch, Sean. “’Senator, We Run Ads’: Hatch Mocked for Basic Facebook Question to Zuckerberg.” The Wrap. The Wrap, April 10, 2018. https://www.thewrap.com/senator-orrin-hatch-facebook-biz-model-zuckerberg/

  3. Automatad Team. “Real Time Bidding - A Beginner’s Guide.” Automatad, January 27, 2021. https://headerbidding.co/real-time-bidding/

  4. Culliford, Elizabeth, and Sheila Dang. “Facebook to End Ban on Political Ads in United States.” Reuters. Thomson Reuters, March 3, 2021. https://www.reuters.com/article/us-usa-election-social-media-idUSKBN2AV2GQ

  5. Herndon, Astead W. “Alexandria Ocasio-Cortez on Biden’s Win, House Losses, and What’s Next for the Left.” The New York Times. The New York Times, November 8, 2020. https://www.nytimes.com/2020/11/07/us/politics/aoc-biden-progressives.html

  6. Lomas, Natasha. “GDPR’s Two-Year Review Flags Lack of ‘Vigorous’ Enforcement.” TechCrunch. TechCrunch, June 24, 2020. https://techcrunch.com/2020/06/24/gdprs-two-year-review-flags-lack-of-vigorous-enforcement

  7. Bowman, Sam. “The Folly of Cicilline’s ‘Glass-Steagall for Tech’.” The Hill, September 8, 2020. https://thehill.com/blogs/congress-blog/lawmaker-news/515381-the-folly-of-cicillines-glass-steagall-for-tech

  8. Rahman, K. Sabeel, and Zephyr Teachout. “From Private Bads to Public Goods: Adapting Public Utility Regulation for Informational Infrastructure.” Knight First Amendment Institute at Columbia University, February 4, 2020. https://knightcolumbia.org/content/from-private-bads-to-public-goods-adapting-public-utility-regulation-for-informational-infrastructure

  9. “A Short History of the Web.” CERN. Accessed May 2, 2021. https://home.cern/science/computing/birth-web/short-history-web

1. Dark Connections

The internet is made up of interconnected pieces of data about its users. Most websites have trackers installed in them, largely belonging to Google or Facebook, that keep tabs on the people using them. This data is often neither protected nor encrypted, accessible to anyone with the means to reach it. Though these companies store our data and use it to sell their products to us, they take no responsibility for it. The entire system is almost never made explicit; it operates shrouded in the background of the services we use. This section aims to connect the dots in the dark underbelly of the internet: the one we have a vague sense of but rarely see clearly.
Making these connections can make the online experience feel scary and unsafe, but it already is. Although governments and large corporations are often seen as the problem, the truth is that they are far less interested in you or me than someone who knows us personally and has an agenda that involves us. This section shines a light on the dark patterns that enable your data to be collected and potentially mobilized against your interests.
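One way to glimpse this tracking layer directly is to list the third-party hosts a page loads scripts from. The sketch below uses only the Python standard library and a crude heuristic (any script host that differs from the page’s own host counts as third party); real tracker detection relies on curated blocklists, so treat this as illustration rather than a tool.

```python
# Rough sketch: list the third-party hosts a page loads <script> tags from.
# Standard library only; real tracker detection uses curated blocklists
# rather than this crude "different host = third party" heuristic.

from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ScriptSrcCollector(HTMLParser):
    """Collect the src attribute of every <script> tag in a page."""
    def __init__(self):
        super().__init__()
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)

def third_party_script_hosts(page_url: str) -> set[str]:
    page_host = urlparse(page_url).netloc
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    parser = ScriptSrcCollector()
    parser.feed(html)
    # Relative srcs parse to an empty host and are filtered out below.
    hosts = {urlparse(src).netloc for src in parser.srcs}
    return {h for h in hosts if h and h != page_host}

# Example (some sites block scripted requests; pick any page you can fetch):
# print(third_party_script_hosts("https://example.com"))
```

Run it against a few news sites and the same handful of ad and analytics domains tends to reappear.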

2. Digital Forensics

In order to combat the practice of dark data, one can exploit the loopholes in its architecture. But to do this, we need to at least comprehend the full extent of the information collected about us. It is now possible to demand the data that companies hold about us, though this option is not obvious to most people. Resources like APIs, Google Takeout, and OSINT tools allow us to conduct small-scale investigations into where our data lives and what data exists about us. This section is a collection of attempts by the authors to gain access to and interpret their own data that exists online.
However, awareness of the data does not guarantee control over it. Google may give us a copy of the data on its servers through its Google Takeout service, but this does not mean that we now own that data. Google can still use it however it likes; nothing has been deleted from its databases. We are given only an illusion of control, and this is intentional. Digital forensics can only grant us a window into this massive machine, whose machinations may remain unclear. This section explores these windows and what they teach us, both about ourselves and about the technology we use.
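As a concrete starting point, here is a minimal sketch that summarizes a Google Takeout export by year. It assumes the “My Activity” JSON format (a list of records, each with an ISO-8601 “time” field); Takeout formats vary and change, so the file path and field names below are assumptions to check against your own archive.

```python
# Sketch: count activity records per year in a Google Takeout export.
# Assumes the "My Activity" JSON format (a list of records, each with an
# ISO-8601 "time" field). Takeout formats change, so treat the path and
# field names below as assumptions to verify against your own archive.

import json
from collections import Counter
from pathlib import Path

def activity_per_year(export_path: str) -> Counter:
    records = json.loads(Path(export_path).read_text(encoding="utf-8"))
    years = Counter()
    for record in records:
        timestamp = record.get("time", "")  # e.g. "2021-03-01T12:34:56.789Z"
        if len(timestamp) >= 4:
            years[timestamp[:4]] += 1
    return years

# Hypothetical path into a Takeout archive:
# for year, count in sorted(activity_per_year(
#         "Takeout/My Activity/Search/MyActivity.json").items()):
#     print(year, count)
```

Even a tally this simple makes the scale of the log visible in a way the raw export does not.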

3. Data Futures

What is the future of dark data? People are increasingly aware that information about them is collected online, and governments are making efforts to regulate Big Tech and protect the privacy of citizens. How can we imagine better ways to exist within this system? How can we protect ourselves from its repercussions? This section speculates on how dark data is changing as a practice. It discusses ways in which people can take action and re-examine their browsing habits, and considers how technology can be used to address the very problems it has created.
It is important to remember that data collection and exploitation are ongoing practices; there is no easy way out of these cycles. But we would like to believe that sparking deliberate thought and action, and helping you orient yourself in this Wild West landscape, can make the process of coming to terms with dark data easier.

4. About

This digital edition was compiled from scholarship, research, and creative practice in spring 2021 to fulfill the requirements for PSAM 5752 Dark Data, a course at Parsons School of Design.

Editors

  • Sarah Nichols
  • Apurv Rayate

Art Directors

  • Nishra Ranpura
  • Pavithra Chandrasekhar

Technology Directors

  • Ege Uz
  • Olivier Brückner

Faculty

  • David Carroll
  • Melanie Crean

Contributors

This site needs no privacy policy: we installed no tracking code, and it stores no cookies.