“But will you commit to ending Finsta?”
Fake Instagram accounts, or ‘Finsta’ accounts, are private secondary accounts that Instagram users sometimes create for a limited and selective group of their followers. So contrary to what Senator Richard Blumenthal (D-CT) seemed to imply in his questions during a Commerce subcommittee hearing with Antigone Davis, Facebook’s head of global safety, Finsta accounts are not actually an official type of account that the platform can commit to terminating. Nor should they be. Finsta accounts let users separate posts intended for a public audience from posts kept private for family, select groups of friends, and other close contacts.

The awkward back-and-forth between Blumenthal and Davis, an exchange the Senator later laughed off with a Steve Buscemi “How do you do, fellow kids?” gif (as well as a more informative thread on kids’ online safety), comes at a time when the platforms, and Facebook in particular, have come under fire. They have been accused of knowingly enabling or ignoring the ways dangerous content spreads on social media and the harm it does to vulnerable communities around the world. From rampant COVID-related misinformation to content fueling online hate and extreme polarization, it is widely acknowledged that largely unregulated or unmonitored social media platforms can pose direct threats to the health and overall security of any country’s population. This has led to renewed calls from legislators on both sides of the aisle for immediate and comprehensive social media regulation.
However, while the Finsta line of questioning was an entertaining slip in an otherwise well-reasoned call for greater oversight, other proposals circulating formally and informally in Congress over the past few years come from an overly reactionary position, calling for regulations that either cannot be consistently enforced or will invariably be seen as political once applied. Still other proposals, while nobly attempting to curb the spread of malicious content online, miss the mark by failing to appreciate the complexities of the various online platforms and the services they offer.
For the national security concerns alone, Congress cannot afford to stall on identifying ways to meaningfully regulate social media. That said, mistaking a sense of urgency for a need for rushed legislation that does little to address the underlying harms brought on by social media is a very bad idea. Congress should not hastily regulate social media platforms without a fuller understanding, or at least a better appreciation, of what it is in fact regulating in the first place.
The good news (if you can call it that) is that this is by no means a new concern; after all, the only thing unprecedented about ‘these unprecedented times’ is that we’ve been living through them for some time now. Every few months there is a new revelation about where and how platforms have failed to monitor dangerous online content, followed by heightened congressional demands for greater regulation. With some exceptions, partisanship intervenes, regulations fail to pass, and the status quo remains. But meaningful regulation is long overdue, especially considering how bad actors, both foreign and domestic, have grown ever more adept at evading detection and using social media to sow divisions between Americans and promote dangerous content online. Each crisis brings us one step closer to a breaking point at which widespread regulations could be hurriedly introduced, considered, and passed without full acknowledgement of their longer-term implications. For these reasons, there is merit in revisiting the broad categories of past regulation attempts, understanding why they were not successful, and identifying workarounds that will hopefully produce the desired outcome of simultaneously freer and safer online communities.
Perhaps the broadest category of proposed legislation focuses on content regulation and platform liability: Section 230 reforms. Among other things, Section 230 of the Communications Decency Act provides social media platforms with immunity from liability provided that the platforms act in good faith to self-regulate and remove harmful content. Though there is widespread agreement that Section 230 should be reformed, why, and consequently how, it should be reformed is the subject of great debate. Some members of Congress feel the platforms are not doing enough to self-regulate and curb the spread of dangerous misinformation and disinformation online, and believe that for this reason they should not be immune from liability. Others believe the social media companies are doing too much to regulate only certain types of content, and that these biases indicate the platforms are not acting in “good faith” and should therefore not be afforded protections under Section 230. Just in the last year, this debate played out most forcefully in the context of COVID and election mis- and disinformation.
Congress has gained greater consensus at the margins, chiseling away at parts of Section 230 by attempting to hold social media platforms liable for overtly unacceptable and dangerous content, like posts enabling sex trafficking. But narrow legislation on what does or does not count as health mis- and disinformation in the context of COVID, or on what should be considered a reliable source of health information in the current climate, invites restrictive regulations that cannot be applied consistently across different administrations, or adapted as understanding of certain health phenomena evolves. Similarly, attempts to quickly pass legislation in response to the role social media played during and after the 2020 election suffered from tying legislative proposals too narrowly to current grievances. Had such legislation been introduced and passed, it could have done more harm than good: by appearing politically motivated, it risked driving people towards platforms seemingly more aligned with their political persuasions, further siloing political discourse and nurturing hyperpolarization. Calls to do away with Section 230 entirely could also have had the unintended effect of favoring the very platforms some members of Congress were looking to punish, since larger platforms can afford to devote more resources to litigation expenses and to automating more of the content screening process, whereas smaller, newer platforms cannot.
Individuals across the country, inside and outside of Congress and across the political spectrum, have been diligently thinking through these complex issues and proposing intermediary or supplemental steps that Congress could pursue in tandem with marginal platform regulations. There are a number of options Congress should seriously and urgently debate, including the following: growing societal resilience against online threats through reinvigorated civic education and media literacy initiatives; creating internal task forces dedicated to keeping members of Congress abreast of the latest developments on the platforms; and pursuing more ambitious proposals, such as creating an independent body tasked initially with monitoring, and perhaps eventually regulating, the actions of social media platforms in real time. These actions are not all as flashy as proposals to tear apart Section 230 or immediately break up the platforms, but they have the potential to be far-reaching and can lay a foundation for more productive debates and votes on platform regulation in the future. There is a national security imperative not only in making sure the platforms are regulated, but in making sure they are regulated properly, in a way that enhances, rather than thwarts, the free and open discourse our democracy relies on.
Which brings us back to Finsta. Not to put words in Senator Blumenthal’s mouth, but he would likely be the first to acknowledge that while the exact question of ‘ending Finsta’ may not have been well crafted, the larger discussion of how to protect children’s online safety was absolutely the right one to have. Congress has a very important role to play in challenging platforms and partnering with them to develop meaningful regulations. The bad idea is not Congress, as a body, regulating social media platforms or actively paving the way for increased competition; it is Congress being too eager in its proposed actions, without a strategy for how certain types of regulations, taken together, might unintentionally exacerbate current problems and weaken Americans’ trust in Congress and other democratic institutions in the process. Now Congress just needs to get out of its own way and resist the urge to prioritize crisis-driven proposals for platform regulation.
(Photo Credit: Chip Somodevilla/Getty Images)