In a Times op-ed titled “The Knowledge Coup,” The Age of Surveillance Capitalism author Shoshana Zuboff takes on three iconic issues of our digital age: surveillance, disinformation and censorship. She weaves them together around a common focal point: the unchecked rise of the “Big Tech” companies whose platforms and business models have transformed our economy, culture and politics (not, in her view, for the better). Most of the essay paints a familiar picture of our current dystopian catastrophe, which she captures in the phrase “epistemic chaos.” The picture connects the profit-driven, unregulated trade in digital behavioral data to the “radical indifference” to truth embedded in targeting algorithms that shape people’s perception of reality on a mass scale. From there, it’s a short step to a condition in which unelected and unaccountable autocrats make self-serving decisions about which propaganda and individuals to suppress or amplify, with violent outcomes.

This is a familiar distillation of a growing view of what’s gone wrong with Big Tech, at least among people who think about such things. In her conclusion, Zuboff sketches a three-part remedy based on “new charters of rights that protect citizens from the massive-scale invasion and theft compelled by surveillance economics.” The privacy-first implication is that, if we put an end to the data collection operations at the root of Big Tech’s exploitation of “human experience,” democratic principles can supplant their concentration of power and return to us what’s ours.

Of course, many legislative initiatives worldwide are already pursuing this sort of remedy, and increasing regulation of personal data collection is clearly inevitable. Antitrust forces aligning against Big Tech companies may also have some impact. And Big Tech companies themselves are actively working to supersede the data collection technologies built into our phones and browsers (while protecting their revenue streams).
But whatever their benefits, the link between these approaches and the problems of disinformation and its dark counterpart, censorship, is tenuous. At worst, they may exacerbate those problems. In any case, the effects of restricting data collection on the flow of disinformation are surely worthy of consideration.

One theory suggests that, without the precision targeting algorithms that feed on personal data, the filter-bubble structure that reinforces extremism and conspiracy-theory-based realities will dissipate. Instead, people will be exposed to views hatched and moderated by more mainstream media brands as extremists lose the automated amplification afforded their most lurid fictions. Even if media brands are biased (which, of course, they are), they’re at least open to scrutiny and accountable for what they publish, unlike Big Tech companies, which are legally shielded, and the anonymous agitators operating in the shadows. However, we also need to consider the benefits that pervasive anonymity confers on the business of spreading disinformation. People will still seek information that confirms and reinforces their biases, and exploiting those biases might be even easier from behind a veil of anonymity. As bots and deepfakes become more widespread and sophisticated, we should expect disinformation to flourish under a system designed to protect all individuals from being tracked or personally identified. Big Tech may yet thrive on disinformation in a world where corporate targeting algorithms are replaced by hidden forces and platforms are forced to relinquish oversight (note Facebook’s stated commitment to end-to-end encryption in this context).
Zuboff obliquely acknowledges this problem when she writes that we need to “tie data collection to fundamental rights and data use to public service, addressing the genuine needs of people and communities.” Fair enough, but this exposes the big question: how can we apply democratic accountability to the problem of selective surveillance and censorship (or, if you prefer, “content moderation”)? Our legacy systems of checks and balances, things like fact-checking organizations and court orders backed by judicial and legislative oversight, come up short. Whatever loopholes we build into our privacy shield will be vulnerable to exploitation at scale. Even as the scope of disinformation appears ready to explode, many people dislike the idea of empowering government agencies with more surveillance and censorship capabilities as much as they dislike leaving the problem in the hands of unaccountable Big Tech companies mining behavior for their ad businesses. The advertisers that pay for all this are also in a bind.

Is there some more democratic solution to this problem? This is a question the digerati have been pondering for some time. Back in 2018 and ’19, a flurry of proposals emerged for ways that decentralized ledger technologies like blockchain might address some of the problems of fake news and build consensus around a single version of the truth (at least as it applies to data). The bloom is now off the blockchain rose; in 2020, Gartner’s Hype Cycle for Blockchain Technologies put blockchain squarely in the trough of disillusionment after it failed to deliver on extreme hype. Still, experiments press on. Last year the New York Times worked with IBM to test its News Provenance Project, which used blockchain to investigate ways to verify the authenticity of photojournalism. Although this is a small corner of the disinformation and censorship landscape, it does illuminate the idea that technology may play a significant role in restoring trust to the digital ecosystem.
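To make the provenance idea concrete, here is a minimal sketch of how a hash-chained ledger can make tampering with published metadata detectable. This is a toy illustration of the general technique, not the News Provenance Project’s actual design or API; all names (`ProvenanceLedger`, `record_hash`) are invented for the example.

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a canonical (sorted-key) JSON encoding."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLedger:
    """Append-only chain: each record embeds the hash of its predecessor."""

    def __init__(self):
        self.chain = []

    def append(self, metadata: dict) -> dict:
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        record = {"metadata": metadata, "prev_hash": prev}
        record["hash"] = record_hash({"metadata": metadata, "prev_hash": prev})
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash in order; any edit to past data breaks the chain."""
        prev = "0" * 64
        for rec in self.chain:
            expected = record_hash({"metadata": rec["metadata"], "prev_hash": prev})
            if rec["hash"] != expected or rec["prev_hash"] != prev:
                return False
            prev = rec["hash"]
        return True

ledger = ProvenanceLedger()
ledger.append({"photo_sha256": "ab12...", "caption": "Original caption", "source": "Newsroom A"})
ledger.append({"photo_sha256": "ab12...", "caption": "Caption revised", "source": "Newsroom A"})
assert ledger.verify()

# Quietly altering an earlier caption is detectable:
ledger.chain[0]["metadata"]["caption"] = "Doctored caption"
assert not ledger.verify()
```

The point of the exercise is the asymmetry: anyone can verify the chain, but no one can rewrite history without the mismatch showing. A real deployment distributes the ledger across parties so no single operator can regenerate the hashes, which is where the blockchain part comes in.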
Technologies like blockchain have a complex relationship with issues like privacy and censorship. Blockchain’s immutable nature has led some to propose its use in censor-proof social networks, and others to point out the dangers of injecting personal data into its immutable blocks. More research is needed to determine these technologies’ role in addressing the “epistemic chaos” described by Zuboff and others. A single-minded approach rooted in privacy laws, however, is unlikely to dig us out of the hole of toxic, polarized realities we’re in. Above all, we need to be sure that the next wave of digital technology is developed with better incentives in mind than profit and power. If there is an answer, it must involve law and code working together, transparently, on our behalf.