Vanessa Schipani
Philosopher. Educator. Journalist.
Research Statement
From our dinner tables to our democratic institutions, political polarization has shattered our ability to communicate with one another. To resolve this problem, we need solutions that are not only morally and epistemically justifiable but also practically effective. For this reason, my research bridges work in philosophy and the social sciences with the aim of mitigating political polarization such that deliberative democracy can be made to function. My research is also informed by my decade-long career in science journalism, which included reporting on policy disputes for FactCheck.org during the Obama and first Trump administrations.
While much scholarship concentrates on local sites of deliberative democracy (e.g., citizen juries, mini-publics), I focus on how the parts of large deliberative systems can complement (or clash with) one another. Galvanized by my experience at FactCheck.org, I’ve historically concentrated on two vital democratic institutions – science and journalism – and asked how their communicative practices can ease or escalate polarization. For example, in a series of papers, I argue responsiveness should guide scientists and journalists in how they communicate with one another and the public. Key to both scientific and democratic progress, responsiveness entails holding one’s own view while remaining open to and acknowledging those of others.
At FactCheck.org, along with vetting politicians’ claims, I also helped curb misinformation online in partnership with Facebook. Thus, more recently, I’ve expanded my research program to include technology’s impact on polarization. For example, I’m now developing arguments for why we need to hold social media companies, much as we hold journalistic publishers, legally liable for the harm they cause to individuals and the information ecosystem. I’ve also begun to investigate how shifting generative AI development towards responsiveness, instead of sycophancy, might help foster, instead of hinder, democratic deliberation.
Mitigating Polarization with Responsive Science
During the coronavirus pandemic, we were often told to ‘follow the science.’ This phrase hints at the traditional justification for trust in scientists: Trust them because they’re certain, objective, and in consensus. The problem? Most policy-relevant science is rife with uncertainty, value judgments, and disagreement. Does this mean it’s all untrustworthy? No, it means the traditional justification for trust in science fails in real-world contexts. Instead, I focus on how scientists can earn public trust through responsive communication.
In one paper, I defend what I call the responsiveness model of trust in science. Many scholars of science argue policy-relevant science’s ethical legitimacy relies on value judgments. Still, some raise concerns about science’s democratic legitimacy, which traditionally relies on science’s objectivity. With the model, I resolve this tension and sidestep an issue prior attempts faced – accounting for polarization, or pluralism generally. The model says scientists must be responsive to both leading and alternative hypotheses and the public’s diverse values. It also says journalists, acting as the public’s watchdogs, must verify whether scientists are being responsive.
In another paper, I extend the responsiveness model to include educators. In political theory, public reason holds that all must agree with the reasons justifying coercive policies. However, many citizens don’t understand science. Since evidence-based policymaking uses science to justify policies, some contend that it conflicts with public reason. In response, I propose adopting a view of public reason that centers on agreement over methods, not reasons. I then argue that educators must impress upon students the value of scientific methods, especially the centrality of responsiveness to producing epistemically and democratically legitimate science.
I’ve also been invited to write a chapter on ‘philosophy of science communication’ for an agenda-setting philosophy of science collection that will be published by Elsevier. In that chapter, I’m outlining both social scientific and philosophical research on science communication. I argue that, while philosophers of science do attend to communicative practices that are epistemically, ethically, and democratically legitimate, their neglect of social scientific research has led many to propose theories that struggle in real-world contexts.
Mitigating Polarization with Responsive Journalism
My work on science communication overlaps with my work on journalism. In a paper recently published in Synthese, I argue the public’s image of scientists as trustworthy only when in consensus makes it difficult for journalists to ethically communicate scientific disagreement. Using the science on masking during the coronavirus pandemic as a case study, I focus on a conflict between journalists’ accuracy and harm norms. In doing so, I show that journalists, like scientists, must make value judgments in their work. To resolve this problem, I sketch a model of trust that accommodates scientific disagreement: the responsiveness model.
That journalists must make value judgments has implications for journalistic objectivity. In another paper, I address this issue head-on. In the U.S., the First Amendment right to a free press is often premised on journalistic objectivity. Journalistic objectivity, in turn, requires journalists to forgo their First Amendment right to publicly express their values. To resolve this conflict, I argue objectivity should be a requirement of the journalistic community, not individual journalists. If we take this tack, then journalists can retain their right to free expression, and we can preserve the foundation on which the right to a free press is built – objectivity.
Which standards must the journalistic community meet to be sufficiently objective? To answer this question, I apply philosophical arguments about scientific objectivity to journalism. I argue that newsrooms must be deliberative bodies composed of diverse groups of reporters who are responsive to each other’s views. Accordingly, I claim an objective journalistic community is a depolarized one. Relatedly, I’m working with social scientists to survey public opinion on practices that would improve trust in the media, such as reporting that conveys multiple perspectives on issues, and on means to encourage such practices, such as legislation.
Along with social scientists, I also collaborate with practicing journalists. For example, I recently helped a fellow at Oxford’s Reuters Institute for the Study of Journalism to write a report on the role of science journalism in democracy. In fact, he spotlighted my research in his report because he said it “solves many of the problems” he outlines, such as how to communicate politicized science and navigate misinformation. He added that he hasn’t “seen anyone else discuss journalism in this way – and it is much needed.”
Mitigating Polarization with Responsive Technology
AI-powered social media algorithms also undoubtedly silo us. For this reason, in another project, I’m arguing for reform of Section 230 of the U.S. Communications Decency Act. This law absolves social media companies of liability for user-generated posts, despite the fact that their algorithms amplify harmful content, including scientific disinformation. Social media companies claim revising Section 230 would force them to censor posts, violating our right to free speech. I think this argument misses the forest for the trees.
We value free speech in democracies because it preserves our autonomy. Yet, much like censorship, social media algorithms also violate our autonomy, at least if we take the idea of social media addiction seriously. A key feature of the clinical definition of addiction is a loss of cognitive autonomy. Consequently, I argue democracy demands that we balance our right to free speech with remedying the public health concern of social media addiction – especially when it comes to children. How? By targeting how algorithms feed us information instead of censoring individual posts. In addition to fleshing out this project philosophically, including how responsiveness might apply here as well, I plan to work with social scientists and addiction scientists to investigate practically effective revisions to Section 230.
In the meantime, I’m already working with social scientists to survey public perceptions of generative AI, including its impact on the economy, environment, and public discourse. Informed by this study, I plan to find ways to minimize the harms and maximize the benefits that generative AI poses to democracy, especially as it relates to science and journalism. In particular, I’m interested in examining how generative AI may further fuel the ‘do your own research’ movement. Associated with distrust of traditional information sources, like scientists and journalists, this movement has had largely harmful effects on individuals and democracy.
Yet the idea of doing your own research implies thinking for oneself. This is another manifestation of the ideal of autonomy, which suggests the movement isn’t necessarily at odds with democracy. Whether using generative AI to do your own research is harmful or beneficial depends, at least in part, on how we design the technology. We can let sycophantic responses feed polarization, misinformation, and distrust, or we can design generative AI to combat these phenomena, including by cultivating in users responsiveness to others’ points of view. I plan to theoretically and empirically explore how the latter might be possible.