Our Better Web is an interdisciplinary, independent initiative at UC Berkeley that brings together leadership from the Schools of Information; Journalism; Law; and Public Policy; the Division of Computing, Data Science, and Society; and the CITRIS Policy Lab.
Composed of interdisciplinary experts who hold diverse and differing perspectives, Our Better Web seeks to address the sharp rise of online harms such as disinformation, threats to child safety, and algorithmic bias through research, policy analysis, training, and engagement.
Our team has the expertise and experience to translate our work into real change. Members of our team have argued before the United States Supreme Court on free speech issues, testified to Congress on platform content moderation, won the Pulitzer Prize for reporting on corporate corruption, overseen a federal executive department, and led innovation labs at a major tech company.
Empirical, independent research on the prevalence and effects of online harms is critical for the development of effective technology and policy strategies. We draw upon our diverse disciplinary perspectives and collaborations with impacted stakeholders to develop innovative research questions and methodological approaches that cut across traditional boundaries.
As online harms become better understood, state and federal legislative and regulatory proposals to mitigate them are becoming more prevalent. We conduct robust policy analysis of emerging legislative and regulatory proposals and provide recommendations, often informed by our empirical research, to key decision-makers in the public and private sectors.
Our Better Web provides training to diverse practitioners in the public, private, and nonprofit sectors. We offer executive education programs on emerging platform governance topics, such as technology strategies to counter harmful content online; in-depth guidance on Section 230 and its implications for platforms, including effects of proposed reforms; and the benefits and risks of emerging domestic and international legislative and regulatory approaches to platform governance. We also offer training for students enrolled at UC Berkeley, including research opportunities and multimedia journalism training.
Our Better Web shares its research, policy analysis, and journalism with policymakers, decision-makers, and the public. We engage with private and public sector stakeholders through consultation, high-level convenings, and the publication of peer-reviewed research articles, reports, journalistic articles, and op-eds.
Our Better Web targets its empirical research, policy analysis, training, and engagement in high-risk issue areas, such as algorithmic amplification and algorithmic bias, child safety, disinformation, platform transparency, and privacy.
Every day, billions of individuals upload and view content online. Recommendation algorithms curate what we read, see, hear, and—ultimately—believe. We seek to understand how these recommendation algorithms lead to echo chambers, radicalization, and the spread of disinformation, while identifying effective strategies to align algorithmic amplification with prosocial goals.
Algorithms can produce unfair outcomes, reinforcing and entrenching biases when they rely on unrepresentative or incomplete data or on assumptions that reflect historical inequalities. We seek to understand how algorithmic bias manifests itself online, including in algorithms that drive recommender systems, content moderation, and online service provision, and to identify appropriate technology and policy strategies to mitigate algorithms’ disparate effects.
Each year, hundreds of millions of images and videos of child sexual abuse circulate online. The majority of children in these materials are prepubescent, and many of them are infants and toddlers. Children are also increasingly exposed to unsolicited sexual advances and sexual content online. We seek to develop technologies and policies to protect children, both online and offline.
The Internet promised to democratize access to knowledge and make the world more open and understanding. The reality of today’s Internet, however, is far from this ideal. Disinformation, lies, and conspiracies dominate many social media platforms. This toxic online world has had real-world implications—from genocide to election interference and threats to global public health. We seek to understand the source and spread of disinformation and develop techniques and policies to mitigate the impact of disinformation campaigns, while ensuring an open and free exchange of ideas.
As large platforms take on increasingly influential roles in our online social, economic, and political interactions, there is a growing demand for greater transparency and accountability. We seek to identify appropriate legislative and regulatory strategies to increase platform transparency and accountability while ensuring rights such as privacy and free speech are upheld.
As the Internet becomes more pervasive in our everyday lives, the amount and specificity of data collected poses serious risks to privacy and associated rights (e.g., freedom of speech). We seek to understand emerging privacy concerns posed by the Internet and new technological advancements, such as Web3 and virtual and augmented reality environments. We also seek to understand the effects of nascent data protection and privacy legislation, such as the EU General Data Protection Regulation and the California Consumer Privacy Act.
Journalists serve a critical role in uncovering and raising awareness of emerging online harms. We have co-developed an interdisciplinary, solutions-driven journalism course, J276 Digital Accountability, that trains graduate students from UC Berkeley’s Schools of Journalism, Information, Law, and Public Policy in multimedia journalism. J276 Digital Accountability was first offered in spring 2022 and focused on the role Section 230 of the Communications Decency Act has played in the rise of mis- and disinformation, as well as proposed solutions to address the problem. The course was taught by audio journalists Queena Kim and Aaron Glantz. Under the instructors’ supervision and editorial authority, teams of students produced a series of audio pieces, including four-minute segments for National Public Radio.
Danielle Elliott (Berkeley Law) and Celina Avalos Jaramillo (Goldman School of Public Policy) share their investigation into Spanish-language disinformation and Section 230 of the Communications Decency Act.
In collaboration with Our Better Web, the CITRIS Policy Lab maintains a downloadable database of all federal legislation that seeks to amend, repeal, or reform Section 230 of the Communications Decency Act.
Brandie Nonnecke is the Director of Our Better Web and the Founding Director of UC Berkeley’s CITRIS Policy Lab, where she supports interdisciplinary tech policy research and engagement. She is a Technology and Human Rights Fellow at the Carr Center for Human Rights Policy at the Harvard Kennedy School and a Fellow at the Schmidt Futures International Strategy Forum.
Geeta Anand serves as Dean and Professor at Berkeley Journalism. She is a Pulitzer Prize-winning journalist and author who worked for 27 years as an investigative reporter, senior writer, and foreign correspondent for The Wall Street Journal and The New York Times.
Erwin Chemerinsky is Dean of Berkeley Law. He is the author of 14 books, including leading casebooks and treatises on constitutional law, criminal procedure, and federal jurisdiction. He frequently argues appellate cases, including before the U.S. Supreme Court, and is a Fellow of the American Academy of Arts and Sciences and President of the Association of American Law Schools.
Hany Farid is a Professor in Electrical Engineering & Computer Sciences and the School of Information. He serves as Senior Faculty Advisor at the Center for Long-Term Cybersecurity and at the CITRIS Policy Lab at UC Berkeley. He is a leading expert in image analysis, digital forensics, forensic science, misinformation, and the intersection of technology and society, particularly as it pertains to online harms.
Janet Napolitano is a Professor of Public Policy and Director of the Center for Security in Politics. As Secretary of Homeland Security from 2009 to 2013, she led the nation’s efforts to prevent terrorist attacks, secure its borders, respond to natural disasters, and build domestic resiliency. A distinguished public servant, she served as President of the University of California from 2013 to 2020 and as Governor of Arizona from 2003 to 2009.