Algorithms are remaking our perceptions of reality and society, for better and worse. Nowhere is this more consequential than in the realm of human rights and political violence. In particular, both automated machine learning and human-in-the-loop decision systems offer the promise of an enhanced capability to understand, forecast, and potentially mitigate violence and abuses. However, there is a dark side to this digital cascade of computation. The same algorithms and data that researchers and human rights non-governmental organizations use to study protections and violations are crucial instruments of surveillance, repression, and control in autocratic states: not only ubiquitous cameras and GPS tracking, but also image, text, and network analysis tools. Moreover, computationally accelerated rights abuses are not limited to autocracies. Dataveillance, the use of machine learning on digital trace data to predict online and offline behavior, is a particular accelerant to polarization, distrust of experts, and extremism across the liberal world. Our research team has tracked, with machine learning, the increasing role of algorithms and computation in producing human rights abuses around the world, in both autocracies and democracies. Despite the rise in illiberal uses of technology, our understanding of dataveillance is extremely limited, because common misconceptions about computing and democracy lead to technical blind spots for social scientists and policy-makers, as well as social and policy blind spots for technicians. While the GDPR and other emerging regulations to protect privacy are one step toward limiting the danger of dataveillance in democracies, new human rights threats, from deepfakes to the metaverse, are on the horizon and will require novel frameworks to simultaneously measure and protect rights.
Michael Colaresi is a computational social scientist with a background in modeling information ecosystems and political violence, Bayesian statistics, machine learning, and text-as-data. He has worked on and led interdisciplinary teams of researchers from the social sciences, computer and information sciences, social work, public health, and engineering. He co-founded the Pitt Disinformation Lab and the interdisciplinary major in Computational Social Science at the University of Pittsburgh, and founded the Social Science Data Analytics Initiative at Michigan State University. He was co-recipient of the Best Visualization Award from the Journal of Peace Research in 2017 and the Gosnell Prize for Excellence in Political Methodology from the Methodology section of the American Political Science Association in 2006. He has been PI or co-PI on four NSF grants and is a research affiliate of the ERC-funded Violence Early Warning Project at Uppsala University. He is currently the research and academic director at the Institute for Cyber Law, Policy and Security, director of the Pitt Disinformation Lab, and an affiliate scholar in the Intelligent Systems Program.