Minimizing the Pitfalls of Artificial Intelligence

Artificial intelligence (AI) technology makes modern life much easier—from tailoring the ads we see when we surf the web to allowing Siri or Alexa to turn on the lights before we get home from work.

But this growing automation also poses ethical problems, including discriminatory facial recognition, disinformation fueled by algorithms that chase attention and controversy, and built-in biases that affect everything from the jobs we get to the medical treatments we receive.


At the Computing Community Consortium (CCC) Council, an arm of the Computing Research Association, Suresh Venkatasubramanian, a professor in the University of Utah’s School of Computing, has been working to figure out how to resolve AI’s drawbacks, together with colleagues Nadya Bliss, director of the Global Security Initiative at Arizona State University; Melanie Moses, a professor in the University of New Mexico’s Department of Computer Science; and Helen Nissenbaum, a professor of information science at Cornell Tech.

“We know why this story repeats itself,” says Venkatasubramanian. “It’s because AI systems fail to recognize that they affect people and society, and that people don’t—and shouldn’t—behave like compliant numbers in a database.”

In a recent paper exploring AI’s impact on society, the team argues that computer science and the social sciences need to work closely together to understand the effect that AI, and automation broadly, is having on society.

“We need better social science education for computer scientists, more financial and institutional support for joint research efforts, and a re-awakening of a ‘critical instinct’ among technologists, so that we don’t just ask what we should build and how, but why we are building it and whether we should,” Venkatasubramanian adds.

The white paper is part of a series of position statements compiled every four years by the CCC Council and members of the computing research community to inform policymakers and the public about research opportunities. Topics chosen represent areas of pressing national need.

Venkatasubramanian is working with Matt Haber, a professor in the College of Humanities, to develop a certificate that combines computing, ethics and social science elements. The Tanner Center for Human Rights also works to facilitate such cross-disciplinary discussions.



Using Data Science as a Force for Good: Utah Informatics Initiative (UI2) Hosts Inaugural Symposium on Public Interest Technology

University President Ruth Watkins, Senior Vice President for Academic Affairs Dan Reed, UT-Austin Dean Eric Meyer and others discuss Public Interest Technology in a Zoom webinar.

By Rebecca Walsh

Communications Manager, University of Utah

Is the University of Utah already a Public Interest Technology university?

The first Utah Informatics Initiative (UI2) fall semester symposium suggests it might be. A virtual gathering Oct. 28 highlighted the work of several U researchers focused on data-based solutions to challenges facing the communities around the university, including:

  • The AQ&U (Air Quality and U) community-engaged environmental sensor network, managed as part of Chemical Engineering Assistant Professor Kerry Kelly’s air quality research
  • Research about the social impact of machine learning and algorithmic fairness from Suresh Venkatasubramanian, a professor in the university’s School of Computing
  • Study of regional climate issues and the degradation of silt crusts around the Great Salt Lake and the resulting effect on air quality in the Salt Lake Valley, from researchers in the Department of Atmospheric Sciences, including John Horel, department chair
  • U-Smart (the Utah Smart Energy Laboratory) research into the next generation of resilient and sustainable power and energy systems, led by Electrical and Computer Engineering Associate Professor Masood Parvania

Public Interest Technology (PIT) work is, at its core, about using data to make better public policy, said keynote speaker Eric Meyer, dean of the University of Texas at Austin’s School of Information. The term, coined by the Ford Foundation and New America, is meant to capture the study and application of technology to advance solutions to community problems, particularly for marginalized groups. The University of Texas at Austin is part of the Public Interest Technology University Network, a group of colleges and universities that banded together in 2019 in an academic initiative committed to advancing the ethical use of technology.

“There’s data everywhere. We all know that,” Meyer said. “But these data have policy implications and political consequences. How do you use data to make better public policy, to change the ways that policy decisions are made?”

To qualify for the new network, a university needs to meet some basic criteria, Meyer said: “As long as you’ve got a unit that thinks inter-disciplinarily, that thinks about public problems and wants to be able to apply technology to solve them, and that is willing to reach out to nonstandard partners to move that forward, you’re well on your way.”

Dan Reed, senior vice president for Academic Affairs, noted the structural obstacles to creating a PIT-focused campus. “These issues are complex, and they require a 360-degree perspective and collaborative forces that span colleges and departments,” he said.

UI2 Director Mike Kirby believes the Informatics Initiative could provide that perspective and collaboration, bringing together university researchers who already are doing PIT-related work, but just haven’t adopted the title. The Informatics Initiative was established by Reed in 2018 as an effort to build on the university’s existing education, research and workforce development strengths in data science. The initiative is funded using annual performance-based funding from the Utah Legislature.

“We are here to enhance inter-disciplinary and cross-disciplinary research and curriculum development efforts at the intersection of technology and policy so U graduates will be prepared to critically assess the ethical, political, and societal implications of new technologies, and then design technologies in service of the public good,” Kirby said. “We believe much of this work is already being done by researchers and educators at the University of Utah; we just need to break down the barriers that keep us working in individual silos.”

Venkatasubramanian noted that students are already ahead in thinking about data science as an inter-disciplinary force for good in the world.

“Students are naturally talking about broad topics—taking courses in sociology, philosophy, the humanities,” Venkatasubramanian said. “What makes it hard for them is working through the logistics. But there’s a lot of enthusiasm for a more eclectic way of thinking about STEM fields and how they apply to the world around us.”

For more information about UI2, contact executive director and professor Mike Kirby.