
A Human-Centered Approach to Artificial Intelligence

Stuart Russell's Institute for Practical Ethics 2020 keynote address advocates for cross-disciplinary development of groundbreaking technology

The UC San Diego Institute for Practical Ethics presented a new model for artificial intelligence technology Dec. 3, virtually hosting famed AI expert Stuart Russell as their third annual keynote speaker. Russell, former vice-chair of the World Economic Forum’s Global Agenda Council on AI and Robotics, advocated for artificial intelligence that takes a human-centered approach, one with the capacity to lift the living standards of everyone on Earth.

“How does a machine take actions in the service of our objectives when it doesn’t even know what they are?” said Russell, the Smith-Zadeh Chair in Engineering at UC Berkeley and founder of that university’s Center for Human-Compatible Artificial Intelligence. “That’s a puzzle, but it’s a solvable puzzle.”

Broadly defined, artificial intelligence is the development of computers to perform tasks that are traditionally completed by humans. Some of the most outrageous examples include cyborg-style robots, but AI already encompasses many aspects of daily life: speech and face recognition, social media algorithms and even password protection on the websites we log into every day.

But the same technological advances that led to these uses may have much bigger implications in the near future, as investment and interest in artificial intelligence grow. Will self-driving cars be safe? Can language barriers be more easily overcome for learning? Should governments expand the use of autonomous weapons?

Relying heavily on a humanistic approach

Russell explained his new model, which he calls “provably beneficial AI,” as grounded in three informal principles: the machine’s only objective is to satisfy human preferences; the machine does not know what those preferences are, an uncertainty he said keeps humans in control; and human behavior, through active choice, provides evidence of what those preferences are and will become.

Designers then build these three principles into development, allowing machines to behave very differently from those built on the traditional, standard model of artificial intelligence known today, in which human preferences play no part.

Consider the example of the self-driving car: a passenger tells the car to take them to the airport and, under the standard model, the car will attempt to achieve this objective at any cost, including not allowing itself to be “turned off” because, Russell explained, this would mean the machine had failed the task.

“In the new model, the thinking goes in a quite different way,” he said: the machine knows it may be turned off if it does something wrong, but it doesn’t know what “wrong” means, and therefore relies on the user to teach it. Ideally, the new model leads machines, robots or algorithms to automatically defer to humans, ask permission before taking action, be “minimally invasive” and empower users by providing more choices.

“With this model, the better the AI, the better the outcome because it’s going to be better able to infer your preferences and better able to satisfy those preferences,” he said.
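The deference behavior Russell describes can be illustrated in miniature. The sketch below is hypothetical and not Russell's actual formulation: a toy assistant keeps a probability distribution over candidate hypotheses about what the human wants, and asks permission whenever its best action could still be bad under some plausible hypothesis. The class name `DeferringAssistant`, the hypothesis labels and the utility numbers are all invented for illustration.

```python
class DeferringAssistant:
    """Toy illustration (not Russell's formal model): an agent that is
    uncertain about the human's preferences and defers when that
    uncertainty makes its best action risky."""

    def __init__(self, candidate_utilities, threshold=0.9):
        # Uniform prior over hypotheses about what the human wants.
        self.utilities = candidate_utilities
        self.beliefs = {name: 1 / len(candidate_utilities)
                        for name in candidate_utilities}
        self.threshold = threshold

    def expected_utility(self, action):
        # Average the action's utility over all preference hypotheses.
        return sum(p * self.utilities[h].get(action, 0.0)
                   for h, p in self.beliefs.items())

    def choose(self, actions):
        # Pick the action with the best expected utility, but defer to
        # the human unless that action is acceptable under every
        # plausible hypothesis about their preferences.
        best = max(actions, key=self.expected_utility)
        worst_case = min(self.utilities[h].get(best, 0.0)
                         for h in self.beliefs)
        if worst_case < self.threshold * self.expected_utility(best):
            return ("ask_human", best)  # defer: uncertainty is too high
        return ("act", best)


# Two invented hypotheses about an airport passenger's preferences.
utilities = {
    "speed_matters":   {"highway": 1.0, "scenic": 0.2},
    "comfort_matters": {"highway": 0.4, "scenic": 0.9},
}
car = DeferringAssistant(utilities)
print(car.choose(["highway", "scenic"]))  # → ('ask_human', 'highway')
```

Because the two hypotheses disagree sharply about the highway route, the agent asks the human rather than acting, which mirrors the "ask permission before taking action" behavior described above; as its beliefs sharpen with evidence of the user's preferences, it defers less.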


Watch the recorded livestream:


Stuart Russell received his bachelor's degree with first-class honours in physics from Oxford University in 1982 and his Ph.D. in computer science from Stanford University in 1986. He then joined the faculty of the University of California at Berkeley, where he is professor (and formerly chair) of Electrical Engineering and Computer Sciences and holder of the Smith-Zadeh Chair in Engineering. He is also an adjunct professor of neurological surgery at UC San Francisco and vice-chair of the World Economic Forum's Council on AI and Robotics.

Russell is a recipient of the Presidential Young Investigator Award of the National Science Foundation, the IJCAI Computers and Thought Award, the World Technology Award (Policy category), the Mitchell Prize of the American Statistical Association and the International Society for Bayesian Analysis, the ACM Karlstrom Outstanding Educator Award, and the AAAI/EAAI Outstanding Educator Award. In 1998, he gave the Forsythe Memorial Lectures at Stanford University, and from 2012 to 2014 he held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery and the American Association for the Advancement of Science.

His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring and philosophical foundations. His books include "The Use of Knowledge in Analogy and Induction," "Do the Right Thing: Studies in Limited Rationality" (with Eric Wefald) and "Artificial Intelligence: A Modern Approach" (with Peter Norvig). His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

2019: Changing Strategies to Save Nature

Emma Marris

During UC San Diego’s official Earth Month celebrations, the university’s Institute for Practical Ethics welcomed environmental journalist and author Emma Marris for a unique and optimistic talk — one where “rewilding” is a reality, assisted migration is possible and the romantic notion of pristine wilderness is tossed out.

Marris is the author of “Rambunctious Garden: Saving Nature in a Post-Wild World” and gave the second address in the institute’s annual keynote series on Wednesday, April 24, 2019.

“Bringing guest speakers to campus is an important way for the Institute for Practical Ethics to introduce to the community new and groundbreaking ideas about ethics and science that matter to society,” said Craig Callender, co-director of the institute. Callender uses Marris’ “Rambunctious Garden” in his undergraduate course “Philosophy and the Environment,” and Marris answered questions prepared by that quarter’s students after her public talk.

“Marris’ insights are provocative, challenging the way most of us think about environmental conservation, and I’m sure she will be well received both by students and the greater San Diego community,” he said.

Watch the full presentation, produced by University of California Television.


2018: Should We Bring Back the Woolly Mammoth?

As scientists get closer and closer to being able to bring extinct animals back to life, big questions emerge. What led to extinction in the first place? What would be the impacts on other species or the environment? Just because we can do it, does that mean we should?

To help answer these questions and celebrate the inaugural year of the UC San Diego Institute for Practical Ethics, guest speaker Beth Shapiro, a world-renowned professor of ecology and evolutionary biology at UC Santa Cruz, spoke to a packed house of researchers and students from across campus and the greater community on April 19, 2018.

The talk was organized by the Institute for Practical Ethics, whose overall purpose is to promote research and multidisciplinary discussion about the ethics of science, technology and medicine. Co-directors John Evans and Craig Callender said having Shapiro as their guest speaker was the perfect example of the mission and impact of the institute.

Shapiro said there will “probably” be an elephant born one day that has some form of mammoth DNA.

“But isn’t it great,” she said, “that we can have all of these conversations — talk about what we should do and could do, and how to regulate it, and who should own it … and what our moral authority is to do any of this — before that technology exists? And that’s why institutes like this have such an amazing place in society today.”

Watch the full presentation, produced by University of California Television.


Updated Jan. 11, 2021