Studying the decisions of artificial intelligence

Philosopher Ben Levinstein works to understand modern rationality
Ben Levinstein has received a Mellon grant to study artificial intelligence and how it makes decisions. (Stock image.)

What does it mean to make a rational decision? 

As defined by centuries of philosophical thought, a rational decision is the action that can be expected to produce better outcomes than any alternative. This is how humans define it, anyway. What about artificial intelligence?
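
The standard formalization of this idea is expected utility theory: weight the value of each possible outcome by its probability, and pick the action whose weighted average is highest. A minimal sketch in Python, using an invented umbrella scenario (the probabilities and utilities here are made up for illustration, not drawn from Levinstein's work):

```python
# A minimal sketch of expected-utility decision making.
# The actions, states, probabilities, and utilities below are
# invented for illustration.

# Probability of each possible state of the world
states = {"rain": 0.3, "no_rain": 0.7}

# Utility of each action in each state
utilities = {
    "take_umbrella": {"rain": 5, "no_rain": 3},
    "leave_umbrella": {"rain": -10, "no_rain": 5},
}

def expected_utility(action):
    """Probability-weighted average utility of an action."""
    return sum(p * utilities[action][s] for s, p in states.items())

# The rational choice is the action with the highest expected utility.
for action in utilities:
    print(f"{action}: EU = {expected_utility(action):.2f}")
print("rational choice:", max(utilities, key=expected_utility))
```

Here taking the umbrella wins (expected utility 3.6 versus 0.5), because the small everyday cost of carrying it outweighs the risk of getting soaked.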

Ben Levinstein, a professor of philosophy at U of I, specializes in epistemology, which deals with the theory of knowledge: how we come to learn, reason, and know things. It’s usually studied by philosophers and economists, but it has implications in all facets of life. People everywhere are trying to make rational decisions.

As more decisions are being made by computers, however, Levinstein, who was recently awarded a Mellon grant to study artificial intelligence, got to thinking: artificial intelligence has the potential to change what it means to make a rational decision. Unlike with human decision-making, we can't always explain why a computer's decision is rational.

“When a person makes a decision, you think through it and decide based on your own rules, values, and morals. You can kind of think of yourself as implementing a decision procedure or an algorithm, like a computer,” he said. With a computer, however, we do not always have a complete understanding of why it made the decision it did.

"You kind of just have faith. Justified faith, but you cant really explain why it did what it did,” Levinstein said. However, he added, computers can still be wrong, and sometimes you cant even explain to a person how or why it was wrong.”

For example, Levinstein mentioned in his grant proposal the case of the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an artificial intelligence system used in some U.S. courtrooms to gauge the likelihood of an offender committing a future crime. The system was called into question in a 2016 report that revealed it was systematically rating Black Americans as more likely to commit future crimes than comparable white Americans.

While the system didn't explicitly use data involving race in its predictions, Levinstein wrote, it used factors correlated with race, such as whether an offender's parents had been to jail.

"Determining whether an algorithm is unfair is not a purely ethical question," he wrote. "It also depends on the nuts and bolts of the algorithm and data set themselves."

That's why, for fuller ethical consideration of these issues, it's important for philosophers to learn some computer science, Levinstein said. Under the Mellon grant, he will, among other things, work with and learn from computer science faculty at U of I and, in the fall, visit a computer science lab at Carnegie Mellon that focuses on cooperative artificial intelligence. He wants to combine his insights on decision theory with knowledge of machine learning and computer science to better understand the computerized decisions that are guiding our lives.

“We’ve been at it for a while in philosophy, but over in computer science and machine learning and artificial intelligence, they’re literally teaching computers how to learn,” Levinstein said. “This is important for epistemologists to know about because the theory of knowledge is being directly put into practice.”

The Mellon grant may be new, but it's an issue Levinstein has thought about for a long time. As an undergraduate, he was interested in both philosophy and math. He ultimately chose to pursue philosophy, but he remained interested in how mathematics and related fields could be used to address philosophical issues. Later, as a postdoctoral researcher, he tackled questions about how artificial intelligence is advancing and starting to play a bigger role in our lives. Now, as a faculty member, he's pursuing the topic even further.

“When I saw this grant come about, I thought it was a good opportunity to at least kind of get a new angle on some of these problems that I thought were underexplored in philosophy,” Levinstein said.

By doing so, he thinks that he might be able to bring a little more human understanding to the workings of artificial minds.

“I thought, to be able to comment intelligently on algorithms and artificial intelligence, I need to understand how these algorithms really work and think about how they should be used and what fairness would mean for them,” Levinstein said. “There’s this whole new area in AI and machine learning, and what role AI should play in our lives and in society, that philosophers need to know more about, and we should get more training in.”

News Source

Allison Winans
