
Mathematical Formula Tackles Complex Moral Decision-Making in AI

by janrinok from SoylentNews on (#657V9)

hubie writes:

Mathematical Formula Tackles Complex Moral Decision-Making in AI:

An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or "carebots" used in healthcare settings.

[...] "For example, let's say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?

"Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making," Dubljevi says. "Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies."

[...] To address the complexity of moral decision-making, the researchers developed a mathematical formula and a related series of decision trees that can be incorporated into AI programs. These tools draw on something called the Agent, Deed, and Consequence (ADC) Model, which was developed by Dubljević and colleagues to reflect how people make complex ethical decisions in the real world.
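To give a rough sense of the idea, here is a minimal sketch of an ADC-style evaluation in Python. It assumes each of the three dimensions (Agent's intentions, the Deed itself, and the Consequences) is scored on a simple scale and combined with a weighted average; the scores, weights, and combination rule below are illustrative assumptions, not the formula or decision trees published by the researchers.

```python
# Illustrative ADC-style scoring sketch -- NOT the paper's actual formula.
# Assumption: each dimension is rated in [-1, 1] and combined by a weighted
# average; the weights and example values are hypothetical.

from dataclasses import dataclass


@dataclass
class ADCEvaluation:
    agent: float        # moral valence of the agent's intentions, in [-1, 1]
    deed: float         # moral valence of the action itself, in [-1, 1]
    consequence: float  # moral valence of the expected outcome, in [-1, 1]


def moral_score(e: ADCEvaluation,
                w_agent: float = 1.0,
                w_deed: float = 1.0,
                w_consequence: float = 1.0) -> float:
    """Combine the three ADC dimensions into one score (hypothetical weighting)."""
    total = w_agent + w_deed + w_consequence
    return (w_agent * e.agent + w_deed * e.deed + w_consequence * e.consequence) / total


# Toy version of the carebot scenario: treat the unconscious patient first
# (large benefit, no consent) or the demanding but less urgent patient.
options = {
    "treat unconscious patient": ADCEvaluation(agent=0.8, deed=0.3, consequence=0.9),
    "treat demanding patient":   ADCEvaluation(agent=0.8, deed=0.6, consequence=0.2),
}

best = max(options, key=lambda name: moral_score(options[name]))
print(best)
```

In practice the researchers pair their formula with decision trees rather than a single weighted sum, so this sketch only illustrates the general shape of scoring actions along the three ADC dimensions.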

[...] "With the rise of AI and robotics technologies, society needs such collaborative efforts between ethicists and engineers. Our future depends on it."

Journal Reference:
Michael Pflanzer, Zachary Traylor, Joseph B. Lyons, et al. Ethics in human-AI teaming: principles and perspectives [open]. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00214-z

