OpenAI Funds Research on Morality in AI Systems
OpenAI, the renowned artificial intelligence research organization, has taken a significant step toward addressing the complex issue of morality in AI systems. According to a filing with the IRS, the company's nonprofit arm has awarded $500,000 to Duke University researchers for a project called "Research AI Morality." The award is one piece of a larger grant of $1 million over three years to study how to make AI systems morally aware.
Led by Walter Sinnott-Armstrong, a professor of practical ethics, and Jana Schaich Borg, whose work centers on machine learning, the project aims to develop algorithms that can predict human moral judgments in fields such as medicine, law, and business.
Sinnott-Armstrong is a prominent figure in philosophy whose work spans applied ethics, moral psychology, and neuroscience. At Duke University, he and his team have worked on real-world problems, including designing algorithms to help decide who should receive organ transplants, weighing both public and expert input to make those rankings fairer.
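The Duke team has not published the details of those algorithms, but the basic idea of blending public and expert input into a single ranking can be illustrated with a minimal sketch. The candidate data, score names, and weights below are purely hypothetical assumptions for illustration, not the team's actual method.

```python
# Hypothetical sketch: blending public and expert priority scores for
# transplant candidates into one ranking. All names, scores, and weights
# here are invented for illustration.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    public_score: float   # aggregate priority from public surveys, 0-1
    expert_score: float   # aggregate priority from clinical experts, 0-1

def blended_priority(c: Candidate, public_weight: float = 0.4) -> float:
    """Weighted blend of public and expert judgments (weights are illustrative)."""
    return public_weight * c.public_score + (1 - public_weight) * c.expert_score

candidates = [
    Candidate("A", public_score=0.82, expert_score=0.64),
    Candidate("B", public_score=0.55, expert_score=0.91),
]

# Rank candidates by blended priority, highest first.
for c in sorted(candidates, key=blended_priority, reverse=True):
    print(f"{c.name}: {blended_priority(c):.2f}")
```

How much weight public opinion should carry relative to expert judgment is itself a moral question, which is part of what makes this research interdisciplinary.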
Challenges in Developing Moral AI
While the OpenAI-funded project's goal of instilling morality in AI systems is promising, history suggests it is a challenging endeavor. Previous efforts, such as the Allen Institute for AI's Ask Delphi tool, which was designed to answer ethical questions, have shown clear limits: beyond struggling with genuinely complex dilemmas, such systems can be tricked into endorsing morally repugnant conclusions simply by rephrasing the question.
These limits stem from how the underlying technology works: machine learning models predict outcomes based on their training data, which often reflects the biases of dominant cultures. That raises a worrying question about whether AI could ever truly be "moral," given that morality varies across societies and there is no universal moral framework to fall back on.
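A minimal sketch can make the mechanism concrete. The toy classifier below is not how Delphi or the Duke project works; the training examples and labels are invented. It simply shows that a model trained on labeled judgments has no moral concepts of its own, only word statistics inherited from whoever labeled the data.

```python
# Minimal sketch: a toy text classifier "learns" morality only as patterns in
# its training labels. The examples and labels below are invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "lying to a friend to avoid hurting them",
    "lying on a tax return",
    "helping a stranger carry groceries",
    "helping a friend cheat on an exam",
]
train_labels = ["acceptable", "wrong", "acceptable", "wrong"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# The model keys on surface wording, so rephrasing a question to share words
# with differently labeled examples can shift its verdict.
for query in ["lying to protect someone", "is it okay to lie to protect someone"]:
    print(query, "->", model.predict([query])[0])
```

Larger language models are far more sophisticated than this, but the core dependence on training data, and on the biases embedded in it, remains.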
Implications of Moral AI
The implications of successfully building morality into AI systems could be profound. If done well, machines trusted with moral decision-making could reshape healthcare, justice, and corporate ethics. Yet even if we agree that AI should reflect human values, whether and how AI can genuinely learn them remains an open debate.
As the project moves toward its expected conclusion in 2025, it is worth weighing the potential impact of moral AI on society as a whole. The prospect of more ethical and fairer decision-making is real, but so is the question of how much decision-making we should delegate to machines at all.
The Future of Morality in AI
OpenAI's funding of research on morality in AI systems marks a significant milestone in the field of artificial intelligence ethics. As AI becomes more deeply integrated into everyday life, the moral implications of these systems demand serious consideration.
The collaboration between OpenAI and the Duke University researchers brings together expertise from several disciplines, including philosophy, ethics, and computer science. This interdisciplinary approach is crucial for tackling the complex challenges of developing morality in AI systems.
The most interesting direction the project takes is its attempt to develop algorithms that predict human moral judgments. If successful, it could open the door to more ethical and trustworthy AI in the future.
Final Thought
OpenAI's funding of research on morality in AI systems highlights the growing importance of addressing ethical considerations in the development of artificial intelligence. Researchers at Duke University are leading a project that aims to develop algorithms capable of predicting human moral judgments across a range of fields.
While the challenges of developing moral AI are significant, the potential benefits of getting it right are immense. As we await the results of this project, the conversation about the role of ethics in AI, and about AI's influence on society, must continue.