When will science create ethical robots?

Is it possible for robots to have a bias? If robots attempt to imitate human behavior, isn’t it possible that they could make unethical choices — just as people can make unethical choices?

Your bed can’t move itself between rooms at the request of a wireless phone. Your toaster can’t pour you a bottle of water. Your garage door doesn’t wash itself. The bed, the toaster and the garage door each perform a specific function well, the function we need, nothing more and nothing less.

But what if, on Monday, your bed sensed that you should be at the gym at 9 a.m. and vibrated to force you to get up? Then the toaster didn’t turn on because it decided that you didn’t need the extra carbs in that bagel. It was helping you. And maybe you had been doing a lot of traveling, and the garage door knew that the ergonomics of traveling too much would be bad for your spine, so it didn’t open when you got in your car. Welcome to the world of smart A.I.

Can A.I. machines, agents and robots be too smart? Just because we could design a machine to be intelligent doesn’t mean that we should.

Robots attempt to imitate human behavior. So isn’t it logical that, if ethical people can make unethical choices, ethical robots could make unethical choices too?

The moral compass of machines

Humans have morality: guiding principles that help us distinguish between right and wrong, good and bad behavior. The concept is rooted in ethics, the branch of philosophy that examines right and wrong conduct through ideas such as justice, virtue and duty.

When we think about our car, we might be interested in fuel economy. When we reflect on our health, topics like comfort and lifestyle come to mind. And when our thoughts turn to nature, we may think about natural selection and survival of the fittest.

Pondering morality and virtue lands us quickly in the world of consequentialism, the doctrine that the morality of an action is to be judged solely by its consequences. But actions can have multiple, conflicting outcomes. If we as humans have trouble making these decisions, how are we going to program machines to make them? Utilitarianism could be one answer. We have more than one choice when deciding how we design machine intelligence, as the list below shows (a minimal code sketch follows it).

  • Consequentialism: whether an act is morally right depends only on its consequences.
  • Actual consequentialism: adds that moral rightness depends on the actual consequences, not merely the foreseen or intended ones.
  • Direct consequentialism: judges an act by the consequences of that act itself, rather than by the consequences of a motive or rule.
  • Evaluative consequentialism: ties moral rightness to the value of the consequences.
  • Hedonism: values the consequences only by the pleasures and pains they contain.
  • Maximizing consequentialism: moral rightness depends on which consequences are best, not merely satisfactory.
  • Aggregative consequentialism: treats the value of a consequence as a function of the values of its parts.
  • Total consequentialism: assesses moral rightness based on the total or net good of the consequences.
  • Universal consequentialism: assesses moral rightness by the consequences for all people involved, not just the agent.
  • Equal consideration: benefits or harms to one party count the same as similar benefits or harms to any other.
  • Agent-neutrality: moral rightness does not depend on whether the consequences are evaluated from the perspective of the agent or an observer; every agent has the same aim of maximizing utility.
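
To make those abstractions concrete, here is a minimal, hypothetical Python sketch of one combination of the variants above: maximizing, total consequentialism with equal consideration, where each candidate action is scored by summing the predicted utility of its consequences across every affected party and the best total wins. The Action class, the parties and the utility values are illustrative assumptions, not part of any real robotics system.

```python
# A hypothetical sketch, not a real robotics API: score each action by the
# total utility of its predicted consequences across all affected parties
# (total consequentialism with equal consideration), then pick the best
# (maximizing consequentialism).
from dataclasses import dataclass


@dataclass
class Action:
    name: str
    # Predicted utility of the consequences for each affected party;
    # positive values are benefits, negative values are harms.
    consequences: dict[str, float]


def total_utility(action: Action) -> float:
    """Sum the consequences with equal weight for every party (agent-neutral)."""
    return sum(action.consequences.values())


def choose_action(actions: list[Action]) -> Action:
    """Pick the action whose net outcome is best."""
    return max(actions, key=total_utility)


if __name__ == "__main__":
    options = [
        Action("open the garage door", {"owner": 2.0, "owner's spine": -0.5}),
        Action("keep the garage door shut", {"owner": -3.0, "owner's spine": 1.0}),
    ]
    best = choose_action(options)
    print(f"Chosen action: {best.name} (net utility {total_utility(best):+.1f})")
```

Even this toy version exposes the hard part: someone still has to decide whose utilities count and what the numbers are, which is exactly where programming morality gets complicated.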

Let’s just quickly program morality into the machine and get on our way. It turns out that programming morality is complex, even before we get to evaluating the outcomes that machine intelligence or robots actually produce.

Linking machine intelligence to ethical philosophy

Roboethics, or robot ethics, concerns how we as human beings design, construct and interact with artificially intelligent beings. Roboethics can be loosely categorized into three main areas:

  1. Surveillance: the ability to sense, process and record; access; direct surveillance; sensors and processors; a magnified capacity to observe; security, voyeurism and marketing.
  2. Access: new points of access; entrance into previously protected spaces; access information about space (physical, digital, virtual); objects in rooms, not files in a computer, e.g. micro-drones the size of a fly.
  3. Social: new social meaning from interactions with robots that implicate privacy flows; changing the sensation of being observed or evaluated.

Robots do not understand embarrassment. They don’t feel fear, they are tireless and they have perfect memories. Designing robots that spy, whether on your back porch or while your car is parked, raises the question of how surveillance, access and social ethical considerations will be addressed as we further develop algorithms that assist humans.

We’ve heard about machine intelligence agents that would use ubiquitous wireless access to charge our mobile phones autonomously. We’ve fantasized about eating pancakes in bed while robots serve us (or maybe that was just me). There have been a lot of technological advances since George Orwell’s 1984 and its ramblings about the risk of visible drones patrolling cities. Or we could reject the Big Brother theory altogether and join the vision of Daniel Solove, in which we live in an uncertain world where we don’t know whether the information collected is helping or hurting us.

The First Amendment seems like a logical safeguard. But how do we balance excessive surveillance against progress without violating the First Amendment’s prohibition on interference with speech and assembly?

As we answer a question, three more rise to the surface.

Where is machine learning being used?

How much sensitivity do we design into machine-intelligent beings? How much feeling should we architect into an armed drone? Should the ethical boundaries change if we’re simply designing a robotic vacuum cleaner that can climb walls? Where do we draw the line between morality and objectives? You’d better cook my toast today. But tomorrow, I’m OK if the refrigerator is locked shut because I have exceeded my caloric intake for the day.
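
One way to read that line between objectives and morality is consent: the machine enforces a constraint only when its owner has explicitly opted in. The sketch below is purely illustrative; the SmartFridge class, the calorie limit and the opt-in flag are assumptions, not any real appliance API.

```python
# A hypothetical sketch of keeping the machine out of moral decisions unless
# its owner has opted in. The class, fields and thresholds are illustrative
# assumptions, not a real product interface.
from dataclasses import dataclass


@dataclass
class SmartFridge:
    daily_calorie_limit: int
    calories_consumed_today: int
    owner_opted_in: bool  # the owner chose the restriction; the machine did not


    def allow_access(self) -> bool:
        """Lock the door only if the owner asked for the limit and has exceeded it."""
        if not self.owner_opted_in:
            return True  # objective only: keep food cold, make no moral judgments
        return self.calories_consumed_today < self.daily_calorie_limit


if __name__ == "__main__":
    fridge = SmartFridge(daily_calorie_limit=2000,
                         calories_consumed_today=2150,
                         owner_opted_in=True)
    print("Door opens" if fridge.allow_access() else "Door stays locked")
```

The design choice is that the machine never invents its own moral rule; it only applies the one its owner chose.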

Society, ethics and technology will wrestle with the integration of rights and moral divisions over the next 10 years. Who designs the rules, processes and procedures for autonomous agents? That question remains unanswered.

Peter B. Nichol empowers organizations to think different for different results. You can follow Peter on Twitter or his personal blog Leaders Need Pancakes or CIO.com. Peter can be reached at pnichol [dot] spamarrest.com.

Peter is the author of Learning Intelligence: Expand Thinking. Absorb Alternative. Unlock Possibilities (2017), which Marshall Goldsmith, author of the New York Times No. 1 bestseller Triggers, calls “a must-read for any leader wanting to compete in the innovation-powered landscape of today.”

Peter also authored The Power of Blockchain for Healthcare: How Blockchain Will Ignite The Future of Healthcare (2017), the first book to explore the vast opportunities for blockchain to transform the patient experience.
