Character Education in a World of Artificial Intelligence
Even though there are many ways to define character education, for the moment let’s assume that Wikipedia provides a reasonable starting point: “…an umbrella term loosely used to describe the teaching of children in a manner that will help them develop variously as moral, civic, good, mannered, behaved, non-bullying, healthy, critical, successful, traditional, compliant or socially acceptable beings.” While there is plenty in this definition to inspire healthy debate, my primary concern with it is this: it assumes character development only applies to human beings.
The reality is that we are now infusing artificial intelligence into most things that we make. The more complex our machines become, the more their decisions begin to look like ethical judgments and, ultimately, expressions of character. Consider the case of self-driving cars that I pose to my media psychology graduate students.
Cars and Character Education
Imagine you are driving down the highway in the family SUV, your two children and the dog in the back seat. Suddenly, a deer jumps out in front of your car. You can:
1) Jump the curb, hoping you don’t hurt your passengers or the two people walking their dog on the sidewalk.
2) Hit the deer, knowing that doing so would probably injure or maybe even kill you (not to mention the deer), your passengers, and anyone in the cars behind you who swerves to avoid the accident.
3) Cross into oncoming traffic and take a chance you can outmaneuver all the cars headed straight for you.
A decision needs to be made in a split second.
And, oh yes, you aren’t driving. You are in an autonomous SUV, which means that your car will need to decide. Even if your car has some kind of override that allows you to take control of the vehicle, events are happening too fast. You have no choice but to let your car make the decision while you hope for the best. Essentially, your car is going to have to decide who is most valued in this scenario, and therefore who should be put most at risk.
This is not a contrived situation, particularly in light of recent fatalities involving autonomous vehicles. Tech ethicists are already trying to unravel quandaries like this as AI permeates daily living. In many ways, technology is already an out-of-control rollercoaster. And the future is just getting started.
The Trolley Problem, Updated with AI
This dilemma is not unlike the one described in the Trolley Problem, a foundational thought experiment in most college ethics classes that has been debated over the years by a number of specialists in moral decision making. In Dr. Judith Jarvis Thomson’s version, a trolley with failed brakes is hurtling downhill toward five workmen who are repairing the tracks. There is a very real possibility that the workmen will not see the trolley in time to move. However, you can throw a switch and send the trolley onto another track, where it will definitely kill only one person. Which option is more ethically sound? Or, in more contemporary terms, how would we program an AI machine – like a self-driving car – to respond?
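To make the programming question concrete, here is a minimal, purely hypothetical sketch of what a crude utilitarian answer might look like in code. All of the names, numbers, and the decision rule are illustrative assumptions for discussion, not a description of any real autonomous-vehicle system, and the sketch deliberately exposes how much moral weight hides in a single line of programming.

```python
# Hypothetical sketch: a crude utilitarian decision rule for trolley-style
# dilemmas. Every name and number here is an illustrative assumption,
# not a real autonomous-vehicle system.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_fatalities: float  # programmer-estimated deaths for this choice

def choose(options: list[Option]) -> Option:
    """Pick the option that minimizes expected fatalities.

    This single line encodes an entire moral philosophy: strict
    utilitarianism, with no distinction between acting and allowing.
    """
    return min(options, key=lambda o: o.expected_fatalities)

trolley = [
    Option("stay on course", expected_fatalities=5.0),
    Option("throw the switch", expected_fatalities=1.0),
]

print(choose(trolley).name)  # a utilitarian program throws the switch
```

Notice that the "character" of this machine lives entirely in the `min(...)` line and in whoever supplied the fatality estimates. A programmer who believed that actively diverting harm is morally different from allowing it would write a different `choose` function, which is precisely the point of asking what kind of programmer we will turn to.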
To some, the answer is simple: outlaw self-driving cars. But cars are just the beginning. Our robots and self-aware homes, even the bots we use to answer our email, will rely on the notions of character we build into their programming to address whatever moral dilemmas they encounter. As good consumers, we will shop for the smartest AI we can afford. The smarter our tech becomes, the more we will depend on programmers to craft AI that reflects who we are as moral human beings. Given that each of us might handle the deer and SUV situation differently, what kind of programmer will we turn to?
What Does This Mean for Character Education?