Even though there are many ways to define character education, for the moment let’s assume that Wikipedia provides a reasonable starting point: “…an umbrella term loosely used to describe the teaching of children in a manner that will help them develop variously as moral, civic, good, mannered, behaved, non-bullying, healthy, critical, successful, traditional, compliant or socially acceptable beings.” While there is plenty in this definition to inspire healthy debate, my primary concern with it is this: it assumes character development only applies to human beings.
The reality is that we are now infusing artificial intelligence into most things that we make. The more complex our machines become, the more their decisions begin to look like ethical judgments and, ultimately, expressions of character. Consider the case of self-driving cars that I pose to my media psychology graduate students.
Cars and Character Education
Imagine you are driving down the highway in the family SUV, your two children and the dog in the back seat. Suddenly, a deer jumps out in front of your car. You can:
1) Jump the curb and hope you don’t hurt your passengers or the two people walking their dog on the sidewalk.
2) Hit the deer, knowing that doing so would probably injure or maybe even kill you (not to mention the deer), your passengers and anyone in the cars behind you who swerves to avoid the accident.
3) Cross into oncoming traffic and take a chance you can outmaneuver all the cars headed straight for you.
A decision needs to be made in a split second.
And, oh yes, you aren’t driving. You are in an autonomous SUV, which means that your car will need to decide. Even if your car has some kind of override that allows you to take control of the vehicle, events are happening too fast. You have no choice but to let your car make the decision while you hope for the best. Essentially, your car is going to have to decide whose safety is valued most in this scenario – and, therefore, who should be put most at risk.
This is not a contrived situation, particularly in light of recent fatalities involving autonomous vehicles. Tech ethicists are already trying to unravel quandaries like this as AI permeates daily living. In many ways, technology is already an out-of-control rollercoaster. And the future is just getting started.
The Trolley Problem, Updated with AI
This dilemma is not unlike the one described in the Trolley Problem, a foundational thought experiment in most college ethics classes that has been debated over the years by a number of specialists in moral decision making. In Dr. Judith Jarvis Thomson’s version, a trolley with failed brakes is hurtling downhill toward five workmen who are repairing the tracks. There is the very real possibility that the workmen will not see the trolley in time to move. However, you can throw a switch and send the trolley onto another track where it will definitely kill only one person. Which option is more ethically sound? Or, in more contemporary terms, how would we program an AI machine – like a self-driving car – to respond?
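To make the programming question concrete, here is a minimal, purely hypothetical sketch of one way such a choice could be coded – a utilitarian rule that scores each option by probability-weighted harm and picks the lowest. Every option name, probability and harm weight below is invented for illustration; no real vehicle works this way. The point is that someone has to choose those numbers, and that choice is itself an expression of character.

```python
# Illustrative only: a utilitarian "least expected harm" chooser.
# All options, probabilities and harm scores are hypothetical.

def expected_harm(outcomes):
    """Sum probability-weighted harm over an option's possible outcomes."""
    return sum(p * harm for p, harm in outcomes)

def choose_action(options):
    """Return the name of the option with the lowest expected harm."""
    return min(options, key=lambda name: expected_harm(options[name]))

# Each option maps to a list of (probability, harm) pairs,
# where harm is on an arbitrary 0-10 scale chosen by the programmer.
options = {
    "jump_curb":     [(0.6, 2), (0.4, 8)],   # may injure passengers or pedestrians
    "hit_deer":      [(0.9, 3), (0.1, 9)],   # likely injuries, small chance of worse
    "oncoming_lane": [(0.3, 1), (0.7, 10)],  # slim chance of escape, high risk
}

print(choose_action(options))
```

Notice that nothing in the code is "ethical" by itself: swap in different harm weights – valuing the passengers over the pedestrians, say – and the same algorithm picks a different action. That is exactly where character enters the program.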
To some, the answer is simple: outlaw self-driving cars. But cars are just the beginning. Our robots and self-aware homes, even the bots we use to answer our email, will rely on the notions of character we build into their programming to address whatever moral dilemmas they encounter. As good consumers, we will shop for the smartest AI we can afford. The smarter our tech becomes, the more we will depend on programmers to craft AI that reflects who we are as moral human beings. Given that each of us might handle the deer and SUV situation differently, what kind of programmer will we turn to?
What Does This Mean for Character Education?
There are a number of points to consider. Here are just two.
First, when shopping for AI that supplements and in many ways co-authors our lives, we will want to consider not only how smart it is but also how it calibrates its moral compass. Doing so gives us our best chance at creating a future that is driven by “good character.” This means we will need a much broader description of our technology’s capabilities if we are to make informed purchases.
Second, we need to understand that currently the default for “character programming” comes from the technologists and programmers who make the intelligent machines. I don’t think that will produce the kind of future that anyone – including technologists – will want to call home. It is time for character educators to sit shoulder to shoulder with programmers so they can co-create the new version of tomorrow that awaits us all.
The bottom line is simply this: If we are wondering whether character education is important in the year 2018 and going forward, the answer is very clear: it is more important now than it has ever been. Let’s hope our schools, communities and business leaders share that sentiment. And let’s hope ethicists and “character specialists” become part of every entrepreneurial team.
Dr. Jason Ohler has been writing, researching, teaching and speaking about the application of character education to digital lifestyles for three decades. You can find more at jasonOhlerIdeas.com, where you can subscribe to his newsletter, Big Ideas, and read about his latest book, 4Four Big Ideas for the Future.
Currently Dr. Ohler teaches for Fielding Graduate University's Media Psychology PhD program and directs the University of Alaska's Masters in Educational Technology program.