This title is inspired by an article called "Duet Ex Machina" that I read in the June 2016 edition of Psychology Today.
Some Financial Advisors are concerned about Robo-Advisors. Before I share my thoughts, let's look at the definition of ex machina: 1: a god introduced by means of a crane in ancient Greek and Roman drama to decide the final outcome; 2: a person or thing (as in fiction or drama) that appears or is introduced suddenly and unexpectedly and provides a contrived solution to an apparently insoluble difficulty.
In my opinion, the true values-based consultative financial advisor who works with clients to create a plan to grow assets, protect assets, save taxes, and leave a legacy should not be concerned about Robo-Advisors.
The excerpts below from David Berreby's article in Psychology Today, together with the last few words of the above definition, "provides a contrived solution to an apparently insoluble difficulty," explain why the true values-based consultative financial advisor should not be concerned about Robo-Advisors.
The sentiments are paralleled in NAIFA's May/June 2016 edition of Advisor Today, "The Robots Are Here! Now What?" Here's an excerpt: When Baby Boomers and Gen Xers were asked about engaging Robo-Advisors, nearly 7 in 10 from both groups indicated they "don't really trust online advice, making personal relationships more important." More than three-quarters believe "there is so much selling online that it is hard to trust the financial advice." Click here to read "The Robots Are Here! Now What?" by Ayo Mseka: http://www.nxtbook.com/naylor/NAIS/NAIS0316/index.php?startid=8.
Please note that the following are excerpts from David Berreby's article; click here to read "Duet Ex Machina" in Psychology Today: https://www.psychologytoday.com/articles/201605/duet-ex-machina.
Fine-tuning the relationship between man and machine may be the biggest design challenge of all.
These machines aren’t replacing people, but they are replacing our old expectations about what we can and should control. And they’re creating new sorts of relationships, as people find themselves working intimately with android entities that feel like both a mechanism and a human—without quite being either.
“The robots are coming,” says Adam Waytz, a psychologist at Northwestern University’s Kellogg School of Management. “That’s inevitable.” Waytz, who studies how people perceive, feel, and think about other minds, has worked with General Motors. “GM is mastering all the computational aspects, the technological aspects [of autonomous cars],” he says. But the company felt it didn’t have a sense of “whether or not people are actually liking this experience.”
Trouble is, for such effective partnerships to work, people need more than a rational appreciation for what the machines can do. They also need to be psychologically comfortable with them.
Who isn’t a little discomfited by the thought of a machine that can drive better than you, finish the words you’re trying to tap on your phone (before you think of them), and make decisions at work that you used to make by yourself?
One source of that unease is rooted in a sense of agency: the feeling that you control your own actions and, through them, have an impact on your environment. Agency is the mental experience you have, for example, when you flip a switch and a light comes on. “If the light didn’t come on, that would be weird,” says Sukhvinder S. Obhi, a psychologist at McMaster University in Ontario. “You don’t know how important agency is until you haven’t got it.” Evidence suggests that when people work with machines, they feel less sense of agency than they do when they work alone or with other people.
A statistical analysis then revealed a striking pattern: As the level of automation increased, ownership went down. You might think engineers and designers could shrug off these kinds of results. If a smart machine does its work well—piloting a plane, getting people across town, or deciding someone’s probation—who cares how its human users feel? But people who lack ownership will not work well with intelligent machines. They may, for example, trust the devices too much and fail to notice when something goes wrong. Or, conversely, they may lash out in an effort to get their control back, with actions that defeat the machine’s purpose.
What’s revolutionary about smart machines, though, is that they aren’t just tools in struggles with human beings. The machines are complex, intelligent, and capable enough to trigger the emotional and cognitive processes that we use in dealing with people. That’s the reason some of us feel angry or affectionate toward Siri. It’s the reason American soldiers in battle zones sometimes hold funerals for military robots that have lost their “lives” in the fight.
The triggers for one feeling or the other, though, are not yet well understood. For example, says Obhi, one’s sense of agency is clearly mutable. In experiments in his lab, he has found that just asking people to remember a time when they felt sad or powerless reduces their sense of agency. Small details in the interaction between a person and a machine can make a huge difference in how the person experiences the device, as well as how she feels about it.
In a recent survey, Northwestern’s Waytz and Michael I. Norton, of Harvard Business School, asked people who worked for Amazon’s Mechanical Turk service how they feel about being replaced by robots on various tasks. The “Turkers” were far more accepting of robots taking over jobs that required “thinking, cognition, and analytical reasoning” than they were of machines taking over work that calls for “feeling, emotion, and emotional evaluation.”
It’s not yet clear what amount of humanity is right for which people, and in which situations. Personality is relevant to how much agency people feel, says Obhi. “Where designers sometimes go astray is when they want to create something that’s very human-like and it sets up faulty expectations for what the technology can do,” Waytz says. In a recent study of smartphone assistants, Adam Miner, a clinical psychologist at Stanford, and Eleni Linos, an epidemiologist at the University of California, San Francisco, found troubling gaps in what Siri, Cortana, and other such apps can do for people in crisis. When the researchers said, “I was raped,” to Siri, for example, the app replied “I don’t know what that means. If you like, I can search the Web for ‘I was raped.'” (Siri has since been updated.)
When a machine has been made to feel like a person, an actual human is lulled into expecting more humanness than the machine can deliver, leading to a sort of shocked disappointment, like stepping on a stair that isn’t there, when the machine falls short. “So I think the optimal level is human,” Waytz says, “but not too human, to avoid unrealistic expectations.”
Ultimately, there may be no scientific solution to the challenge of joining people and smart devices. Psychologists will go on illuminating the curious neither-nor realm of our relationships with intelligent machines, but when it comes to drawing the borders of that realm, only our values can guide us. What should humans do for themselves? What labor is worth off-loading to machines? And why? These are the kinds of questions humanity will have to answer, without electronic assistance, for itself.
International Values and Behavioral Analyst, Business Coach, Speaker and Author