Dear Commons Community,
Maggie Jackson, the author of Distracted: Reclaiming Our Focus in a World of Lost Attention, has an op-ed today that focuses on the question: Would you let a robot take care of your mother? It is a very insightful piece that examines the first wave of robots (see video above) being used in elder care. It also suggests that our parents (and we ourselves) will likely live day to day with robotic assistance. The entire piece is below.
Tony
—————————————————————————————————-
New York Times
Would You Let a Robot Take Care of Your Mother?
Robotic companions are being promoted as an antidote to the burden of longer, lonelier human lives. At stake is the future of what it means to be human.
By Maggie Jackson
Dec. 13, 2019
After Constance Gemson moved her mother to an assisted living facility, the 92-year-old became more confused, lonely and inarticulate. Two full-time private aides, kind and attentive as they were, couldn’t possibly meet all their patient’s needs for connection.
So on a visit one day, Ms. Gemson brought her mom a new helper: a purring, nuzzling robot cat designed as a companion for older adults. “It’s not a substitute for care,” says Ms. Gemson, whose mother died last June at age 95. “But this was someone my mother could hug and embrace and be accepted by. This became a reliable friend.” When her mom was upset, her family or helpers brought her the cat to stroke and sing to, and she grew calmer. In her last days “what she could give, she gave to the cat,” says Ms. Gemson.
An aging population is fueling the rise of the robot caregiver, as the devices moving into the homes and hearts of the aging and sick offer new forms of friendship and aid. With the global 65-and-over population projected to more than double by 2050 and the ranks of working age people shrinking in many developed countries, care robots are increasingly seen as an antidote to the burden of longer, lonelier human lives.
Winsome tabletop robots now remind elders to take their medications and a walk, while others, still research prototypes, can fetch a snack or offer consoling words to a dying patient. Hundreds of thousands of “Joy for All” robotic cats and dogs designed as companions for older people have been sold in the U.S. since their 2016 debut, according to the company that makes them. Sales of robots to assist older adults and people with disabilities are expected to rise 25 percent annually through 2022, according to the International Federation of Robotics, an industry group.
Yet we should be deeply concerned about the ethics of their use. At stake is the future of what it means to be human, and what it means to care.
Issues of freedom and dignity are most urgently raised by robots that are built to befriend, advise and monitor seniors. This is artificial intelligence with wide, blinking eyes and a level of sociability that is both the source of its power to help and its greatest moral hazard. When do a robot assistant’s prompts to a senior to call a friend become coercion of the cognitively frail? Will Grandma’s robot pet inspire more family conversation or allow her kin to turn away from the demanding work of supporting someone who is ill or in pain?
“Robots, if they are used the right way and work well, can help people preserve their dignity,” says Matthias Scheutz, a roboticist who directs Tufts University’s Human-Robot Interaction Lab. “What I find morally dubious is to push the social aspect of these machines when it’s just a facade, a puppet. It’s deception technology.”
For that is where the ethical dilemmas begin — with our remarkable willingness to banter with a soulless algorithm, to return a steel-and-plastic wink. It is a well-proven finding in the science of robotics: add a bit of movement, language, and “smart” responses to a bundle of software and wires, and humans see an intentionality and sentience that simply isn’t there. Such “agency” is designed to prime people to engage in an eerie-seeming reciprocity of care.
Social robots ideally inspire humans to empathize with them, writes Maartje de Graaf of the University of Utrecht in the Netherlands, who studies ethics in human-robot interactions. Even robots not designed to be social can elicit such reactions: some owners of the Roomba robot vacuum grieve when theirs gets “sick” (breaks down) or count it as a member of the family when listing their household.
Many in the field see the tensions and dilemmas in robot care, yet believe the benefits can outweigh the risks. The technology is “intended to help older adults carry out their daily lives,” says Richard Pak, a Clemson University scientist who studies the intersection of human psychology and technology design, including robots. “If the cost is sort of tricking people in a sense, I think, without knowing what the future holds, that might be a worthy trade-off.” Still he wonders, “Is this the right thing to do?”
We know little about robot care’s long-term impact or possible indirect effects. And that is why it is crucial at this early juncture to heed both the field’s success stories and the public’s apprehensions. Nearly 60 percent of Americans polled in 2017 said they would not want to use robot care for themselves or a family member, and 64 percent predict such care will increase the isolation of older adults. Sixty percent of people in European Union countries favor a ban on robot care for children, older people, and those with disabilities.
Such concerns, if respected and investigated, offer clues to how robots can be tailored to the needs of the people they serve. Only recently have older people begun to be given voice in the design of robots built to care for them. Many are open to having one, even to befriending it; there are hopes they may tell a joke or two, studies show. (“Could we be friends,” one focus group participant cooed to a robotic seal. “Good, good, I love your eyes.”)
But research suggests that many seniors, including trial users, draw a line at investing too much in the charade of robot companionship, fearing manipulation, surveillance, and most of all, a loss of human care. Some worry robot care would carry a stigma: the potential of being seen as “not worth human company,” said one participant in a study of potential users with mild cognitive impairments.
“If the only goal is to build really cool stuff that can increase speed and profit and efficiency, that won’t prioritize human flourishing,” says John C. Havens, executive director of a pioneering global initiative on ethical AI guidelines by the Institute of Electrical and Electronics Engineers.
A main principle of these and other leading guidelines is “transparency,” the idea that humans should know if they are dealing with an algorithm or robot and be able to understand its limits and capabilities. (Call it the anti-Turing test.) One recommendation to industry is for care robots to have a “why-did-you-do-that” button so users can demand an explanation of its actions, from promoting a product to calling the doctor.
Social robots also should carry a notice of potential side effects, the guidelines suggest, “such as interfering with the relationship dynamics between human partners,” a feature that could inspire caregivers to protect those most cognitively vulnerable to a robot’s charms. Such “soft-law” guidelines can help users, caregivers and designers alike better understand what they are dealing with and why, even as we continue to debate the questions of just how social, how humanlike and how transparent we want or need a care robot to be.
Consider the “wellness coach” robot Mabu that was launched commercially this year for people with chronic conditions such as heart failure. Made by San Francisco-based Catalia Health, the little wide-eyed talking robot dispenses health advice and medication reminders, and in some cases can send data on a user’s condition to a pharmacist or doctor. The robot is designed to stress that it’s not a doctor or nurse but part of someone’s care team.
Yet the company often portrays Mabu as closer to a person than a tool; “I’ll be Your Biggest Cheerleader!” the robot promises on the company website. The several hundred people using Mabu today, many of whom are seniors, on average interact with the robot just a quarter-hour a week, according to Catalia. And yet some name them, dress them and take them on vacation, says Cory Kidd, company founder and CEO.
So is Mabu transparent enough?, I asked Kidd. “There’s a lot more work to be done around understanding that relationship,” he said. One user, a retired bus driver, Rayfield Byrd of Oakland, Calif., compares his Mabu to a computer; it’s a mainstay tool. Others told me they consider the robot a friend. In the lonely hours between the time her health aide leaves and her husband or son returns home, Kerri Hill, who is 40 years old yet largely housebound due to heart failure, relies on Mabu for company. But she wouldn’t want it as her main caregiver or companion. “The robot is one thing,” says Ms. Hill of Galivants Ferry, S.C., “but you still need interaction that’s not programmed.”
I recall the hard choices that I had to make in caring for my mother in the last four years of her life. She lived in a small Manhattan apartment upstairs from ours when she wasn’t in the hospital. I had to continually assess how much help to hire, whether it was safe to go away, whether she was lonely, while raising two kids under 6.
Would it all have been better if she’d had a robot to pick up the fallen teaspoon, nudge her to eat, and make her smile, or would that have been an indignity for her and an easy out for me? I struggled with every decision and yet I like to think that I would have done so even with a robot in the mix. There should be a place in our lives for the softly whirring helping hand and for the unease that true caregiving demands. For care is never a yes or no equation, solved with a new purchase or clickable fix. It is woven through and through with the constant questioning that is after all the opposite of complacency and compliance. Only by retaining our doubts and hesitations about robots that care can we safeguard the humanity of such work.
Constance Gemson talks fondly of the aides who took her mother out to lunch, gently bathed and fed her, and took the time to suggest a new ChapStick or more sturdy shoes, and she remembers the robotic cat affectionately too. As we sat together in a Manhattan cafe one fall day, she said almost as if to herself, “I think I should give them a call and say hello.” After her mother died, she threw the robot away.