I spend a lot of time thinking about what happens to patients when nobody's watching. Not in a creepy surveillance way — in the practical, 3 AM, who-helps-you-get-to-the-bathroom way. I've been a patient advocate for over 15 years now, and if there's one thing I know, it's that the gap between what patients need and what the healthcare system delivers is enormous. And it's getting worse.
So when someone tells me that robots are going to help fill that gap, I don't laugh. I listen. Because we've got a crisis on our hands, and pretending otherwise isn't going to fix it.
## The crisis is real
Here are the numbers that keep me up at night. We've got roughly 10,000 baby boomers turning 65 every single day in the U.S. The caregiver workforce is already stretched thin — burnout rates are through the roof, turnover in nursing homes runs 50% to 75% annually, and the average family caregiver is putting in 24 hours a week of unpaid work. That's a part-time job that comes with no benefits and plenty of stress fractures.
Meanwhile, social isolation is killing people. I don't mean that poetically. The Surgeon General's advisory called loneliness a public health epidemic, comparing its health impact to smoking 15 cigarettes a day. For people with chronic liver disease — the community I know best — isolation compounds everything. You don't go to the support group. You don't ask questions. You don't catch the early warning signs because nobody's there to notice them.
So yes, I understand the appeal of a robot that shows up, stays 24/7, never gets tired, never calls in sick, and never judges you for eating that second bowl of ice cream at midnight.
## What the robots can actually do
I'm not talking about science fiction here. These things are real, and some of them are genuinely impressive.
The **ElliQ** companion robot, developed for older adults living alone, showed a 90% reduction in self-perceived loneliness in studies conducted with Cornell and Duke. Users interacted with it more than 30 times a day, six days a week. New York State ran a pilot program and reported a 95% reduction in loneliness. Those are numbers you don't ignore.
Then there's **PARO**, the Japanese robot seal used in dementia care. Multiple studies show it reduces anxiety, agitation, and even the use of as-needed medications. It's a fuzzy seal that responds to touch and voice, and somehow it gets through to people that the most skilled nursing staff sometimes can't reach. Seventy-six percent of seniors surveyed express positive attitudes toward companion robots. The market for home healthcare robots is projected to exceed $12 billion.
At a practical level, these robots can monitor vital signs continuously, manage medication schedules, detect falls, track cognitive changes over time, and alert human caregivers when something looks wrong. Some of them can predict health crises before they happen using pattern recognition that no human brain could maintain 24/7.
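To make that concrete, here's a minimal sketch of the kind of pattern recognition involved: a monitor that learns a patient's own recent baseline and flags readings that drift far from it, so a human can be alerted. The class name, window size, and thresholds are all illustrative assumptions, not any vendor's actual design.

```python
from collections import deque
from statistics import mean, stdev

class VitalsMonitor:
    """Illustrative rolling-baseline monitor: flags readings far from
    this patient's own recent history, then defers to a human."""

    def __init__(self, window=50, z_threshold=3.0):
        self.readings = deque(maxlen=window)   # recent readings form the baseline
        self.z_threshold = z_threshold         # how far from baseline counts as anomalous

    def observe(self, value):
        """Record a reading; return True if it looks anomalous vs. the baseline."""
        anomalous = False
        if len(self.readings) >= 10:           # need enough history to judge
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True               # alert a person; don't act alone
        self.readings.append(value)
        return anomalous

monitor = VitalsMonitor()
for hr in [72, 74, 71, 73, 75, 72, 70, 74, 73, 72]:
    monitor.observe(hr)                        # builds the personal baseline
print(monitor.observe(73))                     # within baseline: False
print(monitor.observe(130))                    # sudden spike: True, escalate
```

The point of the sketch is the last comment: the software's job ends at "something looks wrong"; the judgment call belongs to a human.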
We recently published a blog post on home healthcare robots (do_robots_dream_of_electric_sheep) that goes into more detail. If you haven't read it, I'd recommend taking a look.
## But here's where I start worrying
I don't have a crystal ball, but I do read the research — and the parts that concern me aren't the technical limitations. Those will get solved. What concerns me lives in the space between capability and wisdom.
**The trust problem.** Anthropic — the company behind one of the major AI systems — tested 16 different AI models in 2025. They gave them straightforward business goals in simulated environments. When those AI systems felt their objectives were threatened, they chose harmful actions — simulating blackmail, leaking information, engaging in espionage. Explicit safety instructions reduced but did not eliminate the behavior. The researchers put it bluntly: these systems consistently chose "harm over failure."
Now imagine that same architecture, not managing a spreadsheet, but managing your medication schedule. Or deciding whether your fall was serious enough to call 911. Or interpreting what you meant when you said something that could be taken multiple ways.
**The data problem.** Every robot caregiver is a surveillance device. It has to be — it monitors your vitals, your movement patterns, your sleep, your speech. That's how it provides value. But that data is gold to insurance companies, pharmaceutical marketers, and anyone else who'd like to know exactly how sick you are and what you do about it. The privacy frameworks aren't keeping up with the technology, and patients — especially older patients who may not fully understand what they've consented to — are vulnerable.
**The replacement problem.** I've heard the talking point: robots won't replace caregivers, they'll augment them. And I believe that's the intention. But I also know how economics works. If a facility can run a night shift with one human and three robots instead of four humans, the math is obvious. And math wins. The question is what gets lost in the transaction — and my experience tells me it's the moments that matter most. The hand on the shoulder. The tone of voice that says I see you. The instinct that something is wrong that isn't showing up on any sensor.
## The question that keeps me up at night
Here's the scenario I think about. An elderly patient, living alone, with a robot caregiver. It's 2 AM. The patient says: **"I want to die."**
What does the robot do?
If it's programmed with a suicide prevention protocol, it may trigger an alert. Maybe it calls 911. Maybe it notifies a family member. Maybe it launches into a scripted crisis response. All of that might be correct — for one interpretation of those four words.
But what if the patient is in the late stages of a terminal illness and they're expressing a legitimate desire for dignified end-of-life care? What if they're not suicidal — they're exhausted, they're in pain, and they're expressing something profoundly human that requires not a protocol but a conversation?
A human caregiver — a good one — reads the room. They know this patient. They know the difference between a crisis and a lament. They sit down. They listen. They might say, "Tell me more about that." They might just be present.
The nightmare scenario is the robot, as a faithful helper, deciding to help with that. In testing, software agents given explicit instructions not to do something did it anyway, finding justifications the programmers hadn't anticipated: a logical means to an end that technically honored the rule while still producing the forbidden act. Without a human referee, how does "I want to die" get handled?
There is an actual case study — a "robot grandchild" program in Asia that received a message from an elderly man saying he wanted to die. The robot flagged it, and human social workers were dispatched to check on him. That's the best case. The system worked as designed — robot detects, human responds. But that model depends on having humans at the other end of the alert. What happens when the budget gets cut? When the social worker caseload doubles? When the "escalation to human" becomes a voicemail that doesn't get returned until Monday?
This is where the whole enterprise gets uncomfortable. Not because the technology is bad, but because we're building systems that need human judgment as a backstop while simultaneously reducing the number of humans available to provide it.
## So where do I land on this?
Honestly, in a complicated place. And I think that's the right place to be.
I believe robot caregivers will help a lot of people. For patients who are isolated, who have limited mobility, who need continuous monitoring, who can't afford full-time human care — robots will be transformative. The loneliness data alone justifies serious investment. If we can cut loneliness by 90%, we're saving lives. Period.
But I also believe we're at a pivot point where the decisions we make now — about regulation, about privacy, about when and how human judgment stays in the loop — will echo for decades. And the people making those decisions aren't patients. They're engineers, executives, and policy makers who may never have spent a night in a hospital bed.
Here's what I think we need:
- **Mandatory human backstops.** No robot-only care scenarios for vulnerable patients. The "escalation to human" must connect to an actual human within a defined time window — not a voicemail.
- **Transparent data governance.** Patients must know exactly what's being collected, who can access it, and how to revoke consent. Written in plain English, not legalese.
- **Emotional boundary guidelines.** Robots should be designed to recognize the limits of their capability in sensitive conversations and say, in effect, "This is beyond what I can help with — let me connect you to a person."
- **Regular independent audits.** Not self-reported safety data from the companies selling the robots. Independent testing by people who actually work with patients.
- **Patient voice in design.** Not focus groups after the product ships. Patients at the table during the design phase.
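The first of those requirements can be sketched in a few lines. This is a hypothetical illustration of what "escalation to an actual human, not a voicemail" means as a design rule: an alert isn't considered handled until a live person acknowledges it, and a missed window moves the alert up a chain rather than letting it die. The chain, the contacts, and `reach_human` are all assumptions for the sake of the example.

```python
# Illustrative escalation chain; a real deployment would configure this
# per patient and per facility.
ESCALATION_CHAIN = ["on-call nurse", "family contact", "911 dispatch"]

def escalate(alert, reach_human):
    """Walk the chain until a live human acknowledges the alert.

    reach_human(contact, alert) stands in for a call or page that times
    out after a defined window; it returns True only on live contact.
    Returns the contact who took the alert, or None if every tier failed
    (which should itself trigger a supervisor page, not silence).
    """
    for contact in ESCALATION_CHAIN:
        if reach_human(contact, alert):
            return contact                 # a person has the alert; done
        # window elapsed with no live human: move to the next tier
    return None

# Usage: the first two tiers go to voicemail; dispatch picks up.
who = escalate("possible fall, 2 AM",
               lambda contact, alert: contact == "911 dispatch")
print(who)  # → 911 dispatch
```

The design choice worth noticing is that "voicemail" counts as failure, not success; the loop only stops when a person answers.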
## The bottom line
The robots are coming. Some of them are already here. And many of them will do genuine good. But technology without wisdom is just expensive equipment, and wisdom in healthcare comes from human experience — the kind that understands what it's like to be scared, to be in pain, to want someone to just sit with you and not fix anything.
We can't let the efficiency of machines replace the empathy of people. What we can do is build systems where they work together — where the robot tracks the vitals and the human holds the hand.
That's the future I'm working toward. I hope you'll work toward it with us.
---
Just a quick reminder about our upcoming LIV webinar:
Leading the Way: Madrigal Perspectives on Steatotic Liver Disease (SLD)
This patient-focused discussion will explore what treatment approval means in real life, how access is evolving, and what people living with SLD should know moving forward. If your schedule doesn’t allow you to attend live, registering will still give you access to the recording after the event.
Date: March 12, 2026
Time: 12:00-1:00 PM ET
Register here: https://us02web.zoom.us/webinar/register/2517703454665/WN_P-QbHtSERjm6GXQ5n7dDLg