I Won’t Use AI Smart Health Features for My Own Sake. Here’s Why


Living with Severe Health Anxiety: Why AI Needs to Stay Away from My Personal Health

A Pivotal Moment: When Health Anxiety Took Over

A few years ago, I found myself in a terrifying situation. Severe migraines that lasted for weeks sent my health anxiety spiraling out of control. When I called the UK’s NHS helpline, their advice to seek immediate medical attention only worsened my fears. They instructed me to walk to the nearest hospital with someone, insisting it would be quicker than waiting for an ambulance. That call confirmed my worst fears: I was convinced I was going to die.

But as it turned out, my fears were unfounded. The culprit behind my symptoms was not a life-threatening condition but something far more mundane—severe muscle strain from carrying heavy cameras around my neck all day while photographing a friend’s wedding. The NHS helpline operator had simply erred on the side of caution, working from the limited data I provided. Their "better safe than sorry" approach, though well-intentioned, sent me into a tailspin of panic. This experience became a turning point in my understanding of my health anxiety and how easily I could jump to the worst possible conclusions.

The Never-Ending Cycle of Health Anxiety

I’ve spent most of my adult life battling health anxiety, a relentless mental health struggle that has taught me a lot about my tendencies. A ringing in my ears? It must be a brain tumor. A minor stomach ache? Time to prepare for the worst. Over the years, I’ve learned to identify triggers and manage my anxiety to some extent, but it’s a constant battle. One of the most important lessons I’ve learned is to avoid Googling my symptoms at all costs. No matter how innocuous the symptom, medical websites—and even the NHS’s own resources—often lead me down a rabbit hole of panic, with cancer almost always appearing as a potential explanation.

This avoidance has become second nature to me, but the rise of health-tracking devices and AI-powered health tools has introduced a new challenge. While these technologies are designed to empower users with data, they can be a double-edged sword for someone like me. The more data I have about my body, the more I obsess over what it might mean. For now, I’ve learned to use wearable devices like my Apple Watch with caution, avoiding features like heart rate monitoring that could trigger unnecessary worry. But the idea of AI interpreting my health data is a step too far.


The Dark Side of Health Tracking Devices

At first, I found my Apple Watch to be a helpful tool. Its ability to track my heart rate during workouts was useful, and it felt like a convenient way to stay on top of my health. But over time, I noticed myself becoming increasingly obsessed with the data it provided. I’d check my heart rate dozens of times a day, searching for reassurance that everything was "normal." But the more I checked, the more doubt began to creep in. "Why is my heart rate higher than usual? Is that normal? Maybe I should check again in a few minutes." And, of course, when my heart rate didn’t magically stabilize, panic set in.

This obsession extended to other metrics as well, like blood oxygen levels and sleep scores. Any deviation from the "normal" range sent me spiraling into worst-case scenarios. The more data the device provided, the more I felt like I had to worry about. While I’ve managed to find a balance with wearable devices—by avoiding certain features entirely—AI-powered health tools represent a new and even more daunting prospect. The idea of an AI analyzing my health data and providing insights is less reassuring than it is terrifying.

Samsung’s AI Health Tools: A Recipe for Disaster?

During Samsung’s January Unpacked event, the company showcased its new Galaxy AI tools, which aim to track and analyze health metrics like heart rate fluctuations and offer personalized insights. For many, this might sound like a revolutionary step forward in health monitoring. But for me, it’s a red flag. The idea of an AI constantly monitoring my health data and providing updates is not comforting—it’s a potential trigger waiting to happen.

The AI’s ability to answer health-related questions is particularly concerning. Like any AI, it relies on publicly available data to provide answers, which often means regurgitating a wide range of possibilities without context. As someone who has spent years avoiding medical websites for fear of spiraling into panic, the thought of having an AI health assistant that could provide me with a laundry list of potential explanations for every minor symptom is deeply unsettling. It’s like having a 24/7 health anxiety enabler built into my phone or watch.

The Risk of AI Misinterpretation

One of the most significant concerns with AI health tools is their inability to understand context. When I asked the NHS helpline about my symptoms, the operator rightly took a cautious approach based on the limited data I provided. But an AI lacks the nuance and empathy that a human doctor or nurse can provide. If I were to ask an AI about a minor headache, it might list a dozen potential causes, ranging from dehydration to something as serious as a brain tumor. While it’s statistically far more likely that the headache is caused by something benign, like not drinking enough water or staring at screens for too long, the AI won’t be able to reassure me of that. Instead, it would likely present all possibilities, leaving me to spiral into worst-case scenarios.

This lack of context is a major problem for someone with health anxiety. Instead of offering reassurance, AI health tools could exacerbate my fears by listing every potential cause without emphasizing the most likely (and least alarming) explanation. The risk of misinformation or overinterpretation is high, especially if the AI relies on unverified or outdated sources. For someone who already avoids medical websites to keep panic at bay, having that kind of tool at my fingertips is a nightmare waiting to happen.

Keeping AI at Bay: A Matter of Mental Health

The idea that AI health tools could one day provide the kind of empathy and reassurance that a human doctor can offer is appealing. Maybe someday, AI will be advanced enough to recognize that I’m spiraling into panic and offer a calming, logical perspective. But until that day comes, I’m not willing to take the risk.

For now, I’ve made the conscious decision to keep AI as far away from my personal health as possible. While health tracking devices have their place, the addition of AI-powered insights feels like a step too far. My mental health is worth more than the promise of cutting-edge technology, and I’m not ready to trust an algorithm with something as delicate as my health. Until AI is capable of understanding the complexities of human anxiety—and responding with empathy rather than data—the best place for it is far away from my personal health journey.
