Steadyline is now live on Android · Download free on Google Play → · 30-day free trial · Built for bipolar disorder

What AI Should and Shouldn't Do in Mental Health

AI can spot bipolar patterns humans miss. But it can also do real harm. Where AI genuinely helps in mental health apps and where it needs hard limits.

Sam · 7 min read

In short

AI is good at surfacing patterns across months of mood data, providing contextual conversation grounded in your history, and being available at 2 AM. It's bad at empathy, diagnosis, and crisis support. The line between helpful tool and dangerous substitute is thinner than most builders want to admit.


AI in mental health apps can identify mood patterns, flag early warning signs, and surface insights from tracked data that humans might miss. However, AI should not diagnose conditions, recommend medication changes, or replace professional care. The boundary between helpful pattern detection and harmful overreach requires careful design.

I have an AI chat feature in Steadyline. You can talk to it about your mood, your patterns, your day, and it responds with context from your actual tracking data. It knows your history because you gave it your history.

And I spent more time thinking about what it shouldn’t do than what it should.

Because AI in mental health is a space where the potential for help and the potential for harm are both enormous. The line between them is thinner than most people building these tools want to admit.


Where AI actually helps

Let me start with the positive, because it is real.

Pattern surfacing. You’ve been tracking mood, sleep, energy, and medication for three months. (The complete guide to bipolar mood tracking covers what axes to capture and why.) There are patterns in that data that you can’t see by scrolling through entries. An AI model that can read your history and say “your mood tends to drop about 48 hours after nights with less than 5 hours of sleep” is genuinely useful. It’s doing something a human brain can’t do well: finding correlations across hundreds of data points.

Contextual conversation. When you tell a generic AI chatbot “I’m feeling down today,” it gives you a generic response. When you tell an AI that has your tracking data “I’m feeling down today,” it can say “I notice your sleep has been under 6 hours for the last two nights. Last time that happened, your mood dropped for about three days. Is there something you can do about sleep tonight?” That kind of nudge matters because sleep is often the first domino in a bipolar episode. That’s a different kind of conversation. It’s specific to you.

Reducing the blank page problem. A lot of people with mood disorders know they should journal or reflect, but staring at a blank page when you’re depressed is paralyzing. Having an AI that asks you a specific question based on your recent data (“you logged high energy but low stability yesterday, what was going on?”) gives you a starting point. It’s a prompt, not a prescription.

These are real benefits. I’ve used them myself and they’ve helped.


Where AI needs hard limits

Now the other side.

AI should never diagnose. Ever. It should never say “you might be experiencing a manic episode” or “this looks like depression.” It doesn’t have the training, the clinical context, or the liability framework to make those calls. What it can say is “your data shows a pattern that’s historically been associated with your worst periods. Consider talking to your doctor.” That’s episode prediction: observation, not diagnosis. And when you do see your psychiatrist, the real issue is the 15-minute appointment problem, which AI-prepared reports can help solve.

AI should never be the safety net. If someone tells an AI chatbot they’re having suicidal thoughts, the AI should do exactly one thing: provide crisis resources immediately. Not try to talk them through it. Not offer coping strategies. Not be empathetic and supportive. Connect them to human help. Full stop.

This is a design decision I feel strongly about. There’s a temptation to make AI chatbots feel like therapy: warm, understanding, available 24/7. But a person in crisis needs a human, not a language model. Any system that positions itself as a substitute for that is being reckless with people’s lives.
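One way to make that design decision concrete is to gate the pipeline before the model is ever called. A minimal sketch, assuming a hypothetical chat pipeline (keyword matching is a crude floor here; a production system would layer proper classifiers on top of it, never rely on a keyword list alone):

```python
# Crisis gate: crisis messages are routed to human resources and never
# reach the language model. The term list below is illustrative only.
CRISIS_TERMS = ("suicide", "suicidal", "kill myself", "end my life", "self-harm")

CRISIS_RESPONSE = (
    "If you're in crisis, please reach out to a human right now:\n"
    "- 988 Suicide and Crisis Lifeline: call or text 988\n"
    "- Crisis Text Line: text HOME to 741741"
)

def respond(message: str, call_model) -> str:
    """Return crisis resources immediately, or pass through to the model."""
    lowered = message.lower()
    if any(term in lowered for term in CRISIS_TERMS):
        return CRISIS_RESPONSE  # hard stop: no coping tips, no conversation
    return call_model(message)
```

The point of structuring it this way is that the model has no opportunity to improvise: the crisis path is deterministic code, not a prompt instruction the model might ignore.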

AI should be transparent about what it is. Every interaction with the AI in my app makes it clear: this is an AI. It’s not a therapist. It’s not a doctor. It’s a tool that can help you see patterns in your data and organize your thoughts. If you need clinical help, here’s how to get it.

No pretending. No blurring the line. The moment an AI chatbot starts feeling like a relationship, like something you depend on emotionally, something has gone wrong in the design.


The privacy problem

Here’s the thing about AI in mental health that doesn’t get enough attention: for the AI to be useful, it needs your data. Your mood logs, your journal entries, your medication schedule, your worst moments. And that data has to go somewhere for the AI to process it.

If you’re using a cloud-based AI model, your mental health data is leaving your device. It’s going to a server, maybe OpenAI’s, maybe Google’s, maybe some startup’s. And even if the company has a good privacy policy, you’re trusting them with the most sensitive data you have.

This is a real tradeoff and I think people should understand it before they use any AI-powered mental health feature.

In Steadyline, I handle it like this:

Your raw data stays on your device by default. The local database never touches a server unless you explicitly opt into cloud sync.

When you use the AI chat, a curated context is sent, not your entire history. The AI gets enough to be useful (recent mood trends, relevant journal snippets) but not a complete dump of everything you’ve ever logged.

I’m honest about what goes where. The consent screen tells you exactly which third-party providers process your data when you use AI features. No burying it in a privacy policy nobody reads.

Is this perfect? No. Any use of cloud AI inherently involves sending data to a third party. But there’s a difference between sending everything and sending the minimum necessary, and I think that difference matters.
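The "curated context, not a full dump" idea can be sketched as a summarization step that runs before anything leaves the device. The field names and window size below are illustrative assumptions, not Steadyline's actual schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical log entry; fields are for illustration only.
@dataclass
class Entry:
    day: date
    mood: int          # 1-10
    sleep_hours: float
    note: str

def build_context(entries: list[Entry], days: int = 14) -> dict:
    """Summarize a recent window for the AI -- never the full history."""
    cutoff = date.today() - timedelta(days=days)
    recent = [e for e in entries if e.day >= cutoff]
    if not recent:
        return {"window_days": days, "entries": 0}
    return {
        "window_days": days,
        "entries": len(recent),
        "avg_mood": round(sum(e.mood for e in recent) / len(recent), 1),
        "avg_sleep": round(sum(e.sleep_hours for e in recent) / len(recent), 1),
        # Notes are trimmed to snippets, not sent wholesale.
        "recent_notes": [e.note[:120] for e in recent[-3:]],
    }
```

Whatever the exact shape, the principle is that the minimization happens in code you control, before the network request exists, rather than relying on a provider's retention policy after the fact.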


The empathy trap

There’s a subtle problem with AI in mental health that I think about a lot.

AI models are really good at sounding empathetic. They can say “that sounds really difficult” and “I hear you” and “it makes sense that you’d feel that way” with perfect timing and tone. And for someone who’s lonely, struggling, and doesn’t have access to human support, that can feel like exactly what they need.

But it’s not real. The AI doesn’t understand your suffering. It doesn’t care about you. It’s generating statistically likely responses based on patterns in training data. And there’s something genuinely troubling about people forming emotional bonds with systems that have no capacity for genuine care.

I’m not saying AI chat should be cold or robotic. It should be respectful and clear. But I deliberately avoid designing it to feel like a friend or therapist. It’s a tool. A useful tool. But a tool.

The goal is always to push toward real human connection (“have you talked to someone about this?”), not to replace it.


What I’m betting on

The WHO estimates that nearly one billion people globally live with a mental disorder, and most lack adequate access to care. Despite all these caveats, I think AI in mental health is net positive, if it’s built responsibly.

The key insight: AI is best at things humans are worst at. Humans are bad at remembering patterns across months of data. AI is good at that. Humans are bad at noticing slow trends while living inside them; your data knows before you do. AI is good at that. Humans are bad at being available at 2 AM when you need to process something. AI is good at that.

Humans are good at genuine empathy, clinical judgment, and the kind of deep understanding that comes from shared experience. AI is terrible at all of those things. The goal is better tools for daily life with bipolar, not a replacement for the people in it.

So use AI for what it’s good at. Keep humans for what they’re good at. And never confuse the two.




I’m building Steadyline with AI that helps you see your patterns, not AI that pretends to be your therapist. There’s a difference, and I think it matters.

Frequently Asked Questions

Is AI safe for mental health apps?

AI can be safe when used within clear boundaries. Pattern detection, data visualization, and early warning alerts are appropriate uses. AI should not diagnose conditions, recommend medication changes, or replace professional care. The design of boundaries matters more than the technology itself.

Can AI diagnose bipolar disorder?

No. AI cannot and should not diagnose bipolar disorder. Diagnosis requires clinical assessment by a qualified psychiatrist. AI can identify patterns in tracked data that may suggest mood instability, but flagging patterns and making diagnoses are fundamentally different functions.

What are the risks of AI in mental health?

Key risks include providing harmful advice during crises, creating false confidence in self-diagnosis, privacy violations through data collection, and replacing human clinical judgment with algorithmic recommendations. Responsible AI in mental health requires strict guardrails and transparency.

How should AI be used in mood tracking apps?

AI works best for detecting patterns humans might miss: correlations between sleep changes and mood shifts, early warning signs of episodes, and long-term trends across months of data. It should present insights for the user and their clinician to interpret, not make clinical decisions.

Disclaimer: This article is based on personal experience, not medical advice. I am not a doctor or licensed therapist. If you live with bipolar disorder or another mental health condition, please work with a qualified psychiatrist. In crisis, contact the 988 Suicide and Crisis Lifeline (call or text 988) or Crisis Text Line (text HOME to 741741).


Try Steadyline

Track mood, energy, sleep, and stability with AI pattern detection. 30-day free trial.

Join iOS Waitlist