How Steadyline Keeps Your Mental Health Data Safe
Your mental health data is personal. Here's how Steadyline handles it: what we collect, what we don't, and the privacy architecture that keeps your data safe.
In short
AI in Steadyline never sees your raw journal entries. It receives anonymized aggregates only. Row Level Security in Supabase isolates every user's data at the database level. Health Connect data never leaves your device.
Track mood, sleep, and energy with AI pattern detection. Join the iOS waitlist or download on Android.
Steadyline protects mental health data through encryption, on-device processing for AI features where possible, and strict data minimization. The app collects only what is necessary for mood tracking and pattern detection. No data is sold or shared with advertisers, and users can export or delete all their data at any time.
Mental health data is not ordinary data.
It’s the log entry you wrote at 2 AM when you couldn’t sleep. The mood score that traced a depressive episode. The medication log that shows what you’re taking and when. If any of that leaked, or got used in ways you didn’t agree to, the damage would be personal in a way that a compromised credit card simply isn’t.
I built Steadyline knowing this, and I’ve made architectural decisions at every layer of the stack to ensure that using AI features in the app doesn’t mean handing your most vulnerable moments to a third party. This is a transparent account of exactly how it works.
The threat model I actually care about
Before getting into solutions, let me be specific about what I’m protecting against:
- Your raw diary entries or mood logs reaching AI training datasets. The most common concern, and a legitimate one.
- Identifiable data leaving your device when it doesn’t have to. AI should only see what it needs to do the job.
- Third-party AI providers seeing more than necessary. If a provider must process something, it should receive only the minimum context required.
- Data breach exposure. If Steadyline’s backend were ever compromised, what would there be to steal?
- Opaque data practices. You should always be able to know what’s happening with your data.
Most wellness apps fail on at least one of these. Building the app solo meant owning every privacy decision from day one, so here’s where Steadyline stands on each.
Layer 1: AI never sees your raw entries
The most privacy-preserving thing you can do with AI is limit what it sees. Steadyline routes every AI request through an InferenceOrchestrator that controls exactly what data reaches any AI provider.
Your raw journal entries
↓ (processed locally into anonymized aggregates)
AI provider receives only: mood averages, themes, stability scores
↓ (never)
Raw PII or journal text to external provider
When the AI analyzes your data (for mood insights, pattern detection, or the weekly summary), it receives structured, anonymized statistics. Not your words. Not your name. Not your dates.
The AI providers (Groq, Hugging Face) process these anonymized payloads and return insights. Your raw journal text stays in your local database.
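The local aggregation step can be sketched roughly as follows. This is illustrative Python, not the app’s actual code; the field names simply mirror the kind of payload described in this post.

```python
from statistics import mean

def build_ai_context(entries):
    """Reduce raw journal entries to anonymized aggregates.

    `entries` is a list of dicts with 'mood', 'energy', and locally
    extracted 'themes' tags. The journal text never enters the payload.
    """
    moods = [e["mood"] for e in entries]
    # Crude trend: compare the first half of the period to the second.
    if len(moods) >= 2:
        first, second = mean(moods[: len(moods) // 2]), mean(moods[len(moods) // 2 :])
    else:
        first = second = moods[0]
    themes = sorted({t for e in entries for t in e["themes"]})
    return {
        "mood_average": round(mean(moods), 1),
        "mood_trend": "improving" if second > first else "stable_or_declining",
        "energy_average": round(mean(e["energy"] for e in entries), 1),
        "log_count": len(entries),
        "notable_themes": themes,
    }

entries = [
    {"mood": 5, "energy": 4, "themes": ["work stress"]},
    {"mood": 6, "energy": 6, "themes": ["sleep disruption"]},
    {"mood": 8, "energy": 7, "themes": ["social connection"]},
]
ctx = build_ai_context(entries)
# Only aggregates and extracted tags are present; no journal text, no identifiers.
```

The point of the design is that this reduction happens before any network call, so the provider can only ever receive the output of a function like this.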
Layer 2: Supabase Row Level Security. Your data is yours, period
All data synced to the cloud lives in Supabase (Postgres). The critical protection here isn’t encryption (though that’s present too). It’s Row Level Security (RLS).
Every table that holds user data has policies like this:
-- RLS must be enabled on the table for any policy to be enforced
alter table mood_logs enable row level security;

-- Users can only read or write their own rows
create policy "users_own_data"
on mood_logs
for all
using (auth.uid() = user_id)
with check (auth.uid() = user_id);
What this means in practice: even if there were a bug in the app’s API layer that somehow constructed a query to fetch another user’s logs, Postgres would return no rows, because the policy is evaluated at the database level. There is no admin backdoor that can accidentally return your data in a query targeting someone else.
This isn’t a checkbox. It’s the actual security boundary for multi-tenant data isolation.
Layer 3: What actually gets sent to AI APIs, and how
For features that need cloud AI (like the weekly insights summary), Steadyline does not send your raw log entries. It sends a structured, anonymized context object.
Here’s the shape of what an AI provider actually receives:
{
  "period": "2026-02-17 to 2026-02-23",
  "mood_average": 6.2,
  "mood_trend": "improving",
  "energy_average": 5.8,
  "log_count": 6,
  "notable_themes": ["work stress", "sleep disruption", "social connection"],
  "stability_score": 71,
  "medication_adherence_pct": 92
}
What is not in that payload:
- Your name or any account identifier
- The actual text of your log entries
- Timestamps of individual entries (only the week-level range)
- Any Health Connect data
The AI sees aggregate statistics and theme labels, not your personal writing. The themes sent to cloud AI are extracted tags, not your original words.
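One simple way to enforce that contract is an allow-list check just before the request leaves the device: anything not explicitly approved is stripped. A hedged sketch in Python (the key names match the example payload above; the function itself is illustrative, not the app’s actual code):

```python
# Keys permitted in any outbound AI request; everything else is dropped.
ALLOWED_KEYS = {
    "period", "mood_average", "mood_trend", "energy_average",
    "log_count", "notable_themes", "stability_score",
    "medication_adherence_pct",
}

def sanitize_payload(payload: dict) -> dict:
    """Keep only allow-listed fields.

    An allow-list fails closed: a field added elsewhere in the app never
    reaches the AI provider unless it is explicitly approved here.
    """
    return {k: v for k, v in payload.items() if k in ALLOWED_KEYS}

raw = {
    "mood_average": 6.2,
    "log_count": 6,
    "journal_text": "couldn't sleep again...",  # must never be sent
    "user_id": "abc-123",                       # must never be sent
}
safe = sanitize_payload(raw)
# safe == {"mood_average": 6.2, "log_count": 6}
```

The allow-list direction matters: a block-list of known-sensitive fields would silently pass through any new field nobody thought to block.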
Layer 4: Health Connect data never leaves the device
Steadyline integrates with Android Health Connect to optionally correlate health metrics (steps, sleep, heart rate) with your mood logs. This feature runs entirely on-device.
Health Connect data is:
- Read on-device only
- Used only for local correlation analysis
- Never synced to Supabase
- Never included in any AI API request
The correlation engine (“your mood scores are 1.4 points higher on days you walk more than 7,000 steps”) runs entirely in the app. That’s the kind of insight that helps you track bipolar patterns without compromising privacy, because it never needs to leave your phone.
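That kind of comparison needs nothing more than local arithmetic over the day-level data, which is why it never has to leave the phone. An illustrative sketch (made-up numbers; the real engine lives inside the app):

```python
from statistics import mean

def steps_mood_delta(days, threshold=7000):
    """Compare mean mood on high-step days vs. the rest, entirely locally."""
    high = [d["mood"] for d in days if d["steps"] > threshold]
    low = [d["mood"] for d in days if d["steps"] <= threshold]
    if not high or not low:
        return None  # not enough data on one side to compare
    return round(mean(high) - mean(low), 1)

days = [
    {"steps": 9200, "mood": 7}, {"steps": 8100, "mood": 8},
    {"steps": 3000, "mood": 6}, {"steps": 4500, "mood": 6},
]
delta = steps_mood_delta(days)
# delta == 1.5  ->  "mood is 1.5 points higher on days above 7,000 steps"
```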
Layer 5: Transparent controls, not hidden toggles
I think app privacy settings are usually theater: a buried “data sharing” toggle that most users never find, defaulted to “on.”
Steadyline takes a different approach. The Privacy & Transparency screen shows you:
- Which AI provider processed your last request and what context was sent
- What data was included in each AI request
- A complete log of data sync events to Supabase, with timestamps
- A one-tap “delete everything” that triggers deletion on both the local SQLDelight database and all Supabase tables associated with your account
This isn’t just for users who care about privacy. It’s for anyone who wants to understand what an AI-powered app is doing with their data, which should be everyone who uses one.
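The “delete everything” flow amounts to two coordinated deletions, and ordering matters: cloud first, so a failed network call never leaves orphaned server data after the local copy is already gone. A sketch with stubbed stores (illustrative only; the real app talks to SQLDelight and Supabase):

```python
class LocalStore:
    """Stands in for the on-device SQLDelight database."""
    def __init__(self):
        self.rows = {"mood_logs": 12, "journal_entries": 4}
    def delete_all(self):
        self.rows = {}

class CloudStore:
    """Stands in for the user's Supabase tables."""
    def __init__(self):
        self.rows = {"mood_logs": 12}
    def delete_all_for_user(self, user_id):
        self.rows = {}

def delete_everything(local, cloud, user_id):
    # Cloud first: if this raises, local data survives and the user can retry.
    cloud.delete_all_for_user(user_id)
    local.delete_all()

local, cloud = LocalStore(), CloudStore()
delete_everything(local, cloud, "user-123")
# Both stores are now empty; nothing remains on either side.
```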
What I don’t do (and why those decisions matter)
I don’t use your data to train models. Steadyline uses inference-only API calls to its AI providers (Groq, Hugging Face). No fine-tuning, no feedback loops that improve a model on your data. Your usage makes the app better at understanding you locally; it doesn’t feed a global model.
I don’t sell data. There’s no analytics SDK, no advertising SDK, no data broker integration. The business model is a subscription; that’s it. NAMI recommends evaluating a mental health app’s data practices before trusting it with sensitive information. It’s also why I don’t gamify mental health: Steadyline has no financial incentive to monetize your data, because the product itself is the revenue stream.
I don’t collect what I don’t need. Steadyline doesn’t ask for your real name because it doesn’t need it. Account identity is handled via Supabase Auth: an email or OAuth token, and no PII profile beyond that.
The harder question: what about future AI features?
AI capability is moving fast. I’m planning features that will use more sophisticated AI: pattern recognition across months of data, a conversational AI that can genuinely discuss your mental state over time.
My commitment for those features:
- Same anonymization standard. The more powerful the AI task, the more careful I’ll be about what context it receives.
- Explicit opt-in for new AI features. No “we updated our terms” email that silently enables new processing.
- On-device where it genuinely adds value. I’m actively building local models for tasks like tone classification and theme extraction, where on-device inference means better privacy and faster results.
- Open about what’s changing. Posts like this one, every time something meaningful shifts.
The bottom line
Mental health apps have a trust problem. Most are built for good days, and research confirms many share data with third parties without adequate disclosure. They collect broadly, process opaquely, and explain nothing.
Steadyline is built on the opposite assumptions: collect minimally, anonymize before processing, be explicit about what AI sees, and give you real controls.
If you’re evaluating Steadyline and have specific questions about data handling that this post doesn’t answer, email me. I’ll answer directly and, if the question is a good one, add it here.
Your data is yours. It knows things before you do, and the AI is there to help you understand it, not to consume it.
Related reading:
- What AI Should and Shouldn’t Do in Mental Health Apps
- I Built a Mood Tracker Because Nothing Else Took It Seriously
Steadyline is available on Android. The full Privacy Policy is accessible from the Settings screen within the app.
Frequently Asked Questions
Are mental health apps safe for personal data?
Not all mental health apps are safe. Research shows many apps share data with third parties or lack proper encryption. Look for apps that use on-device processing, offer data export and deletion, and clearly state what data they collect and how it is stored.
Does Steadyline sell user data?
No. Steadyline does not sell, share, or monetize user data in any way. Revenue comes exclusively from subscriptions. Your mental health data stays between you and your device, with encrypted cloud sync only for backup and cross-device access.
Can I delete my data from Steadyline?
Yes. Steadyline provides full data deletion at any time through the app settings. You can also export all your data before deleting it. When you request deletion, all data is permanently removed from both local storage and cloud servers.
Does Steadyline use my data to train AI models?
No. Steadyline uses inference-only API calls, meaning your data is processed to generate insights but never used to train or fine-tune any AI model. The AI providers receive only anonymized aggregates like mood averages and theme labels, not your raw journal entries or personal details.
Disclaimer: This article is based on personal experience, not medical advice. I am not a doctor or licensed therapist. If you live with bipolar disorder or another mental health condition, please work with a qualified psychiatrist. In crisis, contact the 988 Suicide and Crisis Lifeline (call or text 988) or Crisis Text Line (text HOME to 741741).
Try Steadyline
Track mood, energy, sleep, and stability with AI pattern detection. 30-day free trial.