
How SAM Keeps Your Mental Health Data Safe While Using AI

Mental health data is among the most sensitive data you can share. Here's exactly what SAM does — and doesn't do — to protect it when AI is involved.

Ravi Mishra · 9 min read

Mental health data is not ordinary data.

It’s the log entry you wrote at 2 AM when you couldn’t sleep. The mood score that tracked a depressive episode. The medication log that shows what you’re taking and when. If any of that leaked — or got used in ways you didn’t agree to — the damage would be personal in a way that a compromised credit card simply isn’t.

I built SAM knowing this. And I’ve made architectural decisions at every layer of the stack to ensure that using AI features in the app doesn’t mean handing your most vulnerable moments to a third party. This is a transparent account of exactly how it works.


The threat model I actually care about

Before getting into solutions, let me be specific about what I’m protecting against:

  1. Your raw diary entries or mood logs reaching AI training datasets — the most common concern, and a legitimate one
  2. Identifiable data leaving your device when it doesn’t have to — why send it to the cloud if on-device works?
  3. Third-party AI providers seeing more than necessary — if they need to process something, they should get the minimum needed to do the job
  4. Data breach exposure — if SAM’s backend were ever compromised, what’s there to steal?
  5. Opaque data practices — you should always be able to know what’s happening with your data

Most wellness apps fail on at least one of these. Here’s where SAM stands on each.


Layer 1: On-device AI inference comes first

The most privacy-preserving thing you can do with AI is run it locally. No network request means no possibility of data leaving the device.

SAM uses an InferenceOrchestrator that selects the AI provider for each task based on a clear hierarchy:

On-device model
    ↓  (if task complexity exceeds local capability)
API provider with anonymized context
    ↓  (never)
Raw PII to external provider

Concretely, tasks like:

  • Generating a daily nudge based on your recent mood average
  • Classifying the tone of a log entry (positive / neutral / concerning)
  • Suggesting a breathing exercise based on your current stability score

…are handled entirely on-device using a quantized model. Your log entries never leave your phone for these features.

The on-device path is the default. Cloud AI is the exception, used only when the task genuinely requires it (e.g., generating a multi-paragraph weekly insight narrative).
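The selection hierarchy above can be sketched roughly as follows. `InferenceOrchestrator` is the name used in the app, but the task type, the complexity threshold, and the two-value provider enum here are illustrative assumptions, not SAM's actual implementation:

```kotlin
// Illustrative sketch of the provider-selection hierarchy described above.
// The AiTask shape, the complexity threshold, and the enum values are
// assumptions for this example — not SAM's real code.

enum class Provider { ON_DEVICE, CLOUD_ANONYMIZED }
// Note: there is deliberately no "raw PII to external provider" variant.

data class AiTask(val name: String, val complexity: Int)

class InferenceOrchestrator(private val localComplexityLimit: Int = 3) {
    // On-device first; cloud (with anonymized context only) is the
    // exception for tasks that exceed local model capability.
    fun selectProvider(task: AiTask): Provider =
        if (task.complexity <= localComplexityLimit) Provider.ON_DEVICE
        else Provider.CLOUD_ANONYMIZED
}
```

A daily nudge would resolve to `ON_DEVICE`; a multi-paragraph weekly narrative would resolve to `CLOUD_ANONYMIZED`. The point of encoding the hierarchy as a type is that there is no code path that can return a "raw data to cloud" option.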


Layer 2: Supabase Row Level Security — your data is yours, period

All data synced to the cloud lives in Supabase (Postgres). The critical protection here isn’t encryption (though that’s present too) — it’s Row Level Security (RLS).

Every table that holds user data has policies like this:

-- RLS must be explicitly enabled, or policies have no effect
alter table mood_logs enable row level security;

-- Users can only ever read or write their own rows
create policy "users_own_data"
  on mood_logs
  for all
  using (auth.uid() = user_id)
  with check (auth.uid() = user_id);

What this means in practice: even if there were a bug in the app’s API layer that somehow constructed a query to fetch another user’s logs, Postgres would refuse it at the database level. There is no admin backdoor that can accidentally return your data in a query targeting someone else.

This isn’t a checkbox — it’s the actual security boundary for multi-tenant data isolation.


Layer 3: What actually gets sent to AI APIs — and how

For features that need cloud AI (like the weekly insights summary), SAM does not send your raw log entries. It sends a structured, anonymized context object.

Here’s the shape of what an AI provider actually receives:

{
  "period": "2026-02-17 to 2026-02-23",
  "mood_average": 6.2,
  "mood_trend": "improving",
  "energy_average": 5.8,
  "log_count": 6,
  "notable_themes": ["work stress", "sleep disruption", "social connection"],
  "stability_score": 71,
  "medication_adherence_pct": 92
}

What is not in that payload:

  • Your name or any account identifier
  • The actual text of your log entries
  • Dates specific enough to be individually identifying
  • Any Health Connect data

The AI sees aggregate statistics and theme labels — not a window into your personal writing.

The theme extraction itself (turning “I had a terrible meeting today and couldn’t focus” into “work stress”) happens locally, on-device, before anything hits the network.
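Deriving that payload can be sketched as an aggregation step that structurally cannot leak entry text. `MoodLog` and `buildWeeklyContext` are hypothetical names for this example; the field names mirror the JSON shown above, but the builder logic is an assumption:

```kotlin
// Illustrative sketch: building the anonymized weekly context from raw logs.
// MoodLog and WeeklyContext are hypothetical types for this example;
// the output fields mirror the payload shown above.

data class MoodLog(
    val mood: Double,          // 0–10 self-reported score
    val energy: Double,
    val text: String,          // raw entry text — stays on device
    val themes: List<String>   // extracted locally, before any network call
)

data class WeeklyContext(
    val moodAverage: Double,
    val moodTrend: String,
    val energyAverage: Double,
    val logCount: Int,
    val notableThemes: List<String>
)

fun buildWeeklyContext(logs: List<MoodLog>): WeeklyContext {
    val half = logs.size / 2
    val earlier = logs.take(half).map { it.mood }.average()
    val later = logs.drop(half).map { it.mood }.average()
    return WeeklyContext(
        moodAverage = logs.map { it.mood }.average(),
        moodTrend = when {
            later > earlier -> "improving"
            later < earlier -> "declining"
            else -> "stable"
        },
        energyAverage = logs.map { it.energy }.average(),
        logCount = logs.size,
        // Theme labels only — MoodLog.text is never read here.
        notableThemes = logs.flatMap { it.themes }.distinct()
    )
}
```

Because `WeeklyContext` has no field that could hold entry text, "don't send raw entries" is enforced by the type system rather than by discipline at the call site.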


Layer 4: Health Connect data never leaves the device

SAM integrates with Android Health Connect to optionally correlate health metrics (steps, sleep, heart rate) with your mood logs. This feature exists purely locally.

Health Connect data is:

  • Read on-device only
  • Used only for local correlation analysis
  • Never synced to Supabase
  • Never included in any AI API request

The correlation engine — “your mood scores are 1.4 points higher on days you walk more than 7,000 steps” — runs entirely in the app. That insight is valuable because it’s personalized; it’s also private because it never needs to leave your phone.
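A minimal version of that on-device correlation can be sketched like this. `DayRecord` and the function are hypothetical; the 7,000-step threshold follows the example in the text:

```kotlin
// Illustrative sketch of the on-device step/mood correlation described above.
// DayRecord and moodLiftAboveStepThreshold are hypothetical names; the
// 7,000-step threshold matches the example insight in the text.

data class DayRecord(val steps: Int, val mood: Double)

// Average mood difference between high-step and low-step days.
// Returns null when either group is empty (no comparison possible).
fun moodLiftAboveStepThreshold(
    days: List<DayRecord>,
    threshold: Int = 7_000
): Double? {
    val (active, rest) = days.partition { it.steps > threshold }
    if (active.isEmpty() || rest.isEmpty()) return null
    return active.map { it.mood }.average() - rest.map { it.mood }.average()
}
```

Everything this function touches comes from Health Connect reads and local mood logs, so the resulting insight never needs a network round trip.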


Layer 5: Transparent controls, not hidden toggles

I think app privacy settings are usually theatre: a buried “data sharing” toggle that most users never find, defaulted to “on.”

SAM takes a different approach. The Privacy & Transparency screen shows you:

  • Which AI provider processed your last request and what context was sent
  • Whether on-device or cloud inference was used for each feature type
  • A complete log of data sync events to Supabase, with timestamps
  • A one-tap “delete everything” that triggers deletion on both the local SQLDelight database and all Supabase tables associated with your account

This isn’t just for users who care about privacy. It’s for anyone who wants to understand what an AI-powered app is doing with their data. That should be anyone who uses one.
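The one-tap delete can be sketched as a two-step flow. `LocalStore` and `RemoteStore` are hypothetical interfaces standing in for the SQLDelight database and the Supabase client, and the remote-first ordering is a design choice for this sketch, not necessarily SAM's:

```kotlin
// Illustrative sketch of the "delete everything" flow. LocalStore and
// RemoteStore are hypothetical stand-ins for the SQLDelight database and
// the Supabase client; the ordering is a design choice for this sketch.

interface LocalStore { fun deleteAll() }
interface RemoteStore { fun deleteAllRowsFor(userId: String) }

class AccountEraser(
    private val local: LocalStore,
    private val remote: RemoteStore
) {
    // Remote first, so a background sync can't re-upload data the user
    // just wiped locally; then clear the on-device database.
    fun deleteEverything(userId: String) {
        remote.deleteAllRowsFor(userId)
        local.deleteAll()
    }
}
```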


What I don’t do (and why those decisions matter)

I don’t use your data to train models. SAM uses inference-only API calls to Groq and Gemini. No fine-tuning, no feedback loops that improve a model on your data. Your usage makes the app better at understanding you locally; it doesn’t feed a global model.

I don’t sell data. There’s no analytics SDK, no advertising SDK, no data broker integration. The business model is a subscription. That’s it. SAM has no financial incentive to monetize your data because the product itself is the revenue stream.

I don’t collect what I don’t need. SAM doesn’t ask for your real name. It doesn’t need it. Account identity is handled via Supabase Auth — an email or OAuth token, no PII profile beyond that.


The harder question: what about future AI features?

AI capability is moving fast. I’m planning features that will use more sophisticated AI — things like pattern recognition across months of data, or a conversational AI that can genuinely discuss your mental state over time.

My commitment for those features:

  1. On-device first, always. If a quantized model can do it acceptably, it will.
  2. Explicit opt-in for cloud AI features. No “we updated our terms” email that silently enables cloud processing.
  3. Same anonymization pipeline, stricter if anything. The more powerful the AI task, the more careful I’ll be about what context it receives.
  4. Open about what’s changing. Posts like this one, every time something meaningful shifts.

The bottom line

Mental health apps have a trust problem, and most of them have earned it. They collect broadly, process opaquely, and explain nothing.

SAM is built on the opposite assumptions: collect minimally, process locally when possible, be explicit about the exceptions, and give you real controls.

If you’re evaluating SAM and have specific questions about data handling that this post doesn’t answer — email me. I’ll answer directly and, if the question is good, add it here.

Your data is yours. The AI is there to help you understand it, not to consume it.


SAM is available on Android. The full Privacy Policy is accessible from the Settings screen within the app.


Try Steadyline

AI-powered mental health tracking. Private by design. Free to start.

Get it on Google Play