
Are AI Therapy Notes HIPAA Compliant? What Every Therapist Needs to Know

13 min read
HIPAA · AI · Compliance · Privacy

You became a therapist to help people, not to navigate federal privacy regulations. But if you are considering AI-assisted progress notes -- or already using them -- HIPAA compliance is not something you can afford to skim past.

This is the number-one concern therapists raise about AI documentation tools, and it is a legitimate one. You have spent your career protecting client confidentiality. The idea of feeding session content into an AI model somewhere in the cloud, run by a company you have never heard of, should make you uncomfortable. That discomfort is clinically appropriate.

Here is the good news: AI therapy notes can absolutely be HIPAA compliant. But "can be" is doing a lot of work in that sentence. Whether a specific tool meets the standard depends entirely on how it is built, how data flows through it, and what agreements are in place between you, the vendor, and their infrastructure providers.

This post will help you evaluate any AI notes tool -- not by taking a marketing page at face value, but by understanding what to look for and what questions to ask.

Key Takeaway

AI therapy notes can be HIPAA compliant, but only if the vendor has BAAs covering the entire data chain (you, the vendor, and the AI model provider), uses zero-retention AI processing, and encrypts data in transit and at rest. Use the vendor evaluation checklist in this post to verify compliance before trusting any tool with client data.

What HIPAA Actually Requires for AI Notes

HIPAA is not a single rule. It is a framework built on three categories of safeguards, and all three apply when AI processes protected health information (PHI).

Administrative Safeguards: The BAA Is Non-Negotiable

A Business Associate Agreement (BAA) is a legal contract between you (the covered entity) and any vendor that handles PHI on your behalf. If your AI notes tool processes, transmits, or stores any client information, the vendor is a business associate. Period.

No BAA, no deal. This is not a grey area. If a vendor will not sign a BAA, stop the evaluation there. It does not matter how good the AI is, how cheap the tool is, or how many therapists in your consultation group are using it. Without a BAA, you are personally liable for any breach involving client data processed through that tool.

But the BAA chain does not stop with you and the vendor. It extends to every downstream provider that touches PHI:

  • You and the AI notes vendor -- required.
  • The vendor and their cloud infrastructure provider (AWS, Google Cloud, Azure) -- required.
  • The vendor and their AI model provider -- required, if PHI passes through the model. This is the link most therapists never think to ask about.

If any single link in that chain lacks a BAA, the entire chain is non-compliant. A vendor can have a BAA with you and a BAA with AWS, but if they are sending your session data to an AI model provider without a BAA covering that relationship, you have a gap.

Physical Safeguards: Where Is the Data?

"The cloud" is not a place. It is a network of data centers run by specific companies in specific locations. When an AI tool processes your session content, that data exists on physical servers somewhere.

The major cloud providers -- AWS, Google Cloud, and Microsoft Azure -- all offer healthcare compliance certifications and will sign BAAs. The question is whether your vendor is actually using one of these providers with appropriate controls enabled, or stitching together a cheaper stack that cuts corners. Ask which cloud provider hosts the service, and ask which region.

Technical Safeguards: Encryption, Access Controls, and Audit Logs

  • Encryption in transit: TLS 1.2 or higher between your device and the vendor's servers. Standard for modern web applications, but verify it.
  • Encryption at rest: AES-256 or equivalent for stored data. Even if someone gains unauthorized access to the storage layer, the data is unreadable without encryption keys.
  • Access controls: Role-based controls limiting which employees can access client data. Ideally, no human at the vendor company ever sees your client's information.
  • Audit logging: Logs of who accessed what data and when. Essential for breach investigation and demonstrating compliance.
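"Verify it" does not have to be an empty instruction. If you (or an IT consultant you trust) want to check the transit-encryption claim yourself, Python's standard library can report the TLS version a vendor's server actually negotiates. This is a minimal sketch, and the vendor hostname is a placeholder:

```python
import socket
import ssl

def make_strict_client_context() -> ssl.SSLContext:
    """Client context that refuses anything older than TLS 1.2."""
    ctx = ssl.create_default_context()  # certificate verification on by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to a server and report the TLS version it negotiates."""
    ctx = make_strict_client_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# negotiated_tls_version("app.vendor.example")  # hypothetical vendor domain
```

If the connection fails with a strict context like this, the server is offering something older than TLS 1.2, which is your answer.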

"HIPAA Compliant" vs. "HIPAA Eligible"

There is no such thing as a HIPAA certification. No government agency certifies software as "HIPAA compliant." Any vendor claiming to be "HIPAA certified" is either confused or misleading you.

Compliance is a set of practices and agreements, not a product feature. "HIPAA eligible" is a related but different term -- cloud providers like AWS use it to describe services that can be configured for HIPAA compliance but are not automatically compliant out of the box. AWS S3, for example, is HIPAA eligible. But an S3 bucket left publicly readable, or running without access controls and audit logging, is not compliant just because AWS signed a BAA. The vendor has to configure it correctly.
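To make "configure it correctly" concrete, here is one small piece of that work: enforcing server-side encryption on an S3 bucket. The dict below mirrors the shape the AWS SDK's put_bucket_encryption call accepts; the choice of KMS-managed keys is illustrative, not a recommendation:

```python
# One configuration step among many: default encryption at rest for a bucket.
# This is the document boto3's s3.put_bucket_encryption(
#     Bucket=..., ServerSideEncryptionConfiguration=encryption_config)
# call expects. Key management choices here are illustrative.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",  # or "AES256" for S3-managed keys
            },
            "BucketKeyEnabled": True,  # reduces KMS request volume
        }
    ]
}
```

Encryption settings are only one line item; access policies, logging, and network controls all need the same deliberate configuration.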

When a vendor tells you their tool is "HIPAA compliant," what you actually want to know is: Have you implemented the required safeguards? Do you have BAAs covering the entire data chain? Can you demonstrate this?

Where AI Notes Tools Actually Put Your Client Data

Understanding the data flow is the most important step in evaluating any AI notes tool. The typical path:

  1. Input: You provide session content -- live transcription, post-session dictation, or typed notes.
  2. Transmission: Content travels to the vendor's servers.
  3. AI Processing: The vendor sends session content to an AI model, which generates a draft note.
  4. Storage: The note is stored in the vendor's database.
  5. Retention (or not): The AI model provider may or may not retain a copy of the session content.

Step 5 is where most compliance risk hides.

Zero Retention vs. Training Data

Some AI providers retain data that passes through their APIs and may use it to improve their models. If your client's session content is being retained for training purposes, that is a HIPAA problem regardless of what the vendor's marketing page says.

The key distinction:

  • Enterprise AI services like AWS Bedrock, Azure OpenAI Service, and Google Cloud's Vertex AI offer zero-retention agreements. Your data passes through the model and is not stored, logged, or used for training. These services are designed for exactly this kind of sensitive-data use case, and the providers will sign BAAs covering them.
  • Consumer-facing AI APIs may have different terms. Some retain data for 30 days for abuse monitoring. Some use API data for model improvement unless you explicitly opt out. The terms change, and the burden is on the vendor to have a current, enforceable agreement.
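One way a careful vendor operationalizes this distinction: the backend refuses outright to send PHI to any endpoint outside its BAA-covered, zero-retention set. A minimal sketch -- the allowlist and function are hypothetical, though the Bedrock runtime hostname is the real one for us-east-1:

```python
from urllib.parse import urlparse

# Hypothetical allowlist: only endpoints covered by a signed BAA and a
# zero-retention agreement. Everything else is refused, by construction.
PHI_SAFE_HOSTS = {
    "bedrock-runtime.us-east-1.amazonaws.com",  # AWS Bedrock runtime
}

def assert_phi_safe(url: str) -> None:
    """Raise before any PHI leaves for an uncovered endpoint."""
    host = urlparse(url).hostname
    if host not in PHI_SAFE_HOSTS:
        raise ValueError(f"Refusing to send PHI to uncovered endpoint: {host}")
```

A guard like this is cheap insurance against a developer wiring a consumer API into a PHI code path by accident.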

If a vendor cannot tell you exactly which AI model provider they use and whether there is a zero-retention agreement in place, that should end the conversation.

The Live Transcription Risk

One data-flow question that deserves special attention: how does the session content get into the system in the first place?

Some AI notes tools use live transcription -- they listen to your session in real time, streaming audio to a transcription service and then to the AI model. This means PHI is being transmitted continuously throughout the session, often through multiple third-party services (a transcription API and an AI model API, which may be different providers).

Live transcription is not inherently non-compliant, but it does expand the attack surface. More data moves through more systems over a longer period. It also raises consent and recording concerns in two-party consent states.

A safer architecture, from a compliance standpoint, is post-session processing. The therapist provides session content after the session ends -- through dictation, typed notes, or structured input -- and the AI generates the note from that input. Less data in transit, fewer systems involved, and the therapist controls exactly what information enters the system.

Live transcription can be done compliantly, but the compliance burden is higher. If a vendor uses it, your questions about data flow, encryption, and retention become even more important.

Vendor Evaluation Checklist

Use these questions to evaluate any AI notes tool. Print this list. Bring it to a demo call. Send it in an email. Any vendor worth trusting will answer every one of these without hesitation.

  1. "Do you have a signed BAA? Can I review it before I sign up?" A yes is the minimum. If they hesitate, walk away.

  2. "Which AI model provider do you use, and do you have a BAA with them?" You want a specific answer -- "We use AWS Bedrock" or "We use Azure OpenAI" -- not "We use industry-leading AI technology."

  3. "Does the AI provider retain any session data? Is it used for model training?" The correct answer is no. Zero retention. Not used for training. Ask for documentation.

  4. "Is data encrypted in transit and at rest? What standards?" You want to hear TLS 1.2+ in transit and AES-256 at rest, at minimum.

  5. "Where is my data stored? Which cloud provider and which region?" A compliant vendor will give you a specific answer. "US-East on AWS" or similar.

  6. "What happens to audio, transcripts, or dictation after the note is generated?" Ideally, it is deleted immediately after processing. If it is retained, ask why and for how long.

  7. "Does any human at your company ever see my client data?" The answer should be no under normal operations. Some vendors have exception processes for support or debugging -- ask what those are.

  8. "Can I export or delete all my data at any time?" Data portability and the right to delete are important. You should never be locked into a platform that holds your client data hostage.

For transparency: TherapyDesk answers these questions as follows. It runs on AWS infrastructure with a BAA, uses AWS Bedrock for AI processing with zero data retention, encrypts all data in transit and at rest, and never routes PHI through consumer AI models. Session content is provided through post-session dictation -- not live transcription -- and notes are generated as drafts that the therapist reviews and finalizes before anything is saved to the client record.

The Draft Note Model: Why It Matters

How AI-generated notes are handled after generation is both a compliance issue and a clinical one.

AI notes should always be drafts. The therapist reviews, edits, and signs off before the note becomes part of the clinical record. This is important for two reasons:

From a compliance perspective, the draft model means the therapist is the gatekeeper. If the AI misinterprets something or includes information that should not be in the record, the therapist catches it before it is finalized. This is a safeguard against both inaccuracy and inadvertent PHI exposure.

From a clinical perspective, you are the clinician of record. Your name is on the note. An AI-generated draft is a starting point -- a way to get from a blank page to a 90% complete note in seconds instead of minutes. But the final document is yours, reflecting your clinical judgment.

If a tool auto-finalizes notes without therapist review, that should raise both clinical and legal concerns. The therapist must remain the final authority on what goes into the record.

Common Misconceptions About AI and HIPAA

"Using AI for therapy notes automatically violates HIPAA"

Wrong. A properly architected AI notes tool with BAAs, zero-retention processing, and encryption is no less compliant than any cloud-based EHR you already use. Your current EHR stores PHI on someone else's servers. Your telehealth platform transmits PHI over the internet. AI notes tools operate on the same infrastructure with the same safeguards. The question is not whether AI can be compliant -- it is whether a specific tool has done the work.

"Free AI tools can't be HIPAA compliant"

Price and compliance are not directly correlated. A free tool could be compliant if it has the right agreements in place. The concern is the business model: if you are not paying, the vendor needs revenue from somewhere. If that involves your data -- selling it, using it for training, or monetizing it indirectly -- that is a compliance problem. But "free" does not automatically mean non-compliant. Ask the same questions you would ask any paid vendor.

"My state has stricter rules, so HIPAA doesn't matter"

Some states do have privacy requirements that exceed HIPAA. California's Confidentiality of Medical Information Act (CMIA), for example, or state laws around mental health records and minor consent -- plus the federal 42 CFR Part 2 rules for substance use disorder treatment records. These are real, and they matter. But they are additive -- HIPAA compliance is the floor, not the ceiling. You still need a HIPAA-compliant tool; you may also need to verify that it meets your state's additional requirements. Check your state licensing board's current guidance on AI in clinical practice. This area is evolving quickly.

"If the AI is listening to my sessions, that's wiretapping"

Recording laws vary by state -- some require all-party consent, others only one-party consent. The safest approach: include AI-assisted documentation in your informed consent process. Explain to clients what information is captured and how it is protected, and get written consent. Good practice regardless of jurisdiction.

"I can just use ChatGPT to write my notes"

Typing session content into ChatGPT, Claude.ai, or any consumer AI chatbot is almost certainly a HIPAA violation. These consumer products are not covered by BAAs, and your input may be retained for training. The enterprise versions of these platforms (through cloud provider integrations) are a different story, but the consumer web interfaces are not appropriate for PHI. This is the most important misconception to correct, because it is the one therapists are most likely to act on.

Clinical Accuracy: The Other Half of Trust

HIPAA compliance is necessary but not sufficient. A tool can be perfectly compliant and still produce notes that you would never sign.

Generic AI -- the kind that does not understand therapeutic frameworks -- can produce a HIPAA-compliant note that summarizes a session without identifying the clinical work that happened. It might capture what was discussed without recognizing the cognitive distortions you identified, the parts work you facilitated, or the desensitization phase you completed.

A note that is safe to store but not safe to sign does not actually solve the documentation problem. When evaluating AI notes tools, compliance is the first filter. Clinical quality is the second. Both matter.

HIPAA compliance gets your data protected. Modality awareness gets your notes right. TherapyDesk was built to do both -- using AI trained on clinical treatment manuals to generate notes that speak in CBT, IFS, EMDR, DBT, ACT, or psychodynamic vocabulary, depending on how you practice. If you are evaluating AI notes tools and want to see how that works alongside the compliance architecture described in this post, schedule a demo.

The Bottom Line

AI therapy notes can be HIPAA compliant. But you cannot take any vendor's word for it -- including ours. Use the checklist in this post. Ask the questions. Demand specific answers. Any vendor that gets evasive when you ask about BAAs, data retention, or AI model providers is telling you something important.

Your instinct to protect your clients' privacy is correct. Channel it into due diligence, not avoidance. Compliance is table stakes. After you have confirmed that a tool meets the standard, evaluate the clinical quality of what it produces. The best AI notes tool is one you trust enough to use -- and one that generates drafts good enough to sign.