Surveillance, Metadata & AI Profiling in Healthcare

Where data becomes danger.

This category breaks down how metadata, call logs, AI flagging, and internal systems were quietly used to profile, escalate, and erase patients from care.

From unauthorized surveillance to algorithmic bias, these posts expose the hidden infrastructure used to dehumanize vulnerable individuals—especially transgender and disabled patients.

Surveillance infrastructure

Parallel Violations, Parallel Survivors: How the Mangione and Dorn Cases Reveal a National Pattern of PHI Weaponization

Two very different people. Two very different legal arenas. One unmistakable pattern.

In July 2025, defense attorneys for Luigi Mangione filed a blistering court motion accusing Aetna—owned by UnitedHealth Group—of unlawfully disclosing their client’s protected health information (PHI) to prosecutors. The information included mental health and medication history, and was handed over without a valid subpoena, without Mangione’s consent, and without meeting the narrow legal exceptions outlined under HIPAA.

Meanwhile, a separate but equally devastating story was unfolding just miles away. In a forthcoming civil action, trans woman and Medicaid patient Samara Dorn is preparing to sue UnitedHealthcare of Colorado, Rocky Mountain Health Plans, and UnitedHealth Group for disclosing her PHI—including gender identity, surgical history, and metadata-laced call logs—to local law enforcement and, chillingly, to the Department of Homeland Security.

The disclosure occurred thirty-five days after final contact. No warrant. No subpoena. No clinical emergency. Just a bureaucratic escalation, fueled by metadata and convenience, masquerading as concern.

One Corporate Entity, Two Victims

While the legal details differ, both cases trace back to the same empire: UnitedHealth Group. In Mangione’s case, Aetna’s unlawful disclosure placed him at heightened risk in a capital murder prosecution. The leaked information was used to paint him as unstable and dangerous—shaping a death penalty narrative rooted not in evidence, but in psychiatric speculation.

In Dorn’s case, the disclosure to law enforcement created a parallel narrative: not of criminal guilt, but of institutional threat. She was flagged not because she committed a crime, but because her voice, identity, and digital footprint were deemed inconvenient. Through the use of backend tagging, AI-generated profiling, and misclassification, UnitedHealthcare constructed a narrative of risk that never existed—then passed that narrative on to police and federal agencies.

This Was Not Care. It Was Control.

Both cases demonstrate a catastrophic breach of trust and legality—not because the PHI disclosures failed to help, but because they were never meant to help. In each instance, patient records were disclosed for the convenience and liability protection of the insurer, not for the safety of the individual or the public.

In Mangione’s case, the mental health data was handed over after investigators began seeking the death penalty—raising serious questions about motive, legality, and institutional betrayal.

In Dorn’s case, the PHI disclosure occurred more than a month after any clinical interaction, in violation of HIPAA’s “imminent threat” standard under 45 C.F.R. § 164.512(j). Her data was used not to intervene in an emergency, but to justify reputational abandonment and surveillance escalation.

Administrative Erasure in Action

Dorn’s civil complaint outlines how metadata—call tags, risk flags, internal notes—was used to construct a false paper trail. This digital narrative was then used to reclassify her from “patient” to “public threat,” providing justification for disclosure to law enforcement and DHS. This process, which she calls administrative erasure, mirrors the logic in Mangione’s case: that PHI can be converted into reputational ammunition by the same system that claims to protect it.
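
To make that mechanism concrete, here is a minimal, entirely hypothetical sketch (in Python) of how backend tags could be rolled up into a reclassification. Every field name, tag, weight, and threshold below is invented for illustration; none of it is drawn from UnitedHealthcare’s systems or from the complaint itself.

```python
# Hypothetical sketch only: how metadata tags could be converted into a
# reclassification. All tags, weights, and thresholds are invented and are
# not taken from any real insurer's system.

from dataclasses import dataclass, field

@dataclass
class CallRecord:
    member_id: str
    tags: list[str] = field(default_factory=list)  # backend tags attached by staff or software

# Invented weights for behavioral tags.
TAG_WEIGHTS = {
    "escalated_tone": 2,
    "repeat_caller": 1,
    "grievance_filed": 3,
    "mentions_media": 4,
}

def classify_member(calls: list[CallRecord], threshold: int = 6) -> str:
    """Sum tag weights across calls; above the threshold the member is
    silently relabeled. No clinician, no notice, no appeal."""
    score = sum(TAG_WEIGHTS.get(tag, 0) for call in calls for tag in call.tags)
    return "public_threat" if score >= threshold else "patient"

# Ordinary advocacy behavior is enough to cross the invented threshold.
history = [
    CallRecord("M-001", ["repeat_caller", "grievance_filed"]),
    CallRecord("M-001", ["escalated_tone", "mentions_media"]),
]
print(classify_member(history))  # prints "public_threat" (score 1+3+2+4 = 10)
```

The point of the sketch is the shape of the logic: behavioral metadata accumulates silently, crosses an arbitrary threshold, and the label changes without any clinician reviewing it and without any notice to the patient.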

What links these two cases is not merely the entity that caused the harm. It’s the infrastructure—the policies, the tools, the logic—that converts care into containment, healing into harm, and records into weapons.

One Shared Fight

These cases are not isolated. They are flashpoints in a growing national pattern: vulnerable individuals being profiled, criminalized, or erased under the guise of healthcare compliance.

Aetna gave PHI to prosecutors.

UnitedHealthcare gave PHI to police.

Both actions occurred without proper legal justification.

Both targeted those already marginalized.

Both used medical information to destroy, not protect.

Luigi Mangione is currently fighting for his life in a criminal courtroom. Samara Dorn is preparing to fight for hers in a civil one. Their stories are different—but the machine harming them is the same.

Read the Motion

We encourage the public, the press, and policymakers to read the Mangione defense team's powerful motion for themselves. It is available here:

📄 Download the Motion – 2025.07.17 HIPAA Violation – Mangione Defense (Google Drive)

This document is more than a legal filing. It is a warning.

Closing Statement

The idea that PHI can be quietly weaponized behind closed doors should terrify everyone. What happened to Luigi Mangione could happen to any criminal defendant. What happened to Samara Dorn could happen to any trans patient, any disabled person, or any Medicaid recipient who speaks too loudly.

We are no longer talking about privacy. We are talking about targeting.

We are no longer talking about compliance. We are talking about complicity.

We are no longer talking about care. We are talking about power.

And together, these cases demand accountability.

AI, Healthcare, and Trans Futures: Charting a Path Beyond Administrative Erasure

As machine learning and predictive algorithms become the scaffolding of modern healthcare, we can’t ignore the ways these tools inherit and amplify the biases of the world around them. In the last year we’ve watched a major insurer mine call logs and metadata to classify a trans woman as a “threat” for challenging her care denial; we’ve seen risk scores determine who gets a surgery approval and who is kicked to the curb. This isn’t some distant science fiction; it’s happening right now, in our own communities.

At their best, AI systems could help flag patterns of discrimination, streamline access to gender‑affirming care, or surface unseen symptoms that human doctors miss. At their worst, they become black boxes that encode transphobia, racism, and ableism into the very logic of care. When a health insurer uses an algorithm to mark certain patients as “high risk” based on their identity or advocacy, that’s not innovation—that’s administrative erasure in a new, shinier wrapper.
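
A toy example helps show how this happens even when identity is never an explicit input. In the sketch below (all feature names and weights are invented, not any insurer’s actual model), the score only “sees” appeals, denials, and phone calls, yet those features act as proxies for being trans, being denied contested care, and advocating for yourself.

```python
# Hypothetical illustration of how "identity-blind" features can still encode
# identity and advocacy. Feature names and weights are invented; this is not
# any insurer's actual model.

def risk_score(member: dict) -> float:
    """A toy linear score. None of the inputs is literally 'transgender' or
    'filed a complaint', but each one is a close proxy."""
    weights = {
        "appeals_filed_12mo": 0.9,        # proxies advocacy ("difficult" member)
        "prior_auth_denials": 0.6,        # proxies contested, often gender-affirming, care
        "calls_to_member_services": 0.3,  # proxies persistence
    }
    return sum(weights[k] * member.get(k, 0) for k in weights)

advocate = {"appeals_filed_12mo": 3, "prior_auth_denials": 4, "calls_to_member_services": 8}
passive  = {"appeals_filed_12mo": 0, "prior_auth_denials": 1, "calls_to_member_services": 1}

print(risk_score(advocate))  # 7.5: scored "high risk" for persistently advocating
print(risk_score(passive))   # 0.9
```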

What does a future beyond this look like? It starts with transparency. Patients have a right to know when algorithms are making decisions about their care and what data is being fed into those models. Insurers and hospitals must be held accountable for the outcomes of their automated systems. And we, as a community, need to resist the myth that data is neutral. Data is always collected, cleaned, and interpreted by humans with their own agendas; without oversight it can reproduce harm at scale.

This isn’t a call to abandon technology. It’s a call to reclaim it. Imagine AI that actually serves trans people: recommendation engines that connect us to affirming providers, predictive models that anticipate hormone shortages and reroute supply, or chatbots that offer real‑time support without judgment. These are all possible—but only if the people most affected are at the table designing and governing these systems.

We also have to get loud about policy. Laws like HIPAA were never built for the age of predictive policing; we need updates that explicitly prohibit the sharing of sensitive health data with law enforcement absent due process. We need regulatory frameworks that audit algorithms for bias and provide mechanisms for patients to contest automated decisions. And we need to fund grassroots tech projects that prioritize community control over corporate profit.
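
What could “audit algorithms for bias” look like in practice? One common starting point is a disparate-impact check: compare automated approval rates across groups and flag large gaps. The sketch below uses fabricated numbers and the familiar four-fifths rule of thumb; a real audit would be far more rigorous, but the core idea is this simple.

```python
# Minimal sketch of a disparate-impact check on automated decisions.
# The data is made up; the "four-fifths" threshold is a common rule of thumb
# in disparate-impact analysis, not a legal standard specific to insurers.

def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact(group_a: list[bool], group_b: list[bool]) -> float:
    """Ratio of the lower approval rate to the higher one."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Fabricated example: automated prior-auth decisions (True = approved).
cis_members   = [True] * 90 + [False] * 10   # 90% approved
trans_members = [True] * 55 + [False] * 45   # 55% approved

ratio = disparate_impact(trans_members, cis_members)
print(f"Disparate-impact ratio: {ratio:.2f}")  # 0.61, well below the 0.80 rule of thumb
if ratio < 0.8:
    print("Audit flag: automated decisions warrant review and a contest mechanism.")
```

A check like this is only the first step; the patient-facing piece is the contest mechanism, so that a flagged disparity leads to human review rather than another opaque score.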

Ultimately, the future of AI in healthcare can be either a dystopian surveillance apparatus or a tool for liberation. Which path we choose depends on us. If we stay passive, insurers will continue to deploy opaque risk scores that decide who is deserving of care. If we organize, educate, and demand accountability, we can harness technology to amplify our resilience and creativity.

As we build this archive of administrative erasure, let’s also build a blueprint for something better. Algorithms don’t have to erase us; with intention and care, they can help us write ourselves back into the story.

Administrative Erasure: Why Every American Should Pay Attention

Abstract

In an era of predictive policing, algorithmic triage, and privatized surveillance, a dangerous new frontier of civil rights abuse has emerged: administrative erasure.

This exposé outlines how UnitedHealthcare weaponized metadata and indirect police collaboration to erase the voice, safety, and medical autonomy of a transgender patient who dared to speak up.

Cover image featuring redacted medical code and metadata overlays


Drawing from Real Documents

This report draws from:
- 🚨 Whistleblower disclosures
- 🧠 Metadata forensics
- 🎙️ Internal voice profiling records

Together, they paint a disturbing picture of institutional denial retooled as digital erasure—and a growing threat to civil rights across the board.

This is not theoretical.
This is not speculative.


⚠️ This Is Not Hypothetical

⚖️ This Is Happening Now

Behind the curtain of managed care and "population health" are data triggers, auto-escalation protocols, and system rules that punish outliers, profile vulnerability, and silence the inconvenient.
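
One hypothetical example of such a rule, using invented numbers: a system that treats unusually frequent callers as “outliers” and escalates them automatically, so that persistence itself becomes evidence of risk.

```python
# Hypothetical sketch of an "auto-escalation" rule of the kind described above:
# flag members whose call volume is a statistical outlier, then escalate them
# automatically, with no human judgment and no clinical context.
# All numbers are invented for illustration.

from statistics import mean, stdev

def outlier_escalations(calls_per_member: dict[str, int], z_cutoff: float = 2.0) -> list[str]:
    """Return member IDs whose monthly call count sits more than z_cutoff
    standard deviations above the mean. Persistence reads as 'risk'."""
    counts = list(calls_per_member.values())
    mu, sigma = mean(counts), stdev(counts)
    return [m for m, c in calls_per_member.items()
            if sigma > 0 and (c - mu) / sigma > z_cutoff]

# A member fighting a denial calls often; the rule escalates her automatically.
volume = {"M-001": 2, "M-002": 1, "M-003": 3, "M-004": 2,
          "M-005": 2, "M-006": 1, "M-007": 3, "M-008": 14}
print(outlier_escalations(volume))  # ['M-008']
```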

A trans patient’s voice was flagged.
Her metadata was logged.
Her care was denied.
And her profile—generated not by doctors, but by algorithms—was sent to police.


📄 Download the Full Exposé

You can read the full exposé and decide for yourself:
Download PDF


Administrative Erasure – Why Every American Should Pay Attention

This isn’t just a trans story.
It’s a red flag for every American.
If data can erase one voice, it can erase many.

📍 For background documents and legal disclosures, see:
- The Evidence They Can’t Ignore
- Exhibit AA – The Whistleblower Files
- Press Room – Administrative Erasure in the Media

"> ');