AI, Healthcare, and Trans Futures: Charting a Path Beyond Administrative Erasure
As machine learning and predictive algorithms become the scaffolding of modern healthcare, we can’t ignore the ways these tools inherit and amplify the biases of the world around them. In the last year we’ve watched a major insurer mine call logs and metadata to classify a trans woman as a “threat” for challenging her care denial; we’ve seen risk scores decide whose surgery gets approved and who gets kicked to the curb. This isn’t distant science fiction; it’s happening right now, in our own communities.
At their best, AI systems could help flag patterns of discrimination, streamline access to gender‑affirming care, or surface unseen symptoms that human doctors miss. At their worst, they become black boxes that encode transphobia, racism, and ableism into the very logic of care. When a health insurer uses an algorithm to mark certain patients as “high risk” based on their identity or advocacy, that’s not innovation—that’s administrative erasure in a new, shinier wrapper.
What does a future beyond this look like? It starts with transparency. Patients have a right to know when algorithms are making decisions about their care and what data is being fed into those models. Insurers and hospitals must be held accountable for the outcomes of their automated systems. And we, as a community, need to resist the myth that data is neutral. Data is always collected, cleaned, and interpreted by humans with their own agendas; without oversight it can reproduce harm at scale.
This isn’t a call to abandon technology. It’s a call to reclaim it. Imagine AI that actually serves trans people: recommendation engines that connect us to affirming providers, predictive models that anticipate hormone shortages and reroute supply, or chatbots that offer real‑time support without judgment. These are all possible—but only if the people most affected are at the table designing and governing these systems.
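To make that concrete, here’s a minimal sketch in Python of what the matching logic behind an affirming-provider recommendation engine could look like. Everything in it, from the clinic names to the community-vetted “affirming” flag, is a hypothetical illustration, not an existing service.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    specialty: str
    affirming: bool        # vetted by community review, not self-attested
    accepts_insurance: bool
    distance_km: float

# Hypothetical directory entries; all names and values are illustrative.
directory = [
    Provider("Clinic A", "endocrinology", True, True, 12.0),
    Provider("Clinic B", "endocrinology", False, True, 3.0),
    Provider("Clinic C", "primary care", True, False, 8.5),
]

def recommend(providers, specialty, max_km=25.0):
    """Return community-vetted affirming providers in a specialty
    who take insurance, nearest first."""
    matches = [
        p for p in providers
        if p.affirming
        and p.accepts_insurance
        and p.specialty == specialty
        and p.distance_km <= max_km
    ]
    return sorted(matches, key=lambda p: p.distance_km)

for p in recommend(directory, "endocrinology"):
    print(p.name, f"{p.distance_km} km")   # Clinic A 12.0 km
```

The hard part isn’t this code; it’s who vets the “affirming” flag and who governs the directory, which is exactly why the people most affected need to be at the table.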
We also have to get loud about policy. Laws like HIPAA were never built for the age of predictive policing; we need updates that explicitly prohibit the sharing of sensitive health data with law enforcement absent due process. We need regulatory frameworks that audit algorithms for bias and provide mechanisms for patients to contest automated decisions. And we need to fund grassroots tech projects that prioritize community control over corporate profit.
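What would “auditing algorithms for bias” actually involve? At its simplest, comparing outcome rates across groups. Here’s a minimal sketch assuming hypothetical claim records with a decision field and a self-reported group field; the group labels, field names, and the four-fifths threshold (a screening heuristic borrowed from employment law, not a healthcare standard) are all illustrative assumptions.

```python
from collections import defaultdict

# Hypothetical claim records; groups and fields are illustrative.
claims = [
    {"group": "trans", "approved": True},
    {"group": "trans", "approved": False},
    {"group": "trans", "approved": False},
    {"group": "cis", "approved": True},
    {"group": "cis", "approved": True},
    {"group": "cis", "approved": False},
]

def approval_rates(records):
    """Approval rate per group: approvals / total claims."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        approvals[r["group"]] += r["approved"]   # bool counts as 0/1
    return {g: approvals[g] / totals[g] for g in totals}

rates = approval_rates(claims)
print({g: round(v, 2) for g, v in rates.items()})  # {'trans': 0.33, 'cis': 0.67}

# Disparate-impact ratio: lowest group rate over highest group rate.
ratio = min(rates.values()) / max(rates.values())
print(f"ratio: {ratio:.2f}")                       # ratio: 0.50

# Under the four-fifths heuristic, a ratio below 0.8 is a red flag
# that warrants investigating the model and the data behind it.
if ratio < 0.8:
    print("Potential disparate impact; audit the model and its inputs.")
```

A real audit would need to condition on medical need and look at more than one metric, since a single ratio can hide as much as it reveals; the point is that checks like this are easy to run once regulators can compel access to the data.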
Ultimately, AI in healthcare can become either a dystopian surveillance apparatus or a tool for liberation. Which path we take depends on us. If we stay passive, insurers will keep deploying opaque risk scores that decide who deserves care. If we organize, educate, and demand accountability, we can harness technology to amplify our resilience and creativity.
As we build this archive of administrative erasure, let’s also build a blueprint for something better. Algorithms don’t have to erase us; with intention and care, they can help us write ourselves back into the story.