Considering the impact of AI on the disability community
The tools can be helpful, but they should be used with caution
In 2024, part of North Carolina, my home state, was in the path of Hurricane Helene. With its powerful winds and torrential rain, Helene battered many communities. I was lucky I wasn’t in its path. In the aftermath of its destruction, many North Carolinians rallied in support of those affected.
In the midst of all this, an image was shared across various social media platforms of a teary-eyed girl in an orange life jacket, sitting in a rescue boat, clutching her small puppy. It sparked an emotional reaction within me before I could fully process it. Whoever generated it designed it to do so.
Sympathy hit me before I started noticing odd things about the image — certain proportions were a bit off, body parts were missing, the skin was too smooth, and the levels of detail were inconsistent. These were telltale signs of generative artificial intelligence (AI). Today, AI tools can create more impressive and realistic images.
Whether we like it or not, the age of AI is here. We’ll have to discover what that means for each of us and adapt accordingly. This, of course, includes those of us in the Charcot-Marie-Tooth disease (CMT) and broader disability communities.
A healthy skepticism
With the growing prevalence of AI tools in everyday life, I find myself mistrustful — not of the technology per se, but of the ability of large tech firms to reliably act in their users’ best interests.
Nevertheless, the usefulness of AI tools is clear across many applications, including CMT research.
As Katherine Forsey, PhD, the Charcot-Marie-Tooth Association's (CMTA) chief research officer, told me in an email, "Much of the work we fund today incorporates AI-enabled tools — whether for imaging, data analysis, gene and target discovery, outcome measures and biomarker identification, or treatment development. While AI is rarely the sole focus, it is increasingly embedded across our entire research pipeline."
In a separate email exchange, Meghan Drummond, PhD, the vice president of research and drug development at the CMT Research Foundation (CMTRF), wrote that both AI tools and machine learning techniques are force multipliers for the research teams the CMTRF partners with.
“[AI tools and machine learning] can lower costs, timelines, and increase the predictability of outcomes. That said, they are not magic. It is critical to invest in advancing these tools thoughtfully and with diligence so that we don’t overinterpret what an AI model might predict or put more confidence behind a model than is justified,” Drummond said.
Granted, many of us aren’t professional researchers with large research datasets that need interpretation. Instead, those of us in the disability community may use consumer-grade AI tools to help fill gaps in the healthcare system. Many of us don’t have easy access to physicians with deep knowledge of our rare conditions, and few of us have the technical knowledge to interpret test results or clinical jargon.
Perhaps AI technology can help democratize care. However, just as researchers are careful in their application of AI tools, everyday users should practice similar caution.
A few weeks ago, I spoke with Kenny Raymond, head of communications at CMTA. While he’s a fan of using AI both professionally and personally, he said, “[Consumer-level] AI tools are all basically large language models designed to, first, recognize text and then try to predict what comes next. They don’t care about correctness or accuracy. And they should definitely not be used as a source of truth.”
Ultimately, just as I’ve learned to employ a healthy skepticism toward images that appear in my social network feeds these days, it’s perhaps more important than ever to verify information as we fly into this new world of AI technology. And many tech firms still need to earn our trust.
Note: Charcot-Marie-Tooth News is strictly a news and information website about the disease. It does not provide medical advice, diagnosis, or treatment. This content is not intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or other qualified health provider with any questions you may have regarding a medical condition. Never disregard professional medical advice or delay in seeking it because of something you have read on this website. The opinions expressed in this column are not those of Charcot-Marie-Tooth News or its parent company, Bionews, and are intended to spark discussion about issues pertaining to Charcot-Marie-Tooth.
Christine Faria
I wish I knew people in my area who are living with CMT. The few people I have read about that have CMT are all famous people. I would love to get together with others who have CMT.
Dan Knauss
That's interesting, Christine — I've never heard of anyone famous with CMT!
It is nice to get to know people like ourselves in our own contexts who are relatable, with or without disabilities.
I'd encourage you to look more broadly than CMT and even past the disease. All people are more alike than different, and all struggles and physical ailments lead to relatable stories.
My experience with groups that get together specifically because they have some form of CMT is that this is often not a healthy or helpful thing. I've never seen such depressive, desperate, and even angry people with no agenda for gathering except to share their negativity. The gatekeeping and identity politics, competitive narcissism, and "disability Olympics" that come up when people gather based on what they can't do or have lost might put things into perspective for you — or it could be a dangerous black hole to get sucked into.
Dan Knauss
Exactly, all tools are useful; it is more of a question of who is using them, what for, and whether they are aligned with your interests or against them. US tech firms and their leaders are the ones to question and be skeptical about, first and foremost.
In the same vein, it's humans, not LLMs, that "don't care about correctness and accuracy." It's not meaningful (or accurate) to say an LLM cares or doesn't care. They do not have emotions or intentions. This limits their creative and analytical abilities in artistic and humanistic fields, but when it comes to technical knowledge in any field, or the "what comes next" game, they are incentivized to be correct, accurate, and efficient as they draw on the documentary archive of mostly modern, Western societies — with all their limitations and internal contradictions.
It's humans who often don't care about correctness and accuracy, or who speak outside the contexts where they have sufficient knowledge and experience. Who you pick as a source of truth is always important. Kenneth Raymond, for example, doesn't have a technology background, and his description of "consumer-level" LLMs is inaccurate and misleading.
Predicting what comes next in a particular context is not really a deficit. It's a way of thinking. We do it too; our brains work this way in most contexts where language is involved. A lot of humans struggle with it.
As of 2026, the leading "frontier" models are very impressive, and in many contexts, they are a valid "source of truth." They can create and modify complex software, help research and think through complex issues, analyze documents, and do much of the mundane work common in professional roles requiring specialized training and advanced degrees.
Fact-checking and peer review are always good, but one person can easily do these tasks with multiple AI agents checking each other — a standard feature now in the newest models.
For the large family of known and not-yet-identified neuromuscular diseases, LLM-based and future AI tools will continue to add a lot of value to research done in labs by scientists and medical experts, and to anyone trying to learn about any disease, diagnosis, and current research. A lot of work that just amounts to assembling and moving documents around is likely to be replaced, however, and that has a lot of people running scared.