
Changing the conversation in health care

Generative artificial intelligence is transforming the ways humans write, read, speak, think, empathize, and act within and across languages and cultures. In health care, gaps in communication between patients and practitioners can worsen patient outcomes and prevent improvements in practice and care. The Language/AI Incubator, made possible through funding from the MIT Human Insight Collaborative (MITHIC), offers a potential response to these challenges. 

The project envisions a research community rooted in the humanities that will foster interdisciplinary collaboration across MIT to deepen understanding of generative AI’s impact on cross-linguistic and cross-cultural communication. The project’s focus on health care and communication seeks to build bridges across socioeconomic, cultural, and linguistic strata.

The incubator is co-led by Leo Celi, a physician who is research director and a senior research scientist at the Institute for Medical Engineering and Science (IMES), and Per Urlaub, professor of the practice in German and second language studies and director of MIT’s Global Languages program.

“The basis of health care delivery is the knowledge of health and disease,” Celi says. “We’re seeing poor outcomes despite massive investments because our knowledge system is broken.”

A chance collaboration

Urlaub and Celi met during a MITHIC launch event. Conversations during the event reception revealed a shared interest in exploring improvements in medical communication and practice with AI.

“We’re trying to incorporate data science into health care delivery,” Celi says. “We’ve been recruiting social scientists [at IMES] to help advance our work, because the science we create isn’t neutral.”

Language is a non-neutral mediator in health care delivery, the team believes, and can be a boon or a barrier to effective treatment. “After we met, I joined one of his working groups, whose focus was metaphors for pain: the language we use to describe it and its measurement,” Urlaub continues. “One of the questions we considered was how effective communication can occur between doctors and patients.”

Technology, they argue, shapes everyday communication, and its impact depends on both its users and its creators. As AI and large language models (LLMs) gain power and prominence, their use is broadening to include fields like health care and wellness.

Rodrigo Gameiro, a physician and researcher with MIT’s Laboratory for Computational Physiology, is another program participant. He notes that work at the laboratory centers on responsible AI development and implementation. Designing systems that leverage AI effectively demands a nuanced approach, particularly given the challenges of communicating across the linguistic and cultural divides that arise in health care.

“When we build AI systems that interact with human language, we’re not just teaching machines how to process words; we’re teaching them to navigate the complex web of meaning embedded in language,” Gameiro says.

Language’s complexities can impact treatment and patient care. “Pain can only be communicated through metaphor,” Urlaub continues, “but metaphors don’t always match, linguistically and culturally.” Smiley faces and one-to-10 scales — pain measurement tools English-speaking medical professionals may use to assess their patients — may not travel well across racial, ethnic, cultural, and language boundaries.

“Science has to have a heart” 

LLMs can potentially help scientists improve health care, although there are some systemic and pedagogical challenges to consider. Science can focus on outcomes to the exclusion of the people it’s meant to help, Celi argues. “Science has to have a heart,” he says. “Measuring students’ effectiveness by counting the number of papers they publish or patents they produce misses the point.”

The point, Urlaub says, is to investigate carefully while simultaneously acknowledging what we don’t know, citing what philosophers call epistemic humility. Knowledge, the investigators argue, is provisional and always incomplete. Deeply held beliefs may require revision in light of new evidence.

“No one’s mental view of the world is complete,” Celi says. “You need to create an environment in which people are comfortable acknowledging their biases.”

“How do we share concerns between language educators and others interested in AI?” Urlaub asks. “How do we identify and investigate the relationship between medical professionals and language educators interested in AI’s potential to aid in the elimination of gaps in communication between doctors and patients?” 

Language, in Gameiro’s estimation, is more than just a tool for communication. “It reflects culture, identity, and power dynamics,” he says. In situations where a patient might not be comfortable describing pain or discomfort because of the physician’s position as an authority, or because their culture demands yielding to those perceived as authority figures, misunderstandings can be dangerous. 

Changing the conversation

AI’s facility with language can help medical professionals navigate these areas more carefully, providing digital frameworks that offer valuable cultural and linguistic context and give patient and practitioner data-driven, research-supported tools to improve dialogue. Institutions need to reconsider how they educate medical professionals and invite the communities they serve into the conversation, the team says.

“We need to ask ourselves what we truly want,” Celi says. “Why are we measuring what we’re measuring?” The biases we bring with us to these interactions — doctors, patients, their families, and their communities — remain barriers to improved care, Urlaub and Gameiro say.

“We want to connect people who think differently, and make AI work for everyone,” Gameiro continues. “Technology without purpose is just exclusion at scale.”

“Collaborations like these can allow for deep processing and better ideas,” Urlaub says.

Creating spaces where ideas about AI and health care can become actions is a key element of the project. The Language/AI Incubator hosted its first colloquium at MIT in May, led by Mena Ramos, a physician and the co-founder and CEO of the Global Ultrasound Institute.

The colloquium also featured presentations from Celi, as well as Alfred Spector, a visiting scholar in MIT’s Department of Electrical Engineering and Computer Science, and Douglas Jones, a senior staff member in the MIT Lincoln Laboratory’s Human Language Technology Group. A second Language/AI Incubator colloquium is planned for August.

Greater integration between the social and hard sciences could increase the likelihood of developing viable solutions and reducing biases. Allowing for shifts in the ways patients and doctors view the relationship, while offering each shared ownership of the interaction, can help improve outcomes. Facilitating these conversations with AI may speed the integration of these perspectives.

“Community advocates have a voice and should be included in these conversations,” Celi says. “AI and statistical modeling can’t collect all the data needed to treat all the people who need it.”

Community needs and improved educational opportunities and practices should be coupled with cross-disciplinary approaches to knowledge acquisition and transfer. The ways people see things are limited by their perceptions and other factors. “Whose language are we modeling?” Gameiro asks about building LLMs. “Which varieties of speech are being included or excluded?” Because meaning and intent can shift across those contexts, it’s important to keep those shifts in mind when designing AI tools.

“AI is our chance to rewrite the rules”

While there’s lots of potential in the collaboration, there are serious challenges to overcome, including establishing and scaling the technological means to improve patient-provider communication with AI, extending opportunities for collaboration to marginalized and underserved communities, and reconsidering and revamping patient care. 

But the team isn’t daunted.

Celi believes there are opportunities to address the widening gap between people and practitioners while addressing gaps in health care. “Our intent is to reattach the string that’s been cut between society and science,” he says. “We can empower scientists and the public to investigate the world together while also acknowledging the limitations inherent in overcoming their biases.”

Gameiro is a passionate advocate for AI’s ability to change everything we know about medicine. “I’m a medical doctor, and I don’t think I’m being hyperbolic when I say I believe AI is our chance to rewrite the rules of what medicine can do and who we can reach,” he says.

“Education changes humans from objects to subjects,” Urlaub argues, describing the difference between disinterested observers and active and engaged participants in the new care model he hopes to build. “We need to better understand technology’s impact on the lines between these states of being.”

Celi, Gameiro, and Urlaub each advocate for MITHIC-like spaces across health care, places where innovation and collaboration are allowed to occur without the kinds of arbitrary benchmarks institutions have previously used to mark success.

“AI will transform all these sectors,” Urlaub believes. “MITHIC is a generous framework that allows us to embrace uncertainty with flexibility.”

“We want to employ our power to build community among disparate audiences while admitting we don’t have all the answers,” Celi says. “If we fail, it’s because we failed to dream big enough about how a reimagined world could look.”
