Straddling Two Worlds: Confronting AI’s Promise and Pitfalls in Communications and Healthcare

Working at the crossroads of communications and healthcare, two areas increasingly saturated with AI, has left me torn between excitement and unease. I often describe this tension as living in two worlds: academia, healthcare, and PR on one side, and my Indigenous identity, rooted in my ancestors’ knowledge and values, on the other. Although many corporate and academic practices conflict with Indigenous perspectives, I’m committed to bringing a holistic approach to these fields. I strive to carve out space for underrepresented voices, advocate for those who can’t be here, and pursue work that feels both meaningful and true to my heritage, a balance that grows harder to hold as AI becomes ever more central to both fields.

At the healthcare communications conference, we spoke with high-profile health communication organizations, and much of the discussion centered on AI tools designed to ease clinical note-taking. They shared firsthand experiences of how AI-driven tools are already streamlining practitioners’ workflows: automating note-taking from session transcripts, allowing clinicians to remain fully present with clients, and ultimately increasing the number of patients seen without sacrificing documentation quality. Given these companies’ “technology-first” ethos, especially in marketing, AI integration seems both inevitable and well aligned with corporate strategy. Yet I find myself torn: the potential gains are impressive, but there are significant questions and caveats to consider.

Pros of AI in Healthcare

  • Enhanced Presence
    By automating routine tasks such as transcribing conversations and pulling out key themes, AI frees clinicians from their laptops and clipboards. Instead of splitting attention between clients and screens, practitioners can maintain eye contact, notice nonverbal cues (tone, body language, affect), and respond in real time. This deeper engagement builds stronger therapeutic alliances, which research shows are among the most significant predictors of positive outcomes in mental health and primary care alike.

  • Efficiency in Documentation
    AI-driven note-taking platforms can generate a first draft of a clinical note almost instantaneously, summarizing session highlights, treatment plans, and follow-up tasks. Clinicians then spend minutes, not hours, reviewing and refining these summaries. This streamlines billing, coding, and record-keeping, reduces administrative backlogs, and helps prevent clinician burnout. Providers reclaim time to reinvest in continuing education, case consultation, or direct patient care. (A minimal sketch of what such a drafting step might look like appears after this list.)

  • Expanded Access
    With quicker documentation and streamlined workflows, providers can increase their daily caseload without extending work hours. In underserved or high-demand areas (rural clinics, community mental health centers, tribal health facilities), this boost in capacity can translate directly into shorter waitlists and more timely interventions. Over time, high-volume practices might also be able to offer sliding-scale or pro bono appointments, knowing they can maintain productivity without sacrificing quality.

  • Addressing Medical Racism and Biases
    Poor documentation and unchecked microaggressions can perpetuate harmful stereotypes in care settings. If trained on diverse and culturally informed datasets, AI tools could flag language indicative of bias, such as dismissive descriptors or assumptions about pain tolerance, and prompt real-time reflection or supervisor review. By embedding cultural humility checks into the record-keeping process, AI could help ensure that incidents of medical racism against Black, Brown, and Indigenous bodies are noted, addressed, and ultimately reduced. This “protective factor” could make clinical spaces safer and more accountable for clients and practitioners of color. (I don’t know whether this is already being incorporated or considered when integrating AI into clinical spaces; it is simply what came to mind when I tried to imagine AI’s positives, and it would be one way to use AI to protect BIPOC communities. A rough sketch of what such a flagging step might look like also appears after this list.)
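
To make the documentation idea concrete, here is a minimal sketch of what an AI drafting step might look like, assuming access to a general-purpose LLM API (I use the OpenAI Python client purely for illustration; the model name, prompt, and SOAP-note format are my own placeholders, not a description of any vendor’s actual product, and any real deployment would need HIPAA-compliant infrastructure and clinician sign-off):

    import os
    from openai import OpenAI

    # Minimal sketch: draft a clinical note from a session transcript.
    # The model name and prompt are illustrative placeholders only.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    PROMPT = (
        "Summarize the following therapy-session transcript as a draft SOAP note "
        "(Subjective, Objective, Assessment, Plan). Mark anything uncertain with "
        "[REVIEW] so the clinician can verify it."
    )

    def draft_note(transcript: str) -> str:
        """Return a first-draft clinical note for clinician review."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model choice
            messages=[
                {"role": "system", "content": PROMPT},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(draft_note("Client reported improved sleep this week..."))

The [REVIEW] markers reflect the design choice I care about most: the AI produces a draft, and a human clinician stays accountable for the final record.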
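
And here is a deliberately simple, rule-based sketch of the bias-flagging idea. The phrase list is a tiny hypothetical sample I made up for illustration; a real tool would need a community-vetted lexicon, likely a trained classifier, and clinical oversight:

    import re

    # Illustrative sketch of a bias-language flagger for clinical notes.
    # The patterns below are a small, hypothetical sample, not a vetted lexicon.
    FLAGGED_PATTERNS = {
        r"\bdrug[- ]seeking\b": "potentially stigmatizing descriptor",
        r"\bnon[- ]?compliant\b": "dismissive framing of patient behavior",
        r"\bexaggerat(es|ing) (her|his|their) pain\b": "assumption about pain tolerance",
        r"\bdifficult patient\b": "labeling language",
    }

    def flag_bias(note_text: str) -> list[dict]:
        """Return a list of flags for supervisor review."""
        findings = []
        for pattern, reason in FLAGGED_PATTERNS.items():
            for match in re.finditer(pattern, note_text, re.IGNORECASE):
                findings.append({
                    "phrase": match.group(0),
                    "position": match.start(),
                    "reason": reason,
                })
        return findings

    note = "Patient appears drug-seeking and has been non-compliant with meds."
    for flag in flag_bias(note):
        print(f"[REVIEW] '{flag['phrase']}' at {flag['position']}: {flag['reason']}")

Even this crude version shows the shape of the “protective factor”: flags go to a human for reflection and review rather than silently rewriting the note.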

Cons of AI in Healthcare and Widespread Everyday Use

  • Tone and Cultural Sensitivity
    AI systems don’t share our lived realities: they can’t grasp the weight of intergenerational trauma or the nuances of dialects forged by centuries of cultural resilience. When summarizing a session, they may misread storytelling metaphors, downplay signs of distress, or dismiss culturally specific expressions as “irrelevant.” Those errors undermine trust, skew treatment plans, and ultimately fail clients whose healing depends on being genuinely understood. And while we emphasize the importance of clinicians reviewing AI-generated transcripts and notes, I can’t help but worry that, over time, some may grow complacent and rely too heavily on the algorithm’s first draft.

  • Environmental Impacts
    Large-scale AI systems require tremendous computational power, often supplied by data centers that consume vast amounts of electricity and generate significant heat. That translates to higher carbon emissions, greater water usage for cooling, and increased electronic waste from hardware upgrades. For Indigenous peoples, whose lifeways are inseparable from the land and water, this environmental toll is a direct threat to community health and sovereignty. I foreground an Indigenous perspective because we are often the first to weigh the impacts society and technology are having on our lands, but these harms will reach everyone. If AI continues its trajectory unchecked, the same technologies designed to heal people may accelerate the ecological degradation that underpins many health disparities.

  • Medical Mistrust
    Generations of unethical experimentation and systemic neglect have seeded deep mistrust among BIPOC communities toward Western medicine. Inviting AI into the exam room—where every word is recorded and processed by an opaque algorithm—may heighten fears of surveillance, data misuse, or cultural exploitation. For Indigenous patients, concerns about data sovereignty are particularly acute: Who owns the audio recordings? Where will they be stored? Could they be repurposed for research without tribal consent? If these questions go unanswered, AI could become yet another barrier keeping vulnerable populations from seeking the care they need.

Moving Forward

Though these are just a few initial reflections, it’s clear that AI is here to stay and only accelerating. As we look ahead to the future of AI-driven healthcare communications, here are some key questions and practices I’d like us all to consider when bringing AI into our professional spaces:

  • Audit AI’s Environmental Footprint
    – Map the full lifecycle impact of any AI platform you adopt, from data-center power consumption to hardware disposal (a rough estimation sketch appears after this list).
    – Prioritize vendors who commit to renewable energy, carbon offsets, and circular-economy hardware practices.
    – Build sustainability metrics into your project proposals and annual reports so that environmental costs aren’t hidden “below the line.”

  • Co-design with Communities
    – Form advisory councils with tribal, Black, Latinx, and other underrepresented groups to guide AI requirements, data collection, and evaluation.
    – Host regular listening sessions—both virtual and on sovereign lands—to ground product roadmaps in real cultural needs.
    – Ensure community co-ownership of data or models built from their inputs, with clear benefit-sharing agreements.

  • Transparent Data Governance
    – Publish plain-language policies that specify who can access recorded sessions, for what purposes, and under what tribal or organizational approvals.
    – Use privacy-preserving techniques (e.g., federated learning, homomorphic encryption) to minimize raw data exposure.
    – Embed audit trails so individuals and tribal authorities can trace how and when their information was used or shared (a minimal audit-trail sketch also appears after this list).

  • Limit and Advocate
    – Lobby for legislation or organizational charters that prohibit AI computation above a certain carbon-intensity threshold, especially for non-mission-critical tasks.
    – Define “AI-free zones” or “low-compute hours” in your institution where human-only processes are mandated to reduce energy spikes.
    – Partner with environmental justice groups and tribal governments to file comments on state and federal rulemakings, ensuring our lands and waters are protected from unchecked data center expansion.
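
To ground the footprint-auditing point, here is a back-of-the-envelope sketch of the arithmetic. Every number is a placeholder assumption (per-query energy, grid carbon intensity, cooling water), not a measured value; a real audit should substitute vendor disclosures and local grid data:

    # Back-of-the-envelope footprint estimate for an AI note-taking workflow.
    # Every constant below is a placeholder assumption for illustration only.
    WH_PER_QUERY = 3.0          # assumed energy per AI request, in watt-hours
    GRID_G_CO2_PER_KWH = 400.0  # assumed grid carbon intensity, g CO2e per kWh
    LITERS_WATER_PER_KWH = 2.0  # assumed data-center cooling water use

    def annual_footprint(queries_per_day: int, workdays: int = 250) -> dict:
        """Estimate yearly energy, emissions, and water for a clinic's AI use."""
        kwh = queries_per_day * workdays * WH_PER_QUERY / 1000.0
        return {
            "kWh_per_year": round(kwh, 1),
            "kg_CO2e_per_year": round(kwh * GRID_G_CO2_PER_KWH / 1000.0, 1),
            "liters_water_per_year": round(kwh * LITERS_WATER_PER_KWH, 1),
        }

    # Example: a clinic drafting 200 notes per day.
    print(annual_footprint(queries_per_day=200))

Under these made-up numbers, a clinic drafting 200 notes a day lands around 150 kWh, 60 kg CO2e, and 300 liters of cooling water per year; the value is in making the line items visible in proposals and reports, not in the numbers themselves.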
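
And for the audit-trail point under data governance, here is a minimal sketch of a tamper-evident access log. Each entry is hash-chained to the one before it, so a retroactive edit breaks the chain; the field names and in-memory storage are illustrative only, and a real system would also need tribal-authority access controls and durable storage:

    import hashlib
    import json
    import time

    # Minimal sketch of a tamper-evident audit trail for data access events.
    # Each entry is hash-chained to the previous one, so later edits are detectable.
    class AuditTrail:
        def __init__(self):
            self.entries = []
            self._last_hash = "genesis"

        def record(self, actor: str, action: str, record_id: str) -> None:
            entry = {
                "timestamp": time.time(),
                "actor": actor,          # who accessed the data
                "action": action,        # e.g., "read", "share", "export"
                "record_id": record_id,  # which session or note was touched
                "prev_hash": self._last_hash,
            }
            payload = json.dumps(entry, sort_keys=True).encode()
            entry["hash"] = hashlib.sha256(payload).hexdigest()
            self._last_hash = entry["hash"]
            self.entries.append(entry)

        def verify(self) -> bool:
            """Recompute the chain; False means an entry was altered."""
            prev = "genesis"
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                if body["prev_hash"] != prev:
                    return False
                payload = json.dumps(body, sort_keys=True).encode()
                if hashlib.sha256(payload).hexdigest() != entry["hash"]:
                    return False
                prev = entry["hash"]
            return True

    trail = AuditTrail()
    trail.record("clinician_a", "read", "session_0142")
    trail.record("research_team", "export", "session_0142")
    print("chain intact:", trail.verify())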

While these are just initial ideas, integrating these practices into our workflows, procurement processes, and policy advocacy can help make AI safer for our communities and our lands, protecting both human and environmental health. I’m eager to hear your perspectives and learn together, knowing I’m still educating myself on AI’s complexities, challenges, and potential benefits.
