
The AI Revolution in Healthcare: Empowering Clinical Nurse Reviewers

 

 

Elevance Health, a leading health insurer, leverages its National Government Services team to navigate the complexities of government healthcare programs, ensuring access to quality care for millions of Americans.

​

My Role: UX Researcher

Stakeholders: UX Designer, UX Developer, Senior Technical Product Manager, Innovation Developers, Appeals Director, Staff Vice President, Clinical Nurse Reviewers

Challenge: Clinical nurse reviewers at Elevance Health were burdened by overwhelming case volumes and the need to sift through 2 to 18 disparate sources to locate supporting documentation for appeals decisions, resulting in significant time loss, cognitive strain, and delayed case resolution.

Tools: Zoom (User Interviews), Mural, Figma, CheckMarket

Duration: 3 months

​

 
 
 
 

Generative AI for Clinical Nurse Reviewers: Project Objective  

In the heart of the healthcare system lies the appeals process: a vital safeguard for patient rights and care quality. Yet for clinical nurse reviewers, the professionals tasked with evaluating these appeals, the process was anything but streamlined.
 

  • Overwhelming Case Volumes: Nurse reviewers were inundated with cases, each requiring swift, accurate decisions under tight deadlines.

  • Fragmented Information Sources: To reach a conclusion, reviewers had to search through 2 to 18 separate manuals, policy documents, and pieces of official guidance. This time-intensive process caused significant delays in case resolution while draining cognitive bandwidth.

  • Hidden Insights: Key details were buried within dense, often conflicting documentation, making it difficult to surface the information needed to support or deny a case.

  • Cognitive Overload: Managing multiple cases simultaneously while navigating complex reference materials placed a heavy cognitive burden on reviewers, threatening both speed and accuracy.

 

These challenges didn’t just slow down the appeals process; they risked delaying care and undermining clinical confidence. To address this, we designed a generative AI-powered clinical decision support system that streamlined documentation review, surfaced relevant insights, and reduced time spent locating critical information. Our ultimate goal was to empower nurse reviewers to focus on what matters most: making informed decisions that support patient care.

Confidentiality Note

 

This case study provides a high-level overview of the project and its outcomes. Due to the sensitive nature of the information involved and the existence of a Non-Disclosure Agreement (NDA) with the Centers for Medicare & Medicaid Services (CMS), specific details regarding the project implementation, data sources, visuals, and quantitative results cannot be disclosed in this document.

 

Credits

As the UX Researcher on this project, I:

  • Led stakeholder interviews to uncover pain points and define strategic priorities

  • Designed and analyzed a pre/post implementation survey to quantify AI impact

  • Developed user personas to guide design decisions and ensure clinical relevance

  • Mapped the end-to-end journey of Clinical Nurse Reviewers to surface workflow inefficiencies

  • Collaborated on low-fidelity prototypes to visualize early concepts

  • Conducted iterative usability testing to validate design and optimize reviewer experience

 
 

Unveiling the Pain Points: Collecting Data to Inform the AI Solution

I started my research by connecting with the clinical nurse reviewers, the often overlooked champions of patient care. I conducted comprehensive interviews and shadowing sessions, immersing myself in their daily routines to truly grasp the intricacies of their workflows.

It quickly became clear that understanding their experiences was not just beneficial; it was critical for the success of our initiative. By identifying their challenges, I could uncover valuable insights that would inform our strategies and solutions. Along the way, I gathered qualitative data that highlighted the specific obstacles faced during the process, shedding light on areas that needed improvement.

This deep dive into their world not only empowered the reviewers by acknowledging their struggles but also ensured that the project would be built on a foundation of real experiences and genuine needs. In this way, we could create effective solutions that resonate with those on the front lines, ultimately leading to better outcomes for patients and healthcare providers alike.

A Human-Centered Approach: Survey Data Tells the Story of Reviewer Success

 
 

To accurately measure the impact of the AI solution, I designed a comprehensive survey incorporating a mix of Likert-scale questions, multiple-choice options, and open-ended questions to capture a holistic view of the reviewers' experiences.

 

Key areas of inquiry included:

  • perceived workload

  • efficiency gains

  • decision-making confidence

  • job satisfaction

  • and overall perceptions of AI

 

During early research, a critical challenge emerged: low trust in AI within a regulatory context. Many reviewers expressed skepticism about the role of AI in clinical decision-making, especially in appeals workflows governed by strict compliance standards. Concerns around PHI/PII security and the potential for AI to misinterpret nuanced documentation further amplified hesitation.

​

This data collection was essential not only for establishing a baseline but also for surfacing these trust barriers. By quantifying reviewers’ experiences and perceptions, the survey revealed how the AI tool influenced daily workflows, reduced cognitive strain, and began to shift attitudes—laying the groundwork for responsible, clinician-informed adoption.
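Because the actual instrument and responses are covered by the CMS NDA, the sketch below uses entirely hypothetical Likert scores to illustrate the kind of pre/post comparison this survey supported: mean score and share of favorable (4–5) responses, before and after implementation. The `summarize` helper and all data values are illustrative, not the real analysis.

```python
from statistics import mean

# Hypothetical 1-5 Likert responses to a trust item; real data is under NDA.
pre_scores = [2, 3, 2, 4, 3, 2, 3, 2]
post_scores = [4, 4, 3, 5, 4, 3, 4, 4]

def summarize(scores):
    """Return (mean score, share of favorable responses, i.e. 4 or 5)."""
    favorable = sum(1 for s in scores if s >= 4) / len(scores)
    return round(mean(scores), 2), round(favorable, 2)

pre_mean, pre_fav = summarize(pre_scores)
post_mean, post_fav = summarize(post_scores)

print(f"Pre:  mean={pre_mean}, favorable={pre_fav:.0%}")
print(f"Post: mean={post_mean}, favorable={post_fav:.0%}")
print(f"Shift in favorable responses: {post_fav - pre_fav:.0%} points")
```

Reporting the favorable-response share alongside the mean is what makes a headline like "trust rose from X% to Y%" possible from Likert data.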

 

Driving Adoption and Building Trust in AI from the Beginning

 
 

Early in the project, it became clear that trust, not just functionality, would determine success. Many clinical nurse reviewers expressed skepticism about AI’s role in regulated healthcare workflows, citing concerns around PHI/PII security, clinical accuracy, and the potential for misinterpretation of nuanced documentation.

​

To address this, we took a participatory design approach:

  • Engaged reviewers early through interviews, shadowing, and usability testing

  • Incorporated their feedback directly into the tool’s logic and interface

  • Held transparent conversations about how data was handled, stored, and protected

  • Demonstrated reliability through real-world scenarios and pre/post survey results

 

These efforts weren’t just about adoption. They were about empowering reviewers to shape the tool they’d be using. As a result, post-survey data revealed a measurable shift in perception: reviewers reported increased confidence in the AI’s ability to surface relevant documentation and support decision-making without compromising compliance.

​

By centering trust, transparency, and clinical relevance, we didn’t just deploy a tool. We fostered a cultural shift toward responsible AI adoption.

From Data to Empathy: Developing Personas to Understand Reviewer Needs

 
 

Developing user personas for clinical nurse reviewers was a cornerstone of this project. These profiles helped us uncover the distinct needs, motivations, and pain points of the professionals navigating complex appeals workflows.

​

To build them, we conducted in-depth interviews, shadowed reviewers in their daily routines, and analyzed behavioral patterns across diverse cases.

This research surfaced key traits that shaped our design decisions, such as:

  • documentation fatigue

  • decision-making pressure

  • and skepticism toward AI in regulated environments

​​

These personas became strategic tools throughout development, ensuring the AI solution remained grounded in real-world context. By aligning the tool with reviewers’ cognitive workflows and trust thresholds, we delivered a system that was not only technically robust, but intuitive, respectful of clinical judgment, and tailored to the realities of their work.

Personas


Denise Carter

Bio: A seasoned RN with 30+ years in clinical care, Denise brings deep empathy and policy fluency to her appeals work, but remains cautious about AI’s role in regulated environments.


Miguel Torres 

Bio: A data-savvy nurse with a background in informatics, Miguel thrives on efficiency and sees AI as a promising ally—if it proves reliable and transparent.


Priya Desai

Bio: A rising clinical reviewer with a background in med-surg, Priya is eager to learn but overwhelmed by the volume and complexity of appeals documentation.

Mapping the Journey: Visualizing the Reviewer Workflow

 

To deeply understand the daily realities of clinical nurse reviewers, we developed comprehensive journey maps that visualized their end-to-end workflow, from initial case assignment to final appeals decision. These maps captured key touchpoints, decision nodes, and pain points across the process.

​

By mapping the current state, we uncovered critical inefficiencies, including:

  • Time lost manually reviewing voluminous medical records and policy documents

  • Difficulty locating relevant guidance across 2 to 18 disparate sources

  • Repetitive data entry tasks that compounded cognitive strain

​​

This visual framework created a shared understanding across stakeholders, highlighting where generative AI could streamline documentation review, surface key insights, and reduce decision fatigue. The journey maps became a strategic blueprint for designing an AI solution that integrated seamlessly into reviewers’ routines—supporting both operational efficiency and clinical confidence.


Putting It to the Test: User Feedback Shapes the AI Solution

 

To ensure the AI tool was not only effective but also intuitive for clinical nurse reviewers, we conducted rigorous usability testing throughout the design process. Reviewers were observed interacting with the prototype, offering feedback on usability, clarity, and overall experience.

​

We gathered both:

  • Quantitative data: time-to-completion, error rates, and task efficiency

  • Qualitative insights: interviews and behavioral observations that surfaced usability friction and trust concerns

​

This feedback revealed key opportunities to improve the interface, clarify instructions, and better align the tool with reviewers’ cognitive workflows. By iteratively refining the prototype, we delivered a solution that was not only technically robust, but also trusted, usable, and seamlessly integrated into the high-stakes appeals environment.
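To illustrate how the quantitative side of these sessions (time-to-completion and error rates) can be rolled up across testing rounds, here is a minimal sketch. The round names, timings, and error counts are all hypothetical, since the real figures cannot be shared under the CMS NDA.

```python
from statistics import mean, median

# Hypothetical task timings (seconds) and per-task error counts
# from two usability testing rounds; real figures are under NDA.
rounds = {
    "round_1": {"times": [410, 375, 520, 460], "errors": [2, 1, 3, 2]},
    "round_2": {"times": [290, 310, 340, 275], "errors": [0, 1, 1, 0]},
}

# Summarize each round as (median task time, mean errors per task)
summary = {
    name: (median(data["times"]), round(mean(data["errors"]), 2))
    for name, data in rounds.items()
}

for name, (med_time, err_rate) in summary.items():
    print(f"{name}: median time {med_time}s, mean errors {err_rate} per task")
```

Using the median for task times keeps one unusually slow session from distorting a round's summary, which matters with the small samples typical of moderated usability testing.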

Transforming the Review Process: Achieving Success with AI

 
 

Post-implementation, the initiative delivered measurable improvements that directly addressed the core challenges faced by clinical nurse reviewers:

  • 40% reduction in time spent locating supporting documentation, enabling faster, more confident case decisions

  • Reviewers reported a 35% increase in time spent on high-value tasks, such as complex case analysis and patient advocacy, compared to pre-implementation workflows

  • Appeals requiring re-evaluation dropped by 22%, reflecting improved decision accuracy and fewer documentation gaps

  • Perception of AI as a trustworthy support tool increased from 28% pre-implementation to 66% post-implementation, following participatory design and transparency efforts.

 

Reflecting on the Road Ahead: Lessons Learned from Our AI Journey

 

This journey was not just about implementing AI; it was about empowering our clinical nurse reviewers.

 

Along the way, we learned some invaluable lessons:

  • The Continuous Feedback Loop: Building without listening is like designing a house without asking the residents what they need. Our regular usability sessions became open forums where reviewers shared what worked, what didn’t, and what mattered most. Their feedback shaped every iteration and ensured the tool truly served their needs.

  • The Power of Partnership: This project thrived on collaboration. From nurse reviewers to IT specialists to leadership, every stakeholder played a role. Their insights helped us navigate complexity, build trust, and drive adoption in a compliance-heavy space.

  • The Art of Adaptation: Rigid plans don’t survive in healthcare. We embraced an iterative mindset—refining the AI tool in response to emerging needs, evolving workflows, and real-time feedback. Flexibility became our greatest asset.

 

Conclusion

This generative AI initiative wasn’t just a technical success—it was a human-centered transformation. By equipping nurse reviewers with tools that respected their expertise and reduced cognitive strain, we improved not only the appeals process but also the quality of care delivered downstream.

It’s a testament to what’s possible when technology is designed with empathy, trust, and clinical relevance at its core. And it’s a reminder that the most powerful innovations don’t just optimize systems—they uplift the people within them.

 