The graveyard of failed enterprise technology projects is filled with solutions that worked perfectly in theory but were never adopted in practice.
The biggest barrier to AI-driven transformation isn't the technology; it's the organizational resistance to it.
Successful AI adoption is not just a technology project; it is a change management mission rooted in building trust and empowering people. The C-suite's role must evolve from implementer to organizational change leader.
We sat down with Hari Bala, CTO, and Thea Campbell, Business Director of Revenue Cycle, both at Solventum, to discuss why a successful AI strategy is less about deployment schedules and more about leading cultural change.
The challenge: Overcoming fear and suspicion
Thea: From an operational standpoint, the immediate reaction to AI isn't excitement; it's anxiety. When we mention "automation" to coding teams or clinical staff, they hear "replacement." They worry about losing autonomy or being forced to trust a "black box" that might mess up their denial rates. If leadership doesn't actively address this with purpose and understanding, even the most sophisticated algorithm will fail to deliver ROI because the team simply won't use it. They will find ways to work around it.
Hari: That's the technical paradox we face. We can build models with incredibly high predictive accuracy, but if the end-user doesn't trust the data inputs or the logic flow, the model sits idle. Technically, the challenge isn't usually computing power; it's designing a system that accounts for human psychology. We have to engineer the technology to be an assistant, not an overlord, which requires a very different architectural approach than traditional batch-processing software.
Step 1: Reframe the narrative, from replacement to augmentation
Thea: Leadership must intentionally and consistently communicate a new narrative: AI is not here to replace expert staff; it's here to augment their skills. We need to frame this as automating the repetitive, low-value tasks that cause burnout, like verifying simple claims status, so our people can focus on the complex denials that actually require their expertise. It's about removing the drudgery, not the worker.
Hari: Exactly. From an engineering perspective, we design these systems to handle high-volume, low-variance data. We are essentially building a "digital pre-screener." The technology is optimized to flag anomalies for human review, not to make the final judgment call on complex clinical nuances. The goal is to present the human expert with a "cleaner" queue of work, ensuring their cognitive load is spent on decision-making rather than data entry.
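The "digital pre-screener" idea can be sketched in a few lines: route items by the model's confidence, so only low-confidence cases reach a human queue. This is a minimal illustration, not Solventum's implementation; the class names, fields, and the threshold value are assumptions.

```python
# Minimal sketch of a confidence-based pre-screener.
# Claim fields and the 0.85 threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    suggested_code: str
    confidence: float  # model's confidence in its suggestion, 0.0-1.0

def triage(claims, review_threshold=0.85):
    """Split claims into a 'cleaner' auto-clear queue and a human-review queue."""
    clean, needs_review = [], []
    for claim in claims:
        (clean if claim.confidence >= review_threshold else needs_review).append(claim)
    return clean, needs_review

claims = [
    Claim("C-001", "J45.909", 0.97),  # routine, high confidence: auto-clears
    Claim("C-002", "E11.65", 0.62),   # low confidence: flagged for an expert
]
clean, needs_review = triage(claims)
```

The point of the pattern is exactly what Hari describes: the human expert sees only the flagged queue, so their time goes to judgment calls rather than data entry.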
Step 2: Build trust through transparency (kill the black box)
Hari: This brings us to "explainable AI" (XAI). In the past, neural networks were often black boxes. Data went in, and a decision came out with no explanation. That doesn't fly in healthcare. We are now prioritizing architectures that provide an audit trail. When the AI suggests a code, it highlights the specific phrase in the clinical documentation that triggered that suggestion. This technical feature is critical for adoption.
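The audit-trail idea Hari describes can be illustrated with a toy example: every suggestion carries the exact span of documentation that triggered it, so a UI can highlight it. The phrase-to-code table below is a hypothetical stand-in for a real coding model, not an actual mapping.

```python
# Illustrative XAI-style audit trail: each suggestion returns the
# evidence phrase and its offset so a UI can highlight the source text.
# The trigger table is a hypothetical stand-in for a trained model.
TRIGGER_PHRASES = {
    "type 2 diabetes": "E11.9",
    "acute asthma exacerbation": "J45.901",
}

def suggest_codes(note: str):
    """Return suggestions with the documentation span that produced each one."""
    note_lower = note.lower()
    suggestions = []
    for phrase, code in TRIGGER_PHRASES.items():
        start = note_lower.find(phrase)
        if start != -1:
            suggestions.append({
                "code": code,
                "evidence": note[start:start + len(phrase)],  # original casing
                "offset": start,  # lets the UI highlight the exact span
            })
    return suggestions

note = "Patient with Type 2 diabetes presents for follow-up."
suggestions = suggest_codes(note)
```

A real system would derive attributions from the model itself rather than a lookup table, but the contract is the same: no suggestion without its evidence.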
Thea: That transparency is what turns a coder from a skeptic into a supervisor. If a coder can click a button and see why the AI made a suggestion, like linking it directly to the doctor's notes, they stop fighting the system. They start validating it. CFOs need to understand that investing in a transparent system reduces the long-term costs associated with manual overrides. If the users trust the math, they stop wasting time double-checking every single transaction.
Step 3: Redesign roles for a higher purpose
Thea: We are facing an identity crisis in the revenue cycle. Staff are asking, "If AI does the coding, what do I do?" The answer is that we need to proactively redesign roles. We need to shift titles from "Coder" to "Data Integrity Auditor" or "Clinical Revenue Analyst." We need to invest in upskilling programs so our best people can manage the 20% of cases that are high-risk and require deep clinical investigation.
Hari: This shift requires new tools. We aren't just building the AI; we have to build the dashboard that the human uses to audit the AI. This means the user interface (UI) changes from a data-entry screen to an analytics dashboard. We need the staff to become "pilots" of the software, monitoring trends and drift. We have to train them on how to spot when the model might be degrading, turning them into active participants in the system's maintenance.
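One simple drift signal such a dashboard might surface is the human-override rate: if auditors start overriding the AI more often, the model may be degrading. The sketch below is an assumption-laden illustration; the window size and alert threshold are placeholders, not production values.

```python
# Sketch of a drift signal for an auditor dashboard: track how often
# humans override the AI over a rolling window. Window size and the
# alert threshold are illustrative assumptions.
from collections import deque

class OverrideRateMonitor:
    def __init__(self, window=100, alert_threshold=0.15):
        self.outcomes = deque(maxlen=window)  # True = human overrode the AI
        self.alert_threshold = alert_threshold

    def record(self, overridden: bool):
        self.outcomes.append(overridden)

    @property
    def override_rate(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 0.0

    def drifting(self) -> bool:
        # Only alert once a full window of outcomes has been observed.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.override_rate > self.alert_threshold)

monitor = OverrideRateMonitor(window=10, alert_threshold=0.2)
for overridden in [False] * 7 + [True] * 3:  # 30% overrides in last 10 cases
    monitor.record(overridden)
```

This is the kind of trend Hari's "pilots" would watch: the staff don't retrain the model, but they are the first to see when its behavior shifts.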
Final thoughts
Thea: The true work of AI transformation lies in leading your people through change. It requires empathy, clear communication, and a commitment to building a culture where AI is seen as a trusted partner, not a threat.
Hari: And that culture is built on a foundation of reliable, explainable, and user-centric technology. When operations and engineering align on these goals, AI stops being a disruption and starts being a superpower.
To hear more from Thea, check out another post here. To hear more from Hari, check out another post here.
Hari Bala, CTO, Solventum
Thea Campbell, Business Director, Revenue Cycle, Solventum