Healthcare systems across the world are investing heavily in analytics (and by extension artificial intelligence), yet policymakers and hospital leaders continue to struggle with a central question: where do these investments generate the greatest value, and how long should we expect to wait for returns? A recent study by Abhijith Anand, Senior Research Fellow at Portulans, with his coauthors Magno Queiroz and Rajiv Kohli, published in MIS Quarterly, offers rigorous empirical evidence that helps answer this question.
The study moves beyond the simplistic assumption that “more analytics automatically leads to better outcomes” and instead demonstrates that the value of analytics investments depends critically on the complexity of the clinical processes they support. In other words, analytics do not create uniform returns across a hospital; they create differentiated returns depending on where they are deployed.
The authors distinguish clinical processes by their degree of interdependence, ranging from relatively standardized and routine (low complexity), to structured multi-stage workflows (medium complexity), to highly interdependent, cross-specialty, dynamic processes such as ICU care and inpatient surgery (high complexity). Their findings are striking. When analytics investments are directed toward high-complexity clinical processes, hospitals experience approximately 75% higher efficiency gains and 93% higher productivity gains compared to investments directed toward low-complexity processes. However, these larger gains take about five times longer to materialize. By contrast, low-complexity processes show quick improvements, but those returns plateau and decline over time. High-complexity processes exhibit the opposite pattern: slower initial impact, but a rising and compounding trajectory of returns.
These granular insights have profound implications for policy discussions. Most digital transformation initiatives are evaluated over short time horizons, often one to two quarters. Under such evaluation frameworks, high-complexity deployments may appear underwhelming in the short term, even though they generate substantially larger long-term benefits. Policymakers, payers, and regulators, such as the Centers for Medicare and Medicaid Services (CMS) in the United States, may inadvertently discourage transformative digital investments if they rely on short-term performance metrics. The evidence suggests that evaluation windows and incentive structures should be differentiated by process complexity. Complex care environments such as ICUs, emergency departments, and multi-disciplinary surgical workflows require longer gestation periods because analytics must be integrated into dynamic information flows, clinician routines, and cross-department coordination mechanisms. But once embedded, these systems produce compounding benefits as institutional learning accumulates and coordination improves.
The study also reframes analytics and AI not as simple automation technologies, but as cognitive augmentation technologies. In high-complexity settings, analytics systems help integrate fragmented data, reconcile differing clinical interpretations, and reduce uncertainty and equivocality across stakeholders. Rather than replacing clinicians, analytics enhances collaborative sense-making and improves lateral communication across departments.
These insights are especially relevant for current global debates about analytics and AI governance. Regulatory frameworks that treat all systems as homogeneous technologies miss the important distinction between tools that automate routine tasks and those that augment complex, interdependent decision-making. Governance, funding, and training strategies should reflect this difference. Policymakers, nationalized systems, regional alliances, and emerging economies building digital infrastructure from scratch can use this complexity lens to prioritize investments. Instead of scattering analytics and AI pilots across low-risk use cases that generate quick but limited gains, policymakers can deliberately target high-complexity areas, while setting realistic expectations about delayed returns and committing to sustained funding. For instance, CMS evaluation frameworks, especially under value-based payment programs, should differentiate incentive weights and evaluation timelines based on process complexity. Short-term hospital pilots may unfairly penalize transformative, high-complexity deployments simply because their benefits are delayed. CMS could therefore extend evaluation windows for complex-care analytics projects while maintaining shorter horizons for routine automation initiatives.
Ultimately, the study advances the policy conversation from technological optimism to structural realism. Advanced technologies like analytics and AI do not create value simply by existing. Their impact is contingent on the nature of the processes they support, the interdependence among stakeholders, and the time horizon over which investments are sustained. Policymakers who fail to account for process complexity risk misallocating incentives, misjudging pilot programs, and under- or overinvesting in the very areas where these technologies can produce transformative change. Introducing the link between process complexity and differential returns into policy discourse can significantly influence how regulators establish reward structures.
Access the full publication in MIS Quarterly.
The post Understand Digital Transformations with Analytics: New Evidence for Policy and Practice for the Healthcare Industry appeared first on Portulans Institute.
