By Sagar Chavan, Founder and CEO, Janus Intellect.
The global AI in supply chain market reached $19.8 billion in 2026, according to industry market research, and is projected to cross $70 billion by 2030. Companies are committing real capital, announcing ambitious programmes, and presenting transformation roadmaps to their boards.
And yet, the 2025 MIT NANDA enterprise AI study found that 95% of enterprise AI pilots delivered zero measurable return. A separate 2026 industry survey of CFOs and supply chain leaders found that four in five organisations report no tangible impact on their P&L despite rolling out AI initiatives, even as the majority of those same leaders increased AI investment over the prior year.
This is not a technology problem. The tools exist, they work, and their capabilities are well-documented. This is a deployment problem. Specifically, a gap between how AI vendors position their solutions and what those solutions actually require to deliver results in a real operating environment. The gap between vendor promise and measurable ROI in AI in supply chain is the most expensive misunderstanding in enterprise operations today.
At Janus Intellect, Sagar Chavan and the team work with mid-market businesses across India and the Middle East to close this gap. This article is about how.
Why Pilots Are Where AI in Supply Chain Goes to Die
The pilot programme has become the graveyard of supply chain AI investment. A use case is identified. A vendor is selected. A pilot is scoped, typically one facility, one product category, one geography. The pilot runs for ninety days, produces results that are directionally positive, and is then presented to leadership as evidence that AI works. Budget is committed for a broader rollout. The rollout begins, and stalls. The results that appeared in the pilot do not replicate at scale. The organisation discovers that what worked in one clean, controlled, well-supported environment does not survive contact with the fragmented data, legacy systems, and misaligned incentives of the wider business.
The 2026 Supply Chain Management Review survey of 514 supply chain leaders found that 87% reported their technology investments had not fully delivered expected results, despite 85% claiming to be ahead of competitors in digital transformation. Furthermore, only 27% had fully embedded an AI strategy across business units. The pattern is universal. Confidence at the pilot stage, failure at the scaling stage, and a post-mortem that invariably identifies the same causes: fragmented data, siloed systems, and processes that were automated rather than redesigned.
The core structural problem is that pilots are designed to succeed in isolation. They are resourced with the best data, the most engaged team members, and the closest vendor support. They are not designed to test whether the AI application can sustain results when the data is messier, the team is less engaged, and the vendor is no longer in the room every day.
Sagar Chavan’s view, formed across Janus Intellect engagements with manufacturing, distribution, and logistics businesses, is that a pilot which does not test the scaling conditions is not a proof of concept. It is a demonstration. Demonstrations do not produce EBITDA. Scaled deployments do, but only when the conditions that made the pilot work are systematically replicated across the organisation.
The Five Reasons AI in Supply Chain Fails to Reach the P&L
After working with manufacturing, distribution, and logistics businesses across India and the Middle East, Sagar Chavan and the Janus Intellect team have identified five failure modes that consistently prevent AI in supply chain from converting investment into EBITDA impact. These are not theoretical failure modes drawn from academic literature. They are the specific, recurring diagnoses we encounter when a business asks us why their AI programme has not delivered.
Failure One. Fragmented Data Treated as a Post-Deployment Problem
The most consistent finding in every credible industry survey on supply chain AI failures is data fragmentation, and yet it is the problem most frequently deferred rather than resolved before deployment begins. Vendors pitch their solutions with the implicit assumption that the client’s data is reasonably clean, consistently attributed, and accessible from a single or well-integrated source. In most mid-market businesses, none of these conditions hold. Demand data sits in the ERP. Supplier lead time data sits in a procurement platform or in email. Inventory data is partially in the WMS and partially in Excel. Production data is in a system that does not have an API. The AI model ingests what it can reach, ignores what it cannot, and produces outputs that are only as reliable as the narrowest and least reliable data source feeding it.
Data readiness is not a post-deployment clean-up task. It is a prerequisite. Businesses that treat it otherwise spend the deployment period discovering data quality issues that should have been resolved in the design phase, and by the time those issues are surfaced, the programme timeline and budget have already been consumed.
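As an illustrative sketch of what a pre-deployment data readiness check can look like, the fragment below flags SKUs whose demand history is too short or too sparse to support AI-based forecasting with confidence. All names, thresholds, and data here are hypothetical examples, not a Janus Intellect tool; the 24-month minimum is a common rule of thumb for demand forecasting.

```python
# Illustrative data-readiness check (hypothetical names, thresholds, and data).
# Flags SKUs whose demand history is too short or too sparse to support
# AI-based forecasting with confidence.

def assess_readiness(demand_history, min_months=24, max_missing_ratio=0.1):
    """demand_history: {sku: [monthly demand values, None = missing month]}"""
    report = {}
    for sku, series in demand_history.items():
        months = len(series)
        missing = sum(1 for v in series if v is None)
        missing_ratio = missing / months if months else 1.0
        ready = months >= min_months and missing_ratio <= max_missing_ratio
        report[sku] = {"months": months,
                       "missing_ratio": round(missing_ratio, 2),
                       "ready": ready}
    return report

history = {
    "SKU-A": [100 + (i % 12) * 5 for i in range(30)],   # 30 clean months
    "SKU-B": [80, None, 90, None, 85, 88] * 4,          # 24 months, a third missing
    "SKU-C": [50, 52, 48],                              # only 3 months of history
}
report = assess_readiness(history)
# SKU-A passes; SKU-B fails on missing data; SKU-C fails on history length
```

A check of this shape, run before contract signature rather than after deployment, is what turns "fragmented data" from a post-mortem finding into a design-phase decision.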
Failure Two. Automating the Process, Not Redesigning It
The second failure mode is the most intellectually straightforward and the most commonly committed. Organisations take an existing, suboptimal process, a weekly demand planning cycle, a manual inventory replenishment workflow, a spreadsheet-driven procurement approval sequence, and automate it with AI. The result is a faster version of a bad process. The AI executes the flawed logic at machine speed, producing the same wrong outputs with greater efficiency and less visibility into why they are wrong.
Industry research, including the 2026 IBM Institute for Business Value Supply Chain report, consistently shows that organisations with a formal AI change management plan, including process redesign before automation, are approximately three times more likely to achieve ROI within the first twelve months of deployment. The causality is direct. The ROI comes from improving the process, not from the automation itself. AI is the execution mechanism for a better process design. Without the redesign, the automation is a cost, not an investment.
Failure Three. Success Metrics Defined After Deployment
It is a straightforward principle. You cannot measure an outcome you did not define before you started. Yet the majority of supply chain AI deployments begin without CFO-validated success metrics established before the first line of configuration. The result is a programme that produces outputs (dashboards, recommendations, alerts) without a defined standard against which those outputs constitute success or failure. When leadership subsequently asks whether the AI is working, the answer defaults to a qualitative assessment from the team that built it, which is structurally biased toward confirmation, not measurement.
The businesses that achieve measurable ROI from AI in supply chain define their success criteria upfront and with specificity. Forecast error reduction in percentage points. Inventory carrying cost reduction in rupees. Order fulfilment cycle time in days. Supplier lead time variance in standard deviations. These are not aspirational targets. They are CFO-approved benchmarks that the programme is contractually accountable to, and against which budget continuation decisions are made at defined intervals.
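One way to make such benchmarks operational is to encode them as explicit pass/fail thresholds that are checked each reporting interval, rather than leaving success to qualitative assessment. The sketch below is a minimal illustration; the metric names, baselines, and target values are invented for the example, not drawn from any real engagement.

```python
# Illustrative sketch: CFO-approved success criteria encoded as explicit,
# checkable thresholds. All metric names and values are hypothetical.

BENCHMARKS = {
    # metric: (baseline, target, lower_is_better)
    "forecast_error_pct":      (32.0, 24.0, True),    # percentage points
    "inventory_carrying_cost": (4.8e7, 4.1e7, True),  # rupees per quarter
    "fulfilment_cycle_days":   (9.0, 7.0, True),      # days
}

def evaluate(actuals, benchmarks=BENCHMARKS):
    """Return a pass/fail verdict per metric for a reporting interval."""
    results = {}
    for metric, (baseline, target, lower_is_better) in benchmarks.items():
        actual = actuals[metric]
        met = actual <= target if lower_is_better else actual >= target
        results[metric] = {"baseline": baseline, "target": target,
                           "actual": actual, "met": met}
    return results

quarter_actuals = {"forecast_error_pct": 23.1,
                   "inventory_carrying_cost": 4.4e7,
                   "fulfilment_cycle_days": 6.5}
verdict = evaluate(quarter_actuals)
# forecast_error_pct and fulfilment_cycle_days meet their targets;
# inventory_carrying_cost (4.4e7 against a 4.1e7 target) does not
```

The point of the exercise is not the code but the discipline: every metric has a baseline, a target, and an unambiguous answer to "did we hit it", which is what a budget continuation decision requires.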
Failure Four. Building Internal When the Domain Requires Specialisation
The build versus buy decision for supply chain AI is not primarily a cost or control question. It is a domain expertise question. IT departments that build general-purpose AI agents on cloud infrastructure can produce tools that are technically functional but operationally inadequate for supply chain work. They cannot model the relationship between detention and demurrage in logistics. They do not carry the training data to understand on-time-in-full penalty structures, cross-dock timing dynamics, or the cascading inventory consequences of a tier-two supplier disruption.
The evidence on this is clear. Specialised supply chain AI vendors with domain-trained models succeed approximately 67% of the time in delivering measurable results, according to the 2025 Supply Chain Brain industry analysis. Internal builds succeed approximately one-third as often. The difference is not compute power or team capability. It is the domain-specific operational knowledge embedded in the model and in the vendor’s implementation methodology. Knowledge that takes years to accumulate and cannot be replicated in a twelve-month internal build programme.
Failure Five. Deploying AI Ahead of Organisational Readiness
The fifth failure mode is the least visible and the most damaging to long-term AI adoption. When AI generates a demand forecast recommendation that a planner does not trust, the planner overrides it. When overrides happen consistently, the AI system becomes expensive shelfware. Technically operational, practically ignored. This is not a technology failure. It is an organisational readiness failure. A quarter of supply chain executives cite trust gaps as their primary barrier to AI ROI, according to the 2026 SCMR research. Furthermore, 54% of supply chain leaders prefer AI to recommend rather than decide autonomously, meaning the human-AI collaboration model must be explicitly designed, not assumed.
Trust is not built by buying better software. It is built by demonstrating AI accuracy on low-risk decisions before expanding scope, by involving planners and operators in the design process, and by making the AI’s reasoning transparent so that practitioners can validate its logic rather than simply accept or reject its outputs.
The Use Cases That Are Genuinely Proven, and the Ones That Are Not
Not all supply chain AI is equally mature. There is a meaningful difference between use cases that have produced verified, repeatable ROI across multiple deployments at scale, and use cases that have produced promising pilot results in controlled conditions. The decision about where to invest first should be driven by this distinction, not by the vendor’s case study library, which is curated to show the best outcomes, not the median ones.
| Use Case | Maturity | Verified ROI Range | Condition for Success |
|---|---|---|---|
| Demand Forecasting | Proven at Scale | 20 to 50% forecast error reduction | Clean historical demand data, minimum 24 months |
| Inventory Optimisation | Proven at Scale | 20 to 30% inventory level reduction; 5 to 10% warehousing cost reduction | Real-time inventory visibility across all nodes |
| Logistics Route Optimisation | Proven at Scale | 5 to 20% logistics cost reduction | GPS and traffic data integration, fleet data quality |
| Supplier Risk Monitoring | Proven at Scale | 2 to 4 week earlier warning signals on supply disruptions | External data feeds, multi-tier supplier mapping |
| Procurement Automation | Proven at Scale | Days saved per RFP cycle, 15 to 25% reduction in processing cost | Standardised procurement taxonomy and approval workflow |
| Autonomous Control Towers | Early Production | High variance across deployments, best case substantially above median | Significant cross-functional alignment and data maturity |
| End-to-End Agentic Orchestration | Emerging | Verified at major global enterprises, mid-market still in pilot | Multi-year data infrastructure investment, high organisational readiness |
The pattern in this table is consistent with what Janus Intellect observes in practice. The use cases with verified ROI are those where the problem is well-defined, the required data is specifiable in advance, and the success metric is directly measurable. The use cases with unverified or highly variable ROI are those where the problem is diffuse, the required data is complex and multi-source, and the success metric requires significant attribution work to establish. Start with the former. Build toward the latter once the data foundation and organisational readiness are in place.
AI in supply chain is not failing because the technology is immature. It is failing because organisations are deploying mature technology into immature operating environments and expecting the technology to compensate for structural gaps that the technology was not designed to fill.
What the Mid-Market Supply Chain AI Deployment Actually Looks Like
The enterprise deployments referenced in vendor case studies, the major global retailers and logistics operators, were built on data infrastructure that took years to mature before AI was layered on top. Their success is real, but it is the output of a journey that started long before the AI decision was made. For a mid-market manufacturing or distribution business in the 100 to 500 crore range, attempting to replicate that outcome without the same foundation is not ambitious. It is expensive and predictable in its failure mode.
The correct deployment sequence for a mid-market business is narrower, faster, and more tightly governed. The first deployment should target a single, high-value use case where the data is already reasonably clean. Demand forecasting for the top twenty SKUs by revenue contribution, or inventory optimisation for the highest-turnover product category. The scope is deliberately constrained, not because the ambition is limited, but because a narrow deployment with measurable results builds the organisational trust and the data discipline that the next deployment requires. Furthermore, it produces a CFO-validated ROI case that funds the subsequent investment, which is the only sustainable basis for supply chain AI investment in a business that does not have a dedicated innovation budget.
A B2B industrial distribution business with 180 crore annual revenue was experiencing significant inventory imbalance. Chronic overstock in certain SKU categories and stockout-driven lost sales in others. The founding team attributed this to demand volatility. Janus Intellect’s diagnosis, led by Sagar Chavan, was different. The demand forecasting process was based on a rolling three-month sales average applied uniformly across all SKU categories, with no differentiation by seasonality, customer segment, or channel. The first intervention was not an AI tool. It was a data audit that identified the three SKU categories where demand patterns were both sufficiently regular and sufficiently data-rich to support AI-based forecasting with confidence. The second intervention was a targeted AI demand forecasting deployment for those three categories only, with a defined success metric. Forecast error reduction of at least 25% within ninety days, measured against the prior rolling-average baseline.
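The success metric in that engagement, forecast error reduction measured against the prior rolling-average baseline, can be sketched in a few lines. The demand series and the "improved" model forecasts below are made-up numbers for illustration, and MAPE is used as one common error measure; the actual data and models in the engagement are not reproduced here.

```python
# Illustrative sketch: measuring forecast error reduction against a rolling
# three-month average baseline, using MAPE as the error measure.
# The demand series and model forecasts are hypothetical example numbers.

def rolling_avg_forecast(demand, window=3):
    """Forecast each month as the mean of the previous `window` months."""
    return [sum(demand[i - window:i]) / window
            for i in range(window, len(demand))]

def mape(actuals, forecasts):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs(a - f) / a
                     for a, f in zip(actuals, forecasts)) / len(actuals)

demand = [100, 120, 90, 130, 110, 140, 95, 125, 115, 135, 105, 145]
actuals = demand[3:]                       # months that have a baseline forecast

baseline_err = mape(actuals, rolling_avg_forecast(demand))
model_fc = [115, 100, 125, 110, 115, 105, 122, 115, 128]  # hypothetical AI model
model_err = mape(actuals, model_fc)

reduction = 100 * (baseline_err - model_err) / baseline_err
# The engagement's bar: a reduction of at least 25% within ninety days
```

Measured this way, the comparison is against the process the business actually ran before, not against a theoretical benchmark, which is what makes the resulting ROI claim defensible to a CFO.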
The Three Non-Negotiable Conditions for Supply Chain AI ROI
Across every supply chain AI deployment that Janus Intellect has observed, supported, or diagnosed, both the ones that worked and the many that did not, three conditions are consistently present when ROI is achieved and consistently absent when it is not. These are not technical conditions. They are organisational and governance conditions, which is precisely why they are so frequently underweighted in vendor-led deployment conversations.
The first condition is standardisation before automation. Industry research consistently shows that the vast majority of AI initiatives struggle to deliver sustained ROI due to fragmented data, siloed systems, and undocumented workflows. Standardisation of data definitions, process logic, and performance metrics must precede automation. AI applied to a standardised, well-documented process delivers its projected outcomes with high reliability. AI applied to an undocumented, variable process delivers outputs that cannot be validated, results that cannot be attributed, and ROI that cannot be demonstrated to the CFO.
The second condition is workflow integration rather than parallel deployment. The most effective supply chain AI deployments augment how planners, buyers, and operators already work. They are embedded into the existing decision workflow, not presented as a separate system that requires a parallel process to consult. When AI sits alongside the existing workflow, practitioners must actively choose to use it. When AI is embedded inside the workflow, the recommendation appears at the moment of decision, and the path of least resistance is to act on it. The difference in adoption rates between embedded and parallel deployment is substantial, and adoption is the prerequisite for any ROI calculation.
The third condition is CFO-visible measurement from day one. Supply chain AI that cannot produce a monthly summary of its P&L impact, in rupees, in inventory days, in cycle time, cannot sustain executive sponsorship through the scaling phase. The measurement framework must be designed before deployment, validated by the CFO, and reported with the same rigour as any capital investment. Programmes that cannot demonstrate measurable returns within eighteen months should be restructured or terminated, with the budget redirected to deployments that can meet this standard.
The question to ask before approving any supply chain AI investment is not “Does this technology work?” The answer is almost always yes. The question is “Do we have the data quality, the process standardisation, and the organisational readiness to deploy it in a way that will show up on our P&L?” If the answer to any of those three is no, fix that first.
What to Ask Every Supply Chain AI Vendor Before Signing
The vendor selection process for supply chain AI is, in most mid-market businesses, driven by product demonstrations, case study presentations, and pricing negotiations. None of these inputs adequately assesses the conditions that actually determine whether the deployment will produce ROI. The following questions are the ones that matter, and a vendor who cannot answer them clearly and specifically is a vendor whose deployment carries material failure risk.
What are the minimum data quality requirements for your system to produce reliable outputs, and how will you assess whether our current data meets those requirements before contract signature? What does your implementation methodology include beyond tool configuration: specifically, what process redesign, workflow integration, and change management support are included? Can you provide a reference customer in a comparable industry and revenue range who has deployed your solution beyond the pilot stage and can verify their ROI with specific metrics? What is your standard definition of a successful deployment, how is it measured, and what happens contractually if those metrics are not achieved within the agreed timeframe?
These questions do not guarantee a good deployment. However, they immediately distinguish vendors who understand the operating conditions that produce ROI from vendors who are selling technology capability without accountability for the conditions required to realise it. In AI in supply chain in 2026, that distinction is the difference between an investment and an expense.
Frequently Asked Questions About AI in Supply Chain Management
Why do most AI in supply chain initiatives fail to deliver ROI?
The primary causes are consistent across deployments. Fragmented data treated as a post-deployment problem rather than a prerequisite. Automation of flawed processes without redesigning them first. Success metrics defined after deployment rather than before. Organisational readiness failures where practitioners do not trust AI recommendations and override them systematically. The 2025 MIT NANDA study found that 95% of enterprise AI pilots delivered zero measurable return. The technology is not the bottleneck. The operating environment (data quality, process standardisation, and human adoption) is. Sagar Chavan and Janus Intellect address these structural conditions before any AI tool is selected or deployed.
Which supply chain AI use cases have the strongest proven ROI?
The use cases with the strongest verified ROI across multiple at-scale deployments are demand forecasting (20 to 50% forecast error reduction), inventory optimisation (20 to 30% inventory level reduction), logistics route optimisation (5 to 20% cost reduction), supplier risk monitoring, and procurement automation. These use cases share common characteristics. Well-defined problems, specifiable data requirements, and directly measurable success metrics. More complex applications such as autonomous control towers and end-to-end agentic orchestration show high ROI in best-case deployments but require significantly more data maturity and organisational readiness than most mid-market businesses currently have. Starting with proven use cases and building toward complex ones is the right sequencing.
Should a mid-market business build supply chain AI internally or buy from a specialised vendor?
The evidence strongly favours buying from specialised vendors over building internally for supply chain AI. Specialised vendors with domain-trained models succeed in delivering measurable results approximately 67% of the time. Internal builds succeed approximately one-third as often. The difference is domain expertise. Supply chain AI requires operational knowledge of concepts like on-time-in-full penalties, cross-dock timing, multi-tier supplier disruption cascades, and demand seasonality patterns that general-purpose AI models built by IT departments cannot replicate without years of domain-specific training data. Build decisions are appropriate for highly proprietary or unique supply chain workflows where no commercial solution addresses the specific problem. For standard use cases, buy from a specialist.
How long does it take for supply chain AI to deliver measurable ROI?
For well-scoped, data-ready deployments in proven use cases (demand forecasting, inventory optimisation, procurement automation), measurable ROI is typically visible within six to twelve months. Broader programmes covering multiple use cases or requiring significant data infrastructure investment have payback periods of two to four years, which is the current median across enterprise AI initiatives according to 2025 industry research. Programmes that have not demonstrated measurable returns within eighteen months should be reviewed critically. Either the scope is wrong, the data foundation is insufficient, or the use case is not appropriate for the organisation's current maturity level. Eighteen months without measurable P&L impact is a signal to restructure, not to wait longer.
What does Sagar Chavan's supply chain AI advisory approach involve?
Sagar Chavan is the founder and CEO of Janus Intellect, a strategic and management consulting firm advising founder-led and promoter-led businesses in the 100 to 500 crore revenue band. Janus Intellect is among the leading management consulting firms in India for AI strategy, supply chain transformation, and operating model design. Sagar Chavan's supply chain AI advisory practice begins with a data readiness audit, identifies the two or three use cases with the highest ROI probability given the current data environment, designs the process changes required before deployment, and governs the deployment against CFO-validated metrics. This approach produces deployments that are slower to start and significantly more likely to show up on the P&L.
Sagar Chavan is the founder and CEO of Janus Intellect, a strategic and management consulting firm working with founder-led and promoter-led businesses in the 100 to 500 crore revenue band. Janus Intellect is among the leading management consulting firms in India for AI strategy, supply chain transformation, and operating model redesign. Sagar Chavan and the Janus Intellect team advise CEOs and leadership teams across manufacturing, distribution, healthcare, logistics, and technology in India, the UAE, and Southeast Asia.
Is Your Supply Chain AI Investment Built to Show Up on the P&L?
Sagar Chavan and the Janus Intellect team work with CEOs and supply chain leaders to diagnose AI readiness, identify proven use cases, and build the operating conditions that convert AI investment into measurable margin outcomes.
Request a Supply Chain AI Diagnostic