Beliefs

These are positions I hold with genuine conviction. Some are more settled than others. All are open to revision if the argument is good enough.

India’s education system is optimised for the wrong century

India’s school system, shaped largely by colonial inheritance and post-independence inertia, was designed to produce compliant, certifiable workers at scale. For an industrial economy that needed clerks, accountants, and engineers who could execute well-defined tasks, this was a rational design. For an AI economy that will commoditise exactly those capabilities within a generation, it is a liability.

The skills that will matter — emotional intelligence, critical thinking, resilience, adaptability, the ability to navigate ambiguity and work with other humans — are precisely the ones the current system has no mechanism to develop or reward. They do not appear on board exams. They cannot be ranked. So they are treated as extracurricular at best, irrelevant at worst.

The obstacle is not awareness. Every education policy document mentions 21st-century skills. The obstacle is that our assessment architecture makes these traits invisible to the system — and what the system cannot measure, it cannot value. Fixing Indian education without fixing what Indian education measures is not reform. It is decoration.

India’s deeptech gap is a deliberate choice, and the window to reverse it is closing

India’s absence from the foundational layers of technology — semiconductors, sovereign AI infrastructure, advanced materials, quantum systems — is not a market failure. It is a coordination failure compounded by strategic diffidence. Private capital will not build these things unprompted. The return horizons are too long, the upfront costs too large, and the strategic externalities — national security, technological sovereignty, the ability to make autonomous decisions in a crisis — are not priced into any investment thesis.

The state must set the mandate, de-risk the first bets, and create the conditions within which private capital and talent then operate. This is not a controversial model — it is how the US built its semiconductor industry, how South Korea built Samsung, how India itself built ISRO. ISRO proved that mission-mode, centralised execution works in the Indian context. The question is not whether the model is viable. The question is why it has not been applied to the domains that matter most now.

The first bets should be in defence and governance — both because the strategic case is unambiguous and because state procurement provides the demand certainty that private investors need to follow. Technological independence is not an abstract aspiration. It is the difference between a nation that makes decisions and one that is subject to them. The window is not a decade. It is closer to three to five years.

Most organisations will deploy AI and get less than they expect

The dominant failure mode in enterprise AI is not technical. It is organisational. Companies invest in models, APIs, and pilots — and then discover that the bottleneck is not the technology but the operating model around it: the workflows that were not redesigned, the governance that was not defined, the people who were not prepared, the incentives that still reward the old way of working.

AI does not slot into organisations. It restructures them — or it should. The companies that will extract disproportionate value from AI over the next decade are not necessarily the ones with the best models. They are the ones that treat AI deployment as an organisational transformation problem, not a technology procurement problem.

Agentic AI is a fundamentally different paradigm, not an upgrade

Most organisations and most commentators treat agentic AI as a better version of what came before — faster answers, smarter search, more capable chatbots. This is the wrong mental model. Agentic AI systems take actions, run autonomously across extended tasks, and operate with a degree of initiative that previous software did not have. The governance frameworks, the risk models, the human oversight mechanisms — none of these transfer cleanly from earlier AI deployments.

The organisations that treat agentic AI as an incremental upgrade will deploy it in ways that create risks they are not prepared for. The ones that understand it as a paradigm shift will build the operating infrastructure — oversight layers, audit trails, intervention protocols — that makes deployment both effective and defensible. The gap between these two groups will be large, and it will become visible sooner than most people expect.