From Microchips to Macro Machines: Mapping Manufacturing Mysteries with AI

Written by Hanu Priya Indiran


In an era defined by high-stakes innovation, engineered systems are becoming increasingly complex. From the roaring heat of gas turbines to the delicate precision of biosensors and photonic chips, these technologies now shape critical products of modern life—jet engines that move passengers across continents, microchips that power our communication and computation, and medical implants that sustain lives. They must operate reliably under demanding conditions: turbine blades withstanding temperatures close to their melting point, semiconductor devices functioning at nanometer precision, and implants designed to endure continuous operation inside the human body.

Yet, despite exhaustive design simulations and rigorous component-level testing, many of these systems still stumble during final test-bed trials—the stage where complete systems are assembled and tested under real-world operating stresses. It is in these trials, where design meets reality, that hidden weaknesses suddenly surface.

The reason for these failures lies not in flawed design, but in the manufacturing process itself.

Whether you're building a nuclear reactor or a nanoscale biosensor, every component is shaped by a chain of interdependent manufacturing processes. Tiny variations—like temperature drift in a deposition chamber, misalignments during photolithography, or subtle tool wear during etching—can interact in non-obvious ways, leading to performance degradation, yield drops, or even catastrophic failure.

These issues are amplified in systems with complex physics and tight tolerances—think jet engines or high-density microchips—where invisible defects can trigger emergent behaviours once components are integrated.

Manufacturing diagnostics have long relied on two main approaches: empirical models and physics-based simulations. Empirical models act as statistical predictors—drawing correlations from past data to estimate outcomes. They can be quick and effective within well-characterised conditions, but their reliability tends to diminish once a process drifts into unfamiliar territory, such as an unexpected temperature fluctuation or an uncommon sequence of tool wear.
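
To see what this brittleness looks like in practice, here is a minimal toy sketch in Python. Every number is invented for illustration: a hypothetical relationship between deposition temperature and film thickness, with a mild nonlinearity that a simple empirical (linear) model never sees in its training range, and badly mispredicts once the process drifts.

```python
# Toy illustration: an empirical model fit on in-range process data
# extrapolates poorly once conditions drift. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

# "Historical" data: deposition temperature (C) vs. film thickness (nm),
# with a mild cubic term invisible to a linear fit inside 180-220 C.
temp = rng.uniform(180, 220, 50)
thickness = 100 + 0.5 * temp + 0.002 * (temp - 200) ** 3 + rng.normal(0, 1, 50)

# Fit a simple linear (empirical) model on well-characterised conditions.
coeffs = np.polyfit(temp, thickness, 1)

# In-range prediction is fine; a large temperature drift exposes the
# hidden curvature and the prediction diverges from the true trend.
for t in (200, 260):
    predicted = np.polyval(coeffs, t)
    actual = 100 + 0.5 * t + 0.002 * (t - 200) ** 3
    print(f"T={t} C: predicted {predicted:.1f} nm, true trend {actual:.1f} nm")
```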

Physics-based simulations, in contrast, are grounded in first principles: equations of heat transfer, fluid flow, or material deformation. They provide valuable insight into specific processes, yet often simplify conditions in ways that make it difficult to capture the interconnected reality of manufacturing systems. A simulation, for example, might show how metal deforms under pressure, but it may not fully account for how tool vibrations, surface chemistry, and ambient humidity combine to influence the final product.

The challenge is that many failures do not arise from a single variable in isolation, but from subtle interactions across multiple steps. Traditional methods—whether data-driven or physics-driven—offer important tools, yet they can struggle to illuminate these hidden links, leaving gaps in our ability to predict outcomes where the stakes are highest.

What we need is a framework that fuses domain knowledge with adaptive intelligence. That is where my research steps in.

This work focuses on building intelligent diagnostic models that combine expert knowledge of failure modes with insights drawn directly from manufacturing and testing data. At the core are probabilistic graphical models (PGMs), which provide a rigorous way to reason under uncertainty and connect sparse, indirect signals back to their origins in the manufacturing chain. By framing processes as networks of nodes and edges, enriched with both domain knowledge and data, PGMs and related graph-based approaches (graph theory, knowledge graphs) can uncover dependencies that are otherwise hidden, capturing the cause–effect pathways that span multiple stages of manufacturing. In practice, this means they can surface “hidden interactions”—such as how a minor misalignment during lithography might later amplify thermal stress—long before these effects accumulate into full-scale failures.
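
To make this concrete, the sketch below builds a toy three-node Bayesian network with the open-source pgmpy library, tracing the lithography example above. The node names, edges, and probability tables are invented for illustration and are not drawn from the research itself; real networks span many more stages and are built from both expert knowledge and data.

```python
# A minimal sketch of a probabilistic graphical model (Bayesian network)
# linking a lithography misalignment to thermal stress and test failure.
# Node names and probability values are illustrative assumptions.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Directed edges encode cause-effect pathways across manufacturing stages.
model = BayesianNetwork([
    ("Misalignment", "ThermalStress"),
    ("ThermalStress", "Failure"),
])

# Prior: a small chance of a subtle misalignment during lithography.
cpd_mis = TabularCPD("Misalignment", 2, [[0.95], [0.05]])

# Misalignment raises the probability of elevated thermal stress.
cpd_stress = TabularCPD(
    "ThermalStress", 2,
    [[0.90, 0.30],   # P(stress = low  | misalignment = no, yes)
     [0.10, 0.70]],  # P(stress = high | misalignment = no, yes)
    evidence=["Misalignment"], evidence_card=[2],
)

# Elevated stress sharply increases the failure probability at test.
cpd_fail = TabularCPD(
    "Failure", 2,
    [[0.99, 0.60],   # P(failure = no  | stress = low, high)
     [0.01, 0.40]],  # P(failure = yes | stress = low, high)
    evidence=["ThermalStress"], evidence_card=[2],
)

model.add_cpds(cpd_mis, cpd_stress, cpd_fail)
assert model.check_model()

# Diagnostic query: given an observed failure, which upstream step is suspect?
infer = VariableElimination(model)
print(infer.query(["Misalignment"], evidence={"Failure": 1}))
```

Even in this toy setting, a single query reverses the direction of reasoning: from an observed test-bed failure back to the probability that an upstream lithography step is responsible, which is exactly the kind of inference the diagnostic framework scales up across the manufacturing chain.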

This direction is also part of a broader movement across industry and academia. Companies like Siemens and GE are embedding graph-based reasoning and digital-twin frameworks into their predictive maintenance strategies. Academic groups worldwide are advancing related efforts: Bayesian networks for process monitoring, knowledge-graph-driven fault diagnosis, and hybrid AI models that explicitly account for uncertainty in engineered systems. My work builds on these foundations but moves beyond after-the-fact fault prediction—toward uncovering how combinations of subtle manufacturing variations can interact in unknown ways to influence performance. By tracing these hidden interactions across the chain, the approach makes it possible to identify risks that would otherwise remain invisible until they manifest in costly failures.

From massive machines to microchips, building resilient, intelligent systems starts with understanding how they break. With the right fusion of AI and engineering, we can do more than prevent failure—we can design for sustainability, at scale.

Cover image by Jakub Zerdzicki


Hanu Priya Indiran is a PhD candidate in Engineering at the University of Cambridge, specialising in AI-driven Digital Twins for complex engineered systems. Her research develops probabilistic frameworks for diagnosing process variations and enabling tolerance-aware design for manufacturing reliability, in collaboration with Siemens Energy. She was awarded the Cambridge International Scholarship and an EPSRC studentship for her doctoral research. Hanu has been recognised through global programmes and awards, including the McKinsey & Company Next Generation Women Leaders Fellowship, the Oxford Global Leadership Challenge, the UN SDSN Local Pathways Fellowship, and the AI Governance Fellowship at the Cambridge AI Safety Hub. She represented the UK as a UN Women UK Delegate to the UN Commission on the Status of Women.

 