From Frankenstein to AI: Why intelligence without understanding remains dangerous

In Frankenstein, Mary Shelley imagines a scientist who succeeds in animating lifeless matter, only to discover that creating life is not the same as creating understanding. Victor Frankenstein’s creature can perceive, feel, and even reason, yet it remains tragically excluded from the moral and emotional fabric that gives meaning to intelligence. The experiment does not fail because of a lack of technical brilliance; it falters because the creator does not fully comprehend the nature of what he has brought into being. This enduring literary moment offers a powerful analogy for our present engagement with Artificial Intelligence: we have learned to build systems that can process, predict, and perform, but we are still far from creating systems that truly understand.
Since Alan Turing introduced the concept of a thinking machine, the modelling of Data, Information, Knowledge, and Wisdom has driven the design of intelligent systems for everyday and complex real-life challenges. Successive generations of AI research have introduced increasingly sophisticated tools for managing uncertainty, including certainty factors, Bayesian networks, fuzzy logic, Dempster-Shafer theory, computing with words, and belief-based reasoning.
Turing theorised that to imitate an adult human mind, we must consider the initial state of the mind, the education it has received, and the other experiences it has undergone. His suggestion of building a computational model of a child’s mind and then “educating” it continues to echo in today’s neural networks and hierarchical learning systems.
Yet, limitations persist. Neural network models have advanced in architecture design, learning schemes, and evaluation metrics, but their learning mechanisms remain fundamentally mathematical optimisation processes and do not faithfully represent how the biological brain acquires and applies knowledge. The brain integrates data, prior knowledge, and accumulated wisdom simultaneously; current AI relies largely on data alone.
Equally significant is the neglect of quantum information processing. While quantum approaches have shown measurable advantages in learning efficiency and computational performance, they remain largely unexplored within mainstream AI architectures. If quantum effects indeed play a role in biological neural processing, this omission represents a major missed opportunity. Similarly, higher-order cognitive functions — consciousness, abstraction, moral reasoning, and contextual wisdom — are either absent or only superficially approximated. The result is systems that are efficient but not genuinely intelligent in the human sense.
To move forward, integrating quantum and neuromorphic computing becomes essential. The classical Church-Turing thesis, long regarded as the theoretical foundation of computation, is now increasingly questioned in light of quantum computing capabilities. Emerging evidence suggests that quantum systems may process information in ways that classical systems cannot efficiently simulate. If such processes are intrinsic to human cognition, quantum artificial intelligence may be the only route to matching the brain’s energy efficiency and noise tolerance.
A key opportunity lies in integrating these paradigms to develop brain-like structures capable of solving problems efficiently with minimal energy and limited data, while simultaneously providing concise explanations. Neuromorphic and brain-inspired computing emulate synaptic plasticity and energy-efficient processing observed in biological systems, while quantum computing offers exponential speed-ups in optimisation and pattern recognition. Together, they present a pathway to overcoming the limitations of current AI. Future AI systems must also be capable of explaining their reasoning. Transparency, accountability, and trust cannot emerge from opaque systems. This requires constructing cognitive frameworks modelled after human processes such as reasoning, memory, and decision-making. Integrating biology, psychology, and mathematics is essential for enhancing both traditional and quantum information-processing capabilities.
Incorporating models of mind, consciousness, and cognition is equally critical. Human learning integrates data, knowledge, and wisdom, whereas current AI relies primarily on data. Bridging this gap demands innovative approaches to reasoning and abstraction. It also requires a new skilling ecosystem that integrates computational thinking, biological intelligence, quantum literacy, and ethical reasoning.
Recent scholarship views AI not just as a computational tool but as a system involved in knowledge generation, making it essential to preserve human agency and critical thinking. The emerging “Einstein Test” goes beyond the Turing Test by assessing whether AI can produce genuine scientific discoveries rather than merely imitate human responses. However, despite their capabilities, current AI systems still lack true creativity, imagination, and causal reasoning. Addressing these challenges requires a structured convergence of five disciplines: biology, psychology, quantum science, mathematics, and data science. This integration is not additive but synergistic, opening possibilities such as quantum neuroinformatics, in which insights from brain science and quantum theory inform one another.
The way forward demands clear research priorities: quantum-neural interfaces, cognitive architecture integration, energy-efficient learning, and ethics and value alignment. However, progress will require more than technical innovation. Educational institutions must cultivate researchers capable of working across disciplinary boundaries and questioning foundational assumptions.
The trajectory of AI development cannot rely solely on scaling existing architectures. Their dependence on massive data, their opacity, and their indifference to ethics reflect the absence of biological, cognitive, and quantum insights. The framework proposed here is not a single technology but a research philosophy: one that treats the human brain as a guide to constructing genuinely intelligent machines.
The convergence of biology, information science, quantum science, and data science offers the most promising path towards AI that is not only powerful, but also comprehensible, efficient, and aligned with human values. Realising this vision will require interdisciplinary collaboration, ethical responsibility, and a deep commitment to understanding human cognition — one of the greatest open challenges of our time.
Prem Kumar Kalra is former Director, IIT Jodhpur; Jyoti Kumar Verma is Head, Department of English, Dayalbagh Educational Institute, Dayalbagh, Agra; and Rupali Satsangi is from the Department of Economics, Dayalbagh Educational Institute, Dayalbagh, Agra. Views presented are personal.