AI Revolution in Symbolic Physics
As a researcher deeply immersed in the world of AI and quantum physics, I have witnessed the revolutionary impact of AI on scientific discovery. In this post, I share my thoughts on recent developments in this field that relate to our SymPhysAI project: Stephen Wolfram’s insights and the emergence of Kolmogorov-Arnold Networks (KANs).
SymPhysAI: Unraveling the Quantum Nature
At its core, SymPhysAI aims to develop a symbolic AI tool to unravel hidden topological orders in quantum physics. What makes SymPhysAI special is its focus on symbolic regression — essentially guiding AI to help scientists learn formulas, not just crunch data. We are also prioritizing explainability, ensuring that our AI does not just present predictions but also explains its rationale. This transparency is crucial in scientific research, where understanding the why is often as important as the what.
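To make “symbolic regression” concrete, here is a minimal toy sketch, not the SymPhysAI pipeline itself: given noisy data generated from a hidden formula, it searches a small library of candidate terms, fits the coefficients by least squares, and prefers simpler expressions. The synthetic data, the term library, and the complexity penalty are all illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "measurement" data generated from a hidden formula.
x = np.linspace(-3, 3, 200)
y = 0.5 * x**2 + 3.0 * np.sin(x) + rng.normal(scale=0.05, size=x.size)

# A small library of candidate symbolic terms.
library = {
    "x": x,
    "x^2": x**2,
    "x^3": x**3,
    "sin(x)": np.sin(x),
    "cos(x)": np.cos(x),
    "exp(x)": np.exp(x),
}

best = None
# Search all 1- and 2-term formulas; fit coefficients by least squares
# and score each candidate by mean-squared error plus a complexity penalty.
for k in (1, 2):
    for terms in itertools.combinations(library, k):
        A = np.column_stack([library[t] for t in terms])
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((A @ coeffs - y) ** 2)
        score = mse + 0.01 * k  # prefer simpler formulas
        if best is None or score < best[0]:
            best = (score, terms, coeffs)

score, terms, coeffs = best
formula = " + ".join(f"{c:.2f}*{t}" for c, t in zip(coeffs, terms))
print("recovered formula:", formula)  # e.g. "0.50*x^2 + 3.00*sin(x)"
```

The point is that the output is a formula a physicist can read and criticize, rather than an opaque set of network weights.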
Wolfram’s Vision: AI and the Computational Paradigm
I have been following Stephen Wolfram’s thinking closely, as much of my work relies on the Wolfram Language for symbolic calculations. In a blog post this March, Wolfram shared some fascinating insights that resonate strongly with our work and offer a broader perspective on AI’s role in science.1 Wolfram emphasizes that there are “no model-less models” — the performance of neural networks depends heavily on their low-level architecture, which is ultimately designed from human prior knowledge. In his latest post, from August, Wolfram delves into the idea that machine learning succeeds by sampling the inherent complexity of computational systems, much like biological evolution.2 This aligns perfectly with our Markovian neuroevolution algorithm, which helps researchers find suitable quantum-circuit architectures for different machine-learning tasks.3
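To give a rough flavor of this kind of architecture search — as a toy illustration only, not the published Markovian quantum neuroevolution algorithm — the sketch below grows a two-qubit circuit gate by gate, where each new gate depends only on the previous one, and keeps whichever circuit best prepares a Bell state. The gate set, the uniform transition rule, and the fidelity-based fitness are all simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Elementary gates on two qubits (basis ordering |q0 q1>).
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
CNOT01 = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
CNOT10 = np.array([[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
GATES = {
    "H0": np.kron(H, I2), "H1": np.kron(I2, H),
    "T0": np.kron(T, I2), "T1": np.kron(I2, T),
    "CNOT01": CNOT01, "CNOT10": CNOT10,
}
NAMES = list(GATES)

# Target: the Bell state (|00> + |11>) / sqrt(2).
target = np.array([1, 0, 0, 1]) / np.sqrt(2)

def fidelity(circuit):
    """Apply the gate sequence to |00> and compare with the target state."""
    state = np.array([1, 0, 0, 0], dtype=complex)
    for name in circuit:
        state = GATES[name] @ state
    return abs(np.vdot(target, state)) ** 2

def next_gate(prev):
    # Markov transition between gates; uniform here (so `prev` is ignored),
    # whereas a trained transition matrix would bias the walk toward
    # promising gate sequences.
    return rng.choice(NAMES)

best_circuit, best_fid = [], 0.0
for _ in range(200):                 # independent Markovian walks
    circuit, gate = [], rng.choice(NAMES)
    for _ in range(4):               # grow the circuit gate by gate
        circuit.append(gate)
        f = fidelity(circuit)
        if f > best_fid:
            best_circuit, best_fid = list(circuit), f
        gate = next_gate(gate)

print("best circuit:", best_circuit, "fidelity:", round(best_fid, 3))
```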
Wolfram also points out a fundamental limitation of large language models like GPT: while they excel at learning broad concepts, they often struggle with precise details. This observation underscores the need for symbolic AI tools in scientific applications. One of Wolfram’s most intriguing ideas is framing scientific understanding as a form of storytelling, which he calls “Science as Narrative.” This perspective offers a fresh way to think about AI interpretability. In SymPhysAI, we are not just aiming for accurate predictions; we are trying to help physicists construct coherent narratives about quantum phenomena.
Wolfram also explores the philosophical question of what makes a scientific result truly interesting. As we develop AI systems for scientific discovery, this becomes a crucial consideration: how do we ensure our AI doesn’t just find patterns, but identifies the patterns that genuinely push the boundaries of our understanding? Perhaps most encouragingly, Wolfram’s conclusion aligns closely with our SymPhysAI philosophy. He suggests that the greatest opportunity to advance science lies in combining the strengths of AI with formal computational approaches. This synthesis of human knowledge and machine-learning power is exactly what we are striving for with SymPhysAI.
KANs: A Glimpse into the Future
As we continue our exploration in the SymPhysAI project, an exciting new development has emerged in the field: Kolmogorov-Arnold Networks (KANs).4 This new architecture represents a fundamental shift in how we think about neural networks. Instead of fixed activation functions on nodes, KANs use learnable functions on edges. It is like giving each connection in the network its own tiny, adaptable brain.
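To show what “learnable functions on edges” looks like in code, here is a minimal sketch of a single KAN-style layer — my own simplified reading, not the reference implementation. Every edge carries its own univariate function, parameterized here as a weighted sum of fixed Gaussian bumps, and each output node simply sums its incoming edge functions. The basis choice and shapes are illustrative assumptions; the original paper uses B-spline parameterizations and trains the coefficients by gradient descent.

```python
import numpy as np

class ToyKANLayer:
    """One KAN-style layer: a learnable univariate function on every edge."""

    def __init__(self, n_in, n_out, n_basis=8, x_min=-2.0, x_max=2.0, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed Gaussian "bump" centers shared by all edges ...
        self.centers = np.linspace(x_min, x_max, n_basis)
        self.width = (x_max - x_min) / n_basis
        # ... and one learnable coefficient vector per edge (i -> j).
        self.coeffs = rng.normal(scale=0.1, size=(n_in, n_out, n_basis))

    def edge_functions(self, x):
        """Evaluate phi_ij(x_i) for a batch x of shape (batch, n_in)."""
        # Gaussian basis evaluated at every input: (batch, n_in, n_basis).
        basis = np.exp(-((x[..., None] - self.centers) / self.width) ** 2)
        # Weight the basis per edge: (batch, n_in, n_out).
        return np.einsum("bip,iop->bio", basis, self.coeffs)

    def forward(self, x):
        """Each output node sums its incoming edge functions."""
        return self.edge_functions(x).sum(axis=1)  # (batch, n_out)

# Forward pass on random data; in practice the coefficients would be trained
# by gradient descent against a loss, just like ordinary network weights.
layer = ToyKANLayer(n_in=3, n_out=2)
x = np.random.default_rng(1).uniform(-1, 1, size=(5, 3))
print(layer.forward(x).shape)  # (5, 2)
```

Because each edge function is a small sum of basis terms, it can later be inspected, pruned, or matched against known symbolic forms, which is what makes the architecture attractive for scientific use.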
What makes KANs particularly intriguing is their strong mathematical foundation, based on the Kolmogorov-Arnold representation theorem. This theoretical grounding could lead to more rigorous and interpretable results — a key goal in scientific AI. Early studies show that KANs can achieve accuracy comparable to or better than traditional neural networks, often with significantly fewer parameters. With the introduction of multiplicative nodes in KAN 2.0, the architecture now captures more complex, non-linear relationships that are particularly prevalent in scientific data.5
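For reference, the Kolmogorov-Arnold representation theorem states that any continuous multivariate function on a bounded domain can be written as a finite composition of continuous univariate functions and addition:

$$ f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right), $$

where each $\Phi_q$ and $\phi_{q,p}$ is a continuous function of a single variable. KANs relax this exact two-layer form into deeper stacks of layers whose univariate edge functions are learned from data.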
The combination of connectionism and symbolism in KANs aligns with the dual approach we’ve taken in quantum capsule networks (QCapsNets).6 While QCapsNets extend classical capsule networks to enhance the explainability of quantum neural networks, KANs strengthen the symbolic side by integrating scientific knowledge directly into the learning process. For complex scientific computations like those we deal with in quantum physics, this higher degree of interpretability and interactivity could be a game-changer.
Future of AI in Scientific Discovery
Looking at these developments — Wolfram’s supportive insights and the potential of KANs — I am excited about what the future of AI in scientific discovery holds for our SymPhysAI project. We are moving towards more interpretable models that can not only make predictions but also help us understand the why behind scientific phenomena. We are developing more efficient computational methods that could accelerate the pace of discovery. And perhaps most importantly, we are creating tools that augment human intelligence rather than replace it, helping us ask better questions and explore new avenues of research.
As we stand on the brink of this new era in scientific research, it is important to remember that AI is a tool: a powerful one, but a tool nonetheless. The true breakthroughs will come from the synergy between human creativity and machine intelligence. The emergence of KANs represents more than just technological progress; it is a testament to the evolving relationship between human scientists and AI. This partnership promises to unlock the deepest secrets of our universe.
In the coming years, I am looking forward to exploring how we can incorporate these new ideas into our SymPhysAI project, guided by insights from pioneers like Wolfram and innovations like KANs. The future of scientific discovery is bright, and AI will undoubtedly play a crucial role in illuminating the path forward. As we continue to refine these approaches and develop new ones, who knows what incredible discoveries await us in the realm of quantum physics and beyond?7
1. Can AI Solve Science?—Stephen Wolfram Writings. ↩
2. What’s Really Going On in Machine Learning? Some Minimal Models—Stephen Wolfram Writings. ↩
3. Z. Lu$^\ast$, P.-X. Shen$^\ast$, and D.-L. Deng, Markovian Quantum Neuroevolution for Machine Learning, Phys. Rev. Applied 16, 044039 (2021). ↩
4. Z. Liu, Y. Wang, S. Vaidya, F. Ruehle, J. Halverson, M. Soljačić, T. Y. Hou, and M. Tegmark, KAN: Kolmogorov-Arnold Networks, arXiv:2404.19756. ↩
5. Z. Liu, P. Ma, Y. Wang, W. Matusik, and M. Tegmark, KAN 2.0: Kolmogorov-Arnold Networks Meet Science, arXiv:2408.10205. ↩
6. Z. Liu$^\ast$, P.-X. Shen$^\ast$, W. Li, L.-M. Duan, and D.-L. Deng, Quantum Capsule Networks, Quantum Sci. Technol. 8, 015016 (2022). ↩
7. This blog post was refined with Claude AI and the image was generated by DALL-E-3. ↩