In this fascinating exposition, Stephen Wolfram connects two of the most important breakthroughs of our time: AI and the ruliad.
I ask Stephen how he thinks about knowledge hypergraphs, which I’m exploring at Open Web Mind.
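In case the term is unfamiliar: a knowledge hypergraph links concepts with hyperedges, each of which can connect any number of concepts at once. The toy sketch below is purely illustrative, one minimal way such a structure might be represented, not the actual data model behind Open Web Mind.

```python
# A minimal sketch of a knowledge hypergraph: hyperedges that each
# connect any number of concepts. Illustrative only.

from collections import defaultdict

class KnowledgeHypergraph:
    def __init__(self):
        self.hyperedges = []                   # each hyperedge: (label, frozenset of concepts)
        self.concept_index = defaultdict(set)  # concept -> indices of hyperedges containing it

    def add(self, label, *concepts):
        """Add a hyperedge connecting several concepts at once."""
        idx = len(self.hyperedges)
        self.hyperedges.append((label, frozenset(concepts)))
        for concept in concepts:
            self.concept_index[concept].add(idx)

    def related(self, concept):
        """Return every concept sharing at least one hyperedge with `concept`."""
        neighbours = set()
        for idx in self.concept_index[concept]:
            neighbours |= self.hyperedges[idx][1]
        neighbours.discard(concept)
        return neighbours

kg = KnowledgeHypergraph()
kg.add("is-a", "cat", "mammal")
kg.add("is-a", "dog", "mammal")
kg.add("chases", "dog", "cat", "squirrel")
print(kg.related("cat"))   # {'mammal', 'dog', 'squirrel'}
```

The point of the hypergraph, as opposed to an ordinary graph, is that a single edge can relate three or more concepts in one go, as the "chases" hyperedge does here.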
He offers several important insights.
Stephen draws a distinction between human-like minds and formal knowledge.
Human-like minds include both our own brains and Large Language Models. Such minds, Stephen suggests, are good at making broad but shallow connections.
Formal knowledge, on the other hand, is deep and precise. Stephen has spent a lifetime building computational towers of such knowledge.
He proposes that Large Language Models might serve as interfaces to formal knowledge. He warns, however, that much of this knowledge might be inaccessible to minds like ours.
To illustrate the difficulty, Stephen contrasts the 50,000 or so concepts to which we humans have assigned words, such as “cat” and “dog”, with the infinite variability an AI can generate, both within human concepts and in the interconcept space in between.
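One way to make "interconcept space" concrete is to assume, as many AI systems do, that concepts sit as points in a continuous embedding space. The vectors in the sketch below are made up for illustration; in a real system, a generative model would decode each interpolated point into an image or a description for which we have no word.

```python
# A minimal sketch of interconcept space, assuming concepts are points
# in a continuous embedding space. The vectors are invented for
# illustration, not taken from any real model.

import numpy as np

cat = np.array([0.9, 0.1, 0.4])   # hypothetical embedding for "cat"
dog = np.array([0.7, 0.3, 0.6])   # hypothetical embedding for "dog"

# Walk the line between the two concepts. Humans have a word for each
# endpoint, but none for the points in between; a generative model can
# still decode every intermediate point into something.
for t in np.linspace(0.0, 1.0, 5):
    between = (1 - t) * cat + t * dog
    print(f"t = {t:.2f} -> {np.round(between, 3)}")
```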
Tying this back to physics, Stephen posits that the concepts we have internalized, such as space, time and energy, occupy only a tiny part of the ruliad.
—
Stephen Wolfram
Related writings from Stephen
- Generative AI Space and the Mental Imagery of Alien Minds
- How to Think Computationally about AI, the Universe and Everything
- The Concept of the Ruliad
More on knowledge hypergraphs at Open Web Mind:
—
The Last Theory is hosted by Mark Jeffery, founder of Open Web Mind, bringing fresh insights into Wolfram Physics every other week.