Liminal Spaces
2025 is coming to an end, and what a year it was. Chatbots, agents, robotics, ChatGPT in education, the AI bubble, LLM integration, backlash and disillusionment from the creative community. This year I took a step back to evaluate where the next wave of the creative industries is headed.
In the falls of 2024 and 2025 I attended the Artificiality Summit in Bend, OR, featuring leaders at the cutting edge of creative thinking, biology, engineering, AI, learning, and philosophy. Below are some of my favorite moments from the Artificiality Summit and what I learned.
“Life was computational from the start. The earliest organisms were machines for sensing, predicting, and responding to the world. Evolution is not a watchmaker; it is a search process. Over billions of years it has computed countless ways of modeling the world well enough to persist within it. Minds — ours or machines’ — are not departures from biology but continuations of it.” - Blaise Agüera y Arcas
1. The next wave of AI is focused on understanding what creates "intelligence" and the definition of "consciousness". "AI isn't augmenting human intelligence; it's revealing that intelligence was never solely human to begin with." — Helen Edwards, Artificiality Co-founder.
2. Blaise Agüera y Arcas — AI researcher and Vice President/Fellow at Google, where he is CTO of Technology & Society and founder of Paradigms of Intelligence (Pi), an organization doing basic research in AI and related fields with a focus on the foundations of neural computing, active inference, sociality, evolution, and artificial life — defines "intelligence" in his book What Is Intelligence?: "Today, we have arrived at this threshold. State-of-the-art models cannot yet perform at median human level on every test for intelligence or capability. They can still fail at logic, reasoning, and planning tasks that most people wouldn’t find challenging. Still, they handily reach human level on the most commonly used tests devised for evaluating human skill or aptitude, including the SAT, the GRE, and various professional qualifying exams. Tests designed to trip up AI on basic “being human stuff,” such as the Turing Test and CAPTCHAs, no longer pose meaningful challenges for large models. As these milestones recede in the rear-view mirror, there is an increasingly mad scramble to devise new tests humans can pass but AI still fails. Math Olympiad problems and visual challenges known as Bongard problems remain on the frontier, though AI models are making clear progress on these tests. (And they aren’t easy for most humans, either.) The radical yet obvious alternative is to accept that large models can be intelligent, and to consider the implications. Is the emergence of intelligence merely a side effect of “solving” prediction, or are prediction and intelligence actually equivalent?"
3. Michael Levin — distinguished professor of biology at Tufts University and associate at Harvard's Wyss Institute — gave a lecture in 2024 about the emerging field of diverse intelligence and his frameworks for recognizing and communicating with the unconventional intelligence of cells, tissues, and biological robots. In 2025 he took it even further. His research reveals intelligence emerging far beyond the brain: in cellular assemblies, living systems, and potentially synthetic ones we've barely begun to imagine. "Levin is very taken with the fact that these beautiful structures seem to exist on their own in the mathematical world, and much of his argument rests on the fact that unshakeable mathematical rules have explanatory value for physical patterns. It’s not just that a fractal pattern (for example) might be found in a living organism — it’s that you can’t explain why that organism developed the way it did in any meaningful way without getting into the mathematical nature of the fractal. And the nature of fractals is inherently non-physical." — Daniel Witt
4. Language has power: how we define new forms of "intelligence" has an enormous influence on society. How will we define the rights of synthetic intelligence as it evolves? “Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development.” — Blaise Agüera y Arcas
5. De Kai — who holds a joint appointment at HKUST’s Department of Computer Science and Engineering and at Berkeley’s International Computer Science Institute, and is an independent director of the AI ethics think tank The Future Society — advocates for "raising" AI responsibly with the same care we give our children: "the ways we interact online don’t just shape our personal connections—they are the very lessons from which artificial intelligence learns to understand humanity. How do we ensure that this emerging intelligence honors the rich diversity of human culture, much like how we strive to protect biodiversity in nature?"
6. We are in a liminal space of "unknowing." Maggie Jackson — a journalist and author whose work focuses on how technology reshapes attention, home, and daily life — explores why we should seek not-knowing in an era of angst and flux in her latest book, Uncertain: The Wisdom and Wonder of Being Unsure. The book was nominated for a National Book Award and named one of the Next Big Idea Club's Top 25 Non-Fiction Books of the Year. "When people begin to understand the difference, the distinction between fear and uncertainty, they tell me it’s liberating. Suddenly the world opens up and you can begin to see that curiosity."
If you made it this far, you might be wondering how this applies to a creative professional. As an artist, designer, and creative professional, I work at the intersection of the liminal and the commercial. The creative leap on the horizon redefines our identity, our community, and our relationship to nature and to each other. LLMs supercharged trends, called into question the value of creative thinking and ownership, and redefined the worth of creative professionals in the marketplace. In 2023 I wrote: "AI mirrors our society, learning from our history, art, thinkers, writers, social hierarchies, and values. It reveals our flaws, and how we discriminate with astonishing speed and accuracy. Like humanity itself, AI possesses the potential for positive revolution and exponential progress, but also for the darkest and most destructive future." That future has arrived, and we are swimming through mountains of information: a firehose of visual output, new tools, new career definitions, and layoffs as companies attempt to automate, speed up, and replace the messy human element. But do we really want to "replace" the human element? What kind of vacuum will be left behind, and what will fill it?
Last month I spoke with Artificiality Summit speaker Tess Gilman Posner — musician, technologist, and social entrepreneur. As the founding CEO of AI4ALL, Tess helped shape a generation of diverse young leaders in artificial intelligence. Her career spans education innovation, public policy, and nonprofit leadership, including work at the White House. Tess challenged conference-goers to identify which musical track was her actual voice versus an AI interpretation. The results were surprisingly mixed. Moving forward, what guardrails will keep us grounded in this disorienting hall of mirrors? Are we losing our identity to a bottom line that will eventually disrupt the very thing we need most: connection?
As we begin to redefine society, intelligence, and community, and to decide the fate of our responsibility to each other, to nature, and to our planet, I leave you with a quote from "Speculative Philosophy of Planetary Computation" by Benjamin Bratton, a philosopher of technology and Professor of Philosophy of Technology and Speculative Design at the University of California, San Diego. Through the lens of planetary computation, his work establishes new philosophical frameworks for interpreting the past, present, and future co-evolution of life, culture, and technology.
"But at the same time, we also recognize what we might call something like simulation anxiety. That is: the more that we interact with simulations in a meaningful way, the less certain we are of the boundaries, the uncertain liquid, fluid boundaries between sim and real. But as we do so, these movements and these kinds of negotiations become more difficult. And indeed, the more times we ask the question, “is this a simulation or not?” the less sure we are of the answer."