In this essay, the author argues that a major obstacle to truly powerful AI systems is not a lack of compute or of better models but the missing layer of structured memory and understanding. Right now, most AI tools operate statelessly: they respond to prompts without maintaining a coherent, evolving representation of past interactions or user-specific knowledge.
This gap matters because it undermines the long-term usefulness of AI. Without memory that persists across sessions, the system forgets what you told it yesterday, and often even what you said earlier in the same conversation. As a result, every interaction effectively starts from scratch, forcing users to keep re-explaining context and restating what the system should already “know.”
To bridge this gap, the author suggests building a semantic architecture: one that includes memory (your ideas and notes), context (what you’re working on right now), and action (the ability to run tasks or tools). When AI is wired this way, it can recall your frameworks, reason over your past work, and execute on your intent, which turns it from a conversational assistant into a real thinking partner.
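To make that layering concrete, here is a minimal Python sketch of how memory, context, and action might be wired together. The class names (MemoryStore, Context, ActionLayer, Assistant) and the keyword-based recall are illustrative assumptions, not an implementation described in the essay; a real system would likely use semantic retrieval and a richer tool interface.

```python
# A minimal sketch of the memory / context / action layering described above.
# All names here are hypothetical illustrations, not an API from the essay.

from dataclasses import dataclass, field
from typing import Callable


@dataclass
class MemoryStore:
    """Persistent layer: notes and ideas that survive across sessions."""
    notes: list[str] = field(default_factory=list)

    def remember(self, note: str) -> None:
        self.notes.append(note)

    def recall(self, query: str) -> list[str]:
        # Naive keyword match; a real system would use semantic search.
        return [n for n in self.notes if query.lower() in n.lower()]


@dataclass
class Context:
    """Session layer: what the user is working on right now."""
    current_task: str = ""


@dataclass
class ActionLayer:
    """Execution layer: named tools the assistant is allowed to run."""
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, tool: Callable[[str], str]) -> None:
        self.tools[name] = tool

    def run(self, name: str, arg: str) -> str:
        return self.tools[name](arg)


@dataclass
class Assistant:
    """Wires the three layers together so a request can draw on all of them."""
    memory: MemoryStore
    context: Context
    actions: ActionLayer

    def handle(self, request: str) -> str:
        relevant = self.memory.recall(request)
        summary = "; ".join(relevant) if relevant else "no stored notes"
        return (
            f"Task: {self.context.current_task}\n"
            f"Relevant memory: {summary}\n"
            f"Available actions: {list(self.actions.tools)}"
        )


# Example usage: memory persists as an object the assistant consults,
# rather than context being re-typed into every prompt.
assistant = Assistant(MemoryStore(), Context(current_task="draft essay"), ActionLayer())
assistant.memory.remember("Essay framework: memory, context, action")
assistant.actions.register("search", lambda q: f"searching for {q}")
print(assistant.handle("framework"))
```

The point of the sketch is the wiring rather than any individual component: the assistant answers a request by consulting persistent memory, framing it against the current task, and surfacing the actions it could take, instead of treating each prompt as an isolated event.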
Ultimately, the piece argues that structure is the missing ingredient for intelligence. Rather than focusing only on making models smarter, designers should build systems that remember, reason, and act — systems where AI helps you build, grow, and execute on your own thoughts in a meaningful way.