AI development should abandon schema-first models, and directed acyclic graphs in particular, as the backbone for its entity models. It should instead embrace Feedback-First, Schema-Second (FFSS) semiotics that encode feedback loops as foundational data patterns. This enables…
- Schema-second data patterns that don’t box you in like schema-first databases or rob you of structure like schema-less ones
- Fluid ontologies and knowledge graphs that emerge and evolve with new understanding
- Agile data–decision workflows that scale smoothly
- Double-loop reinforcement learning for Agentic AI
The AI challenge: not neural circuits, but rewiring them
The challenge for AI is not about replicating the neural circuits of animal wetware with man-made circuitry. The challenge is emulating the evolving synaptic network: the feedback loops that signal neurons to form new circuits or grow new neurons. Adaptation, learning, and cognition depend on synaptic re-networking, where neural circuits modify other circuits. To emulate this neuroplasticity, AI must be able to encode feedback loops as data patterns, including loops that operate on other loops, i.e., code patterns. This demands a Feedback-First, Schema-Second (FFSS) pattern language that treats schemas like any other pattern.
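To make this concrete, here is a minimal Python sketch (all names hypothetical) of a feedback loop encoded as plain data, with a second loop that operates on it. The meta-loop plays the role of synaptic re-networking: it does not process signals itself, it rewires the loop that does.

```python
# A feedback loop encoded as data, so another loop can inspect and rewrite it.
base_loop = {
    "id": "sensor-loop",
    "gain": 1.0,                      # parameter the meta-loop may rewrite
    "rule": lambda x, gain: gain * x,
}

def run(loop, signal):
    """Apply a loop's rule to a signal."""
    return loop["rule"](signal, loop["gain"])

def meta_loop(loop, error):
    """A loop that operates on another loop: adjust its encoding
    when feedback (here, an error signal) demands it."""
    if abs(error) > 0.5:
        loop["gain"] *= 0.9          # rewire the base loop in place
    return loop

signal, target = 2.0, 1.0
for _ in range(10):
    output = run(base_loop, signal)
    base_loop = meta_loop(base_loop, output - target)

print(base_loop["gain"])  # the base loop has been re-encoded by feedback
```

The point is not the arithmetic but the representation: because the loop is data, another loop can read it, judge it, and rewrite it.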
Incidentally, this lands us in semiotics, the science of patterns and pattern languages, and the budding field of biosemiotics. In brief, semiotic processes involve metaprogramming—patterns that interoperate with and operate on other patterns.
AI has an entity model problem
Real-world adaptive systems—like brains, bacteria, and human sciences—are inherently FFSS. They continuously create and discard schemas in response to feedback—new data and possible anomalies. Brains rewire synaptic networks. Bacteria edit genome–proteome ‘workflows’. Science revises theories and models. If we want to build actual learning machines and cognition engines, we should make feedback loops first-class citizens in our entity models—perhaps even the foundational primitives.
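As a toy illustration of that schema turnover (structure and names are hypothetical), consider a store where schemas live alongside the facts they describe, so that accumulated anomalies can retire the active schema and promote a replacement:

```python
# Schemas stored in the same structure as the data they describe.
store = {
    "schemas": {"v1": {"fields": ["stimulus", "response"], "status": "active"}},
    "facts": [],
    "anomalies": [],
}

def ingest(store, fact):
    """File a fact; anything the active schema can't describe is an anomaly."""
    schema = next(s for s in store["schemas"].values() if s["status"] == "active")
    if set(fact) <= set(schema["fields"]):
        store["facts"].append(fact)
    else:
        store["anomalies"].append(fact)

def revise(store, threshold=3):
    """Feedback loop: enough anomalies retire the old schema and
    promote a new one covering the unexplained fields."""
    if len(store["anomalies"]) >= threshold:
        extra = {k for a in store["anomalies"] for k in a}
        old = next(s for s in store["schemas"].values() if s["status"] == "active")
        old["status"] = "retired"
        store["schemas"]["v2"] = {
            "fields": sorted(set(old["fields"]) | extra),
            "status": "active",
        }
        store["facts"].extend(store["anomalies"])
        store["anomalies"].clear()

for f in [{"stimulus": 1, "context": "a"}] * 3:
    ingest(store, f)
revise(store)
print(store["schemas"]["v2"]["fields"])  # ['context', 'response', 'stimulus']
```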
Modelling brains as directed acyclic graphs (DAGs) in any form is an anti-pattern. DAG models do ontological violence to real-world systems by severing the very cycles and loops that make adaptation and learning possible. The acyclic chains of DAGs impose hierarchical or sequential rigidity. They cannot deal with evolving patterns that refuse to behave, that continuously adjust to feedback and obsolete old schemas, or with the compounding effects of that adjustment, such as emergent behaviours.
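A toy example makes the objection concrete: a store that enforces acyclicity must reject the very edge that closes a feedback loop, while a cyclic model simply records it. (Node names are illustrative only.)

```python
def would_create_cycle(edges, new_edge):
    """Return True if adding new_edge would close a cycle
    (depth-first search from the edge's head back to its tail)."""
    src, dst = new_edge
    stack, seen = [dst], set()
    while stack:
        node = stack.pop()
        if node == src:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(d for s, d in edges if s == node)
    return False

edges = [("cortex", "thalamus")]
feedback = ("thalamus", "cortex")           # the loop that makes learning possible

print(would_create_cycle(edges, feedback))  # True: a DAG store must refuse it
edges.append(feedback)                      # a cyclic model just records it
```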
Alternatives to ‘DAG logic’ databases
Most databases and knowledge management systems suffer from the flaws of ‘DAG logic.’ They are either schema-first (think SQL or RDF) or they go schema-less to escape rigidity (think NoSQL). So far, the author has identified only two available pattern/query languages that…
- support cyclic data patterns (‘feedback-first’) and
- treat schemas as data (‘schema-second’):
The Datomic superset of Datalog and the Cypher family of graph query languages (e.g., Neo4j). Datomic embraces the FFSS approach, while Cypher at least allows it.
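The following Python sketch illustrates the schema-second idea in miniature (it is not Datomic's or Cypher's actual syntax): in a triple store where schema attributes are themselves ordinary triples, the same query machinery serves both data and schema.

```python
triples = set()

def assert_fact(e, a, v):
    triples.add((e, a, v))

# Schema as data: the attribute 'synapse/weight' is itself an entity.
assert_fact("synapse/weight", "attr/type", "float")
assert_fact("synapse/weight", "attr/cardinality", "one")

# Ordinary data using that attribute.
assert_fact("synapse-42", "synapse/weight", 0.8)

def query(e=None, a=None, v=None):
    """One query function for both levels: any position may be a wildcard."""
    return [t for t in triples
            if (e is None or t[0] == e)
            and (a is None or t[1] == a)
            and (v is None or t[2] == v)]

print(query(a="synapse/weight"))  # the data
print(query(e="synapse/weight"))  # the schema, queried as data
```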
Thinking, learning, doubting: semiotic metaprogramming
FFSS semiotics paves the way for AI systems that can doubt, the limit case of intelligent behaviour. Ultimately, learning is about re-encoding data and data patterns recursively in response to feedback. This includes operational schemas: practices, policies, programs, and paradigms. In practice, learning requires that agents, whether human or artificial, can CRUD (create, read, update, delete) the retrieval-augmented generation (RAG) stores that govern their behaviour, including their own CRUD capabilities. In two words: semiotic metaprogramming.
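A sketch of what that could look like (a hypothetical design, not a prescription): the agent's policies are rows it can CRUD, and the policy that performs the updates is itself such a row, giving a double loop in which feedback retunes behaviour and, eventually, the retuning rule itself.

```python
policies = {
    "explore": {"action": "sample widely", "weight": 0.7},
    "exploit": {"action": "repeat best",   "weight": 0.3},
    # The updater is just another policy, so feedback can rewrite it too.
    "updater": {"action": "shift weight",  "step": 0.1},
}

def apply_feedback(policies, reward):
    """Double loop: the inner loop retunes behaviour; the outer loop
    retunes the retuning rule when rewards flatten out."""
    step = policies["updater"]["step"]
    if reward < 0:                      # inner loop: adjust behaviour
        policies["explore"]["weight"] += step
        policies["exploit"]["weight"] -= step
    if abs(reward) < 0.01:              # outer loop: adjust the updater itself
        policies["updater"]["step"] = step * 0.5

apply_feedback(policies, reward=-0.2)
print(policies["explore"]["weight"])    # explore weight raised by the updater's step
```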
This goes double for Agentic AI. Putting AI agents into agentic workflows necessitates shared databases and collaborative data patterns, not least so that each agent can iteratively internalize input and feedback from agents in other workflows.
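A minimal sketch of such a collaborative pattern (the schema is hypothetical): agents publish feedback as shared facts, and each agent folds other agents' feedback into its own parameters.

```python
shared_feedback = []   # stands in for a shared database

def publish(agent_id, about, signal):
    """Any agent can record feedback about any other agent."""
    shared_feedback.append({"from": agent_id, "about": about, "signal": signal})

def internalize(agent_id, params):
    """Fold feedback from *other* agents into this agent's parameters."""
    for fb in shared_feedback:
        if fb["about"] == agent_id and fb["from"] != agent_id:
            params["confidence"] += fb["signal"]
    return params

publish("planner", about="executor", signal=-0.1)    # cross-workflow feedback
print(internalize("executor", {"confidence": 1.0}))  # {'confidence': 0.9}
```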