The Cartographer of Drift

The Long View
My work began with a discipline rather than a thesis. Watch how people actually behave, in their own context, over long periods of time, and let the patterns speak before the theory does. That practice has taken me through more than thirty countries and across several technological eras, from the world before the internet to the world now being rewritten by artificial intelligence. The questions stayed constant. The variables kept changing.
Behavioral science is sometimes treated as a laboratory discipline. My version of it is a field discipline. The signal you want to understand rarely shows up cleanly in a controlled environment. It shows up in markets, in queues, in conversations, in the small frictions of daily decision-making. If you stay long enough in enough places, you start to see the shape of human behavior beneath the noise of any single culture or platform.
The Pattern That Started to Drift
Somewhere in the last fifteen years, the patterns I had been tracking began to behave differently. The change was not loud. It rarely is. People were still making decisions, still expressing preferences, still leaving signals behind. But the signals themselves were beginning to be shaped by the very systems intended to read them.
The issue rarely begins with the output alone. It starts earlier, at the source of the data.
Synthetic Drift is the term I use to describe the slow, compounding distortion that occurs when behavioral signals are captured, ranked, and recycled by systems that then influence the next round of behavior. Each pass looks neutral. The accumulated effect is anything but. Over time, the map of human preference can quietly diverge from the territory of human reality, and most of the people inside that drift cannot feel it happening.
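The feedback loop described above can be made concrete with a toy simulation. Everything here is an illustrative assumption of my own (the exposure-bias fraction, the amplification exponent, the three-item preference), not a measurement from the field work: a ranking system's estimate starts out accurate, but because each round of observed behavior is partly shaped by what the system surfaced last round, the estimate slowly pulls away from the unchanging true preference.

```python
# Toy model of Synthetic Drift: a ranker that learns from behavior
# it helped cause gradually diverges from true preference.
# All constants are illustrative assumptions, not empirical values.

def normalize(weights):
    total = sum(weights.values())
    return {k: v / total for k, v in weights.items()}

# True, unchanging preference over three items (hypothetical values).
true_pref = {"a": 0.5, "b": 0.3, "c": 0.2}

# The system's estimate starts out perfectly accurate.
estimate = dict(true_pref)

EXPOSURE_BIAS = 0.7  # share of observed behavior driven by what was surfaced

for step in range(20):
    # Observed signal = blend of real preference and ranked exposure.
    observed = {
        item: (1 - EXPOSURE_BIAS) * true_pref[item]
              + EXPOSURE_BIAS * estimate[item]
        for item in true_pref
    }
    # The ranker mildly amplifies whatever already ranks highest
    # (a rich-get-richer update), then renormalizes.
    estimate = normalize({k: v ** 1.1 for k, v in observed.items()})

# Each pass looks nearly neutral; the accumulated divergence is not.
drift = sum(abs(estimate[k] - true_pref[k]) for k in true_pref)
print(f"total divergence after 20 rounds: {drift:.3f}")
```

The point of the sketch is that no single iteration is dramatic: the first pass moves the top item's estimate by about a percentage point. It is the compounding, with the system's own output feeding its next input, that produces a map measurably different from the territory.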
Information Vertigo and Authority Displacement
Two adjacent ideas became necessary to describe what readers, users, and citizens were experiencing as a result. Information Vertigo names the disorienting sense that one no longer knows which surface of information to trust, even when the underlying facts are knowable. Authority Displacement names the quiet migration of credibility from institutions, experts, and lived experience toward whichever interface answers fastest.
Neither is a moral failing on the part of the reader. Both are predictable behavioral responses to environments that have been engineered, often unintentionally, to reward speed, confidence, and fluency over verification.
From Observation to Architecture
At a certain point, observation alone stops being sufficient. If the data layer is the source of the distortion, then the data layer is where the work has to happen. That conviction led to the Value Reinforcement System, the framework underlying U.S. Patent 12,205,176.
The Value Reinforcement System was not designed as a marketing tool or an engagement engine. It was designed as an answer to a structural question: how do you capture meaningful human signals in a way that the person would actually recognize as true? The standard data economy was built to extract residue and call it preference. The work I am interested in treats the person as the source of authority over their own signal, not as a surface to be measured.
The Case for Verified Human Data
The current debate around artificial intelligence often centers on the model. The more important question, in my reading, sits upstream of the model. What is the provenance of the data the model was trained on, and what is the provenance of the data it is generating in return?
Verified human data is not a feature. It is the precondition for any system that intends to remain accountable to the people it represents.
A system that cannot account for the origin of its inputs cannot be trusted to account for the consequences of its outputs. That is true for institutions, for platforms, and increasingly for the everyday tools people are being asked to rely on. Once that accountability is restored at the data layer, a great many other problems become tractable. Without it, the most sophisticated model in the world is still operating on quiet drift.
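The claim that accountability has to be restored at the data layer can be sketched in code. What follows is a deliberately minimal illustration of my own, not the mechanism of the Value Reinforcement System or the patent: each record carries a tag binding its content to a known origin key, so a downstream system can refuse inputs whose origin it cannot verify. All names here (`sign_record`, `verify_record`, `ORIGIN_KEY`) are hypothetical.

```python
# Minimal sketch of provenance at the data layer, assuming a shared
# origin key. Illustrative only; a real system would use proper key
# management and asymmetric signatures.
import hashlib
import hmac
import json

ORIGIN_KEY = b"demo-origin-key"  # hypothetical key for illustration

def sign_record(record: dict) -> dict:
    """Attach an origin tag binding the record's content to its source."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(ORIGIN_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "origin_tag": tag}

def verify_record(signed: dict) -> bool:
    """Accept a record only if its content still matches its origin tag."""
    payload = json.dumps(signed["record"], sort_keys=True).encode()
    expected = hmac.new(ORIGIN_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["origin_tag"])

signed = sign_record({"user": "u1", "signal": "prefers_a"})
ok = verify_record(signed)

# A record whose content was altered after signing no longer verifies.
tampered = {"record": {"user": "u1", "signal": "prefers_b"},
            "origin_tag": signed["origin_tag"]}
rejected = not verify_record(tampered)
```

The design point is narrow: once every input can answer the question "where did you come from?", a system can be held to account for what it does with that input. Without the tag, the tampered record above is indistinguishable from an honest one.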
Current Work
My current focus is divided between field research and a venture called Digital Legacy AI, where I act as the Chief Behavioral Architect with Glenn Devitt. The work sits at the intersection of identity, memory, and verified data, and it is informed directly by the field observations and the patent architecture that preceded it.
Alongside that, I have returned to full-time international research after stepping away from a United States-based business. The travel is not incidental. Comparative observation across cultures and economies remains the only way I know to test whether a pattern is local, contextual, or genuinely human. A theory that survives only in one country is not a theory yet.
What the Work Is For
I am less interested in predicting the future than in protecting the conditions under which a real future can still be chosen. That requires data that is honest about where it came from. It requires systems that are honest about what they are doing. And it requires readers, users, and citizens who are given the tools to recognize drift before it becomes the new ground.
If we get the data layer right, a great deal else becomes possible. If we get it wrong, very little else will matter.
The work continues, in the field and in the architecture. The questions are old. The stakes are new.