Two Technology Revolutions and the Truth Layer AI Has Been Missing
How the birth of AI feels a lot like the birth of the internet
I graduated college in 1992, which turned out to be the perfect moment to step almost accidentally into the middle of a technological earthquake. The internet existed, technically, but very few people really understood what it was. I certainly was not one of the visionaries. I just sensed that something unusual was happening.
Most of us were still using fax machines and landlines, and going online meant listening to a modem screech and hoping no one picked up the phone. Email felt like a novelty rather than the foundation of global business.
Even without understanding the full implications, it was obvious that change was coming. People were experimenting, building things, and trying ideas that did not always make sense. The energy was chaotic and exciting, even if none of us could articulate where it was heading.
Then the hype cycle exploded.
Everyone jumped in. Venture capital poured into anything with a dot com attached to it. Brilliant ideas emerged next to terrible ones. Fortunes were made and fortunes were lost. Most of us were trying to keep up, observing, learning, and occasionally shaking our heads at the insanity of it all.
Eventually the noise settled. The world began to understand what the internet was actually good for. That was when the real transformation happened. Not gradually, but fundamentally. Entire industries were reshaped. New ones were born. Life changed in ways we could not have imagined during those early days.
Looking back, the pattern is much clearer than it ever was while living through it. And now, three decades later, I am seeing the same pattern unfold again, this time with artificial intelligence.
Once again, a small group understood the potential early. Once again, a massive wave of excitement followed. Once again, investment surged and everyone began experimenting in every direction.
And once again, the early problems are appearing.
The biggest one, and the one anyone who has tried to use AI in a technical setting encounters quickly, is hallucination. AI answers confidently and even eloquently, but it does not always answer correctly. It fills the gaps with invented information whenever the underlying data is ambiguous or incomplete.
In some cases, that is merely inconvenient. In project controls, program governance, and EVMS environments, it is unacceptable.
We operate in a discipline where facts matter. Traceability, accuracy, and data integrity are not optional. A variance explanation cannot be mostly correct. A baseline change cannot be inferred. A forecast that guesses is often worse than no forecast at all.
So while many organizations asked how they could bolt AI onto their existing tools, we found ourselves wrestling with a different question.
What would it take to make AI reliable in a world where guessing is not allowed?
That question became the foundation of our work at TMS, and it eventually led us to build the Project Governance Framework, or PGF.
We did not begin with the idea of building an AI assistant. We began by creating the truth layer that AI needs in order to be trustworthy.
During the internet era, infrastructure had to evolve before the real value emerged. Broadband, secure protocols, stable databases, and search technology all paved the way for everything that followed. AI is no different. Before it can transform project management, it needs structured and consistent data modeled around how programs actually operate.
PGF fills this need. It provides a vendor-agnostic, system-agnostic middle layer that organizes the core elements of project governance. That includes scope, cost, schedule, work authorization, baseline changes, risk, and performance. The result is a stable and coherent data model that does not depend on any one system or maturity level.
Once data flows through PGF, AI no longer needs to guess. The context becomes clear. The relationships are explicit. The meaning of each field is defined.
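To make that concrete, here is a minimal sketch of what one slice of such a governance model might look like in code. The class and field names are illustrative assumptions, not PGF's actual schema; the point is that every field carries a defined meaning and every relationship is explicit rather than implied.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ControlAccount:
    """One governed unit of work. Scope, budget, and schedule live
    together, so nothing about the record has to be inferred."""
    account_id: str               # unique key, traceable to the WBS
    scope_statement: str          # the authorized scope, stated explicitly
    budget_at_completion: float   # BAC for this account
    baseline_start: date
    baseline_finish: date

@dataclass
class StatusPeriod:
    """One reporting period of performance data, explicitly tied
    to its control account."""
    account_id: str   # the relationship back to ControlAccount, made explicit
    period_end: date
    bcws: float       # budgeted cost of work scheduled (planned value)
    bcwp: float       # budgeted cost of work performed (earned value)
    acwp: float       # actual cost of work performed
```

An AI reading a record like this never has to guess what a number represents or which work it belongs to. Those questions are answered before they are asked.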
And when AI stops guessing, hallucination all but disappears.
This is what makes applications like VERA, our Variance Evaluation and Reporting Assistant, so effective. VERA does not hallucinate because it works inside a world where the truth is already structured. It can focus on reasoning rather than inventing.
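Part of the reason is that variance evaluation rests on arithmetic every EVMS practitioner knows. As a hedged illustration, the sketch below applies the standard formulas to the status record sketched above and flags anything that breaches a reporting threshold. The threshold values and function name are placeholders for the pattern, not VERA's actual logic.

```python
def evaluate_variances(sp: StatusPeriod, cv_pct: float = 0.10,
                       sv_pct: float = 0.10) -> list[str]:
    """Standard EVMS arithmetic: CV = BCWP - ACWP, SV = BCWP - BCWS.
    Flags any variance whose magnitude breaches its percentage threshold."""
    findings = []
    cv = sp.bcwp - sp.acwp   # cost variance
    sv = sp.bcwp - sp.bcws   # schedule variance
    if sp.bcwp and abs(cv) / sp.bcwp > cv_pct:
        findings.append(f"{sp.account_id}: cost variance {cv:+,.0f} exceeds threshold")
    if sp.bcws and abs(sv) / sp.bcws > sv_pct:
        findings.append(f"{sp.account_id}: schedule variance {sv:+,.0f} exceeds threshold")
    return findings
```

Nothing in that chain is invented by the model. The inputs are governed facts, the formulas are industry standard, and the thresholds are explicit, which leaves the AI free to explain a variance rather than fabricate one.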
But as we continued building VERA, something else became clear. AI does not only need structured data. It also needs expert boundaries.
For all of AI’s power, and for all of the excitement surrounding it, the technology still struggles when the universe of possibilities is too large or too poorly defined. The system does not know which information matters most unless someone teaches it how to narrow the problem.
IBM saw this with Watson. When Watson played Jeopardy!, it was given the rules of the game and a massive amount of processing power. Within that clearly defined world, Watson thrived. It learned, adapted, and eventually defeated the game’s greatest champions. When Watson was asked to help cure cancer, everything changed. Suddenly there were no boundaries, no universally agreed rules, and no single way to determine which data points were meaningful. Watson had more information than any human oncologist could ever read, yet it could not determine what to do with it.
Modern AI models are far more capable, but the underlying truth remains the same. AI excels when it has a defined world and struggles when it does not. In project governance, that world is full of rules, definitions, thresholds, and exceptions that only experts understand.
- PGF provides the factual universe.
- TMS expertise provides the boundaries inside that universe.
This combination creates something powerful. PGF ensures the data is correct. Our domain knowledge ensures the interpretation is correct. Together they give AI the clarity it needs to deliver accurate and meaningful analysis.
This also opens the door to something even more exciting. Once the foundation is stable and the expert boundaries are in place, organizations can integrate and tune their own expert systems. They can embed their specific rules, their approach to risk, their methods for forecasting, and their internal decision logic. The system becomes smarter and more aligned with real experience, and it can evolve as programs evolve.
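As a hedged sketch of that idea, expert boundaries can be written down as plain configuration that the analysis layer is required to respect. Every key and value below is a hypothetical illustration of the pattern, not an actual PGF or TMS interface.

```python
# Hypothetical configuration: expert judgment expressed as explicit data
# that the analysis layer must obey, rather than logic buried in a model.
ORG_RULES = {
    "variance_thresholds": {"cost_pct": 0.08, "schedule_pct": 0.12},
    # a standard EVMS forecast: EAC = ACWP + (BAC - BCWP) / CPI
    "forecast_method": "cpi_based_eac",
    "risk_escalation": {"high": "program_manager", "moderate": "cam"},
    "baseline_changes": "approved_change_request_only",  # never inferred
}
```

Because the rules live as data rather than as buried assumptions, they can be reviewed, versioned, and tuned as the program matures.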
This is how AI becomes a trusted partner instead of an unpredictable assistant. It does not replace experts. It amplifies them. It takes the best of human judgment and merges it with the processing power of technology. It allows organizations to move faster without sacrificing accuracy.
When the factual world of PGF meets the expert boundaries defined by TMS and our customers, something entirely new becomes possible. The system becomes reliable. The analysis becomes consistent. The intelligence becomes actionable.
We are not just adding AI to project management. We are creating the environment that makes AI capable of delivering the value the industry has been promised for years.
The revolution is already underway. Now it is time to build the infrastructure and the boundaries that make it reliable.