
When I was an aerospace engineer working on the NASA Space Shuttle Program, trust was mission-critical. Every bolt, every line of code, every system had to be validated and carefully tested, or the shuttle would never leave the launchpad. After their missions, astronauts would walk through the office and thank the thousands of engineers for getting them back home safely to their families. That's how deeply ingrained trust and safety were in our systems.
Despite the "move fast and break things" rhetoric, tech should be no different. New technologies have to build trust before they can accelerate growth.
By 2027, about 50% of enterprises are expected to deploy AI agents, and a McKinsey report forecasts that by 2030, as much as 30% of all work could be performed by AI agents. Many of the cybersecurity leaders I speak with want to bring in AI as fast as they can to enable the business, but they also recognize that they need to ensure these integrations are done safely and securely, with the right guardrails in place.
For AI to fulfill its promise, business leaders need to trust AI. That won't happen on its own. Security leaders must take a lesson from aerospace engineering and build trust into their processes from day one, or risk missing out on the business growth it accelerates.
The connection between trust and growth isn't theoretical. I've lived it.
Founding a business based on trust
After NASA's Space Shuttle program ended, I founded my first company: a platform for professionals and students to showcase and share evidence of their skills and competencies. It was a simple idea, but one that demanded that our customers trust us. We quickly discovered universities wouldn't partner with us until we proved we could handle sensitive student data securely. That meant providing assurance through a number of different avenues, including showing a clean SOC 2 attestation, answering lengthy security questionnaires, and completing numerous compliance certifications through painstakingly manual processes.
That experience shaped the founding of Drata, where my cofounders and I set out to build the trust layer between great companies. By helping GRC leaders and their companies gain and prove their security posture to customers, partners, and auditors, we remove friction and accelerate growth. Our rapid trajectory from $1 million to $100 million in annual recurring revenue in just a few years is proof that businesses are seeing the value, and slowly starting to shift from viewing GRC teams as a cost center to a business enabler. That translates to real, tangible outcomes: we've seen $18 billion in security-influenced revenue from security teams using our SafeBase Trust Center.
Now, with AI, the stakes are even higher.
Today's compliance frameworks and regulations, like SOC 2, ISO 27001, and GDPR, were designed for data privacy and security, not for AI systems that generate text, make decisions, or act autonomously.
Thanks to regulations like California's newly enacted AI safety standards, regulators are slowly starting to catch up. But waiting for new rules and regulations isn't enough, particularly as businesses rely on new AI technologies to stay ahead.
You wouldn’t launch an untested rocket
In many ways, this moment reminds me of the work I did at NASA. As an aerospace engineer, I never "tested in production." Every shuttle mission was a meticulously planned operation.
Deploying AI without understanding and acknowledging its risk is like launching an untested rocket: the damage can be immediate and end in catastrophic failure. Just as a failed space mission can erode the trust people have in NASA, a misstep in the use of AI, made without fully understanding the risk or applying guardrails, can erode the trust consumers place in that organization.
What we need now is a new trust operating system. To operationalize trust, leaders should create a program that is:
- Transparent. In aerospace engineering, exhaustive documentation isn't bureaucracy but a force for accountability. The same applies to AI and trust. There must be traceability, from policy to control to evidence to attestation.
- Continuous. Just as NASA continuously monitors its missions around the clock, businesses must invest in trust as a continuous, ongoing process rather than a point-in-time checkbox. Controls, for example, must be continuously monitored so that audit readiness becomes a state of being, not a last-minute sprint.
- Autonomous. Rocket engines today can manage their own operation through embedded computers, sensors, and control loops, without pilots or ground crew directly adjusting valves mid-flight. As AI becomes a more prevalent part of everyday business, the same must be true of our trust programs. If humans, agents, and automated workflows are going to transact, they have to be able to validate trust on their own, deterministically and without ambiguity.
When I think back to my aerospace days, what stands out isn't just the complexity of space missions, but their interdependence. Tens of thousands of components, built by different teams, have to function together perfectly. Each team trusts that the others are doing their work effectively, and decisions are documented to ensure transparency across the organization. In other words, trust was the layer that held the entire Space Shuttle program together.
The same is true for AI today, especially as we enter this budding era of agentic AI. We're shifting to a new way of doing business, with hundreds, and someday thousands, of agents, humans, and systems all continuously interacting with one another, producing tens of thousands of touch points. The tools are powerful and the opportunities vast, but only if we're able to earn and maintain trust in every interaction. Companies that create a culture of transparent, continuous, autonomous trust will lead the next wave of innovation.
The future of AI is already under construction. The question is simple: will you build it on trust?
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.

