The conversation about AI regulation has collapsed into two camps, and I think they're both wrong.
On one side, you have the government supremacists who want to nationalize AI labs or bring them under direct federal control. On the other, you have the absolutist free-market crowd arguing that companies should be left alone to build whatever they want, consequences be damned. The first camp wants to turn the most consequential technology of the century into a government program. The second wants to pretend that a technology capable of reshaping entire economies doesn't warrant any external guardrails at all (just some self-governance, which has really worked out well with social media, huh?).
Both positions are idiotic.
I say this as someone whose default posture is laissez-faire. I think free markets solve more problems than bureaucracies, that competition drives innovation better than mandates, and that most regulation ends up protecting incumbents while punishing everyone else. That's been true for decades, and I don't think AI changes the basic math.
But here's where I have to be honest with myself.
If an AI model can genuinely blow through all known data security protections, if it can crack the encryption that everything from banking to national defense relies on, then you can't just let companies run wild without any oversight whatsoever. To be fair, I think we're far from proving that's the case today. Most of the doomsday scenarios being thrown around feel more like science fiction than engineering reality. But let's take the premise seriously for a minute, because the people pushing for regulation certainly are.
If the capability is real, or even plausibly close, then "the market will figure it out" isn't a plan. It's an abdication.
So what's the least-bad approach?
I can't believe I'm saying this, especially on the heels of COVID, but I think the FDA might be the right model to look at.
The FDA, for all its faults (and there are many), manages to accomplish something most regulatory frameworks don't. It provides a level of public safety that most people are comfortable with while preserving the profit incentive that drives investment and innovation. Pharmaceutical companies don't build drugs out of altruism. They build them because the FDA's approval process, combined with IP law, creates a structure where you can invest billions in R&D and still make your money back. The incentive to innovate isn't killed by the regulation. It's channeled by it.
Now think about AI models through that lens.
What if deploying a public-facing AI model required a safety certification, the way a new drug requires FDA approval before it hits pharmacies? Not government control of the labs. Not nationalization. Just a structured process that says: before you release this to 300 million people, demonstrate that it meets some baseline standard of safety. Show your work. Let independent reviewers stress-test it.
The profit motive stays intact. The IP stays protected. Companies still compete, still innovate, still capture value. But the models that interact with the public go through a gate that exists for the same reason the FDA exists: because some products have consequences that individual consumers can't evaluate on their own.
I think this leads to an even more interesting question. Are AI models the next patent frontier?
If you accept the FDA analogy, IP protection becomes the linchpin. Patent law is what makes the pharmaceutical model work. You invest in something risky and expensive, and in return you get a period of exclusivity that lets you recoup the investment. Without that protection, nobody builds the next breakthrough drug because somebody else will just copy it. The same logic could apply to frontier AI models. If you want companies to invest the billions required to push the boundary, they need to know the resulting model is protectable. Where does that leave the whole open source movement? ...honestly, I have no idea.
I've heard of crazier regulatory frameworks. Most of them are being proposed right now by people who think either that the government should own the whole thing or that oversight is tyranny.
I think the answer is probably somewhere in the middle, which is the most boring sentence I've ever written and also the one I'm most confident in. A safety gate for public deployment. IP protection that rewards the investment. And a regulatory body that understands the technology well enough to evaluate it without strangling it.
Not perfect. Not elegant. But functional, which in regulation is about the best you can hope for.
Keep building,
-- JW