A National Security Insider Does the Math on the Dangers of AI

One type of risk you’ve been very interested in for a long time is “biorisk.” What’s the worst thing that could possibly happen? Take us through that.

I started out in public health before I worked in national security, working on infectious disease control—malaria and tuberculosis. In 2002, the first virus was synthesized from scratch on a Darpa project, and it was sort of an “oh crap” moment for the biosciences and the public health community, realizing biology is going to become an engineering discipline that could be potentially misused. I was working with veterans of the smallpox eradication campaign, and they thought, “Crap, we just spent decades eradicating a disease that now could be synthesized from scratch.”

One category is natural pandemics. Covid cost the US economy more than $10 trillion, and yet what we invest in preventing the next pandemic is maybe $2 billion to $3 billion of federal investment.

Another category is intentional biological attacks. Aum Shinrikyo was a doomsday cult in Japan that had a biological weapons program. They believed that they would be fulfilling prophecy by killing everybody on the planet. Fortunately, they were working with 1990s biology, which wasn’t that sophisticated. Unfortunately, they then turned to chemical weapons and launched the Tokyo sarin gas attacks.

Research done by [AI safety and research company] Anthropic has looked at risk assessments to see whether these tools could be misused by somebody who didn't have a strong bio background. Could they basically get graduate-level training from a digital tutor in the form of a large language model? Right now, probably not. But if you map the progress over the last couple of years, the barrier to entry for somebody who wants to carry out a biological attack is eroding.
