Full Transcript

George Kamide:

Welcome back to the Zero Hour brought to you by SafeGuard Cyber. I'm George Kamide.

Ashley Stone:

I'm Ashley Stone. And today, George, I think we should talk about deepfake technology.

George Kamide:

Yeah, a lot of hullabaloo last year and the year before, and then it disappeared in the COVID melee, but it's still a real danger.

Ashley Stone:

And it's more than just manipulation of videos that we see on social media in a political context.

George Kamide:

Yeah. We have talked about the possibility of disinformation campaigns against private sector entities. And we have started to see some of those. Some of them are bot-level attacks on social, but there exists the very real possibility of also using deepfakes, or synthetic media as it is sometimes called.

Ashley Stone:

As we start to see technology and these solutions become easier to access, it seems like bad actors are going to start making their way towards these things that can give them financial gain pretty easily.

George Kamide:

Yeah. So once again, we have waded in well outside of our expertise. So, we invited Jon Bateman to come talk to us. He is a fellow in the Cyber Policy Initiative at the Carnegie Endowment for International Peace, and he had a paper come out in July which caught our attention, about deepfakes and synthetic media in the financial system. It's really about thinking through several threat scenarios so that we can start to prepare. So, without further ado, let's turn it over to Jon Bateman.

Jon Bateman, welcome to the Zero Hour. We are thrilled to have you here, excited for this discussion.

Jon Bateman:

Thank you, very happy to be here.

Ashley Stone:

So before we start, let's talk a little bit about your background. Can you tell us about your journey from Defense Department intelligence analyst to Carnegie fellow?

Jon Bateman:

Yeah, no problem. I spent six years at the Defense Intelligence Agency, which is part of the US intelligence community and the Defense Department. My job was basically to try to make sense of cyber threats emanating from foreign countries, specifically Iran. And that was a fascinating journey into understanding some of the most complex and sophisticated threats in cyberspace and how they intersect with geopolitics.

And I spent some time at the Pentagon as well, trying to develop the US military response to these threats, including the US military's own cyber capabilities. After doing that, I'm now at the Carnegie Endowment for International Peace. And for those who aren't familiar, we're an independent nonprofit think tank based in Washington, DC with a global footprint.

So we have offices in Moscow, Beijing, New Delhi, Brussels, Beirut. And at Carnegie, I can take more of a global perspective on technology threats and think about a wide variety of technologies and basically ask the question, "Which of these might somehow threaten global stability or influence international affairs in some harmful way?"

And how can we put safeguards in place, whether that's the US government, foreign governments, diplomacy, the tech sector, and so on.

George Kamide:

That's interesting. What years were you at DIA?

Jon Bateman:

I joined at the very beginning of 2013, which was when Iran was unleashing a barrage of distributed denial-of-service attacks on US financial institutions.

So that was a fascinating time, right? I stayed there all throughout the negotiation of the nuclear deal and some fascinating events in cyberspace, and left about a year and a half ago. At that point I was working for the Chairman of the Joint Chiefs, General Dunford.

George Kamide:

Yeah, so you would have seen the rise and the shift from fully state-attributable activity to these proxy groups that have tacit permission to operate in the interest of the government.

So that's very interesting. And you would have seen that ramp-up in cyberspace and then the curb as a result of the negotiations, because it's clearly asymmetric conflict, applying pressure when there is an unequal playing field. But that's really cool.

Jon Bateman:

Yeah. I think that's a fair assessment and of course part of my job and the job of my colleagues was to identify when shifts in activity levels were occurring and try to understand:

Is this because of the nuclear deal? Is this because of some political shift inside Iran? Is this because there are things that we're not seeing?

So it's always a really intellectually enriching job to have. And I don't envy the people who have that job now, in the wake of the killing of Qasem Soleimani, the US election, and many other things that are throwing US-Iran relations into flux. Of course, I'm sure your listeners know that has an immediate impact on cyberspace.

George Kamide:

Yes, indeed. And it is the private sector that can face the brunt of that, since that's where you can do the most harm to your enemy's economy. We invited you here because we're very interested in your working paper, which was released in July, on deepfakes and synthetic media.

And how different threat scenarios might play out against the financial sector. But I want to take a pause before we get into the paper; we're excited to dig in, and we'll try to keep the nerdom to a minimum. I was curious if you could give us your sense of the cybersecurity landscape at the moment. I think for our listeners and for the population at large, election security and influence operations, commonly called disinformation, might be top of mind.

But we're also living through just this hellacious ransomware wave, such that the influential blogger Daniel Miessler has called it the cyber Pearl Harbor. He's made the argument that while everyone was waiting for the big one, the event that would redefine cyberspace, we may have missed the mark, and that it might instead be this wave of smaller, disruptive attacks on a continuous basis.

So that being said, I'm curious: what's your assessment of the landscape today?

Jon Bateman:

Such an important question. I tend to think that there have been almost three waves of intellectual history here in how people think about this. First there was the fear of the cyber Pearl Harbor, the big one. I think it was actually Leon Panetta, who was Defense Secretary at the time of Iran's attack on Saudi Aramco, who started talking about some kind of catastrophic cyber attack akin to an act of war. And then over time, this type of threat never really seemed to emerge. Instead, there was this sense that cyber threats are just an endemic fact of life, but that it could be death by a thousand cuts if you're losing all of your intellectual property and the like. So that was what I viewed as the second phase.

Now I would argue we're in a third phase where the curve is swinging back up again. We still haven't seen the big one, but there's more and more risk accumulating in the system, whether it's distributed risk like ransomware, or systemic risk, like events such as WannaCry or NotPetya, where more of a monoculture could create accumulated catastrophes across the world in a single incident.

The other trend that's contributing to this is digitization itself, the assets at risk, as more and more of human life is transferred online. Effectively, cyber risk has more of an impact on individual human lives. We just saw the first documented death caused by a cyber attack.

And on the life of nations, thinking about some of these catastrophes that could be state-sponsored or accidental. So that's where we are today. My own personal point of view is that I worry the most about informational and influence-type threats in cyberspace, and other things that might be categorized as intelligence, whether it's a hack-and-leak or the theft of intellectual property.

I think those things have probably a more insidious impact on a country like the United States, where it's really our political system and our economic competitiveness that are the most vulnerable right now. I hate to say it, but if small numbers of people are injured or die due to some kind of cyber attack or catastrophe, that is probably of lesser consequence over the longer term than things that sap our democracy of self-confidence, for example.

George Kamide:

Absolutely, good point. Yes, the hacking of the mind, the hacking of the perception of civil institutions, is more dangerous than the hacking of any one system.

Jon Bateman:

I fully agree with that.

Ashley Stone:

Yeah. I want to shift to talk about the paper you put out on deepfake threat scenarios for the financial sector, because we're starting to think about how this could take a turn. It's a little scary to think about, and your paper really puts these scenarios in perspective. I know we want to understand your goals for this paper, but first I'd like to start with an operating definition.

Can you explain what synthetic media is? Especially outside the context of political disinformation.

Jon Bateman:

Sure. Yeah. I think a simple definition of synthetic media is:

it's the use of artificial intelligence and machine learning to fabricate media content or to alter media content. So we're all familiar with traditional forms of media manipulation that have existed, frankly, for decades or even centuries in some cases. You can splice videos together. You can airbrush someone out of a photo. You can forge a paper document.

So we've already seen digital versions of that, and the use of sophisticated software in Hollywood can work wonders. What synthetic media is, is the application of AI algorithms to this, creating new types of digital deception that either weren't possible before or required resources that most people didn't have.

So synthetic media could be video, and this would be your classic deepfake face swap: for example, taking Scarlett Johansson's face and basically fusing it onto the body of a pornographic actress in a video. Then you can have audio synthetic media. This is called voice cloning, where I can record enough of your voice to then be able to duplicate it, mimic it very convincingly, having you say things you've never said before. There are also synthetic still images. So there's a site called thispersondoesnotexist.com where you can go and basically look at a photograph, or what appears to be a photograph, of a person who doesn't exist. Finally, what's getting a little bit more attention now, and something I worry a lot about, is synthetic text, meaning text that is generated by an algorithm to make it appear as though a human being wrote it.

And what a lot of the algorithms do now is you can basically feed it a prompt, like:

"It was a dark and stormy night." And then that's how you want the text to start. And the algorithm picks it up from there.

George Kamide:

Yeah, you bring up a good point, which maybe we'll come back to. But as of this recording today, there was an article in the Washington Post about groups that have been pushing out texts, not fully deepfaked, but altered material, through SMS messages.

And your phone is much more trusted than social. I don't think we're completely inured, but people have their guard up for that sort of thing on social media. And so it's interesting that new applications of synthetic media could be applied to old vectors of communication and have a similar impact.

Jon Bateman:

You know, that's really something I worry about a lot, because when people think about the deepfake problem, they often hold up the social media platforms as the ones responsible for solving it. So there are limitations there, but I think a big one is that social media might not be the channel of communication.

If you get something through your text message, not only are you more likely to trust it, but that's an unmonitored space, right? The phone company or iMessage is really not applying content controls the way Facebook might. And then the other thing is that no one else is really watching it except you.

So no one is going to write a news article about it, unless it was sent to thousands of people and somehow it gains the notice to be fact-checked.

George Kamide:

Yeah, and for voice cloning, I guess robocalls have also been used to target voters, even at the district level. So I think what's interesting is that your paper came out in July, and no sooner than that did the giant Twitter hack happen. That was not deepfake territory, but what came out of it was that at the beginning of this month, the New York Department of Financial Services, effectively one of the largest watchdogs in the world, came out with recommendations that social media platforms need to be regulated because they are now considered systemically important, which I think is a big distinction. The conversation around influence operations and elections is a tricky space because it's like voluntary participation in that conversation.

Whereas the NYDFS was making the argument that if Twitter is a source of news, it can move the market. And that was very much an instance of a broadcast: taking over a lot of accounts and blasting out as much as possible. You make a distinction in your paper with narrowcast, which I really like as a term. Can you explain and expand on that idea of the narrowcast?

Jon Bateman:

Yeah, absolutely. So I think you captured what broadcast means in this context. You're really trying to influence millions of people at a time, whether that means going viral on social media or somehow being amplified through traditional media. In contrast to that, a narrowcast threat would be a targeted deception effort that is actually tailored for, and then ultimately delivered to, a small group of people or even one specific individual.

So there was a case about a year and a half ago, and this is one of the first documented cases of deepfakes in crime, where there was a form of payment fraud against a British company. The person called up the CEO of this company and used a cloned voice to basically simulate the voice of the parent company's CEO, down to the accent and the intonations.

And they said, "Hey, we've got an emergency. We need a rapid wire transfer." And if your boss is on the line and it pretty much sounds like them, and it sounds urgent and you're being pushed forward through clever tools of emotional manipulation. Yeah, that would be the perfect example of a narrow cast threat, where a lot of research and effort probably went into creating that cloned voice.

Understanding the relationship between those two individuals, the person whose voice you're cloning and the victim, and then figuring out what to say on the phone that would be compelling and convincing. No social media involved. No traditional media involved. It's point to point. And again, for a variety of reasons, that is one of the more difficult threats to thwart.

Now I do want to say, though, in the context of "systemically important," that I think a major finding from my research is that I don't envision deepfakes really threatening the stability of the global financial system or causing a market crash. My argument there is that the global financial system is pretty resilient, and there really have been relatively few cases of any kind of digital disinformation causing a dip in the market.

 

And when that's occurred, it's been very short lived and very small. So we can imagine that some kind of deepfake event could even be 10 or 20 times more effective than the most effective previous digital disinformation driven market crash, and still life would go on. So that's not really where I see the threat. I see the threat more in terms of crime against specific individuals.

 

George Kamide:

That's a good point. You bring up in your paper the 2013 takeover attack on the AP News Twitter account, when it was claimed that two bombs had gone off at the White House. If you look at the trading chart, there's just this crater, and then it immediately recovers. It was like this V-shape.

 

To your point, I guess I would be concerned about a threat where, if the internet and the technology becomes like a monoculture, a convincing deepfake, maybe attacking a smaller commodities market, could basically inadvertently trigger a whole bunch of trading algorithms that are already automated, and you could have those sorts of unforeseen effects. And then of course I think a lot of those algorithms would self-correct, or there would be a manual override in developed markets, but there could be more catastrophic consequences in less mature markets that are more prone to trading on emotion and things like that.

 

Jon Bateman:

I think you hit on two really important themes there. One is the greater vulnerability of not only emerging markets, but even mature markets during a financial crisis. Anytime there's less institutional trust, or just given the moment that you're living through, deepfakes could have a greater impact.

 

I think the other point that you made is that the victim of a stock market manipulation could just be an individual company and its investors. And there are a variety of mechanisms that I consider as to how that happens. But bottom line, if your stock is hit by a deepfake, even if you can correct the record and restore your price after a short period of time, that could be enough time for the criminals to have already profited.

 

George Kamide:

Oh, for sure. Yeah. It's just shorting and trading on options and derivatives.

 

Jon Bateman:

Exactly.

 

Ashley Stone:

While we're on the topic of deepfakes, I have to imagine that you saw a lot of different examples in the course of your research, whether really hard to detect or really bad ones. Can you share any of what you saw?

 

Jon Bateman:

Sure. And I will say the quality of deepfakes varies drastically, from things that are obviously crude, almost bizarre, to others that are so uncanny it would be extremely difficult for a computer algorithm or a human to differentiate them from reality, especially if placed in a context that made these judgments more difficult to make.

 

I'll give you a couple of examples. First of all, the actual use of deepfakes in crime is still quite rare, but it is happening. A company called Nisos captured one in the wild: basically a cloned voice that was left on someone's voicemail. So they have it, and they posted it online.

 

If you listen to it, it sounds robotic. It does sound like an individual, like a person, but just tinny, a little hollow, a little stiff. And from what I've seen, that's probably because of a lack of training data. To make a really high-quality deepfake, you need an extensive amount of data on the person's voice or face.

 

And in all likelihood, this was one that was made with less data, so it was more crude. Some of the most convincing deepfakes have actually been made for entertainment or educational purposes. If you have seen any deepfake, it was probably the deepfake of Barack Obama as voiced by Jordan Peele. So there's this whole genre of deepfake public service announcements about deepfakes.

 

One that I just saw today is one of the best that I have seen in terms of quality. The creators of South Park have just launched a satire show called Sassy Justice. And the main character of that show basically has a face swap with that of Donald Trump throughout the entire show. It's incredibly uncanny, and of course, if you know what you're watching, you can see little perturbations and oddities, but I have to say it impressed me.

 

George Kamide:

Well, and we're going to come back to this. If you're talking about Trump as your target, there is more than enough footage to feed into machine learning or a neural network algorithm.

 

I mean, just talk about refinement in terms of the volume of material. Yeah, that's really interesting. I remember somebody sharing a YouTube video with me where somebody had used deepfake technology to do a side-by-side of "Video Killed the Radio Star," but using historical footage from Hitler's and Stalin's old speeches. On one level, that's funny, but also, can we stop normalizing Nazis and genocidal tyrants? Yes, this is funny on a visceral level, but the normalization is frightening. The manipulation is frightening, especially given what we understand about the radicalization of individuals. You can see that being harnessed to further radicalize other people.

 

Jon Bateman:

So yeah, if I could just pick up on that, because I'm seeing a couple of trends right now that concern me. One is the increasing use of deepfakes for entertainment, art, activism, advocacy. That's all fine, that's all protected speech, but it could potentially normalize the use of deepfakes. In particular, when something can be described as, let's say, political satire, it's that much harder for a social media platform or a government to come in and, with credibility, say, "We've got to stop this, we've got to clamp down and regulate this," because you can argue that it's there for a legitimate purpose.

 

Now, one thing that is a positive story in the financial sphere is that the types of crime and fraud and stock manipulation scenarios we're talking about are things I think societies worldwide can unite against. No one is going to argue that voice cloning a CEO and stealing $250,000 is some kind of political satire.

 

This is all criminal behavior; it's unlawful today. So hopefully it's something we can do something about.

 

George Kamide:

Yeah, let's quickly unite on the low-hanging fruit, the stuff that we can all agree on. On this podcast we've talked a lot about how bad actors will always find the easiest path. If you're trying to break into a house, you're not going to pick the lock if the back door is left unlocked. So for example, these days data extortion is the attack du jour, because you can get into the systems quite easily, take the data, encrypt it, and then threaten to release it; it's this permutation of ransomware. As you see deepfake technology becoming more readily available, do you foresee that it could be deployed more easily? If it's there, is it just as easy as mass blasting a phishing email?

 

Jon Bateman:

Well, you're absolutely right that cost-benefit analysis is key for disinformation, no less than for other types of online cyber threats. I think that's why deepfakes and synthetic media still are not being widely used today for any type of harm. There are other methods that are simply more cost-effective from the bad guy's perspective.

 

A couple of trends that I think we need to watch for here. First of all, although it's easy to dismiss deepfakes as too complicated to really be useful, there are criminals and state actors whose business model is high dollar, high impact, complex schemes.

 

George Kamide:

Big game hunting, as we say.

 

Jon Bateman:

That's a perfect description of it. And for them, it's a strategy. It's ROI. If they invest more upfront, maybe they can draw more money in later, again, whether it's a sophisticated criminal organization or a state actor.

 

So I'll give you an example. We've already been seeing the use of synthetic photographs by intelligence services, and these shadowy online influence actors, in order to basically create fake social media accounts. So that's just a small piece of this that's already being used in the wild by some more sophisticated actors.

 

The other trend that I would watch is the continued proliferation and democratization of this technology. So we're talking about software becoming more user-friendly and processing power becoming cheaper and more accessible. We're also talking about better algorithms being developed that are more convincing and require less training data to work.

 

And then of course, another uncomfortable reality is that the training data that fuels deepfakes, biometric data about our faces, our voices, and the like, more of this is being captured all the time for many different purposes, and exposed. I was just reading some stories today about major concerns in China about facial recognition databases being stored insecurely and misconfigured, such that all the images can be easily downloaded.

 

So these are some trends to be concerned about as far as the future growth of this threat.

 

George Kamide:

I'm also intrigued by the criminal ecosystem and economics. We've seen ransomware as a service: basically you can sell the tools to do the ransomware, but they're also now shoring up the quote-unquote customer service that's needed to negotiate the ransom. It's becoming professionalized. So when it comes to deepfake technology, could you foresee people offering cloud computing services? Like, you need more computing power, I'll lend you my servers so you can run your algorithms.

 

Jon Bateman:

Absolutely. So not only that, the Washington Post just published an article the other day about a service that was occurring on encrypted chat, where people could basically create fake nudes using deepfake technology.

 

Basically the way this works is you send in a photograph of a woman, typically fully clothed. This is the kind of "deep nude" technology, where the AI can imagine what that woman would look like naked. Now, this had already existed for people who had the technical savvy and the willingness to download and maneuver inside the software.

 

Now it exists as a service. You can simply send the photograph over encrypted chat and then you will get back the result. And this comes one step closer to the kind of criminal ecosystem that you're describing. In my paper I talked about a cyber extortion scenario where something like this could be virtually automated to blackmail people: pay us in Bitcoin, or else we'll release all of this nude material, which you might believe or you might not believe, but you maybe worry that other people might believe.

 

George Kamide:

Yeah, I've got no words. So I'll turn it over to Ashley.

 

Ashley Stone:

I'm just thinking back to your paper; you said there was a stat that 90% of this has been used for pornographic purposes. Of course it has, but I can see where it's going to go. You know, there's an interesting paradox that we've been talking about.

 

You point out in your paper that small-cap companies might be more affected by synthetic media attacks, given a lack of resources. Large caps have a different problem in that highly visible corporate leaders generate large volumes of data: media interviews, earnings calls, and other publicly available recordings, which can be used to better train these deepfake algorithms. So can you expand on those two sides of this threat?

 

Jon Bateman:

Yeah, sure. What we're talking about here is really the difference between a technical vulnerability and an overall or financial vulnerability. So on the technical side, if you're a highly visible CEO of a Fortune 500 company, you generate a lot of data just by doing your job: interviews, earnings calls, everything that you just mentioned. That can then be captured to create a very convincing deepfake, due to the amount of training data that's available.

 

However, that type of large company has a lot of credibility to fall back on if there's some kind of reputational attack, and a lot of resources to manage its reputation publicly:

provide counter-evidence, talk to journalists, and so on. So it could more easily withstand a deepfake attack on its reputation.

 

And also, the stock price of a major company is just much harder to move. It requires a lot more volume, and you have to get more traders to play along. Now contrast that with a small-cap company: the leader of a really small, obscure company may be generating very little data. But if you can get enough to do some type of deepfake, or to have a deepfake that doesn't rely on impersonating the CEO but is some other type of event, a celebrity endorsing a competitor, for example, you only need to fool a small number of stockholders and market players, and there's less credible information circulating about that company to begin with. And that type of company may also just lack the resources or the trust to respond effectively.

 

So these are classic reasons why a lot of market manipulation goes after small-cap companies, and a deepfake could be very effective there, if you could style it such that you get enough data to do what you want to do.

 

George Kamide:

Yeah, there are two sorts of interesting distortions there. Right now we're in the midst of a moment where there are a lot of IPOs, a lot of companies that have high valuations but less reputation to fall back on. They're not the JP Morgans of the world, so that's certainly doable. The second is that as the technology becomes more readily accessible, your cycle time will go down too. You could just spin up an attack that jumps on the bandwagon of a very bad brand event.

 

And then you could just ride the momentum of that, because public perception has already been distorted and you just slide into that pocket. And I think, to your point in the paper, if you narrowcast against the customers, you could get them to give up passwords and credentials, which is of course going to hurt the company in the long run, but that's where the financial incentive might be.

 

Jon Bateman:

Yeah, you hit the nail on the head. I think from an individual company's perspective, its reputation and stock price, my number one concern about deepfakes would be aggravating some kind of preexisting crisis of trust.

 

So like one scenario would be a deepfake of a corporate leader making some type of racist remark in a private conversation. Now, once that's out there, that leader has to argue that the private conversation never happened. They have to prove a negative, right?

 

Now that's very difficult to do. So in practice, you're falling back onto your reputation. But let's say this deepfake is released exactly when the company is under fire, maybe for racial issues within that company; it would be much more difficult to combat that type of event.

 

George Kamide:

Yeah. On the geopolitical front, you could possibly go after big defense contractors to try to undermine contracts, or, as we've seen with pressure campaigns, test the networks of oil companies when they're bidding for contracts. There's a lot of geopolitical tension you could fold into this if you're a multinational trying to negotiate these big deals or move between countries that are currently experiencing a lot of friction.

 

Jon Bateman:

I think that's right, and that's where the political deepfakes and the financial deepfakes start to merge. Maybe a very effective financial deepfake would have a political theme or narrative to it, or vice versa; maybe a political deepfake could arise and then we would have to speculate: was this actually done for financial reasons, or to swing an election?

 

George Kamide:

As a former intelligence analyst, do you see this type of brand threat intel falling under the remit of your typical CISO? Is this something they need to take on, to understand the brand landscape?

 

The analyst Brian Kime at Forrester seems to think so. He was an Army intelligence analyst, and he seems to think that threat intel plays a role, at least for the CISO's understanding. It's odd, because when you say brand, most people's minds immediately go to marketing or PR or comms. So I'm just curious on your take there.

 

Jon Bateman:

This is probably one of those examples of why cyber issues and digital threats should not be confined to a specialized cadre of information people somewhere in your company. Ultimately they're all risks to the pillars of your company, whether that's cash flow, goodwill, access to capital, right? And so if you can start to think about cyber and informational threats as business risks, then it becomes clearer that the CEO and the board need to take ownership of these issues.

 

Now that doesn't mean they'll become an expert in deepfakes or ransomware, but that does mean they probably need to govern and supervise these issues differently or more intensively and create reporting lines and structures that are more agile, that involve more frequent types of communication and critical conversations. I don't think you want to be in a position of being a CEO and just feeling like "I've got an expert in charge of this. I don't even really understand what that person is saying. I hope they're doing their job well. "

 

George Kamide:

Yeah. I don't know that excuse is going to fly anymore.

 

Jon Bateman:

That's right. Yeah. Not in 2020.

 

George Kamide:

Great. Well, here's another question for you:

Is this something that needs to be taken up at a consortium level? So for example, there's No More Ransom (nomoreransom.org), which is a lot of security companies feeding in decryption keys and threat intel, right? It's a common-good resource.

 

Do you see industries needing to come together or establish standards, some kind of ecosystem-level understanding of this? Because it feels like anyone trying to combat it in isolation is going up against a tidal wave.

 

Jon Bateman:

No, I think that's spot on. And if you just think about the conversation we've had so far, who have we talked about?

 

We've talked about social media platforms, journalists, SMS providers, phone companies, businesses, financial institutions, intelligence agencies, and then you can add even more stakeholders to the mix. What about people in Silicon Valley who are investing in or researching new artificial intelligence techniques? Are they thinking about how to protect them?

 

What about the general public? Is someone educating them about these threats? And what about a VoIP provider like Skype or Zoom, are they considering how to detect deepfakes during a video call?

 

So you really need to envision a multi-stakeholder response, which would be quite complex. Now, I'm not here to say that this is the most urgent problem all of these institutions are facing. And that's where the messaging gets tricky, because we need to be realistic that this is an emerging threat that is not yet at the volume of, let's say, the fraud that's occurring day in and day out. But we also need to be aware of the complexity and how rapidly it's developing, and staying ahead of this threat will mean some prudent investments that mature over time, relationship building, convening consortiums, and things like that. That's the exact type of policy intervention that takes time to bear fruit, and those conversations should be happening now.

 

George Kamide:

Yeah, you don't want to do it the day after the catastrophic event. Cool. Well, I have one last bit; I don't know if it's a question. I did peruse your most recent paper on cyber insurance. I think my takeaway there is that the risk frameworks, and the litigation that's currently going on over the NotPetya payouts, reflect this evolving understanding of these gigantic cyber events as statecraft and therefore part of these war exclusions.

 

But I think what we touched on at the beginning is that the threat environment has split into these confederations, right? There are professional criminal syndicates for whom this is just one revenue stream inside their larger illicit economy. And I'm just interested in your take on that.

 

Could you touch on the risk framework there, and how cyber insurance needs to reimagine this risk? Because a lot of those movies from the nineties don't hold up, The Net, all of that, right? It's always either the basement hacker or the GRU, and those are not the only two options available.

 

Especially schools. We just talked with Frosty Walker last week about ransomware attacks on school systems. And you know, they probably need to start buying more insurance against this because they simply don't have the resources to fight it.

 

Jon Bateman:

Yes, absolutely. Just briefly on the background issue here, the NotPetya attack was so big that I think the insurance industry and its customers learned some uncomfortable lessons from it. They learned that cyber risk is simply bigger than we thought it was before, and it's more likely to be aggregated or accumulated across many victims at the exact same time.

 

And we learned that the existing insurance products are more ambiguous and more limited in scope than what's needed. All of that adds up to needing a new framework. And as you mentioned, I really dive into this war exclusion clause, the notion that a cyber attack might be considered hostile, war-like action, which has been excluded from insurance coverage for decades or even centuries in some cases, but no one had ever really applied this to the cyber world until now.

 

So you hit on a key complexity with making this type of application, which is: what is a hostile or war-like action in cyberspace? How do you know that someone is a government agent? In the paper I talk about the huge variety and range of command-and-control relationships that exist between governments and the person on the keyboard.

 

Like you said, it could be a GRU officer in uniform, or it could be a criminal who is given very vague taskings, or even asked to be a bit of an entrepreneur, and may be tolerated by the state or protected or rewarded in some indirect way. So that's a huge spectrum to try to litigate. If I were a company, I wouldn't want to bet a billion dollars of claims on the outcome of a legal case that gets into these arcane matters. And so the paper explores whether another way is possible.

 

George Kamide:

Awesome. Yeah, I think we are all rapidly learning that the policies, the legal systems, and even our brains are not keeping up. This is a lot of old tech, if you want to talk about it that way: old policy, and evolutionary hardwiring that's hundreds of thousands of years old, facing threats and pressure points that we are ill-equipped to deal with.

 

But I want to thank you, and I want to thank the Carnegie Endowment for funding this type of research, because if it is not imagined today, then we can't prepare for it tomorrow.

 

Jon Bateman:

That's exactly why we do this type of work at Carnegie. Whether it's looking at influence operations or our broader project on financial sector cybersecurity, the goal is thinking ahead and trying to get market actors to behave in ways that will benefit everyone in society so that some of these risks can actually be ameliorated.

 

George Kamide:

Great. Thank you, Jon, for the time. Thank you for joining us.

 

Jon Bateman:

Thanks so much for having me. It was a great conversation.

 

Ashley Stone:

Thanks for joining.