Artificial intelligence has ignited an arms race for trust, and security leaders need to reassess their defense strategies.

Bad actors are trying to build trust around their deepfakes and re-engineered content. At the same time, we, as defenders, want to build trust in our data and in the relationships we have.

Embedding trust is a conundrum. As security leaders, we encourage skepticism and distrust of messages that land in an employee’s inbox or phone. At the same time, we want employees to trust what’s coming from us internally, and trust in the systems we use.

So there is a tension between enlisting employees as a line of defense and asking them to trust the processes and systems we put in place, and the data we make available to them to do their jobs.

A recent study from Salesforce found that confidence in company data is falling. Just 40% of business leaders rate their company data as reliable, 36% have faith in its accuracy, and 34% believe it's complete. The first two figures are down significantly from a 2023 survey, in which 54% found the data reliable and 49% said it was accurate, while the completeness figure held steady at 34%.

With today's technology, it has become fairly easy to take somebody's voice from public content and use it to generate new material, or to get past a system that relies on voice authentication. Likewise, there are AI-generated images that are extremely difficult to distinguish from the real thing. The barrier to entry for creating this content has become quite low, as generative capabilities grow more accessible, cheaper and faster.

Bad actors are also leveraging AI to conduct very targeted and increasingly sophisticated phishing campaigns. You no longer see the telltale signs of phishing and spam, like misspellings. Traditional security awareness training isn't working anymore because these highly targeted, well-crafted emails can be created and distributed quickly, cheaply and at scale using AI platforms.

Bad actors are also starting to develop agentic AI capabilities, deploying bots that run semi-autonomously or autonomously, drawing on data sets and large language models to generate their own content. Down the line, bots will be trained to create these attacks without the need for human intervention.

As security leaders, we cannot afford to be entrenched in what we’ve been doing until now. Sometimes we’re constrained because we’re heavily invested in older solutions that are hard to displace. But to the extent that some flexibility exists, we need to be looking at new solutions to combat a new set of threats. 

A raft of security products have flooded the market in an attempt to combat or defend against these AI-related developments, but their track record has been spotty.

The area where capabilities are most mature is email defense, where new players are building products on the same technologies the bad actors are using, to help detect the more sophisticated AI-generated spearphishing attacks.

Typically what I've seen are API-based solutions that plug directly into the email suite and quickly process messages as they land in the inbox. We don't want to slow down email delivery, or have people clicking on links before the email can be analyzed. So we need to consider a solution's speed and scalability, as well as its ability to identify user behavior and take action post-delivery.
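To make that post-delivery model concrete, here is a minimal sketch of how such a flow might look. Every name in it (MessageEvent, score_message, quarantine, the threshold) is hypothetical; real vendors expose their own webhooks and retraction APIs, and the scoring would be an API call to their detection service rather than the keyword placeholder used here.

```python
# Hedged sketch of API-based, post-delivery email remediation.
# All identifiers below are hypothetical stand-ins, not a vendor's actual API.
from dataclasses import dataclass

SUSPICION_THRESHOLD = 0.8  # tuned to balance false positives against missed phish


@dataclass
class MessageEvent:
    message_id: str
    sender: str
    subject: str
    body: str


def score_message(event: MessageEvent) -> float:
    """Placeholder for the vendor's ML/AI scoring call.

    In practice this would be a request to the detection service, returning a
    probability that the message is a targeted phish; the keyword check here
    only keeps the sketch runnable.
    """
    indicators = ["urgent wire transfer", "verify your credentials", "gift cards"]
    hits = sum(phrase in event.body.lower() for phrase in indicators)
    bonus = 0.5 if "payroll" in event.subject.lower() else 0.0
    return min(1.0, hits / len(indicators) + bonus)


def quarantine(message_id: str) -> None:
    """Placeholder for the mail platform's post-delivery retraction call."""
    print(f"Quarantining message {message_id} from user inboxes")


def on_message_delivered(event: MessageEvent) -> None:
    # Analysis runs after delivery so mail flow is never slowed down;
    # if the score crosses the threshold, the message is pulled back.
    if score_message(event) >= SUSPICION_THRESHOLD:
        quarantine(event.message_id)


if __name__ == "__main__":
    on_message_delivered(MessageEvent(
        message_id="abc-123",
        sender="ceo@lookalike-domain.example",
        subject="Payroll update",
        body="Urgent wire transfer needed today - verify your credentials here.",
    ))
```

The design point is the same one above: scoring happens asynchronously after the message lands, and remediation pulls the email back rather than delaying delivery.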

Another thing to look out for is what players mean when they tout their AI capabilities. Is it really AI, or is it machine learning? It's probably a combination of both. But you need to dig in to ascertain whether they are truly using AI and how that might affect your organization. Are they using your data to train their model? Is your data being exposed to other companies on their SaaS platform? What models are they using? What are they willing to share with you about how they leverage those models?

I haven't seen much progress around deepfakes and voice recognition. Some developers are tackling this from an identity standpoint, but the space remains immature.

As organizations plan to defend themselves against AI-assisted bad actors, relationship building within the enterprise is crucial. It can be as simple as having a roadshow around different offices to talk about the technology and security programs that are in place. Solicit feedback, and show how you’re using that feedback to build trust in the relationship between employees and the technology, security and systems they’re using.

Digital users cannot bury their heads in the sand around artificial intelligence and its implications. We don’t need to become AI experts, but we do have to understand its capabilities and understand the terminology. Don’t brush it off as something that’s not going to completely change the world we live in, because it will. 

Just recently, Shopify CEO Tobi Lütke told his staff that no new hires will be made unless teams can prove AI can't do the job.

If we, as security leaders, understand what AI is and start to work with it, that rising tide will lift everybody.