Technology · May 24, 2023

AI in cybersecurity: Yesterday’s promise, today’s reality

For years, we’ve debated the benefits of artificial intelligence (AI) for society, but it wasn’t until now that people could finally see its daily impact. But why now? What changed to make AI in 2023 substantially more impactful than before?

First, consumer exposure to emerging AI innovations has elevated the subject and increased acceptance. From writing songs and generating images in ways previously only imagined to drafting college-level papers, generative AI has made its way into our everyday lives. Second, we’ve also reached a tipping point in the maturity curve for AI innovations in the enterprise—and in the cybersecurity industry, this advancement can’t come fast enough.

Together, the consumerization of AI and the advancement of AI use cases for security are creating the level of trust and efficacy needed for AI to start making a real-world impact in security operations centers (SOCs). Digging further into this evolution, let’s take a closer look at how AI-driven technologies are making their way into the hands of cybersecurity analysts today.

Driving cybersecurity with speed and precision through AI

After years of trial and refinement with real-world users, coupled with ongoing advancement of the AI models themselves, AI-driven cybersecurity capabilities are no longer just buzzwords for early adopters or simple pattern- and rule-based tools. Data has exploded, as have signals and meaningful insights. The algorithms have matured and can better contextualize all the information they’re ingesting—from diverse use cases to unbiased, raw data. The promise we have been waiting for AI to deliver on all these years is finally materializing.

For cybersecurity teams, this translates into the ability to drive game-changing speed and accuracy in their defenses—and perhaps, finally, gain an edge in their face-off with cybercriminals. Cybersecurity is an industry that inherently depends on speed and precision to be effective, and both are intrinsic characteristics of AI. Security teams need to know exactly where to look and what to look for, and they depend on the ability to act swiftly. However, speed and precision are not guaranteed in cybersecurity, primarily due to two challenges plaguing the industry: a skills shortage and an explosion of data driven by infrastructure complexity.

The reality is that a finite number of people in cybersecurity today take on a seemingly infinite number of cyber threats. According to an IBM study, defenders are outnumbered—68% of responders to cybersecurity incidents say it’s common to respond to multiple incidents at the same time. There’s also more data flowing through the enterprise than ever before—and that enterprise is increasingly complex. Edge computing, the Internet of Things, and remote work are transforming modern business architectures, creating mazes with significant blind spots for security teams. And if these teams can’t “see,” they can’t be precise in their security actions.

Today’s matured AI capabilities can help address these obstacles. But to be effective, AI must elicit trust—making it paramount that we surround it with guardrails that ensure reliable security outcomes. Speed for its own sake quickly becomes uncontrolled speed, which leads to chaos. But when AI is trusted (that is, when the data used to train the models is free of bias and the models themselves are transparent, free of drift, and explainable), it can drive reliable speed. And when it’s coupled with automation, it can significantly improve our defense posture, automatically taking action across the entire incident detection, investigation, and response lifecycle without relying on human intervention.

Cybersecurity teams’ ‘right-hand man’

One of the most common and mature use cases for AI in cybersecurity today is threat detection, with AI bringing in additional context from large and disparate datasets or detecting anomalies in users’ behavioral patterns. Let’s look at an example:

Imagine that an employee mistakenly clicks on a phishing email, triggering a malicious download onto their system that allows a threat actor to move laterally across the victim environment and operate in stealth. That threat actor tries to circumvent all the security tools that the environment has in place while they look for monetizable weaknesses. For example, they might be searching for compromised passwords or open protocols to exploit and deploy ransomware, allowing them to seize critical systems as leverage against the business.

Now let’s put AI on top of this prevalent scenario: The AI will notice that the behavior of the user who clicked on that email is now out of the ordinary. For example, it will detect changes in the user’s processes and interactions with systems the user doesn’t typically touch. Looking at the various processes, signals, and interactions occurring, the AI will analyze and contextualize this behavior, whereas a static security feature couldn’t.

Because threat actors can’t imitate digital behaviors as easily as they can mimic static features, such as someone’s credentials, the behavioral edge that AI and automation give defenders makes these security capabilities all the more powerful.
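
To make this behavioral angle concrete, here is a minimal sketch of how anomaly detection over user activity can work, written in Python with scikit-learn’s IsolationForest. The feature names, numbers, and thresholds are illustrative assumptions for this article, not a description of IBM’s models or the QRadar Suite.

# A minimal sketch of behavioral anomaly detection; the features below
# (logons per hour, new hosts contacted, processes spawned, data uploaded)
# are assumed examples of what a SOC might derive from endpoint and network logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical baseline of a single user's normal daily activity:
# [logons_per_hour, new_hosts_contacted, processes_spawned, mb_uploaded]
baseline_activity = np.array([
    [3, 0, 12, 5],
    [4, 1, 15, 7],
    [2, 0, 10, 4],
    [5, 1, 14, 6],
    [3, 0, 11, 5],
])

# Learn what "ordinary" looks like for this user.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_activity)

# After the phishing click: many new hosts contacted, unusual process activity,
# and a large upload, resembling the lateral movement described above.
todays_activity = np.array([[9, 14, 55, 120]])

score = model.decision_function(todays_activity)[0]  # lower means more anomalous
if model.predict(todays_activity)[0] == -1:
    print(f"Anomalous behavior detected (score={score:.3f}); raise an alert for review")

A static rule, such as one that only flags a known-bad file hash, would miss this activity entirely, which is why the behavioral signal matters.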

Now imagine this example multiplied by a hundred. Or a thousand. Or tens or even hundreds of thousands. Because that’s roughly the number of potential threats a given enterprise faces in a single day. Compare those numbers to the three-to-five-person teams running SOCs today on average, and the odds naturally favor the attacker. But with AI capabilities supporting SOC teams through risk-driven prioritization, these teams can now focus on the real threats amid the noise. On top of that, AI can also help them speed up investigation and response—for example, by automatically mining data across systems for other evidence related to an incident or by providing automated workflows for response actions.
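
As a rough illustration of what risk-driven prioritization can look like, here is a small Python sketch that scores and ranks alerts so the highest-risk ones surface first. The fields, weights, and example alerts are assumptions made for illustration; they are not taken from any specific product.

# A minimal sketch of risk-driven alert prioritization; field names and
# weights are illustrative assumptions, not a vendor's scoring model.
from dataclasses import dataclass

@dataclass
class Alert:
    name: str
    severity: float           # 0-1, from the detection rule or model
    asset_criticality: float  # 0-1, how important the affected system is
    confidence: float         # 0-1, how likely the alert is a true positive

def risk_score(alert: Alert) -> float:
    # Multiplicative score: an alert only ranks high if it is severe,
    # credible, and touches an asset that matters to the business.
    return alert.severity * alert.asset_criticality * alert.confidence

alerts = [
    Alert("Phishing click on finance workstation", 0.8, 0.9, 0.7),
    Alert("Port scan against isolated test VM", 0.4, 0.2, 0.9),
    Alert("Possible lateral movement toward domain controller", 0.9, 1.0, 0.6),
]

# Surface the riskiest alerts first so a small SOC team spends its time well.
for alert in sorted(alerts, key=risk_score, reverse=True):
    print(f"{risk_score(alert):.2f}  {alert.name}")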

IBM is bringing AI capabilities such as these natively into its threat detection and response technologies through the QRadar Suite. One factor making this a game changer is that these key AI capabilities are now brought together through a unified analyst experience that cuts across all core SOC technologies, making them easier to use across the entire incident lifecycle. In addition, these AI capabilities have been refined to the point where they can be trusted and automatically acted upon via orchestrated response, without human intervention. For example, IBM’s managed security services team used these AI capabilities to automate 70% of alert closures and speed up their threat management timeline by more than 50% within the first year of use.

The combination of AI and automation unlocks tangible gains in speed and efficiency, both desperately needed in today’s SOCs. After years of being put to the test, and with their maturity now at hand, AI innovations can optimize defenders’ use of time—through precision and accelerated action. The more AI is leveraged across security, the more it will strengthen security teams’ ability to perform and the cybersecurity industry’s resilience and readiness to adapt to whatever lies ahead.

This content was produced by IBM. It was not written by MIT Technology Review’s editorial staff.
