People are the key to Gen AI success
Despite the rapid advancements we’ve seen in Gen AI, the task of protecting organizations still sits squarely on the shoulders of humans. But while people bear the responsibility for maintaining and improving security, they can’t do it alone.
To protect organizations, security professionals must be augmented with Gen AI. Modern threat surfaces have become too vast and complex for people to build an accurate understanding of an organization’s true security posture. And there’s too much data, with too many permutations, for analysts to identify patterns, correlations, and outliers. It’s easy to understand how vulnerabilities can remain hidden and anomalies can pass undetected if security professionals aren’t supported with AI-powered tools.
With varying levels of experience and skill across security teams, it’s common to find inconsistency in the speed and quality of analysis, response, and reporting. Security is a team effort, and to build the most effective teams, leaders must find ways to help every team member operate at the same level as their best performers.
Gen AI can help produce faster, more accurate risk assessments, upskill staff, bring consistency to security processes, and accelerate incident response and recovery. But while automation can shrink the vulnerability window, it’s essential to keep a human in the loop: AI can make recommendations, and humans must validate those recommendations before acting on them.
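In practice, a human-in-the-loop control can be as simple as a gate between the model’s suggestion and any action taken on it. The sketch below shows one minimal pattern in Python; the Recommendation structure and the quarantine action are hypothetical illustrations, not any particular product’s API.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A single AI-generated remediation suggestion (hypothetical structure)."""
    summary: str       # e.g. "Quarantine endpoint LAPTOP-0231"
    confidence: float  # model's self-reported confidence, 0.0-1.0
    action: str        # machine-readable action identifier

def execute(action: str) -> None:
    # Placeholder for the real remediation step (firewall rule, host isolation, ...).
    print(f"Executing: {action}")

def human_in_the_loop(rec: Recommendation) -> None:
    """Never act on an AI suggestion without explicit analyst approval."""
    print(f"AI recommends: {rec.summary} (confidence {rec.confidence:.0%})")
    verdict = input("Approve this action? [y/N] ").strip().lower()
    if verdict == "y":
        execute(rec.action)
    else:
        print("Rejected; recommendation logged for review.")

if __name__ == "__main__":
    human_in_the_loop(Recommendation(
        summary="Quarantine endpoint LAPTOP-0231 pending malware scan",
        confidence=0.87,
        action="quarantine:LAPTOP-0231",
    ))
```

The design choice that matters is that execute() is unreachable without an explicit analyst decision, and rejected recommendations leave a trail that can be used to tune the model over time.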
New skills for a new era of cybersecurity
To take full advantage of the Gen AI opportunity, security professionals need new skills.
A strong grasp of fundamental AI and Gen AI concepts is essential to understanding how to use AI models effectively. Some security professionals may even want to learn data science skills to better understand the models and the risks associated with them.
At a minimum, security teams must be equipped with the tools and skills to write effective natural language prompts. Small changes in the phrasing of prompts can have a dramatic impact on Gen AI outputs, so to get the best out of models, security professionals need to understand the interface. We’ve seen in the past – during the emergence of cloud computing and modern query languages, for example – that learning how to “talk” to the system leads to much better outcomes.
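To illustrate why phrasing matters, compare a vague prompt with a structured one for the same log-triage task. This is a sketch, not a recommended template: the prompts are invented examples, and call_model() is a placeholder for whichever provider API your team uses.

```python
# Two prompts for the same task. The structured version pins down role,
# context, and output format, which typically yields far more usable output.

VAGUE_PROMPT = "Look at these logs and tell me if anything is wrong."

STRUCTURED_PROMPT = """You are a SOC analyst assistant.
Task: review the authentication logs below and flag suspicious activity.
Context: normal traffic is 9am-6pm UTC from the 10.0.0.0/8 range.
Output format: a JSON list of findings, each with 'timestamp',
'source_ip', 'reason', and 'severity' (low/medium/high).
If nothing is suspicious, return an empty list.

Logs:
{logs}
"""

def build_prompt(logs: str) -> str:
    """Fill the structured template with the evidence to be analyzed."""
    return STRUCTURED_PROMPT.format(logs=logs)

# call_model() stands in for your provider's chat/completion call:
# print(call_model(build_prompt(raw_logs)))
```

Constraining the model’s role, context, and output format in this way is often the single biggest lever for getting consistent, machine-parseable results.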
Helping security novices become senior experts
Gen AI can help people make better security decisions faster. It can help technically proficient security professionals write better reports, and help great communicators deepen their technical skills.
The problem is that it can be tempting for junior security staff to rely on AI recommendations too heavily. And if Gen AI does all the heavy lifting for them, how will today’s novices become tomorrow’s seasoned professionals?
It’s important that security leaders ensure the necessary skills and knowledge are embedded throughout their teams and not just in the minds of a few senior staff. That will help security professionals at all levels play an active role in implementing and optimizing Gen AI, rather than passively accepting its outputs.
Security professionals have an obligation to use AI responsibly
Throughout the history of IT, people have sought to use technologies in ways their creators never intended. Since Gen AI entered the mainstream, we’ve already seen bad actors manipulate publicly available LLMs to generate malicious content and code, and abuse other models to create deepfakes.
Security teams need to take action to prevent misuse of their Gen AI models and tools. To do that, they must understand and adhere to responsible AI principles so they can use AI models ethically and mitigate potential risks of abuse. As security professionals, we have a duty to consider how the Gen AI solutions we create might be abused and an obligation to raise the alarm if we identify unacceptable risks.