Security

How AI is changing the cybersecurity landscape

Vivian Schiller from Aspen Institute and Heather Adkins from Google discuss the future of cybersecurity.

Aug 01, 2024 6 min read

DIALOGUES is a platform for diverse perspectives and candid conversations on AI, technology, and society — and our collective responsibility to get it right.

At the recent Aspen Security Forum, we sat down with Vivian Schiller, VP & Executive Director of Aspen Digital, a program of the Aspen Institute, and Heather Adkins, founding member of the Google Security Team and VP of Security Engineering at Google, to discuss how the safe and responsible use of AI can change the cybersecurity landscape for the better.

2024 has been another big year for AI. As it becomes more widely accessible, how do you see AI changing the cybersecurity landscape?

VS: We think about the intersection of AI and cybersecurity as two sides of the same coin. On one side, there’s the cybersecurity of AI-enabled products. How do we keep them secure? On the flip side, there are both opportunities and risks that AI brings to the cybersecurity landscape. These are cross-cutting issues. The work that private industry, civil society and governments take on now will determine who benefits more – attackers or defenders.

HA: Large language models (LLMs) are poised to revolutionize the cybersecurity landscape. Their ability to process massive datasets at scale opens new avenues for defenders, such as identifying code vulnerabilities, analyzing complex telemetry data, and streamlining operations. This proactive approach will significantly reduce attack opportunities and give defenders a much-needed advantage. However, it's crucial to acknowledge that these same capabilities will also be available to attackers. As the technology advances, we must remain vigilant and closely monitor the potential development of automated attack platforms.
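Adkins’s point about analyzing complex telemetry at scale can be made concrete with a toy baseline. The sketch below flags outlier hosts in fleet telemetry using a robust statistic; the host names, counts, and threshold are illustrative assumptions, not anything from the interview, and a production system would layer ML or LLM analysis on top of signals like this.

```python
from statistics import median

def flag_anomalies(counts, threshold=3.5):
    """Flag hosts whose count deviates from the fleet median by more than
    `threshold` robust z-scores (using the median absolute deviation)."""
    values = list(counts.values())
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:
        return []
    # 0.6745 scales MAD to be comparable to a standard deviation.
    return [host for host, v in counts.items()
            if 0.6745 * abs(v - med) / mad > threshold]

# Hypothetical daily login counts per host; "vpn-1" is the outlier.
counts = {"web-1": 42, "web-2": 45, "web-3": 40, "db-1": 44, "vpn-1": 900}
print(flag_anomalies(counts))
```

The median-based score is a deliberate design choice: a single compromised host generating huge volume would inflate a mean-and-stddev baseline enough to hide itself, while the median stays anchored to normal behavior.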

The work that private industry, civil society and governments take on now will determine who benefits more – attackers or defenders.

Vivian Schiller, VP & Executive Director, Aspen Digital

In the face of growing cyber threats, how can AI be used to increase the cyber defenses of the world’s democracies?

VS: We’ve examined this question at both the micro and macro level. You need to tackle both to defend democratic societies and their institutions.

At the micro level, organizations must stay true to cybersecurity principles and not be swept up in the hype of deploying AI without good reason. AI can never be a complete substitute for human thinking and human judgment. Having a human in the loop, always, is a core principle. We examine these questions in two papers: Navigating Uncharted Waters: Generative AI Guidance for Organizations and Envisioning Cyber Futures with AI, which we developed based on insights from our Aspen US and Global Cybersecurity Groups. In short, organizations must provide clear guidance for their employees about AI usage and set out a clear governance structure for how data will be used by any AI models.

At the macro - or societal - level, our AI and Democracy work examines how expanding AI capabilities make it easier for bad actors to mislead citizens and the steps we can take to neutralize targeted local voter suppression, language-based influence operations, and deepfakes.

HA: Our AI Cyber Defense Initiative outlines how AI can be used to tip the security scales in favor of the professionals tasked with protecting our infrastructure and the critical software we depend upon every day. AI can help today’s defenders in scaling their work in threat detection, malware analysis, vulnerability detection, vulnerability fixing and incident response. We also outline a proposed policy and technology agenda that can help to secure AI, encourage a balanced regulatory approach to AI usage and adoption, and advance research.

How do you envision AI impacting the future of cybersecurity jobs and the skill sets required for cybersecurity professionals?

HA: The cybersecurity industry currently faces a daunting skills gap and a shortage of qualified professionals, leaving many organizations vulnerable to threats. Large language model (LLM) assistants offer a transformative solution. By automating repetitive tasks and providing on-the-job training, we can empower existing talent, make the field more accessible, and enhance operational efficiency. Imagine new professionals reaching the effectiveness of seasoned veterans in a fraction of the time. Moreover, the potential impact on mental well-being cannot be overstated. By alleviating burnout, reducing pressure, and fostering a more supportive environment, we can improve job satisfaction and retention rates. The future of cybersecurity is bright, and LLMs can play a pivotal role in shaping a more resilient and sustainable workforce.

VS: Retaining cybersecurity talent is a big area of focus for us at Aspen Digital. In fact our Aspen US Cybersecurity Group put out a report on this recently called Bits, Bytes, and Loyalty. But I think Heather is exactly right that AI can make the field more attractive to talent by automating the least appealing parts of the work, and making it more productive and gratifying. But training and recruitment is just as important as retention. We all need to do a better job at inspiring and training young people for careers in cyber.

In your opinion, what are the most promising areas of research and development in AI for cybersecurity, and what advancements can we expect in the near future?

HA: Over the past few years we’ve seen some really promising research and public-private collaborations to secure AI, spanning bug hunting, threat analysis, red teaming and the development of new tools. Some great examples include our recently announced Coalition for Secure AI, which will invest in AI security and leverage Google's Secure AI Framework; DARPA’s AI Cyber Challenge, which focuses on building new AI tools to help secure major open-source projects; and last year’s DEF CON AI Village red teaming event. These all exemplify the unique opportunities that come from industry-wide collaboration.

VS: Another area of focus is synthetic content designed to mislead or entrap. This means “old tricks” in cybersecurity, like phishing and social engineering, will become more effective and less costly to produce. The availability of large data sets, the growing sophistication of foundation models, and access to compute mean adversaries have greater access to human-like speech and realistic images for targeting the most vulnerable link in cybersecurity: humans. So we’re focused on advances that can counter and detect adversarial, synthetic content attacks, through both technology and awareness.
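To illustrate why phishing is such a cheap "old trick," here is a minimal heuristic scorer for phishing-style text. The keyword list, scoring scheme, and sample message are illustrative assumptions only; real defenses combine trained classifiers, sender reputation, and URL analysis rather than hand-written rules like these.

```python
import re

# Illustrative urgency keywords often seen in phishing lures (an assumption,
# not a vetted list).
URGENCY = {"urgent", "immediately", "verify", "suspended", "password"}

def phishing_score(message: str) -> int:
    """Count crude phishing signals: urgency keywords and raw-IP URLs."""
    words = set(re.findall(r"[a-z]+", message.lower()))
    score = len(words & URGENCY)
    # Links pointing at a bare IP address are a classic phishing tell.
    score += len(re.findall(r"https?://\d{1,3}(?:\.\d{1,3}){3}", message))
    return score

msg = "URGENT: verify your password at http://192.0.2.7/login immediately"
print(phishing_score(msg))  # scores well above a benign message
```

The asymmetry Schiller describes is visible even here: generative models can rewrite a lure endlessly to dodge any fixed keyword list, which is why detection research increasingly pairs content analysis with signals attackers cannot cheaply rewrite.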

Anything else that is top of mind for you that you would like to share? 

VS: Cybersecurity is a whole-of-society challenge. That’s why we’re excited to be part of a new large-scale cybersecurity public service campaign aimed at protecting Americans against growing cyber threats. The campaign will promote the awareness, knowledge, and practical tools needed to build personal resilience, contributing to community and national resilience against cyber threats. The goal for the campaign is to empower individuals to manage their digital presence and online interactions. By practicing better cybersecurity behaviors, individuals can protect themselves and their communities.

HA: Our mission in cybersecurity is to create a safer digital landscape by minimizing attack opportunities, lowering defense costs, and raising the stakes for malicious actors. Research that empowers defenders and strengthens their advantage is paramount. Initiatives like the DARPA AI Cyber Challenge are paving the way for a future where code is increasingly resilient and secure. While the potential of large language models is immense, we must also prioritize research into mitigating risks like hallucinations, prompt injection, and data leaks. This transformative technology will undoubtedly present challenges, but society has historically adapted to new risks when the potential benefits are significant. The return on investment for these advancements promises to be substantial, making the effort well worth it.
