
The AI Summit at Black Hat News

Jul 10, 2025

From Neuroscience to AI Security: Meet Microsoft's AI Red Team Leader

The AI Summit Series is proud to present an interview with Tori Westerhoff, who leads Microsoft's AI Red Team operations and brings a refreshingly unconventional approach to AI in cybersecurity.

With a background in cognitive neuroscience rather than traditional cybersecurity, Tori reveals how she applies "mind hacks" to test AI systems, and stays ahead of rapidly evolving threats.

Read the full interview to learn about the creative attack vectors that "floor" even experienced professionals and why understanding how humans process information might be the key to better AI security.


The AI Summit Series: What drew you to join the AI Summit at Black Hat USA as an Advisory Board member?

Tori Westerhoff (TW): I'm really excited to bring a new, perhaps less traditionally cybersecurity-focused voice to this conversation. AI repeats narratives that have been in the security space for a really long time, but it also drives the need for new perspectives, and I'm probably an emblem of that.

The opportunity to shape that dual-handed approach is something I really love, because I sit and live in it: we're thinking about cybersecurity in traditional ways, since the ways that vulnerabilities and flaws show up in systems are really elemental, while also talking through the nuances that AI integration brings to tech stacks.

I'm also really passionate about reaching this community. I live in a really specific slice: AI red teaming is one facet of a massive ecosystem around securing AI tech. And I think it's great to drive these narratives of how different security functions will interact going forward, now that AI has proliferated so broadly.

The AI Summit Series: What are you hoping that attendees will take away from The AI Summit at Black Hat USA this year?

TW: I really hope that folks feel as though we've demystified some of the hype around AI vulnerabilities or complications and security that AI integrates.

There are nuances. AI is different, it's new, it's constantly evolving. That is fundamentally true. And yet so many of the key insights that we bring to decision makers on my team are grounded in traditional security understanding.

I want that to be the bridge for folks to understand how AI tech stacks build on the way security was understood pre-AI, and also to get people really excited to bring their expertise to new methods. That new perspective I'm talking about is the reason I'm on the board as someone who doesn't have a traditional security background.

So I'm hoping that people get inspired to bring a multidisciplinary approach, but also really feel like they can dive in the next day with tools and practical applications and how-to's.

The AI Summit Series: Other than being able to take away those practical tools, why else do you think cybersecurity professionals should attend The AI Summit at Black Hat?

TW:  Something that ignites me about AI is that it kind of forces you to think about multiple disciplines, multiple industries, different implications of AI.

The context where AI is being integrated affects how we think about security, risk, and impact. And in our everyday work it's really difficult sometimes to peel ourselves away from our own perspective or our learned silo and hear about the financial industry, or hear from career hackers or career defenders.

I think understanding how so many different spaces across different industries are thinking about AI, at this particular slice of its maturity curve, will make the toolkit that folks leave with way more robust than if it were just cybersecurity professionals talking to cybersecurity professionals.

The AI Summit Series: Your work leading Microsoft's AI Red team sounds really fascinating. What are some of the many considerations that you and the team are taking into account at the current moment?

TW: The team's main mission is to bring as much creativity as we can put into every ounce of work that we do. Beyond informing business risk decisions, a lot of our team's ethos is to be the small helpful team within product development.

AI red teaming at Microsoft happens before launch. It happens before mitigation. It is truly an indicator light. And so we are trying to push even deeper into the tails of the bell curve of scenarios, to work with product teams to think even more creatively, more impactfully, and more specifically to their industry context as they create their safety frameworks.

The biggest concern for us is innovating at the same clip that AI is evolving. That becomes an ever-changing 3D puzzle of understanding new models and new capabilities, of building on them and breaking them apart so that we can analyse different jailbreaks or different ways to actually manipulate AI.

What I've found over my tenure there is that whatever we were doing three months ago, we're never doing now. It's breakneck-speed innovation. So the core of what we're thinking about is creatively matching or outpacing that evolution.

Secondarily, we are thinking about how we can bring as many perspectives, as many industry deep dives, as many out-of-the-box thinking frameworks to inspire AI red teaming. We really feel like AI gives us the canvas for that.

The AI Summit Series: For people who don't necessarily have a background in cybersecurity but are AI-curious, could you explain what red teaming is?

TW: Traditional red teaming in the security space is a very specific double-blind adversarial exercise, where teams intentionally emulate adversaries and get proof positive that there are security vulnerabilities within a system. Because it's double-blind, the product team working with that red team may not know that there's an emulated adversary in their system.

At Microsoft, our AI red team is single-blind and we work with product teams. That means we have preferential information about the build. We are part of the product development life cycle, and we're actually white-hat hacking before anything reaches a user's plate, any customer. The point is that we are hardening that system as part of the development process.

We are also emulating adversaries and also benign users, because in the AI instance there are a lot of scenarios where we want security and safety to be consistent even when no one is emulating a nation-state actor.

So our job is still the same thing: find proof that there are security or safety vulnerabilities or risks. But we work with product teams to help them understand and refine the way they create their safety mitigations and frameworks, so they can secure their products before they even launch.

The AI Summit Series: You've mentioned that you don't have a traditional cybersecurity background. How has your background in cognitive neuroscience shaped your unique approach to identifying AI vulnerabilities that others might miss?

TW: My education's in cognitive neuroscience and my early career was actually in national security strategy. Both of those elements have helped me do two different things that I think differentiates my methods from someone who's coming from a traditional cybersecurity background.

On the cognitive neuroscience side, I focused on decision-making, which is very convenient for AI hacking because inherently there are a lot of similarities you can use, almost as an allegory or framework drawn from our human experience.

An example could be that humans are really good at recognising faces with eyes. So if I censor the eyes, will I actually be able to manipulate inputs and outputs to evidence unwanted system behaviour? Or: humans are really good at filling in consistent texture. If you have a blind spot in your vision, your brain will actually create a smooth, consistent surface as long as there's context around that blind spot. LLMs do the same exact thing: they'll fill in holes in photos, they'll behave in a similar way.

The national security bit really helped me understand risk impact. Having deep industry knowledge, I was able to articulate how vulnerabilities or safety risks could accrue into really high-impact risks, to clarify how they work in systems and in individual products, and to coach product teams on the tertiary effects of particular vulnerabilities, a couple of steps down the line. I think that perspective is also really helpful because it's not just in the code, it's not just in the product; it's really in the contextual application of the technology in an industry.

The AI Summit Series: What conversations are you most looking forward to having there with other industry leaders?

TW: I'm really excited to hear what people are most looking forward to, or positive or hopeful about, around AI. My little facet focuses a lot on those vulnerabilities and how AI can go wrong, so that we can make sure it doesn't. But that means my day-to-day isn't always living in the art of the future.

Selfishly, I think that's really going to help me. Similar to the way that national security helps me understand the impact or the risk of particular vulnerabilities, I think understanding where people are really moving in an energetic way will help me understand how to coach my team to focus on vulnerabilities and the applied impact of those.

The AI Summit Series: This is also your first time attending The AI Summit at Black Hat USA. How would you recommend someone who's new to the industry or to the event immerse themselves in the experience?

TW: My general advice is always to make yourself comfortable with being a little bit uncomfortable, and talk to strangers. I got that advice maybe decades ago. And in my experience, especially sitting on the board and digging into everyone's perspective, it's already been eye-opening to see the different apertures and ways that folks think about things.

And I do think, because AI is at an early point in its maturity, everyone's learning. I say to my team a lot: you have to be comfortable being humbled by the technology itself. In some ways AI is humbling the industry, and we can all connect on the fact that we're curious, we want to learn, and we really want to grow as the technology grows.

And that's an open invitation to sit down at a table and say, Oh my gosh, what did you learn today? What absolutely surprised you? And I'm very certain that something will surprise someone.

The AI Summit Series: Beyond all of your impressive professional achievements, what's something about you that might surprise people who only know you through your work in AI security?

TW: I used to play trombone really intensely until well past undergraduate, and I almost went to a conservatory instead of a traditional academic undergrad. I toured Europe and I loved it. I was really into it. I played jazz and classical, and if you had known me decades ago, you would only have known that I played trombone.


The AI Summit at Black Hat USA is your gateway to mastering the double-edged sword of AI and cybersecurity. Taking place on August 5, 2025, this groundbreaking event brings together two pivotal forces shaping today's technology landscape – artificial intelligence and cybersecurity – for an unmissable day of insights, innovation, and strategy.

