Learn from industry leaders, security researchers, and practitioners sharing their expertise and insights.
Join us for exclusive presentations from industry leaders and special documentary screenings.
Keynote
Chief Adversarial Officer
Author
Talk to be announced
Documentary
Transhumanist & Cybersecurity Speaker
Author and Actor
A sneak peek into the documentary I AM MACHINE, with commentary from the director and producer.

Purple Team Lead
Security Risk Advisors (SRA)
Sarah leads the Purple Team service at Security Risk Advisors (SRA). She has led hundreds of Threat Intelligence-based Purple Team exercises for organizations in the Fortune 500 and Global 1000 over the past 7 years. Her background is in offensive security, primarily internal network, OT/ICS, and physical security penetration testing. Sarah also has experience in external network penetration testing, web application assessments, OSINT, phishing/vishing campaigns, vulnerability management, and cloud assessments. She was a DEF CON 33 speaker. Sarah graduated Summa Cum Laude from Penn State with a B.S. in Cybersecurity. She is a Certified Red Team Operator (CRTO), Certified Information Systems Security Professional (CISSP), Google Cloud Digital Leader, AWS Certified Cloud Practitioner, and Advanced Infrastructure Hacking Certified. She lives in Philadelphia with her dog, Paxton.
Letthemin: Facilitating High-Value Purple Teams Using an Assumed Compromise Approach
Purple Teaming has become a critical component of modern cybersecurity programs, but its definition and application vary widely across organizations. This presentation introduces a refined, regimented, and repeatable methodology for running Purple Team engagements, developed and battle-tested for over a decade. Because the term 'Purple Team' means different things to different people (a methodology, a team of people, a program, an assessment, or even a state of mind), and because Purple Team engagements themselves come in all shapes and sizes, the speaker will begin by aligning recommended definitions and applications of common Purple Team terminology. The presentation will then explain how to apply an Assumed Compromise approach to Purple Teams. Any organization can be vulnerable at any point in time. This style of Purple Team testing follows the adversary through the entire life cycle of an attack, from Initial Access to Impact, assuming vulnerabilities exist in order to focus instead on the visibility of security tools. This is a powerful method of identifying ways to improve detection and prevention capabilities at each layer of an organization's defense in depth. The speaker will include real-world examples and specific instructions. The presentation will conclude with broader applications of this style of Purple Team, including how to collect and analyze engagement results and apply them to improve an organization's resilience to common threats. This talk is ideal for security professionals, both Red and Blue Team, who are looking to elevate the way they perform Purple Team engagements.

Red Teamer & Penetration Tester
Security Risk Advisors
Annika Clarke is a red teamer, penetration tester, and offensive security engineer working at Security Risk Advisors. Her focus is on researching and developing novel attack techniques that creatively evade traditional detection mechanisms in real-world client engagements.
Hiding in Plain Sight: Weaponizing Developer Applications and Interpreted Languages to Evade Modern EDR
As endpoint detection and response (EDR) solutions evolve to counter traditional intrusion techniques, organizations often develop a false sense of security, relying heavily on these tools to detect and mitigate threats. This presentation challenges that perception by exposing how trusted developer tools, such as IDE extensions, Electron applications, and interpreted-language execution environments, can be weaponized to bypass the most sophisticated detection mechanisms. The inherent risks posed by trusted applications and developer environments are often critically overlooked. This talk will demonstrate how to exploit this blind trust by using high-level languages like Node.js and Python to hijack legitimate applications and bypass traditional endpoint controls and signature-based detection. Attendees will gain insights into how these methods were developed and successfully deployed during real-world red team engagements.

Principal
Sherpa Intelligence LLC
Tracy Z. Maleeff, aka InfoSecSherpa, is the principal of Sherpa Intelligence LLC. She previously held roles at the Krebs Stamos Group, The New York Times Company, and GlaxoSmithKline. Prior to joining the Information Security field, Tracy worked as a librarian in academic, corporate, and law firm libraries. She holds a Master of Library and Information Science degree from the University of Pittsburgh in addition to undergraduate degrees from both Temple University (magna cum laude) and the Pennsylvania State University. Tracy has been featured in the Tribe of Hackers: Cybersecurity Advice and Tribe of Hackers: Leadership books. A native of the Philadelphia area, she is passionate about Philly sports - Go Birds! Tracy publishes OSINT blogs and an Information Security & Privacy newsletter.
The Threats & Research Opportunities of the Cannabis Industry
The legal cannabis industry has grown exponentially in the past few years, particularly in North America. Like any business, it is not immune to cybersecurity and physical security threats. This session will provide an overview of the threats and challenges faced by this industry, identifying known malicious actors and Tactics, Techniques, and Procedures (TTPs). In addition to providing attendees with a breakdown of threats, time will be spent outlining opportunities for Open-Source Intelligence (OSINT) research into this unique business, which can help to identify potential challenges and possible solutions for defense. Applicable areas to be covered include legislation, politics, patents/trademarks, legal, cultural, and business. Presented by an Information Security professional with years of research experience, including as a law firm librarian, who also holds a medical marijuana license from the Commonwealth of Pennsylvania.

CTO & Co-founder
Sondera
Matt Maisel cofounded Sondera, a Philly-based AI security startup, where he serves as CTO. He's built ML security products at Cylance, Obsidian, and NetRise. Recently, he's focused on agentic systems, both building and breaking them. He's presented at BSidesCharm and DEF CON AI Village, and trained at Black Hat USA.
Your AI Agent Just Got Pwned: A Security Engineer's Guide to Building Trustworthy Autonomous Systems
A recent AI red teaming study by the UK AI Security Institute achieved a 100% attack success rate against all tested agents, with successful exploits in as few as 10 queries. If you're a security engineer building or securing agents, this should motivate you to act. As we give agents more autonomy, we expand the attack surface. Every autonomy level increases utility but also increases risk. Yet most teams ship agents with only basic prompt filtering. This talk delivers practical patterns for building secure agents. Attendees will learn what agentic systems are and why to use them, how each autonomy level creates new attack vectors, design patterns for agents, and guardrails that add security hooks to agent frameworks without breaking functionality. Attendees will leave with a security evaluation framework, code examples, and a pre-deployment checklist.
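The guardrail pattern the abstract mentions can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in for a real agent framework: the tool names, the `ALLOWED_TOOLS` allowlist, the dispatcher, and the deliberately crude injection check are illustrative assumptions, not the speaker's actual code.

```python
# Minimal guardrail sketch: wrap an agent's tool dispatcher with a
# deny-by-default allowlist and a crude prompt-injection check before
# any tool runs. Tool names and markers are illustrative only.

ALLOWED_TOOLS = {"search_docs", "summarize"}
INJECTION_MARKERS = ("ignore previous", "system prompt", "developer message")

def guarded_call(tool_name, tool_input, dispatch):
    """Run dispatch(tool_name, tool_input) only if guardrails pass."""
    if tool_name not in ALLOWED_TOOLS:
        return f"BLOCKED: tool '{tool_name}' is not on the allowlist"
    lowered = tool_input.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return "BLOCKED: input matches a known injection pattern"
    return dispatch(tool_name, tool_input)

# Stand-in for a real agent framework's tool-execution hook.
def fake_dispatch(name, text):
    return f"{name} ran on: {text}"

print(guarded_call("summarize", "quarterly report", fake_dispatch))
print(guarded_call("delete_files", "/", fake_dispatch))
print(guarded_call("search_docs", "Ignore previous instructions", fake_dispatch))
```

The point of the pattern is that the security hook sits between the model's decision and the tool's execution, so it constrains the agent without modifying the model itself.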

Lead Scientist
Security Risk Advisors
Evan Perotti is a Lead Scientist at Security Risk Advisors. He focuses on research and development, primarily within the offensive security space. His specialties include AWS security, Windows endpoint security, and purple teaming.
Screaming about detection coverage in ALLCAPS!
Are your tools actually detecting on the right indicators? A common refrain in defense is that attacks should be addressed at the behavior level and avoid low-hanging indicators. From the hundreds of purple team exercises we conduct each year, we've evaluated all manner of endpoint security controls. And, invariably, they all suffer from diminished coverage when removing weaker indicators, like process command lines. It seems security vendors prefer taking the easy route. This talk will focus on separating attack behaviors from their specific implementations, evaluating detection robustness, and implementing atomic testing procedures to address these concepts. This talk will also cover open-source tooling we've released that is designed to help red teams put these concepts into practice for their organizations.
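To make the behavior-versus-implementation distinction concrete, here is a minimal sketch (not the speaker's released tooling) that generates cosmetically different command lines for one harmless behavior, so a tester can check whether a detection fires on the behavior or only on one exact string:

```python
# Sketch: superficial variants of one benign command line. If a detection
# fires on the plain form but not the others, it is matching the string,
# not the behavior. The base command is a harmless stand-in.
import base64

BASE = "whoami /all"

def variants(cmd):
    """Yield cosmetically different command lines for the same behavior."""
    yield cmd                                    # plain form
    yield cmd.upper()                            # case change defeats naive matching
    yield '"' + cmd.replace(" ", '" "') + '"'    # re-quoted arguments
    encoded = base64.b64encode(cmd.encode("utf-16-le")).decode()
    yield f"powershell -EncodedCommand {encoded}"  # no plaintext on the command line

for v in variants(BASE):
    print(v)
```

Running each variant as an atomic test and comparing which ones alert gives a rough measure of how robust the detection is to implementation changes.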

vCISO & Compliance Professional
Independent
Michael Raymond is a vCISO and compliance professional who thrives on exploring the bleeding edge of tech. In his earlier career, Michael was a security researcher and video producer, delivering live-streamed educational content on channels like Null Byte, SecurityFWD, and Hak5. Outside of his day job, Michael's curiosity drives him into the realms of hardware, electronics, and aerospace. Whether it's tracking airplanes through ADS-B, diving into signals intelligence with SDRs, home automation with Home Assistant, or uncovering other obscure niche topics, he brings the same passion and friendly enthusiasm to every new challenge.
Catching the Catchers: Open Source Stingray Detection in the Wild
Cell-site simulators (CSS), also known as Stingrays, are surveillance devices that impersonate legitimate cell towers, forcing nearby phones to connect. They can track devices, harvest IMSIs, and in some cases intercept communications, all while operating in secrecy. Despite their widespread use, little is publicly known about how or where they are deployed. Rayhunter, developed by the Electronic Frontier Foundation (EFF), is an open-source tool that puts CSS detection into the hands of everyone. Running on an inexpensive Orbic mobile hotspot, Rayhunter passively monitors cellular control traffic to identify suspicious behavior. This talk will explore how Rayhunter works, why it fills a critical gap left by existing detection methods, and what early deployments are revealing.
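The kind of suspicious control-plane behavior such a detector watches for can be sketched as simple heuristics. The event fields below are simplified illustrations of commonly cited IMSI-catcher indicators, not Rayhunter's actual message format or logic:

```python
# Sketch of heuristics a passive cell-site-simulator detector might apply
# to cellular control traffic. Field names are illustrative assumptions.

def check_event(event):
    """Return a list of alert strings for one control-plane event dict."""
    alerts = []
    if event.get("identity_request") == "IMSI":
        alerts.append("network asked for IMSI instead of a temporary ID")
    if event.get("cipher") in (None, "A5/0"):
        alerts.append("null or missing encryption negotiated")
    if event.get("downgrade_to") == "2G":
        alerts.append("forced downgrade to 2G")
    return alerts

events = [
    {"identity_request": "TMSI", "cipher": "A5/3"},                          # normal
    {"identity_request": "IMSI", "cipher": "A5/0", "downgrade_to": "2G"},    # suspicious
]
for e in events:
    for alert in check_event(e):
        print("ALERT:", alert)
```

Because the monitoring is passive (the device only observes what the network sends it), heuristics like these can run on cheap hardware without transmitting anything unusual.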

Director of Technology and Cybersecurity & AI Educator
The Baldwin School
Director of Technology and Cybersecurity & AI Educator at The Baldwin School. Dr. Heverin has over a decade of experience in cybersecurity, penetration testing, and AI security research. He has published multiple papers with his students at The Baldwin School (an all-girls private school) on topics including prompt injection attacks, ontology-driven cybersecurity, and vulnerabilities in enterprise technologies. He is the author of a Navy cyber risk assessment patent, an NVD CVE entry, exploits on ExploitDB, Google Dorks, and countless bug bounty reports for universities and a government agency.
LLM-SRO: Ontology-Driven Security for Large Language Models
Large Language Models (LLMs) are being adopted across industries, yet their attack surface is expanding faster than defenders can keep pace. This talk introduces LLM-SRO (Large Language Model Security Risk Ontology), an ontology-driven framework for systematically modeling and mitigating adversarial risks in LLMs. Built collaboratively in WebProtégé and paired with AI reasoning through ChatGPT, LLM-SRO integrates the OWASP Top 10 for LLM Applications with MITRE ATLAS adversarial techniques to create a living, queryable knowledge base for defenders. A key takeaway is that LLM-SRO was built with no coding required. This talk equips attendees with practical, actionable methods to prioritize risks and plan defenses.

Security Practitioners
Various
BV_ is a security practitioner currently working at (intentionally left blank). Syntax is a long-time pen tester currently working at (intentionally left blank). With a combined 40+ years of experience in the industry, they have seen some things, and this is where they get to regale you with some stories so you can learn from everyone else's mistakes. A lot of them ours.
Don't worry, everyone is that bad!
With a combined 40 years of experience, you collect some stories, and this is where you get to hear that even the biggest companies are as bad as you think.

Assistant Professor
Commonwealth University of Pennsylvania
Atdhe Buja is an Assistant Professor of computer science, digital forensics, and cybersecurity at the Commonwealth University of Pennsylvania, USA (Bloomsburg University). Atdhe is a world-renowned cybersecurity expert with decades of experience. As a PM, Atdhe has established and led CERT teams in academia and the private sector in Southeast Europe. He is an EC-Council Instructor (CEI), Microsoft IT Professional, and Oracle Administrator for RDBMS, and a leading authority on information technology, Industrial IoT, and ICS/SCADA cybersecurity. His research focuses on cybersecurity countermeasures for Industrial IoT, IoT security, ICS/SCADA infrastructures, wireless sensor networks (WSN), cybersecurity of ML and artificial intelligence, and database management systems. Author of multiple books including 'Cybersecurity of Industrial Internet of Things (IIoT)' and 'AI and ML-Driven Cybersecurity: Industrial IoT and WSN with Python Scripting'.
Enhancing Incident Response with AI: Leveraging ML for OT/IoT/IIoT Attack Detection and Prevention
As cyber threats progressively target Operational Technology (OT), Internet of Things (IoT), and Industrial IoT (IIoT) systems, traditional defenses struggle to keep pace. This talk introduces how artificial intelligence (AI) and machine learning (ML) can redefine incident response in these domains by enabling predictive detection and rapid response. Drawing on applied research and real-world datasets from the Global Cyber Alliance (GCA), I will demonstrate how the Data Science Lifecycle can be applied to build predictive ML models that identify anomalies, patterns, and attack trends across IIoT networks. The session introduces the IIoT Guardian prototype, a device-level cybersecurity solution that integrates ML/AI for real-time anomaly detection.
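As an illustration of the anomaly-detection step described above, here is a minimal z-score sketch over made-up IIoT telemetry features. A real system like the IIoT Guardian prototype would use richer features and learned ML models; the feature names and numbers here are invented for the example:

```python
# Minimal anomaly detection over IIoT telemetry: learn per-feature
# mean/stdev from a normal-traffic baseline, then flag samples that
# deviate too far. Features and values are illustrative only.
from statistics import mean, stdev

def fit(baseline):
    """Learn (mean, stdev) per feature from normal-traffic vectors."""
    cols = list(zip(*baseline))
    return [(mean(c), stdev(c)) for c in cols]

def is_anomalous(model, sample, threshold=3.0):
    """Flag a sample if any feature is more than `threshold` stdevs out."""
    return any(abs(x - mu) / sigma > threshold
               for x, (mu, sigma) in zip(sample, model))

# Features: (packets/sec, mean payload bytes) from benign traffic.
baseline = [(100, 60), (110, 58), (95, 61), (105, 59), (98, 62)]
model = fit(baseline)
print(is_anomalous(model, (102, 60)))   # → False (normal reading)
print(is_anomalous(model, (900, 400)))  # → True (flood-like reading)
```

The same fit-then-score shape carries over to the ML models the talk describes; only the feature engineering and the scoring function get more sophisticated.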

Professional Mentalist
Independent
Professional mentalist and speaker who explores the intersection of psychological manipulation and cybersecurity.
MENTAL GAMES
What you believe can be overwritten in seconds. That's the power mentalists exploit, and the weakness that cybercriminals weaponize. In this live interactive session, a professional mentalist performs mind reading and influence demonstrations, then we pull back the veil to show how it's done. Together we dissect multiple mentalism techniques and expose the cognitive vulnerabilities they exploit. Each reveal is aligned with real world social engineering parallels: phishing and pretexting, deepfake impersonation, trust-exploiting scams, and more. Attendees witness how subtle cues, misdirection, and psychological priming can hijack logical decision making and security controls.

Principal Cyber Risk Engineer
Liberty Mutual Insurance
Principal Cyber Risk Engineer at Liberty Mutual Insurance with expertise in cyber insurance and risk management.
What Cyber Insurance Does(n't) Do For You
If you've ever taken a course on the basics of cybersecurity, you've learned about some standard risk management techniques: Reduction, Avoidance, Acceptance, and Transfer. And often all that is said about risk transfer is that it means 'buying insurance'. What does risk transfer, or the buying of cyber insurance, actually do for your overall security program? Hear from an insurance insider about what cyber insurance does and does not do. This talk will give the audience a better understanding of what cyber insurance does, what it doesn't do, and what the different types of insurance are. This will help attendees understand why 'just buy insurance' isn't always a good answer for risks that the organization does not want to mitigate.