Resources Articles Highlights from The Rochester Security Summit 2023

Highlights from The Rochester Security Summit 2023

Sedara at RSS

October’s a fun month in the cybersecurity field, and not just because of the costumes and candy. Since it was designated as the National Cybersecurity Awareness Month in 2004, October’s always packed with great events, such as the Rochester Security Summit (RSS). RSS has been a leading regional cybersecurity conference in Upstate New York since 2006, where hundreds of attendees gather to share about the latest advancements in the field. Several of our team members attended (and presented at) this year’s RSS. From the history of the field, to AI, to purple teaming, here are their highlights from the event.

As we asked the question – What was your favorite session from RSS?


David Frier, vCISO

My favorite session was Dr. Eugene Spafford’s keynote address. Spaf (as he’s affectionately known in the community) is a professor at Purdue University. He’s been a cybersecurity expert since before the field was born, and I’ve been following him for almost as long. His keynote stands out to me because he’s trying to shake up InfoSec and promote some clearer ways of thinking about systems design, and whether systems even can be secure.

Spaf’s presentation was based on his latest book Cybersecurity Myths and Misconceptions, which is a trenchant examination of, you guessed it, the most widespread and enduring myths in the field. Spaf argues that cybersecurity is in a developmental stage that’s common among all nascent fields, where some assumptions and personal beliefs still outweigh more rigorous, science-based understandings. One myth is that more technology is always better. Spaf notes that today’s technology solutions are often too complex, bloated, and riddled with scope creep such that they create hosts of “emergent failures” that system administrators cannot foresee because they are unable to fully understand the system. To dispel this myth Spaf argues we must design more constrained, intentional technology. Overcoming these myths can help cybersecurity evolve to its next stage as a discipline.


Cris Trevino, SOC Analyst

The talk that I enjoyed the most was “Unlocking Generative AI: Balancing Innovation with Security” by Jason Ross, a longtime cybersecurity expert and current lead security engineer. Jason spoke of interesting use cases for generative AI, such as using AI to create code, search documents, or for sentiment analysis. Most people are familiar with generative AI’s ability to create text, but it can also analyze text—to varying degrees of current effectiveness—for its overall “feel.” This could be used by HR to determine if an email is threatening or by communications to determine if a press release has the right tone.

Jason stressed that AI’s use cases must be balanced with security. For example, social engineering is one of the oldest and most effective forms of cyberattacks, and AI will only make it easier for malicious actors to generate ostensibly trustworthy requests and make it harder for humans to discern truth from fiction. Generative AI is poor at understanding when it doesn’t know something, so it’s susceptible to generating convincing-sounding information that’s actually false. Jason cited a recent example of a lawyer using ChatGPT-generated research in court, only to find out the tool made-up court cases and laws. Jason also stressed other security concerns with generative AI, such as its capacity to assist in prompt injection, data poisoning, and data exposure.

Overall, AI is here to stay whether we like it or not, and security researchers must be at the forefront of AI developments to ensure its use cases are balanced with security.


Courtney Bell, Cybersecurity Engineer

I have to agree with Cris on Jason Ross’ talk, or any of the other AI talks, including “Artificial Intelligence: Risk, Regulation, and Reward” by Paul Greene, Felix Knoll, and Bruce Cheney, “Security Risks and Mitigations for Generative AI Foundation Models” by Justin Leto, and “Rise of the Machines: The Future of AI in Cybersecurity” by Reg Harnish.

AI is such a game changer with its relative ease of use and potential integration with every discipline. AI’s popular champions posture the technology as something that “gets things done,” but it does not come without its risks. AI requires far more accuracy and nuance before we should trust its widespread adoption. I think AI is a powerful tool that can provide powerful solutions, but it’s not a panacea and our relationship with it must evolve alongside, or ideally even ahead, of the technology going forward.


Frank D’Arrigo, vCISO

Do I have to choose one?


How about two?

I’ll take that as a yes! Okay I’ll stay on the AI bandwagon for a minute. Reg Harnish’s “Rise of the Machines: The Future of AI in Cybersecurity” had me hooked (in addition to its frequent Terminator references). Reg took us from the humble beginnings of machines that can play checkers to systems that might eventually enslave the human race. I walked (read: ran) away ready to join the last vestiges of human resistance against our inevitable machine overloads. Well, I did at first, until the courage wore off and I felt like crawling into a hole and rocking back and forth. Only after that did I calm down and realize I’m not sure I entirely agree with Reg’s fatalistic conclusions: humans are currently in the driver seat, and, if we’re smart, we’ll stay there. Ultimately, I left with a greater appreciation and respect for the increasing pace of advancements on machine learning and AI. Its developments will reflect the interests of those with the financial and technological means to develop the tools, which opens its own can of ethical, moral, and legal worms. Regardless of how it shakes out, it feels like generally intelligent AI is within my children’s lifetime, so I may start the survival bunker this Spring…

My other favorite presentation was actually my colleague Courtney Bell’s (see above) talk: “It’s OK to Make Mistakes: Blame Culture in Infosec.” Courtney took us down a road not often traveled by examining the human side of cybersecurity. She discussed what makes people vulnerable to scams and social engineering and how blame is assigned to the victims of these attacks. We often think of cybersecurity problems as having only technological solutions, but people are—and always will be—part of the solution (unless maybe that whole AI thing, but you get my point). Courtney presented the results of a social psychology study she conducted and found what other researchers have noted, that negative, blame-oriented cultures produce less desirable changes in human behavior than more solution-focused cultures. Ultimately, Courtney argues cybersecurity practitioners should try to ally with other stakeholders and users in creating more secure environments that authentically produce desired cybersecurity outcomes.


Jason Taylor, Cybersecurity Program Analyst

My favorite presentation was “Defending Beyond Defense” by Dr. Catherine Ullman. Cathy argues that cybersecurity defenders can more effectively protect their systems when they start thinking like attackers. Defenders typically focus on configuring, implementing, patching, scanning, and documenting systems and processes. They tend to think in terms of lists, such as when a task is completed they “check it off” and may erroneously think they are now “safe” from the threat.

Cathy argues this approach falls short of meeting the reality of how cyber threats manifest, which is exemplified by how attackers think. To Cathy, attackers think in terms of ongoing relationships. Attackers focus on detection, access, persistence, expansion, and exfiltration; they continuously question how one door may lead to another, and when they encounter resistance, they just try another path. Therefore, Defenders who adopt this type of question-based thinking may uncover new attack vectors and opportunities for additional security controls that they otherwise would not see if they thought only in terms of absolutes and completing rote tasks.

This presentation resonated with me because I’ve seen how cybersecurity blue teams (Defenders) can be siloed off from cybersecurity red teams (Attackers). Merging the two teams (purple teaming) and their thinking styles can produce outcomes greater than the sum of their parts, which is what we’ve been generating at Sedara. As someone who identifies as a Defender, Cathy’s talk inspires me to explore more Attacker methodologies and toolsets, which she explores more in her latest book The Active Defender.



RSS wrapped up another lively October. Our team is always attending local and national cybersecurity events to stay up-to-date on the latest industry trends. If you’re looking for support enhancing your cybersecurity program and reducing your risks, it’s time to get started! Reach out to the team at Sedara to find out how you can protect your organization.


More Reading on This Topic



Accomplish your security & compliance goals.

Get a Demo