From Manual Tradecraft to Scientific Foundations in Cybersecurity
In the 2000s, cybersecurity analysis was built on human expertise. Analysts and investigators would manually pivot across indicators—starting with an IP address, a domain, or a piece of malware—and map connections to adversaries, victims, infrastructure, and capabilities. This process was invaluable for uncovering relationships and understanding threat campaigns, but it was time-consuming, resource-intensive, and unable to keep pace with the rapidly growing scale and complexity of cyber threats.
Pivoting is the analytic technique of extracting a data element and exploiting it, in conjunction with data sources, to discover other related elements. At its core, pivoting is about the fundamental analytic task of hypothesis testing. Each element of an intrusion event generates hypotheses, which require evidence to strengthen, weaken, or revise. Pivoting is the task of discovering related elements (evidence) that inform a hypothesis and also generate new hypotheses of their own. Success depends on the analyst understanding the relationships between elements and on their ability to exploit a data element against the available data sources (e.g., "if I have this information combined with this data source, then I can find this...").
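To make the idea concrete, pivoting can be sketched as a breadth-first walk over indicator relationships: each discovered element becomes a new pivot point. The indicators and the toy lookup table below are hypothetical illustrations (using IETF documentation IP ranges), not real infrastructure or a real data source.

```python
from collections import deque

# A toy "data source": each indicator maps to elements discoverable from it
# (e.g., passive DNS ties a domain to an IP; a malware sample to its C2 domain).
DATA_SOURCE = {
    "malware:sample-a": ["domain:evil.example"],
    "domain:evil.example": ["ip:203.0.113.7"],
    "ip:203.0.113.7": ["domain:other.example"],
}

def pivot(seed, max_hops=2):
    """Breadth-first pivot: collect elements reachable within max_hops."""
    seen = {seed}
    frontier = deque([(seed, 0)])
    discovered = []
    while frontier:
        element, hops = frontier.popleft()
        if hops == max_hops:
            continue  # stop expanding beyond the hop budget
        for related in DATA_SOURCE.get(element, []):
            if related not in seen:
                seen.add(related)
                discovered.append(related)
                frontier.append((related, hops + 1))
    return discovered
```

Starting from `malware:sample-a`, two hops surface the C2 domain and its hosting IP; a third hop would surface the co-hosted domain. In practice the "data source" is a live query against passive DNS, WHOIS, or malware repositories rather than a static dictionary.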
Working closely with these analysts, I saw the power of this manual tradecraft. It was systematic in nature, driven by cause-and-effect relationships and reasoning, but it was also inherently limited by human capacity. It was clear that scaling this work required more than better tools; it required a scientific transformation. The question that drove me was this: how do we formalize what analysts are doing—turning an intuitive process into a rigorous, repeatable, and scalable framework?
By 2010, I was exploring semantic technologies as the answer. These technologies—ontologies, knowledge graphs, and reasoning engines—offered the potential to represent relationships in a way that machines could interpret and reason about. I realized this wasn’t just an engineering challenge; it was an opportunity to elevate cybersecurity analysis into a science. Semantic technologies could help encode knowledge formally, automate analytic pivoting, and build dynamic knowledge graphs that captured the evolving threat landscape. But it wasn’t about replacing human judgment. It was about extending analysts’ abilities and creating a foundation for systematic, evidence-based cybersecurity.
Building the Scientific Framework
The research I conducted culminated in a system designed to explore and test what was possible with semantic technologies. The goal wasn't to create a ready-made solution but to prove that we could automate key elements of the analyst's work while adhering to scientific principles. The system focused on three core objectives.
Laying the Foundations for Cybersecurity Science
This research wasn't just about solving technical problems; it was about advancing cybersecurity as a scientific discipline. Semantic technologies allowed us to apply core scientific principles to cyber threat analysis.
The result was a research system that didn’t just assist analysts but laid the groundwork for a more systematic, scientific approach to cybersecurity. It turned the tradecraft of pivoting and graph-building into a rigorous, scalable process that adhered to the principles of measurement, testability, and repeatability.
Looking Forward: Building on the Foundations
This research was just the beginning. It proved that semantic technologies could transform cyber threat analysis, but it also raised new questions and opportunities for future work. How could these knowledge graphs be integrated into operational environments? How could reasoning engines be extended to handle even more complex inferences? What other aspects of cybersecurity—like risk assessment or attack prediction—could be formalized using these scientific principles?
As I continue to explore these questions, this story will evolve. Each new piece of research builds on the same foundation: a commitment to turning cybersecurity into a science, driven by formal methods, scalable systems, and a focus on empowering analysts to do their best work.
The Next Phase: Semantic Technologies in Integrated Adaptive Cyber Defense (IACD)
Building on the foundations of my earlier research into semantic technologies and knowledge graphs, the late 2010s presented an opportunity to further expand these ideas in collaboration with the NSA and DHS (CISA) as part of the Integrated Adaptive Cyber Defense (IACD) initiative at the Johns Hopkins University Applied Physics Laboratory (JHU-APL). While my earlier work demonstrated how semantic technologies could automate analytic pivoting and knowledge graph construction, the focus in IACD shifted to operationalizing these concepts to enable Security Orchestration, Automation, and Response (SOAR) at scale.
This phase of research emphasized the creation of cognitive playbooks—structured workflows that encoded human domain expertise, decision-making processes, and reasoning into machine-executable formats. These playbooks weren’t just scripts or static procedures; they leveraged semantic technologies to capture the context and logic behind expert decisions. This allowed automated systems to act in ways that reflected the reasoning of skilled analysts, while also adapting dynamically to new situations.
What made this work significant was its ability to integrate outputs from the rapidly emerging fields of machine learning and data science analytics, which were gaining prominence during the 2010s. Data science models could provide powerful insights—such as identifying anomalous behaviors or predicting potential threats—but their outputs often lacked context or explainability, making them challenging to operationalize. Semantic technologies bridged this gap by embedding these analytical outputs into cognitive playbooks, where they could be validated, explained, and acted upon in a structured way.
In this research, semantic reasoning allowed for workflows that combined sensing (e.g., security logs and events), sense-making (validating hypotheses and explaining activities using contextual knowledge), and decision-making (driving automated responses based on human-expert-captured logic). The result was a system that could not only orchestrate complex defense actions but also explain its reasoning and adapt to new inputs in real time.
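The shape of a cognitive playbook can be sketched as rules that pair a condition over contextual knowledge (sense-making) with an automated action and a human-readable rationale (decision-making). The event fields, thresholds, and action names below are invented for illustration; they are not the IACD playbook format.

```python
# Each rule: (rationale the system can surface, condition, action).
# The rationale is what lets the automation *explain* its decision.
PLAYBOOK = [
    ("Destination is known-bad infrastructure; contain the host",
     lambda event, kg: event.get("dest") in kg.get("known_bad_ips", set()),
     "isolate_host"),
    ("High anomaly score without corroborating context; ask a human",
     lambda event, kg: event.get("anomaly_score", 0) > 0.9,
     "escalate_to_analyst"),
]

def run_playbook(event, knowledge):
    """Return the first matching action plus the rationale behind it."""
    for rationale, condition, action in PLAYBOOK:
        if condition(event, knowledge):
            return {"action": action, "explanation": rationale}
    return {"action": "log_only", "explanation": "No rule matched"}
```

Note how a raw machine-learning output (the `anomaly_score`) is only one input among several: the playbook decides what it means in context, and every decision carries an explanation, which is the property that made these workflows operationally trustworthy.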
From SOAR to AI-Augmented Cyber Defense
This focus on encoding human expertise into machine-executable systems paved the way for the next chapter in this story: the integration of AI agents powered by large language models (LLMs) in the early 2020s. While the IACD research demonstrated the power of semantic technologies to formalize workflows and integrate analytics, recent advancements in LLMs have introduced new possibilities for unstructured reasoning and dynamic knowledge synthesis.
LLMs, such as GPT-based models, bring the ability to interpret, summarize, and reason about unstructured data—such as incident reports, threat intelligence feeds, and open-source information. However, their lack of grounding in structured, formalized knowledge can sometimes lead to inaccuracies or hallucinations. This is where the integration of LLMs with semantic technologies has proven transformative. Frameworks like GraphRAG (Graph Retrieval-Augmented Generation) combine the unstructured reasoning capabilities of LLMs with the structured, contextual power of knowledge graphs, resulting in systems that are both flexible and scientifically grounded.
The principles established during my earlier research—formalization, inference, and scalability—are at the heart of this new hybrid approach. Knowledge graphs serve as the backbone for grounding LLM outputs, providing a repository of verified relationships and facts that can be used to validate or augment the reasoning of AI agents. For example, while an LLM might synthesize potential insights from a corpus of unstructured data, the knowledge graph can ensure those insights align with known adversary infrastructure or past attack patterns.
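The grounding step can be sketched very simply: before acting on an LLM-generated claim, check it against the verified triples in the knowledge graph. The triples and claim format here are invented for illustration; a production GraphRAG system would use a triple store or graph database and richer retrieval than exact matching.

```python
# Verified (subject, predicate, object) relationships; in practice these
# come from curated threat intelligence, not a hard-coded set.
KNOWLEDGE_GRAPH = {
    ("APT-Example", "uses_infrastructure", "evil.example"),
    ("APT-Example", "uses_malware", "sample-a"),
}

def ground_claim(claim):
    """Accept an LLM-synthesized claim only if the graph supports it."""
    triple = (claim["subject"], claim["predicate"], claim["object"])
    if triple in KNOWLEDGE_GRAPH:
        return {"claim": triple, "status": "supported"}
    # Unsupported claims are flagged rather than discarded: they may be
    # hallucinations, or genuinely new knowledge awaiting verification.
    return {"claim": triple, "status": "unverified"}
```

The same check works in the other direction: claims that survive verification can be written back into the graph, which is the continuous-learning loop described below.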
This integration addresses several critical challenges. It enables dynamic, context-aware reasoning that blends the flexibility of LLMs with the rigor of semantic systems. It also creates opportunities for continuous learning, as LLMs process new data and update the knowledge graph with emerging relationships and entities.
A Cohesive Scientific Storyline
The transition from automating analytic pivoting to developing SOAR systems and integrating AI agents tells a unified story of advancing cybersecurity as a science. Each phase builds on the last.
This progression is a reflection of the larger journey in cybersecurity science: from manual, intuition-driven tradecraft to systems that are formalized, explainable, and capable of dynamic reasoning. Each step represents a deepening of our ability to manage complexity, scale defenses, and stay ahead of an ever-evolving threat landscape.
These developments are not the end of the story but rather a foundation for what comes next. With the convergence of semantic technologies, machine learning, and AI agents, the tools at our disposal are more powerful than ever. The challenge now lies in continuing to push these systems forward, ensuring that they remain grounded in scientific principles while empowering analysts and defenders to address the increasingly sophisticated threats of the future.
Now you understand the journey that I've been on since the late 2000s and why I've become a bit obsessed with semantic technologies and advancing cybersecurity science.