Cybersecurity Observability Strategies for the Modern CISO
In today’s rapidly evolving threat landscape, the ability to see everything happening within your systems is no longer a luxury—it’s a necessity. With over two decades of experience in cybersecurity and Security Operations Centres (SOCs), I’ve witnessed firsthand how observability has transformed from a mere advantage to a vital defence strategy. Organisations must shift from reactive monitoring to proactive observability to stay ahead of sophisticated cyber threats.
This paper will explore the importance of cybersecurity observability, highlight key challenges, and offer actionable strategies for leveraging the latest techniques and technologies to enhance security posture.
The Shift from Monitoring to Observability
Monitoring vs. Observability: What’s the Difference?
Traditional monitoring involves tracking predefined metrics and logs from specific sources to alert on known issues or breaches. While this remains crucial, it is reactive and often limited in scope. Observability takes this concept much further. It doesn’t just inform you of issues; it empowers your team to understand why incidents occur, predict potential failures, and quickly take corrective action—transforming reactive responses into proactive defence strategies.
The Importance of Full-Stack Visibility
Cybersecurity observability focuses on obtaining full-stack visibility—covering networks, endpoints, applications, and cloud environments. It involves real-time analysis of logs, events, and traces across all layers to detect early threats, even those evading traditional monitoring systems. Observability allows SOC teams to track anomalies and unknowns within a system by continuously collecting and analysing data from multiple sources.
Key Pillars of Cybersecurity Observability:
1. Metrics: Quantifiable data points that indicate performance, usage, and security trends.
2. Logs: Comprehensive record of system events, crucial for threat detection and forensic analysis.
3. Traces: Information that maps the execution paths of requests across systems, essential for understanding and tracing complex attacks.
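As a rough illustration (the record shapes and field names below are my own assumptions, not any standard schema), the three pillars can be modelled as simple records that a SOC pipeline correlates via a shared trace identifier:

```python
from dataclasses import dataclass, field
from typing import Optional
import time

# Hypothetical minimal records for the three pillars; the field names are
# illustrative, not a vendor or standards schema.
@dataclass
class Metric:
    name: str
    value: float
    timestamp: float = field(default_factory=time.time)

@dataclass
class LogEvent:
    message: str
    severity: str
    trace_id: Optional[str] = None  # links a log line to a request trace

@dataclass
class Span:
    trace_id: str
    operation: str
    duration_ms: float

def logs_for_trace(trace_id, logs):
    """Pull every log line tied to one request trace, so an analyst can
    follow a single suspicious request across layers."""
    return [entry for entry in logs if entry.trace_id == trace_id]

logs = [LogEvent("login failed for admin", "WARN", trace_id="abc123"),
        LogEvent("disk usage at 70%", "INFO")]
spans = [Span("abc123", "POST /login", duration_ms=412.0)]
related = logs_for_trace("abc123", logs)
print(len(related))  # 1
```

Joining signals on a shared identifier like this is what turns three separate data streams into one investigable narrative.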
The Challenge: Increasing Log Sources and Data Overload
The Explosion of Data
In the era of cloud computing, containerisation, IoT, and distributed architectures, the number of log sources has increased exponentially. Modern security teams are faced with a tsunami of data from:
• Network Devices (routers, firewalls, IDS/IPS)
• Endpoints (laptops, mobile devices)
• Cloud Providers (AWS, Azure, Google Cloud)
• Applications (web servers, databases, microservices)
• IoT Devices (connected sensors, industrial systems)
Each of these sources generates a massive amount of data daily. Collecting only a subset of this data can create information blind spots, where critical threats go unnoticed.
The Integration Dilemma
Integration is a cornerstone of effective data observability, ensuring seamless collection, processing, and data analysis from diverse sources. However, integrating disparate systems and technologies can pose significant challenges. For instance, data formats may vary, real-time collection can be complex in distributed environments, and processing large volumes of data for timely analysis can be demanding.
Data Overload vs. Data Quality
More logs don’t always mean better security. Many SOC teams struggle under the weight of redundant or low-quality data, leading to alert fatigue. This can cause serious threats to be overlooked, as was the case in notable breaches where early warning signs were buried under unnecessary data.
To counter this, observability emphasises intelligent data collection, where both volume and quality are optimised to capture meaningful events without overwhelming security personnel.
The Data Retention Paradox
Adhering to industry-specific regulations that dictate data retention periods is crucial. However, managing the costs associated with storing large volumes of data is also essential. Effective data lifecycle management processes, including proper deletion and archiving, are necessary to balance cost, access control, and compliance with regulations.
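A simple sketch of tiered lifecycle management might look like the following. The thresholds here (90 days hot, 365 days archived) are assumptions for illustration, not regulatory guidance; real retention windows depend on your industry’s regulations.

```python
from datetime import datetime, timedelta, timezone

# Illustrative tier boundaries; actual values are dictated by compliance
# requirements and cost constraints, not by this sketch.
HOT_DAYS = 90
ARCHIVE_DAYS = 365

def lifecycle_action(log_timestamp: datetime, now: datetime) -> str:
    """Decide whether a log stays in hot storage, moves to cheap archive
    storage, or is deleted, based purely on its age."""
    age = now - log_timestamp
    if age <= timedelta(days=HOT_DAYS):
        return "hot"        # fast, searchable storage for active investigation
    if age <= timedelta(days=ARCHIVE_DAYS):
        return "archive"    # low-cost cold storage to satisfy retention rules
    return "delete"         # past the retention window

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(lifecycle_action(datetime(2024, 5, 1, tzinfo=timezone.utc), now))  # hot
print(lifecycle_action(datetime(2023, 9, 1, tzinfo=timezone.utc), now))  # archive
print(lifecycle_action(datetime(2022, 1, 1, tzinfo=timezone.utc), now))  # delete
```

Encoding the policy as code also makes it auditable, which helps when demonstrating compliance.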
Operational Challenges in Data Distribution
Distributing observability data securely across teams such as IT, security, DevOps, and compliance presents significant operational challenges. The complexity of filtering and reusing data can also lead to serious operational risks. Insider threats, whether accidental or malicious, become a real concern when access control is not meticulously managed, especially when teams handle vast amounts of sensitive data.
Actionable Strategies for Enhancing Cybersecurity Observability
Collect Everything: Leveraging Cheap Storage and Elastic Infrastructure
In the past, storage limitations meant that organisations had to make trade-offs, collecting only the most critical logs. Today, advances in cloud storage and elastic technologies make it economically feasible to collect and store everything. Solutions like Amazon S3, Azure Blob Storage, and Google Cloud Storage provide scalable, low-cost options for handling massive volumes of data without breaking the budget.
Benefits of Collecting Everything:
• Ensures no log, metric, or event is missed.
• Enables teams to retain long-term data for retrospective threat hunting and auditing.
• Allows correlation across different layers and timelines, detecting sophisticated multi-step attacks.
Optimise Data Ingestion: Use Advanced Analytics and AI
With large volumes of data comes the challenge of efficiently processing it. Traditional rule-based systems are no longer sufficient for identifying patterns in high-dimensional datasets. Modern SOCs should integrate machine learning (ML) and AI-driven analytics to automate the identification of anomalous behaviour and zero-day threats.
Key Technologies:
• SIEM Platforms with AI Capabilities: Integrating tools like Splunk, QRadar, or Microsoft Sentinel with machine learning modules helps automate anomaly detection.
• User and Entity Behaviour Analytics (UEBA): ML-driven analysis of user behaviour to detect insider threats and advanced persistent threats (APTs).
• XDR Solutions: Extended Detection and Response platforms combine data from multiple security tools for deeper cross-domain insights.
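To make the behavioural-baselining idea concrete, here is a deliberately simplified stand-in for what UEBA platforms do: flag a user whose daily failed-login count deviates sharply from their own history. Real products use far richer models; this z-score check (the threshold of 3 is an assumption) only illustrates the principle.

```python
import statistics

def is_anomalous(history, today, threshold=3.0):
    """Flag today's count if it sits more than `threshold` standard
    deviations above the user's own historical baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    z_score = (today - mean) / stdev
    return z_score > threshold

# Hypothetical baseline: failed logins per day over the past week.
baseline = [2, 3, 1, 2, 4, 2, 3]
print(is_anomalous(baseline, 3))   # an ordinary day -> False
print(is_anomalous(baseline, 40))  # a burst of failures -> True
```

The point is that the alert is relative to each entity’s own behaviour, which is what lets UEBA surface insider threats that absolute thresholds miss.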
Embrace a Common Standard
Cybersecurity observability thrives when data collection is standardised across systems. Vendor-neutral initiatives such as OpenTelemetry provide a unified set of APIs, libraries, and agents for collecting telemetry data (metrics, logs, and traces), and are supported both by open-source tools like Jaeger, Zipkin, and Kiali and by commercial platforms such as Dynatrace, New Relic, and Datadog. This allows for seamless integration across diverse environments without the overhead of maintaining multiple proprietary agents.
Benefits of this approach:
• Provides uniform data collection across environments, whether on-premises, cloud, or hybrid.
• Works across a range of languages and platforms, ensuring compatibility with modern applications.
• Frees organisations from vendor lock-in, enabling them to switch or integrate observability tools as needed.
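The practical payoff of a common standard is that every signal travels in one envelope. The sketch below is loosely inspired by that idea but is not the OpenTelemetry API or wire format; the field names are my own assumptions.

```python
import json

def emit(signal_type: str, body, attributes: dict) -> str:
    """Serialise any signal (metric, log, or trace span) into one common
    JSON envelope, so every backend ingests the same shape regardless
    of which layer produced the data."""
    envelope = {
        "type": signal_type,       # "metric" | "log" | "span"
        "body": body,
        "attributes": attributes,  # shared context: service, host, user...
    }
    return json.dumps(envelope)

record = emit("log", "failed login for admin",
              {"service": "auth-api", "host": "web-01"})
print(json.loads(record)["type"])  # log
```

Because the envelope is tool-agnostic, routing it to a different observability backend is a configuration change rather than a re-instrumentation project.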
Data Reduction Techniques: Filtering and Enrichment
While "collect everything" is an ideal, it’s equally important to apply data reduction techniques to improve the quality of data ingested. This involves filtering out noise while enriching relevant data to enhance context.
Key Techniques:
• Log Filtering: Use preprocessing pipelines to filter out irrelevant data (e.g., routine system calls) before it hits your SIEM.
• Log Enrichment: Append metadata (e.g., geo-location, device type, or user context) to logs to give analysts more context when investigating threats.
• Data Normalisation: Standardise log formats from diverse sources to improve the correlation of events across your infrastructure.
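The three techniques above can be chained into one preprocessing pass, sketched below. The noise patterns and asset inventory are invented for illustration; a production pipeline would draw on real asset, identity, and geo databases.

```python
# Hypothetical noise markers and asset inventory for the sketch.
NOISE_PATTERNS = ("health-check", "heartbeat")

ASSET_CONTEXT = {
    "web-01": {"owner": "platform-team", "criticality": "high"},
}

def preprocess(events):
    for event in events:
        # 1. Filtering: drop routine noise before it reaches the SIEM.
        if any(pattern in event["message"] for pattern in NOISE_PATTERNS):
            continue
        # 2. Enrichment: attach asset context so analysts see ownership
        #    and criticality without a separate lookup.
        event["asset"] = ASSET_CONTEXT.get(event["host"], {})
        # 3. Normalisation: standardise severity casing across sources.
        event["severity"] = event["severity"].upper()
        yield event

raw = [
    {"host": "web-01", "severity": "warn", "message": "failed login"},
    {"host": "web-01", "severity": "info", "message": "heartbeat ok"},
]
clean = list(preprocess(raw))
print(len(clean), clean[0]["severity"])  # 1 WARN
```

The heartbeat line never reaches storage, while the security-relevant event arrives normalised and already carrying the context an analyst needs.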
I've discovered Cribl, a powerful data pipeline platform that provides a flexible and scalable way to collect, process, and route telemetry data. It allows organizations to consolidate data from various sources, reduce data volume, enrich data with context, and route data to different destinations. This helps improve data management efficiency, reduce costs, and enhance security.
Cybersecurity Observability is Key to Proactive Defence
As cyber threats grow more sophisticated and frequent, organisations that fail to adopt a proactive, observability-driven approach risk falling behind. Those who invest in full-stack observability and cutting-edge analytics will not only defend against current threats but will future-proof their security posture, staying one step ahead in the ever-evolving digital battlefield.
The future of cybersecurity relies on a foundation of observability—empowering security teams to see everything, understand threats in real time, and respond with speed and precision. Only organisations that can observe, analyse, and act on their data at scale will stay ahead of evolving cyber threats.
About the Author
Mohamed Shawara, ANZ Cyber Security Head at Orange Cyberdefense (OCD), has over two decades of experience in cybersecurity and SOC leadership. He has helped organisations build resilient security infrastructures to defend against advanced threats. Mo is committed to driving innovation in security operations and guiding teams toward proactive, data-driven defences.