Since the early days of security monitoring, visibility of the Logs, Network, and Endpoint has been the staple of most SOC operations. Now it's reasonably commonplace to see other data sources, such as Cloud and Applications, feature in Security Operations requirements. This has turned the 'Nuclear' Triad into more of a 'Nuclear' Quintet. But is it expanding further? It would seem so; in recent years new visibility types such as Deception have come to the fray, extending our ability to collect information not just about action, but also intent. Another good example is Identity, specifically how identity is managed. This creates a wider area for security analysis, focusing more closely on human behaviour, profiling it, and flagging discrepancies. More recently we have gained the ability to look more closely at the hardware 'connected' to our IT and OT assets, as the commercial world sets about making such devices ever more accessible: for example, the Rubber Ducky, a programmable-keystroke USB device, and the O.MG Cable, which requires the attacker to be in WiFi range.

More Data Means Better Visibility and More Security, Right?

It is obvious to state that better visibility provides greater reach to detect more. But in a world of plenty, not many want more (or can deal with more). Historically, security solutions have been built on the premise that "more data means more security". Realistically, most organisations only use a small amount of the data they collect in security tools to actually do security things. So whilst technology vendors would love us to collect more data and pump it into their tools, they are finding that there is push back from buyers. One key area of push back is cost. This is forcing many organisations to look more closely at how they manage data, especially security data. That means considering batch-driven log storage and recall solutions rather than real-time solutions, as these cost less and enable us to store more...just in case.
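The batch-driven "store cheaply, recall when needed" idea can be sketched as a simple two-tier model: recent events stay hot and immediately queryable, while older events are compressed in batches into cheap cold storage and decompressed only when an investigation asks for them. The one-day hot-retention window and the gzip/JSON format below are illustrative assumptions, not a description of any particular product.

```python
import gzip
import json

HOT_RETENTION_SECONDS = 24 * 3600  # assumption: keep one day of logs "hot"

class TieredLogStore:
    """Toy two-tier log store: hot (in memory) and cold (gzip-compressed batches)."""

    def __init__(self):
        self.hot = []    # recent events, queryable in real time
        self.cold = []   # compressed batches: cheaper to keep, slower to recall

    def ingest(self, event: dict):
        """Every event lands in the hot tier first."""
        self.hot.append(event)

    def archive(self, now: float):
        """Batch-move expired hot events into compressed cold storage."""
        expired = [e for e in self.hot if now - e["ts"] > HOT_RETENTION_SECONDS]
        self.hot = [e for e in self.hot if now - e["ts"] <= HOT_RETENTION_SECONDS]
        if expired:
            self.cold.append(gzip.compress(json.dumps(expired).encode()))

    def recall(self, predicate):
        """Search hot events, then decompress cold batches on demand:
        the 'store more...just in case' retrieval path."""
        hits = [e for e in self.hot if predicate(e)]
        for batch in self.cold:
            hits += [e for e in json.loads(gzip.decompress(batch)) if predicate(e)]
        return hits
```

The cost trade-off lives in `recall`: cold data is cheap to hold but every query against it pays a decompression penalty, which is acceptable for occasional investigations but not for minute-by-minute detection.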
Another area is that, as the dimensions of visibility expand, some of the data we may want to include is out of the reach of traditional storage and presents itself in the form of API access. This means leaving the data exactly where it is. Examples of this have been around for a while, specifically in applications that use external threat lists and tools (such as VirusTotal and Shodan), although it has expanded into more commonplace data sources such as Endpoint, because we now commonly use SaaS platforms to manage these capabilities.

It's Harder, More Complex, More Expensive... Now What?

Firstly, there is more data available, and this makes the landscape more complex. Secondly, we need to understand what data we require, and we don't always know what that data contains before we access it; that means better addressing what we are worried about (our business risks) and how those risks are prioritised. Finally, we need to accept that there will be too much of everything and, therefore, we need to prioritise. Some things may fall by the wayside and possibly never get addressed.

Questions to ask yourself:

Can you leave the data where it is and access it in situ? We are beginning to identify this internally within Gartner as a 'Security Control Plane'.

Do you use the data 'minute by minute', or could you store it somewhere else, more cheaply, and recall it when needed?

Are there specific data sources, either in the old Triad or in newer technologies, that will address your needs directly? Can you 'Swap' and 'Drop'?
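The in-situ, API-driven access pattern discussed above can be sketched as a cache-miss lookup: rather than ingesting and storing a threat-intelligence provider's whole dataset, we ask it one question at the moment of need and keep only the answer. The endpoint shape loosely mirrors REST services such as VirusTotal's v3 API, but the base URL, path, and auth header name here are illustrative assumptions; the request is constructed, not sent.

```python
from urllib.request import Request

def build_hash_lookup(api_base: str, file_hash: str, api_key: str) -> Request:
    """Construct (but do not send) a GET request for a single file-hash report.
    The data stays where it is; we pull one answer, not a bulk feed."""
    req = Request(f"{api_base}/files/{file_hash}", method="GET")
    req.add_header("x-apikey", api_key)  # header name is an assumption
    return req

def should_query_in_situ(local_cache: dict, file_hash: str) -> bool:
    """Only reach out to the external service on a cache miss, keeping
    per-query API costs down while avoiding local storage of the dataset."""
    return file_hash not in local_cache
```

In practice the decision function is where the cost argument bites: each remote lookup is metered by the provider, so caching answers locally trades a little storage for fewer billable API calls.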