As we know, big data comes in all shapes and sizes, and managing it always requires a secure approach. One strategy for securing data is to continuously monitor both the network and the data itself. We talked with Lee Vorthman, Manager, Cyber Initiative for NetApp U.S. Public Sector, to get his thoughts. Read on to see what he has to say about big data and the challenges of continuous monitoring.
Last week I attended an event hosted by the Washington Cyber Roundtable, and one of the hot topics we discussed was the need for continuous monitoring. Continuous monitoring is one of those phrases closely associated with cyber security, yet there doesn't seem to be much consensus on how exactly to do it. At NetApp we have been approaching continuous monitoring as a big data problem, and I would like to share with you the challenges we have identified around this topic.
First, let’s take a moment to examine the goal of continuous monitoring. The National Institute of Standards and Technology (NIST) defines the objective of continuous monitoring as “a program to determine if the complete set of security controls within an information system continues to be effective over time in light of the inevitable changes that occur.” At a very high level, continuous monitoring allows decision makers to gather security metrics and make informed decisions on the level of risk they are willing to accept for their organization.
This is a great concept, and our leaders need this information, but the underlying question remains: how do we collect and analyze the massive volumes of data a continuous monitoring program implies? A well-executed continuous monitoring program closely resembles a big data problem in the challenges it creates. Let's take a look at two fundamental challenges presented by continuous monitoring.
The first challenge is to collect and store the large volumes of data generated by an organization. For our discussion, let's assume we have a single 10 Gb/s link connecting our organization to the internet and we want to monitor this link. At first glance this doesn't seem that challenging, but in reality a single saturated 10 Gb/s half-duplex link generates roughly 100 TB of traffic per day. This presents a challenge not only for current network capture infrastructure, but also stresses the storage capacity of most organizations. The problem is compounded as we add multiple 10 Gb/s links or as organizations increase retention time. In recent conversations with employees across the government, we have identified retention requirements ranging from one day up to one year, and at the upper end that can represent as much as 36.5 PB of data!
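The arithmetic behind those figures is straightforward back-of-the-envelope math. The sketch below reproduces it in Python; the function name and parameters are my own for illustration, and it assumes the link runs at full utilization:

```python
# Back-of-the-envelope storage estimate for full-packet capture
# on a single monitored link. The function and its parameters are
# illustrative, not part of any capture product.

def capture_storage_tb(link_gbps: float, utilization: float = 1.0,
                       hours: float = 24.0) -> float:
    """Terabytes of raw capture generated over `hours` at the given
    link speed (gigabits per second) and average utilization."""
    gigabytes_per_sec = link_gbps / 8 * utilization  # bits -> bytes
    return gigabytes_per_sec * 3600 * hours / 1000   # GB -> TB

daily_tb = capture_storage_tb(10)   # one saturated 10 Gb/s link
yearly_pb = daily_tb * 365 / 1000   # one-year retention, in petabytes

print(f"{daily_tb:.0f} TB/day, {yearly_pb:.1f} PB/year")
# -> 108 TB/day, 39.4 PB/year
```

The exact figure is about 108 TB per day; rounding that to 100 TB, as above, gives the 36.5 PB one-year retention estimate.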
The second challenge with continuous monitoring is how to analyze this increasing volume of data. It is physically impossible for an individual to look at 100TB of data in a day. Analysts can no longer afford to manually sift through logs, and therefore they need a central tool that fuses all of the information sources together. The only way a continuous monitoring program will succeed is with automation. We need to automatically collect, process and flag information so our analysts can turn it into actionable intelligence. I am excited about the direction current security information and event management (SIEM) technologies and cyber analytics tools are taking as they begin to address the automation and integration challenges of continuous monitoring.
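To make the collect/process/flag idea concrete, here is a minimal sketch of an automated rule that surfaces one kind of anomaly for an analyst to review. The record fields and the failed-login threshold are hypothetical, not any particular SIEM's schema:

```python
# Minimal sketch of an automated process-and-flag step, assuming
# parsed log records arrive as dicts. The "src_ip"/"event" field
# names and the failure threshold are illustrative assumptions.
from collections import Counter

def flag_suspicious(records, max_failures=5):
    """Flag source hosts with an unusually high count of failed logins."""
    failures = Counter(r["src_ip"] for r in records
                       if r.get("event") == "login_failure")
    return [ip for ip, count in failures.items() if count > max_failures]

logs = ([{"src_ip": "10.0.0.7", "event": "login_failure"}] * 8 +
        [{"src_ip": "10.0.0.9", "event": "login_success"}])
print(flag_suspicious(logs))  # -> ['10.0.0.7']
```

A real deployment would run hundreds of such rules continuously across fused data sources; the point is that the machine does the sifting so the analyst only sees what was flagged.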
At the end of the day, decision makers need metrics to determine the level of risk they are willing to accept. How can we detect which machines are compromised? What data is critical and shouldn't leave the organization's network? Are the security policies currently in place actually protecting my organization? These are all questions that we can begin to answer with a well-designed continuous monitoring program. As this problem matures, I look forward to hearing about the challenges you are facing and how you were successful in solving them.