When it comes to security, government agencies can never be too careful. From SolarWinds to Microsoft Exchange, federal organizations have faced a year of breaches that has made the need to rethink their security posture clear, and they're turning to AI for assistance.
How are they doing it? That’s what Iron Bow Technologies’ Jim Smid and Rob Chee discuss on Machine Momentum, a podcast that aims to demystify artificial intelligence and machine learning for government leaders.
How do AI and ML pertain to federal security?
Long story short, AI and ML are implemented to make your life easier. You can apply them to security by monitoring and analyzing your endpoints, collecting that data and giving it to analysts not as a jumble of raw data but as easily digestible reports. Agencies can then glean actionable insight from this data to minimize or prevent cyberattacks.
Machine learning can gather the plethora of data inputs from various security appliances and then pull out relevant information. It essentially finds your needle in a data haystack so agencies can get a head start on investigating malicious attacks and move more quickly to action.
For government CISOs, security is always top of mind, but when a threat is detected or a breach happens, fill us in on how AI can help prevent cyberattacks and what agencies should be asking their vendors and partners.
When an agency implements AI, it feeds its current data in (based on a particular user, their role or their servers) and uses that as a baseline foundation, a source of truth. When a breach or attack happens, the agency feeds the new sources into the ML platform again, and ML identifies the differences between the data sets. Those differences are the anomalies in the data: the changes caused by the attack. This saves enormous amounts of time and resources because ML does all that analysis in the background, in a fraction of the time it would take an analyst or even a team of analysts.
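The baseline-and-compare workflow described above can be sketched as a simple statistical anomaly check. This is an illustrative toy, not any vendor's implementation; the metric names (`logins`, `mb_uploaded`), sample values and the three-sigma threshold are all assumptions chosen for the example.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a per-metric mean and standard deviation from known-good activity."""
    metrics = samples[0].keys()
    return {m: (mean(s[m] for s in samples), stdev(s[m] for s in samples))
            for m in metrics}

def find_anomalies(baseline, observation, threshold=3.0):
    """Flag metrics that deviate more than `threshold` standard deviations from baseline."""
    flagged = []
    for metric, value in observation.items():
        mu, sigma = baseline[metric]
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            flagged.append(metric)
    return flagged

# Baseline: typical daily activity for one user or server (illustrative numbers).
normal = [{"logins": 5, "mb_uploaded": 20}, {"logins": 7, "mb_uploaded": 25},
          {"logins": 6, "mb_uploaded": 22}, {"logins": 5, "mb_uploaded": 19}]
baseline = build_baseline(normal)

# New data after a suspected incident: the large upload stands out as the anomaly.
print(find_anomalies(baseline, {"logins": 6, "mb_uploaded": 900}))  # ['mb_uploaded']
```

Production ML platforms use far richer models, but the shape is the same: learn what normal looks like, then surface only the deviations for the analyst.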
What are the steps organizations can take to make sure their AI is doing the right thing?
That’s a great question, and one that has been around since the beginning of AI and ML application. As agencies start to implement AI, it’s best to take a crawl, walk, run approach. From an operational standpoint, invest in education on how the tool can best be used. Start small with a few controllable data sets and, once you can verify the technology is running the algorithms you want accurately, scale up to larger environments and more data sources. In the beginning, your analysts will still have to manually validate the findings from the AI and ML applications, but it is worth it in the end because of the time and resources the technology saves by performing endlessly repetitive tasks and iterative processes in a fraction of the time it takes a human.
Another technology that assists AI is a new trend we’re seeing called Extended Detection and Response (XDR). It takes in all these different inputs (reporting, API data and analytics) and complements your AI platform to give you the best of both worlds: the raw data at the start and actionable insights at the end.
With everything we’ve gone through this last year, a lot of agencies are still in a hybrid or fully remote environment. Does that change how AI can be used within cybersecurity?
This is actually a great opportunity for remote users working in various locations to take advantage of the benefits and capabilities of AI and ML. Combined with XDR tools, agencies can gather data coming in from any location and build the solid foundation of a zero trust architecture, paving the way to a higher level of security. With an ML solution in place, agencies can much more quickly identify which computer is causing a problem, quarantine that machine from the rest of the network, perform a root cause analysis, identify the look and feel of the attack and, within seconds, determine whether the same attack is taking place on any other endpoint connected to the organization.
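Once the “look and feel” of an attack has been characterized as a set of indicators, checking every other endpoint reduces to a fast matching pass over their telemetry. The sketch below is a simplified illustration; the indicator fields, hostnames, hashes and IPs are hypothetical placeholders, not real threat data.

```python
# Hypothetical indicators extracted from the compromised machine's root-cause analysis.
indicators = {"bad_hashes": {"a1b2c3"}, "bad_ips": {"203.0.113.9"}}

# Simplified telemetry reported by each endpoint (illustrative data only).
endpoints = {
    "host-01": {"file_hashes": {"ffee01", "a1b2c3"}, "connections": {"198.51.100.2"}},
    "host-02": {"file_hashes": {"ffee02"}, "connections": {"203.0.113.9"}},
    "host-03": {"file_hashes": {"ffee03"}, "connections": {"198.51.100.7"}},
}

def match_indicators(endpoints, indicators):
    """Return the endpoints whose telemetry matches any known indicator of the attack."""
    hits = []
    for name, telemetry in endpoints.items():
        if (telemetry["file_hashes"] & indicators["bad_hashes"]
                or telemetry["connections"] & indicators["bad_ips"]):
            hits.append(name)
    return sorted(hits)

# Candidates for quarantine: host-01 (hash match) and host-02 (connection match).
print(match_indicators(endpoints, indicators))
```

Because the matching is just set intersection over already-collected XDR telemetry, it can run across an entire fleet in seconds, which is what makes the organization-wide sweep described above practical.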
How does governance play a role in how you implement AI and is there anything agencies need to look out for?
AI aids data governance through visibility into your organization’s information. AI provides access to raw data but also the ability to identify who is accessing different data resources, when they’re accessing them and whether they are authorized to do so. It’s almost as if AI is a modern data governance tool that creates your policy but also provides the accountability to ensure the enacted policy is actually being enforced properly. This also applies from a compliance perspective, because it provides the reports and the actual information needed to meet compliance control requirements. When you think about it, that really is the spirit of compliance: making sure everything that should be in your security policy is there, so all you have to do is provide validation of your best practices and your security policies.
What are a couple takeaways for agencies moving forward and putting intelligence into their security posture?
The first thing to remember is that the use of AI is still in the crawl phase of the crawl, walk, run approach. Much of the way machine learning is being introduced is in its very early days, and this is a constantly evolving space. Cybersecurity in general is rapidly evolving, but AI is evolving even faster, so flexibility is crucial in applying ML as a cybersecurity tool.
The second is that AI tools moving forward will inevitably be cloud-based and therefore may not be immediately available to agencies like the Department of Defense, but may be implementable in a civilian agency or in organizations that do not require FedRAMP certification.
Third, don’t look at ML as an excuse to throw out your existing security appliances and purchase new, “ML-ready” ones. Instead, take a holistic view: look at all your security appliances, especially those that are API-driven or that can make configuration changes as well as provide reporting and data via APIs. That will be a critical factor when you feed a lot of that data into a platform that incorporates machine learning.
This is the second podcast in a three-part series for Machine Momentum. The first can be found here. For more information on how Iron Bow is utilizing AI to assist DoD organizations, click here.