
Big Data, Little Data, Secure Data and Destruction

TechSource Editor

April 26, 2012  |  Big Data & Analytics  •  Cloud  •  Data Center


Data. Lots of it, and everywhere. From massive data warehouses to a plethora of flash media, we are surrounded by incredibly huge amounts of data thanks to the consistently decreasing cost of storage. Whether it's the DoD or a Fortune 100 company, somewhere within the infrastructure is a repository holding petabytes, if not zettabytes, of data in some state of digital decay. Let's not even begin discussing the amount of information held in various "public clouds."

Within the information security domain, we've begun utilizing various business intelligence (BI) tools to visualize, analyze and, in general, begin dealing with the "Big Data" challenges currently facing our federal and corporate information security communities. While Big Data, digital decay and the hazards of data retention are interesting discussions, this post isn't a piece on analytics. This is about the data destruction issues in the modern age of solid-state media. Furthermore, if you believe your organization doesn't have solid-state drives (SSDs) and flash media in production, you may want to speak with your SAN admin or virtualization guru to confirm that assumption.

As the resident security guy, lately I've been having a number of conversations with customers about data destruction. Before I go much further, let me say that we primarily work with federal government customers and have deep ties to the usual suspects whom you might assume have more than a passing interest in information security. Knowing this, we can also state with some confidence that many of our discussions have relevance to national security.

So let’s take a look at data destruction in the old days and today, and discuss some best practices and tips.

Data Destruction of Yesterday 

In the old days, data destruction was “easy.” I grew up during the days of boot disk destruction where we would create a DOS or *NIX boot disk, load the requisite kernel data destruction apps (such as DBAN), make certain the floppy had a bootable sector and off we went.

For magnetic media such as hard disks, the standards were consistent: overwrite the drive a number of times, execute the built-in secure erase command and destroy or degauss the drive.

Magnetic media lends itself to a particular method of data destruction: as practitioners, we would use a disk-scrubbing utility (DBAN, srm, shred, PGP) to wipe either a file or the entire disk via the Gutmann method, or something along the lines of Air Force System Security Instruction (AFSSI) 5020.

Below is a screenshot from the PGP 10.x client's file-shredding capability on Mac OS X.

There were a number of clear-cut options of how to execute a data destruction process:

A) Single file overwrite with an option to overwrite with random data 1-35 times

B) Whole disk overwrite with an option to overwrite with random data 1-35 times

To quote Gutmann’s original paper, “A good scrubbing with random data will do about as well as can be expected.”
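The overwrite approach described above is straightforward to sketch. Below is a minimal, illustrative Python version of a multi-pass random overwrite of a single file; it is not a substitute for purpose-built tools like shred or DBAN, which also handle whole-device access, filesystem quirks, and hidden device areas.

```python
import os
import secrets

def overwrite_file(path: str, passes: int = 3, chunk_size: int = 64 * 1024) -> None:
    """Overwrite a file in place with random data, `passes` times.

    A toy sketch of the overwrite method discussed above; real tools
    (shred, srm, DBAN) do considerably more work than this.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk_size, remaining)
                f.write(secrets.token_bytes(n))
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # push this pass out of the page cache to disk
```

Note the `fsync` after each pass: without it, successive passes may simply overwrite each other in the OS page cache rather than ever reaching the platters.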

Lastly, there was arguably at least one other effective method of data sanitization:

C) Degaussing via some specialized hardware

Degaussing destroys the disk itself: it magnetically destroys the media as well as the drive motor. How? By applying a multi-kilogauss field co-planar to the platters and a multi-kilogauss perpendicular alternating field. The point is, you put a hard drive in or on the device, it generates a magnetic field, and it ruins the media and the drive heads.

Data Destruction in the Present Day

Presently, we have a plethora of cheap, high-density disks that happily respond to the usual ATA and SCSI destruction commands.

We tend to use these plentiful disks as backend storage; for any system that requires rapid response or quick boot times, we use SSDs, which don't require any moving mechanical components. And with that, the fun begins.

When presented with some data destruction questions from one of our more interesting clients, I was forced to dig into whitepaper land. Short of incinerating a USB memory stick, I had never attempted data destruction on solid-state media, and it is most certainly a horse of a different color.

Also worthy of note: all of the above data destruction methods (Gutmann, AFSSI, etc.) are irrelevant, as SSDs do not play by any of the old rules. Per the whitepaper by Wei et al., the above methods are either ineffectual, falsely effective (reporting successful destruction while simple, full recovery remains possible) or a waste of energy.

According to the whitepaper:

“None of these solutions are satisfactory: Our data shows that overwriting is ineffective and that the ‘erase procedures provided by the manufacturer’ may not work properly in all cases.”
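The root cause is how SSD controllers handle writes. NAND flash pages cannot be rewritten in place, so the controller writes updated data to a fresh page and simply remaps the logical address, leaving the stale page intact until garbage collection gets around to it. The drastically simplified, hypothetical flash translation layer below illustrates (but does not model) why a host-level overwrite can miss the original data:

```python
class ToyFTL:
    """A drastically simplified flash translation layer (illustrative only).

    NAND pages cannot be rewritten in place, so the controller programs
    updates into a fresh page and remaps the logical block address (LBA).
    The stale page keeps its contents until it is garbage-collected.
    """

    def __init__(self, num_pages: int = 8):
        self.flash = [None] * num_pages   # physical pages
        self.map = {}                     # logical block -> physical page
        self.next_free = 0

    def write(self, lba: int, data: bytes) -> None:
        # Always program a fresh page; never overwrite in place.
        page = self.next_free
        self.next_free += 1
        self.flash[page] = data
        self.map[lba] = page              # old page becomes stale, not erased

    def read(self, lba: int) -> bytes:
        return self.flash[self.map[lba]]


ftl = ToyFTL()
ftl.write(0, b"SECRET")     # original data lands in physical page 0
ftl.write(0, b"\x00" * 6)   # the "overwrite" is programmed into page 1
print(ftl.read(0))          # host sees zeros: b'\x00\x00\x00\x00\x00\x00'
print(ftl.flash[0])         # raw flash still holds b'SECRET'
```

Through the normal block interface the drive appears sanitized, yet anyone who reads the raw flash chips directly can still recover the original data.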

So what does work? Scrubbing. For details on how SSDs read/write, please read the summary of University of California’s whitepaper.

“Programming individual pages is possible, so an alternative is to re-program the page to turn all the remaining 1s into 0s.”

And what is the net effect of using an only marginally effective data destruction method on an SSD? The eventual destruction of the disk, or heavily increased read/write latency. In other words, you ruin the disk.
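The scrubbing idea quoted above exploits a physical asymmetry: programming a flash cell can only flip bits from 1 to 0, while only a (block-wide) erase can set them back to 1. Re-programming a stale page with all zeros therefore clears its contents without an erase cycle. A toy, bit-level sketch of that asymmetry:

```python
def program_page(current: bytes, new: bytes) -> bytes:
    """NAND programming can only flip bits 1 -> 0, never 0 -> 1, so the
    stored result is the bitwise AND of the old and new data.
    (Toy model; real devices program whole pages and carry ECC.)"""
    return bytes(c & n for c, n in zip(current, new))

def scrub_page(current: bytes) -> bytes:
    """Scrubbing re-programs the page with all zeros, clearing every
    remaining 1 without requiring an erase cycle."""
    return program_page(current, b"\x00" * len(current))

stale = b"SECRET"
print(scrub_page(stale))  # b'\x00\x00\x00\x00\x00\x00'
```

This is also why scrubbing stresses the media: programming a page outside the normal erase-then-program cycle disturbs neighboring cells, which is where the latency and wear penalties come from.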

“Overall, we conclude that the increased complexity of SSDs relative to hard drives requires that SSDs provide verifiable sanitization operations.”

For the layman, what does this translate to? Overwriting doesn't work.

So based on this perspective, we have a few takeaway points for your organization to keep in mind when considering data destruction:

  • Right now, there are few, if any, controller-based integrated provisions for performing data destruction operations on SSDs
  • Traditional hard disk or file-based destruction methods do not work. Read the source document and make operational decisions based upon these findings
  • Do not decommission SSDs and release them into the public domain. If you can handle the degradation of speed, consider using full-disk encryption (FDE) on all SSD endpoints, devices and drives
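The FDE recommendation above pays off at disposal time via cryptographic erasure: if every byte that ever reached the flash was encrypted, destroying the key renders any stale ciphertext left in remapped pages useless. The sketch below illustrates the idea with a toy XOR stream cipher built from the standard library; it is illustration only, and real FDE uses vetted ciphers such as AES-XTS, not this construction.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from iterated SHA-256 (illustration only; real FDE
    uses vetted ciphers such as AES-XTS, never a homemade stream)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)
ciphertext = xor_crypt(key, b"classified payload")
# Crypto-erase: destroy the key. Stale ciphertext in remapped flash
# pages is now unreadable, even if the pages themselves survive.
key = None
```

The practical upshot: with FDE in place, sanitizing an SSD reduces to securely destroying a 32-byte key instead of chasing every stale page on the device.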

In summary, unless there is a large crucible with which you can melt your SSDs, we recommend reviewing and revising your organization’s data destruction policy with regard to SSDs.

