Three Things to Consider When Budgeting for the Future of Your Data Center
Recently, I wrote about the future of the data center and how the Internet of Things (IoT) and changing patterns of IT consumption are reshaping how we manage data. As these changes take hold, the traditional budgeting cycles for federal IT will have to shift too.
From my experience, the typical federal IT budget cycle looks something like this: in even years, the budget is allocated to a network refresh, and in odd years those dollars go into a server refresh. Every second even year, the focus also includes storage, and every second odd year, desktops. That is an actual cycle I have seen implemented, and it is no longer effective.
With the evolution of technology, the move toward virtualization, and the changing role of the data center, here are a few things I think budget managers should assess as they prepare for this budget season and those to come.
From Virtualization to Hyperconvergence
Most agencies have already begun virtualizing their systems. Infrastructure functions, including servers, desktops, applications, storage, and backup, were all good candidates and in many cases were the first to move to a virtualized environment. Yet, in terms of investment, I believe your network and applications should be the top priorities. Here is why: the network and applications are the backbone of your system. In today's environment, employees access data from anywhere, at any time, and no longer sit behind the traditional firewall model.
So, the system needs to be recreated and redesigned for this new reality. That is why hyperconvergence is so important from an infrastructure perspective. Instead of following the "traditional" IT budget cycle I mentioned earlier, agencies should focus on creating a mixed-mode platform to break down the large vertical infrastructure that has been built and acquired over the past five or ten years. The goal should be to remove hops between the user and the applications held in storage subsystems, eliminating the latency in between. Storage arrays should still be leveraged for user shares and other large, mostly static data sets. Hyperconvergence helps an organization determine which applications and data sets need to be hosted on more expensive storage tiers versus a hyperconverged storage pool or volume. The point is that the hyperconverged model allows organizations to leverage commodity products for their enterprise workloads.
In addition, freeing up funds from the traditional model will allow agencies to reallocate those dollars and begin building out a more robust edge. These two moves, hyperconvergence and a more robust edge, will be the next big transition for agency IT.
Data center requirements for mission success
Regardless of whether your agency is ready to move toward hyperconvergence and a more robust edge now, there are specific data center requirements that must be top of mind to achieve your agency's mission. First and foremost is that your network delivers five-nines (99.999 percent) availability. This can be difficult, but when I think about the mission, whether in the financial sector or the DoD, I think about the imperative of 99.999-percent uptime. The financial sector actually requires 100-percent uptime, and firms pay massive amounts of money for it because the losses they would otherwise incur run into the billions of dollars. For government agencies, 99.999-percent availability is critical.
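To make the five-nines target concrete, here is a minimal sketch that translates "nines" of availability into allowed downtime per year. The availability percentages come from the discussion above; the rest is simple arithmetic.

```python
# Translate availability percentages into maximum annual downtime.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def annual_downtime_minutes(availability_percent: float) -> float:
    """Maximum minutes of downtime per year at a given availability level."""
    return MINUTES_PER_YEAR * (1 - availability_percent / 100)

for label, pct in [("three nines", 99.9), ("four nines", 99.99), ("five nines", 99.999)]:
    print(f"{label} ({pct}%): {annual_downtime_minutes(pct):.2f} minutes/year")
```

At 99.999 percent, that works out to only about five and a quarter minutes of downtime per year, which is why the requirement is so demanding to engineer for.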
A second critical requirement is redundancy, which can cover many things, including backup. Whenever I think of redundancy, the goal is to have fail-safe mechanisms for the feature sets your agency needs to maintain its network domain, so they can be restored quickly when needed.
Last, but not least, is the human capital surrounding that infrastructure. Your agency's IT staff are responsible, and beholden to the government and the citizens it serves, for maintaining technical aptitude on the equipment and the software. No longer can someone say, "I'm a network person." Your IT professionals have to understand the entire ecosystem and must stay up to date on technologies to make sure the agency can meet its objectives. Frankly, that is where some organizations get into trouble. A silo mentality, where there is no communication or synergy between the network administrators and the SAN administrators, for example, is detrimental both from a technology point of view and from the standpoint of maximizing technology investments.
Quantifying ROI to justify IT expenditures
Being able to understand and quantify IT assets, the things you procure, capital expenditures, operational expenditures, the people you are paying, the work they do, and what it is attached to, is part of the IT maturity process. The value is derived from the end state, and once you know that value, you can determine whether it is worth keeping your data center and IT services in-house or whether you should hire a managed services company instead.
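The keep-versus-outsource decision described above can be sketched as a simple annualized-cost comparison. All of the dollar figures below are illustrative assumptions invented for the example, not data from this article, and a real analysis would also weigh risk, mission fit, and service quality.

```python
# Hypothetical sketch of an in-house vs. managed-services cost comparison.
# Every dollar figure here is an assumed, illustrative input.

def annual_cost(capex: float, depreciation_years: int, opex: float, staff: float) -> float:
    """Annualized cost: straight-line depreciated capex plus opex and staffing."""
    return capex / depreciation_years + opex + staff

# Assumed in-house data center: $2M hardware over 5 years, plus opex and staff.
in_house = annual_cost(capex=2_000_000, depreciation_years=5,
                       opex=300_000, staff=900_000)
managed = 1_400_000  # assumed annual managed-services contract price

print(f"In-house: ${in_house:,.0f}/yr  Managed: ${managed:,.0f}/yr")
print("Managed services look cheaper" if managed < in_house else "In-house looks cheaper")
```

With these assumed inputs the managed option comes out ahead, but the point of the exercise is the structure of the comparison: until capex, opex, and staffing are quantified, the question cannot be answered at all.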