3 tips for boosting your FITARA grade
Although the federal government's focus on data centers has pivoted from consolidation to optimization, many agencies are still struggling with the same old challenges. For many, data center utilization is still hampered by siloed applications that require too much dedicated hardware and duplicate servers. In fact, in May 2018, ITDashboard.gov pegged agencies' progress on server utilization and automated monitoring at 1.5 percent, far short of the 65 percent target for fiscal year 2018 set under the Federal Information Technology Acquisition Reform Act (FITARA).
Nevertheless, many agencies continue to improve their data center optimization scores in Congress’ semiannual FITARA scorecards. In the most recent scorecard, several agencies, including the Justice Department, the Treasury Department, NASA, the Nuclear Regulatory Commission and the U.S. Agency for International Development, scored a "B" for data center optimization.
The common thread is the use of more cloud-like approaches to achieve better utilization. Many agency CIOs are using virtual machine "right-sizing" to automatically reclaim idle resources and allocate only what each application actually requires. They were also among the first to embrace the use of cloud environments to consolidate workloads.
Just as some of these agencies led the charge for data center closures, they are now setting the standard for utilization. Here are three steps CIOs can take to emulate their peers’ success, boost their FITARA optimization scores and achieve optimal utilization rates that can deliver long-lasting dividends.
1. Identify physical infrastructure and -- if necessary -- virtualize or move it to the cloud
The Data Center Optimization Initiative calls for agencies to house at least four virtual servers per physical server. There is good reason for this: virtualization can greatly improve data center efficiency. Today, agency CIOs have many options for server virtualization, and the virtualization marketplace is quickly becoming commoditized. The open source community has united behind the Kernel-based Virtual Machine, a virtualization infrastructure for the Linux kernel that turns it into a hypervisor. KVM has been commercialized by vendors and is the underlying hypervisor in most OpenStack deployments.
CIOs should take inventory and carefully assess the various components of their data centers to achieve optimal utilization. They may find that only certain parts of their infrastructures need to be modernized. They can then approach optimization in a piecemeal manner, making the process far less daunting while achieving optimal results.
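As a rough sketch of how such an inventory check might work, the snippet below flags physical hosts that fall short of the DCOI target of at least four virtual servers per physical server. The host names and VM counts are hypothetical, not real agency data.

```python
# Hypothetical inventory check against the DCOI virtualization target.
# Host names and VM counts below are illustrative only.

DCOI_MIN_VMS_PER_HOST = 4

def flag_underutilized(inventory):
    """Return hosts whose VM count is below the DCOI virtualization target."""
    return sorted(host for host, vm_count in inventory.items()
                  if vm_count < DCOI_MIN_VMS_PER_HOST)

inventory = {
    "host-a": 6,   # already meets the four-VM target
    "host-b": 1,   # candidate for consolidation
    "host-c": 0,   # bare-metal silo -- virtualize or move to the cloud
}

print(flag_underutilized(inventory))  # ['host-b', 'host-c']
```

A report like this makes the piecemeal approach concrete: only the flagged hosts need immediate attention.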
2. Move toward on-demand resource consumption
Thanks to the cloud, CIOs no longer must provision resources they may not use. They can strip away any excess by implementing dynamic allocation and on-demand, user-driven consumption of resources. For example, implementing user self-service portals can significantly reduce wasted resources, since capacity is only provisioned as requested.
Using a cloud management platform to enforce governance policies can help ensure self-service-provisioned virtual machines are appropriately sized from the start. When they are not, the platform can resize them to minimize wasted resources and increase the number of VMs that can fit on a hypervisor.
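The right-sizing logic can be sketched with a simple rule: shrink a VM's allocation toward its observed peak usage plus a buffer. This is an illustrative heuristic, not the API of any specific cloud management product, and the 25 percent headroom figure is an assumption.

```python
# Illustrative VM right-sizing rule (not a specific product's API):
# recommend an allocation close to observed peak usage plus headroom.
import math

HEADROOM = 1.25  # assumed 25% buffer above observed peak usage

def right_size(allocated_gb, peak_used_gb, min_gb=1):
    """Recommend a memory allocation based on observed peak usage."""
    recommended = max(min_gb, math.ceil(peak_used_gb * HEADROOM))
    return min(recommended, allocated_gb)  # never recommend growth here

# A VM allocated 16 GB that never used more than 3 GB can drop to 4 GB,
# freeing 12 GB of hypervisor memory for other workloads.
print(right_size(16, 3.0))  # 4
```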
Implementing this step requires cloud-like resources, such as OpenStack. OpenStack enables organizations to create a single pool of compute they can tap into as necessary. Only the resources needed at any given time are consumed, preventing overprovisioning while supporting agencies’ optimization goals. This may require a cultural shift -- program managers relying on the CIO for storage, network and compute resources rather than procuring and deploying their own.
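The pooled model can be sketched as follows. This is a toy illustration of the idea, not the OpenStack API: program offices draw capacity from a shared pool on demand and return it when done, rather than buying dedicated servers sized for their peak.

```python
# Toy sketch of a shared compute pool (not the OpenStack API):
# requests succeed only while the pool has free capacity.

class ComputePool:
    def __init__(self, total_vcpus):
        self.free = total_vcpus

    def allocate(self, vcpus):
        """Grant the request if capacity remains; otherwise refuse it."""
        if vcpus <= self.free:
            self.free -= vcpus
            return True
        return False

    def release(self, vcpus):
        """Return capacity to the pool when a workload finishes."""
        self.free += vcpus

pool = ComputePool(total_vcpus=96)
print(pool.allocate(32))  # True  -- one program office draws from the pool
print(pool.allocate(80))  # False -- request exceeds remaining capacity
print(pool.free)          # 64
```

Because unused capacity flows back into the pool, total hardware is sized for aggregate demand rather than the sum of every program's worst case.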
3. Use containers to create consolidated and portable workloads
Virtualization involves installing an operating system on top of a virtualized piece of hardware, then deploying applications on top of the OS. However, virtualizing hardware and instantiating an entire OS can amount to a lot of unnecessary fat.
Fortunately, that fat can be trimmed through the use of Linux containers. Containers are lightweight, highly portable and self-contained. Agencies can use them to easily move applications between operating systems that support containerization and can run multiple containers on a single OS image. With containers, agencies can run as many applications on the OS as possible and drive up utilization by making the most of their hardware, whether physical or virtual. This can result in higher FITARA utilization grades.
Taking these actions can pay long-term dividends for CIOs and their agencies. FITARA was designed to provide CIOs with greater control over their IT resources, and that sentiment has been supported by the Executive Order Enhancing the Effectiveness of Agency Chief Information Officers. Similar to the “found money” that can be collected through the Modernizing Government Technology Act, CIOs who “find efficiencies” and strive for better utilization can get more than just a collective high-five from Congress. They can do more with the resources they have and build a better IT infrastructure that is more efficient, less costly to maintain and primed for the future.
Adam Clater is the chief architect, North America Public Sector, Red Hat.