The charge towards the cloud has helped fuel substantial growth in new data centre developments as well as expansion of existing facilities around the world.

Centralization, shared services and economies of scale are the name of the game now. That means even the largest organizations need to build or leverage large data centre facilities to unlock savings or make use of on-demand processing and storage.

The modern data centre is therefore tasked with two key roles: delivering seemingly unending capacity, and maintaining high levels of operating performance.

Whether you are charged with building or remodelling a data centre exclusively for your organization, or doing the same for a larger shared services facility with multiple tenants, the same considerations apply. Planning is critical to ensure that you can deliver the capacity people need, when they need it, and the performance levels they expect, be that fast storage, processing horsepower, or simply the speed at which you can provision services when a customer requests them.

Modern data centres are vast and contain a substantial amount of technology. Yet the space is finite, and thresholds ultimately impose hard limits on how much equipment one facility can hold and how effectively its contents can operate 24/7.

True capacity planning means crystal-ball gazing: predicting future IT needs – what the data centre must provide in CPU cycles, storage, space, and power to support the business or, in the case of a shared services facility, a broad range of clients scaling up and down across the year.

The latter is the real challenge. With a constantly fluctuating customer base, each client with fluctuating demands of their own, a degree of over-specification is needed to ensure enough reserve is on hand when demand jumps, but not so much that, if customers scale back, the operator is left nursing costly over-capacity: recurring bills for bandwidth and energy, plus the physical cost of unoccupied floor space.
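To make that trade-off concrete, here is a minimal sketch in Python of how reserve capacity might be sized against tenant peaks. All of the figures, and the diversity factor itself, are illustrative assumptions rather than values from any real facility.

```python
# A minimal sketch of sizing reserve capacity for a multi-tenant facility.
# All figures are illustrative assumptions.
def capacity_to_provision(tenant_peaks_kw, facility_overhead_kw, diversity=0.8):
    """Estimate total power capacity to provision, in kW.

    tenant_peaks_kw:      historical peak draw observed for each tenant.
    facility_overhead_kw: cooling, lighting and distribution losses.
    diversity:            fraction of summed peaks to provision for; tenants
                          rarely all peak at once, so covering 100 per cent
                          of summed peaks over-specifies, while too low a
                          figure risks a shortfall when demand jumps.
    """
    return facility_overhead_kw + sum(tenant_peaks_kw) * diversity

# Ten tenants with observed peaks between 40 and 120 kW, 300 kW of overhead.
peaks = [40, 55, 60, 75, 80, 90, 95, 100, 110, 120]
print(f"Provision for ~{capacity_to_provision(peaks, 300):.0f} kW")
```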

Performance management

Data centres are growing in size and number in all markets, but particularly in the EU where existing and forthcoming legislation is having a profound impact on where data is stored. In-country data storage is creating substantial demand for additional data centre capacity across the region, particularly in markets such as the UK and Germany. This also means there is high demand for additional energy to power these data centres and the hardware sitting in them.

Compounding the challenge is the fact that, just as data centre demand and new site construction are skyrocketing, energy networks across Europe and elsewhere are under greater pressure than ever to service demand.

There is only a finite amount of power available in a local electricity sub-grid, potentially limiting how much a data centre can draw as it approaches full capacity. For a local power provider to maintain continuity of service to everyone on the same sub-grid, including homes, hospitals and street furniture, a data centre may well find itself at the back of the power queue unless it has its own generating capability in the form of solar, wind or standalone generators.

With power demands growing, the slim spare margin in most electricity markets may not be able to fully satisfy a data centre's needs, hampering business as well as operational performance.

Power needs

The peak power consumption of a data centre is a key consideration when architecting everything from the cabling and networking infrastructure to the number of server racks placed on each floor or segment of the facility. Get it right, and the facility can scale without interruption. Get it wrong, and you will fight a constant battle between a shortage of power and an inability to keep the facility cool enough to operate, which itself puts further pressure on power needs and operating performance.
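As a rough illustration of this budgeting exercise, the sketch below estimates how many racks a single floor's feed can support. The feed size, rack draw and overhead multiplier are invented for the example; a real design would use measured values.

```python
# A simplified power-budget sketch. The feed size, rack draw and overhead
# multiplier are invented figures; a real design would use measured values.
def racks_per_floor(feed_kw, rack_draw_kw, overhead_multiplier=1.5):
    """How many racks one floor's electrical feed can support.

    overhead_multiplier: total facility power divided by IT power (in the
    spirit of PUE); 1.5 means each 1 kW of IT load costs 1.5 kW at the
    meter, mostly for cooling.
    """
    usable_it_kw = feed_kw / overhead_multiplier
    return int(usable_it_kw // rack_draw_kw)

print(racks_per_floor(feed_kw=1500, rack_draw_kw=8))  # 125 racks on a 1.5 MW feed
```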

It is also why deploying the latest energy-efficient infrastructure technology is factored into capacity planning: doing so keeps energy use as low as possible. Modern switches and interconnect technologies draw substantially less power per port, or per gigabit, than previous generations of the same hardware.
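A simple way to compare generations is to normalize power draw to watts per gigabit of switching capacity, as in this illustrative sketch (the wattages, port counts and speeds are invented; real figures come from vendor datasheets):

```python
# An illustrative comparison of power per gigabit across two switch
# generations. The wattages, port counts and speeds are invented.
def watts_per_gbps(total_watts, ports, gbps_per_port):
    """Normalize a switch's power draw to watts per gigabit of capacity."""
    return total_watts / (ports * gbps_per_port)

old_gen = watts_per_gbps(total_watts=450, ports=48, gbps_per_port=10)
new_gen = watts_per_gbps(total_watts=350, ports=32, gbps_per_port=100)
print(f"New generation uses {(1 - new_gen / old_gen) * 100:.0f}% less power per Gbps")
```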

Meticulous power monitoring and planning is needed because electricity generating capability across Europe is actually falling as coal-fired power stations are decommissioned. The exception is France, which remains largely reliant on an established nuclear power programme and therefore has much lower exposure to fossil fuel-based electricity generation.

This move away from coal, intended to improve air quality, is removing electricity generating capability from Europe’s major nations at a rate faster than nuclear, solar, biomass, wind and wave power can fill the void.

For example, in the UK, generating capability has been steadily declining for over a decade. In 2014, data from the Department of Energy and Climate Change showed total electricity production stood at 335 TWh, while consumption was 302 TWh, down from peak generation of 385 TWh against consumption of 285 TWh in 2005. Add to this that energy prices have risen steadily since 2010. For data centres, this represents a major OpEx challenge and a dwindling level of grid capacity nationwide that can be leveraged for new and expanding facilities.

Data centres are not the only energy consumers having a big impact on local and national power infrastructure load. Everything from the growing Internet of Things to smart motorways is putting pressure on power grids.

Application performance

Any data centre ops team needs to focus on the performance of what is being served from the facility, whether it is running on the customer's own hardware or on rented hardware provided by the facility itself. The goal is to maximize the performance of cloud or private applications while maximizing the use of available infrastructure. Every activity undertaken in a modern data centre, including provisioning, monitoring, capacity management, and automation, supports this goal.

With the advent of widespread server virtualization, provisioning, deploying and configuring a server resource is increasingly a software action rather than a matter of physically installing a server in a rack.
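In practice, that software action is often a single API call. The sketch below is hypothetical: the endpoint, payload fields, token and response shape are assumptions, standing in for whatever orchestration platform (vSphere, OpenStack, a public cloud API) a given facility actually runs.

```python
# A minimal sketch of provisioning as a software action. The endpoint,
# payload fields, token and response shape are all hypothetical stand-ins
# for a real orchestration platform's API.
import requests

def provision_vm(api_base, token, name, vcpus, ram_gb, disk_gb):
    """Request a new virtual server from a (hypothetical) provisioning API."""
    resp = requests.post(
        f"{api_base}/v1/instances",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={"name": name, "vcpus": vcpus, "ram_gb": ram_gb, "disk_gb": disk_gb},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["id"]  # assumed response shape

# instance_id = provision_vm("https://dc.example.com/api", "API_TOKEN",
#                            "web-01", vcpus=4, ram_gb=16, disk_gb=100)
```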

Nonetheless, the rise of virtualization has implications for data centre operators. Densely packed racks of physical servers, all running at 100 per cent load and each hosting multiple virtual instances, will test any facility's ability to maintain operating temperature, deliver enough energy, and bring enough bandwidth into the building to service traffic to and from those servers.
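The bandwidth side of that equation is easy to underestimate. A back-of-the-envelope sketch, using an assumed per-VM traffic figure for illustration only:

```python
# A back-of-the-envelope bandwidth estimate for a densely virtualized rack.
# The per-VM traffic figure is an assumption for illustration only.
def rack_bandwidth_gbps(servers, vms_per_server, mbps_per_vm):
    """Aggregate traffic into one rack if every VM is busy at once."""
    return servers * vms_per_server * mbps_per_vm / 1000

# 40 servers, 30 VMs each, 50 Mbps per VM: 60 Gbps into a single rack.
print(rack_bandwidth_gbps(40, 30, 50))
```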

Tools for capacity planning

Capacity planning tools are essential for today's data centre operators, helping them calculate the resources and power draw a facility will require based on current and projected future use.

The tools for the job range from simple Excel spreadsheets to custom 3D renderings of the data centre floor map, complete with automated asset discovery and integration with power and cooling systems and other sensors around the facility. Sophisticated capacity management tools can even suggest outsourcing options when major power, space and cooling upgrades to the physical site are cost or time prohibitive. The same tools can also provide information to customers of shared data centres, helping to automate some of the management of colocated hardware, or the provisioning of virtual services on rented hardware.
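At their simplest, such tools project recent trends forward. Here is a minimal sketch of that idea, fitting a straight line to monthly power readings and estimating when a (hypothetical) power budget would be exhausted; the sample readings are invented.

```python
# A minimal sketch of trend-based capacity forecasting: fit a least-squares
# line to monthly power readings and project when a hypothetical power
# budget would be exhausted. The sample readings are invented.
def months_until_exhausted(monthly_kw, budget_kw):
    """Project months of headroom left, assuming a linear demand trend."""
    n = len(monthly_kw)
    mean_x = (n - 1) / 2
    mean_y = sum(monthly_kw) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(monthly_kw))
    slope /= sum((x - mean_x) ** 2 for x in range(n))
    if slope <= 0:
        return None  # demand flat or falling: no projected exhaustion
    return (budget_kw - monthly_kw[-1]) / slope

readings_kw = [820, 840, 880, 900, 950, 990]  # last six months
print(f"~{months_until_exhausted(readings_kw, budget_kw=1500):.0f} months of headroom")
```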

Going forward, new and revamped data centre facilities need to be carefully architected to make the most efficient use of available resources. Alongside this, operators must ensure that high-energy functions such as cooling are up to the task.

Most critically, communications infrastructure must operate 24/7/365, with customers and end users expecting consistently high operating performance and data throughput. This cannot be compromised by overheating servers, power outages, or a customer's inability to scale up their installation at short notice to address a spike in demand.