Hyperscale: The Next Generation of Data Center Architecture

Steven Carlini, Senior Director, Data Center Solutions, Schneider Electric

As cloud computing and colocated IT models continue to expand to support the growing ocean of data, IT demands, and workloads, there is an increasing focus on hyperscale data centers. Typically, these are big, sprawling campus-level facilities, often run by large internet giants, cloud or colocation providers, but also some large enterprises. According to 451 Research, hyperscale is the fastest-growing data center segment.

Hyperscale vs. Traditional Data Centers

When it comes to hyperscale data centers, the differences from traditional data centers go well beyond size. Hyperscale facilities have distinctly unique design and management requirements to support the massive scale of new workloads and storage demands.

Here are a few of the ways in which they differ:

Servers: Many hyperscale operators, particularly the internet giants running hundreds of thousands of servers, construct what are called "vanity free" servers built to their own specifications rather than purchasing name-brand servers. These servers lack many of the components of traditional servers, such as displays, fancy bezels, or multiple interfaces, and are designed to be very fast to deploy, very modular to fix, and, in some cases, to run at higher temperatures. In addition, hyperscale data centers use bare metal racks that are typically taller and wider than standard 19-inch mounts. Because of the bare metal design, the server racks can be pre-loaded, also known as "rack and stack," and shipped directly to the site, further decreasing installation time.


Cooling: Hyperscale operators are starting to site facilities in more temperate and colder climates in an attempt to save on cooling costs. Inside, the cooling systems seen in traditional data centers are replaced with custom air handlers, which are essentially large metal boxes containing a fan or blower that moves enough air to keep servers at the proper operating temperature.

Application portability: Hyperscale data centers usually run cloud applications, which are highly portable: if a server fails, workloads can easily be moved from server to server, and even from data center to data center. In a traditional data center, by contrast, if a server running a critical application fails, it must be repaired before that application can run on it again.
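The portability idea above can be sketched in a few lines. This is a hypothetical illustration, not the scheduler of any real cloud platform: the function names, the least-loaded placement policy, and the server labels are all invented for the example.

```python
def reschedule(placements, healthy_servers):
    """Illustrative sketch: move workloads off failed servers onto the
    least-loaded healthy servers, leaving healthy placements untouched.
    Policy and names are hypothetical, not any real orchestrator's API."""
    # Count current load on each healthy server.
    load = {server: 0 for server in healthy_servers}
    for workload, server in placements.items():
        if server in load:
            load[server] += 1

    new_placements = {}
    for workload, server in placements.items():
        if server in load:
            new_placements[workload] = server      # server is healthy; keep
        else:
            target = min(load, key=load.get)       # pick least-loaded server
            new_placements[workload] = target
            load[target] += 1
    return new_placements

# Server "s1" has failed; its workloads move to the healthy servers.
placements = {"web": "s1", "db": "s2", "cache": "s1"}
print(reschedule(placements, ["s2", "s3"]))
```

The contrast with a traditional data center falls out of the model: there, the `web` workload would be pinned to `s1` and simply unavailable until the hardware was repaired.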

Power: The power supply is being taken out of the servers and, in some instances, built directly into the individual custom racks. Alternatively, some operators prefer a centralized UPS solution to avoid some of the added maintenance of a distributed power architecture. In either case, hyperscale operators prefer lithium-ion batteries over traditional valve-regulated lead-acid (VRLA) batteries because they pack a lot of energy into a much smaller footprint. Lithium-ion batteries can be used as an in-rack solution, as the batteries for a centralized UPS, or as both for redundant backup.

Support: Because hyperscale environments utilize thousands of servers or more, staffing ratios can vary drastically from those of the average data center. In some instances, hyperscale operators employ dedicated teams just to initialize, configure, deploy, and maintain their servers. In the average data center, this granular level of support does not exist, due to the high cost and lack of available personnel.

An Attempt towards Hyperscale Standardization

The Open Compute Project (OCP), founded by Facebook, was designed for deployment in hyperscale data centers to enable the delivery of efficient server, storage, and data center hardware designs for scalable computing. The initial goal of OCP was to bring industry leaders together to collaborate on designs and discuss what has and has not worked for them, but the drawback has been that many organizations are creating similar versions in silos, to their own custom designs and standards. Additionally, new projects such as LinkedIn's Open19 have started to compete with OCP, leaving some proponents divided over which group could succeed in starting to unify hyperscale designers and builders.

The Promise of Hyperscale Data Centers

Hyperscale data centers are ushering in a new approach to the way data centers are designed, operated, and managed to handle the complexity of new workloads and the increasing demand on IT services. They come with the promise of economies of scale, lower total cost of ownership, and high levels of modularity and scalability. Their bare metal approach is not for everyone: only the most experienced data center builders can conceive and construct these mammoth facilities filled with hundreds of thousands of utilitarian servers. In addition, large IT staffs are needed to load and configure the cloud-based applications as well as monitor, maintain, and update the installed base. Hyperscale remains the fastest-growing data center segment, and construction of these facilities continues to parallel the world's ever-growing need for data, with no end in sight.
