The concept of server disaggregation is catching on in data centers as it delivers some significant benefits, including faster, less-expensive server upgrades, pooled server resources including power supplies and cooling, and banks of servers that are purpose-built for almost any workload.
Server disaggregation separates servers’ compute, memory, and storage subsystems, which can quickly become outdated by the latest innovations, from the sheet metal, fans, power supplies, and racks – technologies that don’t evolve so quickly.
In enterprise data centers, disaggregation delivers at least three significant benefits.
1. Faster, less-expensive server upgrades
Traditionally, when a company wanted to upgrade a server, it meant physically removing the old server and replacing it with a new one. The main benefit of the upgrade was typically a more powerful CPU and additional (perhaps faster) memory. Never mind that many other components — fans, power supplies, cables, and the chassis — had potentially many years of life in them; the whole unit was swapped out.
Disaggregating the CPU, memory, and storage subsystems enables companies to swap out only those components, which delivers both cost and time savings. Intel, not surprisingly, was among the first to test the disaggregation concept in its own data centers, and in a white paper summarizing the results, the company reported significant savings from the approach.
2. Pooled resources
Disaggregation also enables power, cooling, and networking resources to be shared by multiple CPU, memory, and storage subsystems. This yields energy savings because the pooled resources serve multiple computing subsystems simultaneously, rather than each server having its own. In Intel’s case, this contributed to an extremely low data center power usage effectiveness (PUE) rating of 1.06. (PUE measures how much of the electricity a data center consumes goes to powering IT equipment, as opposed to supporting infrastructure such as cooling. A PUE of 1.0 is considered ideal.)
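The PUE ratio described above is simply total facility power divided by IT equipment power. The sketch below illustrates the arithmetic; the wattage figures are hypothetical, chosen only to reproduce the 1.06 ratio cited for Intel's data center.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 means every watt the facility draws goes to IT
    equipment; anything above 1.0 is overhead (cooling, power
    distribution losses, lighting, and so on).
    """
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 1,060 kW in total to run a
# 1,000 kW IT load has 60 kW of overhead, giving the 1.06 PUE figure
# cited in the article.
print(round(pue(1060.0, 1000.0), 2))  # 1.06
```

For comparison, a traditional enterprise data center with a PUE of 1.6 would spend 600 kW of overhead to support the same 1,000 kW IT load, which is why pooled power and cooling show up so directly in the PUE figure.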
3. Purpose-built servers
Finally, server disaggregation makes possible a “building block” approach to server design, with the ability to closely match configurations to workloads. Supermicro, for example, uses this building-block approach for its servers, all based on a common x86 open architecture. Whether the customer needs a 16-GPU server to run AI applications or a single-GPU edge server to process images or telemetry, Supermicro can build a server that is an optimized fit. It can also deliver servers purpose-built for the software stacks of partners such as Red Hat and VMware.
Along with server disaggregation, it’s the open x86 architecture that makes this approach possible, according to Michael McNerney, Vice President of Marketing and Network Security at Supermicro.
“With an open architecture, you have standard APIs and a software platform on which you can develop innovative new applications and services,” he said. “And we innovate underneath those APIs to deliver systems optimized for specific workloads, so the software delivers higher performance storage, networking, and compute.”
Visit us at supermicro.com/Cloud to learn more.