Blade servers: An introduction and overview
Blade servers have become a staple in almost every data center. The typical “blade” is a stripped-down modular server that saves space by concentrating processing power and memory on each blade, while forgoing much of the traditional storage and I/O functionality typical of rack and standalone server systems. Small size and relatively low cost make blades ideal for situations that require high physical server density, such as distributing a workload across multiple Web servers.
But high density also creates new concerns that prospective adopters should weigh before making a purchase decision. This guide outlines the most important criteria to examine when purchasing blade servers, reviews a blade server’s internal and external hardware, and discusses basic blade server management expectations.
Form factor. Although blade server size varies from manufacturer to manufacturer, blade servers are characterized as full height or half height. The height aspect refers to how much space a blade server occupies within a chassis.
Unlike a rackmount server, which is entirely self-contained, blade servers lack certain key components, such as cooling fans and power supplies. These missing components, which contribute to a blade server’s small size and lower cost, are instead contained in a dedicated blade server chassis. The chassis is a modular unit that contains blade servers and other modules. In addition to the servers, a blade server chassis might contain modular power supplies, storage modules, cooling modules (i.e., fans) and management modules.
Blade chassis design is proprietary and often specific to a provider’s modules. As such, you cannot install a Hewlett-Packard (HP) Co. server in a Dell Inc. chassis, or vice versa. Furthermore, a blade server chassis won’t necessarily accommodate all blade server models that a manufacturer offers. Dell’s M1000e chassis, for example, accommodates only Dell M series blade servers. But third-party vendors sometimes offer modules that are designed to fit another vendor’s chassis. For example, Cisco Systems Inc. makes networking hardware for HP and Dell blades.
Historically, blades’ high-density design posed overheating concerns, and they could be power hogs. With such high density, a fully populated chassis consumes a lot of power and produces a significant amount of heat. While there is little danger of newer blade servers overheating (assuming that sufficient cooling modules are used), proper rack design and arrangement are still necessary to prevent escalating temperatures. Organizations with multiple blade server chassis should design data centers to use a hot-aisle/cold-aisle architecture, as is typical with rack servers.
Processor support. As organizations ponder a blade server purchase, they need to consider a server’s processing capabilities. Nearly all of today’s blade servers offer multiple processor sockets. Given a blade server’s small form factor, each server can usually accommodate only two to four sockets.
Most blade servers on the market use Intel Xeon processors, although the Super Micro SBA-7142G-T4 uses Advanced Micro Devices (AMD) Inc.’s Opteron 6100 series processors. In either case, blade servers rarely offer less than four cores per socket. Most blade server CPUs have six to eight cores per socket. Some AMD Opteron-based blade servers, such as the four-socket Super Micro unit, offer up to 48 cores in total.
If you require additional processing power, consider blade modules that can work cooperatively, such as the SGI Altix 450. This class of blades can distribute workloads across multiple nodes. By doing so, the SGI Altix 450 offers up to 38 processor sockets and up to 76 cores when dual-core processors are installed.
Memory support. As you ponder a blade server purchase, consider how well the server can host virtual machines (VMs). In the past, blade servers were often overlooked as host servers, because they were marketed as commodity hardware rather than high-end hardware capable of sustaining a virtual data center. Today, blade server technology has caught up with data center requirements, and hosting VMs on blade servers is a realistic option. Because server virtualization is so memory-intensive, organizations typically try to purchase servers that support an enormous amount of memory.
Despite their small form factor, it is rare to find a blade server that offers less than 32 GB of memory. Many of the blade servers on the market support hundreds of gigabytes of memory, with servers like the Fujitsu Primergy BX960 S1 and the Dell PowerEdge M910 topping out at 512 GB.
As important as it is for a blade server to have sufficient memory, other aspects of the server’s memory are worth considering. For example, it is a good idea to look for servers that support error-correcting code (ECC) memory. ECC memory is supported on some, but not all, blade servers. The advantage of this type of memory is that it can correct single-bit memory errors and detect double-bit memory errors.
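If you want to confirm what a given server actually reports, the SMBIOS tables expose the configured error-correction type. The following is a minimal sketch, assuming a Linux host with the dmidecode utility installed and root privileges; it simply reads the “Error Correction Type” field from the Physical Memory Array record (DMI type 16).

```python
import subprocess

def ecc_support() -> str:
    """Report the SMBIOS 'Error Correction Type' (e.g., 'Single-bit ECC', 'None').

    Assumes a Linux host with dmidecode installed; DMI type 16 is the
    Physical Memory Array record, which carries the error-correction field.
    """
    output = subprocess.run(
        ["dmidecode", "--type", "16"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in output.splitlines():
        if "Error Correction Type" in line:
            return line.split(":", 1)[1].strip()
    return "Unknown"

if __name__ == "__main__":
    print("Memory error correction:", ecc_support())
```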
Drive support. Given their smaller size, blade servers have limited internal storage. Almost all the blade servers on the market allow for up to two 2.5-inch hard drives. While a server’s operating system (OS) can use these drives, they aren’t intended to store large amounts of data.
If a blade server requires access to additional storage, there are a few different options available. One option is to install storage modules within the server’s chassis. Storage modules, which are sometimes referred to as storage blades or expansion blades, can provide a blade server with additional storage. A storage module can usually accommodate six 2.5-inch SAS drives and typically includes its own storage controller. The disadvantages of using a storage module are that it consumes chassis space and the total amount of storage it provides is still limited.
Organizations that need to maximize chassis space for processing (or provide blade servers with more storage than storage modules can deliver) typically deploy external storage, such as network-attached storage (NAS) or a storage area network (SAN). Blade servers can accept Fibre Channel mezzanine cards, which can link a blade server to a SAN. In fact, blade servers can even boot from a SAN, rendering internal storage unnecessary.
If you do use internal storage or a storage module, verify that the server supports hot-swappable drives so that you can replace drives without taking the server offline. Although hot-swappable drives are a standard feature among rackmount servers, many blade servers do not support them.
Expansion slots. While traditional rackmount servers support the use of PCI Express (PCIe) and PCI eXtended (PCI-X) expansion cards, most blade servers cannot accommodate these devices. Instead, blade servers offer expansion slots that accommodate mezzanine cards, which are PCI based. Mezzanine card slots, which are sometimes referred to as fabrics, are referred to by letter, where the first slot is A, the second slot is B and so on.
Mezzanine slots are referenced this way because blade server design imposes certain limits and requires consistent slot use. If you install a Fibre Channel card in slot A of one server, for example, every other server in the chassis is affected by that decision. You could install a Fibre Channel card into slot A on your other servers or leave slot A empty, but you cannot mix and match. You cannot, for example, place a Fibre Channel card in slot A on one server and use slot A to accommodate an Ethernet card on another server. You can, however, put a Fibre Channel card in slot A and an Ethernet card in slot B -- as long as you do the same on all other servers in the chassis (or, alternatively, leave all slots empty).
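This slot-consistency rule is easy to model and check in software. The sketch below is purely illustrative -- the chassis inventory, blade names and card types are hypothetical -- but it captures the constraint: every populated mezzanine slot letter must carry the same card type across all blades in the chassis.

```python
from collections import defaultdict

# Hypothetical chassis inventory: blade name -> {mezzanine slot letter: card type}.
# A missing slot means the slot is empty, which is always allowed.
chassis = {
    "blade-1": {"A": "fibre-channel", "B": "ethernet"},
    "blade-2": {"A": "fibre-channel"},
    "blade-3": {"A": "ethernet", "B": "ethernet"},  # slot A conflicts with blades 1 and 2
}

def slot_conflicts(inventory: dict) -> dict:
    """Return {slot: card types} for every slot populated with mixed card types."""
    types_per_slot = defaultdict(set)
    for cards in inventory.values():
        for slot, card_type in cards.items():
            types_per_slot[slot].add(card_type)
    return {slot: kinds for slot, kinds in types_per_slot.items() if len(kinds) > 1}

for slot, kinds in slot_conflicts(chassis).items():
    print(f"Slot {slot} mixes card types across blades: {sorted(kinds)}")
```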
External blade server characteristics
Power. Blade servers do not contain a power supply. Instead, the power supply is a modular unit that mounts in the chassis. Unlike a traditional power supply, a blade chassis power supply often requires multiple power cords, which connect to multiple 20 ampere utility feeds. This ensures that no single power feed is overloaded, and in some cases provides redundancy.
Another common design provides for multiple power supplies. For example, the HP BladeSystem C3000 enclosure supports the simultaneous use of up to eight different power supplies, which can power eight different blade servers.
Network connectivity. Blade servers almost always include integrated Gigabit Ethernet network interface cards (NICs). However, some servers, such as the Fujitsu Primergy BX960 S1, offer 10 Gigabit Ethernet NICs instead. Unlike a rackmount server, you cannot simply plug a network cable into a blade server’s NIC. The chassis design makes it impossible to do so. Instead, NIC ports are mapped to interface modules, which provide connectivity on the back of the chassis. The interesting thing about this design is that a server’s two NIC ports are almost always routed to different interface modules for the sake of redundancy. Additional NIC ports can be added through the use of mezzanine cards.
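Because the two onboard ports land on different interface modules, administrators typically team them at the OS level so that either path can fail without dropping connectivity. The following is a minimal sketch, assuming a Linux blade managed by NetworkManager; the interface names (eno1, eno2) are placeholders, and the nmcli commands build a simple active-backup bond.

```python
import subprocess

# Placeholder onboard port names; check `nmcli device status` on the blade first.
PORTS = ["eno1", "eno2"]
BOND = "bond0"

commands = [
    # Active-backup bonding: traffic fails over between the two onboard ports,
    # each of which is routed to a different chassis interface module.
    ["nmcli", "connection", "add", "type", "bond", "con-name", BOND,
     "ifname", BOND, "bond.options", "mode=active-backup,miimon=100"],
]
for i, port in enumerate(PORTS, start=1):
    commands.append(
        ["nmcli", "connection", "add", "type", "ethernet", "slave-type", "bond",
         "con-name", f"{BOND}-port{i}", "ifname", port, "master", BOND]
    )
commands.append(["nmcli", "connection", "up", BOND])

for cmd in commands:
    subprocess.run(cmd, check=True)
```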
User interface ports. The interface ports for managing blade servers are almost always built into the server chassis. Each chassis typically contains a traditional built-in keyboard, video and mouse (KVM) switch, although connecting to blade servers through an IP-based KVM may also be an option. In addition, the chassis almost always contains a DVD drive that can be used for installing software to individual blade servers. Some blade servers, such as the HP ProLiant BL280c G6, contain an internal USB port and an SD card slot, which are intended for use with hardware dongles.
Controls and indicators. Individual blade servers tend to be very limited in terms of controls and indicators. For example, the Fujitsu Primergy BX960 S1 only offers an on-off switch and an ID button. This same server has LED indicators for power, system status, LAN connection, identification and CSS.
Often the blade chassis contains additional controls and indicators. For example, some HP chassis include a built-in LCD panel that allows the administrator to perform various configuration and diagnostic tasks, such as firmware updates. The precise number and purpose of each control or indicator varies with each manufacturer’s blade chassis design.
Given that blade servers tend to be used in high-density environments, management capabilities are central. Blade servers should offer diagnostic and management capabilities at both the hardware and the software level.
Hardware-based management features. Hardware-level monitoring capabilities exist so that administrators can monitor server health regardless of the OS that is running on the server. The Intelligent Platform Management Interface (IPMI) is one of the most common standards and is used by the Dell PowerEdge M910 and the Super Micro SBA-7142G-T4.
IPMI uses a dedicated low-bandwidth network port to communicate a server’s status to IPMI-compliant management software. Because IPMI works at the hardware level, the server can communicate its status regardless of the applications that run on the server. In fact, because IPMI works independently of the main processor, it works even if a server isn’t turned on. The IPMI hardware can do its job as long as a server is connected to a power source.
Blade servers that support IPMI 2.0 almost always include a dedicated network port within the server’s chassis that can be used for IPMI-based management. Typically, a single IPMI port services all servers within a chassis. Unlike a rack server, each server doesn’t need its own management port.
Blade servers can get away with sharing an IPMI port because of the types of management that IPMI-compliant management software can perform. Such software (running on a PC) is used to monitor things like temperature, voltage and fan speed. Some server manufacturers even include IPMI sensors that are designed to detect someone opening the server’s case. As previously mentioned, blade servers do not have their own fans or power supplies. Cooling and power units are chassis-level components.
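In practice, most of this monitoring happens over the network against the chassis management module. The following is a minimal sketch, assuming the standard ipmitool utility is installed on a management workstation; the BMC address and credentials are placeholders. It reads temperature, voltage and fan sensors out of band, so it works whether or not the blade’s OS is running.

```python
import subprocess

BMC_HOST = "192.0.2.10"  # placeholder address of the chassis management module
BMC_USER = "admin"       # placeholder credentials
BMC_PASS = "secret"

def read_sensors(sensor_type: str) -> str:
    """Query one class of IPMI sensors (e.g., 'Temperature', 'Voltage', 'Fan')."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
        "sdr", "type", sensor_type,
    ]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    for kind in ("Temperature", "Voltage", "Fan"):
        print(read_sensors(kind))
```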
Software-based management features. Although most servers offer hardware-level management capabilities, each server manufacturer also provides its own management software, sometimes at an extra cost. Dell, for example, has the management application OpenManage, while HP provides a management console known as HP Systems Insight Manager (SIM). Hardware management tools tend to be diagnostic in nature, while software-based tools also provide configuration capabilities. You might, for example, use a software management tool to configure a server’s storage array.

As a general rule, hardware management is fairly standardized. Multiple vendors support IPMI as well as the baseboard management controller (BMC), another hardware management standard. Some servers, such as the Dell PowerEdge M910, support both.

Management software, on the other hand, is vendor-specific. You can’t, for example, use HP SIM to manage a Dell server. But you can use a vendor’s management software to manage different server lines from that vendor. For example, Dell OpenManage works with Dell’s M series blade servers, but you can also use it to manage Dell rack servers such as the PowerEdge R715.
Because of the proliferation of management software, server management can get complicated in large data centers. As such, some organizations try to use servers from a single manufacturer to ease the management burden. In other cases, it might be possible to adopt a third-party management tool that can support heterogeneous hardware, though the gain in heterogeneity often comes at a cost of management granularity. It’s important to review each management option carefully and select a tool that provides the desired balance of support and detail.
Table 1: A basic summary of blade servers
There are countless blade servers on the market. Table 1 displays a sample of some of the currently available blade servers. Furthermore, most server vendors provide numerous configuration options, so the configurations outlined in the table may differ from what you encounter in the real world.
Product | Dell PowerEdge M910 Blade Server
Processor support | Two to four processor sockets
Maximum cores | 32
Chipset | Intel Xeon 7500 and 6500 series
Memory support | Up to 512 GB (32 DIMM slots); 1 GB, 2 GB, 4 GB, 8 GB or 16 GB ECC DDR3
Hard drive support | Up to two 2.5-inch SAS SSD, SATA SSD, SAS (15K or 10K rpm) or nearline SAS (7.2K rpm) drives; maximum internal storage of 2 TB
Expansion slots | Support for three fabrics
Network ports | Two embedded Broadcom NetXtreme II dual-port 5709S Ethernet NICs with failover and load-balancing capabilities; TCP/IP offload and iSCSI offload on supported OSes
Manageability | BMC, IPMI 2.0 compliant; Dell OpenManage; Unified Server Configurator; Lifecycle Controller; iDRAC6 with optional vFlash
Power supplies | Supplied by Dell’s M1000e blade chassis

Product | Fujitsu Primergy BX960 S1
Processor support | Up to four Intel Xeon processors
Maximum cores | 32
Chipset | Intel Xeon E7500 or X7500 series
Memory support | 8 GB to 512 GB DDR3 registered ECC, 1,333 MHz PC3-10600 DIMMs
Hard drive support | Two 2.5-inch non-hot-pluggable SATA SSDs
Expansion slots | Four BX900 mezzanine card slots
Network ports | Two Intel 82599 10 Gbps Ethernet ports
Manageability | Automatic Server Recovery and Restart; Prefailure Detection and Analysis; ServerView Suite (SV Installation Manager, SV Operation Manager, SV RAID Manager, SV Update Management, SV Power Management, SV Agents); iRMC S2 Advanced Pack
Power supplies | Integrated into chassis

Product | HP ProLiant BL280c G6
Processor support | Dual-socket Intel Xeon 5500 or 5600 series
Maximum cores | 12
Chipset | Intel Xeon 5500 or 5600 series
Memory support | Maximum 192 GB (12 DIMM slots); PC3-10600 DDR3
Hard drive support | Two drive bays supporting non-hot-pluggable SAS, SATA or SATA SSD
Expansion slots | Two
Network ports | Two NC362i Gigabit NICs
Manageability | HP Integrated Lights-Out 2 (iLO 2); HP Insight; Onboard Administrator
Power supplies | Installed in chassis

Product | SGI Altix 450
Processor support | 38 sockets for Intel Itanium 9000 series processors
Maximum cores | 76
Chipset | Intel Itanium 9000 series
Memory support | Up to 32 GB DDR2 per blade
Hard drive support | Up to two 146 GB SAS drives
Expansion slots | Two low-profile PCI-X slots
Network ports | Not specified
Manageability | Not specified
Power supplies | Built into chassis

Product | Super Micro SBA-7142G-T4
Processor support | Four-socket AMD Opteron 6100 series
Maximum cores | 48
Chipset | AMD Opteron 6100 series (eight or 12 cores)
Memory support | Up to 256 GB (16 x 240-pin DIMM slots); 1,333, 1,066 or 800 MHz DDR3 ECC unbuffered
Hard drive support | Four 2.5-inch hot-swappable SATA drives
Expansion slots | 4x QDR/DDR (40/20 Gbps) InfiniBand mezzanine HCA
Network ports | Intel 82576 dual-port Gigabit Ethernet
Manageability | IPMI 2.0 via Chassis Management Module
Power supplies | Included in chassis