Multi-Quantity Discounts Available for ALL Line Items
** LARGE DISCOUNTS available for complete systems. Pricing displayed is for single item purchases **
Use the configuration tool to create your solution and submit it to us for a quote, OR send us a message with your complete build requirements to receive the best pricing
READ ME
- Some options may not be displayed until the compatible parent option is chosen, e.g., Chassis – Drives, Processor – RAM, etc.
- “Submit for Quote” items can be added to your cart and sent to us via the cart page
- Click the blue bar to open or close a section after choosing your options
Select Processor
All SKUs below ship with a processor only. Adequate fans and heatsinks must be selected.
Mixing of 2 different processor models is NOT allowed.
Processors with TDP greater than 150W require High Performance Heatsink (P48818-B21).
8470Q processor is not supported with 24SFF CTO Server.
Platinum: supports up to 2 processors.
Gold: supports up to 2 processors.
Silver / Bronze: supports up to 2 processors.
Heat sinks / Fans / Cooling Modules
This section holds Heat Sinks, Fans, and DLC modules.
CPUs with TDP equal to or lower than 150W = Standard Heat Sink.
CPUs with TDP over 150W = High-Performance Heat Sink.
If you install the 4LFF Mid-Tray and your CPU TDP is over 150W, you will need the Low-Profile version of the High-Performance Heat Sink (P48905-B21).
In general, High-Performance Fans are required when rear drives are installed, when CPU TDP > 205W, or when High-Performance NVMe drives, three drive cages, a mid-tray, a GPU card, or certain backplanes are populated.
DLC modules require specific cables to work; they are at the bottom of this list.
For all rules, please read the manual or view each item to read the notes. A sketch of the heatsink selection logic follows below.
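To make the TDP rules above concrete, here is a minimal sketch of the selection logic in Python (the function name is ours; part numbers are from this page, and the "Q"/liquid-cooled rule is stated in the Processor section further down):

```python
# Minimal sketch of the heatsink rules above; part numbers from this page.
def heatsink_for(tdp_watts: int, mid_tray: bool = False, q_series: bool = False) -> str:
    if q_series:
        return "P48817-B21"  # Max Performance Heatsink ("Q" / liquid-cooled CPUs)
    if tdp_watts <= 150:
        return "P49145-B21"  # Standard Heat Sink
    if mid_tray:
        return "P48905-B21"  # Low-Profile High-Performance Heat Sink
    return "P48818-B21"      # High-Performance Heat Sink

print(heatsink_for(185))                 # P48818-B21
print(heatsink_for(185, mid_tray=True))  # P48905-B21
```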
Memory
DIMM Blanks are optional and not required.
96GB memory cannot be mixed with any other memory.
Mixing of x4 and x8 memory is not allowed.
Mixing of 3DS memory and non-3DS memory is not supported.
256GB DIMMs also limit the configuration to a maximum of two front drive cages.
DIMMs larger than 128GB require the High-Performance Fan Kit (P48820-B21) and are subject to ambient temperature limitations.
Storage Devices (Optional)
If your storage device has cache, you will need an energy pack (battery or capacitor) for it; see Storage Batteries & Cables below.
Storage Batteries & Cables (Optional)
Drive cage and backplane (Optional)
There are many drive cages to choose from to upgrade the system from 12 EDSFF drives to 20 or even 36 drives.
Please view each item or read the manual to understand what items you will need when selecting some of these drive bundles.
EDSFF Bundles (Optional)
Please read the note for each bundle; there are many parts that need to be added, and GPUs are limited if certain DLC modules are not added.
EDSFF Drives (Optional)
Selection of PM1743 EDSFF drives with VMware requires the selection of the HPE NS204i-u Gen11 NVMe Hot Plug Boot Optimized Storage Device (P48183-B21) for booting
OS BOOT Device (Optional)
If the DLC NS204 (P62023-B21) is installed, you cannot install the NS204i Boot Device; they occupy the same slot.
If externally accessible drives are needed, please add trigger SKU P54542-B21 HPE ProLiant DL380 Gen11 NS204i-u FIO Bundle Kit.
Riser Card (Optional)
The Primary Riser shipping by default in the CTO server provides x8 FH/FL, x16 FH/FL, and x8 FH/HL slots.
For a Secondary/Tertiary riser, the second processor is required.
x16 cards installed in x8 slots may see sub-optimal performance.
Some risers require cables which are at the bottom of this list.
NEBS risers cannot be mixed with other non-NEBS risers.
Please view the items or read the manual for all the detailed specs about the risers.
OCP Enablement (Optional)
If you install an x16 OCP card, you will need to expand the OCP 3.0 slot from x8 to x16.
OCP cards (Optional)
Networking adapters below can be selected as the primary networking choice when configuring a Networking Choice (NC) Configure-to-Order (CTO) chassis. The DL380 Gen11 NC CTO chassis does not come with embedded networking, hence the requirement to configure with either a FlexibleLOM or select PCIe networking adapter.
Networking PCI (Optional)
Intel E810-CQDA2 (P21112-B21) is not supported when 3 drive cages are installed.
InfiniBand PCI/OCP (Optional)
OCP 3.0 cards are listed at the top.
These OCP x16 cards are only supported in OCP slot 2. You must install the HPE ProLiant DL3XX Gen11 OCP2 x16 Enablement Kit (P48828-B21).
Fibre Channel HBA (Optional)
GPU (Optional)
For the 12EDSFF CTO Server, there are limitations on GPUs if additional drive cages are installed, both with and without the DLC module.
Please read the note on the items for more detail.
Rail Kit (Optional)
The Easy Install Rail 3 Kit does not include a Cable Management Arm (CMA) (P22020-B21).
Power Cooling
Select a minimum of one (1) or a maximum of two (2) power supplies.
All power supplies in a server should match. Mixing Power Supplies is not supported.
Helpful Tip: Once the desired configuration is selected, click "Add to Cart". From the cart page you can submit a quote request for best pricing
Since this Server is so large and has so many configurations, we have broken it up by chassis to make this information a little easier for you to digest.
Max Internal Storage
| Drive | Capacity | Configuration |
| --- | --- | --- |
| Hot Plug SFF SAS HDD | 91.2 TB | 24+8+6 x 2.4 TB |
| Hot Plug SFF SAS SSD | 583.3 TB | 24+8+6 x 15.35 TB |
| Hot Plug SFF SATA HDD | 76 TB | 24+8+6 x 2 TB |
| Hot Plug SFF SATA SSD | 291.84 TB | 24+8+6 x 7.68 TB |
| Hot Plug LFF SAS HDD | 360 TB | 12+4+4 x 18 TB (with optional rear LFF drive cage) |
| Hot Plug LFF SATA HDD | 360 TB | 12+4+4 x 18 TB (with optional rear LFF drive cage) |
| Hot Plug SFF NVMe PCIe SSD | 374.4 TB | 24 x 15.36 TB + 6 x 960 GB (<10W) (with optional rear Primary and Secondary 2SFF) |
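As a quick sanity check on the capacity math in the table, here is a minimal Python sketch (bay counts and per-drive capacities are taken straight from the Configuration column):

```python
# Verify the maximum-capacity figures above: bays x per-drive TB.
configs = {
    "Hot Plug SFF SAS HDD":  (24 + 8 + 6, 2.4),    # 38 bays x 2.4 TB
    "Hot Plug SFF SAS SSD":  (24 + 8 + 6, 15.35),
    "Hot Plug SFF SATA HDD": (24 + 8 + 6, 2.0),
    "Hot Plug SFF SATA SSD": (24 + 8 + 6, 7.68),
    "Hot Plug LFF SAS HDD":  (12 + 4 + 4, 18.0),   # with optional rear LFF cage
}
for name, (bays, tb) in configs.items():
    print(f"{name}: {bays * tb:g} TB")
# NVMe mixes drive sizes: 24 x 15.36 TB plus 6 x 0.96 TB (<10W) rear drives.
print(f"Hot Plug SFF NVMe PCIe SSD: {24 * 15.36 + 6 * 0.96:g} TB")  # 374.4 TB
```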
HPE ProLiant DL380 Gen11 12EDSFF Upgrade Options
Processor
The HPE DL380 Gen11 supports up to 2 4th Generation Intel® Xeon® Scalable Processors. Processors do not ship with heatsinks or fan kits; these must be ordered separately. Processors with TDP greater than 150W through 350W require the High-Performance Heatsink (P48818-B21). Processors with TDP greater than 150W through 350W paired with the mid-tray drive cage require the HPE DL3xx/560 Gen11 High Performance Heatsink (P48905-B21). “Q” processors require the Max Performance Heatsink (P48817-B21). Processors with TDP equal to or less than 150W require the Standard Heatsink (P49145-B21). Liquid-cooled CPUs require the Maximum Performance Heat Sink (P48817-B21). The same heatsink model must be used for both CPUs.
Heatsink & Fans
The HPE DL380 Gen11 supports up to two 4th Generation Intel® Xeon® Scalable Processors, though processors ship without heatsinks or fan kits, which must be ordered separately. Processors with a TDP above 150W (up to 350W) require the High Performance Heatsink (P48818-B21), or the High-Performance LP HSK (P48905-B21) if paired with a mid-tray drive cage. "Q" processors and liquid-cooled CPUs require the Maximum Performance Heatsink (P48817-B21), while processors with a TDP of 150W or below use the Standard Heatsink (P49145-B21). The same heatsink model must be used for both CPUs.
Cooling Options:
- Standard HSK (P49145-B21)
- High-Performance 2U HSK (P48818-B21)
- High-Performance LP HSK (P48905-B21)
- Max Performance HSK (P48817-B21)
- 2U High Performance Fan Kit (P48820-B21)
Liquid Cooling
There are two Liquid Cooling options for this server. If either is installed, you must also install the correct tubes for the cooling system to work properly. Please keep the PCIe slot count in mind: one of these liquid cooling options takes up a PCIe slot and is usually installed when a GPU is selected. Some GPUs are double-wide (DW) and take up two slots, so even with 8 PCIe slots you can run out of room fast.
Requires the DLC:
- Intel Data Center GPU Max 1100 (S1T66C)
- Intel Liquid Cooled Processors (Q)
- Read the manual for more DLC requirements
Cold Plate Module NS204 Tube Kit (P62023-B21)
- Includes 2 Cold Plate Modules & 1 Quick Disconnect Module
- DLC 55cm Quick Disconnect Tube Set Kit (P62042-B21)
- This DLC uses the NS204i-u slot for connections
- The HPE NS204i-u Gen11 NVMe Boot Optimized Storage Device (P48183-B21) cannot be selected.
Cold Plate Module Kit from Primary PCIe (P62029-B21)
- Includes 2 Cold Plate Modules & 1 Quick Disconnect Module.
- DLC 55cm Quick Disconnect Tube Kit (P62042-B21)
- This kit uses a PCIe slot on the Primary Riser.
- If this option is selected the Primary Riser will have one less PCIe slot available for PCIe adapters. Please keep this in mind when considering the total number of PCIe adapters required.
Memory | 32x DDR5 4800MT/s DIMM
The HPE DL380 Gen11 Server supports a maximum of 32 DIMMs, offering 16 DIMM slots per processor with 8 memory channels per processor and 2 DIMMs per channel, enabling a total memory capacity of up to 8.0 TB. This configuration utilizes 32 x 256 GB RDIMMs operating at 4800 MT/s. The server supports the full 32 DIMMs with the 8SFF or 16SFF drive configurations, while the 24SFF configuration is limited to a maximum of 16 DIMMs. This design provides high-speed, scalable memory options, making the server well-suited for demanding workloads.
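The capacity arithmetic above, as a minimal sketch:

```python
# Max memory: 2 CPUs x 16 DIMM slots each (8 channels x 2 DIMMs per channel).
cpus, slots_per_cpu, dimm_gb = 2, 16, 256
total_gb = cpus * slots_per_cpu * dimm_gb
print(f"{cpus * slots_per_cpu} DIMMs -> {total_gb} GB = {total_gb / 1024} TB")
# 32 DIMMs -> 8192 GB = 8.0 TB
```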
- DIMM Blanks are optional and not required.
- Mixing of 3DS memory and non-3DS memory is not supported.
- Mixing of x4 and x8 memory is not allowed.
- Mixing of DIMM types (UDIMM, RDIMM, and LRDIMM) is not supported.
- If 96GB or higher-density memory is selected, the High Performance Fan Kit must also be selected.
- 96GB memory cannot be mixed with any other memory.
Network Choice (NC)
In the HPE DL380 Gen11 Rack Server, NC stands for "Network Choice". This designation means the server does not come with a built-in network adapter (NIC), giving customers the flexibility to select and install their preferred network options, such as OCP 3.0 or stand-up NICs, based on their specific networking needs.
This system has 2 OCP slots. The OCP slots have enablement kits to provide x8 or x16 functionality. Please check which controller you are trying to install against the slot diagram below to see which slot each controller should be installed in.
| OCP Slot Location | 1 OCP Storage Controller (OROC) + 1 OCP NIC | 1 OCP NIC | 2 OCP NICs | 1 OCP Storage Controller (OROC) | 2 OCP Storage Controllers (OROC) |
| --- | --- | --- | --- | --- | --- |
| OCP 1 | OROC | N/A | OCP NIC | OROC (higher priority) | OROC (higher priority) |
| OCP 2 (with shared NIC and WoL) | OCP NIC | NIC (higher priority) | OCP NIC (higher priority) | N/A | OROC |
Expansion Slots
The HPE DL380 Gen11 Rack Server comes with 8 PCIe slots designed for expansion and flexibility. These slots support PCIe Gen5 (Gen4 on the tertiary riser), offering high bandwidth and low latency, which is ideal for data-intensive applications.
As a Network Choice (NC) server, the HPE DL380 Gen11 allows you to install PCIe cards or utilize the two available OCP (Open Compute Project) ports for networking. Using the OCP ports frees up PCIe slots for other expansion needs, maximizing configuration flexibility and optimizing available PCIe resources.
Rules & Limitations
- NEBS (rugged) risers cannot be mixed with other non-NEBS risers.
- The Primary Riser shipping default in the CTO server is a x8 FH/FL, x16 FH/FL, and x8 FH/HL
- A second processor is required for the Secondary/Tertiary riser
- x16 cards installed in x8 slots may see sub-optimal performance.
- When the 2LFF Tertiary Cage is selected, the Secondary and Tertiary Risers cannot be selected.
- Please read the manual for more rules and limitations (a minimal rule-check sketch follows below).
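To illustrate how these constraints compose, here is a minimal, hypothetical rule-check sketch in Python (the function and field names are ours, and it covers only the rules listed above):

```python
# Hypothetical validator for the riser rules listed above (not exhaustive).
def check_riser_config(cpus: int, risers: list[dict],
                       lff2_tertiary_cage: bool = False) -> list[str]:
    errors = []
    wants_sec_or_ter = any(r["position"] in ("secondary", "tertiary") for r in risers)
    if cpus < 2 and wants_sec_or_ter:
        errors.append("A second processor is required for the Secondary/Tertiary riser.")
    if {r["nebs"] for r in risers} == {True, False}:
        errors.append("NEBS risers cannot be mixed with non-NEBS risers.")
    if lff2_tertiary_cage and wants_sec_or_ter:
        errors.append("2LFF Tertiary Cage excludes Secondary and Tertiary risers.")
    return errors

print(check_riser_config(1, [{"position": "secondary", "nebs": False}]))
```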
Riser Information

| Part Number | Description | Primary | Secondary | Tertiary | Top Slot | Middle Slot | Bottom Slot |
| --- | --- | --- | --- | --- | --- | --- | --- |
| N/A | This is the default riser in the chassis | D | N | N | x8 | x16 | x8 |
| P48803-B21 | HPE DL380 Gen11 x16/x16/x16 Primary Riser Kit | O | N | N | x16 | x16 | x16 #1 |
| P51083-B21 | HPE DL380 Gen11 x16/x16/x16 Secondary Riser Kit | N | O | N | x16 | x16 | x16 #2 |
| P48802-B21 | HPE DL38X Gen11 x8/x16/x8 Sec Riser Kit | N | O | N | x8 | x16 | x8 |
| P48804-B21 | HPE DL38X Gen11 2x16 Tertiary Riser Kit | N | N | O | x16 | x16 #3 | |
| P48805-B21 | HPE ProLiant DL380 Gen11 2U Primary/Secondary NEBS-compliant Riser Kit | O | N | N | x8 | x16 | x8 |
| P48806-B21 | HPE ProLiant DL380 Gen11 2U Secondary/Tertiary NEBS-compliant Riser Kit | N | O | N | x8 | x16 | x8 |

Notes:
- Riser Position: D = default on chassis; O = optional; N = not supported or slot/connector not present. Slot bus widths are Gen5 lanes.
- #1 Requires HPE DL380 Gen11 x16/x16/x16 Primary Cable Kit (P56073-B21)
- #2 Requires HPE DL380 Gen11 x16/x16/x16 Secondary Cable Kit (P56074-B21)
- #3 PCIe Gen4 lanes.
- x16 cards installed in x8 slots may see sub-optimal performance.
GPU
System memory should be at least 2x the total GPU memory. Most GPUs need a liquid cooling (LC) system installed, so please take note of the PCIe slots used and which PCIe slot you install the GPU in. Some GPUs require certain parts or have limitations; to view all the rules for each GPU, please view the item.
GPU Requirements
- High Performance Fan Kit (P48820-B21)
- GPU Power Cable Kit (P56072-B21)
When configuring the HPE ProLiant DL380 Gen11 server with GPUs, it's crucial to consider cooling requirements to maintain optimal performance and system reliability. The server supports up to eight single-wide or three double-wide GPUs, depending on whether DLC is installed in the configuration.
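As an illustration of the 2x memory sizing rule above, a minimal sketch (the helper name is ours; GPU memory sizes come from the table below):

```python
# Rule of thumb from above: system memory >= 2x total GPU memory.
def min_system_memory_gb(gpu_mem_gb: int, gpu_count: int) -> int:
    return 2 * gpu_mem_gb * gpu_count

# e.g., three NVIDIA L40 48GB accelerators:
print(min_system_memory_gb(48, 3), "GB")  # 288 GB, rounded up to real DIMM sizes
```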
GPU Information

| Part Number | Card | QTY Support | PCIe | 8SFF | 16SFF/8LFF | 24SFF/12LFF |
| --- | --- | --- | --- | --- | --- | --- |
| S1T66C | Intel® Data Center GPU Max 1100 | 2 | Gen5 | 2 @ 35C (Air) | 2 @ 35C (Air) if CPU ≤ 185W | Not supported |
| R9S41C | NVIDIA H100 80GB PCIe Accelerator | 2 or 3 | Gen5 | 2 @ 25C (Air), 3 @ 20C (DLC) | 2 @ 20C (Air), 2 @ 25C (DLC) | Not supported |
| S0K90C | NVIDIA L40 48GB PCIe Accelerator | 3 | Gen4 | 3 @ 25C (Air), 3 @ 25C (DLC) | 2 @ 25C (Air), 3 @ 25C (DLC) | 1 @ 25C (Air), 2 @ 25C (DLC) |
| S0K89C | NVIDIA L4 24GB PCIe Accelerator | 8 | Gen4 | 8 | 8 @ 25C (Air), 8 @ 25C (DLC) | 5 @ 25C (Air), 8 @ 25C (DLC) |
| R9P49C | NVIDIA A100 80GB PCIe Non-CEC Accelerator | 3 | Gen4 | 30C | 25C | Not supported |
| R8T26C | NVIDIA A16 64GB PCIe Non-CEC Accelerator for HPE | 3 | Gen4 | 30C | 25C | Not supported |
Storage Controllers | Tri-Mode SAS/SATA/NVMe
The HPE DL380 Gen11 Rack Server supports Tri-Mode controllers for mixed drive configurations. For direct-attached drives without a controller, VROC can be enabled. There are two VROC versions, one of which supports only NVMe SSDs.
Intel VROC SATA for HPE ProLiant Gen11
All models feature an embedded storage controller, with embedded software SATA RAID support for up to 14 bays.
- RAID support: 0/1/5/10.
- Uses the Intel CPU to provide RAID or HBA for direct-attached drives.
- Off by default and must be enabled.
- Read the manual below for more.
Intel VROC NVMe for HPE ProLiant Gen11
All models feature 4 x8 PCIe 5.0 connectors per socket for NVMe connectivity, providing support for up to 8 direct-attach x4 NVMe bays.
- Only supported on SFF models
- NVMe SSDs connected directly to the CPU
- For NVMe SSDs only, no PCIe card support.
- Uses the Intel CPU to provide RAID or HBA for direct-attached drives.
- Standard for RAID 0/1/10 (S0E37A/S0E38AAE)
- Premium for RAID 0/1/5/10 (R7J57A/R7J59AAE)
- Read the manual below for more.
If a controller with cache is installed, you must also add either the HPE Smart Storage Hybrid Capacitor (P02377-B21) or the HPE 96W Smart Storage Lithium-ion Battery (P01366-B21), both with a 145mm cable kit.
Integrated Lights-Out 6 (iLO 6)
Software that enables you to securely configure, monitor, and update your HPE ProLiant Gen11 servers seamlessly, from anywhere.
Embedded in Hewlett Packard Enterprise servers, HPE Integrated Lights-Out 6 (iLO 6) is an exclusive core intelligence that monitors server status, providing the means for reporting, ongoing management, service alerting, and local or remote management to identify and resolve issues quickly.
What is different in iLO 6
- SPDM support for increased security with storage and network cards
- Telemetry streaming using Redfish Event subscription.
- Redfish APIs for iLO, system TPM measurement, and SPDM-capable option card measurements (see the sketch after this list)
- Added capability in iLO for Two Factor Authentication using OTP (One Time Password) for Microsoft AD users
- PLDM Downstream Firmware Update
- Certificate Management Enhancements
- Automatic certificate Enrollment via SCEP
- Certificate sideloading
- Redfish consistent health roll-ups
- Automatic clearing of Redfish alerts when the condition no longer exists.
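To give a feel for the Redfish interface referenced above, here is a minimal sketch of querying a server's model and health through iLO's Redfish API (the hostname and credentials are placeholders; assumes the third-party `requests` package):

```python
import requests

ILO_HOST = "https://ilo.example.com"  # placeholder iLO address

# Query the ComputerSystem resource exposed by iLO's Redfish service.
resp = requests.get(
    f"{ILO_HOST}/redfish/v1/Systems/1/",
    auth=("admin", "password"),  # placeholder iLO local account
    verify=False,                # lab use only; verify certificates in production
)
resp.raise_for_status()
system = resp.json()
print(system["Model"], system["Status"]["Health"])  # e.g. "... OK"
```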
What’s deprecated in iLO 6
- Java IRC
- Internet Explorer
- eRS Direct Connect
- Jitter Smoothing
Power
Select a minimum of one (1) or a maximum of two (2) power supplies. Mixing 2 different power supplies is NOT allowed.
HPE Flexible Slot (Flex Slot) Power Supplies share a common electrical and physical design that allows for hot plug, tool-less installation into HPE ProLiant Gen11 Performance Servers. Flex Slot power supplies are certified for high-efficiency operation and offer multiple power output options, allowing users to "right-size" a power supply for specific server configurations. This flexibility helps to reduce power waste, lower overall energy costs, and avoid "trapped" power capacity in the data center.
Dimensions & Weight:
LFF CTO servers:
8.75 x 44.8 x 73.25 cm / 3.44 x 17.64 x 28.84 in
Weight shown with 12 LFF hard drives (no rear drives), 2x processors, 2x power supplies, 1x RAID controller, and 2x risers installed:
- Maximum: 37kg/81.57 lbs
- Minimum: 18kg/39.68 lbs
SFF CTO servers:
8.75 x 44.8 x 72.7 cm / 3.44 x 17.64 x 28.62 in
Weight shown with 8 SFF hard drives (no rear drives), 2x processors, 2x power supplies, 1x RAID controller, and 2x risers installed:
- Maximum: 33kg/72.75 lbs
- Minimum: 16kg/35.27 lbs