Loren Data's SAM Daily™

FBO DAILY - FEDBIZOPPS ISSUE OF MARCH 14, 2018 FBO #5955
SPECIAL NOTICE

70 -- Intent to Sole Source

Notice Date
3/12/2018
 
Notice Type
Special Notice
 
NAICS
334112 — Computer Storage Device Manufacturing
 
Contracting Office
Department of the Army, Army Contracting Command, ACC - APG (W911QX) Adelphi, 2800 Powder Mill Road, Building 601, Adelphi, Maryland, 20783-1197, United States
 
ZIP Code
20783-1197
 
Solicitation Number
N00173-18-Q-JL02
 
Archive Date
3/17/2018
 
Point of Contact
Joshua Lenfest, Phone: 202-767-2356
 
E-Mail Address
joshua.lenfest@nrl.navy.mil
 
Small Business Set-Aside
N/A
 
Description
Solicitation #: N00173-18-Q-JL02
Procurement Type: Special Notice
Title: High Performance Computing Cluster

Description: The U.S. Department of the Navy, Naval Research Laboratory (NRL) Contracting Division, proposes to enter into a sole source contract with Hewlett Packard Enterprises Government, LLC (HPEG) for a High Performance Computing Cluster. The minimum required specifications for the High Performance Computing (HPC) Cluster are identified below:

One Rack HPC Cluster Requirements
1) The HPC cluster must fit within a single rack enclosure.
2) The HPC cluster must be composed of compute trays (similar to "blade" technology). Each compute tray must have two (2) processors in a shared memory configuration. The cluster enclosure must contain thirty-two (32) compute trays and support up to forty-eight (48) compute trays without additional infrastructure.
3) Each compute tray must contain two (2) Intel® Xeon-G® 6136 (3.0 GHz/12-core/150 W) Gen10 CPUs and 384 GB of Dynamic Random Access Memory (DRAM). The HPC cluster must have a total of 768 cores with 16 GB of DRAM per core (a worked check of these totals follows this list).
4) All hard disk storage for the compute trays must be Solid-State Drives (SSD). Each compute tray must have one 400 GB NVMe SSD.
5) The HPC cluster must be factory assembled and tested. It must undergo full computer system integration and testing before delivery and installation at NRL, and must be shipped to NRL completely ready for immediate deployment, requiring only power and network connections. The installation must include on-site technical support and system optimization, technology transfer, and training. The cluster must include three (3) years of maintenance with "next business day" response, CDMR, and all required licensing.
6) The HPC cluster must include one (1) head node management server with a rack-mount LCD monitor/keyboard/mouse device included.
   The head node must fit into a 2U slot in the rack and must include the following:
   a) Two (2) Intel® Xeon-G® 6136 (3.0 GHz/12-core/150 W) CPUs
   b) 192 GB DRAM interleaved on the motherboard for maximum memory throughput
   c) Two (2) 600 GB SAS 15K HDDs in a RAID 1 (mirrored, redundant) configuration
   d) Six (6) 8 TB SATA 7.2K HDDs in a RAID 6 (can sustain two simultaneous drive failures) configuration
   e) Smart Array 10G RAID controller, which:
      i) Must have 16 SAS lanes and 4x4 internal Mini-SAS ports
      ii) Must handle both 12 Gb/s SAS and 6 Gb/s SATA technology HDDs
      iii) Must support mix-and-match SAS and SATA drives on the same controller
      iv) Must support SAS tape drives
      v) Must support RAID levels 0, 1, 5, 6, 10, 50, 60, 1 ADM, and 10 ADM (Advanced Data Mirroring)
      vi) Must be VMware VSAN certified
      vii) Must support both legacy and UEFI boot operations
      viii) Must support up to 238 physical drives/64 logical drives
      ix) Must support Secure Encryption and full FIPS 140-2 Level 1 compliance
      x) Must have a 96 W smart storage battery
      xi) Must have a 10 Gb Ethernet NIC supporting two 562FLR-SFP+ adapters
      xii) Must have an HPE iLO 4 management module
      xiii) Must have embedded 4x1 Gb NIC ports
      xiv) Must have dual, redundant 800 W Platinum power supplies
7) The HPC cluster must include one (1) 48-port 10 GbE switch with 952 Mpps/1280 Gbps throughput.
8) The HPC cluster must include two (2) 24-port 1 Gb/s Ethernet switches with 9.5 Mpps/12.8 Gbps throughput (a consistency check of these packet-rate figures follows the description).
9) To fully integrate into our existing HPE computing environment, the cluster must include the following:
   a) HPE Integrated Lights Out (iLO) hardware modules throughout, and licenses for the iLO management tools
   b) Full licensing for the HPE Cluster Management Utility (CMU) version 8 or newer
   c) Full licensing for the HPE Apollo Platform Manager (APM)
   d) Full licensing for HPE OneView
10) To comply with security best practices and requirements, the cluster must include the following:
   a) Silicon Root of Trust burned into the silicon components on the motherboard
   b) Conformance with the National Institute of Standards and Technology (NIST) security framework functions "Protect, Detect, and Recover"
11) The cluster must have these rack management features:
   a) Automatic chassis and device discovery with topographic views
   b) Time-stamped, rack-level event logging
   c) Rack and chassis shared power and thermal component management
   d) An integrated serial connector for server console and ancillary device access
   e) An integrated Gb Ethernet switch for server iLO consolidation
12) The cluster must have these server management features:
   a) Server health monitoring
   b) iLO Single Sign-On access
   c) Server FRU data reporting
   d) iLO IP address and PXE MAC address listings
13) The cluster must have these power management features:
   a) Power control and measurement at the server, chassis, and rack level
   b) PDU-level power outlet control and current measurement
   c) Rack-level static or dynamic power capping
   d) DC power shelf management and integration with the UPS subsystem
14) The vendor must provide cleared personnel for onsite installation, integration, and system tuning.
15) The vendor must provide cleared personnel for all onsite system maintenance.
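The sizing figures in items 2-4 and 6c-6d are internally consistent. The following minimal sketch (ours, not part of the notice) verifies the quoted totals from the per-tray figures, with head-node usable capacities derived under standard RAID 1/RAID 6 arithmetic:

trays = 32                  # compute trays in the base configuration (item 2)
cpus_per_tray = 2           # Xeon-G 6136 CPUs per tray (item 3)
cores_per_cpu = 12          # 12-core parts (item 3)
dram_per_tray_gb = 384      # DRAM per tray (item 3)
ssd_per_tray_gb = 400       # one 400 GB NVMe SSD per tray (item 4)

cores = trays * cpus_per_tray * cores_per_cpu
dram_per_core = dram_per_tray_gb / (cpus_per_tray * cores_per_cpu)
print(cores, dram_per_core)             # 768 cores, 16.0 GB/core, as quoted
print(trays * ssd_per_tray_gb / 1000)   # 12.8 TB aggregate NVMe scratch

# Head-node usable storage implied by items 6c/6d:
# RAID 1 mirrors two drives (one drive's capacity usable);
# RAID 6 spends two drives' worth of capacity on parity.
print(600 * (2 - 1))        # 600 GB usable on the OS mirror
print(8 * (6 - 2))          # 32 TB usable on the RAID 6 data array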
High Speed Switch Requirements
To support cluster operations and user interaction, the following network hardware is required:
1) Eight (8) fully port-level managed, 48-port 1 Gb/s Layer 2 network switches:
   a) Must be fully compatible with, and manageable through, the existing HPE OneView environment
   b) Must support stacking, fixed 10 GbE ports, static Layer 3 routing and RIP v1/v2, PoE+, ACLs, IPv6, and power savings with Energy Efficient Ethernet
   c) Must provide traffic prioritization with supported congestion actions including strict priority (SP) queuing, weighted round robin (WRR), weighted random early detection (WRED), and SP+WRR, and traffic policing with Committed Access Rate (CAR) and line rate

The contemplated contract will be a Firm Fixed Price (FFP) type action. This notice is not a request for proposals or competitive offers. A determination by the Government to compete, or not, will be based on the responses received and is solely within the discretion of the Government. Information received will normally be considered only for the purpose of determining whether to conduct a competitive procurement. The FSC is 7025 and the NAICS is 334112.
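On the switch figures in items 7 and 8 above: the quoted packet rates are consistent with wire-speed forwarding of minimum-size (64-byte) Ethernet frames, if one assumes the Gbps figure is a full-duplex aggregate. A sketch of that check (the full-duplex assumption is ours, not stated in the notice; the 84-byte figure is the standard 64-byte frame plus 20 bytes of preamble and inter-frame gap):

# A minimum-size packet occupies 64 B frame + 8 B preamble + 12 B
# inter-frame gap = 84 B = 672 bits on the wire.
BITS_PER_MIN_FRAME = 84 * 8

def max_mpps(aggregate_gbps: float) -> float:
    """Peak 64-byte packet rate in Mpps, assuming a full-duplex aggregate
    rate (forwarding capacity is half the quoted Gbps figure)."""
    return (aggregate_gbps / 2) * 1e9 / BITS_PER_MIN_FRAME / 1e6

print(round(max_mpps(1280)))      # 952 -> the 48-port 10 GbE switch (item 7)
print(round(max_mpps(12.8), 1))   # 9.5 -> the 24-port 1 Gb/s switches (item 8)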
 
Web Link
FBO.gov Permalink
(https://www.fbo.gov/notices/68cb9e6fc5067242483b32abaf0c6dfa)
 
Place of Performance
Address: 4555 Overlook Avenue, SW, Washington, District of Columbia, 20375, United States
Zip Code: 20375
 
Record
SN04851252-W 20180314/180312231400-68cb9e6fc5067242483b32abaf0c6dfa (fbodaily.com)
 
Source
FedBizOpps Link to This Notice
(may not be valid after Archive Date)
