FBO DAILY ISSUE OF SEPTEMBER 02, 2010 FBO #3204
SOLICITATION NOTICE

70 -- RECOVERY: LINUX CLUSTER UPGRADE, GAITHERSBURG MARYLAND - RECOVERY: Attachment 2

Notice Date
8/31/2010
 
Notice Type
Combined Synopsis/Solicitation
 
NAICS
541519 — Other Computer Related Services
 
Contracting Office
Department of Commerce, National Institute of Standards and Technology (NIST), Acquisition Management Division, 100 Bureau Drive, Building 301, Room B129, Mail Stop 1640, Gaithersburg, Maryland, 20899-1640
 
ZIP Code
20899-1640
 
Solicitation Number
SB1341-10-RQ-0512
 
Archive Date
9/29/2010
 
Point of Contact
Wanza B. Jonjo, Phone: 301-975-3978, Alice M. Rhodie, Phone: 301-975-5309
 
E-Mail Address
wanza.jonjo@nist.gov, alice.rhodie@nist.gov
 
Small Business Set-Aside
Total Small Business
 
Description
Attachments: Brand Name Justification; Attachment 1 - Bid Schedule; Attachment 2 - ARRA Clauses

THIS ACQUISITION IS BEING FUNDED BY THE AMERICAN RECOVERY AND REINVESTMENT ACT OF 2009.

The National Institute of Standards and Technology (NIST) has a requirement for a Linux cluster upgrade at NIST, Gaithersburg, MD.

RECOVERY: THIS IS A COMBINED SYNOPSIS/SOLICITATION FOR COMMERCIAL ITEMS PREPARED IN ACCORDANCE WITH THE FORMAT IN FAR SUBPART 12.6, STREAMLINED PROCEDURES FOR EVALUATION AND SOLICITATION FOR COMMERCIAL ITEMS, AS SUPPLEMENTED WITH ADDITIONAL INFORMATION INCLUDED IN THIS NOTICE. THIS ANNOUNCEMENT CONSTITUTES THE ONLY SOLICITATION; QUOTATIONS ARE BEING REQUESTED, AND A WRITTEN SOLICITATION DOCUMENT WILL NOT BE ISSUED. THE SOLICITATION IS BEING ISSUED USING SIMPLIFIED ACQUISITION PROCEDURES UNDER THE AUTHORITY OF FAR 13.5, TEST PROGRAM FOR CERTAIN COMMERCIAL ITEMS.

1352.215-73 INQUIRIES (MAR 2000)
Offerors must submit all questions concerning this solicitation in writing to the Contracting Officer. Questions must be received no later than seven calendar days after the date of this solicitation. All responses to the questions will be made in writing and included in an amendment to the solicitation.

This solicitation is a Request for Quotation (RFQ). The solicitation document and incorporated provisions and clauses are those in effect through Federal Acquisition Circular (FAC) 2005-44. The associated North American Industry Classification System (NAICS) code for this procurement is 541519, with a small business size standard of $25 million. This requirement is a total Small Business Set-Aside. A fixed-price contract will be awarded as a result of this effort.

CLIN 0001 - Cluster 1: 148 dual quad-core Xeon compute nodes with 16 gigabytes of memory; 24 TB parallel file system; InfiniBand interconnect; Gigabit Ethernet network; general hardware integration fee (includes wire management, mounting kits, and other miscellaneous items).

CLIN 0002 - Cluster 2: 30 dual quad-core Xeon compute nodes with 16 gigabytes of memory; 2 dual quad-core Xeon compute nodes with 32 gigabytes of memory; 64 TB (raw) storage system; InfiniBand interconnect; Gigabit Ethernet network.

CLIN 0003 - Cluster 3: head node with 8 TB; 12 dual quad-core Xeon compute nodes with 16 gigabytes of memory; Gigabit Ethernet; open-source cluster software configuration using CentOS 5.4 (or current level); Sun Grid Engine job scheduler; open-source MPI software; general hardware integration fee (includes wire management, racks, mounting kits, and other miscellaneous items); labor.

CLIN 0004 - Compute nodes (OPTIONAL LINE ITEM): 200 dual quad-core Xeon compute nodes with 16 gigabytes of memory, including general hardware integration fee (includes wire management, mounting kits, and other miscellaneous items). The Government reserves the right to award this item at the time of award or up to 90 days after contract award, subject to availability of funds. Funds are not presently available for this line item.

Delivery Schedule
• Project plan(s) covering the installation, shakedown, and acceptance testing of Cluster 1 and Cluster 2: due two months after contract award.
• Delivery of hardware and documentation for Cluster 1 and Cluster 2 to NIST: due after December 13, 2010.
• Installation and acceptance testing of Cluster 1 and Cluster 2: due one month after the initial delivery of hardware to NIST.
• Delivery of Cluster 3 hardware, software, and documentation (including documentation describing installation instructions, MPI operation instructions, and node recovery and cloning): due two months after contract award.

Additionally, the contractor will supply hardware and software maintenance for each proposed cluster for a one-year period, which begins with cluster acceptance by the Government.

BRAND NAME JUSTIFICATION: Scientists at the U.S. Department of Commerce, National Institute of Standards and Technology (NIST) need access to a large Linux cluster in order to do meaningful algorithm scaling studies, and for the testing and debugging of large-scale computer simulations in preparation for production runs on external supercomputers of 10,000+ nodes. NIST scientists also need additional computing power to perform scientific computing for a range of research, including atmospheric transmittance, molecular surface interactions of interest for battery materials, simulation of light-matter interactions for quantum computing, the development of nanoscale measurement techniques, and modeling of fire phenomena, concrete, and cement. NIST currently owns and operates three High-Performance Computing (HPC) Linux clusters to provide scientific computing resources to NIST scientific programs. This purchase is to upgrade these HPC Linux clusters. Linux is the recommended operating system that is compatible with NIST's existing HPC clusters. Procuring an operating system other than Linux would cause NIST to incur substantial cost to replace its existing HPC Linux clusters with another compatible system. Additional cost would also be incurred by NIST for labor to train NIST staff and to port existing NIST in-house developed software. Therefore, procuring HPC Linux cluster upgrades would result in a substantial cost savings to NIST.

1352.211-70 Statement of Work/Specifications for HPC Linux Cluster

A. BACKGROUND INFORMATION
Scientists at the U.S. Department of Commerce, National Institute of Standards and Technology (NIST) need access to a large Linux cluster in order to do meaningful algorithm scaling studies, and for the testing and debugging of large-scale computer simulations in preparation for production runs on external supercomputers of 10,000+ nodes. NIST scientists also need additional computing power to perform scientific computing for a range of research, including atmospheric transmittance, molecular surface interactions of interest for battery materials, simulation of light-matter interactions for quantum computing, the development of nanoscale measurement techniques, and modeling of fire phenomena, concrete, and cement. For the past several years, NIST has invested significant effort in the deployment of high-performance computing (HPC) Linux clusters to provide scientific computing resources to NIST scientific programs. After deploying two modest-sized clusters, Hercules (300 nodes) and Raritan (498 nodes), NIST needs to purchase a third HPC Linux cluster and significantly expand and enhance both the Raritan and Hercules HPC Linux clusters.

B. PURPOSE AND OBJECTIVES OF THE PROCUREMENT
The Enterprise Systems Division (ESD) of the Office of the Chief Information Officer (OCIO) at NIST is seeking to procure three commercially available High-Performance Computing (HPC) Linux cluster upgrades (Cluster 1, Cluster 2, and Cluster 3).
The first and second HPC Linux cluster upgrades (Cluster 1 and Cluster 2) obtained in this acquisition will be installed into existing NIST-owned HPC Linux clusters. The acquisition will also include a higher-speed inter-node network to reduce communications bottlenecks and an HPC storage system to relieve input/output (I/O) bottlenecks in many applications, especially those using large data sets or highly parallel codes. The third HPC cluster (Cluster 3) obtained in the acquisition will be a 6 node turn-key solution.

C. CONTRACTOR REQUIREMENTS
The Contractor shall provide the following commercially available items:

1. Hardware and Software Requirements (Cluster 1)
The contractor shall provide one hundred forty-eight (148) compute nodes built from commodity AMD x86-64 or Intel EM64T (or binary equivalent) rack-mounted computers. Each compute node will be configured with at least two AMD64 or Intel EM64T (or binary compatible) quad-core processor chips and at least 16 GB of memory running at 1066 MHz.

1.1. Node Requirements (Cluster 1)
All compute nodes must have the following required properties:
• At least two AMD64 or Intel EM64T (or binary compatible) quad-core processor chips. Each processor chip must utilize no more than 80 watts when running a copy of the High Performance Linpack Benchmark (HPL) on every core. HPL can be found at http://www.netlib.org/benchmark/hpl/.
• Basic input/output system (BIOS) set to "preboot execution environment (PXE) boot" as the first boot option. The following BIOS settings will be set to off or disabled if supported: Intel Turbo Boost and Intel simultaneous multithreading (Hyper-Threading).
• Double data rate 3 (DDR3) synchronous dynamic random access memory (SDRAM) with error-correcting code (ECC).
• Double data rate (DDR) or faster InfiniBand connection built into the motherboard or provided as a host channel adapter (HCA). If not built into the motherboard, the HCA will be attached to a Peripheral Component Interconnect Express (PCI Express) 2.0 8x slot running at full 8x speed.
• At least 500 GB of local disk storage. The disk(s) shall be accessible for replacement without removing the compute node from the rack. The disk(s) must have a spin rate of at least 7200 revolutions per minute (RPM) with at least 8 megabytes (MB) of cache memory.
• Capable of running 64-bit Red Hat Enterprise Linux 5.4 or later. The cluster will be configured with CentOS Linux 5 or later during acceptance testing.
• Dual Gigabit Ethernet with the first interface set to PXE boot.
• Integrated or add-on remote management card (also called a Baseboard Management Controller, or BMC) that is compatible with Intelligent Platform Management Interface (IPMI) 2.0 and supports (at least) the following remote management functions via an Ethernet LAN interface: remote power off; remote power on; remote system (re)boot; remote motherboard BIOS setting; remote motherboard BIOS upgrade/update/flashing; and viewing serial console boot and runtime input/output from a remote management location.
• Power, disk storage, and network indicators.
• 80 PLUS (www.80plus.org) certified power supply with a minimum efficiency of 88% or greater when tested at the following load conditions: 20%, 50%, and 100%.
• Rail kits for mounting in standard 42U racks with square holes.

1.2. Identical Parts (Cluster 1)
All nodes must contain identical parts, including identical firmware revisions/versions and board-level hardware revision numbers.
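For illustration only (not part of the solicitation): one way NIST staff might spot-check the identical-parts requirement in section 1.2 after delivery is to compare a few dmidecode strings across all nodes. The sketch below assumes passwordless SSH to placeholder hostnames (node001 through node148) and dmidecode installed on each node; it is a minimal example, not a deliverable.

#!/usr/bin/env python3
"""Illustrative spot-check that all compute nodes report identical BIOS and
board revisions (cf. SOW section 1.2). Assumes passwordless SSH and dmidecode;
hostnames are placeholders."""
import subprocess
from collections import defaultdict

NODES = [f"node{i:03d}" for i in range(1, 149)]      # hypothetical hostnames

# dmidecode string keywords to compare across nodes
KEYWORDS = ["bios-version", "baseboard-product-name", "baseboard-version"]

def query(node: str, keyword: str) -> str:
    """Return one dmidecode string from a node, or 'UNREACHABLE' on failure."""
    try:
        out = subprocess.run(
            ["ssh", "-o", "ConnectTimeout=5", node,
             "sudo", "dmidecode", "-s", keyword],
            capture_output=True, text=True, timeout=30, check=True)
        return out.stdout.strip()
    except (subprocess.CalledProcessError, subprocess.TimeoutExpired):
        return "UNREACHABLE"

for keyword in KEYWORDS:
    groups = defaultdict(list)          # reported value -> nodes reporting it
    for node in NODES:
        groups[query(node, keyword)].append(node)
    if len(groups) == 1:
        print(f"{keyword}: identical on all nodes")
    else:
        print(f"{keyword}: MISMATCH")
        for value, nodes in groups.items():
            print(f"  {value!r}: {len(nodes)} nodes, e.g. {nodes[0]}")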
1.3. Remote Management (Cluster 1)
All nodes must be connected to a management system that supports the installation, configuration, and day-to-day operation of any node in the cluster. The management system must: support IPMI 2.0; support a remote terminal over Secure Shell (SSH) or Hypertext Transfer Protocol Secure (HTTPS) to each node; and provide maintenance tools such as system reset and power on/off.

1.4. Parallel File System (Cluster 1)
The contractor must provide a parallel file system that provides at least 24 TB of usable storage. The parallel file system must meet or exceed a sustained write performance of 700 MB/second when supporting thirty-two (32) compute nodes writing simultaneously. The HPC storage solution must also:
• have a design with no single points of failure (redundant storage nodes, redundant network path failover, RAID data parity, hot-swappable power supplies, and cooling fans);
• support remote system management via a web-based graphical user interface (GUI) over Hypertext Transfer Protocol Secure (HTTPS) or a Secure Shell (SSH) accessible command line interface (CLI);
• support remote alerts via SYSLOG;
• support the Network File System protocol (NFS);
• support the Network Data Management Protocol version 4 (NDMP);
• support Red Hat Enterprise Linux 5;
• support hard and soft quotas for users and groups; and
• include a rack mounting kit.

1.5. Ethernet Network (Cluster 1)
All nodes must be connected via a 1 Gigabit Ethernet network. The 1 Gigabit Ethernet network must:
• have switches connected together by 10 Gb fiber uplinks;
• be managed via a single IP address;
• support Secure Shell (SSH), Hypertext Transfer Protocol Secure (HTTPS), Simple Network Management Protocol version 3 (SNMPv3), and nine (9) kilobyte jumbo frames;
• provide enough ports to support the equipment provided by the contractor plus an additional 512 ports to replace the existing Ethernet network;
• provide two (2) 10 Gigabit Ethernet ports for the storage node; and
• include a rack mounting kit.
NIST will provide fiber cabling connections between network switches and cabling for existing nodes.

1.6. InfiniBand Interconnect (Cluster 1)
All nodes must be connected via an InfiniBand fabric. The InfiniBand fabric must:
• support up to 40 Gigabit/second per port;
• support OpenIB's OpenSM subnet manager;
• provide enough ports to support the equipment provided by the contractor plus an additional 96 ports to support existing nodes;
• be expandable to at least 256 ports;
• include a rack mounting kit;
• include 32 InfiniBand PCI Express x4 single data rate (SDR) host channel adapters (HCA) and 5-meter cables to retrofit 32 existing nodes; and
• support Red Hat Enterprise Linux 5.
NIST will provide InfiniBand cabling for existing nodes.

1.7. Operating System (Cluster 1)
No operating system is required.

1.8. Login and Gateway Nodes (Cluster 1)
No ancillary nodes (login, gateway, etc.) are required.
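For illustration only (not part of the solicitation): the remote-management functions required in sections 1.3, 2.5, and 3.5 (remote power off/on, reboot, serial console) correspond to standard IPMI-over-LAN operations, which a thin wrapper around the ipmitool utility can drive. The BMC hostname, username, and password below are placeholders, and ipmitool itself is an assumed tool, not one named by the SOW.

#!/usr/bin/env python3
"""Illustrative wrapper around ipmitool for the remote-management functions
named in SOW sections 1.3/2.5/3.5. Credentials and hostnames are placeholders.
Serial-console viewing maps to `ipmitool ... sol activate` (interactive, not
wrapped here)."""
import subprocess
import sys

IPMI_USER = "admin"          # placeholder credential
IPMI_PASS = "changeme"       # placeholder credential

def ipmi(bmc_host: str, *args: str) -> str:
    """Run one ipmitool command against a node's BMC over IPMI 2.0 (lanplus)."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", bmc_host,
           "-U", IPMI_USER, "-P", IPMI_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

def power_status(bmc): return ipmi(bmc, "chassis", "power", "status")
def power_on(bmc):     return ipmi(bmc, "chassis", "power", "on")
def power_off(bmc):    return ipmi(bmc, "chassis", "power", "off")
def reboot(bmc):       return ipmi(bmc, "chassis", "power", "cycle")

if __name__ == "__main__":
    # usage example: python ipmi_ctl.py node001-bmc status
    host, action = sys.argv[1], sys.argv[2]
    actions = {"status": power_status, "on": power_on,
               "off": power_off, "reboot": reboot}
    print(actions[action](host))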
2. Hardware and Software Requirements (Cluster 2)
The contractor shall provide thirty-two (32) compute nodes built from commodity AMD x86-64 or Intel EM64T (or binary equivalent) rack-mounted computers. Thirty (30) compute nodes must have at least 16 GB of memory running at 1066 MHz. Two (2) compute nodes must have at least 32 GB of memory running at 1066 MHz.

2.1. Node Requirements (Cluster 2)
All compute nodes must have the following required properties:
• At least two AMD64 or Intel EM64T (or binary compatible) quad-core processor chips. Each processor chip must utilize no more than 80 watts when running a copy of the High Performance Linpack Benchmark (HPL) on every core. HPL can be found at http://www.netlib.org/benchmark/hpl/.
• Basic input/output system (BIOS) set to "preboot execution environment (PXE) boot" as the first boot option. The following BIOS settings will be set to off or disabled if supported: Intel Turbo Boost and Intel simultaneous multithreading (Hyper-Threading).
• Double data rate 3 (DDR3) synchronous dynamic random access memory (SDRAM) with error-correcting code (ECC).
• Double data rate (DDR) or faster InfiniBand connection built into the motherboard or provided as a host channel adapter (HCA). If not built into the motherboard, the HCA will be attached to a Peripheral Component Interconnect Express (PCI Express) 2.0 8x slot running at full 8x speed.
• At least 500 GB of local disk storage. The disk(s) shall be accessible for replacement without removing the compute node from the rack. The disk(s) must have a spin rate of at least 7200 revolutions per minute (RPM) with at least 8 megabytes (MB) of cache memory.
• Capable of running 64-bit Red Hat Enterprise Linux 5.4 or later. The cluster will be configured with CentOS Linux 5 or later during acceptance testing.
• Dual Gigabit Ethernet with the first interface set to PXE boot.
• Integrated or add-on remote management card (also called a Baseboard Management Controller, or BMC) that is compatible with Intelligent Platform Management Interface (IPMI) 2.0 and supports (at least) the following remote management functions via an Ethernet LAN interface: remote power off; remote power on; remote system (re)boot; remote motherboard BIOS setting; remote motherboard BIOS upgrade/update/flashing; and viewing serial console boot and runtime input/output from a remote management location.
• Power, disk storage, and network indicators.
• 80 PLUS (www.80plus.org) certified power supply with a minimum efficiency of 88% or greater when tested at the following load conditions: 20%, 50%, and 100%.
• Rail kits for mounting in standard racks with square holes.

2.2. Identical Parts (Cluster 2)
All nodes must contain identical parts, including identical firmware revisions/versions and board-level hardware revision numbers.

2.3. Ethernet Network (Cluster 2)
All nodes must be connected via an Ethernet network. The Ethernet network must support all nodes connected at 1 Gigabit/second.

2.4. InfiniBand Interconnect (Cluster 2)
All nodes must be connected via an InfiniBand interconnect. The InfiniBand interconnect must support 40 Gigabit/second to each node and support OpenIB's OpenSM subnet manager.

2.5. Remote Management (Cluster 2)
All nodes must be connected to a management system that supports the installation, configuration, and day-to-day operation of any node in the cluster. The management system must: support IPMI 2.0; support a remote terminal over Secure Shell (SSH) or Hypertext Transfer Protocol Secure (HTTPS) to each node; and provide maintenance tools such as system reset and power on/off.

2.6. Storage System (Cluster 2)
The contractor must provide a storage system with at least 64 terabytes (raw) of storage that meets or exceeds a sustained throughput of 300 MB/second. The storage system must:
• support storage RAID levels 5 and 6;
• have a design with no single points of failure (redundant controllers, hot-swappable power supplies, and cooling fans);
• have 1 TB or 2 TB Serial ATA disk drives;
• support Serial Attached SCSI (SAS) host connections;
• support remote system management via a secure web-based (HTTPS) interface;
• support remote alerts via SYSLOG;
• support Red Hat Enterprise Linux 5; and
• support an automatic storage verify every 24 hours.
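For illustration only (not part of the solicitation): the 300 MB/second sustained-throughput figure in section 2.6 can be spot-checked with a timed sequential write before the formal iozone runs described in section F. The mount point, file size, and pass/fail threshold in this sketch are placeholders.

#!/usr/bin/env python3
"""Illustrative sequential-write spot check against the 300 MB/s figure in
SOW section 2.6. The path is a placeholder; formal acceptance testing uses
iozone per section F."""
import os
import subprocess
import time

TEST_FILE = "/mnt/cluster2-storage/throughput_test.bin"   # placeholder path
BLOCK_MB = 64
COUNT = 256            # 64 MiB x 256 = 16 GiB written

start = time.monotonic()
subprocess.run(
    ["dd", "if=/dev/zero", f"of={TEST_FILE}",
     f"bs={BLOCK_MB}M", f"count={COUNT}",
     "oflag=direct", "conv=fsync"],        # bypass page cache, flush at end
    check=True, capture_output=True)
elapsed = time.monotonic() - start

written_mb = BLOCK_MB * COUNT
rate = written_mb / elapsed
print(f"wrote {written_mb} MiB in {elapsed:.1f} s -> {rate:.0f} MiB/s")
print("PASS" if rate >= 300 else "FAIL (below 300 MB/s target)")
os.remove(TEST_FILE)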
2.7. Operating System (Cluster 2)
No operating system is required.

2.8. Login and Gateway Nodes (Cluster 2)
No ancillary nodes (login, gateway, etc.) are required.

3. Hardware and Software Requirements (Cluster 3)
The contractor shall provide a twelve (12) node turn-key (ready for use) High Performance Computing (HPC) Linux cluster, including a resource manager and HPC cluster software. The cluster will be built from commodity AMD x86-64 or Intel EM64T (or binary equivalent) compute nodes.

3.1. Node Requirements (Cluster 3)
The contractor shall provide twelve (12) compute nodes built from commodity AMD x86-64 or Intel EM64T (or binary equivalent) rack-mounted computers. All compute nodes must have the following required properties:
• At least two AMD64 or Intel EM64T (or binary compatible) quad-core processor chips and 32 GB of memory running at 1333 MHz. Each processor chip must utilize no more than 120 watts when running a copy of the High Performance Linpack Benchmark (HPL) on every core. HPL can be found at http://www.netlib.org/benchmark/hpl/.
• Basic input/output system (BIOS) set to "preboot execution environment (PXE) boot" as the first boot option. The following BIOS settings will be set to off or disabled if supported: Intel Turbo Boost and Intel simultaneous multithreading (Hyper-Threading).
• Double data rate 3 (DDR3) synchronous dynamic random access memory (SDRAM) with error-correcting code (ECC).
• Double data rate (DDR) or faster InfiniBand connection built into the motherboard or provided as a host channel adapter (HCA). If not built into the motherboard, the HCA will be attached to a Peripheral Component Interconnect Express (PCI Express) 2.0 8x slot running at full 8x speed.
• At least 500 GB of local disk storage. The disk(s) shall be accessible for replacement without removing the compute node from the rack. The disk(s) must have a spin rate of at least 7200 revolutions per minute (RPM) with at least 8 megabytes (MB) of cache memory.
• Capable of running 64-bit Red Hat Enterprise Linux 5.4 or later. The cluster will be configured with CentOS Linux 5 or later during acceptance testing.
• Dual Gigabit Ethernet with the first interface set to PXE boot.
• Integrated or add-on remote management card (also called a Baseboard Management Controller, or BMC) that is compatible with Intelligent Platform Management Interface (IPMI) 2.0 and supports (at least) the following remote management functions via an Ethernet LAN interface: remote power off; remote power on; remote system (re)boot; remote motherboard BIOS setting; remote motherboard BIOS upgrade/update/flashing; and viewing serial console boot and runtime input/output from a remote management location.
• Power, disk storage, and network indicators.
• 80 PLUS (www.80plus.org) certified power supply with a minimum efficiency of 88% or greater when tested at the following load conditions: 20%, 50%, and 100%.
• Rail kits.

3.2. Identical Parts (Cluster 3)
All nodes must contain identical parts, including identical firmware revisions/versions and board-level hardware revision numbers.

3.3. Ethernet Network (Cluster 3)
All nodes must be connected via an Ethernet network. The Ethernet network must support all nodes connected at 1 Gigabit/second.

3.4. InfiniBand Interconnect (Cluster 3)
All nodes must be connected via an InfiniBand interconnect. The InfiniBand interconnect must support 40 Gigabit/second to each node and support OpenIB's OpenSM subnet manager.
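For illustration only (not part of the solicitation): the per-node InfiniBand requirements in sections 1.6, 2.4, and 3.4 (active links at 40 Gb/s) can be checked by parsing the output of ibstat. This sketch assumes the infiniband-diags package is installed on the node; it is not tooling named by the SOW.

#!/usr/bin/env python3
"""Illustrative check that a node's InfiniBand ports are Active at the 40 Gb/s
rate called out in SOW sections 2.4/3.4, by parsing `ibstat` output
(infiniband-diags assumed installed)."""
import subprocess

EXPECTED_RATE = 40      # Gb/s per port, per the SOW

out = subprocess.run(["ibstat"], capture_output=True, text=True,
                     check=True).stdout

state = None
ok = True
for line in out.splitlines():
    line = line.strip()
    if line.startswith("State:"):
        state = line.split(":", 1)[1].strip()
    elif line.startswith("Rate:"):
        # In ibstat output, Rate follows State within each port stanza.
        rate = int(line.split(":", 1)[1].strip())
        port_ok = (state == "Active" and rate >= EXPECTED_RATE)
        ok = ok and port_ok
        print(f"port state={state} rate={rate} Gb/s "
              f"{'OK' if port_ok else 'BELOW SPEC'}")

print("PASS" if ok else "FAIL")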
3.5. Remote Management (Cluster 3)
All nodes must be connected to a management system that supports the installation, configuration, and day-to-day operation of any node in the cluster. The management system must: support IPMI 2.0; support a remote terminal over Secure Shell (SSH) or Hypertext Transfer Protocol Secure (HTTPS) to each node; and provide maintenance tools such as system reset and power on/off.

3.6. Operating System (Cluster 3)
All nodes must contain a Red Hat Enterprise compatible operating system.

3.7. Login and Management Nodes (Cluster 3)
The head node must have:
• at least two AMD64 or Intel EM64T (or binary compatible) quad-core processor chips and 24 GB of memory running at 1333 MHz. Each processor chip must utilize no more than 100 watts when running a copy of the High Performance Linpack Benchmark (HPL) on every core. HPL can be found at http://www.netlib.org/benchmark/hpl/;
• a rackmount chassis with redundant power supply and at least 12 drive bays;
• adjustable sliding rails;
• double data rate 3 (DDR3) synchronous dynamic random access memory (SDRAM) with error-correcting code (ECC);
• a DVD+RW drive;
• on-board graphics with 8 MB of video memory;
• two integrated Gigabit Ethernet interfaces capable of full-duplex wire speed;
• a double data rate (DDR) or faster InfiniBand connection built into the motherboard or provided as a host channel adapter (HCA); if not built into the motherboard, the HCA will be attached to a Peripheral Component Interconnect Express (PCI Express) 2.0 8x slot running at full 8x speed;
• at least 250 gigabytes (GB) of Serial ATA (SATA) storage for the operating system, configured in RAID 1; disk(s) must have a spin rate of at least 7200 revolutions per minute (RPM) with at least 8 megabytes (MB) of cache memory; and
• at least 6 TB of usable Serial ATA (SATA) storage for user data, configured in RAID 6; disk(s) must have a spin rate of at least 7200 revolutions per minute (RPM) with at least 8 megabytes (MB) of cache memory.

3.8. Beowulf Cluster Software (Cluster 3)
The contractor will provide the following preconfigured software:
• Message passing libraries (MPICH, MPICH2, OpenMPI).
• GNU scientific compilers, LAPACK libraries, and BLAS libraries.
• Cluster resource manager (Torque or Sun Grid Engine).
• Cluster monitoring utility.
• Scripts for node recovery and cloning.
• Cluster management for remote installation/update of software across the network.
• Scripts for node reboot and node shutdown.

3.9. KVM Solution (Cluster 3)
All nodes (compute nodes and head node) must be connected via a keyboard, video, and mouse (KVM) solution. The KVM solution must provide a 1U rackmount 17" LCD monitor with keyboard and touchpad.

3.10. Uninterruptible Power Supply (UPS) (Cluster 3)
The head node must have a UPS environmental monitoring smart card installed and be connected to a rackmount uninterruptible power supply (UPS). The UPS must provide at least five (5) minutes of runtime after loss of external power. The contractor must also include custom scripts for automatic safe shutdown of the head node prior to the UPS batteries being drained.
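For illustration only (not part of the solicitation, and not the contractor's required scripts): a shutdown monitor of the kind described in section 3.10 could poll the UPS and halt the head node cleanly when runtime gets low. The sketch below assumes Network UPS Tools (NUT) and its upsc client, with a placeholder UPS name; a vendor-specific agent would work equally well.

#!/usr/bin/env python3
"""Illustrative head-node shutdown monitor for SOW section 3.10. Polls a
Network UPS Tools (NUT) daemon via `upsc`; NUT and the UPS name "headups"
are assumptions for illustration only."""
import subprocess
import time

UPS_NAME = "headups@localhost"     # placeholder NUT UPS identifier
MIN_RUNTIME_SECONDS = 300          # SOW requires at least 5 minutes of runtime

def ups_vars() -> dict:
    """Return the UPS status variables reported by `upsc` as a dict."""
    out = subprocess.run(["upsc", UPS_NAME], capture_output=True,
                         text=True, check=True).stdout
    return dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)

while True:
    v = ups_vars()
    on_battery = "OB" in v.get("ups.status", "")          # OB = on battery
    runtime = int(float(v.get("battery.runtime", "0")))   # seconds remaining
    if on_battery and runtime < MIN_RUNTIME_SECONDS:
        # Halt the head node cleanly before the batteries are exhausted.
        subprocess.run(["shutdown", "-h", "now",
                        "UPS battery low - automatic shutdown"], check=False)
        break
    time.sleep(30)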
4. Installation
Onsite installation is not required.

4.1. Installation Requirements for Cluster 1 and Cluster 2
All equipment provided by the contractor for Cluster 1 and Cluster 2 will be installed in NIST-provided standard 19-inch racks by NIST staff. Each rack has adjustable mounting rails with square holes. NIST will provide power to the equipment via 208-volt power distribution units (PDU) with C13 sockets.

4.2. Onsite Installation Requirements for Cluster 3
All equipment provided by the contractor for Cluster 3 must be housed in a contractor-provided rack. The rack must have a sufficient number of 30-amp, 208 V, 3-phase power distribution units (PDU) to power all equipment. In addition, the amperage of the required circuit breakers should be calibrated so that utilization is maximized but remains below 80% of the rated load during normal operation with user applications running. If the equipment in the rack requires more power during power-up (so-called surge power), the rack PDU may not be calibrated to this surge power, but rather to the normal operating power with user applications running. Each rack shall not exceed 15 kilowatts (kW).

5. Component Labeling (Cluster 3)
Every rack, Ethernet switch, Ethernet cable, InfiniBand switch, InfiniBand cable, node, and disk enclosure will be clearly labeled with a unique identifier visible from the front of the rack and/or the rear of the rack, as appropriate, when the rack door is open. These labels will be of high quality so that they do not fall off, fade, disintegrate, or otherwise become unusable or unreadable during the lifetime of the cluster. The font will be sans-serif, such as Arial or Courier, with a font size of at least 9 pt. Nodes will be labeled from the front or rear with a unique serial number for inventory tracking.

6. Hardware and Software Maintenance (Cluster 1, Cluster 2, Cluster 3)
The contractor will supply hardware and software maintenance for each proposed cluster for a one-year period, which begins with cluster acceptance.

7. Optional Additional Compute Nodes and InfiniBand Interconnect
The contractor shall provide two hundred (200) compute nodes including an InfiniBand interconnect. All compute nodes shall be built from commodity AMD x86-64 or Intel EM64T (or binary equivalent) rack-mounted computers. Each compute node will be configured with at least two AMD64 or Intel EM64T (or binary compatible) quad-core processor chips and at least 16 GB of memory running at 1066 MHz.
• All compute nodes must meet the same requirements listed in section 1.1.
• All nodes must be connected via an Ethernet network. The Ethernet network must support all nodes connected at 1 Gigabit/second.
• All nodes must be connected via an InfiniBand interconnect. The InfiniBand interconnect must support 40 Gigabit/second to each node and support OpenIB's OpenSM subnet manager.
• All nodes must be connected to a management system that supports the installation, configuration, and day-to-day operation of any node in the cluster. The management system must: support IPMI 2.0; support a remote terminal over Secure Shell (SSH) or Hypertext Transfer Protocol Secure (HTTPS) to each node; and provide maintenance tools such as system reset and power on/off.
• All equipment provided by the contractor will be installed in NIST-provided standard 19-inch racks by NIST staff.
• NIST will provide power to the equipment via 208-volt power distribution units (PDU) with C13 sockets.

D. GOVERNMENT RESPONSIBILITIES
NIST will also furnish the following:
• Facility drawings and specifications (may be available upon request).
• Power and cooling.
• Racks, where specified.
E. DELIVERABLES
• Delivery of hardware and documentation for Cluster 1 and Cluster 2 to NIST: due after December 13, 2010.
• Delivery of Cluster 3 hardware, software, and documentation (including documentation describing installation instructions, MPI operation instructions, and node recovery and cloning): due two months after contract award.

F. INSPECTION AND ACCEPTANCE CRITERIA
Acceptance will be provided at the Government site, and as duties and responsibilities are completed, the Contractor shall request review and acceptance by the NIST Technical POC. Acceptance testing will be performed as follows:
a. Physical verification of equipment ordered.
   i. Verify quantities and specifications match the invoice.
b. Bring system online.
   i. Verify CPU speed.
   ii. Verify RAM total and speed.
   iii. Verify BIOS settings.
   iv. Verify PXE-enabled Ethernet interface.
   v. Verify IPMI interface exists, is enabled, and receives an IP address by DHCP.
c. Performance and reliability testing.
   i. Boot nodes with memtest86 via PXE and run for 8 hours.
   ii. Run the "HPL" benchmark test for 24 hours.
   iii. Run "iozone" (http://www.iozone.org/) N-to-N sequential write and read benchmark tests.
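For illustration only (not part of the solicitation): a few of the section F.b checks can be automated per node, for example by reading /proc/cpuinfo and /proc/meminfo and confirming the BMC answers locally via ipmitool. The thresholds below are placeholders standing in for the contracted quantities; the formal memtest86/HPL/iozone runs in F.c are separate.

#!/usr/bin/env python3
"""Illustrative per-node check for part of the acceptance criteria in
SOW section F.b (CPU count, RAM total, IPMI presence). Thresholds are
placeholders, not contract values."""
import subprocess

MIN_CORES = 8                      # two quad-core sockets per the SOW
MIN_MEM_KB = 16 * 1024 * 1024      # roughly 16 GB per compute node

# F.b.i: count logical processors listed in /proc/cpuinfo
with open("/proc/cpuinfo") as f:
    cores = sum(1 for line in f if line.startswith("processor"))

# F.b.ii: total memory reported by the kernel (first line is MemTotal)
with open("/proc/meminfo") as f:
    mem_kb = int(f.readline().split()[1])

# F.b.v: the BMC should answer locally through the ipmi driver
ipmi_ok = subprocess.run(["ipmitool", "mc", "info"],
                         capture_output=True).returncode == 0

print(f"cores={cores}  {'OK' if cores >= MIN_CORES else 'FAIL'}")
print(f"memory={mem_kb // 1024} MB  {'OK' if mem_kb >= MIN_MEM_KB else 'FAIL'}")
print(f"ipmi BMC  {'OK' if ipmi_ok else 'FAIL'}")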
PROVISIONS AND CLAUSES
The following provisions and clauses apply to this acquisition. Section 508 of the Rehabilitation Act of 1973, as amended, does not apply to this acquisition because the user does not interact with the EIT products directly.

Provisions:
52.212-1, Instructions to Offerors - Commercial Items
52.212-3, Offeror Representations and Certifications - Commercial Items. Offerors shall complete annual representations and certifications online at http://orca.bpn.gov in accordance with FAR 52.212-3, Offeror Representations and Certifications - Commercial Items. If paragraph (j) of the provision is applicable, a written submission is required.

Clauses:
52.212-4, Contract Terms and Conditions - Commercial Items
52.212-5, Contract Terms and Conditions Required to Implement Statutes or Executive Orders - Commercial Items (July 2010), Alternate II (June 2010), including subparagraphs:
52.203-6, Restrictions on Subcontractor Sales to the Government
52.203-15, Whistleblower Protections Under the American Recovery and Reinvestment Act of 2009
52.204-10, Reporting Subcontract Awards
52.204-11, American Recovery and Reinvestment Act - Reporting Requirements
52.219-4, Notice of Price Evaluation Preference for HUBZone Small Business Concerns
52.219-6, Notice of Total Small Business Set-Aside
52.219-8, Utilization of Small Business Concerns
52.222-3, Convict Labor
52.222-19, Child Labor - Cooperation with Authorities and Remedies
52.222-21, Prohibition of Segregated Facilities
52.222-26, Equal Opportunity
52.222-35, Equal Opportunity for Special Disabled Veterans, Veterans of the Vietnam Era, and Other Eligible Veterans
52.222-36, Affirmative Action for Workers with Disabilities
52.222-37, Employment Reports on Special Disabled Veterans, Veterans of the Vietnam Era, and Other Eligible Veterans
52.225-1, Buy American Act - Supplies (if set aside)
52.225-13, Restriction on Certain Foreign Purchases
52.232-30, Installment Payments for Commercial Items
52.232-33, Payment by Electronic Funds Transfer - Central Contractor Registration
52.247-34, F.O.B. Destination
52.232-18, Availability of Funds
52.217-7, Option for Increased Quantity
All FAR clauses may be viewed at http://acquisition.gov/comp/far/index.html.
1352.201-70, Contracting Officer's Authority
1352.201-71, Contracting Officer's Technical Representative (COTR)
1352.209-73, Compliance with the Laws
1352.233-70, Harmless from Liability
1352.242-71, Post-Award Conference
1352.247-72, Marking Deliverables
NIST Local Clause 04, Billing Instructions

INSTRUCTIONS:
Central Contractor Registration: In accordance with FAR 52.204-7, the awardee must be registered in the Central Contractor Registration (www.ccr.gov) prior to award. Refusal to register shall forfeit award.

Due Date for Quotations: Offerors shall submit their quotations so that NIST receives them not later than 12:00 p.m. Eastern Time on Tuesday, September 14, 2010. Faxed or e-mailed quotations shall not be accepted. All quotations shall be sent to the National Institute of Standards and Technology, Acquisition Management Division, Attn: Wanza Jonjo/Alice Rhodie, Spt Contractor, 100 Bureau Drive, Stop 1640, Gaithersburg, MD 20899-1640. All offerors shall ensure the RFQ number is visible on the outermost packaging. Hand-carried quotes shall be delivered to the National Institute of Standards and Technology (NIST), Building 301, Room 125B, 100 Bureau Drive, Gaithersburg, MD 20899-0001. Because of heightened security, FedEx, UPS, or similar delivery methods are the preferred methods of delivery of quotes. If quotes are hand delivered, delivery shall be made on the actual due date through Gate A, and 48 hours' (excluding weekends and holidays) prior notice shall be provided to Wanza Jonjo, Contracting Officer, 301-975-3978. NIST is not responsible for late delivery due to the added security measures.

ADDENDUM TO PROVISION 52.212-1 - QUOTATION SUBMISSION INSTRUCTIONS
The Offeror shall submit with its offer an original and four copies of each volume, and one (1) CD-ROM that includes the entire quotation, as specified below:

Volume I - Technical:
a. Technical Excellence - NIST will validate that a Contractor's technical proposal satisfies requirements. NIST will assess how well a Contractor's technical proposal addresses NIST's requirements. A Contractor is not solely limited to discussion of these features. A Contractor may propose other features or attributes if the Contractor believes they may be of value to NIST. If NIST agrees, consideration may be given to them in the evaluation process. In all cases, NIST will assess the value of each proposal as submitted.
b. Feasibility and Schedule Credibility - Feasibility of the proposed solution and the proposed schedule are of critical importance to NIST. NIST will assess the feasibility of the Contractor's proposed solution and the proposed delivery schedule, with consideration of the factors listed under the evaluation criteria below.

Volume II - Experience: The quotation shall demonstrate the Offeror's experience performing requirements similar in scope to those of this acquisition. The Offeror may also describe relevant certifications or awards received that relate to its experience performing requirements similar in scope to this acquisition.

Volume II - Past Performance: The Offeror shall provide a list of contracts/orders completed within the last five (5) years, or currently being performed, that are similar in scope to this acquisition.
Include the following information for each similar contract/order:
(a) Customer contract/order number;
(b) Name, address, phone number, and e-mail address of the customer;
(c) Description of the requirements of the contract and the work performed by the Offeror under that contract/order;
(d) Contract type for the contract/order (e.g., firm-fixed-price, labor-hour, time-and-material, cost-plus-fixed-fee);
(e) Total dollar amount of the contract/order, including options; and
(f) A list of problems encountered and corrective actions taken, if any.

Volume III - Price: The cost/price portion of the quotation shall be submitted as a separate submission on the attached Bid Schedule (Attachment 1). NOTE: Quoters shall include a CD-ROM that contains the entire quotation and is readable by Microsoft Office Excel 2007.

ADDENDUM TO 52.212-2 - EVALUATION CRITERIA
Evaluation Factors: Award shall be made to the offeror whose quotation offers the best value to the Government, price and other factors considered. The Government will evaluate quotations based on the following evaluation criteria: 1) Technical, 2) Experience, 3) Past Performance, and 4) Price. All non-price factors, when combined, are significantly more important than price.

NIST will evaluate the Offeror's technical approach against the minimum requirements of the Linux cluster as described in the Statement of Work. NIST intends to select the responsive and responsible Contractor whose proposal satisfies the requirements and contains the combination of price, past performance, technical merit, and experience that offers the best overall value to the Government. NIST will determine the best overall value by comparing differences in technical merit, experience, and past performance offered with differences in price, striking the most advantageous balance between expected performance and the overall price to NIST. Contractors must, therefore, be persuasive in describing the value of their proposed technical approach (performance features), experience, and past performance in enhancing the likelihood of successful performance or otherwise best achieving NIST's objectives. NIST's selection may be made on the basis of the initial proposals. Requirements in the Statement of Work (SOW) are essential to NIST's needs, and a Contractor must satisfactorily propose all requirements in order to have its proposal considered responsive.

The Government will evaluate the offeror's quotation based on the following evaluation criteria, listed in order of importance: 1) Technical, 2) Experience, 3) Past Performance, 4) Price. Note: All non-price factors, when combined, are significantly more important than price. The Government intends to award a contract to the responsible Offeror whose technically acceptable proposal represents the best value to the Government. Only quotations that fully address and meet or exceed the technical requirements of this solicitation will be considered for award.

1) Technical (Technical Excellence and Feasibility/Schedule Credibility):
Technical Excellence: NIST will validate that a Contractor's technical proposal satisfies the minimum requirements. NIST will also assess how well a Contractor's technical proposal addresses NIST's requirements. A Contractor is not solely limited to discussion of these features. A Contractor may propose other features or attributes if the Contractor believes they may be of value to NIST. If NIST agrees, consideration may be given to them in the evaluation process.
In all cases, NIST will assess the value of each proposal as submitted. Using the performance features below, proposals will be rated as (1) meets the minimum requirements, (2) exceeds the minimum requirements, or (3) does not meet all the minimum requirements. NIST will evaluate the following performance features:
• Proposed compute node performance per watt (the energy efficiency of the compute nodes) for Cluster 1 and Cluster 2 in the SOW.
• Proposed hardware and software support model and how this model will provide at least one year of practical system maintenance (i.e., will the maintenance model work in practice?).
• How well the proposed maintenance plan meets or exceeds the stated requirements.
• How favorable the overall system footprint is compared to other proposals.
• Proposed parallel file system design, capacity, scalability, and I/O performance.

Feasibility/Schedule Credibility: Feasibility of the proposed solution and the proposed schedule are of critical importance to NIST. NIST will assess, and rate as meets or exceeds, the feasibility of the Contractor's proposed solution and the proposed delivery schedule, with consideration of the following:
• The likelihood that the Contractor's proposed build, pre-ship, delivery, and acceptance activities can actually happen within the required timeframes.
• The Contractor's ability to comply with the required or proposed delivery and performance schedules.
• The Contractor's ability to diagnose and determine the root cause of hardware and software problems in a timely manner.
• The Contractor's demonstrated ability to meet schedule and delivery promises.

2) Experience: NIST will evaluate the extent of the offeror's experience providing similar equipment. NIST will give preference to offerors who demonstrate that they have experience delivering the same equipment they are proposing for the current requirement. Experience will be evaluated for the relevance and breadth of the offeror's demonstrated experience in designing, building, and installing Linux clusters.

3) Past Performance: The Government will evaluate the Offeror's past performance information and, if appropriate, its proposed subcontractors' past performance to determine its relevance to the current requirement and the extent to which it demonstrates that the offeror has successfully completed relevant contracts in the past five years. In assessing the offeror's past performance information, NIST will evaluate, as appropriate, successful performance of contract requirements; quality and timeliness of delivery of goods and services; quality and scope of the Contractor's performance record; the Contractor's demonstrated ability to meet schedule and delivery dates; and the Contractor's ability to diagnose and determine the root cause of hardware and software problems in a timely manner. Evaluation of this factor will be based on information contained in the technical portion of the quotation and information provided by references. The Government will evaluate past performance information by contacting appropriate references, including NIST references, if applicable. The Government may also consider other available information in evaluating the Offeror's past performance. The Government will assign a neutral rating if the offeror has no relevant past performance information.
4) Price: NIST will evaluate the following price-related factors:
• Reasonableness of the prices of proposed components.
• Proposed price compared to the perceived value.
• The total cost of ownership of the Contractor's proposed solution. Total cost of ownership will consider anticipated power consumption, maintenance schedules, anticipated installation costs, and overall system footprint.

Experience, Past Performance, and Price shall not be evaluated for quotes that are determined technically unacceptable under the Technical evaluation factors.

Non-Governmental Acquisition Support: The Government may utilize non-Governmental acquisition support for processing of quotations submitted in response to this solicitation. These individuals will have access to the quotation information and will assist the Contracting Officer in the procurement process by performing services including, but not limited to, preparing the quotations for submission to the technical evaluators, performing past performance checks, and preparing award documents. The non-Governmental acquisition support will not conduct technical evaluations of any quotation and will not be involved in the final decision as to the awardee under this procurement.

1352.233-71 SERVICE OF PROTESTS (MAR 2000)
An agency protest may be filed with either (1) the Contracting Officer, or (2) at a level above the Contracting Officer, with the agency Protest Decision Authority. See 64 Fed. Reg. 16,651 (April 6, 1999) (Internet site: http://oamweb.osec.doc.gov/conops/reflib/alp1296.htm) for the procedures for filing agency protests at the level above the Contracting Officer (with the Protest Decision Authority). Agency protests filed with the Contracting Officer shall be sent to the following address:
NIST/Acquisition Management Division
ATTN: Wanza Jonjo, Contracting Officer
100 Bureau Drive, MS 1640
Gaithersburg, MD 20899

If a protest is filed with either the Protest Decision Authority or with the General Accounting Office (GAO), a complete copy of the protest (including all attachments) shall be served upon both the Contracting Officer and the Contract Law Division of the Office of the General Counsel within one day of filing with the Protest Decision Authority or with GAO. Service upon the Contract Law Division shall be made as follows:
U.S. Department of Commerce
Office of the General Counsel
Contract Law Division - Room 5893
Herbert C. Hoover Building
14th Street and Constitution Avenue, N.W.
Washington, D.C. 20230
Attn: Mark Langstein, Esquire
FAX: (202) 482-5858
 
Web Link
FBO.gov Permalink
(https://www.fbo.gov/spg/DOC/NIST/AcAsD/SB1341-10-RQ-0512/listing.html)
 
Place of Performance
Address: National Institute of Standards and Technology, 100 Bureau Drive, Gaithersburg, Maryland, 20899-0001, United States
Zip Code: 20899-0001
 
Record
SN02262151-W 20100902/100831235835-26558f27ff41a9cc04d5945c24938385 (fbodaily.com)
 
Source
FedBizOpps Link to This Notice
(may not be valid after Archive Date)
