Loren Data's SAM Daily™

FBO DAILY ISSUE OF FEBRUARY 07, 2002 FBO #0067
SOLICITATION NOTICE

A -- Beowulf cluster computer system with 16 processors

Notice Date
2/5/2002
 
Notice Type
Solicitation Notice
 
Contracting Office
101 Strauss Ave.
 
ZIP Code
00000
 
Solicitation Number
N0017402Q0074
 
Response Due
2/27/2002
 
Archive Date
3/29/2002
 
Point of Contact
Carol M. Bolton, (301) 744-6789, boltoncm@ih.navy.mil, or Renee M. Brown, (301) 744-6653, brownrm@ih.navy.mil
 
E-Mail Address
Email your questions to boltoncm@ih.navy.mil
 
Description
Specifications: Beowulf Cluster Computer System with 16 Processors

The requirement is for a cluster computer with 8 computing nodes, each node having 2 processors and 2 GB of RAM. The system should be fully integrated, consisting of rack-mounted compute modules in a standard 19" rack cabinet. The master node may be included as one of the 8 computing nodes or may be configured as a separate node. The performance of the computing nodes must meet or exceed the specifications described below. Floating-point performance is the most important criterion, and the performance of each compute node must match or exceed that of a dual Athlon motherboard with the AMD 762/768 chipset, two 1.4 GHz Athlon processors, and a 133 MHz DDR memory bus, with a peak throughput of 2.4 gigaflops.

Slave Nodes: Each slave node must include:
- 1U case and 300 W power supply
- Tyan K7 dual Athlon motherboard, or the equivalent or better
- Two 1.4 GHz Athlon processors with 384 KB cache, or the equivalent or better
- 2 GB PC2100 DDR SDRAM (4 each 512 MB DIMMs)
- 80 GB 7,200 RPM IDE hard drive
- 3.5" 1.44 MB floppy drive
- Dual 10/100 PCI Ethernet
- Linux operating system installed
- MPI, PVM, PBS, and cluster management software (e.g., SCYLD) installed

Master Node: If the master node is one of the eight computing nodes, it must have the same specifications as the slave nodes above, with the exceptions noted below. If it is supplied as a separate node, it may have lower performance, adequate for system administration and management. In either case, four 72 GB Ultra160 SCSI hard disks in a RAID configuration with redundancy should be supplied in place of the 80 GB IDE disk. In addition, the master node must have a 48x or better CD-ROM drive, a graphics card, and a 17" monitor, keyboard, and mouse (not rack-mounted).

Interconnection: Nodes must be interconnected with a wire-speed, non-blocking Fast Ethernet switch equivalent to the HP ProCurve 2424M (24 ports, for future expansion). In addition, they must be interconnected via a high-performance Myricom Myrinet network. The Myrinet configuration should include a Myrinet 2000 32-port switch enclosure (M3-E32) to allow for future expansion, with one M3-SW16-8F 8-port line card, an M3-M monitoring card, and appropriate M3-BLANK cards. Each compute node should have a Myrinet 2000 fiber-link PCI interface (M3F-PCI64B-2, 133 MHz) and appropriate fiber cables.

Uninterruptible Power Supply: There must be sufficient UPS capacity to provide a run time of 15 minutes or more in the event of a power outage. The UPS(s) should be rack-mounted, with the capability to add external batteries for extended run time; input should preferably be from 120 V, 30 A circuits. Acceptable configurations include two (2) Tripp Lite 3 kVA SU3000RT3U UPS backup systems or two (2) APC Smart-UPS SU3000RMXL3U units.

Rack Cabinet: At least a 45U heavy-duty rack cabinet with cooling fans and power strips.

System Software:
- Operating system: pre-loaded and configured Red Hat Linux
- Pre-loaded and configured PVM, MPI, and PBS
- Pre-loaded and configured cluster management and monitoring utilities (e.g., SCYLD)

Fortran and C Compilers: Portland Group Cluster Development Kit, 2-user license, including:
- PGHPF parallel Fortran for clusters
- PGF90 parallel SMP Fortran 90
- PGF77 parallel SMP Fortran 77
- PGCC parallel SMP C/C++
- PGDBG symbolic debugger
- PGPROF performance profiler
- Up to 2 simultaneous users
- Executables for up to 16 processors

General Requirements:
- The system must be fully integrated, tested, and burned in; all software and hardware must be fully installed and tested.
- On-site system installation, integration, and training must be provided.
- Full system documentation must be provided.
- The system must carry a three-year warranty.
- The system must be planned for future expansion to at least 24 nodes (48 processors).
- The vendor must be ISO 9002 certified, and a copy of the certificate must be attached to the quote.
- The vendor must be Myrinet authorized.
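
For orientation, the software requirements above (pre-loaded MPI plus the PGCC C compiler) mean the delivered system must be able to build and launch MPI programs across all 16 processors. The following is a minimal sketch of such an acceptance check, not part of the solicitation; the source file name and launch commands are illustrative and depend on the MPI distribution actually supplied.

/* mpi_check.c: print each rank's host name so a 16-process run
 * can be verified to spread across all 8 dual-processor nodes.
 * Uses only the standard MPI C API named in the specification. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char host[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total processes (16)   */
    MPI_Get_processor_name(host, &len);    /* node this rank runs on */
    printf("rank %d of %d on %s\n", rank, size, host);
    MPI_Finalize();
    return 0;
}

A typical build and 16-process launch would be "mpicc -o mpi_check mpi_check.c" followed by "mpirun -np 16 mpi_check", though the exact commands vary with the installed MPI implementation.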
 
Web Link
http://ih.navy.mil/contracts
 
Record
SN20020207/00023001-020205213414-W (fbodaily.com)
 
Source
FedBizOpps.gov
