Loren Data's FBO Daily™

FBO DAILY - FEDBIZOPPS ISSUE OF JULY 13, 2018 FBO #6076
SOURCES SOUGHT

70 -- High Performance Machine Learning/Deep Learning/Data Analytics System Housed in an ISO High Cube Container - Performance Work Statement (PWS) DRAFT - Sources Sought

Notice Date
7/11/2018
 
Notice Type
Sources Sought
 
NAICS
334111 — Electronic Computer Manufacturing
 
Contracting Office
Department of the Army, Army Contracting Command, ACC - APG (W911QX) Adelphi, 2800 Powder Mill Road, Building 601, Adelphi, Maryland, 20783-1197, United States
 
ZIP Code
20783-1197
 
Solicitation Number
W911QX18R0068
 
Archive Date
8/15/2018
 
Point of Contact
Joseph M. Dellinger, Phone: 3013940769
 
E-Mail Address
joseph.m.dellinger2.civ@mail.mil
 
Small Business Set-Aside
N/A
 
Description
Sources Sought PWS Draft

REQUEST FOR INFORMATION (RFI)/SOURCES SOUGHT SYNOPSIS

(1) Action Code: SOURCES SOUGHT

This Request for Information (RFI) is issued for planning purposes only and is not a Request for Proposal (RFP), Invitation for Bid (IFB), or an obligation on the part of the Government to acquire any products or services. In accordance with (IAW) FAR 15.201(e), responses to this RFI are not offers and cannot be accepted by the Government to form a binding contract. The Government will not reimburse respondents for any costs associated with the preparation or submission of the information requested. All costs associated with responding to the RFI will be solely at the responding party's expense. The Government reserves the right to reject, in whole or in part, any contractor's input resulting from this RFI. This RFI does not constitute a solicitation for proposal or the authority to enter into negotiations to award a contract. No funds have been authorized, appropriated, or received for this contemplated effort.

The information provided may be used by the Services in developing an acquisition strategy, a statement of work (SOW), statement of objectives (SOO), or performance work statement (PWS). Some information resulting from this RFI may eventually be included in one or more RFPs, which will be released to industry. Proprietary information should be marked as such. Any subsequent actions resulting from the evaluation of the information provided in response to the RFI may be synopsized at a future date. If synopsized, information detailing the specific requirements of this procurement will be included. Not responding to this RFI does not preclude participation in any future RFP, if any is issued. If a solicitation is released, it will be synopsized on the Federal Business Opportunities (FedBizOpps) website at http://www.fbo.gov. 
It is the responsibility of the potential respondents to monitor these sites for additional information pertaining to this requirement. The purpose of this Sources Sought Notice is to gain knowledge of interest, capabilities, and qualifications of various members of industry, to include the Small Business Community: Small Business, Section 8(a), Historically Underutilized Business Zone (HUBZone), Service-Disabled Veteran-Owned Small Business (SDVOSB), Women-Owned Small Business (WOSB), and Economically Disadvantaged Women-Owned Small Business (EDWOSB). The Government must ensure there is adequate competition among the potential pool of responsible contractors. Small business, Section 8(a), HUBZone, SDVOSB, WOSB, and EDWOSB businesses are highly encouraged to participate.

(2) Date: 11 July 2018
(3) Classification Code: 70
(4) NAICS Code: 334111
(5) NAICS Size Standard: 1,250 employees
(6) Contracting Office Address: 2800 Powder Mill Road, Bldg. 601, Adelphi, Maryland 20783
(7) Subject: Sources Sought for one (1) High Performance Machine Learning/Deep Learning/Data Analytics System Housed in an International Standards Organization (ISO) 40' High Cube Container with Embedded Power Conditioning, Fire Suppression, and Cooling
(8) Proposed Solicitation Number: W911QX-18-R-0068
(9) Sources Sought Closing Response Date: 07/31/2018
(10) Contact Point: Joseph Dellinger, Contract Specialist, joseph.m.dellinger2.civ@mail.mil, (301) 394-0769
(11) A. Objective: To find sources that are qualified to meet the supplies/services as listed in section 11B. Note that the specific requirements in section C are subject to change prior to the release of any solicitation.

B. Performance Work Statement

1. Architecture
This order consists of two systems: one CLASSIFIED SECRET base system, and one UNCLASSIFIED test and development system. 
All proposed hardware and software (released commercially or as open source) must be supported by the offeror at the appropriate level of classification from the time of Government acceptance until the fulfillment of five one-year administration/maintenance options (i.e. 60 consecutive months of production use). All storage used in the system must be solid state (no rotating media is to be utilized).

1.1 Base system - CLASSIFIED
The Base system must be a balanced, commercially-available, production-grade HPC system that contains an appropriate combination of processor, memory, interconnect, input/output (I/O), and operating system (OS) capabilities in order to execute complex, tightly-coupled, large-scale, scientific calculations in support of approximately 50 active users; more specifically, this system must be able to successfully execute a variety of workloads, including jobs which stress all subsystems and which require the simultaneous, tightly-coupled use of the full number of compute nodes within the system.

1.1.1 Compute nodes
The system must contain three (3) distinct (i.e. mutually exclusive) sets of compute nodes: (1) training, (2) inference, and (3) visualization.
1.1.1.1 The processor type (e.g. Cascade Lake, EPYC, POWER9, Thunder) and version (e.g. specific wattage, specific core count, specific performance) must be the same for all (a) training compute nodes, (b) inference compute nodes, (c) visualization compute nodes, and (d) login nodes.
1.1.1.2 All memory within each compute node must be error-correcting code (ECC) memory.
1.1.1.3 Training compute nodes. At least 16 of the compute nodes must be training nodes with at least 256 gibibytes (GiB) of dual in-line memory module (DIMM)-based memory; each must be configured with at least two (2) accelerators, each with at least 32 GiB of in-package memory. Each accelerator must be able to efficiently support double, single, and half precision workloads. A total of at least 128 accelerators are required. 
(Nodes and accelerators above the minimum requirement increase the competitiveness of the proposal.) Each training compute node must also be configured with at least 15 terabytes (TB) of 1-disk-write-per-day (DWPD) solid-state drive (SSD) storage (i.e. must be able to conduct one full-capacity disk write per day without failure while under maintenance).
1.1.1.4 Inference compute nodes. At least 16 of the compute nodes must be inference nodes with at least 128 GiB of DIMM-based memory; each must be configured with at least two (2) accelerators, each with at least 8 GiB of in-package memory. A total of at least 128 accelerators are required. (Nodes and accelerators above the minimum requirement increase the competitiveness of the proposal.) Each inference compute node must also be configured with at least 3 TB of 1-DWPD SSD storage (i.e. must be able to conduct one (1) full-capacity disk write per day without failure while under maintenance).
1.1.1.5 Visualization compute nodes. At least two (2) of the compute nodes must be visualization nodes; each must be configured with at least one general purpose graphics processing unit (GPGPU), at least 32 GiB of in-package GPGPU memory, at least 256 GiB of DIMM-based memory, and a graphics processing unit (GPU) base clock of at least 1300 megahertz (MHz). In addition to being fully integrated into the system's primary interconnect, each node must have two (2) 10 Gigabit Ethernet (GigE) interfaces, each with: (a) support for Internet Protocol version 6 (IPv6), Internet Protocol version 4 (IPv4), dual-stack IPv4/IPv6, and jumbo frames to allow users to directly access these nodes and (b) a Category 7 connection (to each switch specified in 1.1.2.1).

1.1.2 System connectivity
Internal system networks may use IPv4; networks designed for external access by users and administrators must use IPv4, and must fully support IPv4/IPv6 dual-stack functionality and jumbo frames. 
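The training- and inference-partition minimums in 1.1.1.3 and 1.1.1.4 interact: 16 nodes at the per-node floor of two accelerators yields only 32 accelerators, so the 128-accelerator total forces either more nodes or more accelerators per node. A short sanity check makes this explicit (a sketch; the candidate configurations below are illustrative, not taken from the PWS):

```python
# Sanity check of the training-partition minimums in 1.1.1.3.
# Candidate configurations are illustrative, not from the PWS.

MIN_NODES = 16          # training compute nodes
MIN_ACCEL_PER_NODE = 2  # accelerators per node
MIN_ACCEL_TOTAL = 128   # accelerators across the partition
MIN_NODE_MEM_GIB = 256  # DIMM-based memory per node
MIN_ACCEL_MEM_GIB = 32  # in-package memory per accelerator

def meets_training_minimums(nodes, accel_per_node, node_mem_gib, accel_mem_gib):
    """True only if a uniform configuration satisfies every minimum at once."""
    return (nodes >= MIN_NODES
            and accel_per_node >= MIN_ACCEL_PER_NODE
            and nodes * accel_per_node >= MIN_ACCEL_TOTAL
            and node_mem_gib >= MIN_NODE_MEM_GIB
            and accel_mem_gib >= MIN_ACCEL_MEM_GIB)

print(meets_training_minimums(16, 2, 256, 32))  # False: only 32 accelerators
print(meets_training_minimums(16, 8, 256, 32))  # True: 16 x 8 = 128
print(meets_training_minimums(64, 2, 256, 32))  # True: 64 x 2 = 128
```

The same pattern applies to the inference partition (128 GiB node memory, 8 GiB accelerator memory, same node and accelerator counts).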
All network equipment must comply with all applicable laws and Department of Defense (DoD) regulations (i.e. must have appropriate physical and logical protections).
1.1.2.1 Routed Ethernet Network
1.1.2.1.1 Each compute and login node must connect to a single high performance Ethernet network switch at a minimum of 10 gigabits per second (Gb/s). Copper cabling (e.g. Cat) is preferred.
1.1.2.1.2 A minimum of two (2) ports, configured with optics for connection via OM4 multimode fiber, must be available for connection to the Government's network.
1.1.2.1.3 A diagram of the routed Ethernet interconnect must be provided.
1.1.2.2 Facility Storage Network
1.1.2.2.1 Each compute and login node must connect to a single high performance Ethernet network switch at a minimum of 10 Gb/s. Copper cabling (e.g. Cat) is preferred.
1.1.2.2.2 A minimum of two (2) ports, configured with optics for connection via OM4 multimode fiber, must be available for connection to the Government's facility storage network.
1.1.2.2.3 A diagram of the facility storage Ethernet interconnect must be provided.
1.1.2.3 Low Latency Interconnect
1.1.2.3.1 The network design for the system, to include all interfaces on the system, must utilize a single high performance switch. The placement and prioritization of file I/O relative to other traffic must be clearly explained.
1.1.2.3.2 A diagram of each interconnect must also be provided.
1.1.2.3.3 Primary interconnect: All compute nodes must be connected by a common interconnect to allow each compute node to contribute toward the execution of a complex, tightly-coupled, large-scale, scientific calculation.
1.1.2.3.4 At least 100 Gb/s of peak injection bandwidth must be provided for each node.
1.1.2.3.5 The maximum interconnect latency across the system for all possible end-node pairs must be no greater than 3 microseconds (µs), as measured by MPI ping-pong.
1.1.2.4 Out-of-band networks
1.1.2.4.1 A full description of each out-of-band network (e.g. 
administration, monitoring) must be provided.

1.1.3 File systems
1.1.3.1 Three (3) file systems (/work, /home, and /app) must be simultaneously mounted to all login nodes, all data transfer nodes, and all compute nodes.
1.1.3.2 Each file system must (a) be parallel, (b) have no single point of failure in its design, (c) protect against data loss in the event of a double disk failure per logical unit number (or equivalent protection for other storage layouts [e.g. object storage]), and (d) have an ability to monitor low-level I/O characteristics for trend and performance analysis.
1.1.3.3 Multiple simultaneously active metadata servers are required (for performance and redundancy) for the /work file system. Active/passive metadata servers (for redundancy) are sufficient for the /home and /app file systems. The metadata for each file system defined in 1.1.3 must be explicitly placed in SSD storage in order to improve the responsiveness of the metadata server.
1.1.3.4 /work: A distinct parallel file system with at least 1024 TB of formatted usable storage must be provided. Only solid state devices may be utilized.
1.1.3.5 The full duplex I/O bandwidth between the compute nodes and /work must be at least 120 GB/s.
1.1.3.6 /work must be capable of accessing more than 3 billion file system objects at a sustained rate of 250,000 input/output operations per second (IOPS) as measured by MDTEST (see Attachment 2: Benchmark Descriptions and Guidelines).
1.1.3.7 /home: A distinct, highly-available, parallel file system with at least 100 TB of formatted usable storage per effective compute core must be provided. Only solid state devices may be utilized.
1.1.3.8 The full duplex I/O bandwidth between the compute nodes and /home must be at least 20 GB/s.
1.1.3.9 /home must be capable of accessing more than 500 million file system objects.
1.1.3.10 /app: A distinct parallel file system with at least 50 TB of formatted usable storage must be provided. Only solid state devices may be utilized. 
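The metadata rate in 1.1.3.6 is measured with MDTEST run in parallel across the compute nodes. As a rough single-client illustration of what "metadata operations per second" counts (file creates, stats, and unlinks), one could time a loop like the following; the file count is an arbitrary choice, and a local scratch directory stands in for the parallel file system:

```python
# Rough single-client illustration of the metadata-rate metric in
# 1.1.3.6. The contractual measurement is MDTEST run in parallel;
# this sketch only shows which operations the metric counts.

import os
import time
import tempfile

def metadata_ops_per_second(n_files=2000):
    with tempfile.TemporaryDirectory() as d:
        paths = [os.path.join(d, f"f{i}") for i in range(n_files)]
        t0 = time.perf_counter()
        for p in paths:        # create phase
            open(p, "w").close()
        for p in paths:        # stat phase
            os.stat(p)
        for p in paths:        # unlink phase
            os.unlink(p)
        elapsed = time.perf_counter() - t0
        return 3 * n_files / elapsed   # three metadata ops per file

print(f"{metadata_ops_per_second():,.0f} metadata ops/s on local storage")
```

The 250,000 IOPS requirement applies to the aggregate rate sustained by /work under many such clients at once, not to any single node.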
1.1.3.11 The full duplex I/O bandwidth between the compute nodes and /app must be at least 5 GB/s.
1.1.3.12 /app must be capable of accessing more than 100 million file system objects.

1.1.4 Service nodes
1.1.4.1 Login nodes. At least four (4) login nodes must be provided. Each login node must be configured with (1) at least 256 GiB of usable memory, (2) one GPGPU of the same type and version as the GPGPUs used in the visualization compute nodes, (3) at least 2900 GB of 1-DWPD SSD storage, (4) at least two available Universal Serial Bus (USB) 3.x ports, and (5) at least two 10 GigE interfaces; each 10 GigE interface must have (a) a Category 7 (Cat 7) interface and (b) support for jumbo frames. Each login node must be capable of (a) delivering external services (e.g. piping graphics) and (b) conducting unattended tasks on the user's behalf (e.g. running scripts, updating modern automated notebooks).
1.1.4.2 Job scheduling nodes. An appropriate number of job scheduling nodes with (a) suitable job scheduling software and (b) adequate memory, processing capability, and redundancy must be provided; the offeror must describe how its design (including strategies for high availability and spares) supports the service level requirements in 2.1 Administration and Maintenance Provisions During Operational Phase (e.g. the minimum system effectiveness level, the maximum number of system-wide interrupts, the maximum number of user/job interrupts). All standard compute nodes must be capable of executing multi-node MPI jobs launched in containers (e.g. Docker, Shifter, Singularity).
1.1.4.3 Administration nodes. An appropriate number of administration nodes with adequate memory, processing capability, and redundancy (e.g. 
an active/passive pair of system management nodes, an active/passive pair of network management nodes) must be provided; the offeror must describe how its design (including strategies for high availability and spares) supports the service level requirements in 2.1 Administration and Maintenance Provisions During Operational Phase (e.g. the minimum system effectiveness level, the maximum number of system-wide interrupts, the maximum number of user/job interrupts).

1.1.5 Software
All proposed software must be compatible with the proposed hardware. Functionality will be verified during the acceptance testing process (see section 3).
1.1.5.1 At least two (2) suites of high-level language compilers must be provided: (a) GNU Compiler Collection (GCC) (version 6.3 or higher) and (b) a proprietary suite.
1.1.5.2 At least one (1) compiler suite must support all proposed GPGPUs.
1.1.5.3 Each compiler suite must include (at a minimum) FORTRAN 77, Fortran 90, Fortran 2008, C11 with embedded extensions, C++11, C++14, and C++17.
1.1.5.4 Python 2.7 must be provided with NumPy, PyMPI, MPI4Py, SciPy, PyTorch, Keras, Theano, and IPython extensions; Python 3.6 (or higher) must be provided with as many of the following extensions as possible: NumPy, PyMPI, MPI4Py, SciPy, PyTorch, Keras, and IPython.
1.1.5.5 At least two (2) implementations of MPI must be provided; at least one (1) implementation must be compliant with the MPI-3.1 standard (http://www.mpi-forum.org/docs/mpi-3.1/mpi31-report.pdf).
1.1.5.6 For each MPI implementation, the offeror must document any variance from the MPI-3.1 standard, noting any features that cannot be exploited at full system scale.
1.1.5.7 At least one (1) MPI implementation must allow the execution of a job across all compute nodes (i.e. large-memory, visualization, and standard). 
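The interconnect latency ceiling in 1.1.2.3.5 is defined in terms of an MPI ping-pong measurement. Using mpi4py (called for in 1.1.5.4), that measurement can be sketched as follows; the message size and iteration count are illustrative choices, and the script assumes a working MPI launcher (e.g. `mpiexec -n 2 python pingpong.py`):

```python
# Sketch of the MPI ping-pong latency probe behind the 3 microsecond
# ceiling in 1.1.2.3.5. Assumes mpi4py is installed on the target
# system; message size and iteration count are illustrative.

def half_round_trip_us(total_seconds, iterations):
    """Ping-pong latency: half the mean round-trip time, in microseconds."""
    return (total_seconds / iterations) / 2 * 1e6

def main():
    from mpi4py import MPI
    import time
    comm = MPI.COMM_WORLD
    if comm.Get_size() < 2:
        print("run with at least 2 ranks, e.g. mpiexec -n 2")
        return
    rank = comm.Get_rank()
    buf = bytearray(8)   # tiny message, so timing is latency-dominated
    iters = 10000
    comm.Barrier()
    t0 = time.perf_counter()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    elapsed = time.perf_counter() - t0
    if rank == 0:
        print(f"latency: {half_round_trip_us(elapsed, iters):.2f} us "
              f"(requirement: <= 3 us for all end-node pairs)")

if __name__ == "__main__":
    try:
        main()
    except ImportError:
        print("mpi4py not installed; run on the delivered system")
```

The requirement applies to the worst pair of end nodes, so an acceptance-grade measurement would sweep all node pairs rather than a single pair.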
1.1.5.8 Multiprocessing application programming interfaces (APIs) (including OpenMP), standard computational libraries (including BLAS, BLACS, FFTW, LAPACK, and ScaLAPACK), accelerator computational libraries/compilers (including CUDA, OpenCL, and OpenACC), performance libraries (including PAPI), and an integrated development environment must be provided.
1.1.5.9 Application performance instrumentation tools (including at least one (1) code profiler that interfaces with PAPI and at least one (1) tool that provides MPI messaging statistics) must be provided.
1.1.5.10 Machine learning frameworks/tools: TensorFlow.
1.1.5.11 Scripting languages: JavaScript, Go.
1.1.5.12 Any support software (e.g. compilers and libraries) used to produce executables during acceptance testing must be integrated as a functional part of the production system.
1.1.5.13 The system must be able to compile and successfully execute High Performance Computing Modernization Program (HPCMP)-provided authentication software (e.g. HPCMP Kerberos) and HPCMP-provided communication software (e.g. HPCMP SSH).
1.1.5.14 The Government or its agent must be allowed to access and modify the source code for the proposed operating system (for security purposes).
1.1.5.15 The offeror must provide a comprehensive description of the proposed workload management software (e.g. PBSPro, SLURM) and container solution (e.g. Docker, Shifter, Singularity). Singularity is strongly preferred, as it is the only container solution approved for use by HPCMP Security.

1.1.6 Reliability
1.1.6.1 The system design must adequately address the reliability requirements presented in 2.1 Administration and Maintenance Provisions During Operational Phase (e.g. the minimum system effectiveness level, the maximum number of system-wide interrupts, the maximum number of user/job interrupts). 
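One quick acceptance-style check against the Python extension list in 1.1.5.4 is to attempt each import in the delivered interpreter. The module names below follow common import conventions (e.g. PyTorch imports as `torch`) and are assumptions about packaging, not taken from the PWS:

```python
# Report which Python extensions from 1.1.5.4 import cleanly in the
# current interpreter. Import names (e.g. torch for PyTorch) are
# assumed conventions; adjust for the packaging actually delivered.

import importlib

REQUIRED = {
    "NumPy": "numpy",
    "MPI4Py": "mpi4py",
    "SciPy": "scipy",
    "PyTorch": "torch",
    "Keras": "keras",
    "IPython": "IPython",
}

def check_extensions(required=REQUIRED):
    """Return {display name: True/False} for each required extension."""
    status = {}
    for name, module in required.items():
        try:
            importlib.import_module(module)
            status[name] = True
        except ImportError:
            status[name] = False
    return status

for name, ok in check_extensions().items():
    print(f"{name:8s} {'OK' if ok else 'MISSING'}")
```

Run once under each provided interpreter (the PWS requires both Python 2.7 and Python 3.6 or higher) to confirm the two environments match the list.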
1.1.6.2 Node health checkers must be provided to ensure each node is capable of proper execution prior to being assigned work; the offeror must provide a comprehensive description of (a) the proactive mechanisms it will employ to ensure the pool of available compute nodes is fully prepared to receive work and (b) the overall anticipated effectiveness of these methods.

1.1.7 Security Features
1.1.7.1 The offeror must minimize supply chain risk, and must provide a detailed explanation of (a) how the use of foreign parts will be prevented or minimized, (b) how the impact of any employed foreign parts will be mitigated, and (c) what steps will be taken to protect the system from tampering between system manufacturing and system installation.
1.1.7.2 Radio frequency emitting devices (e.g. WiFi, LTE, GSM) must not be included in the system.

1.1.8 Infrastructure/facilities
1.1.8.1 The system must meet CBEMA and ITIC power quality standards.
1.1.8.2 480 volts (V) is preferred; 208V may be used.
1.1.8.3 It is preferred that the system's power configuration allow the login, administration, networking, and storage infrastructure to be supported by dual, redundant facility power feeds (in order for this infrastructure to remain available when one source of power is offline).
1.1.8.4 The architectural design must allow the system to remain in production even if one or more of the compute racks are powered down; in the event that all compute racks are powered down, the file systems, networks, and login nodes must remain available to users.
1.1.8.5 The system's design must allow individual racks to be powered down without impacting the power to other racks.
1.1.8.6 All trays necessary to neatly organize the system's cables must be provided.
1.1.8.7 The system must be installed at the Army Research Laboratory's (ARL) Building 120 facility at Aberdeen Proving Ground (APG), MD. 
1.2 Test and development system (TDS) - UNCLASSIFIED
The test and development system (TDS) must be configured with the same hardware and software as the base system to allow (A) effective system software testing, (B) user code porting, and (C) base system configuration management. To facilitate (A) through (C), the TDS will likely remain unclassified throughout its lifecycle.
1.2.1 Exceptions to the base system architecture
1.2.1.1 The system must contain at least five (5) compute nodes: at least two (2) training nodes, at least two (2) inference nodes, and at least one (1) visualization node.
1.2.1.2 The system must also contain at least two (2) login nodes, at least two (2) job scheduling nodes, and at least two (2) administration nodes.
1.2.1.3 /work must contain at least 25 TB of formatted usable disk storage per effective compute core.
1.2.1.4 The full duplex I/O bandwidth between the compute nodes and /work must be at least 3 megabytes per second (MB/s) per effective compute core.
1.2.1.5 /home and /app are not required.
1.2.1.6 Air or water cooling may be used.
1.2.1.7 480V 3-phase power is preferred. 208V 3-phase power is acceptable.
1.2.1.8 The system must be installed in ARL's Building 120 facility at APG, MD.

2. Administration, Maintenance, and Training
2.1 Administration and maintenance provisions during operational phase
2.1.1 The offeror must provide on-site (a) system administration, (b) preventive hardware inspection and maintenance, (c) preventive software inspection and maintenance, (d) remedial hardware inspection and maintenance, and (e) remedial software inspection and maintenance at the appropriate level of classification over a period of 60 consecutive months (in five one (1)-year options) for all contracted systems.
2.1.2 The design features and functionality present at the time of acceptance must be maintained until the administration and maintenance contract is no longer in effect. 
2.1.3 All personnel requiring administrative access to any system and/or supporting network/security component must have an adjudicated tier 5 background investigation.
2.1.4 All personnel requiring administrative system access must comply with the latest version of DoD Directive 8140 and must have Information Assurance Technician (IAT) Level II or IAT Level III certification.
2.1.5 All personnel requiring administrative system access must be identified to the contracting officer's representative (COR) and approved by the Information Systems Security Manager (ISSM) prior to being granted system access.
2.1.6 Performance of administration and maintenance may require the offeror's personnel to access and use data and information that is proprietary to a Government agency or Government contractor; the offeror and the offeror's personnel must not release data/information developed or obtained in performance of this effort, except as authorized by the COR.
2.1.7 The offeror must not use, disclose, or reproduce proprietary data that bears a restrictive legend, other than as required in the performance of this effort, and must assume responsibility for protecting the confidentiality of Government records; the offeror must notify its personnel in writing that (a) such information may not be disclosed and (b) failure to comply may result in fines or imprisonment.
2.1.8 The U.S. Government will maintain full data rights in accordance with (IAW) Federal Acquisition Regulation (FAR) 52.227-14, Rights in Data-General (i.e. the Government will have exclusive rights over any software developed and data produced under this effort).
2.1.9 The offeror must identify all positions necessary to address the Government's requirements in a formal staffing plan; each identified position will be a key position. 
2.1.10 The offeror must submit to the COR all requested key personnel substitutions at least 30 days in advance; all such requests (a) must be in writing, (b) must provide a detailed explanation of the circumstances necessitating the proposed substitution, (c) must contain a complete resume for the replacement candidate, and (d) must be approved by the COR.
2.1.11 All offeror personnel must be capable of working independently and possess the necessary knowledge, skills, and expertise to administer and maintain HPCMP systems; the offeror will be responsible for training its personnel appropriately at its own expense and/or replacing personnel as necessary to address the Government's requirements.
2.1.12 Any changes in staffing must be reported in the offeror's monthly administration and maintenance activity report.
2.1.13 All computing resources (e.g. laptops for system engineers) which are brought to the Government's worksite must comply with all applicable laws and DoD regulations (i.e. must have appropriate physical and logical protections, must not be connected to any non-public Government network, and must not be used to store Government-protected data); the ISSM will enforce compliance and will determine the procedures that must be followed in cases of non-compliance.
2.1.14 Any electronic device with the ability to transmit and/or receive must not be carried into classified areas by offeror personnel unless approved by the ISSM.
2.1.15 Any Government worksite space needed to accommodate personnel and/or parts storage must be noted in advance; the Government will assume by default that there is no requirement for space.
2.1.16 Spare parts needed to repair the base systems may not be obtained from the TDS without permission from the COR.
2.1.17 For each contracted system, the monthly system effectiveness level (see Appendix A: Key Definitions) must not fall below the offeror's proposed level. The minimum acceptable offeror-proposed level is 97%. 
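The 97% floor in 2.1.17 is easier to reason about in hours. The controlling definition is in Appendix A (not reproduced in this notice); assuming the conventional ratio of available time to scheduled time, a 720-hour month tolerates roughly 21.6 hours of counted downtime:

```python
# Monthly system effectiveness against the 97% floor in 2.1.17.
# ASSUMPTION: effectiveness = (scheduled - downtime) / scheduled;
# the binding definition is in Appendix A of the PWS.

MIN_EFFECTIVENESS = 0.97

def monthly_effectiveness(scheduled_hours, downtime_hours):
    return (scheduled_hours - downtime_hours) / scheduled_hours

# A 30-day month scheduled 24x7 is 720 hours.
print(round(monthly_effectiveness(720, 20), 4))             # 0.9722
print(monthly_effectiveness(720, 20) >= MIN_EFFECTIVENESS)  # True
print(monthly_effectiveness(720, 30) >= MIN_EFFECTIVENESS)  # False
```

Whether approved preventive maintenance windows count as downtime is exactly the kind of detail Appendix A would settle.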
2.1.18 A schedule for preventive maintenance (see Appendix A: Key Definitions) must be coordinated with the Government.
2.1.19 For any pre-scheduled outage, all users must be notified in advance by no less than the greater of (a) 10 days or (b) five times the number of outage days.
2.1.20 For the base system, the monthly number of system-wide interrupts (see Appendix A: Key Definitions) must not exceed one (1).
2.1.21 For the base system, the monthly number of user/job interrupts (see Appendix A: Key Definitions) must not exceed ten (10).
2.1.22 Proactive mechanisms must be employed to ensure the pool of available compute nodes is fully capable of executing work; if uncharacteristic performance and/or adverse node behavior is observed, the offeror must adjust its mechanisms accordingly to remove the underlying issue.
2.1.23 Each interrupt/outage must be logged and disclosed to the Government (a) within 24 hours and (b) in a format (developed in coordination with the Government) that facilitates the calculation of the monthly system effectiveness level, the monthly number of system-wide interrupts, and the monthly number of user/job interrupts.
2.1.24 A description of the offeror's planned support escalation procedures (including an organizational chart, contact information [for each person identified in the chart], expected problem escalation steps, and expected problem escalation timelines) must be provided to the Government in the offeror's proposal.
2.1.25 The offeror's escalation approach must clearly, logically, and appreciably augment its ability to address (a) the Government's service level requirements (e.g. minimum monthly system effectiveness level, maximum monthly number of system-wide interrupts, maximum monthly number of user/job interrupts), (b) the Government's functionality requirements (e.g. creating and maintaining a productive user environment), and (c) the Government's security requirements (e.g. 
timely generation and implementation of patches, timely reconfiguration of system hardware/software).
2.1.26 The Government must be notified when problems are escalated, and the Government may, at its discretion, demand escalation.
2.1.27 The offeror's operating system and software support plan, detailing the planned patch frequency and upgrade schedule, must be provided to the Government in the offeror's proposal. The support plan must also describe how the offeror will support patches that are required to meet DoD timelines that may not match up with the normal patch cycle. For example, U.S. CYBERCOM Task Order 17-0019 requires all HIGH and CRITICAL IAVAs to be patched within 21 days.
2.1.28 All new versions, updates, upgrades, and patches for (a) operating systems, compilers, parallel environments, libraries, integrated development environments, performance tools, and code profilers, (b) software proposed as part of the base systems, (c) firmware for internal system components, and (d) other relevant software must be (1) compatible with HPCMP-provided standard user environment software, (2) tested on the TDS prior to implementation on the base systems, and (3) installed/configured for production use prior to expiration of operating system vendor support for the prior version, unless the Government's ISSM approves a plan of action and milestones (POA&M), in a format provided by the ISSM, excusing a short delay in upgrading from software components without vendor support.
2.1.29 The COR and contracting officer must be notified, both verbally and in writing, of any problem or potential issue affecting the performance and/or functionality of any contracted system; this notification does not relieve the offeror of its responsibility to correct problems.
2.1.30 The user environment on any contracted system must be conducive to broad usage (i.e. users must be able to easily develop, test, and execute parallel code). 
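The advance-notice rule in 2.1.19 reduces to taking the greater of a fixed 10-day floor and five days of notice per day of outage; as a one-line sketch:

```python
# Minimum advance notice (in days) for a pre-scheduled outage,
# per 2.1.19: the greater of (a) 10 days or (b) 5 x outage days.

def min_notice_days(outage_days):
    return max(10, 5 * outage_days)

print(min_notice_days(1))  # 10 -- the 10-day floor governs
print(min_notice_days(2))  # 10 -- 5 x 2 ties the floor
print(min_notice_days(4))  # 20 -- the multiple governs
```

So the multiplier only bites for outages of three days or longer.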
2.1.31 The offeror must install (or assist in installation of) applications and system software, as necessary, to create a productive user environment.
2.1.32 All contracted systems must be patched and reconfigured (e.g. to maintain compliance with para 3.2.2) in accordance with all applicable laws, DoD regulations, and compliance deadlines in order to address vulnerabilities identified by the following sources: (a) U.S. Cyber Command (through Information Assurance Vulnerability Alerts [IAVAs] and/or notices of Common Vulnerabilities and Exposures [CVEs]), (b) the HPCMP Cybersecurity Service Provider (CSSP), and (c) the ISSM; the ISSM will enforce compliance and must be provided with a Plan of Action and Milestones (POA&M), in a format specified by the ISSM, for review/approval in all cases of non-compliance.
2.1.33 System vulnerabilities not addressed by established DoD compliance deadlines must be documented in a POA&M (in a format specified by the ISSM) and approved by the ISSM.
2.1.34 The offeror must follow the guidance provided by the ISSM to ensure the contracted systems are properly secured.
2.1.35 The Government worksite's configuration management and control processes for requesting and reporting system hardware and software changes must be followed.
2.1.36 All contracted systems must be maintained 24 hours per day, seven (7) days per week, including holidays.
2.1.37 The response time for remedial maintenance (see Appendix A: Key Definitions) must be less than two (2) hours for all contracted systems; response time is calculated as the difference between (a) the time the repair is initiated by on-site qualified maintenance personnel and (b) the time the problem occurs.
2.1.38 The risk of losing Government data while performing preventive hardware/software maintenance must be minimized.
2.1.39 The risk of losing Government data while performing remedial hardware/software maintenance must be minimized. 
2.1.40 The Government may permit degaussing, deleting, and/or declassification of data in accordance with Government-approved procedures for returning devices; however, the Government reserves the right to retain devices permanently or to destroy them, regardless of maintenance or warranty provisions.
2.1.41 The Government will retain all failed storage devices (e.g. solid state drives) with no post-award adjustment in price.
2.1.42 Maintenance provisions do not apply to repairs required due to the fault or negligence of the Government or acts of God/nature.
2.1.43 The offeror must provide a monthly report assessing all administration and maintenance activities in a suitable format (developed in coordination with the Government).
2.1.44 Contract manpower reporting: The offeror must submit manpower reporting data to the eCMRA website on an annual basis in accordance with DARS Clause 52.237-9001. Contractor manpower reporting costs should be assessed based on the requirements presented in this Performance Work Statement (PWS) and should be included in the price of administration and maintenance (i.e. should not be priced separately).

2.2 Administration and maintenance provisions during system installation, system configuration, capability testing (CT), and effectiveness-level testing (ELT)
2.2.1 All offeror personnel designated for the purposes of installing, configuring, and/or testing HPCMP systems must have SECRET clearances.
2.2.2 All personnel requiring administrative access to any system and/or supporting network/security component must have an adjudicated tier 5 investigation.
2.2.3 All personnel requiring administrative system access must comply with the latest version of DoD Directive 8140 and must have Information Assurance Technician (IAT) Level II or IAT Level III certification. 
2.2.4 All personnel requiring administrative system access must be identified to the contracting officer's representative (COR) and approved by the Information Systems Security Manager (ISSM) prior to being granted system access.
2.2.5 All personnel who will participate in system installation/configuration must be identified to the COR prior to system delivery.
2.2.6 All personnel who will perform capability testing and/or effectiveness-level testing must be identified to the Government as part of the test readiness review (TRR) defined in 3. Acceptance Testing.
3. Acceptance Testing
The functionality, usability, performance, and reliability of the contracted systems must be demonstrated via a formal acceptance process, which will consist of the following stages: (a) preparation for testing, (b) a test readiness review (TRR), (c) capability testing (CT), and (d) effectiveness-level testing (ELT).
3.1 General
3.1.1 Any hardware or software added or substituted to satisfy a particular test must also pass all other acceptance test criteria; a new bill of materials must be provided to reflect the added or substituted hardware or software.
3.1.2 The Government is relieved from all risks of loss or damage to equipment prior to final acceptance, except when the loss or damage is due to the negligence of the Government.
3.2 Preparation for testing
3.2.1 An Acceptance Test Plan (ATP) for the base system must be prepared in coordination with the Government, addressing both CT and ELT; the ATP must be approved by the COR.
3.2.2 The offeror must provide a comprehensive list of all software (HPCMP and other), including version numbers, that is installed at the time of the TRR.
3.2.3 All contracted systems must meet all physical and information security requirements set by the DoD, the Government worksite, and the ISSM, to include DoD Cybersecurity policies (e.g. DoDI 8500.01 and USCYBERCOM TASKORD 17-0019) and applicable DISA security technical implementation guides (STIGs).
Applicable Security Technical Implementation Guides (STIGs) include those for any routers, internal databases, and internal web servers. For Unix-type operating system versions for which a version-specific STIG is not available, the general Unix STIG will be used. All deviations must be documented (including the justification for the deviation) and approved by the ISSM. This includes items such as SELinux enforcing mode, IPtables, auditing, and host-based security system (HBSS) implementation.
3.2.4 Once each contracted system has (1) zero critical or high vulnerabilities without an associated ISSM-approved POA&M, as determined by an authenticated Assured Compliance Assessment Solution (ACAS) scan, and (2) zero undocumented STIG findings, as determined by the HPCMP Radix tool plus manual or automated evaluation of any STIG items not covered by the Radix tool, the system will be deemed sufficiently secure by the ISSM and may be connected to the Government site's network. The Government must subsequently be granted access to the contracted systems over a period of no less than 14 calendar days (per system) to allow the Government to facilitate certain aspects of site integration (e.g. establishing a connection between each system and ARL's facility storage system).
3.2.5 The offeror must install permanent (as opposed to temporary) licenses for all software on the contracted systems.
3.2.6 Government inspection of all contracted systems relative to 1. Architecture must be completed. All necessary hardware and software must be installed and configured to provide (a) a fully functional base system and (b) a fully functional TDS.
3.2.7 A DD Form 250 (Material Inspection and Receiving Report) for the TDS must be prepared and submitted to the Government.
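The connection gate in 3.2.4 is a two-part boolean check. The sketch below is purely illustrative; the data structures and function name are invented, and real scan results would come from ACAS and the Radix tool.

```python
# Illustrative sketch of the PWS 3.2.4 network-connection gate: a system
# may connect only when (1) no critical/high ACAS vulnerability lacks an
# ISSM-approved POA&M and (2) no STIG finding is undocumented.
# The dict layouts here are invented for illustration.
def may_connect(acas_findings, stig_findings) -> bool:
    """acas_findings: dicts with 'severity' and 'poam_approved'.
    stig_findings: dicts with 'documented' (bool)."""
    open_vulns = [f for f in acas_findings
                  if f["severity"] in ("critical", "high")
                  and not f["poam_approved"]]
    undocumented = [f for f in stig_findings if not f["documented"]]
    return not open_vulns and not undocumented

# A high finding covered by an approved POA&M does not block connection:
acas = [{"severity": "high", "poam_approved": True},
        {"severity": "medium", "poam_approved": False}]
stig = [{"documented": True}]
print(may_connect(acas, stig))  # True
```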
3.2.8 For the base system, the High Performance LINPACK (HPL) portion of the HPC Challenge Benchmarks must be successfully executed using all CPUs and accelerators capable of double-precision arithmetic; this execution must be coordinated with the Government to facilitate the Government's measurement of LINPACK power consumption.
3.2.9 All scripts necessary to track the following during the effectiveness-level testing (ELT) of each base system must be prepared and submitted to the Government: (a) the number of system-wide interrupts, (b) the number of user/job interrupts, (c) the system effectiveness level, and (d) the system utilization.
3.3 Test readiness review (TRR)
3.3.1 Proof of completion for each requirement listed in 3.2 Preparation for Testing must be presented to the COR; concurrence must be recorded in a test readiness review (TRR) checklist.
3.3.2 The completed TRR checklist must be signed and approved by the COR.
3.4 Capability testing (CT)
3.4.1 Initiation of testing
3.4.1.1 Once the Contracting Officer has approved the TRR checklist, the Government will schedule capability testing (CT) in coordination with the offeror.
3.4.1.2 CT may begin as late as ten (10) calendar days after the TRR checklist has been approved (to allow the Government ample time to arrange for qualified representatives to participate in CT).
3.4.2 Functionality/usability demonstrations for each base system
3.4.2.1 The functionality and usability of each distinct set of proposed compute nodes must be demonstrated; the functionality/usability of the GPGPUs and any other accelerators must be explicitly tested.
3.4.2.2 System access via Kerberos and PKI must be demonstrated over the Defense Research and Engineering Network (DREN) for the base system.
3.4.2.3 Remote system access via ssh must be demonstrated.
3.4.2.4 Remote file exchange via secure ftp and scp must be demonstrated.
3.4.2.5 Aggregate usable memory per node (in GiB) must be demonstrated for each node type.
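The HPL run in 3.2.8 is typically sized by choosing a problem dimension N that fills most of aggregate memory with the double-precision matrix. The sizing rule below is a common community rule of thumb, not a PWS requirement; the 80% fill fraction and the block-size rounding are assumptions.

```python
import math

# Illustrative only: a common rule of thumb (not specified in the PWS) for
# choosing the HPL problem dimension N so the N x N double-precision matrix
# (8 bytes per element) fills roughly 80% of aggregate system memory.
def hpl_problem_size(total_mem_gib: float, fill_fraction: float = 0.80) -> int:
    mem_bytes = total_mem_gib * 1024**3
    n = int(math.sqrt(fill_fraction * mem_bytes / 8))
    return n - (n % 8)  # round down to a multiple of a typical block size

# For an assumed 512 GiB of aggregate memory, N comes out in the low 230,000s.
print(hpl_problem_size(512))
```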
3.4.2.6 Aggregate formatted usable disk storage (in GB) must be demonstrated for each file system.
3.4.2.7 Access to each file system from the login nodes, data transfer nodes (if applicable), and compute nodes must be demonstrated.
3.4.2.8 The functionality and usability of all software must be demonstrated.
3.4.2.9 The read portion of IOR(sequential) must be successfully executed for each file system (a) per the guidelines provided in Benchmark Descriptions and Guidelines (see Attachment 6), (b) using at least 95% of the compute nodes, and (c) using the proposed software container solution.
3.4.2.10 The write portion of IOR(sequential) must be successfully executed for each file system (a) per the guidelines provided in Benchmark Descriptions and Guidelines (see Attachment 6), (b) using at least 95% of the standard compute nodes, and (c) using the proposed software container solution.
3.4.2.11 The ability of each base system's power configuration to allow the login, administration, and storage infrastructure to be supported by dual, redundant facility power feeds must be demonstrated, if proposed.
3.4.2.12 The ability of each base system to remain in production even if one or more of the compute racks are powered down must be demonstrated.
3.4.2.13 The ability of the file systems and login nodes for each base system to remain available to users in the event that all compute racks are powered down must be demonstrated.
3.4.2.14 The ability to power down individual racks for each base system without impacting the power to other racks must be demonstrated.
3.4.2.15 The ability to halt and restart each base system in two (2) hours or less must be demonstrated.
3.4.3 Performance demonstrations for each base system
3.4.3.1 Testing rules
3.4.3.1.1 The guidelines provided in Benchmark Descriptions and Guidelines (see Attachment 2) must be followed.
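The 95% node-coverage floor in 3.4.2.9/3.4.2.10 above can be sketched as a small calculation. This Python sketch merely assembles an illustrative IOR command line; the flag values and transfer/block sizes are placeholders, and the authoritative parameters come from Attachment 6.

```python
import math

# Illustrative sketch: build an IOR(sequential) invocation that covers at
# least 95% of the compute nodes, per PWS 3.4.2.9/3.4.2.10. All flag
# values below are placeholder assumptions, not Government-specified.
def ior_command(total_nodes: int, procs_per_node: int,
                target: str, write: bool) -> str:
    nodes = math.ceil(0.95 * total_nodes)  # 95% coverage floor
    np = nodes * procs_per_node
    mode = "-w" if write else "-r"
    return (f"mpirun -np {np} ior -a POSIX {mode} "
            f"-t 1m -b 16g -F -o {target}/ior.dat")

# For an assumed 100-node system at 4 ranks/node, 95 nodes (380 ranks) suffice:
print(ior_command(100, 4, "/work", write=True))
```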
3.4.3.1.2 Codes must not be modified for the purpose of improving performance; any modifications must be disclosed to the Government and must be justified.
3.4.3.1.3 All tests must be compiled on the base system being tested.
3.4.3.1.4 All tests must be submitted using the proposed container solution.
3.4.3.1.5 A copy of all files used during testing must be provided to the Government.
3.4.3.1.6 The input files, output files, and scripts for each test must not be modified, except as allowed in Benchmark Descriptions and Guidelines (see Attachment 2).
3.4.3.1.7 For time-to-solution demonstrations, the elapsed wall-clock execution time reported by the echo statements embedded in each test must be used as the time-to-solution for that test; this time-to-solution will only be valid if the test output passes the correctness criteria for that test.
3.4.3.1.8 For time-to-solution demonstrations, any number of cores per node may be used, but all cores within each employed node must be counted toward the total number of cores for the job.
3.4.3.2 Demonstrations for the base system
3.4.3.2.1 Guaranteed times-to-solution (in seconds) for TENSORFLOW using RESNET-50 against the CIFAR-10 data set, using 1 and the configured number of accelerators per training node, must be demonstrated for each training node.
3.4.3.2.2 Guaranteed times-to-solution (in seconds) for TENSORFLOW using RESNET-50 against the ImageNet data set, using 1 and the configured number of accelerators per training node, must be demonstrated for each training node.
3.4.3.2.3 The guaranteed full duplex I/O bandwidth (in GB/s) between the compute nodes and /work must be demonstrated with IOR(sequential) using all configured nodes.
3.4.3.2.4 The guaranteed full duplex I/O bandwidth (in GB/s) between the compute nodes and /home must be demonstrated with IOR(sequential) using all configured nodes.
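Rule 3.4.3.1.7 derives time-to-solution from echo statements embedded in each test. A minimal sketch follows, assuming a hypothetical BENCH_START/BENCH_END log format; the real format is defined by the Government-supplied benchmark scripts, not by this example.

```python
import re

# Hypothetical sketch of PWS 3.4.3.1.7: extract the elapsed wall-clock
# time from timestamps echoed by a benchmark. The BENCH_START/BENCH_END
# marker format is invented purely for illustration.
def time_to_solution(output: str) -> float:
    """Return elapsed seconds between the echoed start and end timestamps."""
    start = float(re.search(r"BENCH_START\s+([\d.]+)", output).group(1))
    end = float(re.search(r"BENCH_END\s+([\d.]+)", output).group(1))
    return end - start

log = "BENCH_START 1000.0\n...training output...\nBENCH_END 1742.5\n"
print(time_to_solution(log))  # 742.5
```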
3.4.3.2.5 The guaranteed full duplex I/O bandwidth (in GB/s) between the compute nodes and /app must be demonstrated with IOR(sequential) using all configured nodes.
3.4.3.2.6 The guaranteed number of I/O operations per second (IOPs) for /work must be demonstrated with MDTEST using all configured nodes.
3.4.3.2.7 The guaranteed number of I/O operations per second (IOPs) for /home must be demonstrated with MDTEST using all configured nodes.
3.4.3.2.8 The guaranteed number of I/O operations per second (IOPs) for /app must be demonstrated with MDTEST using all configured nodes.
3.4.3.2.9 The maximum interconnect latency across the system for all possible end-node pairs must be measured using MPI Ping-Pong.
3.4.4 Completion of testing for the base system
3.4.4.1 If CT cannot be completed within five (5) calendar days of the Government-approved commencement date, the base system under test will have failed CT; another test readiness review (TRR) must be conducted before CT may be repeated for this system.
3.4.4.2 If CT is successfully completed within five (5) calendar days of the Government-approved commencement date, the COR will approve and sign the Capability Testing Completion block of the ATP for the base system under test.
3.5 Effectiveness-level testing (ELT)
3.5.1 Initiation of testing for the base system
3.5.1.1 Effectiveness-level testing (ELT) may begin immediately after the Contracting Officer approves the results of the CT for the base system under test.
3.5.2 Testing rules for the base system
3.5.2.1 All scripts submitted to the Government during the test preparation phase must be used to track the following in five (5) minute increments: (a) the number of system-wide interrupts, (b) the number of user/job interrupts, (c) the system effectiveness level, and (d) the system utilization.
3.5.2.2 A system effectiveness level of 97% or greater must be demonstrated over a period of 30 consecutive calendar days.
3.5.2.3 During the period of testing, the observed number of system-wide interrupts must not exceed 1.
3.5.2.4 During the period of testing, the observed number of user/job interrupts must not exceed 10.
3.5.2.5 All interrupts and outages must be logged and disclosed to the Government.
3.5.2.6 The 30-day window of observation may be delayed; however, the time between the start of CT and the end of ELT must not exceed 60 consecutive calendar days.
3.5.2.7 During ELT, the system's workload must be augmented as necessary to ensure the system utilization (for each full day of testing) does not fall below 75%; the Government may (at its discretion) waive this requirement with approval from the COR.
3.5.2.8 The contents of the offeror's portion of the ELT workload must be approved by the COR.
3.5.2.9 During ELT, the Government must have full access to the system and must be able to execute jobs on the system.
3.5.2.10 At the completion of ELT, there shall be zero (0) undocumented critical, high, or medium vulnerabilities as determined by a final authenticated ACAS scan.
3.5.2.11 At the completion of ELT, there shall be zero (0) undocumented STIG findings as determined by the HPCMP Radix tool plus manual or automated evaluation of any STIG items not covered by the Radix tool. All findings will either be remedied or documented in a POA&M prior to final acceptance of the base system.
3.5.3 Completion of testing for the base system
3.5.3.1 Should effectiveness-level testing (ELT) not be successfully completed within 60 consecutive calendar days of the start of CT, the Government may (a) intervene and specify an alternative approach to bring closure to acceptance testing or (b) unilaterally reject the base system under test.
3.5.3.2 If ELT is successfully completed within 60 consecutive calendar days of the start of CT, the COR will sign and approve the Effectiveness-Level Testing Completion block of the ATP.
3.6 Final acceptance for the base system
3.6.1 Acceptance testing for the base system under test will be completed immediately after the COR approves the ELT results.
3.6.2 A DD Form 250 (Material Inspection and Receiving Report) for the base system must be prepared and submitted to the Government.
3.6.3 Final acceptance for the base system will occur immediately after the Government signs the DD Form 250.
4. Cube Container
4.1 Ability to package the above system in a 40-foot ISO high cube container; the container solution must include:
4.1.1 All required cooling equipment. All heat must be exchanged directly to the atmosphere (no external liquid cooling or condenser units are feasible in this application).
4.1.2 All required power conditioning (UPS)
4.1.3 All required power distribution to equipment racks
4.1.4 Racks for IT equipment
4.1.5 One rack at the end of the row must be reserved for Government networking equipment requiring 208 VAC power, not to exceed 5 kW
4.1.6 Fire suppression system with external alarm contacts for integration with an external building alarm
4.1.7 Ability to monitor the facility via a Niagara or Metasys building management system and/or a site scan system
4.1.8 Details of how containers can be secured to operate at the SECRET level
4.1.9 Details about what ARL should consider in preparing the site for installation
4.1.10 Highlight areas where customization of a standard product may be needed
4.2 Integrated uninterruptible power system (UPS)
4.2.1 Specify available input voltage configurations (480 VAC input preferred)
4.2.2 Specify output power voltage and distribution options
4.2.3 Specify available redundancy features, number of battery strings, and type of battery
4.2.4 Specify type of UPS (e.g. dual conversion)
4.3 Integrated cooling solution
4.3.1 Specify effective ambient temperature range
4.3.2 Specify maximum and sensible tonnage
4.4 Point and distributed loading capacities
4.5 Number of standard racks, number of available rack units, depth of racks (or square footage available for racks)
4.6 Wind load rating
4.7 Fire rating and fire suppression specifications
4.8 Security measures
4.9 A commercial price list, to include any quantity or Government discounts available
4.10 Maintenance requirements and availability of vendor maintenance
4.11 Site preparation requirements
4.12 Typical installation time required on a prepared concrete or asphalt pad
4.13 Links to any more detailed information that might be relevant to ARL
Appendix A: Key Definitions
A system-wide interrupt is an event in which significant components or portions of the system fail or accumulate failure; examples include the unavailability or failure of any of the following: (a) 20% or more of the compute nodes, (b) one or more of the file systems, (c) most of the login nodes, (d) a large portion of one or more of the interconnects, (e) the queuing system, or (f) any hardware or software impacting 20% or more of the users actively working on the system or passively running batch jobs.
A user/job interrupt is an event denying at least one user the ability to access the system, access files, submit jobs, and/or run jobs at the levels of performance required during acceptance testing. Multiple observed interrupts that (a) begin and end within a single 2-hour window and (b) are caused by the same isolated failure of a system component are counted as one user/job interrupt. A failure that spans more than two (2) hours is counted as multiple user/job interrupts: one interrupt for every two (2) hour increment.
The system effectiveness level is the ratio of operational use time to scheduled use time, truncated to a whole-number percentage.
Scheduled use time is the ideal number of compute core-hours (over a specified period of time) minus the number of compute core-hours associated with excusable delays. Excusable delays are defined in FAR Clause 52.212-4(f) and (for the purposes of this acquisition) also include (a) any planned outage that has been scheduled in advance by the Government and approved by the COR, and (b) any period during which the system is not performing due to the fault of the Government.
Operational use time is the number of compute core-hours that were available for productive computing over a specified period of time; any compute core-hour associated with a system-wide interrupt, a user/job interrupt, or an excusable delay cannot contribute toward operational use time. Operational use time is a subset of scheduled use time.
Remedial maintenance is a repair that is required due to a product malfunction. Preventive maintenance includes routine inspection, adjustment, and/or replacement of products in order to proactively ensure the viability of the system.
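The Appendix A accounting rules above reduce to simple arithmetic. A minimal sketch follows; the function names and example core-hour figures are invented, while the counting and truncation rules come from the definitions above.

```python
import math

# Illustrative sketches of the Appendix A definitions.

def user_job_interrupts(duration_hours: float) -> int:
    """A failure counts as one user/job interrupt per two (2) hour
    increment; anything within a single 2-hour window counts once."""
    return max(1, math.ceil(duration_hours / 2))

def effectiveness_level(operational_core_hours: float,
                        scheduled_core_hours: float) -> int:
    """Ratio of operational use time to scheduled use time,
    truncated to a whole-number percentage."""
    return int(100 * operational_core_hours / scheduled_core_hours)

print(user_job_interrupts(1.5))               # 1: within one 2-hour window
print(user_job_interrupts(5.0))               # 3: three 2-hour increments
# Invented figures: 699,000 of 720,000 scheduled core-hours truncates to 97%,
# which meets the 97% threshold required during ELT.
print(effectiveness_level(699_000, 720_000))  # 97
```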
Appendix B: Performance Requirements Specifications
Each requirement below is followed by its Standard (S) and Acceptable Quality Level (AQL).
2.1.4 The design features and functionality present at the time of acceptance must be maintained until the administration and maintenance contract is no longer in effect.
S - no instances of non-compliance
AQL - remedy of any non-compliance within four calendar days
2.1.19 For each contracted system, the monthly system effectiveness level must not fall below 97%.
S - above offeror's proposed monthly effectiveness level
AQL - offeror's proposed monthly effectiveness level
2.1.22 For each contracted system, the monthly number of system-wide interrupts must not exceed 1.
S - less than 1
AQL - 1
2.1.23 For each contracted system, the monthly number of user/job interrupts must not exceed 10.
S - less than 10
AQL - 10
2.1.24 Proactive mechanisms must be employed to ensure the pool of available compute nodes is fully capable of executing work; if uncharacteristic performance and/or adverse node behavior is observed, the offeror must adjust its mechanisms accordingly to remove the underlying issue.
S - no instances of non-compliance
AQL - remedy of any non-compliance within four calendar days
2.1.25 Each interrupt/outage must be logged and disclosed to the Government (a) within 24 hours and (b) in a format (developed in coordination with the Government) that facilitates the calculation of the monthly system effectiveness level, the monthly number of system-wide interrupts, and the monthly number of user/job interrupts.
S - 100% accuracy within 24 hours
AQL - 99% accuracy within 24 hours
2.1.30 All new versions, updates, upgrades, and patches for (a) operating systems, compilers, parallel environments, libraries, integrated development environments, performance tools, and code profilers, (b) software proposed as part of the base systems, (c) firmware for internal system components, and (d) other relevant software must be (1) compatible with HPCMP-provided standard user environment software, (2) tested on the TDS prior to implementation on the base systems, and (3) installed/configured for production use prior to expiration of operating system vendor support for the prior version, unless the Government's Information Systems Security Manager (ISSM) approves a plan of action and milestones (POA&M), in a format provided by the ISSM, excusing a short delay in upgrading from software components without vendor support.
S - no instances of non-compliance
AQL - remedy of any non-compliance within four calendar days
2.1.32 The user environment on any contracted system must be conducive to broad usage (i.e. users must be able to easily develop, test, and execute parallel code).
S - no instances of non-compliance
AQL - remedy of any non-compliance within four calendar days
2.1.34 All contracted systems must be patched and reconfigured (e.g. to maintain compliance with para 3.2.2) in accordance with all applicable laws, DoD regulations, and compliance deadlines in order to address vulnerabilities identified by the following sources: (a) U.S. Cyber Command (through Information Assurance Vulnerability Alerts [IAVAs] and/or notices of Common Vulnerabilities and Exposures [CVEs]), (b) the HPCMP Cybersecurity Service Provider (CSSP), and (c) the ISSM; the ISSM will enforce compliance and must be provided with a POA&M in a format specified by the ISSM for review/approval in all cases of non-compliance.
S - no instances of non-compliance
AQL - remedy of any non-compliance within four calendar days
2.1.40 The response time for remedial maintenance must be less than two hours for all contracted systems.
S - two hours or less
AQL - two hours
2.1.46 The offeror must provide a monthly report assessing all administration and maintenance activities in a suitable format (developed in coordination with the Government).
S - 100% accuracy
AQL - 99% accuracy
All other provisions in 2.1 Administration and Maintenance Provisions During Operational Phase must be followed.
S - no instances of non-compliance
AQL - remedy of any non-compliance within four calendar days
C. Responses: The Government is requesting interested sources to provide sufficient information, in the form of a white paper, which demonstrates recent experience (within the last two (2) to three (3) years) in providing the broad range of capabilities described under Section III in this RFI. Interested sources are requested to submit a brief Summary of Demonstrated Capabilities (no more than 30 pages) that addresses each of the interest areas listed in Section III. The response shall include previous scenarios, examples of past projects, and deliverables that demonstrate experience and success in each of the interest areas. If a particular interest area falls outside of previous experience, please acknowledge that area as being new and describe how it would be approached. In addition, interested sources are highly encouraged to include new or novel approaches to addressing the areas of interest described in Section III.
In addition to providing a response for each interest area, interested sources must also address the following Market Survey Questionnaire:
1. Administrative Information
a. Company Name
b. Mailing Address and Website
c. Commercial and Government Entity (CAGE) Code
d. North American Industry Classification System (NAICS) number, with associated business size and eligibility under U.S. Government socio-economic programs and preferences (i.e., type of business: large, small, small-disadvantaged, women-owned, veteran-owned, or 8(a))
e. Data Universal Numbering System (DUNS) number
f. Location of facility(ies)
g. Subcontracting Plan (if applicable)
2. Person(s) Responding to RFI
a. Name
b. Title
c. Company Responsibility/Position
d. Telephone Number/Fax Number
e. Email Address
3. Security
a. Do you have a Facility Security Officer (FSO)? If so, provide contact information.
b. Is your company foreign owned, either directly or indirectly? If so, what is the country of origin, and is the company fully or partially owned?
c. Do you have foreign national employees? If so, what is the country of origin?
d. Has your company implemented an Operations Security (OPSEC) Plan?
e. Does your company have experience with Program Protection planning?
f. Is your company currently able to transmit unclassified data in accordance with U.S. Government data encryption requirements?
g. Does your company have experience with compliance with DD Form 254 - Contract Security Classification Specification? If so, what procedures does your company use to enforce DD 254 compliance with subcontractors?
h. Does your company have the ability to operate and maintain a secure shared data collection site (manpower, estimated labor hours, contract deliverables, etc.) between contractor and Government? If so, what systems are used?
i. Any potential sources shall have, at a minimum, a valid accreditation from the Defense Intelligence Agency (DIA) (or equivalent accreditation authority) at the TOP SECRET/Sensitive Compartmented Information (TS/SCI) level with TS/SCI safeguarding capability. The following questions shall also be answered as part of your submittal:
i. Does your company have an accredited Sensitive Compartmented Information Facility (SCIF)? If yes, who is the accreditation authority?
ii. When does the SCIF accreditation expire? Please provide a copy of the accreditation.
iii. Does your company have experience with Special Access Programs?
4. Core Competencies/Core Lines of Business for DoD-centric Research and Development
a. Briefly describe your company's demonstrated core competencies or existing lines of business under the realm of DoD Research & Development.
Interested sources may only provide one (1) submission per CAGE code. Multiple capabilities may be submitted within one (1) submission. Additional capabilities must be provided within the confines of the page limits provided in Section V below. If your company has different business sectors which hold different CAGE codes, each sector can submit a capability package. In addition, each CAGE code must provide separate facility clearance documentation. If teaming, use the Prime Contractor's CAGE code in the response. Submission of proprietary and other sensitive information must be marked and identified with disposition instructions. Submitted material will not be returned.
Responses to this RFI are due no later than 11:59 AM Eastern Standard Time on 31 July 2018. All submissions and questions must be electronically submitted to joseph.m.dellinger2.civ@mail.mil. All submissions must have the RFI number listed in the subject line. No phone calls will be accepted. No hand-delivered packages will be accepted. Do not send classified information to the email address above.
The entire email submission package can be no more than 8 MB in size, as well as no more than 20 pages (to include cover page, table of contents, response, and attachments). Pages over the 20-page limit may not be reviewed or considered for any efforts that may result from this RFI. The Government does not require the Contractor to have TS//SCI facilities at the time of the White Paper/Sources Sought submission. The information provided and received in response to this announcement is subject to the conditions set forth in FAR 52.215-3 -- Request for Information or Solicitation for Planning Purposes. Questions concerning this RFI may be directed to Joseph Dellinger at joseph.m.dellinger2.civ@mail.mil. Please be advised that .zip and .exe files cannot be accepted. If your submission package exceeds the size limit stated, please contact the aforementioned point of contact.
 
Web Link
FBO.gov Permalink
(https://www.fbo.gov/notices/db9201ce477e3e8f817e7c93b5387fad)
 
Place of Performance
Address: United States (U.S.) Army Research Laboratory (ARL), Aberdeen Proving Ground, Maryland, 21005, United States
Zip Code: 21005
 
Record
SN04986934-W 20180713/180711231004-db9201ce477e3e8f817e7c93b5387fad (fbodaily.com)
 
Source
FedBizOpps Link to This Notice
(may not be valid after Archive Date)
