Date: Mon, 16 Jul 2012 19:41:03 +0900
--------------------------------------------------
2012/09/05
"LLNL Takes Lead in Managing the DOE’s FastForward Program"
08.16.2012, insideHPC
http://insidehpc.com/2012/08/16/llnl-takes-lead-in-managing-the-doe%E2%80%99s-fastforward-program/
--------------------------------------------------
The vendors for the DOE/NNSA
Extreme-Scale Computing Research and Development
FastForward Request for Proposals
https://asc.llnl.gov/fastforward/
were recently decided (I could not find an official press release), so I skimmed
the Statement of Work (the document that spells out the project goals,
deliverables, and division of roles). The TOC-style overview of the SOW is long,
so I have put it at the end.
First, about the project itself (the project period is two years):
"The FastForward RFP objective is to initiate partnerships with multiple
companies to accelerate the R&D of critical technologies needed
for extreme-scale computing. It is recognized that the broader
computing market will drive innovation in a direction that may not meet
DOE's mission needs"
The blog post by NVIDIA's Bill Dally, introduced below, seems easier to follow, so quoting from it:
"The two-year contract calls for NVIDIA to conduct research and development
in processor architecture, circuits, memory architecture, high-speed
signaling and programming models to enable an exascale computer at
a reasonable power level. The concept is to use thousands of efficient,
throughput-optimized cores to perform the bulk of the work, with
a handful of latency-optimized cores to perform the residual serial
computation." Bill Dally, NVIDIA
Note: Storage and I/O is added to the scope described above.
Proposals were solicited in the following three areas (deadline: 2012/05/11):
*)Memory
*)Processor
*)Storage and I/O
The results:
AMD: $12.6 M (Memory ($3 M) and Processor ($9.6 M))
Intel: $19.0 M (Memory and Processor)
NVIDIA: $12.4 M (Processor)
Whamcloud *): $Unknown (Storage and I/O)
*) Technology partners: Cray Inc., EMC Corporation, and The HDF Group
Press releases from each company (NVIDIA's is a blog post; no DOE/NNSA press release could be found):
"AMD Selected by U.S. Government to Help Engineer and Shape the Future of
High Performance Computing"
AMD Awarded $12.6 Million by U.S. Department of Energy to Extend Processor
and Memory Expertise to Fuel New Scientific Discoveries
Jul 11, 2012
http://phx.corporate-ir.net/phoenix.zhtml?c=74093&p=irol-newsArticle&ID=1713614&highlight=
"Intel Federal LLC to Propel Supercomputing Advancements for the U.S. Government"
2012/07/13, IntelPR
http://newsroom.intel.com/community/intel_newsroom/blog/2012/07/13/intel-federal-llc-to-propel-supercomputing-advancements-for-the-us-government
"NVIDIA Nabs $12M US DOE Contract for Exascale Research"
Jul 13 2012, Bill Dally, Chief Scientist, NVIDIA
http://blogs.nvidia.com/2012/07/us-awards-12-million-contract-to-nvidia-for-exascale-research/
"Whamcloud Leads Group of HPC Experts Winning DOE FastForward Storage and IO Project"
July 10, 2012
http://www.whamcloud.com/news/whamcloud-leads-group-of-hpc-experts-winning-doe-fastforward-storage-and-io-project/
Commentary articles:
"Intel love bombs US.gov for supercomputing tax dollars"
2nd September 2011, The Register
http://www.theregister.co.uk/2011/09/02/intel_federal_unit_hpc/
*) About Intel Federal LLC
"Earlier this year, Mark Seager, former head of HPC systems at
Lawrence Livermore National Laboratory, was tapped by Intel to be
CTO for the chipmaker's HPC Ecosystems group."
"DOE doles out cash to AMD, Whamcloud for exascale research"
11th July 2012, The Register
http://www.theregister.co.uk/2012/07/11/doe_fastforward_amd_whamcloud/
"US Energy dept starts handing out cash for exaflop superputer quest"
16th July 2012, The Register
http://www.theregister.co.uk/2012/07/16/more_doe_exascale_research_grants/
"DOE Primes Pump for Exascale Supercomputers"
July 12, 2012, Michael Feldman, HPCwire
http://www.hpcwire.com/hpcwire/2012-07-12/doe_primes_pump_for_exascale_supercomputers.html
For the Storage and I/O research proposed by Whamcloud, there is an interview
with their CTO below (I expect to cover it in a later post).
"Interview: Whamcloud Wins FastForward Contract for Exascale R&D"
07.10.2012
http://insidehpc.com/2012/07/10/interview-whamcloud-wins-fastforward-contract-for-exascale-rd/
Note: Intel's announcement of the Whamcloud acquisition came after this.
"Intel Corporation has acquired Whamcloud"
July 13, 2012
http://www.whamcloud.com/news/intel-corporation-has-acquired-whamcloud/
Incidentally, Lustre was funded in 2001 under the
"The Development of New High Performance Computer File System Technology"
FastForward Project
The proposal submitted at that time,
"Lustre Technical Project Summary"
(Attachment A to RFP B514193 Response)
Peter J. Braam, Cluster File Systems and Rumi Zahir, Intel Labs
Date: Version 2, July 29, 2001
can be found with a web search.
Peter J. Braam founded
Parallel Scientific, Inc
http://www.parsci.com/
in 2010 (he is also a Fellow at Xyratex).
"We are currently working with the Haskell development team and major
HPC laboratories world wide on libraries and compiler extensions for
parallel programming."
以下 "FastForward R&D Draft Statement of Work" の TOC的なもの
※長文です。全体論から ATTACHMENTとして 3システム固有の詳細があります。
RFP Attachment 4
"FastForward R&D Draft Statement of Work"
March 29, 2012, 38 pages
https://asc.llnl.gov/fastforward/04_FastForward_SOW_FinalDraftv3.pdf
1 INTRODUCTION
"..... Informed by responses to the exascale RFI, DOE SC and NNSA have
identified three areas of strategic research and development (R&D)
investment that will provide benefit to future extreme-scale applications:
Processor technology
Memory technology
Storage and input/output (I/O)
These R&D activities will initially be pursued through a program called
FastForward. The objective of the FastForward program is to initiate
partnerships with multiple companies to accelerate the R&D of critical
technologies needed for extreme-scale computing. ....."
2 ORGANIZATIONAL OVERVIEW
3 MISSION DRIVERS
4 EXTREME-SCALE TECHNOLOGY CHALLENGES
4.1 Power Consumption and Energy Efficiency
"Achieving the power target for exascale systems is a significant research
challenge. Even with optimistic expectations of current R&D activities,
there is at least a factor of five gap between what we must have and
what current research can provide."
4.2 Concurrency
4.3 Fault Tolerance and Resiliency
"The goal of research in this area is to improve the application MTTI by
greater than 100 times, so that applications can run for many hours.
Additional goals are to improve by a factor of 10 times the hardware
reliability and improve by a factor of 10 times the local recovery from
errors."
4.4 Memory and Storage Architecture
"Without improvements in this area, it is anticipated that systems in
the 2020 timeframe will suffer a 10-time loss in memory size relative
to compute power."
"Systems ten years from now could have a billion cores, tens of petabytes
of memory, and require hundreds of terabytes per second of I/O bandwidth
to hundreds of petabytes of storage. This level of concurrency is well
beyond the design point for today’s HPC file systems. New approaches to
application checkpointing, as well as alternative storage system paradigms,
must be explored."
"Solid-state drives (SSDs), while cost-effective for bandwidth, are
cost-prohibitive for capacity. Future storage systems will no longer
be able to assume an all disk storage device solution, and therefore, we
anticipate solutions that involve hybrid storage or other
technologies/concepts."
4.5 Programmability/Productivity
Parallelism
Distributed Resource Allocation and Locality Management
Latency Hiding
Hardware Idiosyncrasies
Portability
Synchronization Bottlenecks
5 APPLICATION CHARACTERISTICS
6 ROLE OF CO-DESIGN
6.1 Overview
6.2 ASCR Co-Design Centers
6.3 ASC Co-Design Center
6.4 Proxy Apps
7 REQUIREMENTS
7.1 Description of Requirement Categories
"Requirements are either mandatory (designated MR) or target (designated
TR-1, TR-2, or TR-3), ..."
.....
8 EVALUATION CRITERIA
.....
ATTACHMENT 1:
PROCESSOR RESEARCH AND DEVELOPMENT REQUIREMENTS
A1-1 Key Challenges for Processor Technology
A1-1.1 Energy Utilization
A1-1.2 Resilience and Reliability
A1-1.3 On-Chip and Off-Chip Data Movement
A1-1.4 Concurrency
A1-1.5 Programmability
A1-2 Areas of Interest
A1-2.1 Energy Utilization
A1-2.2 Resilience and Reliability
A1-2.3 On-Chip and Off-Chip Data Movement
A1-2.4 Concurrency
A1-2.5 Programmability
A1-3 Performance Metrics
A1-4 Target Requirements
A1-4.1 Energy Utilization (TR-1)
"Solutions should realize greater than 50 GF/Watt at system scale while
maintaining or improving system reliability."
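(My own back-of-envelope arithmetic, not part of the SOW: 50 GF/W is roughly what it
takes to fit a 1-exaflop machine into the often-quoted ~20 MW power envelope.)

  peak_flops = 1.0e18        # 1 exaflop/s
  gf_per_watt = 50.0         # A1-4.1 target
  system_power_mw = peak_flops / (gf_per_watt * 1.0e9) / 1.0e6
  print(f"system power ~ {system_power_mw:.0f} MW")   # -> 20 MW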
A1-4.2 Resilience and Reliability (TR-1)
Mean Time to Application Failure (TR-1)
Wall-Time Overhead (TR-1)
A1-4.3 On-Chip Data Movement (TR-2)
"... at least 4 TB/s to a greater than 100-GB region of memory"
"It is highly desirable for a node to have 320?640 GB of memory"
A1-4.4 Off-Chip Data Movement (TR-2)
"The total bandwidth between a node and the interconnect should be
greater than 400 GB/s."
A1-4.5 Concurrency (TR-2)
"To keep system sizes manageable, the overall performance of a node
should be greater than 10 TF."
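(Again my own arithmetic, assuming a 1-exaflop system built entirely from such nodes;
none of these derived numbers appear in the SOW.)

  node_tf = 10.0             # A1-4.5: > 10 TF per node
  onchip_tbps = 4.0          # A1-4.3: > 4 TB/s to memory
  offchip_gbps = 400.0       # A1-4.4: > 400 GB/s to the interconnect
  nodes = 1.0e18 / (node_tf * 1.0e12)
  print(f"nodes for 1 EF     : {nodes:,.0f}")                                   # ~100,000
  print(f"memory bytes/flop  : {onchip_tbps * 1e12 / (node_tf * 1e12):.2f}")    # 0.40
  print(f"network bytes/flop : {offchip_gbps * 1e9 / (node_tf * 1e12):.3f}")    # 0.040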
A1-4.6 Programmability (TR-1)
ATTACHMENT 2:
MEMORY RESEARCH AND DEVELOPMENT REQUIREMENTS
A2-1 Key Challenges for Memory Technology
A2-1.1 Energy Consumption
A2-1.2 Memory Bandwidth
A2-1.3 Memory Capacity
A2-1.4 Reliability
A2-1.5 Error Detection/Correction/Reporting
A2-1.6 Processing in Memory
A2-1.7 Integration of NVRAM Technology
A2-1.8 Ease of Programmability
A2-2 Areas of Interest
A2-3 Performance Metrics (MR)
A2-3.1 DRAM Performance Metrics
Energy per Bit.
Aggregate Bandwidth per Socket (DRAM or Suitable Replacement for DRAM).
Memory Capacity per Socket.
FIT Rate per Node.
Error Detection.
Processing in Memory.
Programmability/Usability.
A2-3.2 NVRAM Performance Metrics
NVRAM Integration.
Energy per Bit.
Aggregate Bandwidth per Socket.
Capacity per Socket.
FIT Rate per Node.
Error Detection.
Programmability/Usability.
A2-4 Target Requirements
A2-4.1 Energy per Bit
Reduced Energy per Bit (TR-1)
"Energy per bit should be 5 picojoules or less end-to-end."
Greatly Reduced Energy per Bit (TR-2)
"Energy per bit should be 1 picojoule or less end-to-end."
A2-4.2 Aggregate Delivered DRAM Bandwidth
Improved Aggregate Delivered DRAM Bandwidth Per Socket (TR-1)
"Aggregate delivered bandwidth per socket for DRAM or equivalent should be
4 TB/s or greater over a distance of 5 cm or more."
Greatly Improved Aggregate Delivered DRAM Bandwidth Per Socket (TR-2)
"10 TB/s or greater over a distance of 5 cm or more."
A2-4.3 Memory Capacity per Socket
Increased DRAM Capacity per Socket (TR-1)
"Memory capacity per socket for DRAM or equivalent should be 500 GB or
greater with preference for "fast" memory per item 4.2 above."
Greatly Increased DRAM Capacity per Socket (TR-2)
"Memory capacity per socket for DRAM or equivalent should be 1 TB or"
Increased NVRAM Capacity per Socket (TR-1)
"Memory capacity per socket for NVRAM or equivalent should be 1 TB or
greater with preference for greatly improved reliability per items 4.4
and 4.5 below."
A2-4.4 FIT Rate per Node
Improved FIT Rate per Node (TR-1)
"FIT rate per node should not exceed 100."
Greatly Improved FIT Rate per Node (TR-2)
"FIT rate per node shall not exceed 10."
A2-4.5 Error Detection Coverage and Reporting
Reduction in Silent Errors (TR-1)
"Solution should propose and estimate ways to greatly reduce possible
rates of silent errors."
End-to-End Error Detection and Recovery (TR-2)
"Solution should provide complete end-to-end error detection and recovery,
including data paths."
A2-4.6 Advanced Processing in Memory Capabilities
Vector Operations and/or Gather/Scatter (TR-1)
"Processing in memory solutions should include vector operations and/or
gather/scatter."
General Purpose Processor in Memory (TR-2)
"Offeror should implement a general-purpose processor-in-memory solution."
A2-4.7 Enhanced Programmability/Usability (TR-1)
ATTACHMENT 3:
STORAGE AND INPUT/OUTPUT RESEARCH AND DEVELOPMENT REQUIREMENTS
A3-1 Key Challenges for Storage and Input/Output Technologies
A3-2 Areas of Interest
A3-2.1 Reliability/Availability/Manageability
A3-2.2 Metadata
"Further, non-POSIX access methods that could be proposed may imply
the need for new types of metadata. Metadata targets are described
in the environments table above."
A3-2.3 Data
A3-2.4 Quality of Service
"Innovative solutions to this QOS problem will be needed, especially
fundamental solutions that are not an afterthought."
A3-2.5 Security
A3-2.6 New Device/Topology Exploration/Exploitation
A3-3 Performance Metrics (MR)
Mean Time to Application Visible Interrupt.
Peak Burst I/O Rate.
Data Rate for Unaligned/Variable-Sized Requests.
Metadata Transaction Rates.
Metadata Performance Efficiency.
End-to-End Data Protection.
A3-4 Target Requirements
A3-4.1 Reliability/Availability/Manageability
Mean Time to Application Visible Interrupt (TR-1)
"The mean time to application visible interrupt caused by the storage
system measured at full system job size should be no less than 30 days."
Mean Time to Data Loss (TR-1)
"The mean time to unrecoverable data loss caused by the storage system
should be no less than 120 days, and all data lost can be enumerated
by name for system users."
Partial Unavailability (TR-1)
"Mean time to partial unavailability for the storage system should be
no less than 20 days."
Total Unavailability (TR-1)
"Mean time to total unavailability for the storage system should be
no less than 120 days."
End-to-End Data Integrity with Low Overhead (TR-1)
"End-to-end data integrity capability from application interface and back
should be provided with no more than 10-percent overhead on metadata
insert/query and data movement rates, measured on a full
supercomputer-system-sized workload."
A3-4.2 Metadata
Improved Metadata Insert Rates (TR-1)
"Transactionally secure insert rates into metadata store with consistency
provided in less than 10 s should be no less than 1 million/s."
Significantly Improved Metadata Insert Rates (TR-2)
"Transactionally secure insert rates into metadata store with consistency
provided in less than 10 s should be no less than 10 million/s."
Greatly Improved Metadata Insert Rates (TR-3)
"Transactionally secure insert rates into metadata store with consistency
provided in less than 10 s should be no less than 100 million/s."
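(To put these rates in context: my own arithmetic for a naive file-per-process create
storm, using the 1-billion-process figure from section 4.4.)

  processes = 1_000_000_000          # "a billion cores" scenario from section 4.4
  for rate in (1e6, 1e7, 1e8):       # TR-1, TR-2, TR-3 insert-rate targets
      print(f"{rate:11,.0f} inserts/s -> file-per-process create takes {processes / rate:5.0f} s")
  # 1e6/s -> 1,000 s; 1e7/s -> 100 s; 1e8/s -> 10 s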
Improved Metadata Query Rates (TR-1)
"Keyed lookup and retrieval of metadata entries should be
no less than 100 thousand/s."
Significantly Improved Metadata Query Rates (TR-2)
"Keyed lookup and retrieval of metadata entries should be
no less than 1 million/s."
Greatly Improved Metadata Query Rates (TR-3)
"Keyed lookup and retrieval of metadata entries should be
no less than 10 million/s."
Metadata Richness (TR-2)
"...the capability for users to annotate data and find data via multiple
metadata approaches (for example, hierarchies or key values)."
A3-4.3 Data
Improved Data Rates for Unaligned/Variable-Sized Requests (TR-1)
"The storage system should be able to read and write two system memories
having irregular and unaligned data patterns in parallel from
1 billion processes at 100 TiB/s."
Greatly Improved Data Rates for Unaligned/Variable-sized Requests (TR-2)
"1 billion processes at 300 TiB/s"
Improved Data Rates for Many Unaligned/Variable-sized Requests (TR-1)
"The storage system should be able to read and write 20 system memories
having irregular and unaligned data patterns in parallel at 20 TiB/s."
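(A rough checkpoint-time estimate; my own arithmetic, with the 32 PiB system-memory size
an assumed value consistent with the "tens of petabytes" in section 4.4.)

  PIB = 2.0 ** 50
  TIB = 2.0 ** 40
  system_memory_pib = 32.0           # assumed "tens of petabytes" of memory
  for rate_tib_s in (100.0, 300.0):  # TR-1 and TR-2 data-rate targets
      seconds = system_memory_pib * PIB / (rate_tib_s * TIB)
      print(f"dump {system_memory_pib:.0f} PiB at {rate_tib_s:.0f} TiB/s -> {seconds / 60:.1f} min")
  # 100 TiB/s -> ~5.5 min; 300 TiB/s -> ~1.8 min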
A3-4.4 QoS
Efficient Metadata Requests During Large Data Movement (TR-1)
"The storage system should have no more than 25-percent degradation
on metadata query/retrieval operations during storage system
peak read and/or write operations."
Highly Efficient Metadata Requests During Large Data Movement (TR-2)
"... no more than 10-percent degradation ..."
Efficient Multiple Concurrent Large Data Movement (TR-1)
"The storage system should allow each of four parallel concurrent
read/write workloads occupying the entire supercomputer to operate
at 75 percent of the data rate these workloads would receive without
the other concurrent workloads."
Highly Efficient Multiple Concurrent Large Data Movement (TR-2)
"... to operate at 90 percent of the data rate these workloads ..."
A3-4.5 Security
End-to-End Data Protection (TR-1)
"End-to-end data security capability should be provided, from application
interface to storage system and back."
Minimal End-to-End Data Protection Overhead (TR-2)
"End-to-end data security capability should be provided, from application
interface to storage system and back, while meeting all other TR-1 and
TR-2 in this section."
Somewhat related?
[san-tech][02070] Report on the funding history of US HEC file system research
Date: Wed, 10 Feb 2010 00:11:10 +0900
> "Coordinating government funding of file system and I/O research
> through the high end computing university research activity"
>
> ACM SIGOPS Operating Systems Review, Volume 43, Issue 1 (January 2009)
> Gary Grider: Los Alamos National Lab
> James Nunez: Los Alamos National Lab
> John Bent: Los Alamos National Lab
> Steve Poole: Oak Ridge National Lab
> Rob Ross: Argonne National Lab
> Evan Felix: Pacific Northwest National Lab
> http://doi.acm.org/10.1145/1496909.1496910