--------------------------------------------------
Some information related to QLogic.
First, a presentation video:
"Video: QLogic Hot Seat ? Interconnect on the Path to ExaScale"
07.9.2011, insideHPC
"In this video from ISC’11, QLogic's Phil Murphy presents,
"Interconnect on the Path to ExaScale." The fast-paced Hot Seat
Sessions have been a trademark of ISC for many years, providing
HPC vendors an opportunity to present their latest technologies
in front of a board of inquisitors."
Video:
"ISC'11 Hot-Seat - Phil Murphy, QLogic: "Interconnect on the Path to ExaScale""
2011/07/08, VideoISCEvents, 15:00, 480p
Slide:
A large QLogic win from a little while back:
"QLogic Wins Major Deployment in NNSA's Tri-Labs Cluster"
Jun 17, 2011
※ This is the adoption for the Department of Energy National Nuclear Security
Administration (NNSA) Tri-Laboratory Linux Capacity Cluster 2 (TLCC2).
Related articles:
"QLogic Grows InfiniBand Reach, Teams with NNS"
June 20th 2011, SiliconANGEL
"QLogic Sees Supercomputers In The Cloud"
June 17, 2011, InformationWeek
As with the previous procurement, the servers for the Tri-Laboratory Linux
Capacity Cluster 2 (TLCC2) come from Appro (this time both the processors
and the motherboards are Intel).
"Appro Nabs Exclusive Supercomputing Deal with Three US National Laboratories"
6/8/2011
.....
""Intel is excited to collaborate with Appro to deliver this innovative
solution, including next generation Intel Xeon processor, code named
Sandy Bridge-EP, as well as Intel's Server Board which is optimized
for memory bandwidth performance and maximum density, with a flexible
IO configuration." said Lisa Graff, vice president and general manager
of the Enterprise Platforms & Services Division at Intel."
What surprised me a bit is that Appro was founded back in 1991.
20 Year Anniversary
History
NNSA's press release on the TLCC2 procurement:
"NNSA Announces Procurement of Capacity Computing Clusters to Support
Stockpile Stewardship at National Labs"
Jun 8, 2011
Stockpile stewardship
Related articles:
"3.Approが6PFlopsのスパコンを受注"
2011年6月11日版、最近の話題、Ando's Processor Information
"Appro Comes Up Multi-Million Dollar Winner in HPC Procurement for NNSA"
June 08, 2011, HPCWire
"The new Tri-Lab clusters will be outfitted with QLogic QDR InfiniBand
hardware, ditching the Mellanox parts in the TLCC and Peloton systems.
In this case, the labs are favoring QLogic gear based on impressive
scalability and performance results on some of their existing
QLogic-equipped systems, in particular, the 23-thousand-core Sierra
cluster at Lawrence Livermore."
"Appro wins 6 petaflops contract from DOE nuke labs"
Largest deal in company history
8th June 2011, The Register
Incidentally, the Tri-Laboratory Linux Capacity Cluster 2 (TLCC2) RFP is
still publicly available:
Tri-Laboratory Linux Capacity Cluster
Request for Proposal B590550
The Draft Statement of Work (SOW) (89 pages) is included there, but the
final specification does not seem to have been published even on FBO (the
procurement itself is registered).
Judging from the Draft SOW, the system looks close to Hyperion (a Hyperion
configuration diagram is used as an example). For the OS environment, the
Tri-Laboratory Operating System Stack (TOSS), it mandates
SLURM: A Highly Scalable Resource Manager
"single job scalability to 78,848 processors, and a compatibility library
to support TORQUE job submission command syntax." (Page 27)
From this we can estimate the maximum scale of one combined system
(assuming dual-socket, 8-core Sandy Bridge-EP nodes, 78,848 cores would
come to 4,928 nodes). The scheduler is Moab.
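Since TOSS mandates SLURM, with Moab on top plus a TORQUE-syntax
compatibility layer, job submission on such a system would look roughly
like the following. This is a minimal sketch; the job name, task count,
and binary are hypothetical, not taken from the SOW:

  #!/bin/bash
  # Minimal SLURM batch script sketch; names below are hypothetical.
  #SBATCH --job-name=mpi-test
  #SBATCH --ntasks=2048      # a "medium" job per the SOW classes quoted below
  #SBATCH --time=01:00:00
  srun ./my_mpi_app          # srun launches the MPI tasks under SLURM

The TORQUE compatibility library mentioned above would then let users keep
qsub-style submission syntax on top of this stack.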
3 TLCC2 Technical Requirements (Page 33)
"These combined SUs shall be capable of supporting a complex workload
consisting of small (4-256), medium (257-2,048), large (2,049-16,384)
and occasionally full capability (78,848) MPI task count parallel jobs
for Tri-Laboratory classified ASC Program and SSP simulations."
That was the wording (but the quote above says "processors", while here it
says "MPI task count parallel jobs"; processors = cores seems to be the
intent, yet "processor socket" and "processor cores" also appear later,
and without definitions this is hard to pin down).
The definitions do exist after all (Page 85): CPU = core = processor. The
standalone part is defined as the VLSI chip; I believe I would have called
that a socket.
※ This is because, in reality, vendors differ in how they use these terms.
GB (TB) versus GiB (TiB) also has to be made explicit, since it produces a
considerable difference in benchmarks and the like:
GB: 1,000,000,000 bytes
GiB: 1,073,741,824 bytes (a difference of about 7.4%, which is large in a benchmark)
TB: 1,000,000,000,000 bytes
TiB: 1,099,511,627,776 bytes (the gap naturally widens, to about 10%)
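To make the gap concrete, here is a quick check (Python; a minimal sketch):

  # Compare decimal (GB/TB) and binary (GiB/TiB) unit sizes.
  for prefix, power in (("G", 3), ("T", 4)):
      decimal = 1000 ** power   # GB, TB
      binary = 1024 ** power    # GiB, TiB
      gap = (binary - decimal) / decimal * 100
      print(f"1 {prefix}iB = {binary / decimal:.4f} {prefix}B ({gap:.1f}% larger)")
  # Prints: 1 GiB = 1.0737 GB (7.4% larger)
  #         1 TiB = 1.0995 TB (10.0% larger)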