Reprints from my posting to SAN-Tech Mailing List and ...

2011/06/10

[san-tech][02840] "Tuning HDF5 for Lustre File Systems", IASDS10 (2010/09/24)

Date: Sun, 28 Nov 2010 06:42:20 +0900
--------------------------------------------------
The work behind the following NERSC press release from June 2010 was
presented at IASDS10 in September 2010:

"NERSC and HDF Group Optimize HDF5 Library to Improve I/O Performance"
 June 28, 2010
  http://www.lbl.gov/cs/Archive/news062810.html

  "There are several layers of software that deal with I/O on high
   performance computing (HPC) systems. The filesystem software, such
   as Lustre or GPFS, is closest to the hardware and deals with the
   physical access and storage issues. The I/O library, such as HDF5
   or netCDF, is closest to the scientific application, and maps the
   application's data abstractions onto storage abstractions. I/O
   middleware, such as MPI-IO, coordinates activities between the other
   layers."



  "Getting these three layers of software to work together efficiently
   can have a big impact on a scientific code's performance. That's
   why the U.S. Department of Energy's (DOE's) National Energy Research
   Scientific Computing Center (NERSC) has partnered with the nonprofit
   Hierarchical Data Format (HDF) Group to optimize the performance of
   the HDF5 library on modern HPC platforms."
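
The layered stack quoted above maps directly onto code. A minimal sketch, assuming a parallel (MPI-enabled) HDF5 build; the file name is illustrative:

  /* All three layers in one call chain: the application talks to HDF5,
     HDF5 maps its data model onto MPI-IO operations, and MPI-IO issues
     the requests that the file system (Lustre, GPFS, ...) services. */
  #include <mpi.h>
  #include <hdf5.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      /* Middleware layer: route HDF5 file access through MPI-IO. */
      hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
      H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, MPI_INFO_NULL);

      /* Library layer: all ranks create/open the same file collectively. */
      hid_t file = H5Fcreate("demo.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

      H5Fclose(file);
      H5Pclose(fapl);
      MPI_Finalize();
      return 0;
  }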

Paper:
"Tuning HDF5 for Lustre File Systems", IASDS10, 2010/09/24
 Mark Howison, Quincey Koziol, David Knaak, John Mainzer and John Shalf
  http://www.mcs.anl.gov/events/workshops/iasds10/howison_hdf5_lustre_iasds2010.pdf

Agenda,
IASDS10 (Workshop on Interfaces and Abstractions for Scientific Data Storage)
  http://www.mcs.anl.gov/events/workshops/iasds10/agenda.php
※ There are various other presentations as well.
IEEE Cluster 2010
  http://www.cluster2010.org/index.html

Slides:
"Tuning HDF5 for Lustre File Systems", IASDS10, 2010/09/24
 Mark Howison, Quincey Koziol, David Knaak, John Mainzer and John Shalf
  http://www.nersc.gov/projects/presentations/HDF5_DonofrioNERSC.pdf

From the paper:
Abstract
  "We describe our recent work to optimize the performance of the HDF5
   and MPI-IO libraries for the Lustre parallel file system. We selected
   three different HPC applications to represent the diverse range of
   I/O requirements, and measured their performance on three different
   systems to demonstrate the robustness of our optimizations across
   different file system configurations and to validate our optimization
   strategy. We demonstrate that the combined optimizations improve HDF5
   parallel I/O performance by up to 33 times in some cases - running
   close to the achievable peak performance of the underlying file system
   - and demonstrate scalable performance up to 40,960-way concurrency."
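
The tuning centers on matching HDF5's file layout to Lustre's stripe boundaries so that concurrent writers do not straddle or contend for the same OST. A minimal sketch of the kind of knobs involved, using HDF5's H5Pset_alignment together with the Lustre striping hints understood by Cray and ROMIO MPI-IO implementations; the concrete values (8 stripes of 4 MiB, 1 MiB alignment threshold) are illustrative assumptions, not the paper's settings:

  #include <mpi.h>
  #include <hdf5.h>

  /* Build a file-access property list tuned for a Lustre mount.
     Stripe count/size below are assumptions for illustration only. */
  hid_t make_tuned_fapl(void)
  {
      /* Lustre striping hints, honored when the file is created. */
      MPI_Info info;
      MPI_Info_create(&info);
      MPI_Info_set(info, "striping_factor", "8");      /* stripe count */
      MPI_Info_set(info, "striping_unit", "4194304");  /* 4 MiB stripes */

      hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
      H5Pset_fapl_mpio(fapl, MPI_COMM_WORLD, info);

      /* Place every HDF5 object of 1 MiB or larger on a stripe
         boundary, so large writes do not straddle OSTs. */
      H5Pset_alignment(fapl, 1048576, 4194304);

      MPI_Info_free(&info);  /* HDF5 keeps its own copy */
      return fapl;
  }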

Applications evaluated:
*)Global Cloud Resolving Model (GCRM): climate simulation
*)VORPAL: versatile plasma simulation
*)Chombo: adaptive mesh refinement (AMR) package

Systems evaluated:
*)JaguarPF: 40,960 Core Cray XT5
*)Franklin: 10,240 Core Cray XT4
*)Hopper (Phase 1): 2,560 Core Cray XT5
These optimizations will reportedly be incorporated into HDF5 at some point (I saw this mentioned somewhere).

The HDF Group
  http://www.hdfgroup.org/

HDF and HDF-EOS Workshop XIV, September 28-30, 2010
AGENDA
  http://www.hdfeos.org/workshops/ws14/agenda.php

HDF5 tutorials by NERSC staff:
"Parallel I/O Tutorial"
 Katie Antypas,
 Astrosim Summer School on Computational Astrophysics, July 20, 2010
  http://supercomputing.astri.umk.pl/files/lectures/Parallelisation.pdf
Schedule, Astrosim Summer School 2010
  http://supercomputing.astri.umk.pl/schedule
※ Video recordings available.

"HDF5 Lab Session: Tutorial from HDF Group" (105 Page)
 Katie Antypas,
 Astrosim Summer School on Computational Astrophysics, , July 21, 2010
  http://www.nersc.gov/projects/presentations/KatieHDF.pdf
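
The tutorials above walk through parallel HDF5, including collective data transfers. As a companion, a minimal sketch in which each rank collectively writes its own contiguous slab of a shared 1-D dataset; the dataset name, sizes, and fill values are illustrative:

  #include <mpi.h>
  #include <hdf5.h>

  #define N_PER_RANK 1024

  /* 'file' must have been created with an MPI-IO file-access list. */
  void write_slab(hid_t file, int rank, int nranks)
  {
      double buf[N_PER_RANK];
      for (int i = 0; i < N_PER_RANK; i++)
          buf[i] = rank;  /* dummy data */

      hsize_t dims[1]  = { (hsize_t)nranks * N_PER_RANK };
      hsize_t count[1] = { N_PER_RANK };
      hsize_t start[1] = { (hsize_t)rank * N_PER_RANK };

      hid_t filespace = H5Screate_simple(1, dims, NULL);
      hid_t dset = H5Dcreate(file, "data", H5T_NATIVE_DOUBLE, filespace,
                             H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);

      /* Select this rank's contiguous slab in the file. */
      H5Sselect_hyperslab(filespace, H5S_SELECT_SET, start, NULL, count, NULL);
      hid_t memspace = H5Screate_simple(1, count, NULL);

      /* Ask for collective I/O so MPI-IO can aggregate the writes. */
      hid_t dxpl = H5Pcreate(H5P_DATASET_XFER);
      H5Pset_dxpl_mpio(dxpl, H5FD_MPIO_COLLECTIVE);

      H5Dwrite(dset, H5T_NATIVE_DOUBLE, memspace, filespace, dxpl, buf);

      H5Pclose(dxpl);
      H5Sclose(memspace);
      H5Sclose(filespace);
      H5Dclose(dset);
  }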

Fundamental I/O research and analysis by NERSC staff:
"Parallel I/O Performance: From Events to Ensembles"
 Andrew Uselton
 IEEE International Parallel and Distributed Processing Symposium (IPDPS),
 April 19-23, 2010
  http://www.nersc.gov/projects/presentations/IPDPS2010Uselton.pptx.pdf
  http://www.nersc.gov/projects/reports/technical/UseltonIPDPSpaper.pdf

"MPI-I/O on Franklin XT4 System at NERSC"
 Katie Antypas and Andrew Uselton, Cray User's Group Meeting 2009
  http://www.nersc.gov/projects/presentations/AntypasCUG09.pdf
"Deploying Server-side File System Monitoring at NERSC"
 Andrew Uselton, Cray User Group Meeting 2009
  http://www.cug.org/7-archives/previous_conferences/CUG2009/bestpaper/16A-Uselton/uselton_slides.pdf
  http://www.cug.org/5-publications/proceedings_attendee_lists/CUG09CD/S09_Proceedings/pages/authors/16-18Thursday/16A-Uselton/uselton.pdf
"MPI-I/O on Franklin XT4 System at NERSC"
 Katie Antypas, Cray User's Group Meeting 2009
  http://www.cug.org/5-publications/proceedings_attendee_lists/CUG09CD/S09_Proceedings/pages/authors/11-15Wednesday/13A-Antypas/Antypas-paper.pdf
"Monitoring I/O Performance on Lustre"
 Andrew Uselton, Lustre User Group 2009
  http://www.nersc.gov/projects/presentations/UseltonLUG2009.pdf

※ This kind of painstaking analysis is what drives the performance improvements in HDF5 and similar libraries.
※ There are a number of other related studies as well.
