Reprints from my postings to the SAN-Tech Mailing List and ...

2012/05/03

[san-tech][03546] Slide/Video: Lustre User Group (LUG) 2012, (Apr 23 - 25, 2012)

Date: Thu, 03 May 2012 10:46:12 +0900
--------------------------------------------------
Slides and videos from the Lustre User Group (LUG) 2012, held April 23-25, 2012, have been published:

LUG 2012 Agenda
  http://www.opensfs.org/lug/program

Presentations
  http://www.opensfs.org/resources-2/presentations

Media (links to YouTube)
  http://www.opensfs.org/resources-2/videos

System-related presentations that personally caught my attention:
==========
"Lustre Future Development"
 Andreas Dilger, Whamcloud
  http://www.opensfs.org/wp-content/uploads/2011/11/LUG-2012-Lustre-future-development.pdf

Lustre 2.4 - March 2013
Lustre 2.4 - ZFS OSD (WC/LLNL) (Page 6)
  Leverage many features immediately
    Data checksums on disk + Lustre checksums on network
    Online filesystem check/scrub/repair - no more e2fsck!
    Integrated with flash storage cache (L2ARC)
  More ZFS features to leverage in the future
    Snapshots, end-to-end data integrity, datasets
Lustre 2.4 - HSM (WC/CEA) (Page 7)
  Originally developed by CEA France
  Simple archive back-end interface
  Infrastructure useful for other projects
ZFS on Linux Licensing Answers (Page 13)
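
The ZFS features quoted above correspond to ordinary ZFS administration operations rather than anything Lustre-specific. A minimal Python sketch of the underlying commands, assuming a hypothetical OST pool named "ost0" and a hypothetical NVMe cache device (standard zpool/zfs CLI calls only):

  import subprocess

  def run(cmd):
      """Run one ZFS administration command and echo it for readability."""
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # Online check/scrub/repair: ZFS verifies block checksums and repairs
  # from redundancy while the pool stays online (no offline e2fsck pass).
  run(["zpool", "scrub", "ost0"])

  # Flash storage cache (L2ARC): attach a hypothetical NVMe device as a
  # second-level read cache for the pool.
  run(["zpool", "add", "ost0", "cache", "/dev/nvme0n1"])

  # Snapshots: one of the ZFS features listed as future leverage.
  run(["zfs", "snapshot", "ost0/ost@before-upgrade"])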

==========
"Sequoia's 55PB Lustre+ZFS Filesystem"
 Brian Behlendorf, Lawrence Livermore National Laboratory
  http://www.opensfs.org/wp-content/uploads/2011/11/LUG12_ZFS_Lustre_for_Sequoia.pdf

Status Update (Page 5)
  55PB Lustre+ZFS File system for Sequoia
    Development still under way
    Performance work started
  Contract with Whamcloud
MDS Configurations (Page 12)
  LDISKFS+MD & ZFS+MIRROR
OSS Configurations (Page 16)
  LDISKFS+RAID6 & ZFS+RAID6 & ZFS+RAIDZ2
Website
  http://zfsonlinux.org
  http://zfsonlinux.org/lustre.html
※ Comparisons between LDISKFS and ZFS and so on; it gives the impression that a fair amount of trial and error is still going on.
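
For reference, the pure-ZFS variants of the MDS/OSS configurations above map onto ordinary zpool layouts. A minimal Python sketch with hypothetical pool and device names (the LDISKFS+RAID6 and ZFS+RAID6 variants would instead sit on an MD or hardware RAID6 device, so they are not shown):

  import subprocess

  def zpool_create(pool, layout, devices):
      """Create a ZFS pool with the given redundancy layout (sketch only)."""
      cmd = ["zpool", "create", pool, layout] + devices
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  # MDS variant from the slides: ZFS+MIRROR, two hypothetical disks.
  zpool_create("mdt0", "mirror", ["/dev/sda", "/dev/sdb"])

  # OSS variant from the slides: ZFS+RAIDZ2 (double parity, like RAID6),
  # here across ten hypothetical disks.
  zpool_create("ost0", "raidz2", ["/dev/sd%s" % c for c in "cdefghijkl"])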

==========
"Lustre 2.1 and Lustre-HSM at IFERC"
 Diego Moreno, Bull
  http://www.opensfs.org/wp-content/uploads/2011/11/LUG_2012_Diego_Moreno_v2.pdf
※ "Roku-chan," the system at Rokkasho Village

Archive Lustre L2 - DMF (HSM) (Page 7, 8)
  8PB Lustre + 40PB SGI DMF (Lustre Clients)

==========
"Current Status of FEFS for the K Computer"
 Shinji Sumimoto, Fujitsu
  http://www.opensfs.org/wp-content/uploads/2011/11/LUG2012-FJ-20120426.pdf

==========
"A Technical Overview of the OLCF's Next-generation Center-wide Lustre File System"
 Sarp Oral, Oak Ridge National Laboratory
  http://www.opensfs.org/wp-content/uploads/2011/11/oral-lug-2012.pdf

Transitioning I/O to next gen computing (Page 2)
From Jaguar to Titan
  Number of cores: 224K -> 300K
  Memory: 300 TB -> 600 TB
  Peak Performance: 2.2 PFlops -> 10-20 PFlops
  Proprietary Interconnect: SeaStar2+ -> Gemini
  Peak egress I/O (over IB): (192 x 1.5 GB/s) -> (384-420 x 2.8-3 GB/s)
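
As a quick sanity check on those egress figures, the aggregate bandwidth before and after the transition works out as follows (simple arithmetic on the numbers quoted above, which also lines up with the 1 TB/s file-system target listed on the next slide):

  # Aggregate peak egress I/O over IB, from the figures quoted above (GB/s).
  jaguar     = 192 * 1.5      # 288 GB/s
  titan_low  = 384 * 2.8      # ~1075 GB/s
  titan_high = 420 * 3.0      # 1260 GB/s

  print(f"Jaguar: {jaguar:.0f} GB/s")
  print(f"Titan:  {titan_low:.0f} - {titan_high:.0f} GB/s (roughly 1.1-1.3 TB/s)")
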
New Architecture (Page 5)
Target numbers for next gen parallel file system
  1 TB/s file system-level well-formed I/O performance
  240 GB/s file system-level random I/O performance
  Capacity will be based on the selected storage media
    Expected to be 9-20 PB
  Availability: >95%
    Expected availability will be similar to Spider's
