wiki:UfoServer

Version 4 (modified by Suren A. Chilingaryan, 13 years ago) (diff)


UFO Server

System

  • Host name: ufosrv1.ka.fzk.de
  • Interfaces: eth0 (10 GBit), eth1 (upper-right socket)
    • eth0: dhcp
    • eth1: 141.52.111.135/22
    • Gateway: 141.52.111.208 (via dhcp)
    • Name server: 141.52.111.248 (via dhcp)
  • Running services
    • SSH on ports 22 and 24
    • NX server over SSH
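
Running SSH on two ports is normally done with repeated Port directives; a minimal sketch of the relevant /etc/ssh/sshd_config lines (the server's actual configuration file is not reproduced on this page):

```
# /etc/ssh/sshd_config (excerpt): sshd binds every Port listed
Port 22
Port 24
```

On openSUSE, `rcsshd restart` picks up the change.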

Hardware

  • Display connected to integrated video card (Matrox G200)
  • System drives: 2 x Hitachi 2TB SATA2
  • Areca ARC-1880 Raid Controller (x16 slot)
    • 16 x 2 TB Hitachi HUA722020ALA330 in external Areca Enclosure
    • 4 x 256 GB Crucial RealSSD C300
  • External PCIe 2.0 x16 (x16 slot)
    • External GPU box from One Stop Systems
    • 4 x NVIDIA GeForce GTX 580
  • 2 x NVIDIA GeForce GTX 580 (x16 slots)
  • Intel 82598EB 10GBit Ethernet (x4 slot)
  • Silicon Software CameraLink frame grabber microEnable IV Full (PCIe 1.0 x4 slot)
  • Free slots
    • PCI express: x4 and x1
    • Storage: 2 x SSD in the main server case

Areca Raid Configuration

  • A single Areca ARC-1880 controller handles both the external storage box with SATA hard drives and the internal SSD cache. Only the pair of system hard drives is connected to the SATA controller integrated on the motherboard.
  • 16 x Hitachi 2TB SATA hard drives in the external enclosure are organized as Raid-6
  • 4 x Crucial SSD C300 in the server case are organized as Raid-0
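
The usable capacities follow directly from the RAID levels: Raid-6 sacrifices two member drives for parity, Raid-0 stripes over all members with no redundancy. A quick check of the numbers (drive counts and sizes from the hardware list above):

```shell
# Raid-6 over 16 x 2 TB drives: two drives' worth goes to parity
raid6_tb=$(( (16 - 2) * 2 ))
# Raid-0 over 4 x 256 GB SSDs: full capacity, no redundancy
raid0_gb=$(( 4 * 256 ))
echo "Raid-6 array: ${raid6_tb} TB"   # 28 TB, matching the 28.0TB /dev/sdc below
echo "Raid-0 cache: ${raid0_gb} GB"   # 1024 GB, matching the ~1 TB /dev/sdd below
```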

Partitioning

  • The two system hard drives are connected to the internal SATA controller and mirrored as Raid-1 using Linux Software Raid.
    • Devices: /dev/sda, /dev/sdb
    • Partitions: /boot (2GB ext2), / (256GB ext4), /home (ext4)
  • The Areca RAID array is split into two partitions (GPT partition table): a fast one and a standard one.
    • Device: /dev/sdc
    • The fast partition is used to stream data from the camera and must sustain a throughput of 850 MB/s. The data should be moved off as soon as possible, and only a single application is allowed to write to it.
      • Size: first 6TB of disk array
      • File system: non-journaled ext4
      • Mount point: /mnt/fast
    • The standard partition is for short-term data storage (before offloading to the LSDF)
      • Size: 22 TB
      • File system: xfs
      • Mount point: /mnt/raid
  • The SSD cache
    • Device: /dev/sdd
    • Size: 1 TB
    • File system: ext4
    • Mount point: /mnt/ssd
  • Partition table (/dev/sda & /dev/sdb)
    Disk /dev/sda: 2000.4 GB, 2000398934016 bytes
    255 heads, 63 sectors/track, 243201 cylinders, total 3907029168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0003874d
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *        2048     4192255     2095104   fd  Linux raid autodetect
    /dev/sda2         4192256   541069311   268438528   fd  Linux raid autodetect
    /dev/sda3       541069312  3907028991  1682979840   fd  Linux raid autodetect
    
  • Partition table (/dev/sdc)
    Disk /dev/sdc: 28.0TB
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt_sync_mbr
    
    Number  Start   End     Size    File system  Name     Flags
     1      1049kB  6597GB  6597GB               primary
     2      6597GB  28.0TB  21.4TB  xfs          primary
    
  • Partition table (/dev/sdd)
    Disk /dev/sdd: 1000.0 GB, 999998619648 bytes
    255 heads, 63 sectors/track, 121576 cylinders, total 1953122304 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x000b6bc3
    
       Device Boot      Start         End      Blocks   Id  System
    /dev/sdd1            2048  1953122303   976560128   83  Linux
    
  • Raid table:
    Personalities : [raid1] [raid0] [raid10] [raid6] [raid5] [raid4] 
    md2 : active raid1 sdb3[1] sda3[0]
          1682979704 blocks super 1.0 [2/2] [UU]
          bitmap: 3/13 pages [12KB], 65536KB chunk
    
    md0 : active raid1 sdb1[1] sda1[0]
          2095092 blocks super 1.0 [2/2] [UU]
          bitmap: 0/1 pages [0KB], 65536KB chunk
    
    md1 : active raid1 sda2[0] sdb2[1]
          268438392 blocks super 1.0 [2/2] [UU]
          bitmap: 0/3 pages [0KB], 65536KB chunk
    
  • fstab
    /dev/disk/by-id/md-uuid-4a18bb5c:4b4b4490:929fdc08:99b65f2f /                    ext4       acl,user_xattr        1 1
    /dev/disk/by-id/md-uuid-7c032686:e8861a19:9ccb43c3:8f25011e /boot                ext2       acl,user_xattr        1 2
    /dev/disk/by-id/md-uuid-8e7c863e:3a75af81:321862ae:d679602e /home                ext4       acl,user_xattr        1 2
    /dev/disk/by-id/scsi-2001b4d2003077811-part2 /mnt/raid            xfs        defaults              1 2
    /dev/disk/by-id/scsi-2001b4d2064473251-part1 /mnt/ssd             ext4       acl,user_xattr        1 2
    proc                 /proc                proc       defaults              0 0
    sysfs                /sys                 sysfs      noauto                0 0
    debugfs              /sys/kernel/debug    debugfs    noauto                0 0
    usbfs                /proc/bus/usb        usbfs      noauto                0 0
    devpts               /dev/pts             devpts     mode=0620,gid=5       0 0
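
The "non-journaled ext4" choice for /mnt/fast deserves a concrete command: ext4's journal can be disabled at mkfs time, which removes journal writes from the streaming path. A sketch, using a scratch file image in place of the real /dev/sdc1 (never run mkfs against a live partition):

```shell
# Stand-in for /dev/sdc1: a scratch file image (mkfs.ext4 -F accepts plain files)
IMG=$(mktemp)
truncate -s 64M "$IMG"
# -O ^has_journal creates ext4 without a journal: fewer writes per data block,
# at the cost of slower recovery after a crash
mkfs.ext4 -q -F -O ^has_journal "$IMG"
# The feature list should now lack "has_journal"
dumpe2fs -h "$IMG" 2>/dev/null | grep 'Filesystem features'
```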
    

Software

  • openSUSE 12.1
    • Gnome Desktop
    • Console tools
    • Development: Kernel, Gnome, Python
  • CUDA
    • Driver: 285.05.32
    • Toolkit: 4.1 (installed into /opt/cuda)
    • SDK: 4.1 (installed into /opt/cuda/sdk)
  • PyHST
  • UFO Framework
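
For the CUDA toolkit in /opt/cuda, user shells need the toolkit's bin and library directories on their search paths. A typical profile snippet (the exact mechanism used on this server is not recorded here; lib64 is assumed for the 64-bit libraries):

```shell
# Make the CUDA 4.1 toolkit installed under /opt/cuda visible to user sessions
export CUDA_HOME=/opt/cuda
export PATH="$CUDA_HOME/bin:$PATH"
# Append to any pre-existing LD_LIBRARY_PATH instead of clobbering it
export LD_LIBRARY_PATH="$CUDA_HOME/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```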

Attachments (1)