Equipment/Blanton

Blanton
Model Xyratex HS-1235T (2U Version with 12 3.5" SAS Bays)
Sub-category Systems
Status Good working order
Last updated 15 June 2019 21:48:34
Training requirement yes
Training link Unknown
ACnode no
Owner LHS
Origin Donation from kraptv and g6vzm
Location Basement rack
Maintainers Sysadmin team

Video and ZFS backup server for services in Ujima House

The system was named after the American computer scientist Ethan Blanton who did some cool stuff and has a beard.

Please do not install anything directly on Blanton: spin up a VM on Landin or a Docker instance on Chomsky instead

Info

  • IP: 10.0.20.11
  • DNS: blanton.london.hackspace.org.uk
  • Access: Admins

Stats

Blanton is a Xyratex HS-1235T, an OEM storage server platform sold under several brands (IBM XIV, Dell Compellent, LaCie 12Big, Pure FA-300, and others). Branded disk trays from other Xyratex OEM customers, such as the NetApp DS4243, fit in the array as well

Note that the power button is just inside the front-left of the chassis (just around the corner from the front-facing LED status lights)

Documentation

Build Notes

  1. These are the notes for the build of Blanton (and its functional twin Landin)

Do the right thing and install software RAID-1 across the two boot SSDs. Install note: NO SWAP PARTITION (we've got 96GB of memory and the SSDs are only 120GB - make a swapfile on the ZFS array if we really need one)

Note that with the above layout, grub-install fails, so:

  1. fdisk /dev/sda (and then /dev/sdb)
  2. Add a second partition at the front of the drive and change its type to 4 (BIOS boot)
  3. Then chroot /target /bin/bash, then grub-install /dev/sda and grub-install /dev/sdb (assuming these are the SSDs being mirrored)
  4. The system now works with grub installs, reboots, etc.
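The steps above can be sketched non-interactively. This is an assumption-laden sketch: it uses sgdisk (from the gdisk package) rather than the interactive fdisk the original used, and it should be run from the installer's chroot as in step 3. It prints each command by default; set APPLY=1 to actually run them (destructive - only ever point it at the two boot SSDs).

```shell
#!/bin/bash
# Print commands by default; APPLY=1 runs them for real (destructive!).
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "$@"; fi; }

for disk in /dev/sda /dev/sdb; do
    # New partition 2 in sectors 34-2047 (the 1007K gap before partition 1),
    # GPT type ef02 = BIOS boot, then reinstall grub to that disk.
    run sgdisk -n 2:34:2047 -t 2:ef02 "$disk"
    run grub-install "$disk"
done
```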

FYI - sda (and similarly sdb) will look like this:

 Disk /dev/sda: 111.8 GiB, 120040980480 bytes, 234455040 sectors
 Units: sectors of 1 * 512 = 512 bytes
 Sector size (logical/physical): 512 bytes / 512 bytes
 I/O size (minimum/optimal): 512 bytes / 512 bytes
 Disklabel type: gpt
 Disk identifier: 
 
 Device     Start       End   Sectors   Size Type
 /dev/sda1   2048 234452991 234450944 111.8G Linux RAID
 /dev/sda2     34      2047      2014  1007K BIOS boot

Debian packages to install (legacy multi-user commands, compilation tools, and more):

Please note you should add "contrib non-free" after "main" in /etc/apt/sources.list for ZFS!
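For reference, the line would look something like this on the Stretch install mentioned below (the mirror URL is an assumption - match whatever the file already uses):

```
# /etc/apt/sources.list - "contrib non-free" appended after "main" for zfs-dkms
deb http://deb.debian.org/debian stretch main contrib non-free
```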

iotop htop sudo finger bsdgames ethtool lynx elinks net-tools openssh-server screen iproute resolvconf build-essential tcpdump vlan rsync git rdist bzip2 git-core less unzip curl flex bc bison netcat nmap locate vim zsh vim-scripts zfs-dkms zfsutils-linux nfs-kernel-server samba-common-bin qemu-kvm libvirt-clients libvirt-daemon-system libvirt-daemon lshw ipmitool tftpd-hpa apt-mirror smartmontools iozone3 minicom tmux mosh silversearcher-ag

Show off dmesg

Why can only superusers look at dmesg nowadays? It's kinda useful to see (yeah, OK, fine, everything is a security risk). To relax the restriction:

 sudo sysctl kernel.dmesg_restrict=0

NOTE: put kernel.dmesg_restrict = 0 in /etc/sysctl.conf to make it permanent.


Installing ZFS, Setting up ZPOOL and Live Disk Swapping

Already set up above in the mega-apt-get command. (Legacy note) Please note you may need to add contrib (and possibly non-free) to /etc/apt/sources.list (!)

 apt-get install linux-headers-$(uname -r)
 apt-get install zfs-dkms zfsutils-linux
  1. EASY WAY TO MAKE THE ZPOOL (NOTE WHETHER YOU WANT RAIDZ1/Z2/Z3 and the WORKING DIRECTORY)
  2. Note you're using -f because you're using the whole disk and ignoring legacy disklabels...
 cd /dev/disk/by-id
 sudo zpool create -f ethan raidz2  `ls ata-HITACHI*|grep -v part`

(this is easy because all of the donated 1TB drives are the same HITACHI model)

  1. FYI - a similar pool creation, expanded out, would look like this:
 sudo zpool create -f ethan raidz2 /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAG06BGA /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAG06EWA /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAG0DJ9A /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ93TMF /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9ES2F /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9GPHF /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9J1EF /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9J59F /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9N1AF /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9N2TF /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PAJ9N3EF /dev/disk/by-id/ata-HITACHI_HUA721010KLA330_PBJ76D4F 

Proxmox setup

We installed Debian Stretch (Debian 9.4.0 at the time) and then followed the "Install Proxmox VE on Debian Stretch" documentation. After that we needed to install the upgraded ZFS ZED daemon via apt-get and upgrade our zpool version as well.


  1. We'll probably just edit LDAP users to be in that group rather than complicate things with local-remote overlays!
  2. libvirt:x:113: and libvirt-qemu:x:64055:
  3. Remember to add LDAP users to the libvirt group using inheritance connectivity (or we just make the LDAP group be the auth'd group)
  1. Installed apt-mirror and sync'd the archive from RML's server

- rsync'd various items from the RichmondMakerLabs mirror seed, updated /etc/apt/mirror.list with the same URLs and updated the local disk hierarchy.

TODO

 - PHYSICAL: Move card to proper guaranteed x8 slots and confirm they are negotiating at full 5GT/s (SAS2008 and Sun Quad GBE)
 - crontab zpool scrub (weekly)
 - enable mail sending for daemon support and monitoring
 x install latest sas2ircu (https://www.broadcom.com/products/storage/host-bus-adapters/sas-9210-8i#downloads) for mgmt
 - install sas2ircu-status (from somewhere else) (Not needed?)
 x install bay-identifier.sh
 - label drive trays with last 4 or 6 serial number chunks (maybe not needed)
 x Play with sas2ircu to see if we can get drives in certain bays to flash (useful finding failed drives to replace)
 - configure smartd and other warning devices (smartd is dumb when drives get swapped - please note!)
 - integrate into Hackspace infra (automatic emails, root ssh keys, etc.)
 - Find rails to mount into
 - Configure LACP for 4xGbE Sun Interface
 - Export NFS to certain systems over LACP link?
 - Configure ZED for automation - /etc/zfs/zed.d/zed.rc Good notes here: http://louwrentius.com/the-zfs-event-daemon-on-linux.html
 - Enable tftpd-hpa for TFTP booting of phones and PXE systems, etc.
 x Enable apt mirroring for Local Debian/Ubuntu installs
 - Documentation for generating VMs
 - Mirroring latest Debian OS for VM installs
 x Add MOTD:
 Welcome to BLANTON.london.hackspace.org.uk (Proxmox VE)
 
            NmmdhhhhhhhhhhdmmN          This system provides: 
         mmhhhhhhhhhhhhhhhhhhhhmm        
      NmdhhhhhhhhhhhhhhhhhhhhhhhhdmN     
     mhhhhhhhhhhhhh/``/hhhhhhhhhhhhhm   
   Ndhhhhhhhhhhhh/`    `/hhhhhhhhhhhhdN Z 
  Nhhhhhhhhhhhh/`        ohhhhhhhhhhhhhN 
 Ndhhhhhhhhhhss.          .shhhhhhhhhhhdN Please use CHOMSKY for your general
 dhhhhhhhhh/` .os. ``       .syhhhhhhhhhd system needs. 
 hhhhhhhh/`     .ssy/          `/hhhhhhhh
 hhhhhh/`       .s/              `/hhhhhh
 hhhhhh/`              -o.       `/hhhhhh 
 hhhhhhhh/`          -oss.     `/hhhhhhhh 
 dhhhhhhhhhys.       `` .os. `/hhhhhhhhhd 
 Ndhhhhhhhhhhhs.          .sshhhhhhhhhhdN 
  Nhhhhhhhhhhhhho        `/hhhhhhhhhhhhN  
   Ndhhhhhhhhhhhh/`    `/hhhhhhhhhhhhdN  
    mhhhhhhhhhhhhh/``/hhhhhhhhhhhhhm    
    NmdhhhhhhhhhhhhhhhhhhhhhhhhdmN     
       mmhhhhhhhhhhhhhhhhhhhhmm        
          NmmdhhhhhhhhhhdmmN           
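The weekly zpool scrub item in the TODO list above could be a cron entry along these lines (the file path and schedule are suggestions, not what's currently installed):

```
# /etc/cron.d/zpool-scrub (hypothetical file) - scrub the 'ethan' pool
# every Sunday at 03:00; add the other pools on their own lines as needed.
0 3 * * 0  root  /sbin/zpool scrub ethan
```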

Storage Pools

  • One mirrored-striped pool of 4TB WD Purple (surveillance) drives known as zpool 'ethan'
  • One mirrored-striped pool of server-grade SSD known as zpool 'ethan-ssd'
  • One mirrored-striped pool of old 2TB drives known as zpool 'ethan-extra'

Drives

Blanton-Storage.png

Networks

  • bond0 LACP group of 4 gigabit ethernet interfaces, tagged with VLANs

Bridges

  • vmbr0 - Standard Linux Bridge, bridged to bond0.20. Think of it like an internal switch. Any VM attached to this bridge is effectively attached to the Servers VLAN
  • vmbr1 - Standard Linux Bridge, bridged to bond0.30. This is for the cctv network - you probably don't want this one!
  • vmbr2 - Standard Linux Bridge, bridged to bond0.10. This is for the management network - you probably don't want this one!

Current VMs

Bruce

Scheduled Services

  • ZFS Receive for Landin
  • ZFS Scrubbing for Data Health & Verification

How to:

Create a new VM

Via the web interface

  1. Go to https://blanton.london.hackspace.org.uk:8006
  2. Login with your LDAP credentials
  3. Click Create VM in the top right corner
  4. In the general tab, set the name and check "start at boot"
  5. In the OS tab, select your desired ISO image in the drop down list and configure the parameters for the guest OS
  6. In the Storage tab, select a SCSI device, set the storage to the "ethan" zpool and enter your desired disk size. Check "Advanced" and also check the "Discard" box (important for thin provisioning)
  7. In the CPU tab, select your desired number of cores and sockets
  8. In the memory tab, select your desired size for the RAM
  9. In the Network tab, select "vmbr0" for the bridge and set the model to "VirtIO"
  10. In the Confirm tab, check "start after created" and click finish

Via CLI

Note: We should probably create a wrapper script to make this easier, to enforce naming conventions, run Ansible, and other devops-esque stuff

  1. First of all SSH into Blanton. Your users will have to have the appropriate permissions to create a VM
  2. Find an available "ID". Let's try to keep them contiguous:
    qm list
  3. View available ISOs (Or upload your ISO to the same directory)
    ls /var/lib/vz/template/iso
  4. Create the VM
    qm create [ID] --name [NAME] --cdrom [PATH TO ISO] --memory [RAM] --cores [CORES] --net0 [INTERFACE] --scsi0 [LOCATION,SIZE]
  5. Example of a Debian VM with a single core, 512MB of RAM, 10G HDD and connected to the "Bridge" interface
    qm create 104 --name "qm-test" --cdrom /var/lib/vz/template/iso/debian-9.4.0-amd64-netinst.iso --memory 512 --cores 1 --net0 "virtio,bridge=vmbr0" --scsi0 "file=ethan:10,discard=on,size=10G"
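The wrapper script mentioned above might start as a sketch like this (the function name and defaults are hypothetical, not an existing tool). It only assembles the qm create command line for review; nothing here talks to Proxmox.

```shell
#!/bin/bash
# Hypothetical helper: build a 'qm create' command line from a few arguments,
# with defaults matching the Debian example above, and print it for review.
make_qm_cmd() {
    local id="$1" name="$2" iso="$3"
    local mem="${4:-512}" cores="${5:-1}" size="${6:-10}"
    printf 'qm create %s --name "%s" --cdrom %s --memory %s --cores %s --net0 "virtio,bridge=vmbr0" --scsi0 "file=ethan:%s,discard=on,size=%sG"\n' \
        "$id" "$name" "$iso" "$mem" "$cores" "$size" "$size"
}

make_qm_cmd 104 qm-test /var/lib/vz/template/iso/debian-9.4.0-amd64-netinst.iso
```

A real version would validate the ID against qm list, apply naming conventions, and then eval or exec the resulting command.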

From an existing disk image

Create a VM from the CLI or web as above; no need to start it. Then delete its disk from the hardware config.

Then follow this: http://dae.me/blog/2340/how-to-add-an-existing-virtual-disk-to-proxmox/

If the old vm image is stored on ZFS then you'll need to set the disk cache used by proxmox to `writeback`

Once the disk appears in the proxmox UI you can add it to the vm and activate it (? Can't quite remember how I did it, but the cache thing is the main thing to know)

Notes

RAID Status and How to Blink a Light and Replace a Drive

Thankfully the system is not in the middle of a woodshop, but the batch of Hitachi 1TB drives are pretty old and we should expect disk failures to happen. This is an overview of tools available to diagnose the health of the array.

How is the ZFS Zpool Health, How is the Hardware Health

  • Very likely you want to see how ZFS sees the drives. This command should suffice:
 # zpool status -v
  • You can check the list of hardware connected to the array via the LSI (Avago/Broadcom) utility sas2ircu
 # sas2ircu 0 display

(you'll want to pipe this to less or a text file to scroll through the various notes)

  • Maybe you want to run through smartctl and see whether any of the disks are in a pre-fail state. Try a shell script like this:
 for i in {a..o}; do
     echo "Disk sd$i"
     smartctl -i -A /dev/sd$i | grep -E "Serial|FAILING_NOW|^  5|^197|^198"
 done

ZFS Disk Death - what to do

If one or two disks die in the ZFS zpool, you'll want to replace them. You'll see something like a disk or two with the status UNAVAIL and the zpool state being DEGRADED. We don't want to shut off the computer, so what to do?

  • Make note of the disk ID(s) and search for those drives by doing
 # sas2ircu 0 display | less
  • While scrolling up and down using less, you can find the affected dying drive serial number (starts with the letter P in our Hitachi examples)
  • Make a note of the enclosure number and the slot number on the controller in the command above.
  • Make the affected disk(s) blink in their slots if you have enclosures that blink properly. DON'T JUST CUT AND PASTE THIS COMMAND AND REPLACE THE WRONG DRIVE BECAUSE YOU MADE THE WRONG SLOT BLINK. This example below shows blinking drive 1 in assembly 2:
 # sas2ircu 0 locate 2:1 on 
  • then you'll see the blinking slot(s) and can remove those affected disks.
  • Replace the drives in the disk trays (you may need a Torx T10 driver or a careful flathead screwdriver to swap drives in the tray), then reinsert.
  • Turn the blinking light off.
 # sas2ircu 0 locate 2:1 off
  • Find the new drive by checking dmesg for the most recently added drive and then poking around /dev/disk/by-id for the right serial number. Example disk replacement (remember, use zpool status to find the old disk to replace):
 # zpool replace -f ethan ata-HITACHI_HUA721010KLA330_PAJ9N3EF ata-HITACHI_HUA721010KLA330_PBJ7DNWE
You can then run
 # zpool status -v
to watch the replacement in progress, with an estimate of when the resilver will finish. Nice!