Equipment/Lamarr

From London Hackspace Wiki
Latest revision as of 22:03, 24 May 2021

Lamarr

  • Model: HP DL380 G5 (2U Version with 8 2.5" SAS Bays)
  • Sub-category: Defunct
  • Status: Scrapped
  • Training requirement: yes
  • Training link: Unknown
  • ACnode: no
  • Owner: LHS
  • Origin: Donation
  • Location: Basement rack
  • Maintainers: Sysadmin team

Lamarr was retired when 447 Hackney Road closed. It has been replaced by Landin at Ujima House.

General use Hypervisor, currently using KVM/QEMU.

Rule 1 of using Lamarr: Do not install anything on Lamarr (Make a VM, silly!)

Info

  • IP: 172.31.24.32
  • DNS: lamarr.lan.london.hackspace.org.uk
  • Access: LDAP

Stats

Lamarr is an HP ProLiant DL380 G5 donated to the space.

  • 2 dual-core Xeons @ 3.0 GHz
  • 32 GB ECC 667 MHz FB-DIMM RAM
  • HP P400 SAS PCIe RAID Controller with the following hardware RAID:
    • RAID10 on / ~123 GB
    • RAID5 on /storage ~404 GB

libvirt config

XML files are stored in /root at the moment. Will get around to sorting.

Note that there are multiple instances of libvirt running at any one time. The main instance is "system", which runs as root; most service VMs are stored there. The other is "session", which is unique to the user who runs it and runs with their user/group permissions.
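
The two instances above are reached with different connection URIs. A quick sketch (assumes virsh is installed on your machine; "youruser" is a placeholder for your LDAP login):

```shell
# Connect to the root-owned "system" instance and list its VMs
virsh --connect qemu:///system list --all

# Connect to your own per-user "session" instance instead
virsh --connect qemu:///session list --all

# Or remotely over SSH to Lamarr's system instance
virsh --connect qemu+ssh://youruser@lamarr.lan.london.hackspace.org.uk/system list --all
```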

Storage Pools

  • iso - /storage/isos - Boot and install media. Everyone has read permission, root has write permission (Should be changed to admins?)
  • local - /storage/vms - Virtual drives stored on the local machine, should only be used for system VMs.
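
To inspect these pools on the system instance, something like the following should work (a sketch, assuming you have virsh access):

```shell
# List all storage pools known to the system instance
virsh --connect qemu:///system pool-list --all

# Show details (capacity, state) of a single pool
virsh --connect qemu:///system pool-info local

# List the volumes (VM disk images) inside the "local" pool
virsh --connect qemu:///system vol-list local
```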

Networks

  • default - NATed network, works fine but won't give you an externally accessible IP or allow for PXE booting
  • bridge - Bridged network br0.
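
For reference, a minimal libvirt definition for a bridged network like br0 might look like the XML below. This is a hypothetical sketch, not a dump of Lamarr's actual config; the real one can be shown with `virsh net-dumpxml bridge`.

```xml
<!-- Hypothetical example of a bridged libvirt network -->
<network>
  <name>bridge</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```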

Current VMs

Chomsky

  • ID: 4
  • Status: On
  • Date Created: 22/09/2014
  • IP: 172.31.24.34
  • CPUs: 4
  • RAM: 8GB
  • Storage: 100GB on local
    • /storage/vms/chomsky.img

ACserver

Adminstuff

  • ID: 17
  • Status: On
  • Date Created: 15/10/2014
  • IP: 172.31.24.36
  • CPUs: 1
  • RAM: 1GB
  • Storage: 12GB on local
    • /var/lib/libvirt/images/adminstuffs.img
    • /var/lib/libvirt/images/adminstuffs-1.img
  • Notes: Network admin bits that were on denning, now running Ansible, apt-cacher-ng, tftpboot + pxeboot stuff, nfs server for diskless stuff via Netboot.

apt-cacher-ng

To expire the cache (useful if /space fills up):

  • Log in to adminstuff
  • Run links http://localhost:3142/
  • Click "Statistics report and configuration page"
  • Untick "Stop the work on errors during index setup"
  • Tick "then truncate damaged files immediately", "Treat incomplete files as damaged" and "Purge unreferenced files immediately after scan"
  • Click "Start scan or Expiration"
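
To actually use the cache from a VM, point apt at adminstuff. A sketch, assuming adminstuff (172.31.24.36, per the entry above) answers on apt-cacher-ng's standard port 3142:

```shell
# On the client VM: route all apt HTTP traffic through the cache
echo 'Acquire::http::Proxy "http://172.31.24.36:3142";' | \
  sudo tee /etc/apt/apt.conf.d/01proxy

# Subsequent index/package fetches now go via the cache
sudo apt-get update
```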

Redmine

  • Formerly available under https://bugs.london.hackspace.org.uk/; that site now just has pointers to our GitHub issues.

Icinga 2

Services

  • ID: 16
  • Status: On
  • Date Created 15/10/2014
  • IP: 172.31.24.37
  • CPUs: 1
  • RAM: 1GB
  • Storage: 12GB on local
    • /var/lib/libvirt/images/debianwheezy-2.img
  • Notes: Important-ish services that were on babbage, robonaut, etc.; still under construction.

How to:

Create a new VM

  1. Have a login to Lamarr via LDAP
  2. Install virt-manager
  3. Connect to Lamarr with your login. (You'll probably need to set up an SSH key first.)
  4. Create a new VM on Lamarr. Use local to store the virtual drives
  5. Set suitable resources
  6. Set network to join bridge br0
  7. Start and have fun
  8. Add it to this wiki page
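
As an alternative to virt-manager, the steps above can be sketched on the command line with virt-install. Everything below is illustrative: the VM name, resources, ISO path and "youruser" are made-up examples, not an agreed convention.

```shell
# Hypothetical example: create a 20 GB VM called "example" in the
# "local" pool, bridged onto br0, booting a Debian installer ISO
# assumed to be in the iso pool on Lamarr.
virt-install \
  --connect qemu+ssh://youruser@lamarr.lan.london.hackspace.org.uk/system \
  --name example \
  --memory 2048 --vcpus 2 \
  --disk pool=local,size=20 \
  --network bridge=br0 \
  --cdrom /storage/isos/debian-netinst.iso
```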

Check RAID array status

(As root):

hpacucli ctrl slot=1 ld all show detail
hpacucli ctrl slot=1 array all show detail
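
A couple of related hpacucli queries that may also help when diagnosing the array (assuming the same P400 controller in slot 1; run as root):

```shell
# Overall controller, cache and battery status
hpacucli ctrl all show status

# Status of the physical disks behind the logical drives
hpacucli ctrl slot=1 pd all show status
```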