Equipment/Lamarr

From London Hackspace Wiki
Latest revision as of 22:03, 24 May 2021

Lamarr

  • Model: HP DL380 G5 (2U Version with 8 2.5" SAS Bays)
  • Sub-category: Defunct
  • Status: Scrapped
  • Last updated: 24 May 2021 22:03:16
  • Training requirement: yes
  • Training link: Unknown
  • ACnode: no
  • Owner: LHS
  • Origin: Donation
  • Location: Basement rack
  • Maintainers: Sysadmin team

Lamarr was retired when 447 Hackney Road closed. It has been replaced by Landin at Ujima House.

General use Hypervisor, currently using KVM/QEMU.

Rule 1 of using Lamarr: Do not install anything on Lamarr (Make a VM, silly!)

Info

  • IP: 172.31.24.32
  • DNS: lamarr.lan.london.hackspace.org.uk
  • Access: LDAP

Stats

Lamarr is an HP ProLiant DL380 G5 donated to the space.

  • 2 dual-core Xeons @ 3.0GHz
  • 32GB ECC 667MHz FB-DIMM RAM
  • HP P400 SAS PCIe RAID Controller with the following hardware RAID:
    • RAID10 on / ~123GB
    • RAID5 on /storage ~404G
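The usable sizes above follow the usual capacity rules for those RAID levels. A minimal sketch of the arithmetic (the 146GB per-disk figure is only an illustration; the actual drives fitted in Lamarr aren't recorded here):

```shell
# Rule-of-thumb usable capacity for the two array types on the P400.
# n = number of disks, s = per-disk size in GB.
raid10_usable() { echo $(( $1 * $2 / 2 )); }    # striped mirrors: half the raw space
raid5_usable()  { echo $(( ($1 - 1) * $2 )); }  # one disk's worth lost to parity

raid10_usable 4 146   # -> 292
raid5_usable  4 146   # -> 438
```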

libvirt config

XML files are stored in /root at the moment. Will get around to sorting.

An important thing to note is that multiple instances of libvirt run at any one time. The main instance is "system", which runs as root; most service VMs live there. The other is "session", which is unique to the user who runs it and runs with their user/group permissions.
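The two instances can be addressed explicitly with the standard libvirt connection URIs (a sketch, assuming you run these on Lamarr itself):

```console
$ virsh -c qemu:///system list --all     # root-owned instance: service VMs
$ virsh -c qemu:///session list --all    # your own per-user instance
```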

Storage Pools

  • iso - /storage/isos - Boot and install media. Everyone has read permission, root has write permission (Should be changed to admins?)
  • local - /storage/vms - Virtual drives stored on the local machine, should only be used for system VMs.

Networks

  • default - NATed network, works fine but won't give you an externally accessible IP or allow for PXE booting
  • bridge - Bridged network br0.
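For reference, a libvirt definition for a network like "bridge" typically looks as follows. This is a sketch of the standard libvirt bridged-network XML, not Lamarr's actual config file:

```xml
<network>
  <name>bridge</name>
  <!-- hand traffic straight to the existing host bridge -->
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>
```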

Current VMs

Chomsky

  • ID: 4
  • Status: On
  • Date Created: 22/09/2014
  • IP: 172.31.24.34
  • CPUs: 4
  • RAM: 8GB
  • Storage: 100GB on local
    • /storage/vms/chomsky.img

ACserver

  • ID: 26
  • Status: On
  • Date Created: 15/10/2014
  • IP: 172.31.24.35
  • CPUs: 1
  • RAM: 1GB
  • Storage: 12GB on local
    • /var/lib/libvirt/images/debianwheezy.img
  • Notes: Access control server, https://wiki.london.hackspace.org.uk/view/Project:Tool_Access_Control/ACNet

Adminstuff

  • ID: 17
  • Status: On
  • Date Created: 15/10/2014
  • IP: 172.31.24.36
  • CPUs: 1
  • RAM: 1GB
  • Storage: 12GB on local
    • /var/lib/libvirt/images/adminstuffs.img
    • /var/lib/libvirt/images/adminstuffs-1.img
  • Notes: Network admin bits that were on denning; now running Ansible, apt-cacher-ng, TFTP/PXE boot stuff, and an NFS server for diskless machines via Netboot.

apt-cacher-ng

To expire the cache (useful if /space fills up):

  • Log in to adminstuff
  • links http://localhost:3142/
  • Click "Statistics report and configuration page"
  • Untick "Stop the work on errors during index setup"
  • Tick "then truncate damaged files immediately", "Treat incomplete files as damaged" and "Purge unreferenced files immediately after scan"
  • Click "Start scan or Expiration"
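Before going through the UI it's worth confirming the cache is actually what's eating the space. A sketch, assuming the default apt-cacher-ng cache location:

```console
$ df -h /space                        # is the partition actually full?
$ du -sh /var/cache/apt-cacher-ng     # how much of it is the cache?
$ links http://localhost:3142/        # then follow the steps above
```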

Redmine

  • https://bugs.london.hackspace.org.uk/ now just has pointers to our GitHub issues.
  • Previously ran Redmine with a plugin to sync users with LDAP; that setup has been retired.

Icinga 2

  • Available under https://adminstuff.london.hackspace.org.uk/icinga-web/
  • Icinga 2, so the config syntax differs from Icinga 1: http://docs.icinga.org/icinga2/latest/doc/module/icinga2/toc
  • To test the config after editing: icinga2 -c /etc/icinga2/icinga2.conf -C

Services

  • ID: 16
  • Status: On
  • Date Created: 15/10/2014
  • IP: 172.31.24.37
  • CPUs: 1
  • RAM: 1GB
  • Storage: 12GB on local
    • /var/lib/libvirt/images/debianwheezy-2.img
  • Notes: Important-ish services that were on babbage (robonaut etc.); still under construction.

How to:

Create a new VM

  1. Have a login to Lamarr via LDAP
  2. Install virt-manager
  3. Connect to Lamarr with your login. (You'll probably need to set up an SSH key first.)
  4. Create a new VM on Lamarr. Use the local pool to store the virtual drives
  5. Set suitable resources
  6. Set network to join bridge br0
  7. Start and have fun
  8. Add it to this wiki page
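The same steps can also be done from the command line with virt-install instead of virt-manager. A hypothetical example — the username, VM name, sizes and install media below are placeholders, not real values:

```console
$ virt-install --connect qemu+ssh://you@lamarr.lan.london.hackspace.org.uk/system \
      --name myvm --ram 1024 --vcpus 1 \
      --disk pool=local,size=12 \
      --network bridge=br0 \
      --cdrom /storage/isos/debian-installer.iso
```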

Check RAID array status

(As root):

hpacucli ctrl slot=1 ld all show detail
hpacucli ctrl slot=1 array all show detail
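If a logical drive reports as degraded, the physical-drive view shows which disk is at fault. Same tool, also as root:

```console
$ hpacucli ctrl all show status       # quick controller health summary
$ hpacucli ctrl slot=1 pd all show    # per-disk status, e.g. which bay failed
```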