X-Git-Url: http://xvm.mit.edu/gitweb/invirt/doc/xvm.git/blobdiff_plain/96b673537107c04e21875f783b61e8ac51469f85..3252d4103c98cf72f4e0e84313316870c81571ad:/xvm-host-setup-notes

diff --git a/xvm-host-setup-notes b/xvm-host-setup-notes
index 088cb54..b932f2e 100644
--- a/xvm-host-setup-notes
+++ b/xvm-host-setup-notes
@@ -6,6 +6,31 @@ in the setup screen at F2:
 - CPU -> Virtualization Technology: on
 - Boot Sequence: disable NIC
 - Integrated Devices -> Embedded Gb NIC: turn off PXE
+- Serial Communication:
+  - Serial Communication: On with Console Redirection via COM2
+  - External Serial Connector: COM1
+  - Failsafe Baud Rate: 57600
+  - Remote Terminal Type: VT100/VT220
+  - Redirection After Boot: Disabled
+
+New Supermicros:
+- Advanced -> "Wait for 'F1' If Error" -> Disabled
+- Advanced -> Power Button Function -> 4 Seconds Override
+- Advanced -> PCIe/PnP -> Load Onboard LAN N Option Rom -> Disabled
+- Advanced -> Serial Port -> SOL -> Redirection After BIOS Post -> BootLoader
+- Advanced -> Serial Port -> COM -> Console Redirection -> Enabled
+- Advanced -> Serial Port -> COM -> Redirection After BIOS Post -> BootLoader
+- Advanced -> PCIe -> Disable all OPROMS
+- IPMI -> BMC Network Configuration
+- Boot order: USB Hard Disk, Removable, Hard Disk, UEFI Shell
+
+New debian installer:
+- 1000M for /boot in raid1 of sda1, sdb1
+- rest for LVM in raid1 of sda2, sdb2
+  - 100G /
+  - 64G swap
+- install a vanilla kernel, not xen;
+  will install xen hypervisor from backports in our later custom install

 in the setup screen for remote management, at Ctrl-E:
 - Turn on IPMI over LAN
@@ -14,10 +39,9 @@ in the setup screen for remote management, at Ctrl-E:
 - netmask is 255.255.0.0
 - Set the password to the XVM root

-All of these settings are reflected on all 4 servers in the production
+All of these settings are reflected on all 8 servers in the production
 cluster.

-
 In the debian installer:
 - 500M for /boot in raid1 of sda1, sdb1
 - rest for LVM in raid1 of sda2, sdb2
@@ -26,3 +50,100 @@ In the debian installer:
 - install a vanilla kernel, not xen;
   will install xen hypervisor from backports in our later custom install.
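+
+For reference, the "New debian installer" layout above (1000M /boot on RAID1,
+the rest as an LVM PV on RAID1, 100G root, 64G swap) corresponds roughly to
+the following if it ever has to be rebuilt by hand -- a sketch only, assuming
+sda and sdb are already partitioned identically; the md numbers, the VG name
+"xvm", and the LV names are made up here, not what partman actually picks:
+mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
+mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
+pvcreate /dev/md1
+vgcreate xvm /dev/md1
+lvcreate -L 100G -n root xvm
+lvcreate -L 64G -n swap xvm
+mkfs.ext4 /dev/md0          # /boot
+mkfs.ext4 /dev/xvm/root     # /
+mkswap /dev/xvm/swap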
+
+Currently allocated backend IPs:
+
+10.5.128.16   c-s
+10.5.128.17   a-s
+10.5.128.18   s-m
+10.5.128.19   a-m
+10.5.128.20   d-o
+10.5.128.21   g-d
+10.5.128.22   b-f
+10.5.128.23   m-a (not installed)
+
+10.5.128.128  RAID group portal
+10.5.128.129  RAID device/management
+
+10.5.128.221  c-s IPMI
+10.5.128.222  a-s IPMI
+10.5.128.223  s-m IPMI
+10.5.128.224  a-m IPMI
+10.5.128.225  d-o IPMI
+10.5.128.226  g-d IPMI (currently unplugged)
+10.5.128.227  b-f IPMI
+10.5.128.228  m-a IPMI (not configured)
+18.4.58.231   new c-s IPMI
+18.4.58.232   new a-s IPMI
+18.4.58.233   new s-m IPMI
+18.4.58.234   new a-m IPMI
+
+
+10.6.128.16   c-s
+10.6.128.17   a-s
+10.6.128.18   s-m
+10.6.128.19   a-m
+10.6.128.20   d-o
+10.6.128.21   g-d
+10.6.128.22   b-f
+10.6.128.23   m-a (not installed)
+
+10.6.128.129  RAID device/management
+
+10.6.128.229  g-d IPMI (temporary hardware)
+
+
+Precise hosts:
+
+Copy /etc/invirt/conf.d/iscsi from another prod host before trying to
+start iscsi (possibly before installing xvm-host)
+
+After installing xen:
+/etc/default/grub (note the commenting out!):
+GRUB_DEFAULT=2
+#GRUB_HIDDEN_TIMEOUT=0
+#GRUB_HIDDEN_TIMEOUT_QUIET=true
+GRUB_TIMEOUT=10
+GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
+GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen nosplash"
+GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com2=115200,8n1 console=com2,vga"
+GRUB_DISABLE_OS_PROBER="true"
+
+update-grub
+
+ln -s /usr/share/qemu-linaro /usr/share/qemu
+
+Change /etc/hostname to the host's FQDN
+
+/etc/sysctl.conf (yes, you need all three):
+net.ipv4.conf.eth0.rp_filter = 0
+net.ipv4.conf.eth1.rp_filter = 0
+net.ipv4.conf.all.rp_filter = 0
+
+Comment out in /etc/init.d/clvm:
+# if [ ! -f /etc/cluster/cluster.conf ]; then
+#     log_failure_msg "clvmd: cluster not configured. Aborting."
+#     exit 0
+# fi
+
+# if ! cman_tool status >/dev/null 2>&1; then
+#     log_failure_msg "clvmd: cluster is not running. Aborting."
+#     exit 0
+# fi
+
+
+On boot, you'll need to run /usr/lib/xvm-iscsi-connect to bring up
+iscsi. Multipath will come up automatically along with that.
+
+Make sure corosync is running before starting clvmd (I know, obvious, but still)
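+
+Putting the above together, a typical bring-up after a reboot looks roughly
+like this (a sketch, assuming xvm-host and the packages above are already
+installed and /etc/invirt/conf.d/iscsi is in place; run as root):
+sysctl -p                           # re-apply the rp_filter settings if needed
+/usr/lib/xvm-iscsi-connect          # log in to the iSCSI portal; multipath follows
+multipath -ll                       # sanity check: the RAID paths should show up
+service corosync status || service corosync start
+service clvm start                  # will not work until corosync is up
+vgs                                 # the clustered VG(s) should now be visible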
+
+On another XVM host, run ipmitool -I lanplus -U ADMIN -H <IPMI address> shell
+user list
+user set password <user id>
+Change the password to the XVM root password
+exit
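+
+(The same thing should work non-interactively if you prefer; the "2" here is
+just an example user ID -- take the real one from the "user list" output:)
+ipmitool -I lanplus -U ADMIN -H <IPMI address> user list
+ipmitool -I lanplus -U ADMIN -H <IPMI address> user set password 2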
+
+Serial console: from another XVM host, run
+ipmitool -I lanplus -U ADMIN -H <IPMI address> shell
+sol activate
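+
+A couple of general SOL notes (standard ipmitool behavior, not specific to our
+hosts): type "~." to drop out of an active SOL session, and if activation
+fails because a session is already open somewhere else, run
+ipmitool -I lanplus -U ADMIN -H <IPMI address> sol deactivate
+and then "sol activate" again.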