X-Git-Url: http://xvm.mit.edu/gitweb/invirt/doc/xvm.git/blobdiff_plain/ee57059b6647efa627d15b63b6bd25a53cdd823c..3252d4103c98cf72f4e0e84313316870c81571ad:/xvm-host-setup-notes
diff --git a/xvm-host-setup-notes b/xvm-host-setup-notes
index 09fc501..b932f2e 100644
--- a/xvm-host-setup-notes
+++ b/xvm-host-setup-notes
@@ -20,9 +20,17 @@ New Supermicros:
 - Advanced -> Serial Port -> SOL -> Redirection After BIOS Post -> BootLoader
 - Advanced -> Serial Port -> COM -> Console Redirection -> Enabled
 - Advanced -> Serial Port -> COM -> Redirection After BIOS Post -> BootLoader
-Advanced -> PCIe -> Disable all OPROMS
+- Advanced -> PCIe -> Disable all OPROMS
+- IPMI -> BMC Network Configuration
 - Boot order: USB Hard Disk, Removable, Hard Disk, UEFI Shell
 
+New Debian installer:
+  - 1000M for /boot in raid1 of sda1, sdb1
+  - rest for LVM in raid1 of sda2, sdb2
+    - 100G /
+    - 64G swap
+  - install a vanilla kernel, not a Xen one; we will install the Xen
+    hypervisor from backports in our later custom install
 
 in the setup screen for remote management, at Ctrl-E:
 - Turn on IPMI over LAN
@@ -67,6 +75,8 @@ Currently allocated backend IPs:
 10.5.128.228 m-a IPMI (not configured)
 18.4.58.231 new c-s IPMI
 18.4.58.232 new a-s IPMI
+18.4.58.233 new s-m IPMI
+18.4.58.234 new a-m IPMI
 
 10.6.128.16 c-s
 
@@ -81,3 +91,59 @@ Currently allocated backend IPs:
 10.6.128.129 RAID device/management
 
 10.6.128.229 g-d IPMI (temporary hardware)
+
+
+Precise hosts:
+
+Copy /etc/invirt/conf.d/iscsi from another prod host before trying to
+start iscsi (possibly before installing xvm-host)
+
+After installing xen:
+/etc/default/grub (note the commenting out!):
+GRUB_DEFAULT=2
+#GRUB_HIDDEN_TIMEOUT=0
+#GRUB_HIDDEN_TIMEOUT_QUIET=true
+GRUB_TIMEOUT=10
+GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
+GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen nosplash"
+GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com2=115200,8n1 console=com2,vga"
+GRUB_DISABLE_OS_PROBER="true"
+
+update-grub
+
+ln -s /usr/share/qemu-linaro /usr/share/qemu
+
+Change /etc/hostname to the host's FQDN
+
+/etc/sysctl.conf (yes, you need all three):
+net.ipv4.conf.eth0.rp_filter = 0
+net.ipv4.conf.eth1.rp_filter = 0
+net.ipv4.conf.all.rp_filter = 0
+
+Comment out in /etc/init.d/clvm:
+# if [ ! -f /etc/cluster/cluster.conf ]; then
+#     log_failure_msg "clvmd: cluster not configured. Aborting."
+#     exit 0
+# fi
+
+# if ! cman_tool status >/dev/null 2>&1; then
+#     log_failure_msg "clvmd: cluster is not running. Aborting."
+#     exit 0
+# fi
+
+
+On boot, you'll need to run /usr/lib/xvm-iscsi-connect to bring up
+iscsi.  Multipath will come up automatically along with that.
+
+Make sure corosync is running before starting clvmd (I know, obvious, but still)
+
+On another XVM host, run ipmitool -I lanplus -U ADMIN -H
shell +user list +user set password +Change the password to the XVM root password +exit + +Serial console: from another XVM host run ipmitool -I lanplus -U ADMIN +-H
shell +sol activate
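
The two ipmitool recipes above share everything but the trailing command. A minimal sketch of the invocation (the `ipmi_cmd` helper is hypothetical, not part of these notes, and 18.4.58.231 is just an example address taken from the backend IP list above -- substitute the target host's own IPMI IP):

```shell
#!/bin/sh
# Illustrative helper only: assemble the ipmitool command line used in
# these notes.  ADMIN is the BMC user from the notes; the -H argument
# must be the target host's allocated IPMI address.
ipmi_cmd() {
    addr="$1"; shift
    printf 'ipmitool -I lanplus -U ADMIN -H %s %s\n' "$addr" "$*"
}

# Example invocations (18.4.58.231 is the "new c-s IPMI" entry above):
ipmi_cmd 18.4.58.231 shell          # IPMI shell: user list, user set password, exit
ipmi_cmd 18.4.58.231 sol activate   # serial console over LAN
```

Inside the `shell` session, `user list` shows the user IDs; `user set password <id>` with no password argument prompts for the new one. An active SOL session is detached with `~.` (the default ipmitool escape sequence).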