include looking up admin uid as per Mitch's suggestion
[invirt/doc/xvm.git] / xvm-host-setup-notes
index 3ba8a4d..b932f2e 100644
@@ -91,3 +91,59 @@ Currently allocated backend IPs:
 10.6.128.129 RAID device/management
 
 10.6.128.229 g-d IPMI (temporary hardware)
+
+
+Precise hosts:
+
+Copy /etc/invirt/conf.d/iscsi from another prod host before trying to
+start iscsi (possibly before installing xvm-host)
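A minimal sketch of that copy step; the source host name here is a placeholder, not from the notes:

```shell
# Copy the iscsi config from an existing production host
# ("some-prod-host" is hypothetical -- substitute a real XVM host).
scp some-prod-host:/etc/invirt/conf.d/iscsi /etc/invirt/conf.d/iscsi
```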
+
+After installing xen:
+/etc/default/grub (note the commenting out!):
+GRUB_DEFAULT=2
+#GRUB_HIDDEN_TIMEOUT=0
+#GRUB_HIDDEN_TIMEOUT_QUIET=true
+GRUB_TIMEOUT=10
+GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
+GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen nosplash"
+GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com2=115200,8n1 console=com2,vga"
+GRUB_DISABLE_OS_PROBER="true"
+
+update-grub
+
+ln -s /usr/share/qemu-linaro /usr/share/qemu
+
+Change /etc/hostname to the host's FQDN
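One way to do that on Precise, as a sketch (the FQDN shown is a placeholder):

```shell
# Set the hostname to the host's FQDN; run as root.
echo new-host.example.com > /etc/hostname
# Apply it immediately, without a reboot, by reading it back from the file.
hostname -F /etc/hostname
```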
+
+/etc/sysctl.conf (yes, you need all three):
+net.ipv4.conf.eth0.rp_filter = 0
+net.ipv4.conf.eth1.rp_filter = 0
+net.ipv4.conf.all.rp_filter = 0
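To pick up those settings without rebooting, standard sysctl usage (not specific to these notes) is:

```shell
# Reload /etc/sysctl.conf so the rp_filter settings take effect now.
sysctl -p /etc/sysctl.conf
# Spot-check one of them:
sysctl net.ipv4.conf.all.rp_filter
```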
+
+Comment out the following checks in /etc/init.d/clvm (they abort clvmd
+startup when no cman cluster.conf is present):
+# if [ ! -f /etc/cluster/cluster.conf ]; then
+#       log_failure_msg "clvmd: cluster not configured. Aborting."
+#       exit 0
+# fi
+
+# if ! cman_tool status >/dev/null 2>&1; then
+#       log_failure_msg "clvmd: cluster is not running. Aborting."
+#       exit 0
+# fi
+
+
+On boot, you'll need to run /usr/lib/xvm-iscsi-connect to bring up
+iscsi. Multipath will come up automatically along with that.
+
+Make sure corosync is running before starting clvmd (I know, obvious, but still)
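A rough way to enforce that ordering by hand (init-script names assumed for Precise):

```shell
# Only start clvmd once corosync reports it is running.
service corosync status || service corosync start
service clvmd start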
+
+On another XVM host, run:
+
+  ipmitool -I lanplus -U ADMIN -H <address of new machine's ipmi> shell
+
+Then, at the ipmitool prompt:
+
+  user list
+  user set password <ID of ADMIN user>
+
+Set the password to the XVM root password, then exit.
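The same steps also work non-interactively, without entering the ipmitool shell; the address and user ID stay placeholders, as in the notes:

```shell
# Find the numeric ID of the ADMIN user.
ipmitool -I lanplus -U ADMIN -H <address of new machine's ipmi> user list
# Assuming ADMIN turned out to be user ID 2 (check the list output first):
ipmitool -I lanplus -U ADMIN -H <address of new machine's ipmi> user set password 2
# ipmitool prompts for the new password; enter the XVM root password.
```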
+
+Serial console: from another XVM host, run:
+
+  ipmitool -I lanplus -U ADMIN -H <address of new machine's ipmi> shell
+
+Then, at the ipmitool prompt:
+
+  sol activate
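A quick note on getting back out of the console (standard ipmitool SOL behavior, not from the notes):

```shell
# At the ipmitool shell:
#   sol activate        # attach to the serial-over-LAN console
# To detach from an active SOL session, type the escape sequence:  ~.
# A stuck session can also be cleared from another ipmitool shell with:
#   sol deactivate
```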