In the Dell PowerEdge SC1435 servers delivered July 2008,
there are several BIOS settings that need to be frobbed on setup.

In the BIOS setup screen (F2 at boot):
- CPU -> Virtualization Technology: on
- Boot Sequence: disable NIC
- Integrated Devices -> Embedded Gb NIC: turn off PXE
- Serial Communication:
  - Serial Communication: On with Console Redirection via COM2
  - External Serial Connector: COM1
  - Failsafe Baud Rate: 57600
  - Remote Terminal Type: VT100/VT220
  - Redirection After Boot: Disabled

New Supermicros:
- Advanced -> "Wait for 'F1' If Error" -> Disabled
- Advanced -> Power Button Function -> 4 Seconds Override
- Advanced -> PCIe/PnP -> Load Onboard LAN N Option ROM -> Disabled
- Advanced -> Serial Port -> SOL -> Redirection After BIOS Post -> BootLoader
- Advanced -> Serial Port -> COM -> Console Redirection -> Enabled
- Advanced -> Serial Port -> COM -> Redirection After BIOS Post -> BootLoader
- Advanced -> PCIe -> Disable all OPROMs
- IPMI -> BMC Network Configuration (set the BMC's address; see the
  ipmitool sketch after this list)
- Boot order: USB Hard Disk, Removable, Hard Disk, UEFI Shell
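
If you'd rather set the BMC network config from a booted OS than from the
BIOS screen, ipmitool can do it; a sketch (the LAN channel number and the
netmask are assumptions; take the address from the IPMI list below):

ipmitool lan set 1 ipsrc static              # channel 1 is an assumption
ipmitool lan set 1 ipaddr 18.4.58.231        # new c-s IPMI, from the list below
ipmitool lan set 1 netmask 255.255.0.0       # mask is an assumption for this net
ipmitool lan print 1                         # verify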

New Debian installer:
 - 1000M for /boot in raid1 of sda1, sdb1
 - rest for LVM in raid1 of sda2, sdb2
  - 100G /
  - 64G swap
 - install a vanilla kernel, not a Xen one; we'll install the Xen
   hypervisor from backports in our later custom install
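
If you ever need to recreate that layout by hand rather than through the
installer, it's roughly this sketch (the VG name xvm is an assumption):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # LVM PV
pvcreate /dev/md1
vgcreate xvm /dev/md1          # VG name is an assumption
lvcreate -L 100G -n root xvm
lvcreate -L 64G -n swap xvm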

In the remote-management setup screen (Ctrl-E at boot):
- Turn on IPMI over LAN
- IP is (main IP) =~ s/18.181.0/10.5.128/
  e.g. 10.5.128.221 for citadel-station == 18.181.0.221
  (see the sed one-liner after this list)
- netmask is 255.255.0.0
- Set the password to the XVM root password
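
The IPMI address is just a substitution on the main IP's network part;
for scripting it, something like:

echo 18.181.0.221 | sed 's/^18\.181\.0\./10.5.128./'   # prints 10.5.128.221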

All of these settings are reflected on all 8 servers in the production
cluster.

In the Debian installer:
 - 500M for /boot in raid1 of sda1, sdb1
 - rest for LVM in raid1 of sda2, sdb2
  - 50G /
  - 25G swap
 - install a vanilla kernel, not a Xen one; we'll install the Xen
   hypervisor from backports in our later custom install.


Currently allocated backend IPs:

10.5.128.16 c-s
10.5.128.17 a-s
10.5.128.18 s-m
10.5.128.19 a-m
10.5.128.20 d-o
10.5.128.21 g-d
10.5.128.22 b-f
10.5.128.23 m-a (not installed)

10.5.128.128 RAID group portal
10.5.128.129 RAID device/management

10.5.128.221 c-s IPMI
10.5.128.222 a-s IPMI
10.5.128.223 s-m IPMI
10.5.128.224 a-m IPMI
10.5.128.225 d-o IPMI
10.5.128.226 g-d IPMI (currently unplugged)
10.5.128.227 b-f IPMI
10.5.128.228 m-a IPMI (not configured)
18.4.58.231 new c-s IPMI
18.4.58.232 new a-s IPMI
18.4.58.233 new s-m IPMI
18.4.58.234 new a-m IPMI
18.4.58.235 new d-o IPMI
18.4.58.236 new g-d IPMI
18.4.58.237 new b-f IPMI
18.4.58.238 new m-a IPMI


10.6.128.16 c-s
10.6.128.17 a-s
10.6.128.18 s-m
10.6.128.19 a-m
10.6.128.20 d-o
10.6.128.21 g-d
10.6.128.22 b-f
10.6.128.23 m-a (not installed)

10.6.128.129 RAID device/management

10.6.128.229 g-d IPMI (temporary hardware)


Precise (Ubuntu 12.04) hosts:

Copy /etc/invirt/conf.d/iscsi from another prod host before trying to
start iscsi (possibly before installing xvm-host)
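
For example (citadel-station as the donor host is illustrative; any prod
host will do):

scp citadel-station:/etc/invirt/conf.d/iscsi /etc/invirt/conf.d/iscsi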

After installing Xen:
/etc/default/grub (note the commenting out!):
GRUB_DEFAULT=2
#GRUB_HIDDEN_TIMEOUT=0
#GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX="console=hvc0 earlyprintk=xen nosplash"
GRUB_CMDLINE_XEN="loglvl=all guest_loglvl=all com2=115200,8n1 console=com2,vga"
GRUB_DISABLE_OS_PROBER="true"

update-grub

ln -s /usr/share/qemu-linaro /usr/share/qemu

Change /etc/hostname to the host's FQDN
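
For example (the FQDN shown is illustrative; use this host's real one):

echo citadel-station.mit.edu > /etc/hostname   # illustrative FQDN
hostname -F /etc/hostname                      # apply without rebooting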

/etc/sysctl.conf (yes, you need all three):
net.ipv4.conf.eth0.rp_filter = 0
net.ipv4.conf.eth1.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
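
To apply them without a reboot:

sysctl -p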

Comment out in /etc/init.d/clvm:
# if [ ! -f /etc/cluster/cluster.conf ]; then
#       log_failure_msg "clvmd: cluster not configured. Aborting."
#       exit 0
# fi

# if ! cman_tool status >/dev/null 2>&1; then
#       log_failure_msg "clvmd: cluster is not running. Aborting."
#       exit 0
# fi


On boot, you'll need to run /usr/lib/xvm-iscsi-connect to bring up
iscsi. Multipath will come up automatically along with that.
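
To confirm the sessions and paths actually came up:

iscsiadm -m session
multipath -ll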

Make sure corosync is running before starting clvmd (I know, obvious, but still)
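
A cheap way to enforce that ordering:

service corosync status && service clvmd start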

Copy /etc/init/ttyhvc0.conf from another new host

On another XVM host, run:
ipmitool -I lanplus -U ADMIN -H <address of new machine's ipmi> shell
At the ipmitool prompt:
user list
user set password <ID of ADMIN user>
When prompted, set the password to the XVM root password, then exit.

Serial console: from another XVM host, run
ipmitool -I lanplus -U ADMIN -H <address of new machine's ipmi> shell
then at the prompt:
sol activate
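
To detach from the SOL session without touching the host, type ipmitool's
escape sequence ~. (tilde, period) at the start of a line.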

To bring up the iSCSI backend, you'll need to ping each of the four
portal addresses:
18.4.{58,59}.{128,129}
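
ping only takes one destination at a time; in bash, brace expansion gets
you all four:

for ip in 18.4.{58,59}.{128,129}; do ping -c 1 "$ip"; done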