Below is a test of automatic failover on a two-node RHEL 5.7 cluster. The primary node is deliberately panicked, and the surviving node fences it and takes over the service.

  • There is one service, ServiceName
  • There is a single shared VG/LV, /dev/shared_vg1/shared_lv1, mounted at /mnt/shared (a quick pre-test storage check is sketched just below)
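
Before touching anything, it is worth confirming that both nodes can actually see the shared storage. A minimal sketch of that check, assuming the LVM names above and the multipath setup visible in the logs further down:

 vgs shared_vg1                      # shared volume group is visible
 lvs shared_vg1/shared_lv1           # shared logical volume exists
 multipath -ll                       # multipath maps backing the shared VG look healthy

The cluster.conf used for the test: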

<?xml version="1.0"?>
<cluster alias="round-and-round" config_version="40" name="round-and-round">
        <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
        <clusternodes>
                <clusternode name="hostname1.domain.private" nodeid="1" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ipmi-hostname1"/>
                                </method>
                        </fence>
                </clusternode>
                <clusternode name="hostname2.domain.private" nodeid="2" votes="1">
                        <fence>
                                <method name="1">
                                        <device name="ipmi-hostname2"/>
                                </method>
                        </fence>
                </clusternode>
        </clusternodes>
        <cman expected_votes="1" two_node="1">
                <multicast addr="224.0.0.1"/>
        </cman>
        <fencedevices>
                <fencedevice agent="fence_ipmilan" auth="password" ipaddr="192.168.1.1" login="test-user" name="ipmi-hostname1" passwd="test-password" delay="30"/>
                <fencedevice agent="fence_ipmilan" auth="password" ipaddr="192.168.1.2" login="test-user" name="ipmi-hostname2" passwd="test-password"/>=
        </fencedevices>
        <rm>
                <failoverdomains>
                        <failoverdomain name="FailDomain" ordered="1" restricted="1">
                                <failoverdomainnode name="hostname1.domain.private" priority="1"/>
                                <failoverdomainnode name="hostname2.domain.private" priority="2"/>
                        </failoverdomain>
                </failoverdomains>
                <resources>
                        <lvm lv_name="shared_lv1" name="shared_lv1_name" self_fence="1" vg_name="shared_vg1"/>
                        <fs device="/dev/shared_vg1/shared_lv1" force_fsck="0" force_unmount="1" fsid="6742" fstype="ext3" mountpoint="/mnt/shared" name="shared-fs" self_fence="1"/>
                        <ip address="192.168.1.3" monitor_link="1"/>
                        <script file="/usr/local/bin/pc.sh" name="start.sh"/>
                </resources>
                <service autostart="1" domain="FailDomain" exclusive="0" name="ServiceName">
                        <lvm ref="shared_lv1_name">
                                <fs ref="shared-fs"/>
                                <ip ref="192.168.1.3">
                                        <script ref="start.sh"/>
                                </ip>
                        </lvm>
                </service>
        </rm>
</cluster>
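
Not part of the failover test itself, but the usual sanity checks before pulling the trigger, sketched here as an assumption about the workflow rather than a captured session: bump config_version, propagate the file, let rgmanager parse the resource tree, and confirm each IPMI fence device answers (addresses and credentials are the placeholders from the config above).

 ccs_tool update /etc/cluster/cluster.conf                              # push the updated cluster.conf to the other node
 rg_test test /etc/cluster/cluster.conf                                 # parse the <rm> resource tree without starting anything
 fence_ipmilan -a 192.168.1.1 -l test-user -p test-password -o status   # fence device for hostname1 responds
 fence_ipmilan -a 192.168.1.2 -l test-user -p test-password -o status   # fence device for hostname2 responds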

The kernel panic on the primary node is triggered by executing:

 echo c > /proc/sysrq-trigger
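
The trigger only produces a panic if the sysrq interface is enabled on the primary node, so check (and if necessary enable) it beforehand; a quick sketch:

 sysctl kernel.sysrq               # 1 means sysrq is fully enabled
 echo 1 > /proc/sys/kernel/sysrq   # enable it for the current boot if it is not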

Status of the cluster before triggering the kernel panic on the primary node, hostname1:

[root@hostname2 ~]# clustat
Cluster Status for round-and-round @ Fri July 1 16:31:00 2013
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 hostname2.domain.private                   1 Online, Local, rgmanager
 hostname1.domain.private                   2 Online

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:ServiceName                  hostname1.domain.private      started

Status of the cluster just after the panic on hostname1; the node is already seen as Offline, but the service has not yet been taken over:

[root@hostname2 ~]# clustat
Cluster Status for round-and-round @ Fri July 1 16:31:40 2013
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 hostname2.domain.private                   1 Online, Local, rgmanager
 hostname1.domain.private                   2 Offline

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:ServiceName                  hostname1.domain.private      started

Status of the cluster while hostname1 is being fenced and the service is migrating to hostname2:

[root@hostname2 ~]# clustat
Cluster Status for round-and-round @ Fri July 1 16:32:02 2013
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 hostname2.domain.private                   1 Online, Local, rgmanager
 hostname1.domain.private                   2 Offline

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:ServiceName                  hostname2.domain.private      starting

Status of the cluster after fencing and service migration have completed:

[root@hostname2 ~]# clustat
Cluster Status for round-and-round @ Fri July 1 16:39:13 2013
Member Status: Quorate

 Member Name                             ID   Status
 ------ ----                             ---- ------
 hostname2.domain.private                   1 Online, Local, rgmanager
 hostname1.domain.private                   2 Offline

 Service Name                   Owner (Last)                   State
 ------- ----                   ----- ------                   -----
 service:ServiceName                  hostname2.domain.private      started
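
Because FailDomain is ordered with hostname1 at priority 1, rgmanager should pull the service back on its own once hostname1 is repaired and rejoins. If a manual move is preferred instead (for example with nofailback set on the domain), a sketch using clusvcadm:

 clusvcadm -r ServiceName -m hostname1.domain.private   # relocate the running service back to the preferred node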

The relevant syslog from hostname2 (the surviving node) during the failover:

July 1 16:31:16 hostname2 root: seamus simulating kernel panic echo c on primary node
July 1 16:31:31 hostname2 openais[5968]: [TOTEM] The token was lost in the OPERATIONAL state.
July 1 16:31:31 hostname2 openais[5968]: [TOTEM] Receive multicast socket recv buffer size (320000 bytes).
July 1 16:31:31 hostname2 openais[5968]: [TOTEM] Transmit multicast socket send buffer size (320000 bytes).
July 1 16:31:31 hostname2 openais[5968]: [TOTEM] entering GATHER state from 2.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] entering GATHER state from 0.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] Creating commit token because I am the rep.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] Storing new sequence id for ring 1530
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] entering COMMIT state.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] entering RECOVERY state.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] position [0] member 10.10.90.1:
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] previous ring seq 5420 rep 10.10.90.1
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] aru 5c high delivered 5c received flag 1
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] Did not need to originate any messages in recovery.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] Sending initial ORF token
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] CLM CONFIGURATION CHANGE
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] New Configuration:
July 1 16:31:33 hostname2 openais[5968]: [CLM  ]      r(0) ip(10.10.90.1)
July 1 16:31:33 hostname2 kernel: dlm: closing connection to node 2
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] Members Left:
July 1 16:31:33 hostname2 fenced[5990]: hostname1.domain.private not a cluster member after 0 sec post_fail_delay
July 1 16:31:33 hostname2 openais[5968]: [CLM  ]      r(0) ip(10.10.90.2)
July 1 16:31:33 hostname2 fenced[5990]: fencing node "hostname1.domain.private"
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] Members Joined:
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] CLM CONFIGURATION CHANGE
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] New Configuration:
July 1 16:31:33 hostname2 openais[5968]: [CLM  ]      r(0) ip(10.10.90.1)
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] Members Left:
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] Members Joined:
July 1 16:31:33 hostname2 openais[5968]: [SYNC ] This node is within the primary component and will provide service.
July 1 16:31:33 hostname2 openais[5968]: [TOTEM] entering OPERATIONAL state.
July 1 16:31:33 hostname2 openais[5968]: [CLM  ] got nodejoin message 10.10.90.1
July 1 16:31:33 hostname2 openais[5968]: [CPG  ] got joinlist message from node 1
July 1 16:31:50 hostname2 fenced[5990]: fence "hostname1.domain.private" success
July 1 16:31:50 hostname2 clurgmgrd[6229]: <notice> Taking over service service:ServiceName from down member hostname1.domain.private
July 1 16:31:51 hostname2 clurgmgrd: [6229]: <notice> Owner of shared_vg1/shared_lv1 is not in the cluster
July 1 16:31:51 hostname2 clurgmgrd: [6229]: <notice> Stealing shared_vg1/shared_lv1
July 1 16:31:51 hostname2 clurgmgrd: [6229]: <notice> Activating shared_vg1/shared_lv1
July 1 16:31:51 hostname2 clurgmgrd: [6229]: <notice> Making resilient : lvchange -ay shared_vg1/shared_lv1
July 1 16:31:51 hostname2 clurgmgrd: [6229]: <notice> Resilient command: lvchange -ay shared_vg1/shared_lv1 --config devices{filter=["a|/dev/mpath/mpshared|","a|/dev/mpath/mpsysp2|","r|.*|"]}
July 1 16:31:51 hostname2 multipathd: dm-12: devmap not registered, can't remove
July 1 16:31:51 hostname2 multipathd: dm-12: add map (uevent)
July 1 16:31:52 hostname2 kernel: kjournald starting.  Commit interval 5 seconds
July 1 16:31:52 hostname2 kernel: EXT3-fs warning: maximal mount count reached, running e2fsck is recommended
July 1 16:31:52 hostname2 kernel: EXT3 FS on dm-12, internal journal
July 1 16:31:52 hostname2 kernel: EXT3-fs: dm-12: 1 orphan inode deleted
July 1 16:31:52 hostname2 kernel: EXT3-fs: recovery complete.
July 1 16:31:52 hostname2 kernel: EXT3-fs: mounted filesystem with ordered data mode.
July 1 16:31:54 hostname2 avahi-daemon[5564]: Registering new address record for 192.168.1.3 on eth0.
July 1 16:33:30 hostname2 clurgmgrd[6229]: <notice> Service service:ServiceName started
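
To round out the test, a quick check on hostname2 that every resource in the service actually came up; a sketch, assuming the mountpoint, VG/LV and floating IP defined in the resources section and the eth0 interface seen in the avahi log line above:

 clustat -s ServiceName                  # rgmanager's view of the service
 df -h /mnt/shared                       # shared ext3 filesystem is mounted
 lvs -o +lv_tags shared_vg1/shared_lv1   # LV is active (and, with HA-LVM tagging, owned by this node)
 ip addr show eth0 | grep 192.168.1.3    # floating service IP is configured on hostname2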