I have an active/active cluster running on my VMware ESXi server without problems. I only had some minor trouble during installation: the 'Other Linux System (64-bit)' guest type only let me use the E1000 NIC, and with that NIC the HA setup crashed (the master detected the slave, but the slave shut down after a short time...). Now I'm using the 'Other Linux System (32-bit)' guest type with the Flexible NIC to run the 64-bit ASG and I'm happy... :-)
Which kind of cluster did you test / which type of cluster/HA setup?
1: Cluster in a box (ASG cluster on a single machine)
2: Cluster across boxes (ASG cluster on two different virtual/physical machines)
3: Virtual cluster (on VMware) to physical (dedicated ASG machine)
Sorry friends, I am a complete newcomer (not even a kid [:D]) to clustering.
I set up a cluster with two virtual nodes on one physical host:
eth1 External NIC > Bridged mode > internet access
eth0 Internal NIC > Host Only VMnet1 > local network, connection to one VM client
eth2 Sync NIC > Host Only VMnet2 > cluster synchronization
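In case it helps anyone rebuilding this layout: on VMware Workstation/Server the network part of each node's .vmx file would look roughly like the sketch below. This is only an illustration, not copied from my machines; the ethernet0/1/2-to-eth0/1/2 mapping and the exact vnet values are assumptions and can differ depending on the host.
  ethernet0.present = "TRUE"
  ethernet0.connectionType = "hostonly"
  ethernet1.present = "TRUE"
  ethernet1.connectionType = "bridged"
  ethernet2.present = "TRUE"
  ethernet2.connectionType = "custom"
  ethernet2.vnet = "VMnet2"
Here ethernet0 would be the internal NIC on host-only VMnet1, ethernet1 the bridged external NIC, and ethernet2 the sync NIC on the second host-only network VMnet2.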
After setting up the second (slave) node, it synchronized via the sync interface and received the same firmware and configuration as the master. Up to that point, everything worked fine.
Then I tested a failover by rebooting the master. The effect: all connections broke and did not come back up. I got no connection to WebAdmin via the local network, no SSH connection, and no ping reply from the ASG's internal interface.
I went to the console of node two, which had become master, and noticed some strange behaviour: if I ping the ASG's internal interface from the VM client, I get a timeout. But if I ping the ASG's internal interface while running a tcpdump on that interface on the ASG, I get a reply and WebAdmin access is possible.
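For anyone who wants to reproduce the test, this is roughly what I ran (192.168.1.1 is just a placeholder for the ASG's internal IP, so adjust it to your setup):
  # on the VM client:
  ping 192.168.1.1
  # on the ASG console, capture ICMP on the internal interface:
  tcpdump -n -i eth0 icmp
With the tcpdump running, the echo requests showed up and got answered; without it, the pings timed out.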
I tested a bit more, and a colleague had the idea to turn off the virtual_mac_address feature on the ASG. Then it worked!
Maybe the virtual_mac_address feature of the ASG does not work properly together with the virtual MAC handling of the VM server.
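One way to check whether it really is a MAC problem (just a suggestion; interface and address are placeholders): ping the ASG's internal IP from the VM client and then look at the client's ARP table to see whether the entry shows the cluster's virtual MAC or the node's real NIC MAC:
  ping -c 3 192.168.1.1
  arp -n
  # or, on newer Linux clients:
  ip neigh show
If the ARP entry points at a MAC that the VM server refuses to forward traffic to, that would match the behaviour above.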
Has anybody seen a problem like this in a fully virtualized cluster?