Hello everyone,
For a few months now I have had a strange and serious problem with vMotion in a multi-NIC setup. The issue does not occur with a single-NIC configuration. Unfortunately, VMware support didn't find anything, so maybe somebody here can help me identify the source of my problem!
My environment:
Two HP ProLiant DL380 G7 servers running ESXi 5.1
Both servers have an HP NC360T dual-port NIC installed
vMotion setup:
Separate vMotion VLAN
Both vmknics are in the same subnet and VLAN
Both vmknics are connected to the same vSwitch
The vSwitch has two pNICs, both active
Each vmknic overrides the failover order so that it actively uses a different pNIC (see the verification commands below)
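For completeness, the failover setup can be verified from the ESXi shell like this (the vSwitch and portgroup names vSwitch1, vMotion-1 and vMotion-2 are placeholders):

  # vSwitch-level teaming policy (should list both vmnics as active)
  esxcli network vswitch standard policy failover get -v vSwitch1

  # Per-portgroup overrides (each should show one active and one standby uplink)
  esxcli network vswitch standard portgroup policy failover get -p vMotion-1
  esxcli network vswitch standard portgroup policy failover get -p vMotion-2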
Regarding the vMotion pNICs:
The first NIC is an onboard port; the second is a port on the NC360T
Both NICs are connected to the same Cisco Catalyst 2960 switch
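In case a driver or firmware issue plays a role, both uplinks can be checked like this (vmnic0 and vmnic4 are placeholders for the onboard and NC360T ports):

  # List all pNICs with driver, link state and speed
  esxcli network nic list

  # Driver and firmware details for the two vMotion uplinks
  esxcli network nic get -n vmnic0
  esxcli network nic get -n vmnic4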
Now for the problem description:
I select around 15 VMs to migrate from one host to the other. Everything goes fine for a few minutes, until my whole backbone goes "crazy" and certain VMs, routers, switches, etc. are unreachable for around 10 seconds. At the same time I can see on the switch where the vMotion pNICs are connected that the links of the target host go down and come back up a few seconds later (as if the adapters had been restarted)!
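To catch the event from both sides while the migrations run, something like this can be used (the vmk numbers and IP addresses are placeholders for the peer host's vMotion vmknics):

  # On the target ESXi host: watch for link-down/link-up messages
  tail -f /var/log/vmkernel.log | grep -iE 'link|vmnic'

  # From the source host: test each vMotion path individually
  vmkping -I vmk1 192.168.100.12
  vmkping -I vmk2 192.168.100.13

Since a flapping port can also make the switch flush its MAC table and flood traffic (which might explain why the rest of the backbone is briefly unreachable), the Catalyst 2960 side is worth watching too:

  # On the 2960: look for port flaps and STP topology changes
  show logging
  show spanning-tree detail | include changes|occurred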
I don't think it's a hardware defect, because it happens regardless of whether Host1 or Host2 is the target.
Does anybody have an idea what the problem is, or how I can identify the source?
Thanks for any comments!