Hi guys,
let's assume your environment consists, more or less, of the following:
- vSphere 5.1 Enterprise edition (NOT Enterprise Plus - so NO distributed switches)
- 3 hosts, each with 20 × 1 Gbit/s NICs (I know it's a lot, but just go with it)
- 2 Cisco 3750 stacks, each with 48 × 1 Gbit/s ports and 10 Gbit/s fibre uplinks between the stacks, for CLIENT connections
- 2 Cisco 3750 stacks, each with 48 × 1 Gbit/s ports and 10 Gbit/s fibre uplinks between the stacks, for STORAGE connections
- 2 NetApp storage systems
We want to run about 80-120 virtual servers combined on those 3 hosts. Apart from Exchange 2010, SQL Server 2012, SharePoint 2010, and a file server, there should not be any heavy load.
FT is not needed, maybe in the future.
We will get a certified consultant for the setup but I wanted to know your recommendations too!
So these are the network VLANs that need to work for the client part:
- VLAN_DMZ_IN
- VLAN_DMZ_OUT
- VLAN_CLIENTS01 (Default client vlan)
- VLAN_CLIENTS02 (only a few Clients in another vlan)
I was thinking of something like this for the client part:
- vSwitch0_VLAN_DMZ (contains one VM port group for VLAN_DMZ_IN and one for VLAN_DMZ_OUT; uses 2 physical NICs)
- vSwitch1_VLAN_CLIENTS01 (contains two VM port groups; uses 6 physical NICs)
- vSwitch2_VLAN_CLIENTS02 (contains one VM port group; uses 2 physical NICs)

(Note: VM traffic connects to VM port groups; VMkernel ports are only for host services like management, vMotion, and NFS.)
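A minimal esxcli sketch of that client-side layout for one host. The vmnic numbers and VLAN IDs below are placeholders I made up, not values from your environment:

```shell
# Create the DMZ vSwitch and attach two physical uplinks
# (vmnic0/vmnic1 and VLAN IDs 10/11 are assumptions - adjust to your setup)
esxcli network vswitch standard add --vswitch-name=vSwitch0_VLAN_DMZ
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0_VLAN_DMZ --uplink-name=vmnic0
esxcli network vswitch standard uplink add --vswitch-name=vSwitch0_VLAN_DMZ --uplink-name=vmnic1

# One VM port group per DMZ VLAN, tagged at the port group (VST mode)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0_VLAN_DMZ --portgroup-name=VLAN_DMZ_IN
esxcli network vswitch standard portgroup set --portgroup-name=VLAN_DMZ_IN --vlan-id=10
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch0_VLAN_DMZ --portgroup-name=VLAN_DMZ_OUT
esxcli network vswitch standard portgroup set --portgroup-name=VLAN_DMZ_OUT --vlan-id=11
```

The same pattern repeats for vSwitch1_VLAN_CLIENTS01 (6 uplinks) and vSwitch2_VLAN_CLIENTS02 (2 uplinks). Tagging at the port group means the Cisco ports have to be trunk ports carrying those VLANs.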
On the storage side I guess I need those:
- Management+HA (can be together I guess)
- vm traffic (should be isolated I guess)
- vmotion (should be isolated I guess)
- ft (not needed now, should be isolated I guess)
- NFS (for the NetApp storage, should be isolated I guess)
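For the VMkernel side, here's a sketch of how one of those isolated networks (NFS, as an example) could be built with jumbo frames. The vSwitch name, vmnic numbers, VLAN ID, and IP addressing are all placeholders:

```shell
# Storage vSwitch with jumbo frames - note MTU 9000 must be set end-to-end:
# on this vSwitch, on the 3750 storage stacks, and on the NetApp interfaces
esxcli network vswitch standard add --vswitch-name=vSwitch3_NFS
esxcli network vswitch standard set --vswitch-name=vSwitch3_NFS --mtu=9000
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3_NFS --uplink-name=vmnic10
esxcli network vswitch standard uplink add --vswitch-name=vSwitch3_NFS --uplink-name=vmnic11

# VMkernel port group + interface for NFS (VLAN 30 and the IP are assumptions)
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch3_NFS --portgroup-name=PG_NFS
esxcli network vswitch standard portgroup set --portgroup-name=PG_NFS --vlan-id=30
esxcli network ip interface add --interface-name=vmk1 --portgroup-name=PG_NFS
esxcli network ip interface set --interface-name=vmk1 --mtu=9000
esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.30.11 --netmask=255.255.255.0 --type=static
```

vMotion (and later FT) would repeat the same pattern on their own vSwitches/VLANs, with the vMotion service enabled on the respective vmk interface afterwards.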
But I'm unsure how to split up the remaining 10 NICs for the storage part.
I would like to have all links active/active if possible. I know there is no 100% load balancing, but I would prefer an active/active setup over an active/passive one (if possible).
Besides the MTU size of 9000 (jumbo frames), what load-balancing algorithm do you recommend for this environment?
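For context: on standard vSwitches the usual choices are "route based on originating virtual port ID" (the default, active/active per VM, no switch-side config needed) and "route based on IP hash", which requires a matching static EtherChannel on the 3750s (standard vSwitches don't speak LACP). A hedged sketch of setting IP hash on one vSwitch - the vSwitch name and vmnic list are placeholders:

```shell
# IP-hash load balancing: requires a static EtherChannel
# (channel-group mode on, NOT LACP) on the matching Cisco 3750 ports
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1_VLAN_CLIENTS01 \
    --active-uplinks=vmnic2,vmnic3,vmnic4,vmnic5,vmnic6,vmnic7 \
    --load-balancing=iphash
```

With the default portid policy all uplinks are still active/active (VMs are spread across them), just without per-connection hashing, and it needs no switch-side configuration - worth weighing against the extra EtherChannel complexity.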
Any other tips?
Thanks a million!