I am stumped.
I've got a 4.1 environment that is using 2 SAN units:
One is a SATA SAN isolated on its own physical switch, using iSCSI on 10.10.10.1, all running over the software iSCSI adapter, over 1GbE connections.
One is a SAS/SSD unit isolated on its own physical switch, using iSCSI on 10.10.10.2, all running over the software iSCSI adapter, over 10GbE connections.
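For reference, the negotiated link speed and MTU on the uplinks can be confirmed with something like the following (vmnic and vSwitch names will vary per host):

    esxcfg-nics -l
    esxcfg-vswitch -l

The first should show the 10GbE vmnics linked at 10000Mbps full duplex; the second shows which vSwitch and port groups each uplink is tied to.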
I cannot seem to get much more than 100MB/sec, and I'm typically running around 65MB/sec to the new storage volumes on the new SAN over the 10GbE.
That said, I built up a host from scratch on the same hardware, connected it only to the 10GbE storage, and I'm pulling roughly 350MB/sec read / 200MB/sec write.
What I'm trying to sort out is why.
I inherited the SATA environment with VMware and I'm trying to move to the SAS environment, but if the performance isn't better in my testing, I don't want to migrate 20TB of data only to find out our storage is just as slow on the new unit.
Am I only as fast as my slowest disk using the software iSCSI adapter, even though I'm reading from and writing to 2 completely separate storage units on different physical switches? I don't get it.
I should note that I have my vSwitches isolated as well. The port groups for the 1GbE connections going to the SATA SAN are on one vSwitch, and the SAS SAN port groups are on a completely isolated vSwitch using the 10GbE adapter. Paths on all storage are set to Round Robin (per the manufacturer), although I have tried Fixed etc. and got similar results, just to rule it out.
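In case anyone wants to double-check the pathing side, this is roughly how the policy can be inspected and set on 4.1 (the naa ID below is a placeholder for an actual device):

    esxcli nmp device list
    esxcli nmp device setpolicy --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
    esxcli nmp roundrobin setconfig --device naa.xxxxxxxxxxxxxxxx --type iops --iops 1

The last line drops the Round Robin IOPS limit from the default of 1000 down to 1, which some storage vendors recommend for throughput; I mention it as something I could still try, not something my manufacturer has confirmed for this unit.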
I should also note that the VMs I'm testing with are all on the same host, so it is using the host bus to do the transfers in testing.
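Finally, for completeness, the vmknic-to-software-iSCSI binding can be listed like this (vmhba33 is just a placeholder for the software iSCSI adapter name on my hosts):

    esxcli swiscsi nic list -d vmhba33

Each 10GbE iSCSI vmkernel port should show up there; if only the 1GbE vmknics were bound, that alone could cap throughput at roughly 1GbE speeds.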