Thursday, March 20, 2014

My disk speed tests with a 10GbE network and SSDs.

Here is the equipment we will be doing disk speed tests on:
  • 2 x Dell PowerEdge Generation 11 servers, each with:
    • 6 x Samsung 840 Pro 256 GB SSD in RAID 6
    • 1 x Intel X520-DA2 2-port 10GbE PCIe card
  • Synology RS10613xs NAS with SSD caching:
    • 10 x 10k SAS 600 GB in RAID 6
    • 2 x Samsung 840 Pro 256 GB SSD for caching (I believe RAID 0)
    • 1 x Intel X520-DA2 2-port 10GbE PCIe card
Each 10GbE NIC uses Intel SFP+ short-range (SR) fiber modules.  The fiber cables are 3' Belkin with LC connectors.  The three devices are connected to each other in a triangle (three point-to-point networks) over the fiber.  There is a separate 1GbE network for management, so my RDP sessions should not affect the tests.  The Dells are Hyper-V 2012 host servers.  The hosts and the Synology use standard folder shares (Windows SMB).
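If you want to reproduce a rough version of these numbers yourself, a simple sequential read/write timing is enough to see the same trends.  The sketch below is only an illustration (the file path is a placeholder), not the tool that produced the results in this post:

    import os
    import time

    BLOCK = 1024 * 1024              # 1 MiB per I/O
    TOTAL = 2048 * BLOCK             # 2 GiB test file

    def sequential_write(path):
        """Write TOTAL bytes sequentially and return MB/s."""
        buf = os.urandom(BLOCK)
        start = time.perf_counter()
        with open(path, "wb", buffering=0) as f:
            for _ in range(TOTAL // BLOCK):
                f.write(buf)
            os.fsync(f.fileno())     # make sure the data left the OS write cache
        elapsed = time.perf_counter() - start
        return TOTAL / (1024 * 1024) / elapsed

    def sequential_read(path):
        """Read the file back sequentially and return MB/s."""
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            while f.read(BLOCK):
                pass
        elapsed = time.perf_counter() - start
        return TOTAL / (1024 * 1024) / elapsed

    if __name__ == "__main__":
        # Point this at a local volume or a UNC share, e.g. r"\\host\share\test.bin"
        target = r"D:\speedtest.bin"  # placeholder path
        print(f"write: {sequential_write(target):.0f} MB/s")
        print(f"read:  {sequential_read(target):.0f} MB/s")
        os.remove(target)

One caveat: on a single machine the read pass can be served straight from the OS cache right after the write, so for honest read numbers use a file bigger than RAM or read from a different machine.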

Some reference tests: local SSDs, two disks in RAID 1 (pre-server-build test):


The next test was run once the servers were provisioned with RAID 6, from inside a VM whose VHD also lives on the SSDs.  As you can see, there is a bit of a tax to pay for RAID 6:
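That tax is expected: RAID 6 keeps two parity blocks per stripe, so big sequential writes pay for the parity calculation and lose two disks' worth of stripe width, while small random writes pay the classic read-modify-write penalty of roughly six back-end I/Os per write.  A back-of-the-envelope sketch, using a purely illustrative per-SSD figure rather than anything measured here:

    def raid6_random_write_ceiling(disks, write_iops_per_disk, penalty=6):
        """Rough ceiling for small random writes on a RAID 6 array.

        Each small write costs about 6 back-end I/Os: read the data, P and Q
        blocks, then write the updated data, P and Q back out."""
        return disks * write_iops_per_disk / penalty

    # 40000 write IOPS per SSD is an assumed, illustrative number, not a measurement.
    print(f"{raid6_random_write_ceiling(disks=6, write_iops_per_disk=40000):,.0f} IOPS")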


Testing with the gigabit network, Host to Host:


Gigabit network again, Host to NAS; even though they are 10k disks, it is still pretty slow:
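The 10k disks aren't the bottleneck here, the gigabit link is: 1 Gbit/s works out to 125 MB/s before any protocol overhead, so pretty much any healthy array will flat-line at the wire speed.  A quick sanity check (the 0.94 efficiency factor is just an assumed allowance for Ethernet/IP/TCP/SMB overhead):

    def usable_throughput_mb_s(link_gbit_s, efficiency=0.94):
        """Approximate payload rate of a TCP/SMB copy over an Ethernet link.

        efficiency is an assumed fudge factor for Ethernet framing plus
        IP/TCP/SMB headers; real numbers vary with MTU, tuning and CPU."""
        return link_gbit_s * 1000 / 8 * efficiency

    print(f"1 GbE : ~{usable_throughput_mb_s(1):.0f} MB/s")   # ~118 MB/s
    print(f"10 GbE: ~{usable_throughput_mb_s(10):.0f} MB/s")  # ~1175 MB/s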


10GbE network, Host to Host with jumbo frames, now we are talking!
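For what it's worth, jumbo frames help less by shrinking headers than by cutting the packet count roughly six-fold, which saves per-packet CPU and interrupt work on the hosts.  A quick comparison, assuming plain IPv4 + TCP headers (20 + 20 bytes) and about 38 bytes of Ethernet framing per packet:

    def tcp_payload_efficiency(mtu, ip_tcp_headers=40, framing=38):
        """Fraction of on-wire bytes that carry TCP payload.

        ip_tcp_headers: IPv4 + TCP without options (20 + 20 bytes, assumed).
        framing: preamble, Ethernet header, FCS and inter-frame gap (~38 bytes)."""
        return (mtu - ip_tcp_headers) / (mtu + framing)

    for mtu in (1500, 9000):
        packets_per_gib = 2**30 // (mtu - 40)
        print(f"MTU {mtu}: {tcp_payload_efficiency(mtu):.1%} payload, "
              f"{packets_per_gib:,} packets per GiB transferred")

Jumbo frames only help if the MTU matches on both ends of each link; a mismatch tends to show up as stalls and dropped packets rather than a speedup.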


10GbE network, Host to NAS with jumbo frames, these are some crazy numbers!  I wish I could explain such a high write speed; maybe it was a fluke, but each test was run a few times.  Maybe the SSD caching on the NAS had an influence somehow: if the Synology's SSD cache is absorbing the writes before they are flushed to the SAS array, the benchmark would be seeing SSD speeds rather than spinning-disk speeds.