SSD Acceleration

SSD Acceleration test
Picture: ssdnow.jpg
Skills: Linux
Status: Active
Niche: Software
Purpose: Infra
Tool: No
Location: Server room
Cost: 0

In preparation for building a new storage environment, I wanted to see whether SSD acceleration would improve IOPS and throughput on a classic HDD array.

For this test, I put together the following machine, based on an ASUS P5S800-VM:

  • 1 GB RAM
  • 1x Intel(R) Celeron(R) CPU 2.66GHz
  • 1x Hitachi Deskstar IC35L120AVV207 'boot' disk
    • IDE
    • 241254720 sectors @ 512 bytes/sector (115 GiB)
  • 1x Hitachi HDT721010SLA360 'storage' disk
    • SATA
    • 1953525168 sectors @ 512 bytes/sector (931.5 GiB)
  • 1x Kingston SV300S37A120G 'caching' SSD
    • SATA
    • 234441648 sectors @ 512 bytes/sector (111.7 GiB)
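The sector counts above are in 512-byte sectors, as reported for example by blockdev (the device paths here are assumptions for this particular machine):

# Size in 512-byte sectors, per device (paths are assumptions):
blockdev --getsz /dev/sda   # boot disk
blockdev --getsz /dev/sdb   # storage disk
blockdev --getsz /dev/sdc   # caching SSD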

Using fio and https://github.com/tsaikd/fio-wrapper, I tested the raw MB/s and IOPS for each device, using the default 32 MB file size: (Graphs coming soon)
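
For reference, a minimal fio invocation approximating one of these test cases might look like the sketch below; the wrapper drives fio itself, so the exact job parameters (I/O engine, job count versus queue depth, target path) are assumptions here.

# Approximate fio job for the 'randread, 64 users, 8K blocks' case
# (paths and engine are assumptions, not the wrapper's literal settings):
fio --name=randread-test --rw=randread --bs=8k --size=32m \
    --numjobs=64 --ioengine=libaio --direct=1 --group_reporting \
    --directory=/mnt/target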

  • Boot:
    • randread, 64 users, 8K blocks: 1.7 MB/s, 288 ms latency, 219 IOPS
    • randrw, 64 users, 8K blocks: 0.68 MB/s, 385 ms latency, 86 IOPS
    • read, 64 users, 8K blocks: 32 MB/s, 15 ms latency, 4207 IOPS
    • write, 64 users, 8K blocks: 20 MB/s, 24 ms latency, 2569 IOPS
  • Storage:
    • randread, 64 users, 8K blocks: 1.7 MB/s, 283 ms latency, 222 IOPS
    • randrw, 64 users, 8K blocks: 0.832 MB/s, 305 ms latency, 106 IOPS
    • read, 64 users, 8K blocks: 34.2 MB/s, 14.5 ms latency, 4381 IOPS
    • write, 64 users, 8K blocks: 17.4 MB/s, 27 ms latency, 2220 IOPS
  • Caching:
    • randread, 64 users, 8K blocks: 8 MB/s, 0.46 ms latency, 2060 IOPS
    • randrw, 64 users, 8K blocks: 10 MB/s, 23.7 ms latency, 1348 IOPS
    • read, 64 users, 8K blocks: 29.7 MB/s, 16.2 ms latency, 3811 IOPS
    • write, 64 users, 8K blocks: 20.9 MB/s, 23.6 ms latency, 2683 IOPS

We can see that the SSD delivers roughly the same performance as the spinning disks for sequential read/write, but is significantly better at random read/write. This is not unexpected.

A write-back dm-cache device was then constructed with dmsetup, using the following configuration (a sketch of the commands behind these tables follows the dmsetup output):

root@roger-wilco:~# dmsetup ls
roger--wilco--vg-swap   (254:2)
roger--wilco--vg-root   (254:0)
ssd-metadata    (254:3)
ssd-blocks      (254:4)
cached-disk     (254:6)
vg0-spindle     (254:5)

root@roger-wilco:~# dmsetup table ssd-metadata
0 924000 linear 8:33 0
root@roger-wilco:~# dmsetup table ssd-blocks
0 233515600 linear 8:33 924000
root@roger-wilco:~# dmsetup table cached-disk
0 1953513472 cache 254:3 254:4 254:5 512 1 writeback default 0
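
A sketch of the dmsetup commands that would produce the tables above; the SSD partition (major:minor 8:33) is assumed to be /dev/sdc1, and the sector counts are taken from this machine's layout, so adjust both for other setups.

# Carve the SSD partition (8:33, assumed /dev/sdc1) into metadata and block areas:
dmsetup create ssd-metadata --table '0 924000 linear /dev/sdc1 0'
dmsetup create ssd-blocks --table '0 233515600 linear /dev/sdc1 924000'
# cache <metadata dev> <cache dev> <origin dev> <block size> <#features> <features> <policy> <#policy args>
dmsetup create cached-disk --table '0 1953513472 cache /dev/mapper/ssd-metadata /dev/mapper/ssd-blocks /dev/vg0/spindle 512 1 writeback default 0'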

root@roger-wilco:~# vgs
 VG             #PV #LV #SN Attr   VSize   VFree
 roger-wilco-vg   1   3   0 wz--n- 114.80g    0
 vg0              1   1   0 wz--n- 931.51g    0
root@roger-wilco:~# pvs
 PV         VG             Fmt  Attr PSize   PFree
 /dev/sda5  roger-wilco-vg lvm2 a--  114.80g    0
 /dev/sdb1  vg0            lvm2 a--  931.51g    0
root@roger-wilco:~# lvs
 LV      VG             Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
 home    roger-wilco-vg -wi-ao---- 103.49g
 root    roger-wilco-vg -wi-ao----   9.31g
 swap    roger-wilco-vg -wi-ao----   2.00g
 spindle vg0            -wi-ao---- 931.51g

The same fio tests were then run against the cached device:

  • Cached:
    • randread, 64 users, 8K blocks: 26.7 MB/s, 18.5 ms latency, 3423 IOPS
    • randrw, 64 users, 8K blocks: 5.36 MB/s, 46.6 ms latency, 686 IOPS
    • read, 64 users, 8K blocks: 30.1 MB/s, 16.5 ms latency, 3857 IOPS
    • write, 64 users, 8K blocks: 1 MB/s, 477 ms latency, 127 IOPS

With this setup, we see a tradeoff between sequential write and random write: random I/O improves considerably, but sequential write slows down dramatically for some reason - possibly an issue with the write-back configuration?
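
One way to start investigating (a sketch, assuming the default cache policy and the device names above) is to inspect the cache statistics and raise the migration threshold, which limits how quickly dm-cache copies blocks between the origin disk and the SSD:

# Inspect cache statistics (hits/misses, dirty blocks, feature args):
dmsetup status cached-disk
# Raise the migration threshold (in 512-byte sectors) so dm-cache can move
# data to/from the SSD more aggressively; 2048 is the documented default:
dmsetup message cached-disk 0 migration_threshold 8192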