Direct IO tests on RBD device vary significantly

Trying to get an understanding of why direct IO would be so slow on my cluster.

Ceph 0.94.1
1 Gig public network
10 Gig public network
10 Gig cluster network

100 OSDs, 4 TB disks, 5 GB SSD journals.

As of this morning I had no SSD journals and was finding direct IO was
sub-10 MB/s, so I decided to add journals today.
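
(Sketch of the journal move, in case it matters: roughly the standard
flush/re-create steps per OSD. The OSD id and SSD partition below are
placeholders, not my actual layout.)

   # keep OSDs from being marked out / triggering rebalancing while restarted
   ceph osd set noout

   # per OSD: stop it, flush the old journal, point it at the SSD, recreate, restart
   service ceph stop osd.12                                # init-system dependent
   ceph-osd -i 12 --flush-journal
   ln -sf /dev/sdb1 /var/lib/ceph/osd/ceph-12/journal      # sdb1 = 5 GB SSD journal partition
   ceph-osd -i 12 --mkjournal
   service ceph start osd.12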

Afterwards I started running tests again and wasn't very impressed.
Then for no apparent reason the write speeds increased significantly.
But I'm finding they vary wildly.

Currently there is a bit of background ceph activity, but only my
testing client has an rbd mapped/mounted:
           election epoch 144, quorum 0,1,2 mon1,mon3,mon2
     osdmap e181963: 100 osds: 100 up, 100 in
            flags noout
      pgmap v2852566: 4144 pgs, 7 pools, 113 TB data, 29179 kobjects
            227 TB used, 135 TB / 363 TB avail
                4103 active+clean
                  40 active+clean+scrubbing
                   1 active+clean+scrubbing+deep
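
One thing I have not ruled out yet is whether that scrubbing is what makes
the numbers jump around. The obvious check (just an idea, not something in
the pastebins) would be to pause scrubbing during a test run:

   # pause background scrubbing while benchmarking
   ceph osd set noscrub
   ceph osd set nodeep-scrub

   # ... run the direct IO tests ...

   # and re-enable it afterwards
   ceph osd unset noscrub
   ceph osd unset nodeep-scrub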

Tests:
1M block size: http://pastebin.com/LKtsaHrd (throughput has no consistency)
4k block size: http://pastebin.com/ib6VW9eB (throughput is amazingly consistent)
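
(Roughly the shape of the tests, for anyone not opening the pastebins:
direct IO writes along these lines, to a file on the mounted RBD. The path
and sizes are placeholders; the pastebins have the real output.)

   # 1M direct IO writes (placeholder path on the mounted RBD)
   fio --name=write-1m --filename=/mnt/rbd0/fio.test --rw=write \
       --bs=1M --size=4G --direct=1 --ioengine=libaio --iodepth=1

   # same test with 4k blocks
   fio --name=write-4k --filename=/mnt/rbd0/fio.test --rw=write \
       --bs=4k --size=1G --direct=1 --ioengine=libaio --iodepth=1

   # or, the dd equivalent of the 1M case
   dd if=/dev/zero of=/mnt/rbd0/dd.test bs=1M count=4096 oflag=direct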

Thoughts?