Re: slow ssd journal

The drive you have is not suitable at all for a journal. Horrible, actually.

"test with fio (qd=32,128,256, bs=4k) show very good performance of SSD disk (10-30k write io)."

This is not realistic. Try:

fio --sync=1 --fsync=1 --direct=1 --iodepth=1 --ioengine=libaio ....
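(For reference, a complete invocation along those lines could look like the following; the device path, block size, and runtime are only placeholders:)

# WARNING: writing to a raw device destroys whatever is on it; use a scratch partition
fio --name=journal-sync-test --filename=/dev/sdX --rw=write --bs=4k \
    --sync=1 --fsync=1 --direct=1 --iodepth=1 --ioengine=libaio \
    --runtime=60 --time_based --group_reporting

This approximates the low-queue-depth, direct, flush-after-every-write pattern a journal depends on; consumer SSDs that look fast at high queue depths often drop to a few hundred IOPS here.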

Jan

On 23 Oct 2015, at 16:31, K K <nnex@xxxxxxx> wrote:

Hello.

Some strange things happened with my ceph installation after I moved the journal to an SSD disk.

OS: Ubuntu 15.04 with ceph version 0.94.2-0ubuntu0.15.04.1
servers: Dell R510 with PERC H700 Integrated, 512 MB RAID cache
my cluster has:
1 monitor node
2 OSD nodes with 6 OSD daemons on each server (3 TB SATA 7200 rpm HDDs, XFS)
network: 1 Gbit to the hypervisor and 1 Gbit among all ceph nodes
ceph.conf:
[global]
public network = 10.12.0.0/16
cluster network = 192.168.133.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true
filestore max sync interval = 10
filestore min sync interval = 1
filestore queue max ops = 500
#filestore queue max bytes = 16 MiB
#filestore queue committing max ops = 4096
#filestore queue committing max bytes = 16 MiB
filestore op threads = 20
filestore flusher = false
filestore journal parallel = false
filestore journal writeahead = true
#filestore fsync flushes journal data = false
journal dio = true
journal aio = true
osd pool default size = 2 # Write an object n times.
osd pool default min size = 1 # Allow writing n copy in a degraded state.
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[client]
rbd cache = true
rbd cache size = 1024000000
rbd cache max dirty = 128000000

[osd]
osd journal size = 5200
#osd journal = /dev/disk/by-partlabel/journal-$id
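(A note on the commented-out by-partlabel path: for it to resolve, the journal partitions need GPT partition names. A minimal sketch with sgdisk, assuming the SSD is /dev/sdh, six partitions of 5200 MB each, and the standard ceph-disk journal type GUID:)

for i in 0 1 2 3 4 5; do
    # partition i+1: 5200 MB, named journal-$i, typed as a Ceph journal
    sgdisk --new=$((i+1)):0:+5200M \
           --change-name=$((i+1)):journal-$i \
           --typecode=$((i+1)):45b0969e-9b03-4f30-b4c6-b4b80ceff106 \
           /dev/sdh
done
# the partitions then appear as /dev/disk/by-partlabel/journal-0 ... journal-5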

Without the SSD as a journal I get ~112 MB/sec throughput.

After I added a 64 GB ADATA SSD as the journal disk and created 6 raw partitions on it, I get very low bandwidth with rados bench:

Total time run: 302.350730
Total writes made: 1146
Write size: 4194304
Bandwidth (MB/sec): 15.161

Stddev Bandwidth: 11.5658
Max bandwidth (MB/sec): 52
Min bandwidth (MB/sec): 0
Average Latency: 4.21521
Stddev Latency: 1.25742
Max latency: 8.32535
Min latency: 0.277449
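(Output in this form typically comes from a rados write benchmark along these lines; the pool name is a placeholder and the 300-second duration matches the run above:)

rados bench -p <pool-name> 300 write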

iostat shows few write IOs (no more than 200 w/s):


Device:  rrqm/s  wrqm/s  r/s   w/s     rkB/s  wkB/s     avgrq-sz  avgqu-sz  await    r_await  w_await  svctm   %util
sdh      0.00    0.00    0.00  8.00    0.00   1024.00   256.00    129.48    2120.50  0.00     2120.50  124.50  99.60
sdh      0.00    0.00    0.00  124.00  0.00   14744.00  237.81    148.44    1723.81  0.00     1723.81  8.10    100.40
sdh      0.00    0.00    0.00  114.00  0.00   13508.00  236.98    144.27    1394.91  0.00     1394.91  8.77    100.00
sdh      0.00    0.00    0.00  122.00  0.00   13964.00  228.92    122.99    1439.74  0.00     1439.74  8.20    100.00
sdh      0.00    0.00    0.00  161.00  0.00   19640.00  243.98    154.98    1251.16  0.00     1251.16  6.21    100.00
sdh      0.00    0.00    0.00  11.00   0.00   1408.00   256.00    152.68    717.09   0.00     717.09   90.91   100.00
sdh      0.00    0.00    0.00  154.00  0.00   18696.00  242.81    142.09    1278.65  0.00     1278.65  6.49    100.00
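(Per-device statistics in this format come from iostat in extended mode, e.g. "iostat -x 1"; the w/s and %util columns are the ones to watch for the journal device.)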

Tests with fio (qd=32, 128, 256, bs=4k) show very good performance from the SSD (10-30k write IOPS).
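(For comparison, a high queue-depth random-write test of roughly this shape produces numbers like those; the device path and runtime are placeholders:)

fio --name=qd32-test --filename=/dev/sdX --rw=randwrite --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=32 \
    --runtime=60 --time_based --group_reporting
# with 32/128/256 requests in flight the drive can batch writes internally,
# which is why it reports 10-30k IOPS here yet manages only ~200 w/s under
# the low-queue-depth synchronous pattern shown in the iostat output above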

Can anybody help me? Has anyone faced a similar problem?

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
