Could you share disk models and other hardware details (CPU, network config)? You say you're using Luminous, but then mention the journal being on the same device. I'm assuming you mean the BlueStore OSDs are configured without a separate WAL or DB partition? Any further specifics you can give will be helpful.
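If it helps, one quick way to confirm whether an OSD has a dedicated DB/WAL device is to check its metadata. A minimal sketch, assuming BlueStore OSDs on Luminous (osd.0 is just an example id; field names to the best of my knowledge):

    # list device layout as the OSD reported it at boot
    ceph osd metadata 0 | grep -E 'bluefs_dedicated_db|bluefs_dedicated_wal|bluestore_bdev_partition_path'

If bluefs_dedicated_db and bluefs_dedicated_wal both come back "0", everything (data, DB, WAL) is colocated on the one block device.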
On Mon, Jan 22, 2018 at 11:20 AM Steven Vacaroaia <stef97@xxxxxxxxx> wrote:
Hi,

I'd appreciate it if you could provide some guidance / suggestions regarding performance issues on a test cluster (3 x DELL R620; 1 enterprise SSD, 3 x 600 GB enterprise HDD, 8 cores, 64 GB RAM).

I created 2 pools (replication factor 2), one with only SSDs and the other with only HDDs (journal on the same disk for both). The performance is quite similar, although I was expecting the SSD pool to be at least 5 times faster. No issues noticed using atop.

What should I check / tune?

Many thanks
Steven

HDD based pool (journal on the same disk):

ceph osd pool get scbench256 all
size: 2
min_size: 1
crash_replay_interval: 0
pg_num: 256
pgp_num: 256
crush_rule: replicated_rule
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
auid: 0
fast_read: 0

rbd bench --io-type write image1 --pool=scbench256
bench type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC     BYTES/SEC
    1     46816  46836.46  191842139.78
    2     90658  45339.11  185709011.80
    3    133671  44540.80  182439126.08
    4    177341  44340.36  181618100.14
    5    217300  43464.04  178028704.54
    6    259595  42555.85  174308767.05
elapsed: 6  ops: 262144  ops/sec: 42694.50  bytes/sec: 174876688.23

fio /home/cephuser/write_256.fio
write-4M: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
fio-2.2.8
Starting 1 process
rbd engine: RBD version: 1.12.0
Jobs: 1 (f=1): [r(1)] [100.0% done] [66284KB/0KB/0KB /s] [16.6K/0/0 iops] [eta 00m:00s]

fio /home/cephuser/write_256.fio
write-4M: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
fio-2.2.8
Starting 1 process
rbd engine: RBD version: 1.12.0
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/14464KB/0KB /s] [0/3616/0 iops] [eta 00m:00s]

SSD based pool:

ceph osd pool get ssdpool all
size: 2
min_size: 1
crash_replay_interval: 0
pg_num: 128
pgp_num: 128
crush_rule: ssdpool
hashpspool: true
nodelete: false
nopgchange: false
nosizechange: false
write_fadvise_dontneed: false
noscrub: false
nodeep-scrub: false
use_gmt_hitset: 1
auid: 0
fast_read: 0

rbd -p ssdpool create --size 52100 image2
rbd bench --io-type write image2 --pool=ssdpool
bench type write io_size 4096 io_threads 16 bytes 1073741824 pattern sequential
  SEC       OPS   OPS/SEC     BYTES/SEC
    1     42412  41867.57  171489557.93
    2     78343  39180.86  160484805.88
    3    118082  39076.48  160057256.16
    4    155164  38683.98  158449572.38
    5    192825  38307.59  156907885.84
    6    230701  37716.95  154488608.16
elapsed: 7  ops: 262144  ops/sec: 36862.89  bytes/sec: 150990387.29

[root@osd01 ~]# fio /home/cephuser/write_256.fio
write-4M: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
fio-2.2.8
Starting 1 process
rbd engine: RBD version: 1.12.0
Jobs: 1 (f=1): [W(1)] [100.0% done] [0KB/20224KB/0KB /s] [0/5056/0 iops] [eta 00m:00s]

fio /home/cephuser/write_256.fio
write-4M: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=rbd, iodepth=32
fio-2.2.8
Starting 1 process
rbd engine: RBD version: 1.12.0
Jobs: 1 (f=1): [r(1)] [100.0% done] [76096KB/0KB/0KB /s] [19.3K/0/0 iops] [eta 00m:00s]
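For reference, the contents of write_256.fio are not shown above. Judging from the reported parameters (rbd engine, 4K blocks, iodepth 32, job name write-4M), the job file is presumably something like the following sketch; the client, pool, and image names here are assumptions, and rw was evidently switched between write and randread across runs:

    [global]
    ioengine=rbd        ; fio's librbd engine, as shown in the output
    clientname=admin    ; assumed cephx user
    pool=scbench256     ; assumed; swapped for ssdpool on the SSD run
    rbdname=image1      ; assumed image name
    bs=4k
    iodepth=32
    direct=1

    [write-4M]
    rw=write            ; presumably changed to randread for the read run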
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com