Can you provide more detail about the infrastructure backing this environment? What hard drives, SSDs, and processors are you using? Also, what is providing the networking?
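(For reference, that kind of detail can usually be pulled with standard tools; a rough sketch, with the device and interface names below as placeholders to adjust per host:)

lsblk -d -o NAME,MODEL,ROTA,SIZE
# exact drive model/firmware; /dev/sda is a placeholder
smartctl -i /dev/sda
# processor details
lscpu
# NIC link speed; eth0 is a placeholder
ethtool eth0 | grep -i speed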
I'm seeing 4k blocksize tests here. Latency is going to destroy you.
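One way to see how much of the gap is per-op latency rather than raw throughput is to rerun the same RBD job with a larger block size and compare. A minimal sketch, reusing the pool/image names from the original write.fio (the job name, block size, and runtime here are illustrative assumptions, not a verified config):

; sketch: same pool/image as write.fio, larger blocks
[latency-check]
ioengine=rbd
clientname=admin
pool=scbench
rbdname=image01
rw=randwrite
; 4m blocks instead of 4k, to see whether bandwidth recovers once per-op latency matters less
bs=4m
iodepth=32
runtime=60
time_based

If the 4m run gets close to the raw-disk bandwidth while the 4k run stays far below it, the bottleneck is round-trip latency per operation rather than throughput.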
On Jan 3, 2018 8:11 AM, "Steven Vacaroaia" <stef97@xxxxxxxxx> wrote:
Hi,

I am doing a PoC with 3 DELL R620 servers, 12 OSDs, and 3 SSD drives (one on each server), using BlueStore.
I configured the OSDs using the following (/dev/sda is my SSD drive):

  ceph-disk prepare --zap-disk --cluster ceph --bluestore /dev/sde --block.wal /dev/sda --block.db /dev/sda

Unfortunately, both fio and rados bench show much worse performance for the pools than for the individual disks.

Example:

DISKS

  fio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4k --numjobs=14 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test

SSD drive
  Jobs: 14 (f=14): [W(14)] [100.0% done] [0KB/465.2MB/0KB /s] [0/119K/0 iops] [eta 00m:00s]

HD drive
  Jobs: 14 (f=14): [W(14)] [100.0% done] [0KB/179.2MB/0KB /s] [0/45.9K/0 iops] [eta 00m:00s]

POOL

  fio write.fio
  Jobs: 1 (f=0): [w(1)] [100.0% done] [0KB/51428KB/0KB /s] [0/12.9K/0 iops]

  cat write.fio
  [write-4M]
  description="write test with 4k block"
  ioengine=rbd
  clientname=admin
  pool=scbench
  rbdname=image01
  iodepth=32
  runtime=120
  rw=randwrite
  bs=4k

  rados bench -p scbench 12 write
  Max bandwidth (MB/sec): 224
  Min bandwidth (MB/sec): 0
  Average IOPS:           26
  Stddev IOPS:            24
  Max IOPS:               56
  Min IOPS:               0
  Average Latency(s):     0.59819
  Stddev Latency(s):      1.64017
  Max latency(s):         10.8335
  Min latency(s):         0.00475139

I must be missing something - any help/suggestions will be greatly appreciated.

Here is some specific info:

ceph -s
  cluster:
    id:     91118dde-f231-4e54-a5f0-a1037f3d5142
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum mon01
    mgr: mon01(active)
    osd: 12 osds: 12 up, 12 in
  data:
    pools:   4 pools, 484 pgs
    objects: 70082 objects, 273 GB
    usage:   570 GB used, 6138 GB / 6708 GB avail
    pgs:     484 active+clean
  io:
    client: 2558 B/s rd, 2 op/s rd, 0 op/s wr

ceph osd pool ls detail
pool 1 'test-replicated' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 157 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3]
pool 2 'test-erasure' erasure size 3 min_size 3 crush_rule 1 object_hash rjenkins pg_num 128 pgp_num 128 last_change 334 flags hashpspool stripe_width 8192 application rbd
        removed_snaps [1~5]
pool 3 'rbd' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 last_change 200 flags hashpspool stripe_width 0 application rbd
        removed_snaps [1~3]
pool 4 'scbench' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 100 pgp_num 100 last_change 330 flags hashpspool stripe_width 0
        removed_snaps [1~3]

[cephuser@ceph ceph-config]$ ceph osd df tree
ID CLASS WEIGHT  REWEIGHT SIZE   USE    AVAIL  %USE  VAR  PGS TYPE NAME
-1       6.55128        - 2237G    198G 2038G      0    0   - root default
-7             0        -     0       0     0      0    0   -     host ods03
-3       2.18475        - 2237G    181G 2055G   8.12 0.96   -     host osd01
 3   hdd 0.54619  1.00000  559G  53890M  506G   9.41 1.11  90         osd.3
 4   hdd 0.54619  1.00000  559G  30567M  529G   5.34 0.63  89         osd.4
 5   hdd 0.54619  1.00000  559G  59385M  501G  10.37 1.22  93         osd.5
 6   hdd 0.54619  1.00000  559G  42156M  518G   7.36 0.87  93         osd.6
-5       2.18178        - 2234G    189G 2044G   8.50 1.00   -     host osd02
 0   hdd 0.54520  1.00000  558G  32460M  526G   5.68 0.67  90         osd.0
 1   hdd 0.54520  1.00000  558G  54578M  504G   9.55 1.12  89         osd.1
 2   hdd 0.54520  1.00000  558G  47761M  511G   8.35 0.98  93         osd.2
 7   hdd 0.54619  1.00000  559G  59584M  501G  10.40 1.22  92         osd.7
-9       2.18475        - 2237G    198G 2038G   8.88 1.04   -     host osd03
 8   hdd 0.54619  1.00000  559G  52462M  508G   9.16 1.08  99         osd.8
10   hdd 0.54619  1.00000  559G  35284M  524G   6.16 0.73  88         osd.10
11   hdd 0.54619  1.00000  559G  71739M  489G  12.53 1.47  87         osd.11
12   hdd 0.54619  1.00000  559G  43832M  516G   7.65 0.90  93         osd.12
                    TOTAL 6708G    570G 6138G   8.50
MIN/MAX VAR: 0.63/1.47  STDDEV: 2.06
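For what it's worth, the rados bench run above used its defaults (4 MB objects, 16 concurrent ops) for only 12 seconds, so it is not directly comparable to the 4k fio jobs. A run closer to the fio workload might look like the sketch below (pool name taken from the post; the duration, block size, and thread count are illustrative assumptions):

  # sketch: 4 KB writes against the scbench pool, 60 s, 32 in-flight ops
  rados bench -p scbench 60 write -b 4096 -t 32 --no-cleanup
  # remove the benchmark objects afterwards
  rados -p scbench cleanup

It can also be worth confirming that the block.db/block.wal links on each OSD host really point at SSD partitions, for example with ceph-disk list.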
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com