ceph luminous performance - disks at 100%, low network utilization

Hi,

I have been struggling to get my test cluster to behave (from a performance perspective).
Dell R620, 64 GB RAM, 1 CPU, numa=off, PERC H710, RAID 0, enterprise 10K disks

No SSDs - just plain HDDs
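(For completeness, the H710 virtual-disk cache policy can be queried with MegaCli - the path below assumes the standard LSI install location:)

# report the cache policy of every virtual disk on every adapter
/opt/MegaRAID/MegaCli/MegaCli64 -LDGetProp -Cache -LALL -aALL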

Local tests (dd, hdparm) confirm the disks are capable of delivering ~200 MB/s.
fio with 15 jobs indicates ~100 MB/s.
ceph tell osd.* bench shows ~400 MB/s.
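(The dd run was along these lines - writing straight to the raw device with oflag=direct so the page cache doesn't inflate the numbers:)

# sequential write test, bypassing the page cache (1 GiB total)
dd if=/dev/zero of=/dev/sdc bs=4M count=256 oflag=direct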

rados bench with 1 thread delivers ~3 MB/s.
rados bench with 32 threads and 2 OSDs (one per server) barely touches 10 MB/s.
Adding a third server/OSD improves performance only slightly (~11 MB/s).
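(The rados bench invocations were along these lines - the pool name here is a placeholder:)

# 60-second write benchmark, 1 concurrent op
rados bench -p testpool 60 write -t 1
# same test with 32 concurrent ops
rados bench -p testpool 60 write -t 32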

atop shows disk usage at 100% for extended periods of time.
Network usage is very low.
Nothing else is "red".
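(Per-device latency and utilization can be confirmed with iostat from sysstat - watch w_await and %util:)

# extended per-device stats, refreshed every second
iostat -x sdc 1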

I have removed all custom TCP settings and left ceph.conf mostly at defaults.
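(To verify nothing is still overriding the defaults at runtime, the admin socket on the OSD node can diff the live config against the built-ins:)

# show only options that differ from the compiled-in defaults
ceph daemon osd.0 config diff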
 
What am I missing?

Many thanks

Steven


ceph osd tree
ID  CLASS WEIGHT  TYPE NAME          STATUS REWEIGHT PRI-AFF
 -1       1.63587 root default
 -3       0.54529     host osd01
  0   hdd 0.54529         osd.0          up  1.00000 1.00000
 -5       0.54529     host osd02
  1   hdd 0.54529         osd.1          up        0 1.00000
 -7             0     host osd04
-17       0.54529     host osd05
  2   hdd 0.54529         osd.2          up  1.00000 1.00000

[root@osd01 ~]# ceph tell osd.0 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 452125657
}

[root@osd01 ~]# ceph tell osd.2 bench
{
    "bytes_written": 1073741824,
    "blocksize": 4194304,
    "bytes_per_sec": 340553488
}


hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   5874 MB in  1.99 seconds = 2948.51 MB/sec
 Timing buffered disk reads: 596 MB in  3.01 seconds = 198.17 MB/sec

 fio --filename=/dev/sdc --direct=1 --sync=1 --rw=write --bs=4k --numjobs=15 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.2.8
Starting 15 processes
Jobs: 15 (f=15): [W(15)] [100.0% done] [0KB/104.9MB/0KB /s] [0/26.9K/0 iops] [eta 00m:00s]


fio --filename=/dev/sdc --direct=1 --sync=1 --rw=write --bs=4k --numjobs=5 --iodepth=1 --runtime=60 --time_based --group_reporting --name=journal-test
journal-test: (g=0): rw=write, bs=4K-4K/4K-4K/4K-4K, ioengine=sync, iodepth=1
...
fio-2.2.8
Starting 5 processes
Jobs: 5 (f=5): [W(5)] [100.0% done] [0KB/83004KB/0KB /s] [0/20.8K/0 iops] [eta 00m:00s]

 
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
