Re: Rados performance inconsistencies, lower than expected performance

-----Original message-----
> From:Alwin Antreich <a.antreich@xxxxxxxxxxx>
> Sent: Thursday 6th September 2018 16:27
> To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
> Cc: Menno Zonneveld <menno@xxxxxxxx>
> Subject: Re:  Rados performance inconsistencies, lower than expected performance
> 
> Hi,

Hi!

> On Thu, Sep 06, 2018 at 03:52:21PM +0200, Menno Zonneveld wrote:
> > ah yes, 3x replicated with min_size 2.
> > 
> > 
> > my ceph.conf is pretty bare, just in case it might be relevant
> > 
> > [global]
> > auth client required = cephx
> > auth cluster required = cephx
> > auth service required = cephx
> > 
> > cluster network = 172.25.42.0/24
> > 
> > fsid = f4971cca-e73c-46bc-bb05-4af61d419f6e
> > 
> > keyring = /etc/pve/priv/$cluster.$name.keyring
> > 
> > mon allow pool delete = true
> > mon osd allow primary affinity = true
> On our test cluster, we didn't set the primary affinity as all OSDs were
> SSDs of the same model. Did you change any settings other than this? What
> does your crush map look like?

I only used this option when testing with a mix of HDDs and SSDs (1 replica on SSD and 2 on HDD); right now the primary affinity for all disks is 1.
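
For reference, that test was set up along these lines (the osd IDs here are just examples, not the exact ones I used):

# during the HDD/SSD mixing test: make the SSD copy serve reads by
# lowering primary affinity on the HDD OSDs (example IDs only)
ceph osd primary-affinity osd.9 0
ceph osd primary-affinity osd.10 0
# current situation: affinity back to 1 for every OSD
ceph osd primary-affinity osd.9 1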

The weight of one OSD in each server is lower because I partitioned that drive so I could test with an SSD journal for the HDDs, but this isn't active at the moment.

If I understand correctly, setting the weights like this should be fine. I also tested with weight 1 for all OSDs and still get the same performance ('slow' when empty, fast when full).
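
For completeness, the reweight test and the benchmark runs looked roughly like this (the pool name 'bench' is just a placeholder for the pool I tested against):

# temporarily set every OSD to crush weight 1 for the comparison run
for i in $(seq 0 8); do ceph osd crush reweight osd.$i 1.0; done

# write benchmark, keeping the objects so the read tests have data
rados bench -p bench 60 write -b 4M -t 16 --no-cleanup
# sequential and random read benchmarks against the same objects
rados bench -p bench 60 seq -t 16
rados bench -p bench 60 rand -t 16
# remove the benchmark objects afterwards
rados -p bench cleanup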

Current ceph osd tree:

ID  CLASS WEIGHT  TYPE NAME                STATUS REWEIGHT PRI-AFF 
 -1       3.71997 root ssd                                         
 -5       1.23999     host ceph01-test                             
  2   ssd 0.36600         osd.2                up  1.00000 1.00000 
  3   ssd 0.43700         osd.3                up  1.00000 1.00000 
  6   ssd 0.43700         osd.6                up  1.00000 1.00000 
 -7       1.23999     host ceph02-test                             
  4   ssd 0.36600         osd.4                up  1.00000 1.00000 
  5   ssd 0.43700         osd.5                up  1.00000 1.00000 
  7   ssd 0.43700         osd.7                up  1.00000 1.00000 
 -3       1.23999     host ceph03-test                             
  0   ssd 0.36600         osd.0                up  1.00000 1.00000 
  1   ssd 0.43700         osd.1                up  1.00000 1.00000 
  8   ssd 0.43700         osd.8                up  1.00000 1.00000 

My current crush map looks like this:

# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable chooseleaf_vary_r 1
tunable chooseleaf_stable 1
tunable straw_calc_version 1
tunable allowed_bucket_algs 54

# devices
device 0 osd.0 class ssd
device 1 osd.1 class ssd
device 2 osd.2 class ssd
device 3 osd.3 class ssd
device 4 osd.4 class ssd
device 5 osd.5 class ssd
device 6 osd.6 class ssd
device 7 osd.7 class ssd
device 8 osd.8 class ssd

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
host ceph03-test {
 id -3 # do not change unnecessarily
 id -4 class ssd # do not change unnecessarily
 # weight 1.240
 alg straw2
 hash 0 # rjenkins1
 item osd.1 weight 0.437
 item osd.0 weight 0.366
 item osd.8 weight 0.437
}
host ceph01-test {
 id -5 # do not change unnecessarily
 id -6 class ssd # do not change unnecessarily
 # weight 1.240
 alg straw2
 hash 0 # rjenkins1
 item osd.3 weight 0.437
 item osd.2 weight 0.366
 item osd.6 weight 0.437
}
host ceph02-test {
 id -7 # do not change unnecessarily
 id -8 class ssd # do not change unnecessarily
 # weight 1.240
 alg straw2
 hash 0 # rjenkins1
 item osd.5 weight 0.437
 item osd.4 weight 0.366
 item osd.7 weight 0.437
}
root ssd {
 id -1 # do not change unnecessarily
 id -2 class ssd # do not change unnecessarily
 # weight 3.720
 alg straw2
 hash 0 # rjenkins1
 item ceph03-test weight 1.240
 item ceph01-test weight 1.240
 item ceph02-test weight 1.240
}

# rules
rule ssd {
 id 0
 type replicated
 min_size 1
 max_size 10
 step take ssd
 step chooseleaf firstn 0 type host
 step emit
}

# end crush map
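
(In case someone wants to reproduce or tweak it, the map above was dumped roughly like this; the file names are arbitrary:)

# dump and decompile the current crush map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# after editing, recompile and inject it again
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new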

> > 
> > osd journal size = 5120
> > osd pool default min size = 2
> > osd pool default size = 3
> > 
> >

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com