set pg_num on pools with different sizes

Hi,

we have a Ceph cluster with 3 nodes and 20 OSDs (6, 7, and 7 drives per
node, 2 TB HDDs each).

In the long term we want to use 7-9 pools, and for 20 OSDs and 8 pools I
calculated that the ideal pg_num would be 250 per pool (20 * 100 / 8).

In that case each OSD would normally store 100 PGs, which is the recommended number.
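
For reference, this is the back-of-the-envelope math I used (I am not
sure whether the replica size should also be factored in here; the
error below suggests every copy counts against the per-OSD limit):

    # target of ~100 PGs per OSD, spread over 8 pools
    echo $(( 20 * 100 / 8 ))        # 250 PGs per pool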

I have a few problems:

1. I already have 1736 PGs, and when I try to create a new pool with 270
PGs, I get this error:

Error ERANGE:  pg_num 270 size 2 would mean 4012 total pgs, which
exceeds max 4000 (mon_max_pg_per_osd 200 * num_in_osds 20)
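
If I read the message right, this is how the numbers add up (1736 is
what we already have, and every replica copy counts, since all of our
pools use size 2 -- please correct me if I misunderstand this):

    echo $(( (1736 + 270) * 2 ))    # 4012 PG copies after the new pool
    echo $(( 200 * 20 ))            # 4000 = mon_max_pg_per_osd * num_in_osds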


2. We now have 8 pools, but only one of them stores a huge amount of
data, and because of this I get a warning:

health: HEALTH_WARN
            1 pools have many more objects per pg than average
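
If I count objects per PG from the pool stats below, I assume the
warning is about cephfs-data, which holds most of the objects in only
100 PGs (I believe the relevant threshold is mon_pg_warn_max_object_skew,
default 10x the average, but I am not certain):

    echo $(( 560000 / 1736 ))       # ~322 objects/PG cluster average (560k objects, 1736 PGs)
    echo $(( 422108 / 100 ))        # ~4221 objects/PG for cephfs-data (pg_num 100)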

But in the past I remember getting a warning that the pg_num of a pool
was less/more than the average pg_num in the cluster.


In this case, how can I set the optimal pg_num for my pools?
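
I assume the mechanism itself is just the usual per-pool commands, e.g.
for the data-heavy pool (and only upwards, since as far as I know
pg_num cannot be decreased on our version), but I do not know what the
right target values are:

    ceph osd pool set cephfs-data pg_num 256     # example value only
    ceph osd pool set cephfs-data pgp_num 256    # keep pgp_num in sync with pg_num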

Some debug data:

OSD number: 20

  data:
    pools:   8 pools, 1736 pgs
    objects: 560k objects, 1141 GB
    usage:   2331 GB used, 30053 GB / 32384 GB avail
    pgs:     1736 active+clean
           
           
POOLS:
    NAME                ID     USED       %USED     MAX AVAIL     OBJECTS
    kvmpool             5      34094M      0.24        13833G        8573
    rbd                 6        155G      1.11        13833G       94056
    lxdhv04             15     29589M      0.21        13833G       12805
    lxdhv01             16     14480M      0.10        13833G        9732
    lxdhv02             17     14840M      0.10        13833G        7931
    lxdhv03             18     18735M      0.13        13833G        7567
    cephfs-metadata     22     40433k         0        13833G       11336
    cephfs-data         23       876G      5.96        13833G      422108

   
pool 5 'kvmpool' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 1909 lfor 0/1906 owner
18446744073709551615 flags hashpspool stripe_width 0 application rbd
pool 6 'rbd' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 8422 lfor 0/2375 owner
18446744073709551615 flags hashpspool stripe_width 0 application rbd
pool 15 'lxdhv04' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 3053 flags hashpspool
stripe_width 0 application rbd
pool 16 'lxdhv01' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 3054 flags hashpspool
stripe_width 0 application rbd
pool 17 'lxdhv02' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 8409 flags hashpspool
stripe_width 0 application rbd
pool 18 'lxdhv03' replicated size 2 min_size 1 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 3066 flags hashpspool
stripe_width 0 application rbd
pool 22 'cephfs-metadata' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 100 pgp_num 100 last_change 8405 flags
hashpspool stripe_width 0 application cephfs
pool 23 'cephfs-data' replicated size 2 min_size 1 crush_rule 0
object_hash rjenkins pg_num 100 pgp_num 100 last_change 8405 flags
hashpspool stripe_width 0 application cephfs


-- 
Ákos
