Re: Understanding filesystem size

> 
> First of all, thank you so much again for the time you've spent trying to help me; it's much appreciated.

Prego. In my book I assert that the community is a core Ceph component, and I tell people all the time that it’s one of many reasons to choose Ceph over alternatives.  Like Red Green said, we’re all in this together.  

> 
> Then:
> - here's the dump of the CRUSH rules:

Ok, that looks fine.  If some of your rules specified a device class and others didn't, that would break the PG autoscaler.
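If you ever want a quick sanity check, a device class shows up as a ~class suffix on a rule's take step, so a grep over the dump (adjust to taste) makes mixed rules easy to spot:

  ceph osd crush rule dump | grep -E '"rule_name"|"item_name"'

A class-pinned rule shows an item_name like "default~hdd", while a class-agnostic one shows plain "default".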

> - Running `ceph osd metadata | grep min_alloc_size` shows:
> 
>   "bluestore_min_alloc_size": "4096"
> 
> for all the OSDs

Groovy.  That this shows up is an indication that your Ceph is recent; I think that went out in Reef.  Maybe Quincy.
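If you'd rather confirm than guess, `ceph versions` shows the running release per daemon type, and you can see the default that would apply to newly created HDD OSDs:

  ceph versions
  ceph config get osd bluestore_min_alloc_size_hdd

Keep in mind min_alloc_size is baked in when an OSD is created, so the per-OSD metadata value you already checked is what matters for existing OSDs.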

> - About PG count: currently I have two OSDs destroyed waiting for disk upgrade; anyway this doesn't change the overall picture. That said, here's the info you requested:
> 
> # ceph osd df
> ID   CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META    AVAIL    %USE   VAR   PGS  STATUS
>  4    hdd  1.81940   1.00000  1.8 TiB  600 GiB  597 GiB   53 MiB  3.4 GiB  1.2 TiB  32.22  0.99   24         up

Oh yeah, 24 PGs on an OSD is way too low; you generally want on the order of 100-200 PG replicas per OSD.
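Back-of-envelope from your osd df totals and the pool dump below, assuming all OSDs are roughly the 1.82 TiB size shown (your output only shows one of them, so that's an assumption on my part):

  PG replicas = 1x3 (.mgr) + 16x3 (metadata) + 512x8 (EC data) = 4147
  OSD count  ~ 320 TiB / 1.82 TiB ~ 176
  4147 / 176 ~ 24 per OSD, which is exactly the PGS value in your osd df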

> 11   
>                        TOTAL  320 TiB  104 TiB  104 TiB  2.9 GiB  378 GiB  216 TiB  32.51
> MIN/MAX VAR: 0.82/1.12  STDDEV: 2.53
> 
> # ceph osd dump | grep pool
> pool 1 '.mgr' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 191543 flags hashpspool stripe_width 0 pg_num_max 32 pg_num_min 1 application mgr read_balance_score 150.00
> pool 2 'wizard_metadata' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 16 pgp_num 16 autoscale_mode on last_change 191543 lfor 0/8092/8090 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs read_balance_score 7.86
> pool 3 'wizard_data' erasure profile k6_m2_host size 8 min_size 7 crush_rule 1 object_hash rjenkins pg_num 512 pgp_num 512 autoscale_mode off last_change 191543 lfor 0/0/41881 flags hashpspool,ec_overwrites stripe_width 24576 application cephfs

Ok.  You have the autoscaler on for some pools but off for the big EC pool, so it won't adjust pg_num there.

I suggest setting mon_target_pg_per_osd to 200, then let's see where we are when the dust settles.
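Concretely, something like this, assuming you also want the autoscaler to manage the EC pool (pool name taken from your dump; double-check before running):

  ceph config set global mon_target_pg_per_osd 200
  ceph osd pool set wizard_data pg_autoscale_mode on
  ceph osd pool autoscale-status

The last command shows the pg_num the autoscaler now wants for each pool, so you can see what it intends to do before the data starts moving.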



> 
> 
> - All the REWEIGHT entries of `ceph osd tree` are currently set to 1 (except for the two destroyed OSDs which are at 0)

Groovy.  


> 
> Thanks again,
> 
> Nicola
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



