Re: Where has my capacity gone?

Sorry for the late reply :( and thanks for the tips.

This is a fresh cluster, and I didn't expect data distribution to be a problem. Is this normal?

Below is the ceph osd df output. The related pool is HDD-only (prod.rgw.buckets.data). I can see there is some variance between OSDs, but I couldn't work out the reason. Is it because of the PG numbers, which I chose with the help of the pg-calculator <https://ceph.io/pgcalc/>? Or is this expected Ceph behaviour?
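For reference, this is roughly how I sized pg_num following the calculator. The ~100 PGs-per-OSD target and the assumption that nearly all data lands in the EC pool are mine, so take the numbers below as an illustration rather than my exact settings:

   # 190 OSDs, EC 8+2 (size 10), target ~100 PGs per OSD, ~100% of data in this pool:
   #   (100 * 190 * 1.0) / 10 = 1900  ->  rounded up to the next power of two = 2048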

#ceph osd df
https://pastebin.ubuntu.com/p/ZmQZsGYpr7/ <https://pastebin.ubuntu.com/p/7C9zpXYntR/>
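To put a number on the spread, I compared the fullest and emptiest OSDs with the one-liner below. This assumes jq is available and that the JSON field names are as I remember them from Nautilus, so treat it as a sketch:

   # difference in %USE between the fullest and the emptiest OSD
   ceph osd df -f json | jq '([.nodes[].utilization] | max) - ([.nodes[].utilization] | min)'

I was also wondering whether simply enabling the upmap balancer would even this out:

   ceph balancer mode upmap
   ceph balancer on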

I am also sharing related cluster information. Any suggestion would be appreciated.
#ceph df
https://pastebin.ubuntu.com/p/sXpf99zhnV/

#ceph df detail
https://pastebin.ubuntu.com/p/dwvwBnnBmv/

#ceph osd pool ls detail
https://pastebin.ubuntu.com/p/c2KQD5CGMV/

#crush rules
https://pastebin.ubuntu.com/p/X6WsZhV3Zz/
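If it helps, my understanding is that the mapping can also be tested offline against the CRUSH map with something like the commands below (rule id 1 is only a guess on my part; num-rep 10 matches the 8+2 EC profile):

   ceph osd getcrushmap -o crushmap.bin
   crushtool -i crushmap.bin --test --rule 1 --num-rep 10 --show-utilization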

Thanks.


> On 26 Jan 2021, at 11:18, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
> 
> ceph osd df | sort -nk8
> 
>> On Jan 25, 2021, at 11:22 PM, George Yil <georgeyil75@xxxxxxxxx> wrote:
>> 
>> Hi,
>> 
>> I have a ceph nautilus (14.2.9) cluster with 10 nodes. Each node has
>> 19x16TB disks attached.
>> 
>> I created the radosgw pools. The secondaryzone.rgw.buckets.data pool is
>> configured as EC 8+2 (jerasure).
>> ceph df shows 2.1PiB MAX AVAIL space.
>> 
>> Then I configured radosgw as a secondary zone, and 100TiB of S3 data was
>> replicated.
>> 
>> But weirdly enough, ceph df now shows 1.8PiB MAX AVAIL for the same pool,
>> even though only 100TiB of data has been written (ceph df confirms this as
>> well). I cannot figure out where 200TiB of capacity has gone.
>> 
>> Would someone please tell me what I am missing?
>> 
>> Thanks.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



