Re: Maximum CephFS Filesystem Size


I have the same problem: 30 TB available on Ceph, but my SMB share shows only 5 TB. On IRC I was told to raise the PG count and run the balancer. Raising the PG count helped a little, and I'm waiting for Ceph to recover from the PG resizing before running the balancer.
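For anyone following along, this is roughly the sequence I was pointed at. It's a sketch, not a recipe: the pool name cephfs_data and the pg_num value 256 are examples, substitute your own pool and a PG count appropriate for your OSD count.

```shell
# Compare raw cluster usage with per-pool MAX AVAIL; MAX AVAIL shrinks
# when data is unevenly distributed, because it is derived from the
# fullest OSD, not from total free space.
ceph df detail

# Check the current PG count of the data pool (cephfs_data is an example name)
ceph osd pool get cephfs_data pg_num

# Raise pg_num; on Nautilus, pgp_num is adjusted to follow automatically
ceph osd pool set cephfs_data pg_num 256

# After recovery settles, enable the balancer (upmap mode needs
# all clients at Luminous or newer)
ceph balancer mode upmap
ceph balancer on
ceph balancer status
```

The reason this helps with the "5 TB available" symptom is that the free space reported to clients is bounded by the fullest OSD; evening out the distribution raises that bound.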


--
Salsa

Sent with ProtonMail Secure Email.

‐‐‐‐‐‐‐ Original Message ‐‐‐‐‐‐‐
On Wednesday, April 1, 2020 12:43 PM, <DHilsbos@xxxxxxxxxxxxxx> wrote:

> All;
>
> We set up a CephFS on a Nautilus (14.2.8) cluster in February to hold backups. We finally have all the backups running, and are just waiting for the system to reach steady state.
>
> I'm concerned about the usage numbers: the Dashboard's Capacity panel shows the cluster as 37% used, while under Filesystems --> <FSName> --> Pools --> <data> --> Usage it shows 71% used.
>
> Does CephFS place a limit on the size of a CephFS? Is there a limit to how large a pool can be in Ceph? Where is the sizing discrepancy coming from, and do I need to address it?
>
> Thank you,
>
> Dominic L. Hilsbos, MBA
> Director - Information Technology
> Perform Air International Inc.
> DHilsbos@xxxxxxxxxxxxxx
> www.PerformAir.com
>
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx




