Re: Maximum CephFS Filesystem Size

All;

Another interesting piece of information: the host that mounts the CephFS shows it as 45% full.
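
In case it's useful, this is roughly how I'm comparing the numbers (the mount point below is just an example; substitute your own):

    # usage as reported by the CephFS client on the mounted filesystem
    df -h /mnt/backups

    # cluster-wide raw usage and per-pool usage
    ceph df detail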

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com



-----Original Message-----
From: DHilsbos@xxxxxxxxxxxxxx [mailto:DHilsbos@xxxxxxxxxxxxxx] 
Sent: Wednesday, April 01, 2020 8:43 AM
To: ceph-users@xxxxxxx
Subject:  Maximum CephFS Filesystem Size

All;

We set up a CephFS on a Nautilus (14.2.8) cluster in February to hold backups.  We finally have all the backups running, and are just waiting for the system to reach steady-state.

I'm concerned about the usage numbers: the Dashboard's Capacity gauge shows the cluster as 37% used, while under Filesystems --> <FSName> --> Pools --> <data> --> Usage, it shows 71% used.
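
For anyone who wants to see the same figures from the command line, I believe something like the following shows both views (I'm assuming the Dashboard Capacity gauge reflects raw usage):

    # cluster-wide RAW USED / %RAW USED, which I assume is what the Dashboard Capacity gauge shows
    ceph df

    # per-pool STORED / USED / %USED / MAX AVAIL, which appears to be what the Filesystems page reports
    ceph df detail
    ceph fs status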

Does Ceph place a limit on the size of a CephFS filesystem?  Is there a limit to how large a single pool can be?  Where is the sizing discrepancy coming from, and do I need to address it?
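
To rule out an explicit cap on our side, I'm planning to check for pool quotas and CephFS directory quotas along these lines (the pool name cephfs_data and the mount path are examples, not our actual names):

    # any quota set on the CephFS data pool?
    ceph osd pool get-quota cephfs_data

    # replication size (number of copies) of the data pool
    ceph osd pool get cephfs_data size

    # any CephFS directory quota set via extended attributes?
    getfattr -n ceph.quota.max_bytes /mnt/backups

    # cluster nearfull/backfillfull/full ratios
    ceph osd dump | grep ratio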

Thank you,

Dominic L. Hilsbos, MBA 
Director - Information Technology 
Perform Air International, Inc.
DHilsbos@xxxxxxxxxxxxxx 
www.PerformAir.com

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


