Re: Ceph OSD reported Slow operations

> Just a continuation of this mail: could you help me understand the ceph
> df output? PFA the screenshot with this mail.

No idea what PFA means, but attachments usually don’t make it through on mailing lists.  Paste text instead.
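For example, the plain-text output of

    ceph df detail

pastes fine and keeps the column layout intact.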

> 1. Raw storage is 180 TB

The sum of OSD total capacities.
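You can cross-check it against the per-OSD view:

    ceph osd df tree

The SIZE column there should add up to the same 180 TB.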

> 2. Stored Value is 37 TB

Clients wrote 37 TB of data.  That is the logical amount, before replication.

> 3. Used Value is 112 TB

With the default replicated size of 3, Ceph keeps three copies of every object for redundancy.  3 × 37 TB = 111 TB, which together with metadata and allocation overhead accounts for the 112 TB of raw storage used.
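You can confirm the replication factor of each pool with:

    ceph osd pool ls detail

Look for "replicated size 3" in each pool's line.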

> 4. Available Value is 67 TB

Raw capacity minus raw used: 180 - 112 ≈ 67 TB, give or take rounding.

> 5. Pool Max Available Value is 16 TB

Hard to say without the actual output.  Note that MAX AVAIL is per pool and already accounts for the replication factor and for how full the most-utilized OSDs are, so 16 TB is plausible next to 67 TB of raw AVAIL: 67 / 3 ≈ 22 TB is the ceiling, and OSD imbalance plus the full ratio bring it down from there.  Please paste the text of ceph df rather than a screenshot.
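In the meantime, check how evenly your OSDs are filled and what the full ratios are set to:

    ceph osd df
    ceph osd dump | grep ratio

A wide %USE spread (see the VAR column) in the first, or conservative full ratios in the second, will pull MAX AVAIL down further.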