Re: a question about ceph raw space usage

Hi Nitin,
On Tue, Nov 07, 2017 at 12:03:15AM +0000, Kamble, Nitin A wrote:
> Dear Cephers,
>
> As seen below, I notice that 12.7% of raw storage is consumed with zero pools in the system. These are bluestore OSDs. 
> Is this expected or an anomaly?
The DB and WAL already consume space on each BlueStore OSD; if you add them together you should arrive at the ~82 GB shown as used per OSD.
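
If you want to confirm where that space sits, one rough way (a sketch, not a definitive recipe: it assumes you run it on the host that carries the OSD, that the admin socket is reachable, and that the counter names look the same in your 12.2.1 build; osd.0 is just an example id) is to look at the BlueFS counters in the OSD's perf dump:

    # Run on the host where osd.0 lives; needs access to the OSD admin socket.
    # The "bluefs" section reports db_used_bytes and wal_used_bytes, which
    # should roughly account for the per-OSD usage you see in `ceph osd df`.
    ceph daemon osd.0 perf dump | grep -A 12 '"bluefs"'

Summed over your 12 OSDs, that per-OSD allocation should add up to roughly the 972 GB of raw usage that `ceph df` reports with zero pools.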

>
> Thanks,
> Nitin
>
> maruti1:~ # ceph -v
> ceph version 12.2.1 (3e7492b9ada8bdc9a5cd0feafd42fbca27f9c38e) luminous (stable)
> maruti1:~ # ceph -s
>   cluster:
>     id:     37e0fe9e-6a19-4182-8350-e377d45291ce
>     health: HEALTH_OK
>
>   services:
>     mon: 1 daemons, quorum maruti1
>     mgr: maruti1(active)
>     osd: 12 osds: 12 up, 12 in
>
>   data:
>     pools:   0 pools, 0 pgs
>     objects: 0 objects, 0 bytes
>     usage:   972 GB used, 6681 GB / 7653 GB avail
>     pgs:
>
> maruti1:~ # ceph df
> GLOBAL:
>     SIZE      AVAIL     RAW USED     %RAW USED
>     7653G     6681G         972G         12.70
> POOLS:
>     NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
> maruti1:~ # ceph osd df
> ID CLASS WEIGHT  REWEIGHT SIZE  USE    AVAIL %USE  VAR  PGS
> 0   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 6   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 9   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 1   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 5   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 11   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 3   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 7   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 10   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 2   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 4   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
> 8   hdd 0.62279  1.00000  637G 82955M  556G 12.70 1.00   0
>                     TOTAL 7653G   972G 6681G 12.70
> MIN/MAX VAR: 1.00/1.00  STDDEV: 0
>
--
Cheers,
Alwin

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com