Re: total_used statistic incorrect

I thought maybe the cleanup process hadn't occurred yet, but I've been in this state for over a week now.

I’m just about to go live with this system (in the next couple of weeks), so I'm trying to start out as clean as possible.

If anyone has any insights I'd appreciate it. 

There should be no data in the system yet... unless I'm missing something.
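A quick sanity check (a sketch, using the `ceph df` figures quoted further down in this thread, not fresh output): if RAW USED tracks SIZE minus AVAIL, the space is genuinely allocated on the OSDs rather than a reporting glitch. `ceph osd df` then shows per-OSD where it lives.

```shell
# Sanity check: RAW USED should roughly equal SIZE - AVAIL (within rounding).
# Figures taken from the `ceph df` output quoted later in this thread.
size=269; avail=262; raw_used=7.1   # TiB

awk -v s="$size" -v a="$avail" -v u="$raw_used" 'BEGIN {
  diff = s - a          # space the cluster says is gone
  pct  = u / s * 100    # should match the reported %RAW USED
  printf "SIZE-AVAIL = %d TiB, RAW USED = %.1f TiB, %%RAW USED = %.2f\n", diff, u, pct
}'
```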

Thanks,
Mike

-----Original Message-----
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Serkan Çoban <cobanserkan@xxxxxxxxx>
Date: Wednesday, September 19, 2018 at 6:25 AM
To: "jaszewski.jakub@xxxxxxxxx" <jaszewski.jakub@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>, "ceph-users-bounces@xxxxxxxxxxxxxx" <ceph-users-bounces@xxxxxxxxxxxxxx>
Subject: Re: total_used statistic incorrect

The used space is the wal+db allocation on each OSD.
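As a back-of-the-envelope illustration (the OSD count and DB size here are hypothetical, not taken from this cluster): BlueStore reserves its DB/WAL space up front, so a freshly deployed cluster reports non-zero RAW USED before any pools exist. With, say, 72 OSDs each carrying a ~100 GiB DB/WAL device, the reservation alone lands near the 7.1 TiB reported below.

```python
# Back-of-the-envelope sketch: BlueStore counts its DB/WAL reservation as
# "used" space, so RAW USED is non-zero even on an empty cluster.
# num_osds and db_size_gib are hypothetical; substitute your own values.
GIB = 2**30
TIB = 2**40

num_osds = 72        # hypothetical OSD count
db_size_gib = 100    # hypothetical DB/WAL partition size per OSD

reserved_bytes = num_osds * db_size_gib * GIB
print(f"Reserved by wal+db: {reserved_bytes / TIB:.1f} TiB")  # ~7.0 TiB
```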
On Wed, Sep 19, 2018 at 3:50 PM Jakub Jaszewski
<jaszewski.jakub@xxxxxxxxx> wrote:
>
> Hi, I've recently deployed a fresh cluster via ceph-ansible. I haven't created any pools yet, but storage is used anyway.
>
> [root@ceph01 ~]# ceph version
> ceph version 13.2.1 (5533ecdc0fda920179d7ad84e0aa65a127b20d77) mimic (stable)
> [root@ceph01 ~]# ceph df
> GLOBAL:
>     SIZE        AVAIL       RAW USED     %RAW USED
>     269 TiB     262 TiB      7.1 TiB          2.64
> POOLS:
>     NAME     ID     USED     %USED     MAX AVAIL     OBJECTS
> [root@ceph01 ~]# rados df
> POOL_NAME USED OBJECTS CLONES COPIES MISSING_ON_PRIMARY UNFOUND DEGRADED RD_OPS RD WR_OPS WR
>
> total_objects    0
> total_used       7.1 TiB
> total_avail      262 TiB
> total_space      269 TiB
> [root@ceph01 ~]#
>
>
> Regards
> Jakub
>
> On Wed, Sep 19, 2018 at 2:09 PM <xuyong@xxxxxxxxxxx> wrote:
>>
>> The cluster needs time to remove the objects from the previous pools. All you can do is wait.
>>
>>
>>
>>
>>
>> From:         Mike Cave <mcave@xxxxxxx>
>> To:         ceph-users <ceph-users@xxxxxxxxxxxxxx>
>> Date:         2018/09/19 06:24
>> Subject:         total_used statistic incorrect
>> Sender:        "ceph-users" <ceph-users-bounces@xxxxxxxxxxxxxx>
>> ________________________________
>>
>>
>>
>> Greetings,
>>
>> I’ve recently run into an issue with my new Mimic deploy.
>>
>> I created some pools and volumes and did some general testing. In total, about 21 TiB was used. Once testing was completed, I deleted the pools and thus thought I had deleted the data.
>>
>> However, the ‘total_used’ statistic given by running ‘ceph -s’ shows that the space is still consumed. I have confirmed that the pools are deleted (rados df), but I cannot get total_used to reflect the actual usage on the system.
>>
>> Have I missed a step in deleting a pool? Is there some other step I need to perform other than what I found in the docs?
>>
>> Please let me know if I can provide any additional data.
>>
>> Cheers,
>> Mike
>>  _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
