Re: How to properly deal with NEAR FULL OSD

Thanks. To summarize:

Your data, images + volumes = 27.15% space used

Raw used = 81.71%

 

This is a big difference that I can't account for. Can anyone? So is your cluster actually full?
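
(For what it's worth, the 27.15% is just the sum of the two pool %USED figures from your ceph df output further down, and the 81.71% is the global raw figure:)

---
images:      8927G / 100553G =  8.88%
volumes:    18374G / 100553G = 18.27%
data total: 27301G / 100553G = 27.15%
raw used:   82161G / 100553G = 81.71%
---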

 

I had the same problem with my small cluster. Raw used was about 85%, while actual data, with replication, was about 30%. My OSDs were also BTRFS, and BTRFS was causing its own problems. I fixed it by removing each OSD one at a time and re-adding it with the default XFS filesystem (roughly the cycle sketched below). Doing so brought the two percentages to about the same value, and it's good now. My observation is that Ceph wasn't reclaiming the space it had used.
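
The per-OSD cycle I used was roughly the standard remove/re-create procedure. This is only a rough sketch, not exact commands: the OSD id and device path are placeholders, and I was on Hammer using ceph-disk, so adjust for your own setup:

---
# for each OSD in turn, waiting for the cluster to settle in between
ID=2                                  # placeholder OSD id
DEV=/dev/sdb                          # placeholder data disk

ceph osd out $ID                      # let data rebalance off the OSD
service ceph stop osd.$ID             # stop the daemon once rebalanced
ceph osd crush remove osd.$ID         # drop it from the CRUSH map
ceph auth del osd.$ID                 # delete its cephx key
ceph osd rm $ID                       # remove the OSD entry

# re-create it on the same disk with XFS (the default fs-type)
ceph-disk prepare --zap-disk --fs-type xfs $DEV
ceph-disk activate ${DEV}1            # assumes the data partition is ${DEV}1
---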

 

My version was Hammer.

 

/don

 

 

 

From: Dimitar Boichev [mailto:Dimitar.Boichev@xxxxxxxxxxxxx]
Sent: Friday, February 19, 2016 1:19 AM
To: Dimitar Boichev <Dimitar.Boichev@xxxxxxxxxxxxx>; Vlad Blando <vblando@xxxxxxxxxxxxx>; Don Laursen <don.laursen@xxxxxxxxx>
Cc: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: RE: [ceph-users] How to properly deal with NEAR FULL OSD

 

Sorry, that was a reply to the wrong message.

 

Regards.

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Dimitar Boichev
Sent: Friday, February 19, 2016 10:19 AM
To: Vlad Blando; Don Laursen
Cc: ceph-users
Subject: Re: [ceph-users] How to properly deal with NEAR FULL OSD

 

I have seen this when recovery was going on for some PGs while we were deleting large amounts of data.

They disappeared once the recovery process finished.

This was on Firefly 0.80.7.

 

 

Regards.

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Vlad Blando
Sent: Friday, February 19, 2016 3:31 AM
To: Don Laursen
Cc: ceph-users
Subject: Re: [ceph-users] How to properly deal with NEAR FULL OSD

 

I changed the PG count on my volumes pool from 300 to 512 to even out the distribution. Right now it is backfilling and remapping, and I can see that it's working.
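
(For reference, these were just the standard pool PG commands; the pool name here is the volumes pool from the ceph df output below, and pgp_num has to follow pg_num before the data actually starts moving:)

---
ceph osd pool set volumes pg_num 512
ceph osd pool set volumes pgp_num 512
---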

 

---
osd.2 is near full at 85%
osd.4 is near full at 85%
osd.5 is near full at 85%
osd.6 is near full at 85%
osd.7 is near full at 86%
osd.8 is near full at 88%
osd.9 is near full at 85%
osd.11 is near full at 85%
osd.12 is near full at 86%
osd.16 is near full at 86%
osd.17 is near full at 85%
osd.20 is near full at 85%
osd.23 is near full at 86%
---

 

We will be adding a new node to the cluster after this.

 

Another question: I'd like to temporarily raise the near-full OSD warning threshold from 85% to 90%, but I can't remember the command.
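
(I think it was something like one of the following, but I'm not sure I have the exact syntax right for my version:)

---
# raise the near-full warning threshold to 0.90
ceph pg set_nearfull_ratio 0.90
# or inject the config option into the monitors at runtime
ceph tell mon.* injectargs '--mon_osd_nearfull_ratio 0.90'
---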

 

 

@don

ceph df

---
[root@controller-node ~]# ceph df
GLOBAL:
    SIZE        AVAIL      RAW USED     %RAW USED
    100553G     18391G     82161G       81.71
POOLS:
    NAME        ID     USED       %USED     OBJECTS
    images      4      8927G      8.88      1143014
    volumes     5      18374G     18.27     4721934
[root@controller-node ~]#
---

 

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
