Bug on rbd rm when using cache tiers (Was: OSD on XFS ENOSPC at 84% data / 5% inode and inode64?)


 



On Fri, 2015-11-27 at 10:00 +0100, Laurent GUERBY wrote:
> > 
> > Hi, from given numbers one can conclude that you are facing some kind
> > of XFS preallocation bug, because ((raw space divided by number of
> > files)) is four times lower than the ((raw space divided by 4MB
> > blocks)). At a glance it could be avoided by specifying relatively
> > small allocsize= mount option, of course by impacting overall
> > performance, appropriate benchmarks could be found through
> > ceph-users/ceph-devel. Also do you plan to preserve overcommit ratio
> > to be that high forever?
> 
> Hi again,
> 
> Looks like we hit a bug in image deletion leaving objects undeleted on
> disk:
> 
> http://tracker.ceph.com/issues/13894
> 
> I assume we'll get a lot more free space when it's fixed :).
> 
> Laurent

Hi,

As the bug above (rbd rm not releasing any real disk space on cache
tiers) has been closed as "Rejected" by the ceph developers, I added the
comment below to the ticket.

Since no usable LTS version will get this issue fixed for a few years,
ceph users should be aware of it:

http://tracker.ceph.com/issues/13894#note-15
<<
The issue we're facing is that the size reported by all ceph tools is
completely wrong. From my ticket:

pool name    KB       objects
ec4p1c       41297    2672953

2.6 million objects taking 41 megabytes according to ceph, but about 10
terabytes on disk.

As most ceph documentation suggests setting target_max_bytes, there will
be no eviction at all after rbd rm until it's too late, as we found out
(OSDs will die due to full disks, ...).

The only way to prevent users from running into this issue is either to
fix the byte-counting bug for cache-promoted "rm"ed objects, or to tell
users for the years to come to use exclusively target_max_objects and
not target_max_bytes to control caching, with target_max_objects based
on their estimate of the available disk space and average object size.

Not fixing this issue will cause endless trouble for users for years to
come.
>>
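The target_max_objects sizing suggested in the comment above can be
sketched as follows. This is only an illustration, not from the ticket:
the function name, the 80% headroom factor, and the example figures (a
1 TiB cache tier with 4 MiB average RBD objects) are my own assumptions.

```python
# Hypothetical sizing sketch: derive target_max_objects from the usable
# cache-tier capacity and the average object size, keeping some headroom
# so eviction kicks in before the OSDs fill up.
def target_max_objects(usable_bytes, avg_object_bytes, headroom=0.8):
    """Estimate an object cap for a cache tier pool."""
    return int(usable_bytes * headroom // avg_object_bytes)

# e.g. 1 TiB usable cache, 4 MiB average RBD object size:
cap = target_max_objects(1 << 40, 4 << 20)
print(cap)
```

The resulting value would then be applied with
`ceph osd pool set <cachepool> target_max_objects <cap>`; since the byte
accounting is what is broken, relying on the object count sidesteps the
bug at the cost of a rougher capacity estimate.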

Sincerely,

Laurent


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


