Re: cephfs-data-scan orphan objects while mds active?


 



Hmm... seems I might have been blinded and was looking in the wrong place.

I did some scripting and took a look at the "parent" xattrs of all the
*.00000000 objects in the pool. Nothing funky there, and no files with a
backtrace pointing to that deleted folder. No considerable number of these
inode object "sequences" with a missing .00000000 chunk either. So it's
probably not orphan objects at this level then :|
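The check was roughly this kind of loop (a simplified sketch, not the exact script; it assumes rados and ceph-dencoder are available on the node to decode the binary backtrace):

```shell
# Iterate the first object of every file and dump its decoded backtrace.
# The "parent" xattr on *.00000000 objects holds an encoded inode_backtrace_t.
pool=cephfs_ec22hdd_data
rados -p "$pool" ls | grep '\.00000000$' | while read -r obj; do
    # Skip objects without the xattr (e.g. not yet flushed)
    rados -p "$pool" getxattr "$obj" parent > /tmp/bt.bin 2>/dev/null || continue
    echo "== $obj =="
    ceph-dencoder type inode_backtrace_t import /tmp/bt.bin decode dump_json
done
```

From the dumped JSON one can grep the ancestor dentries for the deleted folder's name.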

...and then I noticed that there is quite a considerable number of clones
even though there are no snapshots - or can there be some other reason for those?

# rados -p cephfs_ec22hdd_data lssnap
0 snaps
# rados -p cephfs_ec22hdd_data df
POOL_NAME            USED     OBJECTS   CLONES   COPIES     MISSING_ON_PRIMARY  UNFOUND  DEGRADED  RD_OPS     RD      WR_OPS     WR       USED COMPR  UNDER COMPR
cephfs_ec22hdd_data  179 TiB  68334318  8399291  273337272  0                   0        0         707777158  86 TiB  234691728  117 TiB  0 B         0 B
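For what it's worth, per-object clone info can be inspected with listsnaps; a rough way to sample a few objects and see which still carry clones (same pool name as above):

```shell
# Sample a handful of objects and print their snapshot/clone listing.
# "rados listsnaps" shows the head object plus any clones still held.
pool=cephfs_ec22hdd_data
rados -p "$pool" ls | head -5 | while read -r obj; do
    echo "== $obj =="
    rados -p "$pool" listsnaps "$obj"
done
```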

Is there some way to force these to get trimmed?

tnx,
---------------------------
Olli Rajala - Lead TD
Anima Vitae Ltd.
www.anima.fi
---------------------------


On Fri, May 17, 2024 at 6:48 AM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:

> It's unfortunately more complicated than that. I don't think that
> forward scrub tag gets persisted to the raw objects; it's just a
> notation for you. And even if it was, it would only be on the first
> object in every file — larger files would have many more objects
> forward scrub doesn't touch.
>
> This isn't a case anybody has really built tooling for. Your best bet
> is probably to live with the data leakage, or else find a time to turn
> it off and run the data-scan tools.
> -Greg
>
> On Tue, May 14, 2024 at 10:26 AM Olli Rajala <olli.rajala@xxxxxxxx> wrote:
> >
> > Tnx Gregory,
> >
> > Doesn't sound too safe then.
> >
> > Only reason to discover these orphans via scanning would be to delete the
> > files again and I know all these files were at least one year old... so,
> I
> > wonder if I could somehow do something like:
> > 1) do forward scrub with a custom tag
> > 2) iterate over all the objects in the pool and delete all objects
> without
> > the tag and older than one year
> >
> > Is there any tooling to do such an operation? Any risks or flawed logic
> > there?
> >
> > ...or any other ways to discover and get rid of these objects?
> >
> > Cheers!
> > ---------------------------
> > Olli Rajala - Lead TD
> > Anima Vitae Ltd.
> > www.anima.fi
> > ---------------------------
> >
> >
> > On Tue, May 14, 2024 at 9:41 AM Gregory Farnum <gfarnum@xxxxxxxxxx>
> wrote:
> >
> > > The cephfs-data-scan tools are built with the expectation that they'll
> > > be run offline. Some portion of them could be run without damaging the
> > > live filesystem (NOT all, and I'd have to dig in to check which is
> > > which), but they will detect inconsistencies that don't really exist
> > > (due to updates that are committed to the journal but not fully
> > > flushed out to backing objects) and so I don't think it would do any
> > > good.
> > > -Greg
> > >
> > > On Mon, May 13, 2024 at 4:33 AM Olli Rajala <olli.rajala@xxxxxxxx>
> wrote:
> > > >
> > > > Hi,
> > > >
> > > > I suspect that I have some orphan objects on a data pool after quite
> > > > haphazardly evicting and removing a cache pool after deleting 17TB of
> > > files
> > > > from cephfs. I have forward scrubbed the mds and the filesystem is in
> > > clean
> > > > state.
> > > >
> > > > This is a production system and I'm curious if it would be safe to
> > > > run cephfs-data-scan scan_extents and scan_inodes while the fs is
> online?
> > > > Does it help if I give a custom tag while forward scrubbing and then
> > > > use --filter-tag on the backward scans?
> > > >
> > > > ...or is there some other way to check and cleanup orphans?
> > > >
> > > > tnx,
> > > > ---------------------------
> > > > Olli Rajala - Lead TD
> > > > Anima Vitae Ltd.
> > > > www.anima.fi
> > > > ---------------------------
>
>
>
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



