Re: A design for CephFS forward scrub with multiple MDS

> Well, we can't keep these inodes around the way the code currently
> works: unless I'm much mistaken, nothing is keeping them updated so
> they're just out-of-date copies of metadata. PIN_SCRUBQUEUE exists to
> keep inodes in-memory once on the scrub queue but it was explicitly
> never designed to interact with multi-mds systems and needs to be
> cleaned up; the current behavior is just broken. Luckily, there are
> some not-too-ridiculous solutions.
> * We already freeze a subtree before exporting it. I don't remember if
> that involves actually touching every in-memory CDentry/CInode
> underneath, or marking a flag on the root?
> * We obviously walk our way through the whole subtree when bundling it
> up for export
> So, in one of those passes (or in a new one tacked on to freezing), we
> can detect that an inode is on the scrub queue and remove it.

There’s also:
* Represent the entries on the scrub stack with a different data structure (e.g. value copies along the lines of inode_t rather than pinned CInodes), so the queue doesn't depend on keeping those inodes in cache. Then we could leave the current behavior intact.
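Very roughly, something like this -- the types below are just placeholders to show the shape I mean, not the real ScrubStack or inode_t code:

  #include <deque>
  #include <cstdint>

  // Value copy of whatever per-inode metadata the scrub actually needs;
  // enough to re-look-up the inode later instead of pinning it in cache.
  struct scrub_entry_t {
    uint64_t ino;        // inode number
    uint64_t version;    // version at queue time, to detect staleness
  };

  class ScrubQueue {
    std::deque<scrub_entry_t> stack;
  public:
    void push(uint64_t ino, uint64_t version) {
      stack.push_back({ino, version});
    }
    bool pop(scrub_entry_t *out) {
      if (stack.empty())
        return false;
      *out = stack.back();   // LIFO, so we keep the depth-first scrub order
      stack.pop_back();
      return true;
    }
  };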

Your subtree bundling suggestion does look interesting, though. I’ll look into it.
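For that bundling pass, the hook I'd picture sits roughly like this -- Inode, for_each_in_subtree, etc. are stand-ins for whatever CDir/CInode walk the export path already does, not actual interfaces:

  #include <vector>
  #include <functional>

  // Placeholder for a cached inode in the (already frozen) subtree.
  struct Inode {
    bool on_scrub_queue = false;
    std::vector<Inode*> children;   // stand-in for the dentry/dirfrag tree
  };

  // Depth-first walk of the subtree being exported.
  void for_each_in_subtree(Inode *root,
                           const std::function<void(Inode*)> &fn) {
    fn(root);
    for (Inode *child : root->children)
      for_each_in_subtree(child, fn);
  }

  // Called while bundling the subtree up for export: anything still on the
  // scrub queue gets dequeued, since this MDS is giving up authority.
  void strip_scrub_pins(Inode *subtree_root,
                        const std::function<void(Inode*)> &dequeue) {
    for_each_in_subtree(subtree_root, [&](Inode *in) {
      if (in->on_scrub_queue) {
        dequeue(in);              // drop the PIN_SCRUBQUEUE-style pin
        in->on_scrub_queue = false;
      }
    });
  }

i.e. we'd drop the scrub pins while we're already touching the whole subtree, before authority moves to the other MDS.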
