Re: [PATCH] mds: handle setxattr ceph.parent

On Fri, Dec 20, 2013 at 4:50 PM, Alexandre Oliva <oliva@xxxxxxx> wrote:
> On Dec 20, 2013, Alexandre Oliva <oliva@xxxxxxx> wrote:
>
>> back many of the osds to recent snapshots thereof, from which I'd
>> cleaned all traces of the user.ceph._parent.  I intended to roll back
>
> Err, I meant user.ceph._path, of course ;-)
>
>> So I think by now I'm happy to announce that it was an IO error (where
>> IO stands for Incompetence of the Operator ;-)
>
>> Sorry about this disturbance, and thanks for asking me to investigate it
>> further and find a probable cause that involves no fault of Ceph's.
>
> I guess after the successful --reset-journal, I get to clean up on my
> own the journal files that are no longer used but that apparently won't
> get cleaned up by the mds any more.  Right?

If they're in the "future" of the mds journal they'll get cleared out
automatically as the MDS gets up to them (this is the pre-zeroing
thing). If they're in the "past", yeah, you'll need to clear them up.
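For the "past" case, a hedged sketch of what that manual cleanup could look like, assuming the default layout where rank 0's journal lives in inode 0x200 (objects named `200.<hex index>` in the metadata pool); the pool name, the start offset, and the helper function are illustrative assumptions, not commands from this thread:

```shell
#!/usr/bin/env bash
# Sketch: after a successful --reset-journal, journal objects whose
# index falls before the new journal's start are never expired by the
# MDS and would need removing by hand.

# filter_stale_journal_objects: reads object names on stdin and prints
# rank-0 journal objects (prefix "200.") whose hex index is below the
# given start offset.
filter_stale_journal_objects() {
  local start=$(( $1 ))            # e.g. 0x1a -> 26
  local obj idx
  while IFS= read -r obj; do
    case "$obj" in
      200.*)
        idx=$(( 0x${obj#200.} ))   # hex index after the "200." prefix
        if [ "$idx" -lt "$start" ]; then
          printf '%s\n' "$obj"
        fi
        ;;
    esac
  done
}

# Hypothetical usage against a metadata pool named "metadata" --
# inspect the list before deleting anything:
#   rados -p metadata ls | filter_stale_journal_objects 0x1a
#   rados -p metadata ls | filter_stale_journal_objects 0x1a \
#     | while read -r o; do rados -p metadata rm "$o"; done
```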

Did you do that rollback via your cluster snapshot thing, or just
local btrfs snaps? I don't think I want to add anything that makes it
easy for people to break their filesystem like this. :p
-Greg
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html