Re: Odd "data used" reporting behavior by ceph -w


Hi Greg,

On Fri, 2011-01-28 at 17:44 -0700, Gregory Farnum wrote:
> Jim:
> It's been a while, but I started looking into this again a couple of
> days ago. Slow file deletion is something we've decided to push back
> until after 1.0, since it doesn't seem like a serious issue. The other
> two problems we did want to solve, though, and I think we've succeeded.
> I was unable to reproduce your problem with ls on recent branches, and
> I just pushed a fix for the slow dd-truncate-dd sequence to both the
> master and stable branches. When you get the chance, please test one
> of those out and let me know if you still see issues like this.

I will.  Thanks for working on these issues.

-- Jim

> Thanks!
> -Greg
> 
> On Tue, Dec 7, 2010 at 7:14 AM, Jim Schutt <jaschut@xxxxxxxxxx> wrote:
> >
> > Hi Sage,
> >
> > On Sat, 2010-12-04 at 21:59 -0700, Sage Weil wrote:
> >> Hi Jim,
> >>
> >> I think there are at least two different things going on here.
> >>
> >> On Fri, 3 Dec 2010, Jim Schutt wrote:
> >> > On Fri, 2010-12-03 at 15:36 -0700, Gregory Farnum wrote:
> >> > > How are you generating these files? It sounds like maybe
> >> > > you're doing them concurrently on a bunch of clients?
> >> >
> >> > When I created the files initially, I did it via one
> >> > dd per client over 64 clients, all at the same time.
> >> >
> >> > When I used echo to truncate them to zero length, I
> >> > did all files from one client.  Also, when I removed
> >> > the files, I did them all from a single client.
> >>
> >> The MDS doesn't release objects on deleted files until all
> >> references to the file go away (i.e., everyone closes the file
> >> handle).  The client makes a point of releasing its capability on
> >> inodes it unlinks, but since the unlink happened on a different
> >> node, the writer doesn't realize the file is unlinked and doesn't
> >> bother to release its capability (until the inode gets pushed out
> >> of the inode cache by normal cache pressure).  I suspect this will
> >> need some additional messaging to get the client to drop it sooner.
> >>
> >> http://tracker.newdream.net/issues/630
> >>
> >> That fix won't make it into 0.24, sorry!  Probably 0.24.1.
> >>
> >
> > Thanks for tracking this!  Whatever priority you assign
> > works great for me.
> >
> > -- Jim
> >
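[Editor's note] The capability-retention behavior Sage describes above — the MDS reclaims an unlinked file's objects only once every client has released its capability on that inode — can be sketched as simple reference counting. This is a hypothetical simulation for illustration, not Ceph's actual MDS code; all class and method names are invented:

```python
# Hypothetical sketch of the behavior described in issue #630:
# an unlinked inode's objects are freed only after the last
# client capability on it is released.  Not real Ceph code.

class MDS:
    def __init__(self):
        self.caps = {}         # inode -> set of clients holding caps
        self.unlinked = set()  # unlinked inodes awaiting reclamation
        self.freed = []        # inodes whose objects were reclaimed

    def open(self, client, inode):
        self.caps.setdefault(inode, set()).add(client)

    def unlink(self, client, inode):
        # The unlinking client drops its own capability right away...
        self.caps[inode].discard(client)
        self.unlinked.add(inode)
        self._maybe_free(inode)

    def release_cap(self, client, inode):
        # ...but other clients keep theirs until cache pressure (or,
        # per the proposed fix, an explicit message) evicts the inode.
        self.caps[inode].discard(client)
        self._maybe_free(inode)

    def _maybe_free(self, inode):
        # Reclaim only when unlinked AND no capabilities remain.
        if inode in self.unlinked and not self.caps[inode]:
            self.unlinked.discard(inode)
            self.freed.append(inode)

mds = MDS()
mds.open("writer", "ino1")    # the client that wrote the file
mds.open("deleter", "ino1")   # a different client
mds.unlink("deleter", "ino1")
print(mds.freed)              # [] -- space still reported as used
mds.release_cap("writer", "ino1")
print(mds.freed)              # ['ino1'] -- objects reclaimed
```

This is why "data used" stays high after an unlink from a different client than the writer: the storage is logically deleted but not yet reclaimable.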

