Re: [GIT PULL] Ceph fixes for 5.1-rc7

On Fri, 2019-04-26 at 17:50 +0100, Al Viro wrote:
> On Fri, Apr 26, 2019 at 12:25:03PM -0400, Jeff Layton wrote:
> 
> > It turns out though that using name_snapshot from ceph is a bit more
> > tricky. In some cases, we have to call ceph_mdsc_build_path to build up
> > a full path string. We can't easily populate a name_snapshot from there
> > because struct external_name is only defined in fs/dcache.c.
> 
> Explain, please.  For ceph_mdsc_build_path() you don't need name
> snapshots at all and existing code is, AFAICS, just fine, except
> for pointless pr_err() there.
> 

Eventually we have to pass back the result of all the
build_dentry_path() shenanigans to create_request_message(), and then
free whatever that result is after using it.

Today, that result is either a string+length from
ceph_mdsc_build_path() or clone_dentry_name(), or direct pointers into
the dentry when the situation allows for it.

Now we want to rip out clone_dentry_name() and start using
take_dentry_name_snapshot(). That returns a name_snapshot that we'll
need to pass back to create_request_message(), which will then have to
deal with getting one of those instead of just a string+length.

My original thought was to always pass back a name_snapshot, but that
turns out to be difficult because its innards are not public. The other
potential solutions that I've tried make this code look even worse than
it already is.
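
To make that concrete, here is the sort of wrapper this ends up
wanting. It is purely hypothetical (none of these names exist in
fs/ceph), just to show the plumbing problem:

struct ceph_built_path {                        /* hypothetical */
        const char              *name;         /* what goes on the wire */
        int                     len;
        u64                     ino;            /* base inode number */
        struct name_snapshot    snap;           /* valid iff snapshotted */
        bool                    snapshotted;    /* release_dentry_name_snapshot() */
        bool                    allocated;      /* free the buffer instead */
        /* neither flag set: name points directly at the dentry's name */
};

Every consumer then has to carry the "which case did I get, and how do
I free it?" logic around, which is what makes the result uglier than
today's string+length.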


> I _probably_ would take allocation out of the loop (e.g. make it
> __getname(), called unconditionally) and turn it into the
> d_path.c-style read_seqbegin_or_lock()/need_seqretry()/done_seqretry()
> loop, so that the first pass would go under rcu_read_lock(), while
> the second (if needed) would just hold rename_lock exclusive (without
> bumping the refcount).  But that's a matter of (theoretical) livelock
> avoidance, not the locking correctness for ->d_name accesses.
> 

Yeah, that does sound better. I want to think about this code a bit
more.
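
Something like this, maybe? Completely untested, just the shape of the
loop: the actual walk and all the ceph-specific bits are elided, and
I've kept rcu_read_lock() held across both passes to keep the sketch
simple:

        char *path = __getname();       /* PATH_MAX-sized buffer */
        int pos, seq = 0;

        if (!path)
                return ERR_PTR(-ENOMEM);

        rcu_read_lock();
retry:
        pos = PATH_MAX - 1;
        path[pos] = '\0';
        read_seqbegin_or_lock(&rename_lock, &seq);

        /* ... walk ->d_parent toward the root, prepending names ... */

        if (need_seqretry(&rename_lock, seq)) {
                seq = 1;        /* second pass takes rename_lock exclusively */
                goto retry;
        }
        done_seqretry(&rename_lock, seq);
        rcu_read_unlock();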

> Oh, and
>         *base = ceph_ino(d_inode(temp));
>         *plen = len;
> probably belongs in critical section - _that_ might be a correctness
> issue, since temp is not held by anything once you are out of there.
> 

Good catch. I'll fix that up.
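
Yes, those assignments need to move up above the point where the
critical section ends, i.e. roughly:

        *base = ceph_ino(d_inode(temp));        /* temp is only pinned while */
        *plen = len;                            /* we are still under RCU    */
        rcu_read_unlock();

(If the seqretry check then decides to retry, the values simply get
overwritten on the next pass, so writing them early is harmless.)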

> > I could add some routines to do this, but it feels a lot like I'm
> > abusing internal dcache interfaces. I'll keep thinking about it though.
> > 
> > While we're on the subject though:
> > 
> > struct external_name {
> >         union {
> >                 atomic_t count;
> >                 struct rcu_head head;
> >         } u;
> >         unsigned char name[];
> > };
> > 
> > Is it really ok to union the count and rcu_head there?
> > 
> > I haven't trawled through all of the code yet, but what prevents someone
> > from trying to access the count inside an RCU critical section, after
> > call_rcu has been called on it?
> 
> The fact that no lockless accesses to ->count are ever done?
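
Got it. So the two members are never live at the same time: ->u.count
is only ever touched by someone who already holds a reference (or the
owning dentry's ->d_lock), and ->u.head only comes into play once the
final reference is gone. Roughly this pattern, if I'm reading
fs/dcache.c right (illustrative only, not the literal dcache code, and
the helper name is made up):

static void put_external_name_ref(struct external_name *p)
{
        /*
         * Every caller already owns a reference, so ->u.count is never
         * read locklessly.  Only when the last reference goes away does
         * ->u.head get used, to defer the kfree() past any RCU readers
         * still following ->d_name.name into this storage.
         */
        if (atomic_dec_and_test(&p->u.count))
                kfree_rcu(p, u.head);
}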

Thanks,
-- 
Jeff Layton <jlayton@xxxxxxxxxx>



