Re: Orangefs ABI documentation

On Wed, Feb 10, 2016 at 04:44:36PM +0000, Al Viro wrote:
> > That breakage had been introduced between 2.8.5 and 2.8.6 (at some point
> > during the spring of 2012).  AFAICS, all versions starting with 2.8.6 are
> > vulnerable...
> 
> BTW, what about kill -9 delivered to readdir in progress?  There's no
> cancel for those (and AFAICS the daemon will reject cancel on anything
> other than FILE_IO), so what's to stop another thread from picking the
> same readdir slot and getting (daemon-side) two of them spewing into
> the same area of shared memory?  Is it simply that daemon-side the shared
> memory on readdir is touched only upon request completion in completely
> serialized process_vfs_requests()?  That doesn't seem to be enough -
> suppose the second readdir request completes (daemon-side) first, its results
> get packed into a shared memory slot and it is reported to the kernel, which
> proceeds to repack and copy that data to userland.  In the meanwhile,
> daemon completes the _earlier_ readdir and proceeds to pack its results into
> the same slot of shared memory.  Sure, the kernel won't take that (the
> op with the matching tag is already gone), but the data is stored
> into shared memory *before* writev() on the control device that would pass
> the response to the kernel, so it still gets overwritten.  Right under
> decoding readdir()...
> 
> Or is there something in the daemon that would guarantee readdir responses
> to happen in the same order in which it had picked the requests?  I'm not
> familiar enough with that beast (and overall control flow in there is, er,
> not the most transparent one I've seen), so I might be missing something,
> but I don't see anything obvious that would guarantee such ordering.
> 
> Please, clarify.
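
To make the suspected race concrete, here is a minimal sketch of the
daemon-side completion path as I read it; all names are illustrative,
not the real OrangeFS daemon API:

	#include <stdint.h>
	#include <string.h>
	#include <sys/uio.h>

	struct readdir_slot { char buf[4096]; };

	static struct readdir_slot *shared_mem;	/* mmapped bufmap */
	static int control_fd;			/* control device fd */

	static void complete_readdir(int slot, uint64_t tag,
				     const void *results, size_t len)
	{
		/* (1) pack the results into the shared-memory slot */
		memcpy(shared_mem[slot].buf, results, len);

		/* (2) only then report completion to the kernel */
		struct iovec iov = { &tag, sizeof(tag) };
		writev(control_fd, &iov, 1);
	}

If a kill -9 frees the slot and another readdir grabs it, nothing stops
step (1) of the *stale* completion from scribbling over the slot while
the kernel is still decoding the fresh one; the kernel rejecting the
stale response in step (2) comes too late to help.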

Two more questions:
	* why does cancel need to be held back while we are going through
ORANGEFS_DEV_REMOUNT_ALL?  IOW, why do we need to take request_mutex for
cancels at all?  (See the first sketch below for the pattern I mean.)
	* your ->kill_sb() starts with telling the daemon that the fs is
gone, then proceeds to evict dentries/inodes.  Sure, you don't have page
cache (or that would've been instantly fatal - dirty pages would need to
be written out, for one thing), but why do it in this order?  IOW, why
not _start_ with kill_anon_super(), then do the rest of the work?  (See
the second sketch below.)
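
A hedged sketch of the pattern the first question is about - the
identifiers are illustrative, not necessarily what the driver actually
calls them:

	static DEFINE_MUTEX(request_mutex);

	/* remount ioctl handler, as I read it */
	static long dispatch_remount_all(void)
	{
		mutex_lock(&request_mutex);	/* blocks all new ops */
		resend_mount_info_to_daemon();	/* hypothetical helper */
		mutex_unlock(&request_mutex);
		return 0;
	}

Since cancels are submitted through the same path that takes
request_mutex, they sit behind the remount for its whole duration - and
it's not obvious to me why they need to.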
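
And a sketch of the ordering the second question suggests; again, the
daemon-notification helpers are hypothetical stand-ins:

	static void orangefs_kill_sb(struct super_block *sb)
	{
		kill_anon_super(sb);		/* evict dentries/inodes first */
		tell_daemon_fs_is_gone(sb);	/* only then notify the daemon */
		free_sb_private(sb);		/* and free per-sb state */
	}

kill_anon_super() is the stock VFS helper; the other two stand in for
whatever the driver currently does before evicting.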


