Re: [git pull] vfs part 2

Jeff Layton wrote on Thu, Jul 02, 2015:
> So p9_idpool_create should take an argument for the "end" value, and
> then store that in a new field in p9_idpool. Then they can pass that in
> as the "end" parm in idr_alloc. Or, they could give up using the same
> function there and use a different one for tags and FIDs.
> 
> In any case, allowing this thing to allocate tag values that can
> collide seems fundamentally wrong. Using idr_alloc_cyclic wouldn't
> hurt either, particularly given that these tag values are supposed to
> function something like an XID, and you probably don't want to be
> reusing them too quickly.
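
For reference, a minimal sketch of the change Jeff describes, modeled
on net/9p/util.c -- the new field name and the exact bounds passed in
are illustrative, not an actual patch:

struct p9_idpool {
        spinlock_t lock;
        struct idr pool;
        int end;        /* exclusive upper bound passed to idr_alloc() */
};

struct p9_idpool *p9_idpool_create(int end)
{
        struct p9_idpool *p;

        p = kmalloc(sizeof(*p), GFP_KERNEL);
        if (!p)
                return ERR_PTR(-ENOMEM);

        spin_lock_init(&p->lock);
        idr_init(&p->pool);
        /* e.g. P9_NOTAG for the tag pool, 0 (no limit) for FIDs */
        p->end = end;

        return p;
}

int p9_idpool_get(struct p9_idpool *p)
{
        int i;
        unsigned long flags;

        idr_preload(GFP_NOFS);
        spin_lock_irqsave(&p->lock, flags);

        /* p->end caps allocation so a u16 tag can never wrap and
         * collide; idr_alloc_cyclic() could be used here instead to
         * delay tag reuse, as suggested above. */
        i = idr_alloc(&p->pool, p, 0, p->end, GFP_NOWAIT);

        spin_unlock_irqrestore(&p->lock, flags);
        idr_preload_end();

        return i;
}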

Using cache=none here, so behaviour is likely different with caching
enabled, but basically you can't get more than one tag per user thread
accessing the 9P mount...
And with RDMA there's a credit mechanism, so I can't get past whatever
sq option was given (it defaults to 32) -- to be honest, even with
other transports I doubt it's going to get much higher.
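
For the curious, a simplified sketch of that credit, modeled on
net/9p/trans_rdma.c (trimmed, not the verbatim source): every request
must take a send-queue credit before it is posted, so at most sq_depth
requests -- and hence tags -- are ever in flight at once.

#define P9_RDMA_SQ_DEPTH 32     /* default for the "sq" mount option */

struct p9_trans_rdma {
        int sq_depth;               /* from the "sq" mount option */
        struct semaphore sq_sem;    /* initialized to sq_depth */
        /* ... */
};

static int rdma_request(struct p9_client *client, struct p9_req_t *req)
{
        struct p9_trans_rdma *rdma = client->trans;

        /* Block until a send-queue credit is available; the credit is
         * returned when the send completion fires. */
        if (down_interruptible(&rdma->sq_sem))
                return -EINTR;

        /* ... build and post the send work request ... */
        return 0;
}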

Still definitely needs fixing, but I think the issue is somewhere
else... If Andrey could share the workload he uses, I can try it
against other servers; it would be nice if we could rule out a qemu
bug completely :)

-- 
Dominique