Re: fscache review comments, part 3

On Fri, Sep 29, 2006 at 10:23:57AM +0100, David Howells wrote:
> Christoph Hellwig <hch@xxxxxxxxxxxxx> wrote:
> 
> >  - as a start please don't mix introducing a new module, adding new
> >    function in the core code and adding exports in the same patch.
> 
> Then people (like one Christoph Hellwig) complain about modules that add stuff
> that nothing uses at that point.

In your years of doing kernel work you must surely have been introduced
to the concept of a patch series.  Stop being such a prick and please
work the way everyone else does.
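For illustration, the split might look something like this (subject lines
made up, purely as a sketch):

    [PATCH 1/3] mm: add a helper to attach a waiter to a page's wait queue
    [PATCH 2/3] mm: export the new helper for use by modules
    [PATCH 3/3] cachefiles: add the CacheFiles cache backend

That way the core change, the new export and the new module can each be
reviewed and bisected on their own, and nothing is exported before its user
appears in the same series.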

> 
> >  - the new install_page_waitqueue_monitor function is generally fine,
> >    but it should get
> >       a) a kernel-doc comment describing it
> >       b) a name actually describing what it does, e.g. add_page_wait_queue,
> >          mirroring the names of the other waitqueue functionality.
> 
> Ummm... Do you mean you don't like "install_page_waitqueue_monitor"?  It's not
> clear from your example if that's what you mean.

No, I meant exactly what I said.  We don't use the term "monitor" for any
of our current waitqueue interfaces, and you shouldn't introduce it randomly.
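As a sketch of what a) and b) together would look like - the exact signature
here is illustrative, not a requirement:

    /**
     * add_page_wait_queue - add a waiter to a page's wait queue
     * @page: the page to watch
     * @waiter: the wait queue entry to add
     *
     * Add an arbitrary waiter to the wait queue for the nominated page; the
     * waiter is woken up when the page is unlocked.
     */
    void add_page_wait_queue(struct page *page, wait_queue_t *waiter);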

> >  - generic_file_buffered_write_one_kernel_page seems generally fine, but
> >    you must not call it directly from cachefiles; go through a file
> >    operation for it instead.  There are various filesystems where going
> >    directly to the address_space operations is not fine.  This goes back
> >    to our kernel_read/write discussion at OLS.
> 
> I'm also told (though not by you) that I don't need to add more file ops or
> address space ops as everything I need is there.

But I'm telling you that we do need it, and you haven't shown a reason why
that's not true.  If you only believe what people from your own company tell
you, take a look at the thread starting at
http://marc.theaimsgroup.com/?l=linux-fsdevel&m=114349617015065&w=2
where the shiny new filesystem from the people with the Red Hats has a problem
with exactly that kind of construct.
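Roughly, the distinction is the following (sketch only, using the 2.6-era
kernel-buffer write idiom):

    /* Poking at the address_space directly bypasses the filesystem: */
    mapping->a_ops->prepare_write(file, page, from, to);
    /* ... copy the data ... */
    mapping->a_ops->commit_write(file, page, from, to);

    /* Going through the file lets the filesystem's own ->write path run: */
    mm_segment_t oldfs = get_fs();
    set_fs(KERNEL_DS);
    ret = vfs_write(file, (const char __user *) buf, count, &pos);
    set_fs(oldfs);

A filesystem like the one in the thread above can then intercept the write in
its file_operations instead of having its address_space poked behind its back.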

> >    module parameters now, which sound perfect for this purpose.
> 
> I didn't know that.  What if it's not a module?

It still works; it just becomes a boot-time/run-time thing instead.
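A sketch of what that looks like (the parameter name is made up):

    static unsigned int frun_percent = 7;
    module_param(frun_percent, uint, 0644);
    MODULE_PARM_DESC(frun_percent,
                     "free space percentage below which culling begins");

Built as a module that's a module option; built in, the same parameter can be
set on the kernel command line as cachefiles.frun_percent=... and, given the
0644 mode, changed at run time under /sys/module/cachefiles/parameters/.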

> >  - the procfs interface is insane :) Suggested replacement:
> > 
> >     - the tunables sound like they should be one-value-per-file sysfs
> >       entities.
> >     - ditto for the tag
> >     - same is true for the root directory, but that one should not
> >       be specified as a file descriptor, but as a normal pathname.
> >       This also gets rid of the unacceptable fget_light export.
> 
> The intention is, in the future, to make it so that you can open it multiple
> times to provide multiple caches with separate parameters, so: request denied.

So what?  Make the sysfs parameters per-cache-object - that's exactly
how it's supposed to work.  And btw, this was not a request, this was
a clear NACK of the current broken interface with a strongly suggested
replacement.

> >    Why can you use a random signal to communicate with your daemon?
> 
> Can you rephrase that please? I think you're missing a negation.
> 
> Anyway, my daemon is attached to a file descriptor: why not use the fd's
> signalling capabilities?

Because we're not doing it anywhere else, and having some code in fs/
be the only user of it outside the traditional networking use is a very
good sign that you're doing something rather fishy here.  I'm still open
to arguments for why it's wonderful, but it needs a good explanation.

> Then CacheFiles has to do all its own filesystem ops in a separate thread
> which will slow things down a lot.  Every read, every write, most opens and
> many closes done to, say, NFS will incur a pair of context switches.
>
> 
> I have to avoid SELinux, fs permission checks and auditing.  You have to
> remember that the security details of the accessing process are *not* what I
> must access the cache with.

No, you must store access control credentials with your files and allow
auditing - everything else is a bypass of the security model.  And yes,
performance comes a distant third after correctness and maintainability.
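In rough terms - and using credential switching only as a sketch, with
cache->cache_cred as a hypothetical field holding the cache's stored
identity - that means something like:

    const struct cred *saved;

    saved = override_creds(cache->cache_cred);  /* act as the cache... */
    ret = vfs_write(file, buf, count, &pos);    /* ...with LSM checks and
                                                 * auditing still applied */
    revert_creds(saved);                        /* back to the caller */

rather than skipping the checks because the caller's credentials don't fit.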

> 
> >  - all of cf-namei.c is poking into dcache.c and namei.c internals that
> >    it absolutely must not touch.  I'm not going to mention all the details
> >    here because there are far too many, but I'd say for every single direct
> >    invocation of a filesystem inode or dcache operation from this file you
> >    will need a very good explanation - I'm not going to accept something
> >    poking into internals that deeply; it's exactly the kind of layering
> >    we want to provide via the VFS.
> 
> Consider security.

Exactly - what you do means you're bypassing all kinds of checks and balances
in the VFS code, which is exactly what we want to avoid.
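For a concrete example of the difference (sketch, 2.6-era signatures):

    /* Raw inode operation: skips may_create(), security_inode_mkdir(),
     * fsnotify and friends */
    dir->i_op->mkdir(dir, dentry, mode);

    /* VFS helper: runs those checks and balances first */
    ret = vfs_mkdir(dir, dentry, mode);

The same goes for the lookups, unlinks and renames that cf-namei.c does by
hand.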

> 
> >  - style-comment: please don't mention the filename again in the top-of-file
> >    block comment
> 
> I trust you'll go and fix that in every other file in the kernel that does so.
> 
> But I can do that for my files here.

Fixing existing code is a job for the kernel-janitors project, while reviews
try to keep bad habits from sneaking into new code.  Having a bad example
somewhere in the tree is not a good reason to add another one.

(I'm really tired of writing all this crap again and again because people
obviously aren't listening.  Does someone who writes better English than I do
want to add something about this to SubmittingPatches or a similar document?)
