Hi Vyacheslav,

------------------------------

On Thu, Apr 10, 2014 8:23 AM BST Vyacheslav Dubeyko wrote:

<snipped>

>Yes, it is possible to have different splitting fixes between patches. I
>simply think that if you mix the fix for file names and partial fix in
>hfsplus_listxattr() in one patch then it needs to use patchset scheme.
>Because rest fixes for xattr names should be in other patches of the
>patchset. Otherwise, it needs to have one patch for file names fix and
>other patches for xattr names. Of course, it's only my opinion and I
>don't insist on this way. But it's right way for me.

At the moment, I am seeing a division into correcting all usages of
hfsplus_uni2asc(), and correcting all usages of HFSPLUS_ATTR_MAX_LENGTH.
I am also thinking the latter needs a new function, a way of calculating
the unicode length of an nls string; so there will be at least 3 patches
in total. (I might change my mind later - but it is at least 3 for now.)
The one in hfsplus_listxattr() falls in both categories, as a usage of
hfsplus_uni2asc() and also a usage of HFSPLUS_ATTR_MAX_LENGTH, and so
can go into either camp.

I can see one argument for your division by purpose (vs. my division by
API usage), which is that older kernels do not have attribute support;
but to be honest, if somebody cares enough about an older kernel without
attribute support to cherry-pick future patches, they should be capable
of splitting the two parts anyway, and they probably should update the
whole hfsplus module wholesale.

<snipped>

>But what difference between inode buffers and xattr name buffers? Inode
>buffer has fixed size and xatr buffer has fixed size. Inodes and xattr
>names comes and goes. Maybe, xattr name buffer has shorter lifetime.
>Efficiency is not only performance but good way of memory using. We know
>that xattr buffer has fixed size and it can be many simultaneous
>requests for such buffers creation.
>When we create kmem cache then we share our knowledge about fixed size
>nature of data with memory subsystem. Otherwise, memory subsystem will
>work in more sophisticated way. Maybe, I am wrong or misunderstand
>something here. But I see similarity between inode buffers and xattr
>name buffers.

Yes, I think you may be misunderstanding what kmem_cache is supposed to
be used for, from my reading. (I could be wrong, of course.)

kmem_cache is a way of avoiding fragmentation. You want it when you have
a lot of long-lasting objects of the same size; a good example is
inodes. There are a lot of them at any one time, but they come and go.
If they were allocated from the generic allocator, contiguous memory
could be broken down into smaller pieces over time, as you allocate and
free these fixed-size objects *among objects of other sizes*. So to
avoid fragmentation, you pre-allocate a "pool" of slots of this fixed
size, and just allocate and free from this pool instead.

So this assumes (1) you want to have *many* of these same-size objects
at any one time, but they are allocated/freed all the time, and (2) you
want to avoid breaking up larger chunks of contiguous memory just to
allocate these smaller ones. There is an overhead for setting up the
pool (and the expanding/shrinking of the pool), the tearing down of the
pool, etc. Nobody is talking about speed vs. just plain kmalloc here -
maybe it is faster, maybe it is slower - but the purpose is to avoid
memory fragmentation over time.

Neither of these is true for the currently stack-based xattr buffers.
They are short-lived (the stack is popped at the end of the routine), so
any broken-up memory is soon coalesced with neighbouring free regions
(i.e. "joined back together" after use); and also, you are unlikely to
be calling more than a few copies of getxattr/setxattr simultaneously,
and so unlikely to need many of these buffers simultaneously.

Does this sound reasonable?
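To make the contrast concrete, here is a hypothetical kernel-code sketch
of the two patterns (not from the hfsplus source; "example_inode",
struct example_inode, and the function names are all made up for
illustration):

```c
/*
 * Hypothetical sketch, not hfsplus code: the names below are invented.
 */
#include <linux/init.h>
#include <linux/fs.h>
#include <linux/slab.h>

struct example_inode {
	struct inode vfs_inode;
	/* ... filesystem-specific fields ... */
};

static struct kmem_cache *example_inode_cachep;

/*
 * Many long-lived, same-sized objects (the inode case): a dedicated
 * slab cache pre-allocates fixed-size slots, so the constant
 * alloc/free churn does not fragment general-purpose memory.
 */
static int __init example_init(void)
{
	example_inode_cachep = kmem_cache_create("example_inode",
						 sizeof(struct example_inode),
						 0, SLAB_RECLAIM_ACCOUNT, NULL);
	return example_inode_cachep ? 0 : -ENOMEM;
}

/*
 * A short-lived, fixed-size xattr name buffer, by contrast, is freed
 * before the function returns, so plain kmalloc/kfree (or the stack,
 * if the buffer is small enough) does not fragment memory over time.
 */
static int example_use_xattr_buffer(size_t len)
{
	char *buf = kmalloc(len, GFP_KERNEL);

	if (!buf)
		return -ENOMEM;
	/* ... build and use the xattr name ... */
	kfree(buf);
	return 0;
}
```

(Kernel code, so this is only a sketch of the API shape, not something
to run standalone.)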
I have only been doing a few hours of reading up on this; Anton's
messages are fairly informative, if on the brief side - one can just
google the rest of it.

Hin-Tak

--
To unsubscribe from this list: send the line "unsubscribe linux-fsdevel"
in the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html