Hi Vyacheslav,

On 10 Apr 2014, at 08:04, Vyacheslav Dubeyko <slava@xxxxxxxxxxx> wrote:
> On Wed, 2014-04-09 at 09:58 +0100, Anton Altaparmakov wrote:
>>> If you think that nobody uses non-English extended attributes then it
>>> doesn't make sense to make any fixes in the xattr subsystem of HFS+.
>> 
>> It doesn't matter what anyone thinks. It matters to do what is right. And right is to allow anything that would work on OSX.
> 
> Yes, right things should be right. :)
> 
> I don't understand where you've found differences between my sentence and
> your statement "to allow anything that would work on OSX". So, I see only
> a useless objection here. Sorry.

From what I understood your sentence to mean (and I apologize if I misunderstood you), you said that "unless you have an example or knowledge of someone using non-English extended attributes, there is no need to fix this issue on HFS+". And I think that is very wrong. Just because we don't have an example is not a good reason not to fix something. If it works on OSX, which it does, then it should work on Linux, too, no matter whether anyone can provide an example of someone using it. It should be fixed simply because it is wrong, and that is obviously the case.

The Linux kernel is not a business where the mentality of "if there is no customer bug report, stop spending company time on fixing it" is commonplace. Here we strive to always be correct. And what you said (from what I understood you to mean, anyway) was exactly that business mentality of "if no example == no customer, then don't fix it"...

>>> Because getxattr() and setxattr() series of methods will be the primary
>>> source of error in the case of worst-case unicode to char conversion.
>>> 
>>> Otherwise, you can use any splitting of fixes between patches. In my
>>> last e-mail I suggested to use a kmem cache approach for xattr names
>>> instead of kmalloc only.
>> 
>> Yes, but I don't think you have given a good reason why it would be worth creating a kmem cache for such temporary buffers, and IMO it makes no sense to use one - kmalloc and friends are a better choice in this use case. Please remember that kmalloc is effectively using a kmem cache - just a set of generic ones (depending on the size of the allocation - just look at /proc/slabinfo some time) instead of a specialised one, so there is no benefit in using a dedicated kmem cache unless you are going to have lots of those buffers around, which you are not, as they only live for the duration of the get/set/list function call.
> 
> I see a similarity between xattr name buffers and inode buffers. So, I've
> suggested to use a kmem cache because of that similarity. As far as I can
> judge, you mean that xattr name buffers live for a short time and, as a
> result, kmalloc is the much better choice. Do you mean this? Am I
> correct? Otherwise, it would make sense to use simple kmalloc for inode
> buffers, too.

There is no similarity between inode buffers and xattr name buffers other than the fact that they are both buffers! An xattr name buffer is currently allocated on the stack, so the buffer only exists for the duration of the function call and is then thrown away. Thus, at any one point in time there will usually be zero such buffers in memory, and even if several processes simultaneously decide to do xattr related operations, you would still have at most "number of processes" such buffers allocated in memory at any one time.
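To make this concrete, the whole change amounts to something like the below (only a sketch to show the shape of it - the function name, the buffer length constant and the placeholder body are made up for illustration, they are not the real hfsplus identifiers):

	#include <linux/slab.h>		/* kmalloc(), kfree() */
	#include <linux/dcache.h>	/* struct dentry */

	/* Made-up length for illustration - the real code would derive this
	 * from the maximum on-disk attribute name length. */
	#define XATTR_NAME_BUF_LEN	1024

	static int example_getxattr(struct dentry *dentry, const char *name,
				    void *value, size_t size)
	{
		char *xattr_name;
		int err;

		/*
		 * The buffer only lives for the duration of this one call,
		 * so the generic kmalloc caches are exactly the right tool.
		 */
		xattr_name = kmalloc(XATTR_NAME_BUF_LEN, GFP_KERNEL);
		if (!xattr_name)
			return -ENOMEM;

		/* ... convert @name to its on-disk form in xattr_name and
		 * do the actual attribute lookup here ... */
		err = 0;

		kfree(xattr_name);	/* and the buffer is gone again */
		return err;
	}

That is all there is to it - the allocation comes straight out of the appropriate generic kmalloc cache and goes straight back a moment later.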
Inode buffers, on the other hand, are very long lived; on any typical system there are tens of thousands of them in memory at any one time, and they stay in memory for a long time - some even for the entire time the kernel is running. On a busy server the number of inode buffers will go into the millions, all in memory at the same time.

Just the overhead of allocating and maintaining a kmem cache for those almost-never-used xattr name buffers will be much larger, in terms of both memory and CPU use, than the buffers you would allocate from it. Thus it makes no sense to use a dedicated kmem cache - plain kmalloc will use the appropriate, generic kmalloc kmem cache. How can you possibly be comparing these two things and saying they are similar?!? You are comparing apples to oranges...

From a completely idle, small server using the ext3 file system for / (which is 56GiB in size), looking at /proc/slabinfo we can see the ext3 inode kmem cache and the generic 1024-byte kmem cache used by kmalloc:

# name            <active_objs> <num_objs> <objsize> <objperslab> <pagesperslab> : tunables <limit> <batchcount> <sharedfactor> : slabdata <active_slabs> <num_slabs> <sharedavail>
ext3_inode_cache          15105      15105       816            5              1 : tunables      54           27              8 : slabdata           3021       3021
size-1024                   907        968      1024            4              1 : tunables      54           27              8 : slabdata            242        242             0

So there are 15105 cached "struct inode" for the root file system, and at present there are 907 1024-byte kmalloc() buffers in memory. Allocating a handful - or even tens or hundreds - more 1024-byte kmalloc() buffers is hardly going to make any difference to anything, and is exactly what the size-1024 kmem cache is there for. The amount of wasted RAM is absolutely minimal because the number of xattr name buffers is so small.

I think you really do not understand how memory allocators work... )-:

Best regards,

	Anton

> Thanks,
> Vyacheslav Dubeyko.

-- 
Anton Altaparmakov <aia21 at cam.ac.uk> (replace at with @)
Unix Support, Computing Service, University of Cambridge
J.J. Thomson Avenue, Cambridge, CB3 0RB, UK
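P.S. For comparison, here is roughly what the dedicated kmem cache approach you suggested would involve (again only a sketch with made-up names, reusing XATTR_NAME_BUF_LEN from the sketch above):

	#include <linux/init.h>
	#include <linux/module.h>
	#include <linux/slab.h>

	/* Extra global state that has to live for the lifetime of the module. */
	static struct kmem_cache *xattr_name_cachep;

	static int __init example_init(void)
	{
		/* One more cache to create, show in /proc/slabinfo and
		 * tear down again on unload... */
		xattr_name_cachep = kmem_cache_create("xattr_name_cache",
				XATTR_NAME_BUF_LEN, 0, SLAB_HWCACHE_ALIGN, NULL);
		if (!xattr_name_cachep)
			return -ENOMEM;
		return 0;
	}

	static void __exit example_exit(void)
	{
		kmem_cache_destroy(xattr_name_cachep);
	}

All of that, plus a kmem_cache_alloc()/kmem_cache_free() pair in the xattr code itself, buys you nothing over plain kmalloc()/kfree() for buffers that exist for microseconds at a time.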