Hi Christoph,

On Wed, 2016-06-01 at 00:33 -0700, Christoph Hellwig wrote:
> Hi Krzysztof,
>
> > I can check but here are my remarks:
> > the dip2vip_cpy() does partial byte-swap from file system endian to cpu
> > endian, so the union uses fs native endianness and this raises the
> > difficulty of using struct vxfs_inode_info to yet another level.
> > Is this a good practice ? (apart from a benefit like some minor speed
> > gain)
>
> It seems easier this way around. If you think byte swapping makes
> life easier for you I'm happy to take an incremental patch. However
> that means we'll need separate structure definitions for the union
> members.

Yes, you are right. The bitwise attribute is a convenient endianness annotation for automated checks. I did not know about this feature and it looked like excessive typing overhead to me, but I must admit it has its benefits. Without it I wouldn't dare to keep a mixed-endian structure; I would just put a //note somewhere to remind myself that the structure needs special handling with regard to endianness, and whether it has already been byte-swapped or is just a mapped piece of memory from the backing device. Would I do that for every member of the structure? Of course not, there are too many to remember.

> > Are there any real benefits from marking anything "bitwise" except that
> > it is just another type ?
>
> bitwise is an annotation for sparse so that you can't directly assign
> to the variable, and need to use accessors instead. In this case it's
> used so that we are forced to use the byte swapping helpers and get a
> warning from sparse if that's not done properly.

Indeed, I realized this.

> > because for some strange reason the patch uses "PAGE_SHIFT" while
> > original lookup.c from 3.16 or 3.18 kernels uses PAGE_CACHE_SHIFT but
> > the 4.6 kernel. so the patch below does not apply cleanly to source from
> > 3.x kernels. I use 3.16, double checked latest 3.18.
>
> Yes, PAGE_CACHE_SIZE, PAGE_CACHE_SHIFT and co were removed in kernel
> 4.6, as they have always been identical to the versions without _CACHE.

I see. Sorry, I reworked the remaining patches against 3.16, so you will hit the same issue applying them again. I will post them shortly. The good news is that the regression tests pass. The patchset will start from the 2nd patch; I think there is no need to re-post the patch you created.

> > Anyway because readdir() is not aware of fs endianness (because the 0004
> > patch is not in there) the hp-ux tests will very likely fail with some
> > general protection fault.
>
> I think all readdir structures are fixed up now, please check.

Right.

> > I also can't see patches which fix these obvious bugs like freeing
> > kmem_cache_alloc() objects with kfree.
> > Even worse is releasing inode->i_private from the "evict_inode"
> > callback.
> > This leads to dangling objects in vxfs's kmem_cache when the fs is
> > unmounted. Destroying the cache on module unload causes kmem_cache_destroy
> > to complain about it, and after a few tries ((module load, mount, some
> > ops on mountpoint, unmount, unload) x few times) a hard lockup is
> > inevitable.
>
> Yeah, this was just getting started. I've spent some more time on the
> whole inode issues and have implemented this a bit different from what
> you did, although the end result is very similar. Can you take a look
> at the tree here:
>
> http://git.infradead.org/users/hch/vfs.git/shortlog/refs/heads/freevxfs
>
> > Do you think other patches should be updated with regard to the patch
> > you sent ?

Okay ..
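For the kmem_cache problem above, the fix I had in my queue was roughly the following. This is just a sketch: it assumes struct vxfs_inode_info grows an embedded struct inode (vfs_inode) instead of hanging off i_private, and the cache and callback names are whatever I used locally, so they may well not match your branch:

#include <linux/fs.h>
#include <linux/slab.h>

/* assumed name; created in module init with kmem_cache_create() */
static struct kmem_cache *vxfs_inode_cachep;

static struct inode *vxfs_alloc_inode(struct super_block *sb)
{
	struct vxfs_inode_info *vip;

	vip = kmem_cache_alloc(vxfs_inode_cachep, GFP_KERNEL);
	if (!vip)
		return NULL;
	return &vip->vfs_inode;	/* assumes an embedded vfs_inode member */
}

static void vxfs_i_callback(struct rcu_head *head)
{
	struct inode *inode = container_of(head, struct inode, i_rcu);

	/* give the object back to the cache it came from, not kfree() */
	kmem_cache_free(vxfs_inode_cachep,
			container_of(inode, struct vxfs_inode_info, vfs_inode));
}

static void vxfs_destroy_inode(struct inode *inode)
{
	call_rcu(&inode->i_rcu, vxfs_i_callback);
}

static const struct super_operations vxfs_super_ops = {
	.alloc_inode	= vxfs_alloc_inode,
	.destroy_inode	= vxfs_destroy_inode,
	/* .evict_inode no longer touches i_private */
};

With that in place kmem_cache_destroy() at module unload has nothing left to complain about.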
Once again we did the same thing in parallel without a mutex ... I will take a look at your solution. We will need to sort out which one is better and/or looks better. I am afraid I may lose, because I don't care about tabs & spaces.

> Please take a look at the branch above. The only major thing that
> should be missing is your directory code refactoring. Thanks.

Yes, the old readdir has a bug. This time my change logs are more verbose.

--
Krzysztof Blaszkowski