On Tue, Sep 22, 2015 at 04:24:50PM +0100, David Howells wrote:
> 
>  (4) fs/open.c: Length check in ftruncate().
> 
>  (5) fs/open.c: Length check in generic_file_open().
> 
> All but the first two are just making length checks that are waived
> unconditionally on a 64-bit system.  Just skip the length checks,
> assuming that O_LARGEFILE is actually set.

So what this means is that on 32-bit systems, if we have a userspace
program which isn't largefile-enabled, and it opens a file which is
larger than can be addressed with a 32-bit off_t, it can get surprised
and possibly cause data loss.

Is this something we are willing to live with?  After all, there was
originally a really good reason for the O_LARGEFILE flag in the first
place, and it was primarily about making sure that a non-LARGEFILE
capable program would hard fail on the open, instead of after it had
trashed the user's data.

Granted that 32-bit systems are rarer these days, and hopefully this
isn't a situation that would come up that often in embedded systems,
but if breaking this functionality is something that we are deliberately
going to be doing, we should discuss it explicitly, and document the
decision in the commit message.

Was there a reason that motivated this change, other than just a cleanup?

					- Ted
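
P.S.  For anyone who wants to see the hard failure I have in mind, here
is a minimal userspace sketch (not part of David's patch; the file path
is made up).  Built as a 32-bit binary without -D_FILE_OFFSET_BITS=64,
glibc's open() does not pass O_LARGEFILE, so on a filesystem that goes
through generic_file_open() the open of a file bigger than 2 GiB should
fail with EOVERFLOW instead of letting the program run off the end of
what its off_t can address:

/*
 * Hypothetical demonstration, not taken from the patch: open a large
 * file without O_LARGEFILE and report the error.  Compile as a 32-bit
 * binary *without* -D_FILE_OFFSET_BITS=64 so that glibc's open() does
 * not add O_LARGEFILE on our behalf.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	/* Path is illustrative; point it at a file larger than 2 GiB. */
	const char *path = argc > 1 ? argv[1] : "/tmp/bigfile";
	int fd = open(path, O_RDONLY);	/* note: no O_LARGEFILE */

	if (fd < 0) {
		/* Expect EOVERFLOW here on a 32-bit system. */
		fprintf(stderr, "open(%s) failed: %s\n", path, strerror(errno));
		return 1;
	}

	printf("open(%s) succeeded\n", path);
	close(fd);
	return 0;
}

On a 64-bit kernel the check is already a no-op, which is what makes it
a silent behavior change for 32-bit only.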