Amir Goldstein writes:
> On Fri, Dec 27, 2019 at 4:30 PM Chris Down <chris@xxxxxxxxxxxxxx> wrote:
> > The new inode64 option now uses get_next_ino_full, which always uses the
> > full width of ino_t (as opposed to get_next_ino, which always uses
> > unsigned int).
> > Using inode64 makes inode number wraparound significantly less likely,
> > at the cost of breaking features that rely on the underlying
> > filesystem leaving the highest 32 bits of the inode number clear
> > (e.g. overlayfs' xino).
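
For context, a minimal sketch of the contrast. get_next_ino_full here is
only my assumption of the proposed function's shape, and the real
get_next_ino batches allocations per-cpu, which is omitted:

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Sketch only: mainline get_next_ino() hands out numbers from a global
 * 32-bit pool (the real function batches allocations per-cpu, omitted
 * here), so values wrap after 2^32 allocations however wide ino_t is.
 */
unsigned int get_next_ino_sketch(void)
{
	static atomic_t last_ino;

	return (unsigned int)atomic_inc_return(&last_ino);
}

/*
 * A full-width variant along the lines of the proposed
 * get_next_ino_full would draw from a 64-bit pool instead, so the
 * result can occupy every bit of a 64-bit ino_t, including the high 32.
 */
ino_t get_next_ino_full_sketch(void)
{
	static atomic64_t last_ino_full;

	return (ino_t)atomic64_inc_return(&last_ino_full);
}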
> That's not an accurate statement. overlayfs xino just needs some high
> bits available. Therefore I never had any objection to having tmpfs use
> 64-bit ino values (from overlayfs' perspective). My only objection is to
> using the same pool "irresponsibly" instead of a per-sb pool for the
> heavy users.
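
To illustrate the point about high bits, a hypothetical sketch (the
names are mine for illustration, not overlayfs code) of xino-style
multiplexing:

#include <linux/types.h>

/*
 * Hypothetical illustration: xino multiplexes a layer/fs id into the
 * top "xinobits" bits of the 64-bit inode number it reports. It only
 * needs those top bits to be clear in the underlying inode number,
 * not the entire upper half. Assumes 0 < xinobits < 64.
 */
static u64 xino_encode(u64 real_ino, unsigned int fsid, unsigned int xinobits)
{
	/* Cannot multiplex if the real ino already uses the top bits. */
	if (real_ino >> (64 - xinobits))
		return 0;

	return real_ino | ((u64)fsid << (64 - xinobits));
}

With, say, xinobits = 8, the underlying filesystem could still use up to
56 bits of inode number and compose with xino.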
Per-sb get_next_ino is fine, but seems less important if inode64 is used. Or is
your point about people who would still be using inode32?
I think things have become quite unclear in previous discussions, so I want to
make sure we're all on the same page here. Are you saying you would
theoretically ack the following series?
1. Recycle volatile slabs in tmpfs/hugetlbfs
2. Make get_next_ino per-sb (a sketch of the idea follows this list)
3. Add get_next_ino_full (which is also per-sb)
4. Add inode{32,64} to tmpfs
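
As referenced in item 2, here is a minimal sketch of the per-sb pool
idea, with hypothetical names rather than the actual patches:

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Minimal sketch: each heavy user, e.g. tmpfs, keeps its own counter
 * in its sb-private info, so one busy mount cannot burn through the
 * global pool and hasten wraparound for everyone else.
 */
struct shmem_sb_info_sketch {
	atomic64_t next_ino;	/* this mount's private pool */
};

static ino_t shmem_next_ino_sketch(struct shmem_sb_info_sketch *sbinfo)
{
	/* Wraparound, if it ever happens, is confined to this sb. */
	return (ino_t)atomic64_inc_return(&sbinfo->next_ino);
}

Presumably inode{32,64} (item 4) would then select per mount whether
such a pool is truncated to 32 bits or uses the full width.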
To keep this thread as high-signal as possible, I'll avoid sending any other
patches until I hear back on that :-)
Thanks again,
Chris