On Tue, Dec 01, 2015 at 08:39:06AM -0500, Glauber Costa wrote:
> > The truncate will free blocks and require block allocation on
> > subsequent writes. That might be something you could look into
> > trying to avoid (e.g., keeping files around and reusing space), but
> > that depends on your application design.
> 
> This one is a bit hard. We have a journal-like structure for the
> modifications issued to the data store, which dominates most of our
> write workloads (including the one I am discussing here). We could
> keep the files around by renaming them out of user visibility and
> then renaming them back, but that would mean we are now using twice
> as much space. Perhaps we could use a pool that can at least
> guarantee one or two allocations from a pre-existing file. I am
> assuming here that renaming the file won't block. If it does, we are
> better off not doing so.
> 
> > Inode chunks are allocated and freed dynamically by default as
> > well. The 'ikeep' mount option keeps inode chunks around
> > indefinitely (even if the individual inodes are all freed) if you
> > want to avoid inode chunk reallocation and know you have a fairly
> > stable working set of inodes.
> 
> I believe we do have a fairly stable inode working set, even though
> that depends a bit on what's considered stable. For our journal-like
> structure, we will keep the files around until we are sure the
> information is safe and then delete them, creating new ones as we
> receive more data. But that's always bounded in size.
> 
> Am I correct to understand that, with ikeep passed, new allocations
> would just reuse space from the empty chunks on disk?
> 

Yes... current behavior is that inodes are allocated and freed in
chunks of 64. When an entire chunk of inodes is freed from the
namespace, the chunk is freed (i.e., it is now free space). With
ikeep, inode chunks are never freed. When an individual inode
allocation request is made, the inode is allocated from one of the
existing inode chunks before a new chunk is allocated.

The tradeoff is that you could consume a significant amount of space
with inodes, free a bunch of them, and that space is never freed back.
So that is something to be aware of for your use case, particularly if
the fs has uses other than the journaling mechanism described above,
because the option affects the entire fs.

> > Per-inode extent size hints might be another option to increase the
> > size of allocations and perhaps reduce the number of them.
> 
> That's absolutely fantastic. Our files for that journal are all more
> or less the same size. That's a great candidate for a hint.
> 

You could consider preallocation (fallocate()) as well if you know the
full size in advance.

Brian

> > Brian
> 
> Thanks again, Brian
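
A few minimal sketches of the ideas discussed above follow. First, the
"keep files around and reuse space" point: rather than unlinking a
retired journal segment (which frees its blocks and forces allocation
again on the next segment), it could be renamed into a small recycle
pool and pulled back out, again via rename, when a new segment is
needed, so its blocks are reused instead of freed and reallocated.
This is only a sketch under the thread's assumptions; all paths are
hypothetical, and it carries the cost Glauber notes above of keeping
that space allocated.

/*
 * Sketch: recycle retired journal segments instead of unlinking them.
 * All paths are made-up examples.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>

/* Retire a segment: move it into the pool instead of unlinking it. */
int recycle_segment(const char *seg_path, const char *pool_path)
{
        return rename(seg_path, pool_path);
}

/* Create a segment: prefer a pooled file; fall back to a fresh one. */
int open_segment(const char *pool_path, const char *seg_path)
{
        /* If a pooled file exists, its blocks are reused as-is (no
         * O_TRUNC); if the pool slot is empty, create a new file. */
        if (rename(pool_path, seg_path) == 0 || errno == ENOENT)
                return open(seg_path, O_CREAT | O_WRONLY, 0600);
        return -1;
}

int main(void)
{
        /* Hypothetical paths for one pool slot and the active segment. */
        const char *pool = "/data/journal/.recycle/slot-0";
        const char *seg  = "/data/journal/segment-000003";

        int fd = open_segment(pool, seg);
        if (fd < 0)
                perror("open_segment");
        /* ... write the segment, then later: recycle_segment(seg, pool); */
        return fd < 0;
}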
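
For the ikeep discussion, the option is applied at mount time. A
minimal sketch using mount(2) directly is below, though in practice it
would normally just be "-o ikeep" in /etc/fstab or on the mount
command line. The device and mount point are made-up examples.

/*
 * Sketch: mount an XFS filesystem with the 'ikeep' option so freed
 * inode chunks are retained.  Device and mount point are hypothetical.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        if (mount("/dev/vdb1", "/srv/journal", "xfs", 0, "ikeep") < 0) {
                perror("mount");
                return 1;
        }
        return 0;
}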
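
For the per-inode extent size hint suggestion, a minimal sketch using
the fsxattr ioctl interface (the same mechanism behind xfs_io's
'extsize' command) is below. The path and the 1 MiB hint are made-up
examples; older headers spell the constants as the XFS_IOC_*/
XFS_XFLAG_* equivalents from <xfs/xfs_fs.h>. The hint has to be set
before the file has any extents allocated, and setting
FS_XFLAG_EXTSZINHERIT on the parent directory instead is an option if
every file created in it should inherit the same hint.

/*
 * Sketch: set a per-inode extent size hint on a newly created journal
 * file, equivalent to "xfs_io -c 'extsize 1m' <file>".  Path and hint
 * size are made-up examples.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* struct fsxattr, FS_IOC_FS*XATTR, FS_XFLAG_EXTSIZE */

int main(void)
{
        const char *path = "/data/journal/segment-000001";  /* hypothetical */
        int fd = open(path, O_CREAT | O_WRONLY, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        struct fsxattr fsx;
        if (ioctl(fd, FS_IOC_FSGETXATTR, &fsx) < 0) {
                perror("FS_IOC_FSGETXATTR");
                return 1;
        }

        fsx.fsx_xflags |= FS_XFLAG_EXTSIZE;  /* enable the hint */
        fsx.fsx_extsize = 1024 * 1024;       /* assumed 1 MiB granularity */

        if (ioctl(fd, FS_IOC_FSSETXATTR, &fsx) < 0) {
                perror("FS_IOC_FSSETXATTR");
                return 1;
        }
        close(fd);
        return 0;
}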
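
And for the preallocation suggestion, a minimal fallocate() sketch,
again with a hypothetical path and an assumed 64 MiB segment size.
posix_fallocate() is the portable equivalent; on XFS the blocks are
reserved as unwritten extents, so subsequent writes convert existing
extents rather than allocate new blocks.

/*
 * Sketch: preallocate the full (known) size of a journal segment at
 * creation time so later appends don't trigger block allocation.
 * Path and size are made-up examples.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
        const char *path = "/data/journal/segment-000002";  /* hypothetical */
        off_t segment_size = 64 * 1024 * 1024;               /* assumed 64 MiB */

        int fd = open(path, O_CREAT | O_WRONLY, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* mode 0: reserve blocks and extend the file size to segment_size */
        if (fallocate(fd, 0, 0, segment_size) < 0) {
                perror("fallocate");
                return 1;
        }
        close(fd);
        return 0;
}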