On 8/26/15 12:48 PM, Shrinand Javadekar wrote:
> Please see my responses inline. I am seeing this behavior again.
>
> On Tue, Aug 25, 2015 at 4:43 PM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
>> On Tue, Aug 25, 2015 at 04:09:33PM -0700, Shrinand Javadekar wrote:
>>> I did this on 2 different setups.
>>
>> Details?
>
> [Shri] On hardware box 1:
>
> 1. # of disks: 23
> 2. Type: Rotational disks
> 3. Ran mkfs.xfs and mounted disks
> 4. Installed Swift
> 5. Ran benchmark

Details of "the benchmark?" (buffered or direct? IO sizes, file layout, etc?)

> 6. Stopped Swift
> 7. Unmounted disks
> 8. mkfs.xfs -f on all 23 disks
> 9. Mounted disks
> 10. Installed Swift
> 11. Ran benchmark

<snip>

>>>> What version of xfsprogs are you using?
>>>
>>> # xfs_repair -V
>>> xfs_repair version 3.1.9
>>
>> That's pretty old.
>
> [Shri] We're using xfsprogs version 3.1.9 whereas the kernel is a newer
> one: 3.16.0-38-generic. Does that matter?
> For e.g. one of my colleagues found that formatting with crc
> enabled is only available in a newer version of xfsprogs.

It's fine to use xfsprogs 3.1.9 with kernel 3.16. (In fact nothing is
going to be problematic, other than possibly running into unknown
features if one is too far out of sync with the other. In that case,
you'd just get a hard stop on the unknown feature, not cryptic
behavior...)

>>
>>>> What was the output of mkfs.xfs each time; did the geometry differ?
>>>
>>> I have the output of xfs_info /mount/point from the first experiment
>>> and that of mkfs.xfs -f. One difference I see is that reformatting
>>> adds projid32bit=0 for the inode section.
>>
>> xfs_info didn't get projid32bit status output until 3.2.0.
>>
>> Anyway, please post the output so we can see the differences for
>> ourselves. What we need is mkfs output in both cases, and xfs_info
>> output in both cases after mount.
>
> Step 1: mkfs.xfs

<snip>

Ok, the mkfs output & xfs_info output is identical with and without -f
(as they should be).
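To make that with/without `-f` comparison mechanical rather than eyeballed, one approach is to capture each run's mkfs.xfs and xfs_info output to a file and diff the files afterwards. A dry-run sketch, assuming a hypothetical device `/dev/sdX` and mount point `/mnt/test` (not from this thread); it only echoes the commands, so drop the `echo`s to execute for real:

```shell
#!/bin/sh
# Dry-run sketch: print the capture commands for one formatting pass.
# /dev/sdX and /mnt/test are placeholders; substitute your own.
capture() {    # capture <tag>: record mkfs + xfs_info output for later diffing
    echo "mkfs.xfs -f /dev/sdX > /tmp/mkfs-$1.txt"
    echo "mount /dev/sdX /mnt/test"
    echo "xfs_info /mnt/test > /tmp/xfsinfo-$1.txt"
    echo "umount /mnt/test"
}

capture run1      # first format; run the benchmark in between
capture run2      # reformat with -f
echo "diff /tmp/mkfs-run1.txt /tmp/mkfs-run2.txt"
echo "diff /tmp/xfsinfo-run1.txt /tmp/xfsinfo-run2.txt"
```

Empty diffs would confirm the geometry really is identical across runs.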
What is your storage, i.e. what's behind
/dev/mapper/35000c50062e6a567-part2? Is it thinly provisioned, or
anything else interesting like that? (Thin provisioning probably
shouldn't matter, because in theory we discard the whole device on
mkfs anyway.) But is it possible that the first benchmark primed the
storage in some way?

To that end, what does:

1) mkfs.xfs, benchmark
2) benchmark

show? Is the 2nd one faster as well?

Or, possibly:

1) mkfs.xfs, benchmark
2) mkfs.xfs -f, benchmark
3) wipefs, mkfs.xfs, benchmark

That would leave old xfs superblocks in place for the 3rd test, and
not wiped by mkfs itself, but I can't imagine why that would matter.
(mkfs should reinitialize them anyway; I think the call to
zero_old_xfs_structures() is just so that an xfs_repair search for
backups won't find old unrelated signatures from a prior different
geometry...)

Right now I'm actually wondering more about your storage, I guess.

-Eric

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs
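The comparison passes proposed above can be sketched as a small script. This is a dry run under assumptions: `/dev/sdX` is a placeholder device, "run benchmark" stands in for whatever benchmark command is in use, and each command is echoed rather than executed (remove the echo loop to run for real; note wipefs needs `-a` to actually erase signatures):

```shell
#!/bin/sh
# Dry-run sketch of the three comparison passes. /dev/sdX is a
# hypothetical device; "run benchmark" is a stand-in for the real command.
DEV=/dev/sdX

pass() {                  # pass <label> <cmd>...: print one test pass
    echo "== pass $1 =="
    shift
    for c in "$@"; do
        echo "+ $c"       # printed, not executed
    done
}

pass "1: fresh mkfs"   "mkfs.xfs $DEV"    "run benchmark"
pass "2: mkfs -f"      "mkfs.xfs -f $DEV" "run benchmark"
pass "3: wipefs first" "wipefs -a $DEV"   "mkfs.xfs $DEV" "run benchmark"
```

If pass 2 is faster than pass 1 but pass 3 matches pass 1, that would point at leftover on-disk state; if passes 2 and 3 are both faster, the storage itself (caching, thin-provision allocation, etc.) is the more likely suspect.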