On Wednesday, April 06, 2011 01:16:19 PM Warren Young wrote:
> I expect they added some checks for this since you last tried XFS on
> 32-bit.
>
> Perhaps it wasn't clear from what I wrote, but the big partition on this
> system is actually 15.9mumble TB, just to be sure we don't even get 1
> byte over the limit. The remaining 1/3 TB is currently unused.

I didn't get there in one step; perhaps that's the difference. What you say in the last paragraph will prevent the effect I saw. Just hope you never need to run xfs_repair.

No, it wasn't completely clear to me from what you wrote that you were staying below 16TB.

Now, I didn't mkfs a 16.xTB disk initially; I got there in steps with LVM, lvextend, and xfs_growfs. The filesystem started at ~4TB on two ~2TB LUNs/PVs; VMware is limited to 2TB LUNs, so I added storage as needed in ~2TB chunks (actually 2,000GB chunks; pvscan reports these as 1.95TB, with some at 1.92TB for RAID group setup reasons). The 1.32TB and 1.37TB LUNs are there because of the way the RAID groups on the Clariion CX3-10c behind this are set up.

So after a while of doing this I had a hair over 14TB, and an xfs_growfs from 14TB to a hair over 16TB didn't complain. But when the data hit 16TB, the filesystem quit mounting. So I migrated to a C5 x86_64 VM, and things started working again. I've added one more 1.95TB PV to the VG since then.
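For anyone curious about the mechanics, each ~2TB growth step looked roughly like this. This is only a sketch, not my exact command history: /dev/sdX is a hypothetical device name for the newly presented LUN, and the VG/LV names are the real ones from my setup.

```shell
# Partition the new ~2TB LUN (hypothetical device /dev/sdX) and tag it for LVM
parted -s /dev/sdX mklabel gpt mkpart primary 0% 100%
pvcreate /dev/sdX1                            # initialize the partition as an LVM PV
vgextend pachy-mirror /dev/sdX1               # add the new PV to the volume group
lvextend -l +100%FREE /dev/pachy-mirror/home  # grow the LV into the new space
xfs_growfs /home                              # grow XFS online; it must be mounted
```

Note that xfs_growfs operates on the mounted filesystem, so no downtime is needed for each step; it was only the 16TB point itself that bit me.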
Current setup:

PV /dev/sdd1    VG pachy-mirror   lvm2 [1.92 TB / 0 free]
PV /dev/sdg1    VG pachy-mirror   lvm2 [1.92 TB / 0 free]
PV /dev/sde1    VG pachy-mirror   lvm2 [1.95 TB / 0 free]
PV /dev/sdu1    VG pachy-mirror   lvm2 [1.95 TB / 0 free]
PV /dev/sdl1    VG pachy-mirror   lvm2 [1.37 TB / 0 free]
PV /dev/sdm1    VG pachy-mirror   lvm2 [1.32 TB / 0 free]
PV /dev/sdx1    VG pachy-mirror   lvm2 [1.95 TB / 0 free]
PV /dev/sdz1    VG pachy-mirror   lvm2 [1.95 TB / 0 free]
PV /dev/sdab1   VG pachy-mirror   lvm2 [1.95 TB / 0 free]
PV /dev/sdt1    VG pachy-mirror   lvm2 [1.95 TB / 0 free]

ACTIVE   '/dev/pachy-mirror/home' [18.24 TB] inherit

The growth was over a period of two years, incidentally.

There are other issues with XFS and 32-bit; see:
http://bugs.centos.org/view.php?id=3364
and
http://www.mail-archive.com/scientific-linux-users@xxxxxxxxxxxxxxxxx/msg05347.html
and google for 'XFS 32-bit 4K stacks' for more of the gory details.

_______________________________________________
CentOS mailing list
CentOS@xxxxxxxxxx
http://lists.centos.org/mailman/listinfo/centos
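P.S. As I understand it, the 16TB wall isn't arbitrary: a 32-bit kernel indexes the page cache with a 32-bit page number, and x86 pages are 4KiB, so the largest device it can address through the page cache is 2^32 pages. A quick sanity check of the arithmetic:

```shell
# 32-bit page-cache index * 4 KiB pages = the 16 TiB ceiling on 32-bit kernels
pages=$(( 1 << 32 ))             # 2^32 distinct page indexes
page_size=4096                   # 4 KiB x86 page size
limit=$(( pages * page_size ))   # bytes addressable through the page cache
echo "$(( limit / 1024 / 1024 / 1024 / 1024 )) TiB"   # prints: 16 TiB
```

Which is exactly where my filesystem stopped mounting, and why the x86_64 VM fixed it.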