Hi,

Thanks for your kind reply, but I don't know how to reproduce the errors; they happen
randomly. I tried to reproduce it on a VM, but nothing helped, since I can't create a
RAID array on a VM. :) Still, I'm fairly sure some miscalculation, or something else,
is producing the 4k sector size on the RAID-10 array.

Anyway, I found something interesting: it happens on my old development server too,
but I don't run the application there the way I do on the new box, so it's not a
problem there.

$ xfs_info /database/mysql
meta-data=/dev/mapper/vg_agnirudra-lv_database isize=256    agcount=32, agsize=7629824 blks
         =                       sectsz=4096  attr=2, projid32bit=0
data     =                       bsize=4096   blocks=244154368, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=119216, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

It's a software RAID-1 array. Here is the partition table:

NAME                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sdc                                     8:32   0 931.5G  0 disk
└─sdc1                                  8:33   0 931.5G  0 part
  └─md0                                 9:0    0 931.4G  0 raid1
    └─vg_agnirudra-lv_database (dm-2) 253:2    0 931.4G  0 lvm   /database
sda                                     8:0    0 279.5G  0 disk
├─sda1                                  8:1    0   500M  0 part  /boot
└─sda2                                  8:2    0   279G  0 part
  ├─vg_os-lv_root (dm-0)              253:0    0   271G  0 lvm   /
  └─vg_os-lv_swap (dm-1)              253:1    0     8G  0 lvm   [SWAP]
sdb                                     8:16   0 931.5G  0 disk
└─sdb1                                  8:17   0 931.5G  0 part
  └─md0                                 9:0    0 931.4G  0 raid1
    └─vg_agnirudra-lv_database (dm-2) 253:2    0 931.4G  0 lvm   /database
sr0                                    11:0    1  1024M  0 rom

$ uname -a
Linux agnirudra.xxx 2.6.32-431.29.2.el6.x86_64 #1 SMP Tue Sep 9 21:36:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/redhat-release
CentOS release 6.5 (Final)

$ yum history info 1 | grep xfsprogs | fpaste
Uploading (0.2KiB)...
http://ur1.ca/jiikv -> http://paste.fedoraproject.org/173636/14220298

This box was installed from CentOS 6.4, with the stock 6.4 xfsprogs, as you mentioned before.

On 01/23/2015 10:49 PM, Eric Sandeen wrote:
> On 1/23/15 9:40 AM, Dewangga Bachrul Alam wrote:
>> Hi,
>>
>> I'm sorry, I didn't fill in any information here, but here are my node details.
>>
>> $ uname -a
>> Linux catalyst-db01.jkt3d.xxx 2.6.32-504.3.3.el6.x86_64 #1 SMP Wed Dec
>> 17 01:55:02 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
>>
>> $ cat /etc/redhat-release
>> CentOS release 6.6 (Final)
>>
>> The xfs and the partition table were built by anaconda during the first
>> install; the installation came from CentOS 6.6. But it's weird: only this
>> node has a 4k sector size, the others are 512.
>>
>> catalyst-db01$ yum history info 1 | grep xfsprogs | fpaste
>> Uploading (0.2KiB)...
>> http://ur1.ca/jihyu -> http://paste.fedoraproject.org/173606/27434142
>
> so xfsprogs v3.1.1
>
> This went into v3.1.8:
>
> commit 287d168b550857ce40e04b5f618d7eb91b87022f
> Author: Eric Sandeen <sandeen@xxxxxxxxxxx>
> Date:   Thu Mar 1 22:46:35 2012 -0600
>
>     mkfs.xfs: properly handle physical sector size
>
>     This splits the fs_topology structure "sectorsize" into
>     logical & physical, and gets both via blkid_get_topology().
>
>     This primarily allows us to default to using the physical
>     sectorsize for mkfs's "sector size" value, the fundamental
>     size of any IOs the filesystem will perform.
>
>     We reduce mkfs.xfs's "sector size" to logical if
>     a block size < physical sector size is specified.
>     This is suboptimal, but permissable.
>
>     For block size < sector size, differentiate the error
>     message based on whether the sector size was manually
>     specified, or deduced.
>
>     Signed-off-by: Eric Sandeen <sandeen@xxxxxxxxxx>
>     Reviewed-by: Dave Chinner <dchinner@xxxxxxxxxx>
>
> but was backported to the RHEL6 xfsprogs:
>
> * Tue Sep 25 2012 Eric Sandeen <sandeen@xxxxxxxxxx> 3.1.1-8
> - mkfs.xfs: better handle misaligned 4k devices (#836433)
> - mkfs.xfs: default to physical sectorsize (#836433)
>
> So, not *exactly* a bug, because the assumption that 512-byte
> DIO will always work is not a good one, but the commit I mentioned
> in my first email will let 512-byte DIOs work again.
>
> I'd tell you to file a bug with your RHEL support people, but
> Centos ... ;)  We probably should get that kernel commit into RHEL6
> if possible.  I'm kind of surprised we haven't seen other reports.
>
> But, if you ever wind up with hard 4k/4k drives, your database
> still won't work.  On any filesystem. :)
>
> If you don't mind following up with this information in the other
> forum, that might help others.
>
> -Eric
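
In case it helps with tracking down where the 4096 comes from, here is a rough sketch
of how to check the logical vs. physical sector sizes the kernel reports. The device
names are taken from the lsblk output above; the sysfs paths and blockdev options are
standard on RHEL6-era systems, but treat the exact commands as an illustration, not
something verified on this box:

# Logical vs. physical sector size of the RAID members and the md array
$ cat /sys/block/sdb/queue/logical_block_size /sys/block/sdb/queue/physical_block_size
$ cat /sys/block/sdc/queue/logical_block_size /sys/block/sdc/queue/physical_block_size
$ cat /sys/block/md0/queue/logical_block_size /sys/block/md0/queue/physical_block_size

# The same information via blockdev, for the LV that holds /database
$ blockdev --getss --getpbsz /dev/mapper/vg_agnirudra-lv_database

If these report logical 512 / physical 4096 (a 512e drive), that would match Eric's
explanation: the newer mkfs.xfs simply picked the physical size for sectsz, rather than
MD or LVM miscalculating anything.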
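
And for anyone hitting the same thing with a database that insists on 512-byte direct
I/O: as far as I understand Eric's explanation, on 512e drives (logical 512 / physical
4096) the filesystem can be recreated with an explicit 512-byte sector size via the
standard mkfs.xfs -s option. This is only a sketch of the idea, not something tried on
these boxes, and mkfs.xfs destroys the existing filesystem, so it is only an option
with a full backup or an empty volume:

# Recreate the filesystem with 512-byte sectors. This only works when the
# device's logical sector size is 512; on hard 4k/4k drives mkfs.xfs will
# refuse, and 512-byte DIO won't work there on any filesystem anyway.
$ mkfs.xfs -f -s size=512 /dev/mapper/vg_agnirudra-lv_database

# mkfs.xfs prints the new geometry; sectsz=512 should show up in the
# meta-data and log sections, and xfs_info reports the same after mounting.

The alternative Eric mentions, the kernel commit from his first email, would let
512-byte DIO work on a sectsz=4096 filesystem without reformatting, but it isn't in
the stock RHEL6/CentOS 6 kernel yet.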