Hi, I'm sorry, I didn't fill in any information here, but here are my
node details:

$ uname -a
Linux catalyst-db01.jkt3d.xxx 2.6.32-504.3.3.el6.x86_64 #1 SMP Wed Dec 17 01:55:02 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux

$ cat /etc/redhat-release
CentOS release 6.6 (Final)

The XFS filesystem and partition table were built by anaconda at first
install; the installation came from CentOS 6.6. But it's strange: only
this node has a 4k sector size, the others have 512.

catalyst-db01$ yum history info 1 | grep xfsprogs | fpaste
Uploading (0.2KiB)...
http://ur1.ca/jihyu -> http://paste.fedoraproject.org/173606/27434142

catalyst-db02$ yum history info 1 | grep xfsprogs | fpaste
Uploading (0.2KiB)...
http://ur1.ca/jihzp -> http://paste.fedoraproject.org/173608/27517142

Is it a bug?
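In case it helps, the sector sizes each node's disk reports can be
compared like this (sda below is just an example; substitute the real
backing device of the LV):

$ cat /sys/block/sda/queue/logical_block_size
$ cat /sys/block/sda/queue/physical_block_size
$ blockdev --getss --getpbsz /dev/sda

As far as I understand, mkfs.xfs derives its default sectsz from the
physical value, and sectsz is fixed at mkfs time, so changing it would
mean reformatting (e.g. with mkfs.xfs -s size=512).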
On 01/23/2015 10:29 PM, Eric Sandeen wrote:
> On 1/23/15 7:04 AM, Dewangga Bachrul Alam wrote:
>> Hi,
>>
>> I'm new to XFS. I have a RAID-10 array with 4 disks, and when I check
>> with xfs_info, the information prints like this:
>>
>> $ xfs_info /var/lib/mysql
>> meta-data=/dev/mapper/vg_catalystdb01-lv_database isize=256
>>           agcount=16, agsize=1600000 blks
>>          =                       sectsz=4096  attr=2, projid32bit=0
>> data     =                       bsize=4096   blocks=25600000, imaxpct=25
>>          =                       sunit=0      swidth=0 blks
>> naming   =version 2              bsize=4096   ascii-ci=0
>> log      =internal               bsize=4096   blocks=12500, version=2
>>          =                       sectsz=4096  sunit=1 blks, lazy-count=1
>> realtime =none                   extsz=4096   blocks=0, rtextents=0
>>
>> Is it possible to change the `sectsz` value to 512 without
>> re-formatting? Or any suggestion? I have an issue with the current
>> sector size; my TokuDB engine[1] can't start because of it.
>>
>> [1] https://groups.google.com/forum/#!topic/tokudb-user/kvQFJLCmKwo
>
> You almost certainly need this commit to resolve the issue.
>
> You didn't say here what kernel you were using, but from your other
> post it looks like CentOS 6.4. Are you using the stock xfsprogs from
> CentOS 6.4 as well? I don't remember if it chooses 4k sectors for 4k
> physical block devices or not; unless you have a custom xfsprogs
> version or overrode the sector size at mkfs time, I guess it must.
>
> commit 7c71ee78031c248dca13fc94dea9a4cc217db6cf
> Author: Eric Sandeen <sandeen@xxxxxxxxxxx>
> Date:   Tue Jan 21 16:46:23 2014 -0600
>
>     xfs: allow logical-sector sized O_DIRECT
>
>     Some time ago, mkfs.xfs started picking the storage physical
>     sector size as the default filesystem "sector size" in order
>     to avoid RMW costs incurred by doing IOs at logical sector
>     size alignments.
>
>     However, this means that for a filesystem made with i.e.
>     a 4k sector size on an "advanced format" 4k/512 disk,
>     512-byte direct IOs are no longer allowed. This means
>     that XFS has essentially turned this AF drive into a hard
>     4K device, from the filesystem on up.
>
>     XFS's mkfs-specified "sector size" is really just controlling
>     the minimum size & alignment of filesystem metadata.
>
>     There is no real need to tightly couple XFS's minimal
>     metadata size to the minimum allowed direct IO size;
>     XFS can continue doing metadata in optimal sizes, but
>     still allow smaller DIOs for apps which issue them,
>     for whatever reason.
>
>     This patch adds a new field to the xfs_buftarg, so that
>     we now track 2 sizes:
>
>     1) The metadata sector size, which is the minimum unit and
>        alignment of IO which will be performed by metadata operations.
>     2) The device logical sector size
>
>     The first is used internally by the file system for metadata
>     alignment and IOs.
>     The second is used for the minimum allowed direct IO alignment.
>
>     This has passed xfstests on filesystems made with 4k sectors,
>     including when run under the patch I sent to ignore
>     XFS_IOC_DIOINFO, and issue 512 DIOs anyway. I also directly
>     tested end of block behavior on preallocated, sparse, and
>     existing files when we do a 512 IO into a 4k file on a
>     4k-sector filesystem, to be sure there were no unexpected
>     behaviors.
>
>     Signed-off-by: Eric Sandeen <sandeen@xxxxxxxxxx>
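For anyone else hitting this: a quick way to check whether a kernel
rejects sub-sectsz direct IO is a single 512-byte O_DIRECT write (the
path below is just an example on the affected filesystem; remove the
test file afterwards):

$ dd if=/dev/zero of=/var/lib/mysql/dio-test bs=512 count=1 oflag=direct

Without the commit above, this fails with EINVAL ("Invalid argument")
on a sectsz=4096 filesystem; with it, direct IO is allowed down to the
device's logical sector size.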