On Sat, Aug 16, 2008 at 1:14 PM, Yong Huang <yong321@xxxxxxxxx> wrote:
>> From: "Stephen Carville" <stephen.carville@xxxxxxxxx>
>> Subject: read and write size in NFS v4
>> To: "General Red Hat Linux discussion list" <redhat-list@xxxxxxxxxx>
>>
>> I am migrating my NFS file server from RHEL3 to RHEL5 and moving up to
>> NFS v4 where I can. One question I still have is: what is the maximum
>> rsize and wsize I can set for a v4 automount? The file
>> include/linux/nfsd/const.h says the maximum is 32*1024 (32K), but some
>> Google postings claim that does not apply to v4. I've tried v4
>> automounts with values up to 1048576 and none of the machines have
>> complained.
>>
>> Part of the new server's duties will be to accept the RMAN (Oracle)
>> backups, so speed is important. I am using a dedicated gigabit switch
>> with jumbo frames (9000 bytes), and I have tuned TCP on the "private"
>> interfaces of all the connected hosts. This is the only parameter I am
>> not sure how far I can push. I have it set to 32768 for the current
>> v3 setup.
>>
>> --
>> Stephen Carville
>
> I don't know the answer. But can you check nfsstat to see what it says
> the size is? Also, strace some read and write operations and see what
> the return values for read() and write() are. But I guess the latter
> method may not tell us the underlying driver-level chunk size.

Tried that, but it doesn't seem to tell me what the actual block size
is. However, I found that /proc/mounts does record the wsize and rsize,
and a little experimentation led me to conclude that the default is
32768 and the maximum is 32768.

--
Stephen Carville
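
For anyone hitting the same question later, here is a rough sketch of how
to check and to request the sizes. The server name "fileserver", the
export "/export/data", the mount point "/mnt/test", and the map key
"data" below are placeholders, not my actual setup; adjust for your own
environment:

  # Show the rsize/wsize the kernel actually negotiated for current NFS mounts
  grep nfs /proc/mounts
  nfsstat -m

  # autofs map entry requesting explicit sizes for an NFSv4 automount
  data  -fstype=nfs4,rsize=32768,wsize=32768  fileserver:/export/data

  # One-off mount for testing the same options by hand
  mount -t nfs4 -o rsize=32768,wsize=32768 fileserver:/export/data /mnt/test

As far as I can tell, if you ask for more than the server supports, the
client simply clamps the value during mount negotiation, which is why
/proc/mounts (or nfsstat -m) is the place to read the sizes actually in
effect rather than whatever you put in the map.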