RE: fio on vmware esxi

Thanks for the quick feedback Robert.  I appreciate it.

I forgot to add: when I created a new NFS datastore in VMware ESXi 6.5, I had two NFS versions to choose from:

-NFS 3 allows the datastore to be accessed by ESX/ESXi hosts of version earlier than 6.0
-NFS 4.1 provides multipathing for servers and supports the Kerberos authentication protocol

The FIO results I posted were from NFS v3, so I continued testing...

I unmounted that NFS volume and remounted it as a new datastore using NFS v4.1, and I'm now seeing the expected IOPS using the "posixaio" engine on NFS!  Great!

So: use the "posixaio" FIO engine on NFS v4.1; don't use the "posixaio" engine on NFS v3.
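For reference, here is a sketch of the command built from the parameters in my earlier post, run against the NFS v4.1 datastore (the mount path is just an illustration; substitute your own datastore path):

```shell
# Hypothetical datastore mount point -- replace with your own.
cd /vmfs/volumes/nfs41-datastore

# Same parameters as the earlier post: posixaio, 16k random mixed I/O,
# 64 outstanding I/Os, direct (unbuffered) access.
fio --randrepeat=1 --ioengine=posixaio --direct=1 --sync=1 \
    --name=mytest --filename=mytestfile.fio --overwrite=1 \
    --iodepth=64 --size=100MB --readwrite=randrw \
    --rwmixread=50 --rwmixwrite=50 --bs=16k
```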

Eliezer 

-----Original Message-----
From: Elliott, Robert (Persistent Memory) [mailto:elliott@xxxxxxx] 
Sent: Friday, October 26, 2018 5:30 PM
To: 'eliezer@integritech.solutions'; 'Beierl, Mark'
Cc: 'Sitsofe Wheeler'; 'fio'
Subject: RE: fio on vmware esxi


> I run FIO with the following parameters and used the different engines I listed above:
> --randrepeat=1
> --ioengine=posixaio
> --direct=1
> --sync=1
> --name=mytest
> --filename=mytestfile.fio
>  --overwrite=1
> --iodepth=64
> --size=100MB
> --readwrite=randrw
> --rwmixread=50
> --rwmixwrite=50
>  --bs=16k
> 
...
> -ioengine=psync on a 1000 IOPS Perf storage:::     read=177 write=185 IOPS
> -ioengine=pvsync on a 1000 IOPS Perf storage:::    read=176 write=184 IOPS
> -ioengine=sync on a 1000 IOPS Perf storage:::      read=145 write=152 IOPS
> -ioengine=posixaio on a 1000 IOPS Perf storage:::  read=528 write=551 IOPS*
> 
> Based on the FIO results I posted above, I've concluded that the suitable FIO engine for Block Storage
> is "posixaio"; and for NFS it can be psync or pvsync or sync.
> If I use the FIO engine "posixaio" on VMware to test block storage, I'm seeing the expected IOPS.
> However, if I use the same FIO engine ("posixaio") to test NFS, I'm NOT seeing the expected IOPS.  I'd
> have to use either psync, or pvsync, or sync to see the IOPS I'm expecting.
>   These tests are on the same baremetal server running the version of VMWare I posted above.
> 
> Can someone please shed some light on why FIO results are skewed when the "posixaio" engine is used to test
> NFS storage?   Likewise, why do the psync/pvsync/sync ioengines skew IOPS results on a block
> storage device?

NFS might introduce all sorts of problems.  For the block device cases...

Synchronous engines like sync do not honor the iodepth; you need to use
numjobs= to get multiple concurrent I/Os.  posixaio is an asynchronous engine,
so it is honoring your iodepth (it spawns threads for you).
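As a sketch (the file name and the 64-way concurrency are illustrative, taken from the original post's parameters): with a synchronous engine, concurrency comes from numjobs; with posixaio, a single job can keep the whole iodepth in flight:

```shell
# Synchronous engine: iodepth is ignored, so spawn 64 jobs, each with
# one outstanding I/O at a time. group_reporting aggregates their stats.
fio --name=sync-test --ioengine=psync --direct=1 \
    --numjobs=64 --group_reporting \
    --rw=randrw --rwmixread=50 --bs=16k \
    --size=100MB --filename=mytestfile.fio --overwrite=1

# Asynchronous engine: one job keeps 64 I/Os in flight via iodepth.
fio --name=aio-test --ioengine=posixaio --direct=1 \
    --iodepth=64 \
    --rw=randrw --rwmixread=50 --bs=16k \
    --size=100MB --filename=mytestfile.fio --overwrite=1
```

With roughly equal concurrency on both sides, the two engines should produce much more comparable IOPS numbers.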

Also, if your storage devices do any sort of buffering and caching, the
tiny 100 MiB size is likely to result in lots of cache hits, distorting
the results.
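One way to reduce that cache-hit distortion is a working set much larger than the device cache, combined with a timed run so the test isn't dominated by file-creation overhead. A sketch (the 10 GiB size and 120-second runtime are arbitrary assumptions, not values from this thread):

```shell
# Hypothetical sketch: a 10 GiB working set with a timed run keeps the
# workload from being served mostly out of the array's cache.
fio --name=bigtest --ioengine=posixaio --direct=1 --iodepth=64 \
    --rw=randrw --rwmixread=50 --bs=16k \
    --size=10G --time_based --runtime=120 \
    --filename=mytestfile.fio
```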

---
Robert Elliott, HPE Persistent Memory





