Re: ThinPool performance problem with NVMe

SSDs write on the basis of pages, which are typically 256 KB, if not more like 1 MB.
What kind of workload are you trying to use them for?
I would NOT stripe them at all. Just put each device into the VG and create individual LVs that are 1:1 with an underlying PV.
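
A minimal sketch of that layout (same four devices as below; LV names are only examples):

vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
lvcreate -n lv0 -l 100%PVS vg1 /dev/nvme0n1
lvcreate -n lv1 -l 100%PVS vg1 /dev/nvme1n1
lvcreate -n lv2 -l 100%PVS vg1 /dev/nvme2n1
lvcreate -n lv3 -l 100%PVS vg1 /dev/nvme3n1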

Is your real-world usage not amenable to using 4 devices?

============
"In the information society, nobody thinks. We expected to banish paper, but we actually banished thought.”
  -- Michael Crichton, Jurassic Park

“Ours may become the first civilization destroyed, not by the power of our enemies, but by the ignorance of our teachers and the dangerous nonsense they are teaching our children. In an age of artificial intelligence, they are creating artificial stupidity.' - Thomas Sowell



On Monday, July 10, 2023 at 02:47:34 AM EDT, Anton Kulshenko <shallriseagain@xxxxxxxxx> wrote:


Hello. 

Please help me figure out what my problem is. No matter how I configure the system, I can't get high performance, especially on writes.

OS: Oracle Linux 8.6, 5.4.17-2136.311.6.el8uek.x86_64
Platform: Gigabyte R282-Z94 with 2x 7702 64cores AMD EPYC and 2 TB of RAM
Disks: NVMe Samsung PM1733 7.68 TB 

What I do:
vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
lvcreate -n thin_pool_1 -L 20T vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 -i 4 -I 4  

-i 4 stripes across all four disks; -I 4 is the stripe size in KB. I also tried 8, 16, 32, and so on; in my setup I can't see a big difference.
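
The resulting stripe layout can be double-checked with lvs (field names as in lvs(8); lvs -o help lists them if your version differs):

lvs -o +stripes,stripe_size vg1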

lvcreate -n pool_meta -L 15G vg1 /dev/nvme4n1
lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1
lvchange -Zn vg1/thin_pool_1
lvcreate -V 15000G --thin -n data vg1/thin_pool_1
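
The thin pool's chunk size (its allocation granularity, which matters for 4k random writes) can be checked the same way:

lvs -o +chunk_size vg1/thin_pool_1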

After that I generate load using fio with these parameters:
fio --filename=/dev/mapper/vg1-data --rw=randwrite --bs=4k --name=test --numjobs=32 --iodepth=32 --random_generator=tausworthe64 --numa_cpu_nodes=0 --direct=1
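
For comparison, a single-device run of the same job would point at one raw disk, e.g. (illustrative only; this overwrites the disk, so only against a device not yet in the VG):

fio --filename=/dev/nvme0n1 --rw=randwrite --bs=4k --name=test --numjobs=32 --iodepth=32 --random_generator=tausworthe64 --numa_cpu_nodes=0 --direct=1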

I only get 40k IOPS, while a single drive under the same load easily gives 130k IOPS.
I have tried different block sizes, stripe sizes, etc., with no result. When I look at iostat, I see the load concentrated on the disk holding the metadata:
80 wMB/s, 12500 wrqm/s, 68 %wrqm
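
(Those columns come from extended iostat output in megabyte units, e.g. iostat -xm 1 nvme4n1, nvme4n1 being the metadata PV here.)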

I don't understand what I'm missing when configuring the system. 



_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
