Re: ThinPool performance problem with NVMe


The problem is the architecture.
The mainboard has only one PCIe x4 slot.

On the SAS side this system has dual SATA magnetic disks.

Create one PV per disk, format GPT (or MBR), and check the drive cache
first; hdparm reports the cache info (note that hdparm is ATA-only, so a
sketch covering the NVMe cache follows).
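
A minimal sketch of that step, with hypothetical device names /dev/sda
(SATA) and /dev/nvme0 (NVMe); since hdparm speaks ATA only, the NVMe
write-cache flag comes from nvme-cli instead:

# initialise one whole disk as a PV
pvcreate /dev/sda

# SATA magnetic disk: check the write cache via hdparm
hdparm -I /dev/sda | grep -i 'write cache'

# NVMe: hdparm does not apply; query the volatile write cache (vwc)
nvme id-ctrl /dev/nvme0 | grep -i vwc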

Creating one VG across more than one disk can create conflicting IDs.

If, in VG id1, you create four LVs for testing:
VG = id1, LV = A
VG = id1, LV = B
VG = id1, LV = C
VG = id1, LV = D
(-L 100%FREE on each, as written, is not valid: lvcreate takes
percentages via -l, and 100%FREE can be used only once, so the sketch
below splits the VG into equal quarters), you can then run your database
test against each one.
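
A minimal sketch of that split, assuming the VG really is named id1:

# four equal test LVs; -l 25%VG per LV is an assumption, since
# 100%FREE cannot be handed to all four
lvcreate -l 25%VG -n A id1
lvcreate -l 25%VG -n B id1
lvcreate -l 25%VG -n C id1
lvcreate -l 25%VG -n D id1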

hdparm -Tt /dev/id1/A
(with the NVMe in the x4 slot)

hdparm -Tt /dev/id1/B
(after moving the NVMe to a PCIe x8 slot, on an x8 PCIe-to-NVMe adapter)

Do not use the SAS/magnetic disks for this; their data transmission is
slow.
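
To confirm which link each drive actually negotiated in a given slot,
sysfs exposes the PCIe link attributes (a sketch; the nvme* instance
names depend on the machine):

# print negotiated PCIe speed and width for every NVMe controller;
# a drive sitting in an x4 slot or on a downtrained link shows up here
for n in /sys/class/nvme/nvme*; do
    echo "$n: $(cat $n/device/current_link_speed), x$(cat $n/device/current_link_width)"
done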

The OS itself should not use the NVMe or SAS devices at all.

First create a Debian live OS.
At init 0, create a ram0 partition, copy the 8 GB ISO virtual disk onto
ram0, and mount the OS ISO; then jump the system to init 1 (a sketch of
the ram0 step follows).
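
A minimal sketch of the ram0 step, assuming the brd ramdisk module and a
hypothetical ISO path /root/debian-live.iso:

# create one 8 GiB /dev/ram0 (rd_size is in KiB)
modprobe brd rd_nr=1 rd_size=8388608

# copy the live ISO onto the ramdisk and mount it read-only
dd if=/root/debian-live.iso of=/dev/ram0 bs=4M
mkdir -p /mnt/live
mount -o ro /dev/ram0 /mnt/live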

This keeps the running system cut off from the SAS and NVMe controllers.

Your system then runs at:
25 Gb/s  system    PV0 (ram0)
10 Gb/s  database  PV1 (/dev/id1/A)

Stress-testing disk transfer speed then never collides with the Debian
OS's own operation.

If you need the script to configure init 0, please pay €500; we will add
100 PDFs for the IT Linux programmer, using Python, C, and many more
service scripts.

Developer, London, IT Europe
Computer.Alarm.Technology.SYSTEM
🏭 2003—2023
📩 service.hofman@xxxxxxxxx
📞 +48 883937952
💬 //t.me/s/CATsystem_plan

💷 POUND
PL44124036791789001109272570
💶 EURO
PL41124036791978001109272583
💵 PLN
PL14124036791111001108735292
💸BIC/SWIFT  PKOPPLPW

🎫 REG. MicroSoft W936403
🎫 REG. Acrobat MASTER2015
🎫 REG. G.E. MasterATM 13/05/2003
🎫 REG. S.E.P.  D1/017/21  30kV
🎫 REG. V.A.T.  572-106-528
🎫 REG. ID06 
★safe_construction_2027 ★
★Mobile_Platform_Safety
★Manual_Handling_Safety
★Working_at_Height_Safety

     Eryk Hofman

On 10.07.2023 at 8:47 AM, "Anton Kulshenko" <shallriseagain@xxxxxxxxx> wrote:
Hello. 

Please help me figure out what my problem is. No matter how I configure the system, I can't get high performance, especially on writes.

OS: Oracle Linux 8.6, 5.4.17-2136.311.6.el8uek.x86_64
Platform: Gigabyte R282-Z94 with 2x 7702 64cores AMD EPYC and 2 TB of RAM
Disks: NVMe Samsung PM1733 7.68 TB 

What I do:
vgcreate vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1
lvcreate -n thin_pool_1 -L 20T vg1 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 -i 4 -I 4  

-i 4 stripes across all four disks; -I 4 sets the stripe size (in KB). I also tried 8, 16, 32... In my setup I can't find a big difference.

lvcreate -n pool_meta -L 15G vg1 /dev/nvme4n1
lvconvert --type thin-pool --poolmetadata vg1/pool_meta vg1/thin_pool_1
lvchange -Zn vg1/thin_pool_1
lvcreate -V 15000G --thin -n data vg1/thin_pool_1
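
The resulting geometry (stripe count, stripe size, thin-pool chunk size,
and which PV carries the metadata) can be verified with lvs before
running the load:

# show stripe/chunk geometry and device placement for every LV in vg1
lvs -a -o name,size,stripes,stripe_size,chunk_size,devices vg1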

After that I generate load with fio using these parameters:
fio --filename=/dev/mapper/vg1-data --rw=randwrite --bs=4k --name=test --numjobs=32 --iodepth=32 --random_generator=tausworthe64 --numa_cpu_nodes=0 --direct=1
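
As a cross-check against the single-drive figure below, the same job can
be pointed at one raw drive (a sketch; /dev/nvmeXn1 is a placeholder, and
running this destroys any data or LVM labels on that device):

# baseline: the same workload against a single raw NVMe drive
# WARNING: wipes whatever is on the target device
fio --filename=/dev/nvmeXn1 --rw=randwrite --bs=4k --name=baseline \
    --numjobs=32 --iodepth=32 --direct=1 --runtime=60 --time_based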

I only get 40k IOPS, while one drive under the same load easily gives 130k IOPS.
I have tried different block sizes, stripe sizes, etc. with no result. When I look at iostat I see the load on the disk that holds the metadata:
80 wMB/s, 12500 wrqm/s, 68 %wrqm

I don't understand what I'm missing when configuring the system. 





Attachment: R282-Z94_BlockDiagram.png
Description: PNG image

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
