Re: Guidance on using large RBD volumes - NTFS

Hi - I managed to improve my throughput somewhat by recreating the RBD image with a larger object size (I chose 16 MB, not from any science but on gut feel) and changing the stripe count to 4. This seemed to roughly double the performance, from an average of 8-12 MB/s write to 16-24 MB/s.
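For reference, a sketch of the create command for that layout (the 4 MiB stripe unit is my guess at a sensible value; only the object size and stripe count are stated above):

# recreate the image with a 16 MiB object size, striped across 4 objects
# (stripe unit of 4M is an assumption, not a stated parameter)
rbd create winshare --size 8T --pool rbd --object-size 16M --stripe-unit 4M --stripe-count 4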
I also changed the pool to PG autoscale, and it has spent the past day or so reducing the number of PGs toward what appears to be a target of 64. I am now seeing higher write speeds, from 40 MB/s up to as high as 100 MB/s.
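For anyone following along, switching an existing pool to the autoscaler is a one-liner:

ceph osd pool set rbd pg_autoscale_mode on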

The average response time shown in Windows Task Manager appears to be off - I have seen it jump from 500 ms to over 300 seconds and back to 3 seconds within a few refreshes.

Windows Resource Monitor shows a more consistent response time on multiple parallel writes of about 1.5-3 seconds per write.



-----Original Message-----
From: Robert W. Eckert <rob@xxxxxxxxxxxxxxx> 
Sent: Tuesday, May 7, 2024 8:36 AM
To: ceph-users@xxxxxxx
Subject:  Guidance on using large RBD volumes - NTFS

Hi - at home I have been running CephFS for a few years with reasonably good performance; however, exposing CephFS via SMB has been hit and miss. So I thought I could carve out space for an RBD device to share from a Windows machine.


My set up:

Ceph 18.2.2 deployed using cephadm

4 servers running RHEL 9 on AMD 5600G CPUs
64 GB RAM each
10 GbE NICs
4x 4 TB HDD
1x 2 TB NVMe for DB/WAL
rbd pool is set to PG autoscale - it's currently at 256 PGs
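Current and target PG counts per pool can be checked with:

ceph osd pool autoscale-status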

I have tested the NIC connection between the servers and my PC, and each point-to-point link performs well at 10 GbE speeds.
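For reference, a typical way to run that point-to-point check, e.g. with iperf3 (<server-ip> being the target host):

# on one host
iperf3 -s
# on the other host, 4 parallel streams
iperf3 -c <server-ip> -P 4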

Now the problem

I created an 8 TB RBD image using:

rbd create winshare --size 8T --pool rbd
rbd map winshare
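The resulting layout can be double-checked with rbd info; with no extra options the image gets the default 4 MiB object size:

rbd info rbd/winshare
# the "order 22 (4 MiB objects)" line in the output shows the object size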

I prepped the drive, formatted it, and it appears cleanly as an 8 TB drive.
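For completeness, the prep was the standard initialize/partition/format sequence; in PowerShell that looks roughly like this (the disk number is a placeholder for the mapped RBD disk):

# run in an elevated PowerShell; -Number 2 is a placeholder
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -UseMaximumSize -AssignDriveLetter | Format-Volume -FileSystem NTFS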

When I ran fio on the drive/volume, speeds were good, around 150-200 MB/s.
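For reference, a sequential-write fio run on Windows looks something like this (file path, size, and queue depth here are placeholders, not my exact parameters):

fio --name=seqwrite --filename=E\:\fio.test --size=10G --rw=write --bs=1M --ioengine=windowsaio --direct=1 --iodepth=8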

Then I started populating the drive from a few different sources, and performance took a nosedive: write speeds are about 6-10 MB/s, and Windows Task Manager shows average response times anywhere from 500 ms to 30 seconds, mainly around 4 seconds.


I don't see any obvious bottlenecks - CPU usage on the servers is about 5-10%, memory is fine, and the network is showing under 1 Gb/s on all servers.
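For anyone wanting to dig further, the obvious Ceph-side latency checks are:

ceph -s           # overall cluster health
ceph osd perf     # per-OSD commit/apply latency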


I am wondering whether I needed different parameters when creating the image, or whether there is a practical limit to volume size that I exceeded?

Thanks,

Rob

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


