Guidance on using large RBD volumes - NTFS


 



Hi - at home I have been running CephFS for a few years with reasonably good performance; however, exposing CephFS via SMB has been hit and miss. So I thought I could carve out space for an RBD device to share from a Windows machine.


My setup:

Ceph 18.2.2, deployed using cephadm

4 servers running RHEL 9 on AMD 5600G CPUs
64 GB RAM each
10 GbE NICs
4x 4 TB HDDs per server
1x 2 TB NVMe per server for DB/WAL
The rbd pool uses the PG autoscaler - it's currently at 256 PGs

I have tested the NIC connection between the servers and my PC, and each point-to-point link performs well at 10 GbE speeds.

Now, the problem:

I created an 8 TB RBD image using:

rbd create winshare --size 8T --pool rbd
rbd map winshare

I prepped and formatted the drive, and it appears cleanly as an 8 TB volume.

When I ran fio against the drive/volume, speeds were good - around 150-200 MB/s.
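I don't have the exact job file to hand, but the test was a sequential-write job along these lines (illustrative only - the drive number and sizes here are assumptions, not the exact values I used):

```ini
; Illustrative fio job - filename, size, and iodepth are assumed values
[seq-write]
ioengine=windowsaio
filename=\\.\PhysicalDrive2
rw=write
bs=1M
size=10G
iodepth=8
direct=1
```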

Then I started populating the drive from a few different sources, and performance took a nosedive. Write speeds are about 6-10 MB/s, and Windows Task Manager shows an average response time anywhere from 500 ms to 30 seconds - mostly around 4 seconds.
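For what it's worth, the latency alone seems enough to explain the throughput: at an effective queue depth of 1, a single outstanding write that takes seconds to complete caps throughput hard. A quick back-of-envelope calculation (the 4 MiB IO size and 4 s latency are illustrative numbers, not measured values):

```shell
# Illustrative: effective throughput at queue depth 1
# for a given IO size and average completion latency.
io_kib=$((4 * 1024))   # one 4 MiB write, expressed in KiB
latency_ms=4000        # ~4 s average response time from Task Manager
echo "$((io_kib * 1000 / latency_ms)) KiB/s"
```

That works out to about 1 MiB/s per outstanding IO, which is in the ballpark of what I'm seeing.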


I don't see any obvious bottlenecks - CPU on the servers is around 5-10%, memory is fine, and the network is showing under 1 Gb/s on all servers.


I am wondering whether I should have used different parameters when creating the volume? Or is there a practical limit to the volume size that I have exceeded?
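On the size question: as far as I understand, an 8 TiB image at RBD's default 4 MiB object size maps to about 2 million RADOS objects, which shouldn't itself be a problem for a cluster of this size - quick arithmetic:

```shell
# Number of RADOS objects backing an 8 TiB RBD image
# at the default 4 MiB object size.
size_bytes=$((8 * 1024 * 1024 * 1024 * 1024))
object_bytes=$((4 * 1024 * 1024))
echo $((size_bytes / object_bytes))
```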

Thanks,

Rob

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx




