krbd vDisk best practice?

I am running a single-node Proxmox "cluster" with Ceph on top and a single Mon on the same node.
There are 24 HDDs (no dedicated journals) and 8 SSDs, separated via a custom CRUSH location hook.
A cache tier (SSD OSDs) sits in front of an EC pool (HDD OSDs), which Proxmox accesses via krbd.
Total capacity is about 15 TB (an assortment of disk sizes/speeds). The vDisks are virtio with XFS; the OSDs use XFS as well.
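Roughly, the pools were set up along these lines (pool names, PG counts, the EC k/m values and rule names below are illustrative placeholders, not the exact ones in use):

    # Illustrative only - names, PG counts and k/m are placeholders.
    ceph osd erasure-code-profile set hdd-ec k=4 m=2 crush-root=hdd
    ceph osd pool create ec-data 256 256 erasure hdd-ec
    ceph osd pool create ec-cache 128 128 replicated ssd-rule
    ceph osd tier add ec-data ec-cache
    ceph osd tier cache-mode ec-cache writeback
    ceph osd tier set-overlay ec-data ec-cache
    ceph osd pool set ec-cache hit_set_type bloom

    # RBD images live in the base pool; I/O goes through the cache tier,
    # and Proxmox maps them with krbd (krbd option set on the storage).
    rbd create ec-data/vm-100-disk-1 --size 102400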

While setting up a virtual OpenMediaVault (OMV) VM, the following questions arose regarding vDisks (virtio) and their best practice.


Q1: How do the number and size of vDisks affect read/write performance? Do I bottleneck myself with overhead (single Mon), or does it perhaps not matter at all?

The values below are academic examples; a sketch of how I would compare the layouts follows the list.
120x 100 GB vDisks - in OMV as RAID0
120x 100 GB vDisks - in OMV as JBOD

12x 1 TB vDisks - in OMV as RAID0
12x 1 TB vDisks - in OMV as JBOD

2x 6 TB vDisks - in OMV as RAID0
2x 6 TB vDisks - in OMV as JBOD
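For comparison I would run the same fio job inside OMV against each layout, e.g. (device names and job parameters are just an example, not a recommendation):

    # Example only - device names and job parameters are arbitrary.
    mdadm --create /dev/md0 --level=0 --raid-devices=12 /dev/vd[b-m]
    mkfs.xfs /dev/md0 && mount /dev/md0 /mnt/test

    fio --name=seqwrite --filename=/mnt/test/fio.dat --size=10G \
        --rw=write --bs=4M --ioengine=libaio --iodepth=16 --direct=1
    fio --name=randwrite --filename=/mnt/test/fio.dat --size=10G \
        --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1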

Q2: How does this best practice change if I add 2 more nodes (same config) and, by implication, 2 more Mons?


I have not been able to find much on this topic.

Kind regards,
Wolf F.



