Re: krbd vDisk best practice?

Just in case anyone comes up with the same question in the future:

I ran the following test case:

3 identical Debian VMs, each with 4GB RAM and 4 vCores, virtio for the vDisks, all on the same pool, and the vDisks mounted at /home/test (a rough sketch of the multi-disk assembly follows the list):

1x 120GB 
12x 10GB JBOD via LVM
12x 10GB RAID 0 via mdadm
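
For reference, the two multi-disk layouts were assembled inside the VM roughly like this; device names (/dev/vdb through /dev/vdm) and the exact flags are illustrative, not a transcript:

    # JBOD: concatenate the 12 vDisks into one LVM logical volume
    pvcreate /dev/vd[b-m]
    vgcreate vg_test /dev/vd[b-m]
    lvcreate -l 100%FREE -n lv_test vg_test
    mkfs.xfs /dev/vg_test/lv_test
    mount /dev/vg_test/lv_test /home/test

    # RAID 0: stripe the 12 vDisks with mdadm
    mdadm --create /dev/md0 --level=0 --raid-devices=12 /dev/vd[b-m]
    mkfs.xfs /dev/md0
    mount /dev/md0 /home/test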

Then, separately for each setup, I wrote 100GB of data to /home/test/testfile using dd.
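The invocation was along these lines (block size and the final flush are illustrative):

    # write ~100GB of zeros and flush to disk before dd exits
    dd if=/dev/zero of=/home/test/testfile bs=1M count=102400 conv=fdatasync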

All 3 benchmarks showed statistically the same write speeds. However, CPU consumption was about 5% higher with the LVM setup and about 35% higher when using mdadm RAID-0 across the 12 vDisks.



Question: Does Ceph have an upper limit on how big I can make virtio-based vDisks on an EC pool?


----- Original Message -----
> From: "Wolf F." <wolf.f@xxxxxxxxxxxx>
> To: ceph-users@xxxxxxxxxxxxxx
> Sent: Saturday, January 2, 2016 9:21:46 PM
> Subject: krbd vDisk best practice?

> Running a single-node Proxmox "cluster" with Ceph on top; 1 mon, on the same node.
> I have 24 HDDs (no dedicated journals) and 8 SSDs, split via a custom crush
> location hook.
> A cache tier (SSD OSDs) in front of an EC pool (HDD OSDs) provides access for
> Proxmox via krbd.
> 15 TB capacity (an assortment of disk sizes/speeds). vDisks are virtio and XFS.
> OSDs are XFS as well.
> 
> While setting up a virtual OpenMediaVault (VM), the following questions arose
> regarding vDisks (virtio) and their best practice.
> 
> 
> Q1: How does the number and size of vDisks affect write/read performance? Do I
> bottleneck myself with overhead (single mon)? Or does it maybe not matter at
> all?
> 
> Values are academic examples.
> 120x 100GB vDisks - In OMV as RAID 0
> 120x 100GB vDisks - In OMV as JBOD
> 
> 12x 1TB vDisks - In OMV as RAID 0
> 12x 1TB vDisks - In OMV as JBOD
> 
> 2x 6TB vDisks - In OMV as RAID 0
> 2x 6TB vDisks - In OMV as JBOD
> 
> Q2: How does this best practice change if I add 2 more nodes (same config) and,
> by implication, 2 more mons?
> 
> 
> I have not been able to find much on this topic.
> 
> kind regards,
> Wolf F.
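
(For anyone wanting to replicate this setup: a writeback cache tier in front of an EC pool is wired up roughly like the sketch below. Pool names and PG counts are placeholders, and the CRUSH rules that pin the EC pool to the HDD OSDs and the cache pool to the SSD OSDs are omitted.)

    # backing EC pool and SSD-backed cache pool (names/PG counts are placeholders)
    ceph osd pool create ecpool 128 128 erasure
    ceph osd pool create cachepool 128
    # attach the cache pool as a writeback tier in front of the EC pool
    ceph osd tier add ecpool cachepool
    ceph osd tier cache-mode cachepool writeback
    ceph osd tier set-overlay ecpool cachepool
    # a writeback tier also needs a hit set and size limits, e.g.:
    ceph osd pool set cachepool hit_set_type bloom
    ceph osd pool set cachepool target_max_bytes 1099511627776
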
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


