Re: Improving Performance with more OSD's?

Hello,

On Mon, 29 Dec 2014 00:05:40 +1000 Lindsay Mathieson wrote:

> Appreciate the detailed reply Christian.
> 
> On Sun, 28 Dec 2014 02:49:08 PM Christian Balzer wrote:
> > On Sun, 28 Dec 2014 08:59:33 +1000 Lindsay Mathieson wrote:
> > > I'm looking to improve the raw performance on my small setup (2
> > > Compute Nodes, 2 OSD's). Only used for hosting KVM images.
> > 
> > This doesn't really make things clear, do you mean 2 STORAGE nodes
> > with 2 OSDs (HDDs) each?
> 
> 2 Nodes, 1 OSD per node
> 
> Hardware is identical for all nodes & disks
> - Mobo: P9X79 WS
> - CPU: Intel Xeon E5-2620
Not particularly fast, but sufficient for about 4 OSDs.

> - RAM: 32 GB ECC
Good enough.

> - 1GbE NIC, public access
> - 2 * 1GbE bond for Ceph
Is that a private cluster network just between the Ceph storage nodes, or is
it for all Ceph traffic (including clients)?
The latter would probably be better; a private cluster network twice as fast
as the client-facing one isn't particularly helpful 99% of the time.

> - OSD: 3TB WD Red
> - Journal: 10GB on Samsung 840 EVO
> 
> 3rd Node
>  - Monitor only, for quorum
> - Intel Nuc 
> - 8GB RAM
> - CPU: Celeron N2820
> 
Uh oh, that's a bit weak for a monitor. Where does the OS live (on this and
the other nodes)? The monitors' leveldb (/var/lib/ceph/..) likes fast
storage, preferably SSDs.
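
For example, to see how big the monitor store is and whether the disk under
it keeps up (paths are the usual defaults, adjust for your mon ID):

    # size of the monitor's leveldb store
    du -sh /var/lib/ceph/mon/ceph-*/store.db
    # watch latency/utilisation of the device holding /var/lib/ceph
    iostat -x 5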

> 
> 
> > In either case that's a very small setup (and with a replication of 2 a
> > risky one, too), so don't expect great performance.
> 
> Ok.
> 
> > 
> > Throughput numbers aren't exactly worthless, but you will find IOPS to
> > be the killer in most cases. Also without describing how you measured
> > these numbers (rados bench, fio, bonnie, on the host, inside a VM)
> > they become even more muddled.
> 
> - rados bench on the node to test raw write 
> - fio in a VM
> - Crystal DiskMark in a windows VM to test IOPS
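
When comparing numbers it helps to run the same invocations everywhere;
something along these lines (pool name, sizes and runtimes are only
examples):

    # raw write throughput, run on a storage node
    rados bench -p rbd 60 write -t 32
    # 4k random-write IOPS, run inside a VM
    fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
        --bs=4k --iodepth=32 --size=1G --runtime=60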
> 
> 
> > You really, really want size 3 and a third node for both performance
> > (reads) and redundancy.
> 
> I can probably scare up a desktop PC to use as a fourth node with
> another 3TB disk.
> 
The closer it is (hardware-wise) to the current storage nodes, the better:
the slowest OSD in a cluster can impede all (or most of) the others.
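
Once that third OSD node is in and the cluster is healthy again, going to
size 3 is just a matter of (pool name as an example, min_size to taste):

    ceph osd pool set rbd size 3
    ceph osd pool set rbd min_size 2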

> I'd prefer to use the existing third node (the Intel Nuc), but its
> expansion is limited to USB3 devices. Are there USB3 external drives
> with decent performance stats?
> 
I'd advise against it.
That node doing both monitor and OSDs is not going to end well.

Regards,

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


