Re: Improving Performance with more OSD's?

Hi Lindsay,

Ceph is really designed to scale across large numbers of OSDs, and while it
will still function with only two OSDs, I wouldn't expect it to perform as
well as a RAID 1 mirror with a battery-backed write cache.

I wouldn't recommend running the OSDs on USB, although it should work
reasonably well.

If you can't add another full host, your best bet would be to add another
2-3 disks to each server, which should give you a bit more performance. From
a performance perspective it's much better to have lots of small disks than a
few large multi-TB ones, so maybe look to see if you can pick up 500GB/1TB
drives cheap.
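
If it helps, adding an extra disk as an OSD with its journal on the existing
SSD looks roughly like this with ceph-deploy (the device names below are just
placeholders, adjust for your hosts):

  # wipe the new disk, then create an OSD with its journal on a spare SSD partition
  ceph-deploy disk zap node1:sdc
  ceph-deploy osd create node1:sdc:/dev/sdb5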

Nick

-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Lindsay Mathieson
Sent: 28 December 2014 14:06
To: ceph-users@xxxxxxxx
Subject: Re:  Improving Performance with more OSD's?

Appreciate the detailed reply Christian.

On Sun, 28 Dec 2014 02:49:08 PM Christian Balzer wrote:
> On Sun, 28 Dec 2014 08:59:33 +1000 Lindsay Mathieson wrote:
> > I'm looking to improve the raw performance on my small setup (2 
> > Compute Nodes, 2 OSD's). Only used for hosting KVM images.
> 
> This doesn't really make things clear, do you mean 2 STORAGE nodes 
> with 2 OSDs (HDDs) each?

2 Nodes, 1 OSD per node

Hardware is identical for all nodes & disks
- Mobo: P9X79 WS
- CPU: Intel Xeon E5-2620
- RAM: 32 GB ECC
- NIC: 1GbE for public access
- NIC: 2 x 1GbE bond for Ceph
- OSD: 3TB WD Red
- Journal: 10GB partition on Samsung 840 EVO

3rd Node
- Monitor only, for quorum
- Intel NUC
- RAM: 8GB
- CPU: Celeron N2820



> In either case that's a very small setup (and with a replication of 2 a
> risky one, too), so don't expect great performance.

Ok.

> 
> Throughput numbers aren't exactly worthless, but you will find IOPS to be
> the killer in most cases. Also without describing how you measured these
> numbers (rados bench, fio, bonnie, on the host, inside a VM) they become
> even more muddled.

- rados bench on the node, to test raw write throughput (rough commands below)
- fio in a VM
- CrystalDiskMark in a Windows VM, to test IOPS
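
Roughly what I ran, from memory (the default 'rbd' pool and 60-second runs
are assumptions on my part):

  # raw write, then sequential read throughput against the cluster
  rados bench -p rbd 60 write --no-cleanup
  rados bench -p rbd 60 seq

  # 4k random writes inside the VM
  fio --name=randwrite --ioengine=libaio --direct=1 --rw=randwrite \
      --bs=4k --iodepth=32 --size=1G --runtime=60 --time_based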


> You really, really want size 3 and a third node for both performance
> (reads) and redundancy.

I can probably scare up a desktop PC to use as a fourth node with another
3TB disk.
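
If I do add it, I'm assuming bumping the existing pool to three replicas is
just something like this (the pool name is a guess, ours may differ):

  ceph osd pool set rbd size 3
  ceph osd pool set rbd min_size 2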

I'd prefer to use the existing third node (the Intel NUC), but its expansion
is limited to USB3 devices. Are there USB3 external drives with decent
performance stats?


thanks,
-- 
Lindsay




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


