Re: Is Ceph appropriate for small installations?

Can you share your SSD models and your ceph.conf?
In a test 3-node cluster with two Intel S3500 SSDs per node, I see very disappointing numbers.
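
For what it's worth, the kind of raw test I use to judge an SSD before blaming Ceph looks like this (just a sketch, not from this thread; the device path is a placeholder and the test overwrites data on that device):

    # direct, synchronous 4k writes: roughly the pattern a filestore journal sees
    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

If the drive already looks slow here, no amount of ceph.conf tuning will save it.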

I maintain a 6-node cluster with mixed SSD and SATA pools. The IOPS are not enough for a KVM hosting company unless you throttle guest disk I/O to really low values. Selling KVM hosting would need a big cluster full of SSDs, which would push the price per VM out of reach for clients.
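
To give an idea of what I mean by throttling (an illustrative sketch only; the domain name, device and limit are made up, not values from my cluster), with plain libvirt you can cap a guest disk live:

    # cap one virtual disk at 300 IOPS total (hypothetical domain/device names)
    virsh blkdeviotune vm101 vda --total-iops-sec 300 --live

Proxmox exposes the same QEMU throttling as per-disk iops/mbps options, if I remember right.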

For small offices it seems to me a good choice, and the maintenance is really easy: adding/removing OSDs, updating, rebooting nodes.
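
For example, the whole reboot/replace routine is only a handful of commands (a sketch; osd.12 is just an example id):

    # before rebooting a node, keep the cluster from rebalancing
    ceph osd set noout
    # ...update and reboot the node, wait for its OSDs to rejoin...
    ceph osd unset noout

    # permanently removing a dead OSD (example id 12)
    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12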

> Date: Mon, 31 Aug 2015 08:06:37 +0200
> From: aderumier@xxxxxxxxx
> To: lindsay.mathieson@xxxxxxxxx
> CC: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: Is Ceph appropriate for small installations?
>
> >>True, true. But I personally think that Ceph doesn't perform well on
> >>small <10 node clusters.
>
> Hi, I can reach 600,000 IOPS of 4k reads with 3 nodes (6 SSDs each).
>
>
>
> ----- Original Message -----
> From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
> To: "Tony Nelson" <tnelson@xxxxxxxxxxxxx>
> Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
> Sent: Monday, 31 August 2015 03:10:14
> Subject: Re: Is Ceph appropriate for small installations?
>
>
> On 29 August 2015 at 00:53, Tony Nelson <tnelson@xxxxxxxxxxxxx> wrote:
>
>
>
>
> I recently built a 3-node Proxmox cluster for my office. I’d like to get HA set up, and the Proxmox book recommends Ceph. I’ve been reading the documentation and watching videos, and I think I have a grasp of the basics, but I don’t need anywhere near a petabyte of storage.
>
>
>
> I’m considering servers with 12 drive bays: 2 SSDs mirrored for the OS, 2 SSDs for journals and the other 8 for OSDs. I was going to purchase 3 identical servers and use my 3 Proxmox servers as the monitors, with of course gigabit networking in between. Obviously this is very vague, but I’m just getting started on the research.
>
>
>
>
>
> I run a small 3-node Proxmox cluster with Ceph for our office as well, but I'd now recommend against using Ceph for small setups like ours.
>
> - Maintenance headache. Ceph requires a lot of tweaking to get started and a lot of ongoing monitoring, plus a fair bit of skill. If you're running the show yourself (as is typical in small businesses) it's quite stressful. Who's going to fix the Ceph cluster when an OSD goes down while you're on holiday?
>
> - Performance. It's terrible on small clusters. I've set up iSCSI over ZFS for a server and it's orders of magnitude better at I/O. And I haven't even configured multipath yet.
>
> - Flexibility. Much, much easier to expand or replace disks on my ZFS server.
>
> The redundancy is good: I can reboot a Ceph node for maintenance and it recovers very quickly (much quicker than GlusterFS), but cluster performance suffers badly while a node is down, so in practice it's of limited utility.
>
> I'm coming to the realisation that for us performance and ease of administration are more valuable than 100% uptime. Worst case (the storage server dies) we could rebuild from backups in a day. Essentials could be restored in an hour. I could experiment with ongoing ZFS replication to a backup server to make that even quicker.
>
> That's for us; your requirements may be different. And of course once you get into truly large deployments, Ceph comes into its own.
>
>
>
>
> --
> Lindsay
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
