Re: Is Ceph appropriate for small installations?

Hi!

>Hi, I can reach 600,000 IOPS of 4k reads with 3 nodes (6 SSDs each).

That is very interesting! Could you share some details of your configuration?

We can't get more than ~40k IOPS of 4k random reads from a 2-node x 2-SSD pool. :(
Under load each SSD delivers only ~8k IOPS, which is far too low for an Intel DC S3700 400 GB.
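
For context, a minimal sketch of how a 4k random-read measurement like this could be driven and parsed (just an illustration: it assumes fio is installed, and /dev/rbd0 is only a placeholder for a mapped test RBD image, not our actual device):

#!/usr/bin/env python3
"""Minimal sketch: drive a 4k random-read fio run and report IOPS.
Assumes fio is installed; TARGET is a placeholder for a mapped test
RBD image (the workload is read-only, so the image is not modified)."""

import json
import subprocess

TARGET = "/dev/rbd0"  # placeholder, point this at your own mapped test image

cmd = [
    "fio",
    "--name=rand4k",
    "--filename=" + TARGET,
    "--rw=randread", "--bs=4k", "--direct=1",
    "--ioengine=libaio", "--iodepth=32", "--numjobs=4",
    "--time_based", "--runtime=30",
    "--group_reporting",
    "--output-format=json",
]

out = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(out.stdout)["jobs"][0]  # single entry thanks to --group_reporting
print("4k random read: %.0f IOPS" % job["read"]["iops"])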

Tracing showed that tcmalloc calls dominate perf top. We run Debian Jessie, and tcmalloc
seems to honour the TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES variable, but setting it
doesn't help. Both nodes have a single E5-2670 CPU and 128 GB of RAM (8x16 GB). Maybe
using one CPU is not a good idea, but it keeps us clear of NUMA problems, and the CPU
is not maxed out under load (~50% idle).
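
One thing worth ruling out is whether the variable even reaches the OSD processes; a minimal sketch for checking that (assuming a Linux host with /proc and root privileges; nothing Ceph-specific):

#!/usr/bin/env python3
"""Minimal sketch: report whether running ceph-osd processes actually see
TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES in their environment.  Assumes a
Linux host with /proc and permission to read /proc/<pid>/environ."""

import os

VAR = "TCMALLOC_MAX_TOTAL_THREAD_CACHE_BYTES"

def osd_pids():
    """Yield PIDs whose process name (from /proc/<pid>/comm) is ceph-osd."""
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/comm" % pid) as f:
                if f.read().strip() == "ceph-osd":
                    yield int(pid)
        except OSError:
            continue  # process exited or not readable

for pid in osd_pids():
    try:
        with open("/proc/%d/environ" % pid, "rb") as f:
            env = dict(item.split(b"=", 1)
                       for item in f.read().split(b"\0") if b"=" in item)
    except OSError:
        print("osd pid %d: environ not readable (run as root)" % pid)
        continue
    value = env.get(VAR.encode())
    print("osd pid %d: %s = %s" % (pid, VAR, value.decode() if value else "NOT SET"))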


Megov Igor
CIO, Yuterra


________________________________________
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Alexandre DERUMIER <aderumier@xxxxxxxxx>
Sent: 31 August 2015 09:06
To: Lindsay Mathieson
Cc: ceph-users
Subject: Re: Is Ceph appropriate for small installations?

>>True, true. But I personally think that Ceph doesn't perform well on
>>small <10 node clusters.

Hi, I can reach 600,000 IOPS of 4k reads with 3 nodes (6 SSDs each).



----- Original message -----
From: "Lindsay Mathieson" <lindsay.mathieson@xxxxxxxxx>
To: "Tony Nelson" <tnelson@xxxxxxxxxxxxx>
Cc: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, 31 August 2015 03:10:14
Subject: Re: Is Ceph appropriate for small installations?


On 29 August 2015 at 00:53, Tony Nelson <tnelson@xxxxxxxxxxxxx> wrote:




I recently built a 3-node Proxmox cluster for my office. I'd like to get HA set up, and the Proxmox book recommends Ceph. I've been reading the documentation and watching videos, and I think I have a grasp of the basics, but I don't need anywhere near a petabyte of storage.



I'm considering servers with 12 drive bays: 2 SSDs mirrored for the OS, 2 SSDs for journals, and the other 8 for OSDs. I was going to purchase 3 identical servers and use my 3 Proxmox servers as the monitors, with of course gigabit networking in between. Obviously this is very vague, but I'm just getting started on the research.
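
For a rough sense of what that layout gives back as usable space under Ceph's default 3-way replication, here is a minimal back-of-the-envelope sketch (the drive size is purely an assumed figure, since the message above doesn't name one):

#!/usr/bin/env python3
"""Back-of-the-envelope usable capacity for 3 nodes x 8 OSD drives with a
replicated pool of size 3.  DRIVE_TB is an assumed figure for illustration."""

NODES = 3
OSDS_PER_NODE = 8
DRIVE_TB = 4.0      # assumed OSD drive size, substitute your own
REPLICA_SIZE = 3    # Ceph's default replicated pool size
HEADROOM = 0.85     # stay under the default near-full warning ratio

raw_tb = NODES * OSDS_PER_NODE * DRIVE_TB
usable_tb = raw_tb / REPLICA_SIZE * HEADROOM
print("raw: %.0f TB, usable at size=%d with %d%% fill: %.1f TB"
      % (raw_tb, REPLICA_SIZE, HEADROOM * 100, usable_tb))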





I run a small 3-node Proxmox cluster with Ceph for our office as well, but I'd now recommend against using Ceph for small setups like ours.

- Maintenance headache. Ceph requires a lot of tweaking to get started and a lot of ongoing monitoring, plus a fair bit of skill. If you're running the show yourself (as is typical in small businesses) it's quite stressful. Who's going to fix the Ceph cluster when an OSD goes down while you're on holiday?

- Performance. It's terrible on small clusters. I've set up iSCSI over ZFS for a server and it's orders of magnitude better at I/O. And I haven't even configured multipath yet.

- Flexibility. It's much, much easier to expand or replace disks on my ZFS server.

The redundancy is good: I can reboot a Ceph node for maintenance and it recovers very quickly (much quicker than GlusterFS), but cluster performance suffers badly while a node is down, so in practice it's of limited utility.

I'm coming to the realisation that for us, performance and ease of administration are more valuable than 100% uptime. Worst case (the storage server dies), we could rebuild from backups in a day; essentials could be restored in an hour. I could experiment with ongoing ZFS replication to a backup server to make that even quicker.

That's for us; your requirements may be different. And of course, once you get into truly large deployments, Ceph comes into its own.




--
Lindsay

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com