Looking for information on full SSD deployments

Hello fellow Ceph users,

We have been running a small cluster (6 data nodes with 12 disks each, plus 3 monitors) with OSDs on spinners and journals on SATA SSDs for a while now. We still haven't upgraded to Luminous, but are going to test it now, as we also need to move some projects onto a shared file system and CephFS seems to fit the bill.

What I'm mostly looking for is to get in contact with someone who has experience running Ceph as a full-SSD cluster, or with full-SSD pool(s) on the main cluster. My main interest is performance-centric workloads generated by web applications that work directly with files, heavy in both reads and writes, where low latency is very important.
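For what it's worth, my current understanding is that the device classes introduced in Luminous should make an SSD-only pool fairly simple to carve out, without hand-maintaining a separate CRUSH tree. A rough sketch of what I expect to try (the rule name, pool name and PG counts below are just placeholders, not a tested recipe):

    # Check which device classes Luminous has assigned to the OSDs
    ceph osd crush class ls
    ceph osd df tree

    # Create a replicated CRUSH rule restricted to SSD-class OSDs,
    # with host as the failure domain
    ceph osd crush rule create-replicated ssd-only default host ssd

    # Create a pool that uses this rule
    ceph osd pool create fast-pool 128 128 replicated ssd-only

If anyone running an all-flash pool has done it this way (or differently), that is exactly the kind of feedback I'm after.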

As mentioned above, the other question is about the viability of CephFS in a production environment right now, for web applications spread over several nodes that use a shared file system for certain read and write operations.
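The plan for the CephFS test, as far as I understand the documentation, is roughly the following (an MDS daemon has to be deployed first; the pool names, PG counts and monitor address here are placeholders):

    # Data and metadata pools for the file system
    ceph osd pool create cephfs_data 128
    ceph osd pool create cephfs_metadata 32

    # Create the file system (metadata pool first, then data pool)
    ceph fs new testfs cephfs_metadata cephfs_data

    # Kernel client mount on a web node
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

So the question is less about the mechanics and more about how CephFS holds up in practice under this kind of multi-client read/write load.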

I will not go into more detail here; if you have some experience and would be willing to share it, please write to valmar@xxxxxxxx


Also, thanks to everyone on this list for the insights that other people's problems have given us. We have probably managed to prevent some issues in our current cluster just by skimming through these e-mails.


