(in)stability of ceph

Hi there.
I'm new to the list, but have been following the development of ceph for a while now. Great project. It really looks like the next big thing.

After researching parallel/shared filesystems for the last year, we tried to implement Lustre. That project basically died for us with Oracle's acquisition of Sun and all the confusion that came afterwards. Anyway, we then decided on another FS, which now turns out to be a mistake of the big kind. It's unstable and unreliable.

Now I am in the bad position of quickly having to switch to another FS. Before I start implementing another one, I would like to ask about the status of ceph.
I know that it is not supposed to be used in a production environment. Nevertheless, I am inclined to give ceph a try. Our plan was to switch to ceph anyway as soon as it is considered stable.
Our usage scenarios are these:

* multiple OSDs, multiple MDSes.
* shared storage for KVM and Xen (does not need to be the RADOS block device; it can be images, as we have now)
** the VM servers are interconnected via internal 2x1Gbit NICs and also have a 1Gbit NIC to the external LAN.
** traffic for VM images or RBD should go over the internal NICs (can ceph handle such a scenario?).
* shared storage for user data. Workloads are not very high, but there are potentially many clients (currently 100-200)
** POSIX compatibility is a must (directory permissions, primary and secondary group access, for personal files and working-group files)
** speed is not of great importance; our LAN is 1Gbit.
** Workstations are Debian/Ubuntu Linux (most of them) and OS X; the fewest are Windows. Is it possible (without great hassle) to re-export ceph volumes via NFS/Samba (for OS X, Windows)?
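To make the internal/external traffic split concrete, this is roughly what I had in mind for ceph.conf. The subnets are made up for illustration, and I'm guessing at the option names from the wiki, so please correct me if the syntax is off:

```
# ceph.conf -- sketch, assuming the internal VM-server NICs sit on
# 10.0.0.0/24 and the external LAN on 192.168.1.0/24 (made-up addresses)
[global]
        public network  = 192.168.1.0/24   # client-facing traffic
        cluster network = 10.0.0.0/24      # inter-OSD replication traffic
```

For the NFS re-export I would simply mount ceph on a gateway box and list the mount point in /etc/exports (with an fsid= option, since there is no backing device number), unless there is a better way.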

Our storage servers currently are: 2x Dell R510, one with 12x1TB and the second with 12x2TB, each with 16GB RAM and an i7 quad-core. A third (with 12x2TB) and a fourth are on their way. These would become the OSDs and MDSes, I guess.

What are your opinions? Is ceph considered unstable in all use cases, or would a moderate workload work even in a production environment? Are there any compatibility-breaking updates on the way?

I would rather start using ceph now, even if it is not 100% performant, as long as it is 99% stable and data loss or corruption is not really expected....

Thanks for your thoughts,
udo.
-- 
:: udo waechter - root@xxxxxxxxx :: N 52º16'30.5" E 8º3'10.1"
:: genuine input for your ears: http://auriculabovinari.de 
::                          your eyes: http://ezag.zoide.net
::                          your brain: http://zoide.net



