Re: Newbie questions

On Mon, 1 Oct 2012, Adam Nielsen wrote:
> Hi all,
> 
> I've been investigating cluster filesystems for a while now, and I have a few
> questions about Ceph I hope you don't mind me asking here.  This is in the
> context of using Ceph as a POSIX filesystem and alternative to something like
> NFS.
> 
>   1. Is Ceph stable enough for "real" use yet?  I read that upgrading to v0.48
> required a reformat, which I imagine would be a bit of an issue in a
> production system.  Is this how upgrades are normally done?  Is anyone running
> Ceph in a production environment with real data yet?

The upgrade to v0.48 required a one-time, on-disk conversion the first time
each ceph-osd daemon was started with the new code; no reformat was needed.
It was transparent to the user except that the conversion was slow.  It is
also atypical.

The goal is for all future Ceph upgrades to be transparent and rolling 
(i.e., upgrade one daemon/machine at a time while the cluster remains 
completely available).  This is an absolute requirement for 
upgrades/updates within a stable series (e.g., v0.48 -> v0.48.2), and a 
high priority (and very likely to hold) between major versions (argonaut 
-> bobtail, v0.51 -> v0.52, etc.).  A rough sketch of how such a rolling 
restart can be driven is below.
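To make that concrete, here is a purely hypothetical sketch of a rolling
restart loop you might run after installing new packages on a node.  It
is not an official tool: the osd ids, the systemd unit name ceph-osd@<id>,
and the timeouts are my assumptions (older sysvinit-style setups restart
daemons differently), and it only relies on the 'ceph health' command
having admin credentials on the host.

    #!/usr/bin/env python
    # Hypothetical helper: restart each ceph-osd in turn and wait for the
    # cluster to return to HEALTH_OK before touching the next daemon, so
    # the cluster stays fully available during the upgrade.
    import subprocess
    import sys
    import time

    def cluster_healthy():
        # 'ceph health' prints HEALTH_OK once all PGs are active+clean.
        out = subprocess.check_output(["ceph", "health"])
        return out.decode().strip().startswith("HEALTH_OK")

    def rolling_restart(osd_ids, poll=10, timeout=600):
        for osd_id in osd_ids:
            # Assumes systemd-managed daemons named ceph-osd@<id>.
            subprocess.check_call(
                ["systemctl", "restart", "ceph-osd@%d" % osd_id])
            waited = 0
            while not cluster_healthy():
                if waited >= timeout:
                    sys.exit("osd.%d: cluster did not return to "
                             "HEALTH_OK" % osd_id)
                time.sleep(poll)
                waited += poll

    if __name__ == "__main__":
        # e.g.  ./rolling_restart.py 0 1 2 3
        rolling_restart([int(arg) for arg in sys.argv[1:]])

The same one-daemon-at-a-time idea applies to the mon and mds daemons; the
point is simply never to take down more than the cluster can tolerate at
once.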

>   3. If I have multiple disks in a machine that I can dedicate to Ceph, is it
> better to RAID them and present Ceph with a single filesystem, or do you get
> better results by giving Ceph a filesystem on each disk and letting it look
> after the striping and any faulty disks?

There are arguments to be made for both configurations.  Currently we are 
using (and generally recommending) one ceph-osd per disk.  I would 
consider going the RAID route if you have limited RAM on the node or have 
a high-end RAID array that you want to take full advantage of.
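For what it's worth, the one-ceph-osd-per-disk layout just means one
[osd.N] section (and one data mount) per drive.  A purely illustrative
ceph.conf fragment, where the host name, paths, and device comments are
made up for the example:

    [osd.0]
        host = node1
        ; /dev/sdb mounted at the data path below
        osd data = /var/lib/ceph/osd/ceph-0
        osd journal = /var/lib/ceph/osd/ceph-0/journal

    [osd.1]
        host = node1
        ; /dev/sdc mounted at the data path below
        osd data = /var/lib/ceph/osd/ceph-1
        osd journal = /var/lib/ceph/osd/ceph-1/journal

With the RAID route you would run a single ceph-osd per node on top of the
array instead, and the controller, rather than Ceph, handles the loss of an
individual disk.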

sage

