Re: New User Q: General config, massive temporary OSD loss

On Tue, Jun 18, 2013 at 10:34 AM, Edward Huyer <erhvks@xxxxxxx> wrote:
> Hi, I'm an admin for the School of Interactive Games and Media at RIT, and
> I'm looking into using Ceph to reorganize/consolidate the storage my
> department uses. I've read a lot of documentation and comments/discussion
> on the web, but I'm not 100% sure what I'm looking at doing is a good use
> of Ceph. I was hoping to get some input on that, as well as an answer to a
> more specific question about OSDs going offline.
>
> First questions: Are there obvious flaws or concerns with the following
> configuration that I should be aware of? Does it even make sense to try
> to use Ceph here? Anything else I should know, think about, or do instead
> of the above?

It looks basically fine to me. I'm a little surprised you want to pay
the cost of networking and replication for 44TB of storage (instead of
just setting up some arrays and exporting them), but I don't know how
to admin anything except for my little desktop. ;)

The big thing to watch out for is that if you're planning to use the
kernel RBD driver, you want a fairly new kernel. (For VMs this
obviously isn't a problem, since the hypervisor is managing the
storage.)
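
For reference, mapping an image with the kernel driver looks roughly
like this (just a sketch -- the pool "rbd" and image name "vmdisk" are
placeholders for whatever you actually set up):

    # create a 10 GB image in the default "rbd" pool
    rbd create vmdisk --size 10240 --pool rbd
    # map it through the kernel driver; the device shows up as
    # /dev/rbd0 (with a /dev/rbd/<pool>/<image> symlink via udev)
    rbd map vmdisk --pool rbd
    # put a filesystem on it and mount as usual
    mkfs -t ext4 /dev/rbd/rbd/vmdisk

On an old kernel the map step is where you'll run into trouble, which
is why the kernel version matters.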

> My more specific question relates to the two RAID controllers in the
> MD3200 and my intended 2- or 3-copy replication (plus striping): What
> happens if all OSDs holding copies of a piece of data go down for a
> period of time, but then come back "intact" (e.g. after being moved to a
> different controller)?

The specifics of what data will migrate where depend on how you've set
up your CRUSH map, when you update the CRUSH locations, etc., but if
you move an OSD it will fully participate in recovery and can be used
as the authoritative source for its data.
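
As a concrete example, if you physically move the disks behind osd.12
to another host, you'd tell CRUSH about the new location with something
like the following (a sketch -- osd.12, the weight, and the bucket
names are placeholders, and the exact syntax varies a bit between Ceph
versions):

    # restate the OSD's weight and its new position in the hierarchy
    ceph osd crush set osd.12 1.0 root=default host=newhost

    # then watch recovery/backfill proceed
    ceph -w

Once the cluster sees the OSD up and in at its new location, its PGs
peer and it can serve as the authoritative copy for anything the other
replicas missed while it was down.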
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com