Re: The OSD can be “down” but still “in”.

Hi,

If the OSD represents the primary one for a PG, then all IO will be
stopped, which may lead to application failure...

No, that's not how it works. You have an acting set of OSDs for a PG, typically 3 OSDs in a replicated pool. If the primary OSD goes down, the secondary becomes the primary immediately and serves client requests. I recommend reading the docs [1] to get a better understanding of the workflow, or setting up a practice environment to test failure scenarios and watch what happens if an OSD, host, rack etc. fails.
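That failover can be sketched, very loosely, like this (a toy model, not Ceph code; the OSD ids and the helper name are made-up assumptions, and index 0 of the acting set stands for the primary):

```python
# Toy model (not Ceph source): the acting set of a PG after its primary
# fails. Convention assumed here: index 0 of the acting set is the primary.

def acting_set_after_failure(acting_set, down_osds):
    """Drop down OSDs from the acting set; the new head becomes primary."""
    return [osd for osd in acting_set if osd not in down_osds]

acting = [4, 7, 2]                      # 3-way replicated pool, osd.4 primary
survivors = acting_set_after_failure(acting, down_osds={4})
print(survivors[0])                     # osd.7, the former secondary, is now
                                        # primary and keeps serving client IO
```

The point of the model is only that client IO does not stop when the primary dies: the next surviving member of the acting set takes over.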

Regards,
Eugen


[1] http://docs.ceph.com/docs/master/architecture/#peering-and-sets


Quoting M Ranga Swami Reddy <swamireddy@xxxxxxxxx>:

Thanks for the reply.
If the OSD represents the primary one for a PG, then all IO will be
stopped, which may lead to application failure...



On Tue, Jan 22, 2019 at 5:32 PM Matthew Vernon <mv3@xxxxxxxxxxxx> wrote:

Hi,

On 22/01/2019 10:02, M Ranga Swami Reddy wrote:
> Hello - If an OSD is shown as down but still in the "in" state, what
> will happen with write/read operations on this down OSD?

It depends ;-)

In a typical 3-way replicated setup with min_size 2, writes to placement
groups on that OSD will still go ahead: once 2 replicas are written OK,
the write completes. When the OSD comes back up, those writes are then
replicated to it. If it stays down long enough to be marked out, the pgs
on that OSD will be replicated elsewhere.

If you had min_size 3 as well, then writes would block until the OSD was
back up (or marked out and the pgs replicated to another OSD).
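The min_size rule above can be sketched as a toy predicate (an illustration of the behaviour described, not Ceph code; the function name is made up):

```python
# Toy model: a client write to a PG completes only while at least
# min_size replicas in the acting set are up to acknowledge it.

def write_proceeds(min_size, up_replicas):
    """True if a client write can complete, False if it blocks."""
    return up_replicas >= min_size

# size=3 pool with one OSD down, so 2 replicas are up:
print(write_proceeds(2, 2))  # min_size 2: the write goes ahead -> True
print(write_proceeds(3, 2))  # min_size 3: writes block         -> False
```

This is why min_size 2 keeps IO flowing through a single-OSD failure, while min_size 3 trades that availability for requiring every replica to acknowledge.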

Regards,

Matthew


--
 The Wellcome Sanger Institute is operated by Genome Research
 Limited, a charity registered in England with number 1021457 and a
 company registered in England with number 2742969, whose registered
 office is 215 Euston Road, London, NW1 2BE.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





