Re: librados behavior when some OSDs are unreachable

On 1/28/20 7:03 PM, David DELON wrote:
> Hi, 
> 
> I had a problem with one application (Seafile) which uses a Ceph backend via librados. 
> The corresponding pools are defined with size=3, and each object copy is on a different host. 
> The cluster health is OK: all the monitors see all the hosts. 
> 
> Now, a network problem happens between my RADOS client and a single host. 
> When my application/client tries to access an object located on the unreachable host (the primary for the corresponding PG), 
> it does not fail over to another copy/host (and my application later crashes because, after a while with many pending requests, too many files are open on Linux). 
> Is this the normal behavior? My storage is resilient (great!) but not its access...

Yes. Reads and writes for a PG are always served by the primary OSD.
That's how Ceph is designed.
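You can see which OSD is the acting primary for any given object by querying the osdmap. A sketch, assuming a pool named `seafile-data` and an object named `myobject` (both hypothetical names):

```shell
# Show the PG an object maps to and the OSDs that serve it.
# The first OSD in the "acting" set is the primary handling all I/O.
ceph osd map seafile-data myobject
```

The output shows the pgid plus the up and acting sets; all client reads and writes go to the first OSD listed in the acting set.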


> If, on that host, I stop the OSDs or set their primary affinity to zero, the problem goes away, 
> so it seems librados just checks and trusts the osdmap. 
> A tcpdump shows the client keeps retrying the same OSD without ever timing out. 
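For reference, the workaround you describe maps to standard commands (the OSD id 12 below is just an example):

```shell
# Route I/O away from a specific OSD without taking it out of the cluster:
# with primary affinity 0, CRUSH picks another replica as primary for its PGs.
ceph osd primary-affinity osd.12 0

# Restore it once the network problem is fixed:
ceph osd primary-affinity osd.12 1
```

Note this only changes which replica acts as primary; the OSD keeps its data and stays in the cluster.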

There is a network issue and that's the root cause. Ceph can't fix that
for you. You will need to make sure the network is functioning.
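What you can do on the client side is bound how long librados waits on an OSD op, so your application gets an error back instead of blocking forever and leaking file descriptors. A minimal ceph.conf sketch; the 30-second values are arbitrary examples (`rados_osd_op_timeout` defaults to 0, meaning wait indefinitely):

```ini
[client]
# Fail an OSD operation after 30s instead of blocking indefinitely
rados_osd_op_timeout = 30
# Same bound for operations sent to the monitors
rados_mon_op_timeout = 30
```

This doesn't make the client fail over to another replica; it only turns an indefinite hang into a timely error your application can handle.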

> 
> It can easily be reproduced by defining a netfilter rule on a host to drop packets coming from the client. 
> Note: I am still on Luminous (on both the client and cluster sides). 

Again, this is exactly how Ceph works :-)

The primary OSD serves reads and writes. Only when it is marked down
is the client informed through an osdmap update, after which it moves
on to the new primary OSD.

Wido

> 
> Thanks for reading. 
> 
> D. 
> 
> 
> 
> 
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
