On Mon, Jun 25, 2018 at 12:34 AM Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
On Fri, Jun 22, 2018 at 10:44 PM Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>
> On Fri, Jun 22, 2018 at 6:22 AM Sergey Malinin <hell@xxxxxxxxxxx> wrote:
>>
>> From http://docs.ceph.com/docs/mimic/rados/troubleshooting/troubleshooting-pg/ :
>>
>> "Now 1 knows that these object exist, but there is no live ceph-osd who has a copy. In this case, IO to those objects will block, and the cluster will hope that the failed node comes back soon; this is assumed to be preferable to returning an IO error to the user."
>
>
> This is definitely the default and the way I recommend you run a cluster. But do keep in mind that other layers in your stack sometimes have their own timeouts and will start throwing errors if the Ceph library doesn't complete an IO quickly enough. :)
Right, that's understood. This is the nicer behaviour of virtio-blk vs
virtio-scsi: virtio-scsi has a guest-side timeout, while virtio-blk just
blocks forever.
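
For anyone chasing the same thing, here is a minimal sketch of where that
guest-side timeout lives (the device name is just a placeholder):
virtio-scsi disks expose a per-device timeout in sysfs, while virtio-blk
has no equivalent knob, so a stuck IO simply blocks.

    # Sketch only: 'sda' is a hypothetical virtio-scsi disk in the guest.
    # virtio-blk devices (vdX) have no device/timeout attribute, which is
    # why a hung Ceph IO blocks forever on virtio-blk.
    from pathlib import Path

    timeout_file = Path('/sys/block/sda/device/timeout')

    # Default is 30 seconds on SCSI disks; after that the guest kernel
    # starts error handling and IO errors can reach the filesystem.
    print('current timeout (s):', timeout_file.read_text().strip())

    # Raising it (needs root) gives the cluster more time to recover
    # before errors are returned up the stack.
    timeout_file.write_text('120\n')
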
Across 5000 attached volumes we saw around 12 of these IO errors, and this
was the first time in 5 years of upgrades that an IO error had happened at all...
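
At the librados layer there is also a way to avoid blocking forever; a
minimal python-rados sketch (the conf path, pool name and object name
below are just placeholders): the rados_osd_op_timeout and
rados_mon_op_timeout client options make a hung IO come back as an error
instead of hanging the caller indefinitely.

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    # 0 (the default) means block forever; a nonzero value makes
    # librados give up and return an error after that many seconds.
    cluster.conf_set('rados_osd_op_timeout', '30')
    cluster.conf_set('rados_mon_op_timeout', '30')
    cluster.connect()

    ioctx = cluster.open_ioctx('rbd')   # placeholder pool name
    try:
        # While the object is unfound this read blocks by default;
        # with the timeouts above it errors out instead.
        data = ioctx.read('some-object')
    except rados.Error as e:
        print('IO failed instead of blocking:', e)
    finally:
        ioctx.close()
        cluster.shutdown()
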
Did you ever get more info about this? A report of an unexpected EIO being returned to clients turned up on the mailing list today (http://tracker.ceph.com/issues/24875), but in a brief poke around I didn't see anything suggesting that missing objects can cause it.
-Greg
-- dan
> -Greg
>
>>
>>
>> On 22.06.2018, at 16:16, Dan van der Ster <dan@xxxxxxxxxxxxxx> wrote:
>>
>> Hi all,
>>
>> Quick question: does an IO to an unfound object result in an IO
>> error, or should the IO block?
>>
>> During a Jewel to Luminous upgrade, some PGs passed through a state
>> with unfound objects for a few seconds, and this seems to match the
>> times when we saw a few IO errors on RBD-attached volumes.
>>
>> Wondering what the correct behaviour is here...
>>
>> Cheers, Dan
>
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com