Slow requests with a Ceph OSD deadlock?

Hi all!

My Ceph version is 10.2.11, and I use RGW with an EC (7+3) pool.
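
For reference, an EC (7+3) data pool of this kind would have been created along these lines; a minimal sketch, assuming a profile named ec-7-3 and the default RGW data pool (the profile name and PG count are placeholders):

    # Define an erasure-code profile with k=7 data chunks and m=3 coding chunks
    ceph osd erasure-code-profile set ec-7-3 k=7 m=3
    # Create the RGW data pool on that profile (PG count of 1024 is a placeholder)
    ceph osd pool create default.rgw.buckets.data 1024 1024 erasure ec-7-3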

When multiple clients read from and write to one RGW bucket, I see slow requests. The bucket has 200 index shards.

It looks like there is a deadlock on the OSDs?
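
For context, the shard count can be confirmed from the bucket instance metadata; a minimal sketch, assuming a bucket named mybucket (the bucket name and instance id are placeholders):

    # Show bucket stats; note the bucket instance "id" in the output
    radosgw-admin bucket stats --bucket=mybucket
    # The instance metadata includes "num_shards" for the bucket index
    radosgw-admin metadata get bucket.instance:mybucket:<instance-id>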

Thanks for any help!

The blocked ops all show a log similar to the one below, and the blocked OSDs are not always the same ones.

"description": "osd_op(client.3147989.0:2782587465 13.36f21cde .dir.517af746-28f1-454c-ba41-0c4fd51af270.896917.11.99 [call rgw.bucket_complete_op] snapc 0=[] ack+ondisk+write+known_if_redirected e26026)",
            "initiated_at": "2019-09-04 13:38:48.089766",
            "age": 3520.459786,
            "duration": 3521.432087,
            "type_data": [
                "delayed",
                {
                    "client": "client.3147989",
                    "tid": 2782587465
                },
                [
                    {
                        "time": "2019-09-04 13:38:48.089766",
                        "event": "initiated"
                    },
                    {
                        "time": "2019-09-04 13:38:48.089794",
                        "event": "queued_for_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:48.180941",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:48.180962",
                        "event": "waiting for rw locks"
                    },
                    {
                        "time": "2019-09-04 13:38:48.430598",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:48.430616",
                        "event": "waiting for rw locks"
                    },
                    {
                        "time": "2019-09-04 13:38:49.150673",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:49.150691",
                        "event": "waiting for rw locks"
                    },
                    {
                        "time": "2019-09-04 13:38:51.421887",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:51.421904",
                        "event": "waiting for rw locks"
                    },
                    {
                        "time": "2019-09-04 13:38:51.990943",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:51.990961",
                        "event": "waiting for rw locks"
                    },
                    {
                        "time": "2019-09-04 13:38:54.343921",
                        "event": "reached_pg"
                    },
                    {
                        "time": "2019-09-04 13:38:54.343938",
                        "event": "waiting for rw locks"
                    }
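
For reference, a per-op event dump like the one above comes from the OSD's admin socket; a minimal sketch of the commands involved, assuming the blocked OSD is osd.12 (a placeholder id):

    # Ops currently blocked or in flight on one OSD
    ceph daemon osd.12 dump_ops_in_flight
    # Recently completed slow ops, with the same per-event timeline
    ceph daemon osd.12 dump_historic_ops
    # Cluster-wide view of which requests are reported as slow/blocked
    ceph health detail

In the excerpt, the op keeps cycling between "reached_pg" and "waiting for rw locks", which suggests it is being requeued repeatedly while another op holds the rw lock on the same .dir.* bucket index object.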



 
