Re: RGW hung, 2 OSDs using 100% CPU

I made a typo in my timeline too.

It should read:
At 14:14:00, I started OSD 4, and waited for ceph -w to stabilize.  CPU usage was normal.
At 14:15:10, I ran radosgw-admin --name=client.radosgw.ceph1c regions list && radosgw-admin --name=client.radosgw.ceph1c regionmap get.  It returned successfully.
At 14:16:00, I started OSD 8, and waited for ceph -w to stabilize.  CPU usage started out normal, but went to 100% before 14:16:40.
At 14:17:25, I ran radosgw-admin --name=client.radosgw.ceph1c regions list && radosgw-admin --name=client.radosgw.ceph1c regionmap get.  regions list hung, and I killed it.
At 14:18:15, I stopped ceph-osd id=8.
At 14:18:45, I ran radosgw-admin --name=client.radosgw.ceph1c regions list && radosgw-admin --name=client.radosgw.ceph1c regionmap get.  It returned successfully.
At 14:19:10, I stopped ceph-osd id=4.

Some newlines were added.  The only material change is the last line, changing to id=4.

Craig Lewis
Senior Systems Engineer
Office +1.714.602.1309
Email clewis@xxxxxxxxxxxxxxxxxx


On 3/26/14 15:04 , Craig Lewis wrote:
At 14:14:00, I started OSD 4, and waited for ceph-w to stabilize.  CPU usage was normal.
At 14:15:10, I ran radosgw-admin --name=client.radosgw.ceph1c regions list && radosgw-admin --name=client.radosgw.ceph1c regionmap get.  It returned successfully.
At 14:16:00, I started OSD 8, and waited for ceph -w to stabilize.  CPU usage started out normal, but went to 100% before 14:16:40.
At 14:17:25, I ran radosgw-admin --name=client.radosgw.ceph1c regions list && radosgw-admin --name=client.radosgw.ceph1c regionmap get.  regions list hung, and I killed it.
At 14:18:15, I stopped ceph-osd id=8.
At 14:18:45, I ran radosgw-admin --name=client.radosgw.ceph1c regions list && radosgw-admin --name=client.radosgw.ceph1c regionmap get.  It returned successfully.
At 14:19:10, I stopped ceph-osd id=8.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
