Re: OSD bug: extra null list makes json.loads fail

Hi Watkins,
   The bug can still be hit on ceph v14.0.1-2574-gc55dc78903:
       root@sceph7:~# ceph --version
       ceph version 14.0.1-2574-gc55dc78903 (c55dc7890316fa1061ce2d34c80270aaa37073c9) nautilus (dev)

       root@sceph7:~# ceph osd crush tree --format=json-pretty
       2019-01-25 20:33:57.630 7f9dc3153700 -1 WARNING: all dangerous and experimental features are enabled.
       2019-01-25 20:33:57.650 7f9dc3153700 -1 WARNING: all dangerous and experimental features are enabled.
       
       [
           {
               "id": -1,
               "name": "default",
               "type": "root",
               "type_id": 10,
               "children": [
                   -2
               ]
           },
           {
               "id": -2,
               "name": "sceph7",
               "type": "host",
               "type_id": 1,
               "pool_weights": {},
               "children": [
                   0
               ]
           },
           {
               "id": 0,
               "device_class": "ssd",
               "name": "osd.0",
               "type": "osd",
               "type_id": 0,
               "crush_weight": 1.000000,
               "depth": 2,
               "pool_weights": {}
           }
       ]
       []
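       For reference, here is a minimal Python sketch of why the trailing "[]"
       breaks consumers (the inlined sample output is a shortened, hypothetical
       version of the tree above): json.loads() rejects the string outright,
       while json.JSONDecoder().raw_decode() can be used as a workaround to
       parse only the first JSON value and expose the leftover garbage.

       ```python
       import json

       # Simulated output of `ceph osd crush tree --format=json`: a valid
       # JSON array followed by the buggy trailing empty list.
       output = '[{"id": -1, "name": "default", "type": "root"}][]'

       # json.loads() fails because of the extra "[]" after the array.
       try:
           json.loads(output)
       except json.JSONDecodeError as e:
           print("json.loads failed:", e)

       # Workaround: raw_decode() parses the first complete JSON value
       # and returns the index where it stopped.
       tree, end = json.JSONDecoder().raw_decode(output)
       print(tree[0]["name"])   # -> default
       print(output[end:])      # -> [] (the buggy trailing null list)
       ```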

B.R.
Changcheng

On 19:59 Thu 24 Jan, Noah Watkins wrote:
> The output I posted was from v13.2.0. Sorry that wasn't clear. It
> should be fixed in 13.2.4. Please let me know if you find otherwise.
> 
> On Thu, Jan 24, 2019 at 5:28 PM Liu, Changcheng
> <changcheng.liu@xxxxxxxxx> wrote:
> >
> > Hi Watkins,
> > >     You can use the json-pretty format to show the "ceph osd crush tree" result.
> > >         According to your result on 13.2.4, it's not fixed yet.
> > >         I've re-formatted your result below.
> >
> > On 10:39 Thu 24 Jan, Noah Watkins wrote:
> > > This seems to have been fixed in at least v13.2.4. I was able to
> > > reproduce it in 13.2.0.
> > >
> > > [root@minikube /]# ceph osd crush tree --format=json
> > > [{"id":-1,"name":"default","type":"root","type_id":10,"children":[-2]},{"id":-2,"name":"minikube","type":"host","type_id":1,"pool_weights":{},"children":[0]},{"id":0,"device_class":"hdd","name":"osd.0","type":"osd","type_id":0,"crush_weight":0.014999,"depth":2,"pool_weights":{}}][][root@minikube
> > > /]
> >
> >      [
> >         {"id":-1,"name":"default","type":"root","type_id":10,"children":[-2]},
> >         {"id":-2,"name":"minikube","type":"host","type_id":1,"pool_weights":{},"children":[0]},
> >         {"id":0,"device_class":"hdd","name":"osd.0","type":"osd","type_id":0,"crush_weight":0.014999,"depth":2,"pool_weights":{}}
> >      ]
> >      []
> > >      ==> buggy null list at the end
> > >
> > >
> > >
> > > On Thu, Jan 24, 2019 at 5:16 AM Alfredo Deza <adeza@xxxxxxxxxx> wrote:
> > > >
> > > > On Thu, Jan 24, 2019 at 7:27 AM Liu, Changcheng
> > > > <changcheng.liu@xxxxxxxxx> wrote:
> > > > >
> > > > > Hi Watkins,
> > > > >     controller-0:~$ ceph --version
> > > > >     ceph version 13.2.2 (02899bfda814146b021136e9d8e80eba494e1126) mimic (stable)
> > > > >     controller-0:~$ ceph osd crush tree --format=json-pretty
> > > > >
> > > > >     [
> > > > >         {
> > > > >             "id": -1,
> > > > >             "name": "storage-tier",
> > > > >             "type": "root",
> > > > >             "type_id": 10,
> > > > >             "children": [
> > > > >                 -3
> > > > >             ]
> > > > >         },
> > > > >         {
> > > > >             "id": -3,
> > > > >             "name": "group-0",
> > > > >             "type": "chassis",
> > > > >             "type_id": 2,
> > > > >             "pool_weights": {},
> > > > >             "children": [
> > > > >                 -5,
> > > > >                 -4
> > > > >             ]
> > > > >         },
> > > > >         {
> > > > >             "id": -4,
> > > > >             "name": "storage-0",
> > > > >             "type": "host",
> > > > >             "type_id": 1,
> > > > >             "pool_weights": {},
> > > > >             "children": []
> > > > >         },
> > > > >         {
> > > > >             "id": -5,
> > > > >             "name": "storage-1",
> > > > >             "type": "host",
> > > > >             "type_id": 1,
> > > > >             "pool_weights": {},
> > > > >             "children": []
> > > > >         }
> > > > >     ]
> > > > >     []
> > > > >
> > > > > B.R.
> > > > > Changcheng
> > > > >
> > > > > On 16:27 Wed 23 Jan, Noah Watkins wrote:
> > > > > > On Tue, Jan 22, 2019 at 8:54 PM Liu, Changcheng
> > > > > > <changcheng.liu@xxxxxxxxx> wrote:
> > > > > > >
> > > > > > >           2) result:
> > > > > > >              [{"id":-1,"name":"default","type":"root","type_id":10,"children":[]}][]
> > > > > >
> > > > > > What version of Ceph are you using? Also.. is that the entire command
> > > > > > output? For example, on the latest version I would expect there to be
> > > > > > a "nodes" item:
> > > >
> > > > Seems like this has been the case for a while; I see it too on the
> > > > older 13.2.0 version.
> > > >
> > > > > >
> > > > > > {
> > > > > >     "nodes": [
> > > > > >         {
> > > > > >             "id": -1,
> > > > > >             "name": "default",
> > > > > >             "type": "root",
> > > > > >             "type_id": 10,
> > > > > >             "children": [
> > > > > >                 -3
> > > > > >             ]
> > > > > > ...
> > > > > >     ],
> > > > > >     "stray": []
> > > > > > }


