Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

Awesome!  I had no idea that's where this was pulling it from!  However...

Both of the SSDs do have rotational set to 0 :(

root@ceph05:/sys/block# cat sd{r,s}/queue/rotational
0
0
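
The sysfs check above is easy to script across a whole host. Here's a minimal sketch of reading the same flag the kernel exposes; is_rotational() is a hypothetical helper for illustration, not ceph-volume's actual API:

```python
from pathlib import Path

def is_rotational(dev: str, sysfs_root: str = "/sys/block") -> bool:
    """Return True if the kernel flags the device as rotational (spinning).

    Reads /sys/block/<dev>/queue/rotational, the same file checked above.
    """
    flag = Path(sysfs_root, dev, "queue", "rotational").read_text().strip()
    return flag == "1"

# e.g. is_rotational("sdr") should be False for the SSDs shown above
```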

I found a line in cephadm.log that also agrees; this one is from docker:

"sys_api": {
    "removable": "0",
    "ro": "0",
    "vendor": "HITACHI",
    "model": "HUSSL4020BSS600",
    "rev": "A120",
    "sas_address": "0x5000cca0132441de",
    "sas_device_handle": "0x0021",
    "support_discard": "0",
    "rotational": "0",
    "nr_requests": "256",
    "scheduler_mode": "mq-deadline",
    "partitions": {},
    "sectors": 0,
    "sectorsize": "512",
    "size": 200049647616.0,
    "human_readable_size": "186.31 GB",
    "path": "/dev/sdr",
    "locked": 0
},
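
One thing worth noticing in that excerpt: sys_api encodes "rotational" as the string "0", not the integer 0. A hedged sketch of filtering an inventory dump like this one for non-rotational devices; the inline sample mirrors the log excerpt above, and any comparison treating the field as an int or bool rather than a string could misclassify:

```python
import json

# Sample shaped like the cephadm.log sys_api excerpt above (truncated).
sample = """[
  {"path": "/dev/sdr",
   "sys_api": {"rotational": "0", "model": "HUSSL4020BSS600"}}
]"""

devices = json.loads(sample)
# Compare against the string "0", matching how sys_api serializes the flag.
ssds = [d["path"] for d in devices if d["sys_api"]["rotational"] == "0"]
print(ssds)  # ['/dev/sdr']
```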

lsscsi -twigs output:
[4:0:15:0]   disk    sas:0x5000cca0132441de          0  /dev/sdr   SHITACHI_HUSSL4020BSS600_XTVMY4KA  /dev/sg19  200GB
[4:0:16:0]   disk    sas:0x5000cca013243b96          0  /dev/sds   SHITACHI_HUSSL4020BSS600_XTVMXRLA  /dev/sg20  200GB

This is interesting, too.  A bit further down in the same docker line in
cephadm.log, there's an empty array for rejected_reasons:
"available": true,
"rejected_reasons": [],
"device_id": "HUSSL4020BSS600_5000cca0132441dc",
"lvs": []

I wonder what's causing this!  Surely there's a reason hiding in here
somewhere.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
