Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

Hi,

I read your first email again and noticed that ceph-volume already identifies the drives sdr and sds as non-rotational and as available. That would also explain the empty rejected_reasons field, because they are not rejected (at this stage, at least). Where do you see the information that one SSD is identified as rotational?
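
If it helps, you could compare the kernel's view with what the orchestrator has recorded; a rough sketch only (device and host names taken from your mails, adjust as needed):

# what the kernel reports (0 = non-rotational)
cat /sys/block/sd{r,s}/queue/rotational

# what the orchestrator has recorded for that host
ceph orch device ls ceph05 --format json

# what ceph-volume itself reports, run via the cephadm container
cephadm shell -- ceph-volume inventory /dev/sdr

If all three agree that the SSDs are non-rotational, the spec filter itself would be the next place to look.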

The next thing I'm wondering about is the ratio: if I counted correctly, you are trying to fit 22 DBs onto one SSD of 186 GB, which leaves only about 8 GB per DB. What is the use case for this cluster? For RBD such a small DB might be enough, but for S3 it most likely won't be, depending on the actual workload of course. And why do you want to split WAL and DB at all if both SSDs are identical? You only benefit from a separate WAL device if it is faster than the DB device. This doesn't solve the problem you mentioned, of course, but the split doesn't really make sense. I would recommend using both SSDs for both WAL and DB; that improves your ratio (11 OSDs per SSD) and also reduces the impact of a failing SSD (11 OSDs affected instead of 22 if one SSD fails).
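
If you drop the separate WAL devices, the spec could look roughly like the sketch below; untested, and service_id, host_pattern and the filters are only placeholders that would have to match your setup:

service_type: osd
service_id: osd_hdd_with_ssd_db
placement:
  host_pattern: 'ceph05'
spec:
  data_devices:
    rotational: 1
  db_devices:
    rotational: 0
  # no wal_devices section: the WAL is colocated with the DB on the SSDs

Without an explicit wal_devices section the WAL simply lives next to the DB, so the DBs (and WALs) should end up spread across the two SSDs, about 11 per device.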

Regards,
Eugen


Quoting Chris <hagfelsh@xxxxxxxxx>:

Awesome!  I had no idea that's where this was pulling it from!  However...

Both of the SSDs do have rotational set to 0 :(

root@ceph05:/sys/block# cat sd{r,s}/queue/rotational
0
0

I found a line in cephadm.log that also agrees; this one is from docker:

"sys_api": {
    "removable": "0",
    "ro": "0",
    "vendor": "HITACHI",
    "model": "HUSSL4020BSS600",
    "rev": "A120",
    "sas_address": "0x5000cca0132441de",
    "sas_device_handle": "0x0021",
    "support_discard": "0",
    "rotational": "0",
    "nr_requests": "256",
    "scheduler_mode": "mq-deadline",
    "partitions": {},
    "sectors": 0,
    "sectorsize": "512",
    "size": 200049647616.0,
    "human_readable_size": "186.31 GB",
    "path": "/dev/sdr",
    "locked": 0
},

lsscsi -twigs output:

[4:0:15:0]   disk   sas:0x5000cca0132441de   0   /dev/sdr   SHITACHI_HUSSL4020BSS600_XTVMY4KA   /dev/sg19   200GB
[4:0:16:0]   disk   sas:0x5000cca013243b96   0   /dev/sds   SHITACHI_HUSSL4020BSS600_XTVMXRLA   /dev/sg20   200GB

This is interesting, too.  Just a bit further down in the same docker output in
cephadm.log, there's an empty array for rejected_reasons:
"available": true,
"rejected_reasons": [],
"device_id": "HUSSL4020BSS600_5000cca0132441dc",
"lvs": []

I wonder what's causing this!  Surely there's a reason hiding in here
somewhere.
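
Maybe a dry run of the spec will show what the orchestrator actually intends to do with these devices; something like this (the spec file name is just a placeholder for whatever I applied):

ceph orch apply -i osd_spec.yml --dry-run
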
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


