Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

I also just ran into what seems to be the same problem Chris did. Despite every indicator visible to me saying my NVMe drive is non-rotational (including /sys/block/nvme0n1/queue/rotational), the Orchestrator would not touch it until I specified the drive by model, roughly as sketched below.
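For anyone else hitting this, a rough sketch of the model-based workaround (the commands only show where to look; the model string in the spec is a placeholder, substitute whatever your drive actually reports, e.g. in the output of "ceph orch device ls"):

# what the kernel reports (0 = non-rotational)
cat /sys/block/nvme0n1/queue/rotational

# the orchestrator's inventory of the same devices
ceph orch device ls

service_type: osd
service_id: osd_nvme_by_model
placement:
  host_pattern: '*'
data_devices:
  model: 'MY-NVME-MODEL'    # placeholder, not a real model string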

-----Original Message-----
From: Eugen Block <eblock@xxxxxx> 
Sent: Monday, September 27, 2021 10:04 AM
To: ceph-users@xxxxxxx
Subject:  Re: Orchestrator is internally ignoring applying a spec against SSDs, apparently determining they're rotational.

Hi,

just remove the wal_devices section from your spec; ceph-volume will automatically put the WAL on the same SSD as the DB. If you want to use both SSDs for DB, I would also remove the "limit" filter so ceph-volume can use both SSDs for block.db. You don't seem to have more than those two SSDs per node, so you don't need a limit at all, just the rotational filter.
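For reference, a sketch of the spec quoted below trimmed down to that (untested; it simply drops the wal_devices section and the limit filter):

service_type: osd
service_id: osd_spec_default
placement:
  host_pattern: '*'
data_devices:
  rotational: 1
db_devices:
  rotational: 0

With no wal_devices section ceph-volume keeps the WAL next to the DB on the SSD, and without the limit both SSDs are used for block.db, so each SSD would carry roughly 11 DBs (about 17 GB each) instead of all 22 (about 8 GB each).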


Zitat von Chris <hagfelsh@xxxxxxxxx>:

> The lines I cited as examples of cephadm misinterpreting rotational 
> states were pulled from the mgr container stderr, acquired via docker 
> logs <ceph mgr container>.
>
> Your comments on my deployment strategy are very helpful--I figured
> (incorrectly?) that having the db & wal on faster drives would benefit 
> the overall throughput of the cluster.
>
> The use case for this is just warm storage.  I built it out of scrap 
> parts in the lab, so  our expectations of its performance should be 
> carefully managed :)
>
> That being said, I'd be happy to purge the cluster again and apply 
> what you're describing: both WAL & DB on both SSDs.  I haven't figured 
> out how to assign both roles to them via an osd spec yaml.
>
> On the OSD Spec page ( https://docs.ceph.com/en/latest/cephadm/osd/ ), 
> I see mention of what I was trying to do: db_device and wal_device, 
> but nothing that would permit both to be simultaneously assigned to the SSDs.
>
> Here's the spec I was working on to get this far:
> service_type: osd
> service_id: osd_spec_default
> placement:
>   host_pattern: '*'
> data_devices:
>   rotational: 1
> db_devices:
>   rotational: 0
>   limit: 1
> wal_devices:
>   rotational: 0
>   limit: 1
>
> Do you know what to change to apply the plan you described?  I'd be 
> happy to try it!
>
>
> From: Eugen Block <eblock@xxxxxx>
> To: ceph-users@xxxxxxx
> Cc:
> Bcc:
> Date: Mon, 27 Sep 2021 10:06:43 +0000
> Subject:  Re: Orchestrator is internally ignoring applying 
> a spec against SSDs, apparently determining they're rotational.
> Hi,
>
> I read your first email again and noticed that ceph-volume already 
> identifies the drives sdr and sds as non-rotational and as available.
> That would also explain the empty rejected_reasons field, because they 
> are not rejected (at this stage, at least). Where did you read that 
> one SSD is identified as rotational?
>
> The next thing I'm wondering about is the ratio: if I counted 
> correctly, you're trying to fit 22 DBs onto one 186 GB SSD, which 
> would leave each DB only about 8 GB. What is your use case for this 
> cluster? For RBD such a small DB might be enough, but for S3 it most 
> likely won't be, depending on the actual workload of course.
> And why do you want to split WAL and DB at all if both SSDs are 
> identical? You only benefit from a separate WAL device if it is 
> faster than the DB device. This doesn't solve the problem you 
> mentioned, of course, but it doesn't really make sense either. I 
> would recommend using both SSDs for both WAL and DB; this improves 
> your ratio (11 OSDs per SSD) and also reduces the impact of a 
> failing SSD (11 OSDs affected instead of 22 if one SSD fails).
>
> Regards,
> Eugen
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx



_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


