Re: How to set up bluestore manually?

Hi!

Thanks for the super-fast response!

That did work somehow... Here's my command line (as BlueStore seems to still require a journal, I repurposed the SSD partitions for it and put the DB/WAL on the spinning disk):

   ceph-deploy osd create --bluestore <hostname>:/dev/sdc:/dev/mapper/cl-ceph_journal_sdc
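
(If I read Vasu's earlier mail correctly, the same thing could also be done with the explicit flags, roughly like this; the DB/WAL device paths here are just placeholders for illustration, not my actual setup:)

   ceph-deploy osd create --bluestore --block-db /dev/mapper/cl-ceph_db_sdc --block-wal /dev/mapper/cl-ceph_wal_sdc <hostname>:/dev/sdc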

But it created two (!) new OSDs instead of one, and placed them under the default CRUSH rule (thus triggering data movement in my cluster; they should be under a different rule)...
Did I do something wrong, or are the two OSDs part of the bluestore concept? If so, how do I handle them in the CRUSH map? (I have different categories of OSD hosts for different use cases, split by appropriate CRUSH rules.)
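
My current (untested) idea for keeping new OSDs out of the default tree is roughly the following; the root name, hostname and OSD id are placeholders, and the target root/host buckets would have to exist already (e.g. created with ceph osd crush add-bucket):

   # in ceph.conf, stop OSDs from placing themselves in the CRUSH map on startup:
   [osd]
       osd crush update on start = false

   # then add each new OSD under the intended root/host by hand:
   ceph osd crush add osd.12 1.0 root=<my-root> host=<hostname>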

Thanks

Martin

-----Original Message-----
From: Loris Cuoghi [mailto:loris.cuoghi@xxxxxxxxxxxxxxx] 
Sent: Monday, 3 July 2017 13:39
To: Martin Emrich <martin.emrich@xxxxxxxxxxx>
Cc: Vasu Kulkarni <vakulkar@xxxxxxxxxx>; ceph-users@xxxxxxxxxxxxxx
Subject: Re: How to set up bluestore manually?

On Mon, 3 Jul 2017 11:30:04 +0000,
Martin Emrich <martin.emrich@xxxxxxxxxxx> wrote:

> Hi!
> 
> Thanks for the hint, but I get this error:
> 
> [ceph_deploy][ERROR ] ConfigError: Cannot load config: [Errno 2] No 
> such file or directory: 'ceph.conf'; has `ceph-deploy new` been run in 
> this directory?
> 
> Obviously, ceph-deploy only works if the cluster has been managed with 
> ceph-deploy all along (and I won’t risk messing with my cluster by 
> attempting to “retrofit” ceph-deploy to it)… I’ll set up a single-VM 
> cluster to squeeze out the necessary commands and check back…

No need to start with ceph-deploy in order to use it :)

You can:

- create a working directory (e.g. ~/ceph-deploy)
- cd ~/ceph-deploy
- copy ceph.conf from /etc/ceph/ceph.conf into the working directory
- execute ceph-deploy gatherkeys to obtain the necessary keys
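
In commands, roughly (the monitor hostname is a placeholder):

   mkdir ~/ceph-deploy && cd ~/ceph-deploy
   cp /etc/ceph/ceph.conf .
   ceph-deploy gatherkeys <mon-hostname>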

et voilà ;)

Give it a try!

-- Loris

> 
> Regards,
> 
> Martin
> 
> From: Vasu Kulkarni [mailto:vakulkar@xxxxxxxxxx]
> Sent: Friday, 30 June 2017 17:58
> To: Martin Emrich <martin.emrich@xxxxxxxxxxx>
> Cc: ceph-users@xxxxxxxxxxxxxx
> Subject: Re: How to set up bluestore manually?
> 
> 
> 
> On Fri, Jun 30, 2017 at 8:31 AM, Martin Emrich 
> <martin.emrich@xxxxxxxxxxx> wrote:
> Hi!
> 
> I’d like to set up new OSDs with bluestore: the real data (“block”) on 
> a spinning disk, and DB+WAL on a SSD partition.
> 
> But I do not use ceph-deploy, and have never used ceph-disk (I set up the 
> filestore OSDs manually). Google tells me that ceph-disk does not 
> (yet) support splitting the components across multiple block devices, 
> and I had no luck when attempting it anyway (using Ceph 12.1 RC1).
> 
> Why not give ceph-deploy a chance? It has command-line options to specify 
> the DB and WAL devices, and you don't have to worry about what options to 
> pass to ceph-disk, as it encapsulates that.
> 
> ex: ceph-deploy osd create --bluestore --block-wal /dev/nvme0n1 
> --block-db /dev/nvme0n1 p30:sdb
> 
> 
> 
> I just can’t find documentation on how to set up a bluestore OSD
> manually:
> 
> 
>   *   How do I “prepare” the block, block.wal and block.db block 
> devices? Just pointing ceph-osd at the block devices via 
> ceph.conf does not seem to be enough.
>   *   Do bluestore OSDs still use/need a separate journal 
> file/device? Or is that replaced by the WAL?
> 
> http://docs.ceph.com/docs/master/ also does not have much information 
> on using bluestore; is this documentation deprecated?
> 
> The documentation has fallen behind for manual deployment and ceph-deploy 
> usage for bluestore as well, but if you use the above command, it will 
> clearly print out what it is doing and how ceph-disk is being called; 
> those are essentially the manual steps that should go into the document. I 
> think we have a tracker for the update, which has been pending for some time.
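> 
> If you really want to do it fully by hand, the rough shape (from memory, 
> so please double-check the exact steps and option names against your 
> release; all device paths below are placeholders) is something like:
> 
>    UUID=$(uuidgen)
>    OSD_ID=$(ceph osd new $UUID)                   # allocate an OSD id
>    mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
>    # point the OSD at its data, DB and WAL devices via symlinks:
>    ln -s /dev/sdc  /var/lib/ceph/osd/ceph-$OSD_ID/block
>    ln -s /dev/sdb1 /var/lib/ceph/osd/ceph-$OSD_ID/block.db
>    ln -s /dev/sdb2 /var/lib/ceph/osd/ceph-$OSD_ID/block.wal
>    ceph-osd -i $OSD_ID --mkfs --mkkey --osd-uuid $UUID \
>        --osd-objectstore bluestore
>    ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
>        -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
>    systemctl start ceph-osd@$OSD_ID
>    # (ownership/permissions and CRUSH placement still need to be handled)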
> 
> 
> Thanks for any hints,
> 
> Martin
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



