Re: OSD on a partition


 




The typecode allows udev to automatically activate the OSD when it is
detected. From what I remember, ceph-disk will prepare the partition
regardless of whether the typecode is there, but it has been a long
time since I've tried. I've used this on Hammer as recently as two
weeks ago.
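For what it's worth, those type GUIDs are what the Ceph udev rules key on, and sgdisk will happily operate on an ordinary image file, so you can dry-run the typecode step without touching a real disk. A minimal sketch (the image path is made up; the GUID is the "ceph data" typecode from the sgdisk commands quoted below):

```shell
# Dry run against a sparse image file instead of a real disk;
# sgdisk does not care that the target is not a block device.
truncate -s 100M /tmp/fake-osd.img

# Create one partition and stamp it with the "ceph data" type GUID
# that the udev rules match on to auto-activate an OSD:
sgdisk --new=1:0:0 --change-name=1:"ceph data" \
       --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /tmp/fake-osd.img

# Confirm the partition type GUID took:
sgdisk --info=1 /tmp/fake-osd.img
```

On a real drive you would follow this with `udevadm trigger` (or a hotplug) so udev re-evaluates the partition.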
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Tue, Dec 1, 2015 at 3:06 PM, Marek Dohojda  wrote:
>
> Didn’t mean to send the previous email; I apologize for the spam.
>
> Anyway, thank you. I will give that a shot.
>
> Whenever I try to use ceph-disk prepare on a partition, the OSD complains that this isn’t a block device. Will the sgdisk typecode fix that?
>
>
>
>
>> On Dec 1, 2015, at 3:03 PM, Robert LeBlanc  wrote:
>>
>>
>> The documentation is a little sparse in this regard; here is what I use:
>>
>> sgdisk --new=1:0:+10240M --change-name=1:"ceph journal" \
>>     --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/sdc
>> sgdisk --new=2:0:0 --change-name=2:"ceph data" \
>>     --typecode=2:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdc
>>
>> This creates a 10 GB journal partition at the front of the drive,
>> then uses the rest of the drive for an OSD. You can then run
>> ceph-disk prepare /dev/sdc2 [/dev/sdc1].
>>
>> Adapt it to fit your needs.
>> ----------------
>> Robert LeBlanc
>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>>
>>
>> On Tue, Dec 1, 2015 at 2:46 PM, Marek Dohojda
>>  wrote:
>>> Thus far parted hasn’t played nicely with ext4. I suppose I could try
>>> it, and the worst that would happen is losing one OSD, but in my tests
>>> this wasn’t very reliable.
>>>
>>> With a non-GPT partition, using fdisk, I can do this without a problem,
>>> but the OSD requires GPT (to the best of my knowledge, anyway).
>>>
>>> That is why I would like to know if there is a way for me to create the
>>> partition from the get-go, since if the shrink doesn’t work, this is my
>>> only other option. Unless, of course, I create a directory on the OSD
>>> file system and symlink the spindle journal within that new directory,
>>> something like:
>>>
>>> ln -s /var/lib/ceph/osd-0/spin_journal/journal /var/lib/ceph/osd-2/journal
>>>
>>> I feel that this approach is not very clean, though.
>>>
>>>
>>>
>>> On Dec 1, 2015, at 12:39 PM, Nick Fisk  wrote:
>>>
>>>
>>>
>>>
>>>
>>> -----Original Message-----
>>> From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
>>> Marek Dohojda
>>> Sent: 01 December 2015 19:34
>>> To: Wido den Hollander
>>> Cc: ceph-users@xxxxxxxxxxxxxx
>>> Subject: Re:  OSD on a partition
>>>
>>> Well, here is my problem. I want to move the journal to an SSD, but I
>>> have no more disk slots available, and the SSD I do have is used for my
>>> higher-IO OSDs. Hence I don’t want to lose my OSD.
>>>
>>> So my thought was to partition the SSD into 10 GB and the rest, with the
>>> “rest” being used for an OSD and the 10 GB being used for the journal.
>>> However, I can’t find a reliable way to put an OSD on a partition, which
>>> is why I am wondering if there is a way to do it.
>>>
>>>
>>> I'm wondering if you can stop the SSD OSD, unmount it, shrink the
>>> partition, add the extra journal partition, and start the OSD again.
>>>
>>> Then stop the disk-based OSD, flush the journal, move it to the new
>>> partition on the SSD, and then start it.
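The flush-and-move step described here can be sketched roughly as below. The OSD id (2), the by-partuuid path, and the systemctl invocations are assumptions for illustration, not a tested recipe; try it on a disposable OSD first.

```shell
# Stop the OSD so the journal is quiescent
# (use "stop ceph-osd id=2" on Upstart-based setups):
systemctl stop ceph-osd@2

# Drain any pending journal entries down to the data disk:
ceph-osd -i 2 --flush-journal

# Point the journal symlink at the new SSD partition
# (<new-journal-partuuid> is a placeholder):
rm /var/lib/ceph/osd/ceph-2/journal
ln -s /dev/disk/by-partuuid/<new-journal-partuuid> \
      /var/lib/ceph/osd/ceph-2/journal

# Initialize the new journal and bring the OSD back:
ceph-osd -i 2 --mkjournal
systemctl start ceph-osd@2
```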
>>>
>>>
>>>
>>> Alternatively, I could put the journal on the SSD file system itself
>>> (it is ext4), but I’m not sure whether that would be bad to do from
>>> Ceph’s perspective.
>>>
>>> Down the road I will have more SSDs, but that won’t happen until the
>>> new budget hits and I can get more servers.
>>>
>>>
>>>
>>> On Dec 1, 2015, at 12:11 PM, Wido den Hollander wrote:
>>>
>>>
>>> On 12/01/2015 07:29 PM, Marek Dohojda wrote:
>>>
>>> I am looking through Google, and I am not seeing a good guide on how
>>> to put an OSD on a partition (GPT) of a disk. I see lots of options
>>> for a file system or a single physical drive, but not a partition.
>>>
>>> http://dachary.org/?p=2548
>>>
>>> This is the only thing I found, but it is from two years ago and there
>>> are no comments on whether it works.
>>>
>>> Is there a better guide/best practice for such a scenario?
>>>
>>>
>>> Well, what is the thing you are trying to achieve? All the tools want
>>> full disks, but an OSD doesn’t strictly need one per se. It just wants
>>> a mount point where it can write data.
>>>
>>> You can always manually bootstrap a cluster if you want to.
>>>
>>>
>>>
>>>
>>> _______________________________________________
>>> ceph-users mailing list
>>> ceph-users@xxxxxxxxxxxxxx
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
>>>
>>>
>>> --
>>> Wido den Hollander
>>> 42on B.V.
>>> Ceph trainer and consultant
>>>
>>> Phone: +31 (0)20 700 9902
>>> Skype: contact42on
>>>
>>>
>>>
>>>
>>>
>>>
>




