Re: Software Raid 1 for system disks on storage nodes (not for OSD disks)

ceph-disk (which ceph-deploy uses) partitions OSDs with GPT so that they
can be started automatically by udev and can reference their journal
partitions by unique identifiers. The data needed to start the OSD (the
auth key, fsid, etc.) is stored on the OSD file system, which lets you
move OSD disks between nodes in a Ceph cluster. As long as you don't
reformat the OSD drives (or any journals), then if you reimage the host,
install Ceph, and copy the ceph.conf and the OSD bootstrap keys, it will
act as if you had just moved the disks between servers. We reformatted
the OS of one of the nodes last week and the OSDs survived and rejoined
the cluster after Puppet laid down the configuration.
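
Roughly, the sequence looks something like this (the hostnames are only
examples and this is a sketch rather than an exact recipe, so adjust it
to your own deployment):

    # reinstall Ceph on the freshly imaged node (or let Puppet/Chef do it)
    ceph-deploy install osd-node1

    # push the cluster configuration back to the node
    ceph-deploy config push osd-node1

    # restore the OSD bootstrap key, e.g. from a monitor node
    scp mon-node1:/var/lib/ceph/bootstrap-osd/ceph.keyring \
        osd-node1:/var/lib/ceph/bootstrap-osd/ceph.keyring

    # let udev/ceph-disk re-activate the untouched GPT-partitioned OSDs
    ssh osd-node1 ceph-disk activate-all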

If you have journals on SSD and you try to move a spindle, then you have
some more work to do. You either have to move the SSD as well (along with
any other spindles whose journals live on it), or flush the journal and
create a new one on the destination host.
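
Flushing and recreating the journal goes something along these lines (OSD
id 12 is purely illustrative, and the daemon must be stopped first):

    # on the source host, with ceph-osd.12 stopped
    ceph-osd -i 12 --flush-journal    # write out anything still in the journal

    # move the spindle to the destination host, then
    ceph-osd -i 12 --mkjournal        # create a fresh journal for the OSD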

If you are using dm-crypt, you also need to save the encryption key, as
it is not stored on the OSD file system for obvious reasons.
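
If I remember right, ceph-disk keeps those keys under /etc/ceph/dmcrypt-keys
by default (or wherever --dmcrypt-key-dir pointed), so backing that directory
up before the reimage and restoring it afterwards should be enough:

    # before reimaging the node
    tar czf dmcrypt-keys.tar.gz /etc/ceph/dmcrypt-keys/
    # after the reinstall, restore to the same path before activating the OSDs
    tar xzf dmcrypt-keys.tar.gz -C /
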
----------------
Robert LeBlanc
PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1


On Mon, Sep 21, 2015 at 3:18 AM, Vickey Singh  wrote:
>
>
> On Fri, Sep 18, 2015 at 6:33 PM, Robert LeBlanc
> wrote:
>>
>> Depends on how easy it is to rebuild an OS from scratch. If you have
>> something like Puppet or Chef that configures a node completely for
>> you, it may not be too much of a pain to forgo the RAID. We run our
>> OSD nodes from a single SATADOM and use Puppet for configuration. We
>> also don't use swap (not very effective on a SATADOM), but we have
>> enough RAM that we feel comfortable with that decision.
>>
>> If you use ceph-disk or ceph-deploy to configure the OSDs, then they
>> should automatically come back up when you lay down the new OS and set
>> up the necessary ceph config items (ceph.conf and the OSD bootstrap
>> keys).
>
>
> Hello sir
>
> This sounds really interesting. Could you please elaborate on how, after
> reinstalling the OS and installing the Ceph packages, Ceph detects the
> OSDs that were previously hosted on that node?
>
> I am using ceph-deploy to provision Ceph. What changes do I need to make
> after reinstalling the OS of an OSD node so that it detects my OSD
> daemons? Please walk me through this step by step.
>
> Thanks in advance.
>
> Vickey
>
>
>>
>> ----------------
>> Robert LeBlanc
>> PGP Fingerprint 79A2 9CA4 6CC4 45DD A904  C70E E654 3BB2 FA62 B9F1
>>
>>
>> On Fri, Sep 18, 2015 at 9:06 AM, Martin Palma  wrote:
>> > Hi,
>> >
>> > Is it a good idea to use software RAID for the system disk (operating
>> > system) on a Ceph storage node? I mean only for the OS, not for the
>> > OSD disks.
>> >
>> > And what about a swap partition? Is that needed?
>> >
>> > Best,
>> > Martin
>> >
>
>

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


