Re: Minor inconsistency with ceph-deploy on Giant

On Mon, 2 Feb 2015, Stephen Hindle wrote:
> Hi!
> 
>   Just thought I'd mention a minor inconsistency I noticed while
> re-installing our cluster...
> Preparing an OSD with ceph-deploy will NOT activate it on a partition
> (node1:/sda1/sda2).
> Preparing an OSD with ceph-deploy WILL activate it on a disk
> (node1:/dev/sdc).
> 
> I think I noticed similar behavior with ceph-deploy osd create, but I
> assumed I had just messed up, so I didn't document it.  This is probably
> more of an 'issue' there, since create is supposed to activate it for you.
> 
> Manually issuing a ceph-deploy osd activate works fine to bring the
> partition OSDs up.

This is an artifact of how activation works: we label GPT partitions with 
a Ceph-specific type code so that udev can launch ceph-osd when the device 
appears.  If you use existing partitions, they probably aren't GPT and 
aren't labeled accordingly, so the udev rule never fires.
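
You can check whether a partition carries the type code the udev rules 
match on with sgdisk, something like this (device names below are just 
examples, and I'm quoting the data-partition GUID from memory):

    # print the type GUID for partition 1 of /dev/sdb
    sgdisk --info=1 /dev/sdb
    # a prepared OSD data partition should report
    # Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D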

I think the "fix" is to make sure the existing partitions are GPT 
partitions, and make sure that ceph-disk will relabel is possible (I 
forget if it does this now...)
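
In the meantime, relabeling by hand should also do the trick, assuming 
the disk already has a GPT table (untested sketch, device and partition 
numbers are placeholders):

    # tag partition 1 as a Ceph OSD data partition
    sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
    # re-read the partition table so udev sees the new type code
    partprobe /dev/sdb
    # or: udevadm trigger --subsystem-match=block --action=add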

sage




