Re: New to Ceph - osd autostart problem


 



Hi,

Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
Partition unique GUID: 79FD1B30-F5AA-4033-BA03-8C7D0A7D49F5
First sector: 256 (at 1024.0 KiB)
Last sector: 976754640 (at 3.6 TiB)
Partition size: 976754385 sectors (3.6 TiB)
Attribute flags: 0000000000000000
Partition name: 'ceph data'


That looks the same as my disk, and mine do autostart -- even if I deactivate
everything in /etc/systemd/system/ceph-osd.target.wants (CentOS 7).

Ceph simply regenerates the symlinks at startup ^^;

This whole startup sequence is something that could be reworked in the
documentation, so that people understand better what Ceph does exactly :)

-- 
Mit freundlichen Gruessen / Best regards

Oliver Dzombic
IP-Interactive

mailto:info@xxxxxxxxxxxxxxxxx

Address:

IP Interactive UG (haftungsbeschraenkt)
Zum Sonnenberg 1-3
63571 Gelnhausen

HRB 93402, Amtsgericht (Local Court) Hanau
Managing Director: Oliver Dzombic

Tax no.: 35 236 3622 1
VAT ID: DE274086107


On 15.07.2016 at 07:38, Dirk Laurenz wrote:
> Hello George,
> 
> 
> I did what you suggested, but it didn't help... still no autostart - I
> have to start them manually.
> 
> 
> root@cephosd01:~#  sgdisk -i 1 /dev/sdb
> Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)
> Partition unique GUID: 48B7EC4E-A582-4B84-B823-8C3A36D9BB0A
> First sector: 10487808 (at 5.0 GiB)
> Last sector: 104857566 (at 50.0 GiB)
> Partition size: 94369759 sectors (45.0 GiB)
> Attribute flags: 0000000000000000
> Partition name: 'ceph data'
> root@cephosd01:~#  sgdisk -i 2 /dev/sdb
> Partition GUID code: 45B0969E-9B03-4F30-B4C6-B4B80CEFF106 (Unknown)
> Partition unique GUID: 2B7CC697-EFA9-4041-A62C-A044DB2BB03B
> First sector: 2048 (at 1024.0 KiB)
> Last sector: 10487807 (at 5.0 GiB)
> Partition size: 10485760 sectors (5.0 GiB)
> Attribute flags: 0000000000000000
> Partition name: 'ceph journal'
> 
> 
> What makes me wonder is that the partition type shows as unknown...
> 
> 
> On 13.07.2016 at 17:16, George Shuklin wrote:
>> As you can see, you have an 'unknown' partition type. It should be 'ceph
>> journal' and 'ceph data'.
>>
>> Stop ceph-osd, unmount the partitions, and set the partition typecodes
>> properly:
>>
>> /sbin/sgdisk --typecode=PART:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/DISK
>>
>> where PART is the number of the data partition (1 in your case), so:
>>
>> /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdb (sdc, etc.)
>>
>> You can change the typecode of the journal partition too:
>>
>> /sbin/sgdisk --typecode=2:45b0969e-9b03-4f30-b4c6-b4b80ceff106 -- /dev/sdb
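[Editorial note: the two type GUIDs quoted in this thread are what the udev
rules match on. A minimal sketch, not from the original mail, that classifies
a partition from the text `sgdisk -i` prints; the function name `ceph_role`
is made up for illustration:]

```python
import re

# The two GPT type GUIDs quoted in this thread: 'ceph data' and
# 'ceph journal'. sgdisk prints "(Unknown)" next to them only because
# it has no human-readable name for these GUIDs; the GUID itself is
# what matters for udev-based OSD activation.
CEPH_TYPECODES = {
    "4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D": "ceph data",
    "45B0969E-9B03-4F30-B4C6-B4B80CEFF106": "ceph journal",
}

def ceph_role(sgdisk_info: str) -> str:
    """Return the Ceph role for a partition, given `sgdisk -i` output."""
    m = re.search(r"Partition GUID code:\s*([0-9A-Fa-f-]{36})", sgdisk_info)
    if not m:
        return "no GPT type GUID found"
    return CEPH_TYPECODES.get(m.group(1).upper(), "not a Ceph typecode")

sample = "Partition GUID code: 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D (Unknown)"
print(ceph_role(sample))  # -> ceph data
```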
>>
>>
>> On 07/12/2016 01:05 AM, Dirk Laurenz wrote:
>>>
>>> root@cephosd01:~# fdisk -l /dev/sdb
>>>
>>> Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
>>> Units: sectors of 1 * 512 = 512 bytes
>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Disklabel type: gpt
>>> Disk identifier: 87B152E0-EB5D-4EB0-8FFB-C27096CBB1ED
>>>
>>> Device        Start       End  Sectors Size Type
>>> /dev/sdb1  10487808 104857566 94369759  45G unknown
>>> /dev/sdb2      2048  10487807 10485760   5G unknown
>>>
>>> Partition table entries are not in disk order.
>>> root@cephosd01:~# fdisk -l /dev/sdc
>>>
>>> Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
>>> Units: sectors of 1 * 512 = 512 bytes
>>> Sector size (logical/physical): 512 bytes / 512 bytes
>>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>>> Disklabel type: gpt
>>> Disk identifier: 31B81FCA-9163-4723-B195-97AEC9568AD0
>>>
>>> Device        Start       End  Sectors Size Type
>>> /dev/sdc1  10487808 104857566 94369759  45G unknown
>>> /dev/sdc2      2048  10487807 10485760   5G unknown
>>>
>>> Partition table entries are not in disk order.
>>>
>>>
>>> On 11.07.2016 at 18:01, George Shuklin wrote:
>>>> Check the partition type of the Ceph data partition:
>>>>
>>>> fdisk -l /dev/sdc
>>>>
>>>> On 07/11/2016 04:03 PM, Dirk Laurenz wrote:
>>>>>
>>>>> Hmm, that helps partially... running
>>>>>
>>>>> /usr/sbin/ceph-disk trigger /dev/sdc1 (or sdb1) works and brings the OSD up.
>>>>>
>>>>> systemctl enable does not help...
>>>>>
>>>>>
>>>>> On 11.07.2016 at 14:49, George Shuklin wrote:
>>>>>> Short story of how OSDs are started in systemd environments:
>>>>>>
>>>>>> Ceph OSD partitions have a specific typecode (partition type
>>>>>> 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D). It is handled by the udev
>>>>>> rules shipped with the ceph package:
>>>>>> /lib/udev/rules.d/95-ceph-osd.rules
>>>>>>
>>>>>> They set the proper owner/group on the disk ('ceph' instead of
>>>>>> 'root') and call /usr/sbin/ceph-disk trigger.
>>>>>>
>>>>>> ceph-disk then triggers the creation of an instance of the
>>>>>> ceph-disk@ systemd unit (to mount the disk under
>>>>>> /var/lib/ceph/osd/...), and of ceph-osd@ (I'm not sure about the
>>>>>> full sequence of events).
>>>>>>
>>>>>> Basically, for OSDs to autostart they NEED the proper typecode on
>>>>>> their partition. If you are using something different (like a
>>>>>> 'directory-based OSD'), you should enable OSD autostart yourself:
>>>>>>
>>>>>> systemctl enable ceph-osd@42
>>>>>>
>>>>>>
>>>>>> On 07/11/2016 03:32 PM, Dirk Laurenz wrote:
>>>>>>> Hello,
>>>>>>>
>>>>>>>
>>>>>>> I'm new to Ceph and am taking my first steps with it to
>>>>>>> understand the concepts.
>>>>>>>
>>>>>>> My setup is, for now, entirely in VMs...
>>>>>>>
>>>>>>> I deployed (with ceph-deploy) three monitors and three OSD hosts
>>>>>>> (3+3 VMs).
>>>>>>>
>>>>>>> My first test was to find out whether everything comes back online
>>>>>>> after a system restart. This works fine for the monitors, but
>>>>>>> fails for the OSDs; I have to start them manually.
>>>>>>>
>>>>>>> The OS is Debian jessie, Ceph is the current release...
>>>>>>>
>>>>>>> Where can I find out what's going wrong?
>>>>>>>
>>
>>
>>
>> _______________________________________________
>> ceph-users mailing list
>> ceph-users@xxxxxxxxxxxxxx
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 



