Re: New to Ceph - osd autostart problem

And this, after starting the OSD manually:


root@cephosd01:~# df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/dm-0       15616412 1583180  13216900  11% /
udev               10240       0     10240   0% /dev
tmpfs              49656    4636     45020  10% /run
tmpfs             124132       0    124132   0% /dev/shm
tmpfs               5120       0      5120   0% /run/lock
tmpfs             124132       0    124132   0% /sys/fs/cgroup
/dev/sda1         240972   33309    195222  15% /boot
/dev/sdb1       47161840   35260  47126580   1% /var/lib/ceph/osd/ceph-0
/dev/sdc1       47161840   34952  47126888   1% /var/lib/ceph/osd/ceph-1


What I don't understand is why ceph-deploy didn't set this up properly. I just set up all six nodes with a fresh install and then used ceph-deploy to install them:

All done from an admin VM:

ceph-deploy new cephmon01 cephmon02 cephmon03
ceph-deploy install cephmon01 cephmon02 cephmon03 cephosd01 cephosd02 cephosd03
ceph-deploy mon create cephmon01
ceph-deploy mon create cephmon02
ceph-deploy mon create cephmon03
ceph-deploy osd prepare  cephosd01:sdb cephosd01:sdc
ceph-deploy osd prepare  cephosd02:sdb cephosd02:sdc
ceph-deploy osd prepare  cephosd03:sdb cephosd03:sdc
ceph osd tree

And directly afterwards (after seeing 6 OSDs up):

ssh cephosd01 shutdown -r

root@cephadmin:~# cat /etc/debian_version
8.5
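
A quick sanity check after such a reboot might look like the following (only a sketch; it assumes the OSD IDs 0 and 1 from the df output above and the stock ceph-osd@ units on Jessie):

# list what ceph-disk knows about the disks and their partitions
ceph-disk list

# check whether the per-OSD units are known and enabled after boot
systemctl status ceph-osd@0 ceph-osd@1
systemctl is-enabled ceph-osd@0 ceph-osd@1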


On 12.07.2016 at 00:05, Dirk Laurenz wrote:

root@cephosd01:~# fdisk -l /dev/sdb

Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 87B152E0-EB5D-4EB0-8FFB-C27096CBB1ED

Device        Start       End  Sectors Size Type
/dev/sdb1  10487808 104857566 94369759  45G unknown
/dev/sdb2      2048  10487807 10485760   5G unknown

Partition table entries are not in disk order.
root@cephosd01:~# fdisk -l /dev/sdc

Disk /dev/sdc: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 31B81FCA-9163-4723-B195-97AEC9568AD0

Device        Start       End  Sectors Size Type
/dev/sdc1  10487808 104857566 94369759  45G unknown
/dev/sdc2      2048  10487807 10485760   5G unknown

Partition table entries are not in disk order.
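
Side note: fdisk shows the Type as "unknown" here only because its type table does not include the Ceph GUIDs. The actual partition type GUID can be read with sgdisk; a sketch, using sdb as in the output above:

sgdisk --info=1 /dev/sdb    # look for the "Partition GUID code" line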


On 11.07.2016 at 18:01, George Shuklin wrote:
Check the partition type of Ceph's data partition:

fdisk -l /dev/sdc

On 07/11/2016 04:03 PM, Dirk Laurenz wrote:

Hmm, that helps partially... Running


/usr/sbin/ceph-disk trigger /dev/sdc1 (or /dev/sdb1) works and brings the OSD up.


systemctl enable does not help, though.
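
When ceph-disk trigger works by hand but nothing happens at boot, it can help to dry-run the udev rules for the partition and see whether the ceph rule would fire at all; a rough sketch (the device path is just an example, adjust to your disks):

udevadm test $(udevadm info --query=path --name=/dev/sdb1) 2>&1 | grep -i ceph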


On 11.07.2016 at 14:49, George Shuklin wrote:
Short story of how OSDs are started in systemd environments:

Ceph OSD partitions have a specific typecode (partition type GUID 4FBD7E29-9D25-41B8-AFD0-062C0CEFF05D). It is handled by a udev rule shipped with the ceph package:
/lib/udev/rules.d/95-ceph-osd.rules
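
One way to see the typecode that udev sees for a partition (just a sketch; /dev/sdb1 is an example device):

udevadm info --query=property --name=/dev/sdb1 | grep ID_PART_ENTRY_TYPE
# a Ceph data partition should report 4fbd7e29-9d25-41b8-afd0-062c0ceff05d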

The rule sets the proper owner/group on the device ('ceph' instead of 'root') and calls /usr/sbin/ceph-disk trigger.

ceph-disk trigger then starts an instance of the ceph-disk@ systemd unit (which mounts the partition to /var/lib/ceph/osd/...) and of ceph-osd@ (I'm not sure about the exact sequence of events).
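
After a trigger, the instantiated units should show up and can be inspected; a sketch (assuming the stock ceph-disk@/ceph-osd@ templates and OSD id 0):

systemctl list-units 'ceph-disk@*' 'ceph-osd@*'
journalctl -u ceph-osd@0    # per-OSD log, assuming OSD id 0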

Basically, for OSDs to autostart, their partitions NEED to have the proper typecode. If you are using something different (like a directory-based OSD), you should enable the OSD unit yourself:

systemctl enable ceph-osd@42
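
And if a data partition was created some other way and is missing that typecode, it can in principle be retagged so the udev rule picks it up again; a rough sketch only (device and partition number are assumptions, stop/unmount the OSD first):

sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d /dev/sdb
partprobe /dev/sdb    # re-read the partition table so udev sees the change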


On 07/11/2016 03:32 PM, Dirk Laurenz wrote:
Hello,


I'm new to Ceph and am trying some first steps with it to understand the concepts.

My setup is, for now, completely in VMs.


I deployed (with ceph-deploy) three monitors and three OSD hosts (3+3 VMs).

My first test was to find out whether everything comes back online after a system restart. This works fine for the monitors, but fails for the OSDs; I have to start them manually.


The OS is Debian Jessie; Ceph is the current release.


Where can I find out what's going wrong?


Dirk

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
