OSD: Newbie question regarding ceph-deploy osd create

Hi,

I'm trying to set up my first cluster (I have never manually bootstrapped a cluster before).

Is ceph-deploy osd activate/prepare supposed to write OSD-specific entries for each OSD into the master ceph.conf file, along the lines of http://ceph.com/docs/master/rados/configuration/osd-config-ref/ ?

The OSDs appear to have been prepared without error, but there are no OSD entries in the master ceph.conf, nor in the node's /etc/ceph.conf.

Am I missing something?
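For reference, the kind of per-OSD entries I was expecting, based on that docs page, would look roughly like this (the OSD number, hostname, and journal path below are illustrative guesses, not taken from my cluster):

```ini
; hypothetical [osd.N] section of the style shown in the osd-config-ref docs
[osd.12]
    host = storage03-vs-e2
    ; data directory that ceph-disk-prepare would have created on /dev/sdm
    osd data = /var/lib/ceph/osd/ceph-12
    ; external journal device passed to ceph-deploy osd prepare
    osd journal = /dev/mapper/ceph_journal-osd_12
```

Nothing like this appears in either copy of ceph.conf after prepare/activate completes.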

Thanks in advance,

Piers Dawson-Damer
Tasmania


2013-09-28 06:47:00,471 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
2013-09-28 06:47:01,205 [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
2013-09-28 06:47:01,205 [ceph_deploy.osd][DEBUG ] Preparing host storage03-vs-e2 disk /dev/sdm journal /dev/mapper/ceph_journal-osd_12 activate True
2013-09-28 06:47:01,206 [storage03-vs-e2][INFO  ] Running command: ceph-disk-prepare --cluster ceph -- /dev/sdm /dev/mapper/ceph_journal-osd_12
2013-09-28 06:47:20,247 [storage03-vs-e2][INFO  ] Information: Moved requested sector from 4194338 to 4196352 in
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] order to align on 2048-sector boundaries.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] Warning: The kernel is still using the old partition table.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] The new table will be used at the next reboot.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] The operation has completed successfully.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] Information: Moved requested sector from 34 to 2048 in
2013-09-28 06:47:20,249 [storage03-vs-e2][INFO  ] order to align on 2048-sector boundaries.
2013-09-28 06:47:20,249 [storage03-vs-e2][INFO  ] The operation has completed successfully.
2013-09-28 06:47:20,249 [storage03-vs-e2][INFO  ] meta-data=""              isize=2048   agcount=4, agsize=183105343 blks
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ]          =                       sectsz=512   attr=2, projid32bit=0
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ] data     =                       bsize=4096   blocks=732421371, imaxpct=5
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ]          =                       sunit=0      swidth=0 blks
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ] naming   =version 2              bsize=4096   ascii-ci=0
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ] log      =internal log           bsize=4096   blocks=357627, version=2
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ] The operation has completed successfully.
2013-09-28 06:47:20,252 [storage03-vs-e2][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
2013-09-28 06:47:20,266 [storage03-vs-e2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
2013-09-28 06:47:20,413 [ceph_deploy.osd][DEBUG ] Host storage03-vs-e2 is now ready for osd use.





2013-09-27 10:13:25,349 [storage03-vs-e2][DEBUG ] status for monitor: mon.storage03-vs-e2
2013-09-27 10:13:25,349 [storage03-vs-e2][DEBUG ] { "name": "storage03-vs-e2",
2013-09-27 10:13:25,350 [storage03-vs-e2][DEBUG ]   "rank": 2,
2013-09-27 10:13:25,350 [storage03-vs-e2][DEBUG ]   "state": "electing",
2013-09-27 10:13:25,350 [storage03-vs-e2][DEBUG ]   "election_epoch": 1,
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]   "quorum": [],
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]   "outside_quorum": [],
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]   "extra_probe_peers": [
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]         "172.17.181.47:6789\/0",
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]         "172.17.181.48:6789\/0"],
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]   "sync_provider": [],
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]   "monmap": { "epoch": 0,
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]       "fsid": "28626c0a-0266-4b80-8c06-0562bf48b793",
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]       "modified": "0.000000",
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]       "created": "0.000000",
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]       "mons": [
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]             { "rank": 0,
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]               "name": "storage01-vs-e2",
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]               "addr": "172.17.181.47:6789\/0"},
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]             { "rank": 1,
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]               "name": "storage02-vs-e2",
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]               "addr": "172.17.181.48:6789\/0"},
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]             { "rank": 2,
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]               "name": "storage03-vs-e2",
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]               "addr": "172.17.181.49:6789\/0"}]}}
2013-09-27 10:13:25,356 [storage03-vs-e2][DEBUG ] 
2013-09-27 10:13:25,356 [storage03-vs-e2][DEBUG ] ********************************************************************************
2013-09-27 10:13:25,356 [storage03-vs-e2][INFO  ] monitor: mon.storage03-vs-e2 is running





2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] ceph-all start/running
2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] Setting up ceph-fs-common (0.67.3-1precise) ...
2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] Setting up ceph-mds (0.67.3-1precise) ...
2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] ceph-mds-all start/running
2013-09-27 10:12:17,395 [storage03-vs-e2.groupthink.cc][INFO  ] Running command: ceph --version
2013-09-27 10:12:17,547 [storage03-vs-e2.groupthink.cc][INFO  ] ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a)

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
