Re: OSD: Newbie question regarding ceph-deploy osd create


 



Another question: What is the best practice for partitioning SSDs for journals?  I think my journal partitions might be what is preventing ceph-deploy osd create from working.

I have a single Intel 910 PCIe SSD in each storage node.  The 910 presents two (or four) Hitachi 200GB SSDs behind an LSI SAS switch.
To enhance performance, I have shrunk each disk's used area to half its capacity (100GB), giving 6 journals per disk.
After labelling the disks with parted (-a optimal, for sector alignment), I created a concatenated LVM volume group and then
set up 12 LVM logical volumes, resulting in device names such as /dev/mapper/ceph_journal-osd_01.
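
Roughly, the journal layout was created along these lines (a sketch only, not my exact commands; the device names, partition sizes and two-device case are illustrative):

# one 100GB partition per backing SSD, aligned to 2048-sector boundaries
$ parted -s -a optimal /dev/sdb mklabel gpt
$ parted -s -a optimal /dev/sdb mkpart primary 0% 100GB
$ parted -s -a optimal /dev/sdc mklabel gpt
$ parted -s -a optimal /dev/sdc mkpart primary 0% 100GB

# concatenated volume group across the SSD partitions
$ pvcreate /dev/sdb1 /dev/sdc1
$ vgcreate ceph_journal /dev/sdb1 /dev/sdc1

# 12 journal logical volumes, one per OSD
$ for i in $(seq -w 1 12); do lvcreate -L 15G -n osd_${i} ceph_journal; done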

Per my earlier question (quoted below), I am not having success with:
$ ceph-deploy osd prepare storage01-vs-e2:/dev/sdc:/dev/mapper/ceph_journal-osd_02 

If I remove the separate journal block device from the command, I do see some config-file activity on the node.
Earlier attempts must have populated the crush map's # devices section, as depicted below.

So, I find that

$ ceph-deploy -v osd create storage01-vs-e2:sde

... runs.  It:
- creates a /var/lib/ceph/osd/ceph-53 directory
- adds an entry to the crush map
- but adds nothing to /etc/ceph/ceph.conf (is this normal?)
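
(To see what actually got registered, the crush map can be dumped and decompiled with the standard tools; the output paths are arbitrary:)

$ ceph osd getcrushmap -o /tmp/crushmap.bin
$ crushtool -d /tmp/crushmap.bin -o /tmp/crushmap.txt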


Inspecting the crush map, I find entries including:

# begin crush map

# devices
device 0 device0
device 1 device1
device 2 device2
device 3 device3
...
device 46 device46
device 47 device47
device 48 device48
device 49 device49
device 50 device50
device 51 device51
device 52 osd.52
device 53 device53
device 54 device54
device 55 osd.55

# types
type 0 osd
type 1 host
type 2 rack
type 3 row
type 4 room
type 5 datacenter
type 6 root

# buckets
host storage01-vs-e2 {
        id -2           # do not change unnecessarily
        # weight 5.460
        alg straw
        hash 0  # rjenkins1
        item osd.52 weight 2.730
        item osd.55 weight 2.730
}
root default {
        id -1           # do not change unnecessarily
        # weight 5.460
        alg straw
        hash 0  # rjenkins1
        item storage01-vs-e2 weight 5.460
}
...

Thanks again in advance for any pointers.

Piers Dawson-Damer


On 28/09/2013, at 6:59 AM, Piers Dawson-Damer <piers@xxxxx> wrote:

Hi,

I'm trying to set up my first cluster (I have never manually bootstrapped a cluster).

Is ceph-deploy osd activate/prepare supposed to write specific entries for each OSD to the master ceph.conf file, along the lines of http://ceph.com/docs/master/rados/configuration/osd-config-ref/ ?
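
(By "specific entries" I mean something along the lines of the following, purely as an illustration of the style in that doc; the ID, host and journal path are made up:)

[osd.0]
        host = storage01-vs-e2
        osd journal = /dev/mapper/ceph_journal-osd_01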

I appear to have the OSDs prepared without error, but then there are no OSD entries in the master ceph.conf, nor in the node's /etc/ceph/ceph.conf.

Am I missing something?

Thanks in advance,

Piers Dawson-Damer
Tasmania


2013-09-28 06:47:00,471 [ceph_deploy.sudo_pushy][DEBUG ] will use a remote connection with sudo
2013-09-28 06:47:01,205 [ceph_deploy.osd][INFO  ] Distro info: Ubuntu 12.04 precise
2013-09-28 06:47:01,205 [ceph_deploy.osd][DEBUG ] Preparing host storage03-vs-e2 disk /dev/sdm journal /dev/mapper/ceph_journal-osd_12 activate True
2013-09-28 06:47:01,206 [storage03-vs-e2][INFO  ] Running command: ceph-disk-prepare --cluster ceph -- /dev/sdm /dev/mapper/ceph_journal-osd_12
2013-09-28 06:47:20,247 [storage03-vs-e2][INFO  ] Information: Moved requested sector from 4194338 to 4196352 in
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] order to align on 2048-sector boundaries.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] Warning: The kernel is still using the old partition table.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] The new table will be used at the next reboot.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] The operation has completed successfully.
2013-09-28 06:47:20,248 [storage03-vs-e2][INFO  ] Information: Moved requested sector from 34 to 2048 in
2013-09-28 06:47:20,249 [storage03-vs-e2][INFO  ] order to align on 2048-sector boundaries.
2013-09-28 06:47:20,249 [storage03-vs-e2][INFO  ] The operation has completed successfully.
2013-09-28 06:47:20,249 [storage03-vs-e2][INFO  ] meta-data=""              isize=2048   agcount=4, agsize=183105343 blks
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ]          =                       sectsz=512   attr=2, projid32bit=0
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ] data     =                       bsize=4096   blocks=732421371, imaxpct=5
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ]          =                       sunit=0      swidth=0 blks
2013-09-28 06:47:20,250 [storage03-vs-e2][INFO  ] naming   =version 2              bsize=4096   ascii-ci=0
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ] log      =internal log           bsize=4096   blocks=357627, version=2
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ] realtime =none                   extsz=4096   blocks=0, rtextents=0
2013-09-28 06:47:20,251 [storage03-vs-e2][INFO  ] The operation has completed successfully.
2013-09-28 06:47:20,252 [storage03-vs-e2][ERROR ] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
2013-09-28 06:47:20,266 [storage03-vs-e2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
2013-09-28 06:47:20,413 [ceph_deploy.osd][DEBUG ] Host storage03-vs-e2 is now ready for osd use.
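
(A prepare/activate run like this can then be sanity-checked along these lines; the /dev/sdm1 data-partition name is an assumption based on the default ceph-disk layout:)

$ ceph osd tree
$ ceph -s
# if the OSD does not come up on its own, activate the prepared data partition manually
$ sudo ceph-disk activate /dev/sdm1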





2013-09-27 10:13:25,349 [storage03-vs-e2][DEBUG ] status for monitor: mon.storage03-vs-e2
2013-09-27 10:13:25,349 [storage03-vs-e2][DEBUG ] { "name": "storage03-vs-e2",
2013-09-27 10:13:25,350 [storage03-vs-e2][DEBUG ]   "rank": 2,
2013-09-27 10:13:25,350 [storage03-vs-e2][DEBUG ]   "state": "electing",
2013-09-27 10:13:25,350 [storage03-vs-e2][DEBUG ]   "election_epoch": 1,
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]   "quorum": [],
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]   "outside_quorum": [],
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]   "extra_probe_peers": [
2013-09-27 10:13:25,351 [storage03-vs-e2][DEBUG ]         "172.17.181.47:6789\/0",
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]         "172.17.181.48:6789\/0"],
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]   "sync_provider": [],
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]   "monmap": { "epoch": 0,
2013-09-27 10:13:25,352 [storage03-vs-e2][DEBUG ]       "fsid": "28626c0a-0266-4b80-8c06-0562bf48b793",
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]       "modified": "0.000000",
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]       "created": "0.000000",
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]       "mons": [
2013-09-27 10:13:25,353 [storage03-vs-e2][DEBUG ]             { "rank": 0,
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]               "name": "storage01-vs-e2",
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]               "addr": "172.17.181.47:6789\/0"},
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]             { "rank": 1,
2013-09-27 10:13:25,354 [storage03-vs-e2][DEBUG ]               "name": "storage02-vs-e2",
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]               "addr": "172.17.181.48:6789\/0"},
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]             { "rank": 2,
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]               "name": "storage03-vs-e2",
2013-09-27 10:13:25,355 [storage03-vs-e2][DEBUG ]               "addr": "172.17.181.49:6789\/0"}]}}
2013-09-27 10:13:25,356 [storage03-vs-e2][DEBUG ] 
2013-09-27 10:13:25,356 [storage03-vs-e2][DEBUG ] ********************************************************************************
2013-09-27 10:13:25,356 [storage03-vs-e2][INFO  ] monitor: mon.storage03-vs-e2 is running
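
(Monitor quorum can be confirmed afterwards with, for example:)

$ ceph quorum_status
$ ceph mon stat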





2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] ceph-all start/running
2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] Setting up ceph-fs-common (0.67.3-1precise) ...
2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] Setting up ceph-mds (0.67.3-1precise) ...
2013-09-27 10:12:17,384 [storage03-vs-e2.groupthink.cc][INFO  ] ceph-mds-all start/running
2013-09-27 10:12:17,395 [storage03-vs-e2.groupthink.cc][INFO  ] Running command: ceph --version
2013-09-27 10:12:17,547 [storage03-vs-e2.groupthink.cc][INFO  ] ceph version 0.67.3 (408cd61584c72c0d97b774b3d8f95c6b1b06341a)

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
