Disregard the udev comment above. Copy/paste mistake. :)
On Wed, Aug 7, 2013 at 4:43 PM, Joao Pedras <jppedras@xxxxxxxxx> wrote:
The journal device entries beyond the 2nd (i.e. /dev/sdg2) are not created under /dev. Basically, the following patch addresses the issue:

--- /usr/sbin/ceph-disk	2013-07-25 00:55:41.000000000 -0700
+++ /root/ceph-disk	2013-08-07 15:54:17.538542684 -0700
@@ -857,6 +857,14 @@
             'settle',
         ],
     )
+    subprocess.call(
+        args=[
+            # wait for udev event queue to clear
+            'partx',
+            '-a',
+            '{journal}'.format(journal=journal)
+        ],
+    )
     journal_symlink = '/dev/disk/by-partuuid/{journal_uuid}'.format(
         journal_uuid=journal_uuid,

This is RHEL 6.4.

Thanks for the help,
--

On Wed, Aug 7, 2013 at 12:48 PM, Joao Pedras <jppedras@xxxxxxxxx> wrote:
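For reference, a minimal standalone sketch of what the patched hunk does, outside of ceph-disk. The helper name and the injectable runner are illustrative (not part of ceph-disk); the stub below records the commands instead of touching a real block device:

```python
import subprocess

def refresh_partitions(journal, run=subprocess.call):
    """Mirror the patched ceph-disk sequence: after udev settles,
    ask the kernel (via 'partx -a') to register any partitions on
    the journal device that udev failed to create under /dev."""
    run(args=['udevadm', 'settle'])     # wait for the udev event queue to clear
    run(args=['partx', '-a', journal])  # add partition nodes the kernel is missing

# recording stub instead of a real device, so this is safe to run anywhere
calls = []
refresh_partitions('/dev/sdg', run=lambda args: calls.append(args))
```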
Some more info about this... The subject should have been "journal on another device". The issue also occurs if using another disk to hold the journal.

If doing something like 'ceph-deploy node:sda:sdk', a subsequent run like 'ceph-deploy node:sdb:sdk' will show the error regarding sdb's osd. If doing 'ceph-deploy node:sda:sdk node:sdb:sdk node:sdc:sdk [...]' the first 2 osds will be created and launched fine; sdc's and any others won't.

Thanks.
--

On Wed, Aug 7, 2013 at 10:55 AM, Joao Pedras <jppedras@xxxxxxxxx> wrote:
Hello Tren,

It is indeed:

$> sestatus
SELinux status:                 disabled

Thanks,
--

On Wed, Aug 7, 2013 at 9:33 AM, Tren Blackburn <iam@xxxxxxxxxxxxxxxx> wrote:

On Tue, Aug 6, 2013 at 11:14 AM, Joao Pedras <jppedras@xxxxxxxxx> wrote:

Greetings all.

I am installing a test cluster using one ssd (/dev/sdg) to hold the journals. Ceph's version is 0.61.7 and I am using ceph-deploy obtained from ceph's git yesterday. This is on RHEL 6.4, fresh install.

When preparing the first 2 drives, sda and sdb, all goes well and the journals get created in sdg1 and sdg2:

$> ceph-deploy osd prepare ceph00:sda:sdg ceph00:sdb:sdg
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks ceph00:/dev/sda:/dev/sdg ceph00:/dev/sdb:/dev/sdg
[ceph_deploy.osd][DEBUG ] Deploying osd to ceph00
[ceph_deploy.osd][DEBUG ] Host ceph00 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Preparing host ceph00 disk /dev/sda journal /dev/sdg activate False
[ceph_deploy.osd][DEBUG ] Preparing host ceph00 disk /dev/sdb journal /dev/sdg activate False
When preparing sdc or any disk after the first 2, I get the following in that osd's log but no errors from ceph-deploy:

# tail -f /var/log/ceph/ceph-osd.2.log
2013-08-06 10:51:36.655053 7f5ba701a780  0 ceph version 0.61.7 (8f010aff684e820ecc837c25ac77c7a05d7191ff), process ceph-osd, pid 11596
2013-08-06 10:51:36.658671 7f5ba701a780  1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) mkfs in /var/lib/ceph/tmp/mnt.i2NK47
2013-08-06 10:51:36.658697 7f5ba701a780  1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) mkfs fsid is already set to 5d1beb09-1f80-421d-a88c-57789e2fc33e
2013-08-06 10:51:36.813783 7f5ba701a780  1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) leveldb db exists/created
2013-08-06 10:51:36.813964 7f5ba701a780 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
2013-08-06 10:51:36.813999 7f5ba701a780  1 journal _open /var/lib/ceph/tmp/mnt.i2NK47/journal fd 10: 0 bytes, block size 4096 bytes, directio = 1, aio = 0
2013-08-06 10:51:36.814035 7f5ba701a780 -1 journal check: ondisk fsid 00000000-0000-0000-0000-000000000000 doesn't match expected 5d1beb09-1f80-421d-a88c-57789e2fc33e, invalid (someone else's?) journal
2013-08-06 10:51:36.814093 7f5ba701a780 -1 filestore(/var/lib/ceph/tmp/mnt.i2NK47) mkjournal error creating journal on /var/lib/ceph/tmp/mnt.i2NK47/journal: (22) Invalid argument
2013-08-06 10:51:36.814125 7f5ba701a780 -1 OSD::mkfs: FileStore::mkfs failed with error -22
2013-08-06 10:51:36.814185 7f5ba701a780 -1  ** ERROR: error creating empty object store in /var/lib/ceph/tmp/mnt.i2NK47: (22) Invalid argument

I have cleaned the disks with dd, zapped them and so forth but this always occurs. If doing sdc/sdd first, for example, then sda or whatever follows fails with similar errors.

Does anyone have any insight on this issue?

Is SELinux disabled?

t.
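The "non-block journal" and zero-byte/fsid-mismatch lines in the log above are consistent with the journal's /dev/disk/by-partuuid symlink being absent, so ceph-osd creates a plain file where it expected a partition. A hedged sketch of a check one could run on the node (the function name and uuid argument are mine, not from ceph; the path layout is the standard udev one):

```python
import os
import stat

def journal_is_block_device(journal_uuid, by_partuuid='/dev/disk/by-partuuid'):
    """Return True only if the journal's by-partuuid symlink exists and
    resolves to a block device. If it does not, ceph-osd falls back to a
    plain file journal, matching the 'disabling aio for non-block journal'
    and fsid-mismatch errors in the log above."""
    link = os.path.join(by_partuuid, journal_uuid)
    if not os.path.exists(link):   # covers missing and dangling symlinks
        return False
    return stat.S_ISBLK(os.stat(link).st_mode)  # os.stat() follows the link

# on a box hitting this bug, the third journal's uuid would report False
print(journal_is_block_device('no-such-journal-uuid'))
```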
Joao Pedras
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com