Yes, this is the command that cannot find the keyring:

  service ceph-radosgw@gw1 start

But this one can:

  radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gw1 -f

I think I did not populate the /var/lib/ceph/radosgw/ceph-gw1/ folder 
correctly. Maybe the init script is checking for a 'done' file or 
something like that. I manually added the keyring there, but I don't 
know the exact syntax I should use; all variants seem to generate the 
same errors.

  [radosgw.ceph-gw1]
      key = xxx==

My osds have:

  [osd.12]
      key = xxx==

But my monitors have this one:

  [mon.]
      key = xxx==
      caps mon = "allow *"
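For reference, the section header in a cephx keyring is the full entity 
name, including the "client." prefix, so a hand-written 
[radosgw.ceph-gw1] section will most likely fail to parse ("radosgw" is 
not a recognized entity type the way client, mon and osd are). The 
least error-prone route is to let ceph write the file itself; a minimal 
sketch, assuming the daemon is meant to run as client.gw1, which is the 
name the startup log below actually looks up:

  # Entity name and caps here are assumptions; adjust them to whatever
  # name the daemon is started with. -o writes a correctly formatted
  # keyring file.
  ceph auth get-or-create client.gw1 mon 'allow rwx' osd 'allow rwx' \
      -o /var/lib/ceph/radosgw/ceph-gw1/keyring
  chown ceph:ceph /var/lib/ceph/radosgw/ceph-gw1/keyring

  # The generated file then looks like:
  # [client.gw1]
  #     key = AQ...==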
-----Original Message-----
From: Jean-Charles Lopez [mailto:jelopez@xxxxxxxxxx]
Sent: Wednesday 13 September 2017 1:06
To: Marc Roos
Cc: ceph-users@xxxxxxxxxxxxxx
Subject: Re: Rgw install manual install luminous

Hi,

see comment inline.

Regards
JC

> On Sep 12, 2017, at 13:31, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:
> 
> I have been trying to set up the rados gateway (without ceph-deploy), 
> but I am missing some commands to enable the service, I guess. How do 
> I populate /var/lib/ceph/radosgw/ceph-gw1? I didn't see a command for 
> this like there is for ceph-mon.
> 
> service ceph-radosgw@gw1 start
> Gives:
> 2017-09-12 22:26:06.390523 7fb9d7f27e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:26:06.390537 7fb9d7f27e00 0 deferred set uid:gid to 167:167 (ceph:ceph)
> 2017-09-12 22:26:06.390592 7fb9d7f27e00 0 ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process (unknown), pid 28481
> 2017-09-12 22:26:06.412882 7fb9d7f27e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:26:06.415335 7fb9d7f27e00 -1 auth: error parsing file /var/lib/ceph/radosgw/ceph-gw1/keyring
> 2017-09-12 22:26:06.415342 7fb9d7f27e00 -1 auth: failed to load /var/lib/ceph/radosgw/ceph-gw1/keyring: (5) Input/output error
> 2017-09-12 22:26:06.415355 7fb9d7f27e00 0 librados: client.gw1 initialization error (5) Input/output error
> 2017-09-12 22:26:06.415981 7fb9d7f27e00 -1 Couldn't init storage provider (RADOS)
> 2017-09-12 22:26:06.669892 7f1740d89e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:26:06.669919 7f1740d89e00 0 deferred set uid:gid to 167:167 (ceph:ceph)
> 2017-09-12 22:26:06.669977 7f1740d89e00 0 ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process (unknown), pid 28497
> 2017-09-12 22:26:06.693019 7f1740d89e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:26:06.695963 7f1740d89e00 -1 auth: error parsing file /var/lib/ceph/radosgw/ceph-gw1/keyring
> 2017-09-12 22:26:06.695971 7f1740d89e00 -1 auth: failed to load /var/lib/ceph/radosgw/ceph-gw1/keyring: (5) Input/output error

Looks like you don't have the keyring for the RGW user. The error 
message tells you about the location and the filename to use.

> 2017-09-12 22:26:06.695989 7f1740d89e00 0 librados: client.gw1 initialization error (5) Input/output error
> 2017-09-12 22:26:06.696850 7f1740d89e00 -1 Couldn't init storage provider (RADOS)
> 
> radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gw1 -f --log-to-stderr --debug-rgw=1 --debug-ms=1
> Gives:
> 2017-09-12 22:20:55.845184 7f9004b54e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:20:55.845457 7f9004b54e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:20:55.845508 7f9004b54e00 0 ceph version 12.2.0 (32ce2a3ae5239ee33d6150705cdb24d43bab910c) luminous (rc), process (unknown), pid 28122
> 2017-09-12 22:20:55.867423 7f9004b54e00 -1 WARNING: the following dangerous and experimental features are enabled: bluestore
> 2017-09-12 22:20:55.869509 7f9004b54e00 1 Processor -- start
> 2017-09-12 22:20:55.869573 7f9004b54e00 1 -- - start start
> 2017-09-12 22:20:55.870324 7f9004b54e00 1 -- - --> 192.168.10.111:6789/0 -- auth(proto 0 36 bytes epoch 0) v1 -- 0x7f9006e6ec80 con 0
> 2017-09-12 22:20:55.870350 7f9004b54e00 1 -- - --> 192.168.10.112:6789/0 -- auth(proto 0 36 bytes epoch 0) v1 -- 0x7f9006e6ef00 con 0
> 2017-09-12 22:20:55.870824 7f8ff1fc4700 1 -- 192.168.10.114:0/4093088986 learned_addr learned my addr 192.168.10.114:0/4093088986
> 2017-09-12 22:20:55.871413 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 <== mon.0 192.168.10.111:6789/0 1 ==== mon_map magic: 0 v1 ==== 361+0+0 (1785674138 0 0) 0x7f9006e8afc0 con 0x7f90070d8800
> 2017-09-12 22:20:55.871567 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 <== mon.0 192.168.10.111:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (4108244008 0 0) 0x7f9006e6ec80 con 0x7f90070d8800
> 2017-09-12 22:20:55.871662 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 --> 192.168.10.111:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- 0x7f9006e6f900 con 0
> 2017-09-12 22:20:55.871688 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 <== mon.1 192.168.10.112:6789/0 1 ==== mon_map magic: 0 v1 ==== 361+0+0 (1785674138 0 0) 0x7f9006e8b200 con 0x7f90070d7000
> 2017-09-12 22:20:55.871734 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 <== mon.1 192.168.10.112:6789/0 2 ==== auth_reply(proto 2 0 (0) Success) v1 ==== 33+0+0 (3872865519 0 0) 0x7f9006e6ef00 con 0x7f90070d7000
> 2017-09-12 22:20:55.871759 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 --> 192.168.10.112:6789/0 -- auth(proto 2 2 bytes epoch 0) v1 -- 0x7f9006e6ec80 con 0
> 2017-09-12 22:20:55.872083 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 <== mon.0 192.168.10.111:6789/0 3 ==== auth_reply(proto 2 -22 (22) Invalid argument) v1 ==== 24+0+0 (3879741687 0 0) 0x7f9006e6f900 con 0x7f90070d8800
> 2017-09-12 22:20:55.872122 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 >> 192.168.10.111:6789/0 conn(0x7f90070d8800 :-1 s=STATE_OPEN pgs=3828 cs=1 l=1).mark_down
> 2017-09-12 22:20:55.872166 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 <== mon.1 192.168.10.112:6789/0 3 ==== auth_reply(proto 2 -22 (22) Invalid argument) v1 ==== 24+0+0 (3090386669 0 0) 0x7f9006e6ec80 con 0x7f90070d7000
> 2017-09-12 22:20:55.872179 7f8ff07c1700 1 -- 192.168.10.114:0/4093088986 >> 192.168.10.112:6789/0 conn(0x7f90070d7000 :-1 s=STATE_OPEN pgs=1962 cs=1 l=1).mark_down
> 2017-09-12 22:20:55.872278 7f9004b54e00 0 librados: client.radosgw.gw1 authentication error (22) Invalid argument
> 2017-09-12 22:20:55.872615 7f9004b54e00 1 -- 192.168.10.114:0/4093088986 shutdown_connections
> 2017-09-12 22:20:55.872732 7f9004b54e00 1 -- 192.168.10.114:0/4093088986 shutdown_connections
> 2017-09-12 22:20:55.872759 7f9004b54e00 1 -- 192.168.10.114:0/4093088986 wait complete.
> 2017-09-12 22:20:55.872869 7f9004b54e00 1 -- 192.168.10.114:0/4093088986 >> 192.168.10.114:0/4093088986 conn(0x7f90070d4000 :-1 s=STATE_NONE pgs=0 cs=0 l=0).mark_down
> 2017-09-12 22:20:55.873019 7f9004b54e00 -1 Couldn't init storage provider (RADOS)
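On this second run the keyring itself is found (the messenger reaches 
the monitors and gets through the first auth round), but the auth_reply 
of (22) Invalid argument indicates the monitors could not validate the 
credentials presented for client.radosgw.gw1, for instance because the 
local key does not match what the cluster stores. One way to compare 
the two; the path below is an assumption based on librados' default 
keyring search order:

  # Key and caps as stored in the cluster:
  ceph auth get client.radosgw.gw1

  # Key the gateway host presents; /etc/ceph/ceph.client.radosgw.gw1.keyring
  # is one of the default locations searched (assumed here):
  cat /etc/ceph/ceph.client.radosgw.gw1.keyring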
> 
> Installing software on the gw node:
> yum install ceph-radosgw
> 
> Creating pools on a cluster node:
> ceph osd pool create default.rgw 8
> ceph osd pool create default.rgw.meta 8
> ceph osd pool create default.rgw.control 8
> ceph osd pool create default.rgw.log 8
> ceph osd pool create .rgw.root 8
> ceph osd pool create .rgw.gc 8
> ceph osd pool create .rgw.buckets 16
> ceph osd pool create .rgw.buckets.index 8
> ceph osd pool create .rgw.buckets.extra 8
> ceph osd pool create .intent-log 8
> ceph osd pool create .usage 8
> ceph osd pool create .users 8
> ceph osd pool create .users.email 8
> ceph osd pool create .users.swift 8
> ceph osd pool create .users.uid 8
> 
> Creating the gw node user:
> ceph auth get-or-create client.radosgw.gw1
> ceph auth caps client.radosgw.gw1 osd 'allow rwx' mon 'allow rwx'
> 
> Adding the configuration to /etc/ceph/ceph.conf:
> [client.radosgw.gw1]
> host = c04
> rgw_frontends = civetweb port=80
> 
> service ceph-radosgw start
> Redirecting to /bin/systemctl start ceph-radosgw.service
> Failed to start ceph-radosgw.service: Unit not found.
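A likely explanation for that last error: with these packages 
ceph-radosgw is a systemd template unit, so it has to be started with 
an instance name, and the instance determines both the cephx name and 
the data directory the daemon uses. A hedged sketch of the usual 
pattern, assuming the defaults implied by the log above (instance gw1 
-> client.gw1 -> /var/lib/ceph/radosgw/ceph-gw1):

  # Template unit: ceph-radosgw@<instance> starts radosgw as
  # client.<instance> and reads /var/lib/ceph/radosgw/ceph-<instance>/keyring.
  systemctl start ceph-radosgw@gw1
  systemctl enable ceph-radosgw@gw1

  # To keep the existing client.radosgw.gw1 user instead, the instance
  # and directory would have to match it:
  #   systemctl start ceph-radosgw@radosgw.gw1
  #   (keyring in /var/lib/ceph/radosgw/ceph-radosgw.gw1/keyring)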