Re: Rados gateway basic pools missing

OK, in the interface, when I create a bucket the index pool is created automatically (see the sketch after the pool list below):

1 device_health_metrics
2 cephfs_data
3 cephfs_metadata
4 .rgw.root
5 default.rgw.log
6 default.rgw.control
7 default.rgw.meta
8 default.rgw.buckets.index
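The data pool should appear the same way: rgw only creates default.rgw.buckets.data when the first object is actually written into a bucket. A quick sketch of how to trigger it once the s3cmd connection problem below is sorted out, assuming working S3 credentials in ~/.s3cfg (the bucket name "testbucket" is just a placeholder):

# create a bucket, then upload one small object
s3cmd mb s3://testbucket
echo hello > /tmp/hello.txt
s3cmd put /tmp/hello.txt s3://testbucket/hello.txt

# the data pool should now be listed
sudo ceph osd lspools | grep default.rgw.buckets.data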


* So I think the remaining problem is just that I can't insert anything using s3cmd; the connection itself is refused.

List command - connection problem
# s3cmd la 

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
  Please try reproducing the error using
  the latest s3cmd code from the git master
  branch found at:
    https://github.com/s3tools/s3cmd
  and have a look at the known issues list:
    https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
  If the error persists, please report the
  following lines (removing any private
  info as necessary) to:
   s3tools-bugs@xxxxxxxxxxxxxxxxxxxxx


!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Invoked as: /usr/bin/s3cmd la
Problem: <class 'ConnectionRefusedError'>: [Errno 111] Connection refused
S3cmd:   2.0.2
python:   3.8.5 (default, Jan 27 2021, 15:41:15) 
[GCC 9.3.0]
environment LANG=en_CA.UTF-8

Traceback (most recent call last):
  File "/usr/bin/s3cmd", line 3092, in <module>
    rc = main()
  File "/usr/bin/s3cmd", line 3001, in main
    rc = cmd_func(args)
  File "/usr/bin/s3cmd", line 164, in cmd_all_buckets_list_all_content
    response = s3.list_all_buckets()
  File "/usr/lib/python3/dist-packages/S3/S3.py", line 302, in list_all_buckets
    response = self.send_request(request)
  File "/usr/lib/python3/dist-packages/S3/S3.py", line 1258, in send_request
    conn = ConnMan.get(self.get_hostname(resource['bucket']))
  File "/usr/lib/python3/dist-packages/S3/ConnMan.py", line 253, in get
    conn.c.connect()
  File "/usr/lib/python3.8/http/client.py", line 921, in connect
    self.sock = self._create_connection(
  File "/usr/lib/python3.8/socket.py", line 808, in create_connection
    raise err
  File "/usr/lib/python3.8/socket.py", line 796, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
  Please try reproducing the error using
  the latest s3cmd code from the git master
  branch found at:
    https://github.com/s3tools/s3cmd
  and have a look at the known issues list:
    https://github.com/s3tools/s3cmd/wiki/Common-known-issues-and-their-solutions
  If the error persists, please report the
  above lines (removing any private
  info as necessary) to:
   s3tools-bugs@xxxxxxxxxxxxxxxxxxxxx
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
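For the record, "Connection refused" at this layer means nothing is answering on the host/port that s3cmd is targeting, so the first thing to check is the endpoint, not the cluster. A rough checklist, assuming the rgw daemons run on the monitor nodes; the port here is a placeholder since it varies by deployment (7480 is the radosgw built-in default, ceph-ansible setups often use 8080):

# see what port radosgw is actually listening on (run on an rgw node)
sudo ss -tlnp | grep radosgw

# hit the endpoint directly; a working gateway should answer with an S3-style XML response
curl http://dao-wkr-04:7480

# then make sure host_base and host_bucket in ~/.s3cfg point at that same host:port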


-----Original Message-----
From: St-Germain, Sylvain (SSC/SPC) <sylvain.st-germain@xxxxxxxxx>
Sent: March 9, 2021 17:19
To: ceph-users@xxxxxxx
Subject: Rados gateway basic pools missing

Hi everyone,

I just rebuilt a (test) cluster using:

OS : Ubuntu 20.04.2 LTS
CEPH : ceph version 15.2.9 (357616cbf726abb779ca75a551e8d02568e15b17) octopus (stable)
3 nodes : monitor/storage


1. The cluster looks good:

# ceph -s
cluster:
    id:     9a89aa5a-1702-4f87-a99c-f94c9f2cdabd
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum dao-wkr-04,dao-wkr-05,dao-wkr-06 (age 7m)
    mgr: dao-wkr-05(active, since 8m), standbys: dao-wkr-04, dao-wkr-06
    mds: cephfs:1 {0=dao-wkr-04=up:active} 2 up:standby
    osd: 9 osds: 9 up (since 7m), 9 in (since 4h)
    rgw: 3 daemons active (dao-wkr-04.rgw0, dao-wkr-05.rgw0, dao-wkr-06.rgw0)

  task status:

  data:
    pools:   7 pools, 121 pgs
    objects: 234 objects, 16 KiB
    usage:   9.0 GiB used, 2.0 TiB / 2.0 TiB avail
    pgs:     121 active+clean


2. Except that the main pools for the radosgw are not there:

# sudo ceph osd lspools

1 device_health_metrics
2 cephfs_data
3 cephfs_metadata
4 .rgw.root
5 default.rgw.log
6 default.rgw.control
7 default.rgw.meta


Missing: default.rgw.buckets.index & default.rgw.buckets.data

What do you think?
Thanks!

Sylvain


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



