Re: Ceph Bucket strange issues rgw.none + id and marker different.

When the bucket id is different from the bucket marker, that indicates the bucket has been resharded. The bucket stats show 128 shards, which is reasonable for that object count. The rgw.none category in bucket stats is nothing to worry about.
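
If you want to double-check this from your side, here is a quick sketch (XXXX is the placeholder bucket name from your mail, and the instance id is the "id" value from your bucket stats):

radosgw-admin metadata get bucket:XXXX
radosgw-admin metadata get bucket.instance:XXXX:48efb8c3-693c-4fe0-bbe4-fdc16f590a82.14987264.1
radosgw-admin reshard status --bucket=XXXX
radosgw-admin reshard list

The bucket.instance metadata should report num_shards = 128, and the reshard commands will show whether any resharding is still queued or in progress.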

What Ceph version is this? This reminds me of a fix in https://github.com/ceph/ceph/pull/23940, which I now see never got its backports to Mimic or Luminous. :(
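
(For reference, "ceph versions" run from an admin/mon node, or "radosgw-admin --version" on the gateway hosts, will show the running and installed versions.)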

On 5/7/19 10:20 AM, EDH - Manuel Rios Fernandez wrote:

Hi Cephers,

We have an issue whose cause we are still investigating after more than 60 hours of searching for a misconfiguration.

After checking a lot of documentation and Q&A threads, we found that the bucket id and bucket marker are not the same. We compared all our other buckets, and in every one of them the id and marker match.

We also found that some buckets have the rgw.none section and others do not.

This bucket cannot be listed in a reasonable time. The customer reduced usage from 120 TB to 93 TB, and from 7 million objects to 5.8 million.

We isolated a single request on one RGW server and checked some metrics: just trying to list this bucket generates 2-3 Gbps of traffic from the RGW to the OSDs/MONs.
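
For illustration, the isolated request was just a plain bucket listing against one gateway; with the aws CLI and the rgw01 endpoint from the config below it would look roughly like this (XXXX is the bucket placeholder):

time aws --endpoint-url http://172.16.2.6:8080 s3 ls s3://XXXX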

I asked on IRC whether there is any problem with the index pool being under a different root (within the same site) in the CRUSH map, and we think there should not be.
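
To be explicit about that mapping, the rule and root used by the index pool can be checked roughly like this (the rule name is whatever the first command returns):

ceph osd pool get default.rgw.buckets.index crush_rule
ceph osd crush rule dump <rule name from the previous command>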

Any idea or suggestion, however crazy, will be tried.

Our relevant configuration, which may help:

CEPH DF:

ceph df

GLOBAL:

    SIZE        AVAIL       RAW USED     %RAW USED
    684 TiB     139 TiB      545 TiB         79.70

POOLS:

    NAME                           ID     USED        %USED     MAX AVAIL     OBJECTS
    volumes                        21     3.3 TiB     63.90       1.9 TiB      831300
    backups                        22         0 B      0          1.9 TiB           0
    images                         23     1.8 TiB     49.33       1.9 TiB      237066
    vms                            24     3.4 TiB     64.85       1.9 TiB      811534
    openstack-volumes-archive      25      30 TiB     47.92        32 TiB     7748864
    .rgw.root                      26     1.6 KiB      0          1.9 TiB           4
    default.rgw.control            27         0 B      0          1.9 TiB         100
    default.rgw.data.root          28      56 KiB      0          1.9 TiB         186
    default.rgw.gc                 29         0 B      0          1.9 TiB          32
    default.rgw.log                30         0 B      0          1.9 TiB         175
    default.rgw.users.uid          31     4.9 KiB      0          1.9 TiB          26
    default.rgw.users.email        36        12 B      0          1.9 TiB           1
    default.rgw.users.keys         37       243 B      0          1.9 TiB          14
    default.rgw.buckets.index      38         0 B      0          1.9 TiB        1056
    default.rgw.buckets.data       39     245 TiB     93.84        16 TiB   102131428
    default.rgw.buckets.non-ec     40         0 B      0          1.9 TiB       23046
    default.rgw.usage              43         0 B      0          1.9 TiB           6

CEPH OSD Distribution:

ceph osd tree

ID  CLASS   WEIGHT TYPE NAME                 STATUS REWEIGHT PRI-AFF

-41         654.84045 root archive

-37 130.96848     host CEPH-ARCH-R03-07

100 archive 10.91399         osd.100               up  1.00000 1.00000

101 archive 10.91399         osd.101               up  1.00000 1.00000

102 archive 10.91399         osd.102               up  1.00000 1.00000

103 archive 10.91399         osd.103               up  1.00000 1.00000

104 archive 10.91399         osd.104               up  1.00000 1.00000

105 archive 10.91399         osd.105               up  1.00000 1.00000

106 archive 10.91409         osd.106               up  1.00000 1.00000

107 archive 10.91409         osd.107               up  1.00000 1.00000

108 archive 10.91409         osd.108               up  1.00000 1.00000

109 archive 10.91409         osd.109               up  1.00000 1.00000

110 archive 10.91409         osd.110               up  1.00000 1.00000

111 archive 10.91409         osd.111               up  1.00000 1.00000

-23 130.96800     host CEPH005

  4 archive 10.91399         osd.4                 up  1.00000 1.00000

41 archive 10.91399         osd.41                up  1.00000 1.00000

74 archive 10.91399         osd.74                up  1.00000 1.00000

75 archive 10.91399         osd.75                up  1.00000 1.00000

81 archive 10.91399         osd.81                up  1.00000 1.00000

82 archive 10.91399         osd.82                up  1.00000 1.00000

83 archive 10.91399         osd.83                up  1.00000 1.00000

84 archive 10.91399         osd.84                up  1.00000 1.00000

85 archive 10.91399         osd.85                up  1.00000 1.00000

86 archive 10.91399         osd.86                up  1.00000 1.00000

87 archive 10.91399         osd.87                up  1.00000 1.00000

88 archive 10.91399         osd.88                up  1.00000 1.00000

-17 130.96800     host CEPH006

  7 archive 10.91399         osd.7                 up  1.00000 1.00000

  8 archive 10.91399         osd.8                 up  1.00000 1.00000

  9 archive 10.91399         osd.9                 up  1.00000 1.00000

10 archive 10.91399         osd.10                up  1.00000 1.00000

12 archive 10.91399         osd.12                up  1.00000 1.00000

13 archive 10.91399         osd.13                up  1.00000 1.00000

42 archive 10.91399         osd.42                up  1.00000 1.00000

43 archive 10.91399         osd.43                up  1.00000 1.00000

51 archive 10.91399         osd.51                up  1.00000 1.00000

53 archive 10.91399         osd.53                up  1.00000 1.00000

76 archive 10.91399         osd.76                up  1.00000 1.00000

80 archive 10.91399         osd.80                up  1.00000 1.00000

-26 130.96800     host CEPH007

14 archive 10.91399         osd.14                up  1.00000 1.00000

15 archive 10.91399         osd.15                up  1.00000 1.00000

16 archive 10.91399         osd.16                up  1.00000 1.00000

39 archive 10.91399         osd.39                up  1.00000 1.00000

40 archive 10.91399         osd.40                up  1.00000 1.00000

44 archive 10.91399         osd.44                up  1.00000 1.00000

48 archive 10.91399         osd.48                up  1.00000 1.00000

49 archive 10.91399         osd.49                up  1.00000 1.00000

52 archive 10.91399         osd.52                up  1.00000 1.00000

77 archive 10.91399         osd.77                up  1.00000 1.00000

89 archive 10.91399         osd.89                up  1.00000 1.00000

90 archive 10.91399         osd.90                up  1.00000 1.00000

-31 130.96800     host CEPH008

  5 archive 10.91399         osd.5                 up  1.00000 1.00000

  6 archive 10.91399         osd.6                 up  1.00000 1.00000

11 archive 10.91399         osd.11                up  1.00000 1.00000

45 archive 10.91399         osd.45                up  1.00000 1.00000

46 archive 10.91399         osd.46                up  1.00000 1.00000

47 archive 10.91399         osd.47                up  1.00000 1.00000

55 archive 10.91399         osd.55                up  1.00000 1.00000

70 archive 10.91399         osd.70                up  1.00000 1.00000

71 archive 10.91399         osd.71                up  1.00000 1.00000

78 archive 10.91399         osd.78                up  1.00000 1.00000

79 archive 10.91399         osd.79                up  1.00000 1.00000

91 archive 10.91399         osd.91                up  1.00000 1.00000

-1          27.91296 root default

-30          6.98199     host CEPH-SSD-004

92     ssd 0.87299         osd.92                up  1.00000 1.00000

93     ssd 0.87299         osd.93                up  1.00000 1.00000

94     ssd 0.87299         osd.94                up  1.00000 1.00000

95     ssd 0.87299         osd.95                up  1.00000 1.00000

96     ssd 0.87299         osd.96                up  1.00000 1.00000

97     ssd 0.87299         osd.97                up  1.00000 1.00000

98     ssd 0.87299         osd.98                up  1.00000 1.00000

99     ssd 0.87299         osd.99                up  1.00000 1.00000

-3 6.97699     host CEPH001

  1     ssd 0.43599         osd.1                 up  1.00000 1.00000

17     ssd 0.43599         osd.17                up  1.00000 1.00000

18     ssd 0.43599         osd.18                up  1.00000 1.00000

19     ssd 0.43599         osd.19                up  1.00000 1.00000

20     ssd 0.43599         osd.20                up  1.00000 1.00000

21     ssd 0.43599         osd.21                up  1.00000 1.00000

22     ssd 0.43599         osd.22                up  1.00000 1.00000

23     ssd 0.43599         osd.23                up  1.00000 1.00000

37     ssd 0.87299         osd.37                up  1.00000 1.00000

54     ssd 0.87299         osd.54                up  1.00000 1.00000

56     ssd 0.43599         osd.56                up  1.00000 1.00000

60     ssd 0.43599         osd.60                up  1.00000 1.00000

61     ssd 0.43599         osd.61                up  1.00000 1.00000

62     ssd 0.43599         osd.62                up  1.00000 1.00000

-5 6.97699     host CEPH002

  2     ssd 0.43599         osd.2                 up  1.00000 1.00000

24     ssd 0         osd.24                up  1.00000 1.00000

25     ssd 0.43599         osd.25                up  1.00000 1.00000

26     ssd 0.43599         osd.26                up  1.00000 1.00000

27     ssd 0.43599         osd.27                up  1.00000 1.00000

28     ssd 0.43599         osd.28                up  1.00000 1.00000

29     ssd 0.43599         osd.29                up  1.00000 1.00000

30     ssd 0.43599         osd.30                up  1.00000 1.00000

38     ssd 0.87299         osd.38                up  1.00000 1.00000

57     ssd 0.43599         osd.57                up  1.00000 1.00000

63     ssd 0.43599         osd.63                up  1.00000 1.00000

64     ssd 0.43599         osd.64                up  1.00000 1.00000

65     ssd 0.43599         osd.65                up  1.00000 1.00000

66     ssd 0.43599         osd.66                up  1.00000 1.00000

72     ssd 0.87299         osd.72                up  1.00000 1.00000

-7 6.97699     host CEPH003

  0     ssd 0.43599         osd.0                 up  1.00000 1.00000

  3     ssd 0.43599         osd.3                 up  1.00000 1.00000

31     ssd    0         osd.31                up  1.00000 1.00000

32     ssd 0.43599         osd.32                up  1.00000 1.00000

33     ssd 0.43599         osd.33                up  1.00000 1.00000

34     ssd 0.43599         osd.34                up  1.00000 1.00000

35     ssd 0.43599         osd.35                up  1.00000 1.00000

36     ssd 0.43599         osd.36                up  1.00000 1.00000

50     ssd 0.87299         osd.50                up  1.00000 1.00000

58     ssd   0.43599      osd.58                up  1.00000 1.00000

59     ssd 0.43599         osd.59                up  1.00000 1.00000

67     ssd 0.43599         osd.67                up  1.00000 1.00000

68     ssd 0.43599         osd.68                up  1.00000 1.00000

69     ssd 0.43599         osd.69                up  1.00000 1.00000

73     ssd 0.87299         osd.73                up  1.00000 1.00000

CEPH CONF:

[global]

#Normal-Memory 1/5

debug rgw = 1

#Disable

debug osd = 0

debug journal = 0

debug ms = 0

fsid = e1ee8086-7cce-43fd-a252-3d677af22428

mon_initial_members = CEPH001, CEPH002, CEPH003

mon_host = 172.16.2.10,172.16.2.11,172.16.2.12

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

osd pool default pg num = 1024

osd pool default pgp num = 1024

public network = 172.16.2.0/24

cluster network = 172.16.1.0/24

osd pool default size = 2

osd pool default min size = 1

rgw dynamic resharding = true

[osd]

osd mkfs type = xfs

osd op threads = 12

osd disk threads = 12

osd recovery threads = 4

osd recovery op priority = 1

osd recovery max active = 2

osd recovery max single start = 1

osd max backfills = 4

osd backfill scan max = 16

osd backfill scan min = 4

osd client op priority = 63

osd memory target = 2147483648

osd scrub begin hour = 23

osd scrub end hour = 6

osd scrub load threshold = 1 #low load scrubbing

osd scrub during recovery = false #scrub during recovery

[mon]

    mon allow pool delete = true

    mon osd min down reporters = 3

[mon.a]

    host = CEPH001

    public bind addr = 172.16.2.10

    mon addr = 172.16.2.10:6789

    mon allow pool delete = true

[mon.b]

    host = CEPH002

    public bind addr = 172.16.2.11

    mon addr = 172.16.2.11:6789

    mon allow pool delete = true

[mon.c]

    host = CEPH003

    public bind addr = 172.16.2.12

    mon addr = 172.16.2.12:6789

    mon allow pool delete = true

[client.rgw.ceph-rgw01]

host = ceph-rgw01

rgw dns name = eu-es-s3gateway.edh-services.com

rgw frontends = "beast endpoint=172.16.2.6 port=8080"

rgw resolve cname = false

rgw thread pool size = 4096

rgw op thread timeout = 600

rgw num rados handles = 1

rgw num control oids = 8

rgw cache enabled = true

rgw cache lru size = 10000

rgw enable usage log = true

rgw usage log tick interval = 30

rgw usage log flush threshold = 1024

rgw usage max shards = 32

rgw usage max user shards = 1

rgw log http headers = "http_x_forwarded_for"

[client.rgw.ceph-rgw03]

host = ceph-rgw03

rgw dns name = XXXXXXXXXXXXXX

rgw frontends = "beast endpoint=172.16.2.8 port=8080"

rgw resolve cname = false

rgw thread pool size = 4096

rgw op thread timeout = 600

rgw num rados handles = 1

rgw num control oids = 8

rgw cache enabled = true

rgw cache lru size = 100000

rgw enable usage log = true

rgw usage log tick interval = 30

rgw usage log flush threshold = 1024

rgw usage max shards = 32

rgw usage max user shards = 1

rgw log http headers = "http_x_forwarded_for"

[OSD HOST CROP]

BUCKET:

radosgw-admin bucket stats --bucket=XXXX

{

    "bucket": "XXXX",

    "zonegroup": "4d8c7c5f-ca40-4ee3-b5bb-b2cad90bd007",

    "placement_rule": "default-placement",

"explicit_placement": {

        "data_pool": "default.rgw.buckets.data",

"data_extra_pool": "default.rgw.buckets.non-ec",

        "index_pool": "default.rgw.buckets.index"

    },

    "id": "48efb8c3-693c-4fe0-bbe4-fdc16f590a82.14987264.1",

    "marker": "48efb8c3-693c-4fe0-bbe4-fdc16f590a82.3856921.7",

    "index_type": "Normal",

    "owner": "XXXXX",

    "ver": "0#38096,1#37913,2#37954,3#37836,4#38134,5#38236,6#37911,7#37879,8#37804,9#38005,10#37584,11#38446,12#37957,13#37537,14#37775,15#37794,16#38399,17#38162,18#37834,19#37633,20#37403,21#38173,22#37651,23#37303,24#37992,25#38228,26#38441,27#37975,28#38095,29#37835,30#38264,31#37958,32#37666,33#37517,34#38260,35#38168,36#37689,37#37551,38#37700,39#38056,40#38175,41#37765,42#37721,43#38472,44#37928,45#38451,46#37491,47#37875,48#38405,49#38011,50#38025,51#37983,52#37940,53#38306,54#37908,55#38181,56#37721,57#38366,58#37834,59#38392,60#37928,61#38235,62#37837,63#37940,64#38294,65#37610,66#37974,67#38304,68#37725,69#37301,70#37155,71#37681,72#37463,73#37603,74#37323,75#37717,76#37111,77#37688,78#37473,79#37052,80#37413,81#37758,82#36971,83#37327,84#37056,85#37302,86#37492,87#37250,88#37708,89#36891,90#38019,91#37485,92#37335,93#37712,94#37754,95#37117,96#37085,97#37694,98#37386,99#37384,100#37668,101#37329,102#37177,103#37494,104#37296,105#37366,106#37853,107#37234,108#36945,109#37040,110#37389,111#37973,112#37092,113#37327,114#37505,115#37545,116#37884,117#37325,118#37401,119#37208,120#37277,121#37087,122#37664,123#37649,124#37517,125#37618,126#37145,127#37300",

    "master_ver": "0#0,1#0,2#0,3#0,4#0,5#0,6#0,7#0,8#0,9#0,10#0,11#0,12#0,13#0,14#0,15#0,16#0,17#0,18#0,19#0,20#0,21#0,22#0,23#0,24#0,25#0,26#0,27#0,28#0,29#0,30#0,31#0,32#0,33#0,34#0,35#0,36#0,37#0,38#0,39#0,40#0,41#0,42#0,43#0,44#0,45#0,46#0,47#0,48#0,49#0,50#0,51#0,52#0,53#0,54#0,55#0,56#0,57#0,58#0,59#0,60#0,61#0,62#0,63#0,64#0,65#0,66#0,67#0,68#0,69#0,70#0,71#0,72#0,73#0,74#0,75#0,76#0,77#0,78#0,79#0,80#0,81#0,82#0,83#0,84#0,85#0,86#0,87#0,88#0,89#0,90#0,91#0,92#0,93#0,94#0,95#0,96#0,97#0,98#0,99#0,100#0,101#0,102#0,103#0,104#0,105#0,106#0,107#0,108#0,109#0,110#0,111#0,112#0,113#0,114#0,115#0,116#0,117#0,118#0,119#0,120#0,121#0,122#0,123#0,124#0,125#0,126#0,127#0",

    "mtime": "2019-04-30 15:14:20.152747",

    "max_marker": "0#,1#,2#,3#,4#,5#,6#,7#,8#,9#,10#,11#,12#,13#,14#,15#,16#,17#,18#,19#,20#,21#,22#,23#,24#,25#,26#,27#,28#,29#,30#,31#,32#,33#,34#,35#,36#,37#,38#,39#,40#,41#,42#,43#,44#,45#,46#,47#,48#,49#,50#,51#,52#,53#,54#,55#,56#,57#,58#,59#,60#,61#,62#,63#,64#,65#,66#,67#,68#,69#,70#,71#,72#,73#,74#,75#,76#,77#,78#,79#,80#,81#,82#,83#,84#,85#,86#,87#,88#,89#,90#,91#,92#,93#,94#,95#,96#,97#,98#,99#,100#,101#,102#,103#,104#,105#,106#,107#,108#,109#,110#,111#,112#,113#,114#,115#,116#,117#,118#,119#,120#,121#,122#,123#,124#,125#,126#,127#",

    "usage": {

        "rgw.none": {

            "size": 0,

"size_actual": 0,

"size_utilized": 0,

            "size_kb": 0,

"size_kb_actual": 0,

"size_kb_utilized": 0,

"num_objects": 0

        },

        "rgw.main": {

            "size": 95368837327307,

"size_actual": 95379421843456,

"size_utilized": 95370098064318,

            "size_kb": 93133630203,

"size_kb_actual": 93143966644,

"size_kb_utilized": 93134861391,

"num_objects": 5872260

        },

        "rgw.multimeta": {

            "size": 0,

"size_actual": 0,

"size_utilized": 0,

            "size_kb": 0,

"size_kb_actual": 0,

"size_kb_utilized": 0,

"num_objects": 467

        }

    },

    "bucket_quota": {

        "enabled": false,

        "check_on_raw": false,

        "max_size": -1024,

        "max_size_kb": 0,

        "max_objects": -1

    }

}
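
(For cross-checking: the rgw.main figures above line up with the numbers at the top of the mail. num_objects 5,872,260 is the ~5.8M objects mentioned, and size_actual 95,379,421,843,456 bytes works out to roughly 95 TB, or about 87 TiB, of data currently in the bucket.)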


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com