Re: Replaced a disk, first time. Quick question

19446/16764 objects degraded (115.999%)  <-- I noticed that number seems odd

I don't think that's normal!

40795/16764 objects degraded (243.349%)  <-- Now I'm really concerned.
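Just to sanity-check, that percentage is the plain ratio of the two counters printed next to it, so the degraded count really has outgrown the object total rather than the percentage being a display glitch:

# plain arithmetic, nothing Ceph-specific:
awk 'BEGIN { printf "%.3f%%\n", 19446 / 16764 * 100 }'   # 115.999%
awk 'BEGIN { printf "%.3f%%\n", 40795 / 16764 * 100 }'   # 243.349%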

I'd recommend providing more info: Ceph version, BlueStore or FileStore, CRUSH map, etc.
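Something along these lines usually covers what people ask for first (just a suggested checklist of standard ceph commands, trim as needed):

ceph versions                 # daemon versions actually running
ceph -s                       # overall status, incl. degraded/misplaced counts
ceph health detail            # which PGs and OSDs are implicated
ceph osd tree                 # up/down and in/out state per host
ceph osd df                   # per-OSD utilisation and PG counts
ceph osd pool ls detail       # size/min_size and crush_rule per pool
ceph osd crush rule dump      # full CRUSH rule definitions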

Hi, thanks for the reply.

Ceph 12.2.2, BlueStore.
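(For anyone checking later: the object store type and version can be confirmed per OSD; osd.0 below is just an example id.)

ceph osd metadata 0 | grep -E '"osd_objectstore"|"ceph_version"'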

ceph osd crush tree

ID  CLASS WEIGHT    TYPE NAME
 -1       174.62842 root default
 -3        29.10474     host OSD0
  0   hdd   3.63809         osd.0
  6   hdd   3.63809         osd.6
 12   hdd   3.63809         osd.12
 18   hdd   3.63809         osd.18
 24   hdd   3.63809         osd.24
 30   hdd   3.63809         osd.30
 46   hdd   3.63809         osd.46
 47   hdd   3.63809         osd.47
 -5        29.10474     host OSD1
  1   hdd   3.63809         osd.1
  7   hdd   3.63809         osd.7
 13   hdd   3.63809         osd.13
 19   hdd   3.63809         osd.19
 25   hdd   3.63809         osd.25
 31   hdd   3.63809         osd.31
 36   hdd   3.63809         osd.36
 41   hdd   3.63809         osd.41
 -7        29.10474     host OSD2
  2   hdd   3.63809         osd.2
  8   hdd   3.63809         osd.8
 14   hdd   3.63809         osd.14
 20   hdd   3.63809         osd.20
 26   hdd   3.63809         osd.26
 32   hdd   3.63809         osd.32
 37   hdd   3.63809         osd.37
 42   hdd   3.63809         osd.42
 -9        29.10474     host OSD3
  3   hdd   3.63809         osd.3
  9   hdd   3.63809         osd.9
 15   hdd   3.63809         osd.15
 21   hdd   3.63809         osd.21
 27   hdd   3.63809         osd.27
 33   hdd   3.63809         osd.33
 38   hdd   3.63809         osd.38
 43   hdd   3.63809         osd.43
-11        29.10474     host OSD4
  4   hdd   3.63809         osd.4
 10   hdd   3.63809         osd.10
 16   hdd   3.63809         osd.16
 22   hdd   3.63809         osd.22
 28   hdd   3.63809         osd.28
 34   hdd   3.63809         osd.34
 39   hdd   3.63809         osd.39
 44   hdd   3.63809         osd.44
-13        29.10474     host OSD5
  5   hdd   3.63809         osd.5
 11   hdd   3.63809         osd.11
 17   hdd   3.63809         osd.17
 23   hdd   3.63809         osd.23
 29   hdd   3.63809         osd.29
 35   hdd   3.63809         osd.35
 40   hdd   3.63809         osd.40
 45   hdd   3.63809         osd.45

ceph osd crush rule ls

replicated_rule
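If the actual rule body matters, it can be dumped as well (output omitted here; replicated_rule is just the stock default name):

ceph osd crush rule dump replicated_rule      # shows the rule's steps and failure domain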

thanks.

-Drew

 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
