Re: Fresh install - all OSDs remain down and out

Hi Markus,
I am not sure where the problem is, but yes, it should weight all the OSDs automatically.
I found something in your first post:

...
bd-2:/dev/sdaf:/dev/sdaf2
ceph-deploy disk zap bd-2:/dev/sdaf
...

You used 'ceph-deploy osd create --zap-disk bd-2:/dev/sdaf:/dev/sdaf2', right?
That runs the disk zap first, which reformats the whole disk (sdaf), so the journal partition you pointed at (sdaf2) would have been deleted.
It also seems that ceph-deploy has only done the "prepare" step on the OSD disk and has not activated it yet.
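If so, activating the prepared data partition by hand may bring the OSD up. A minimal sketch, assuming the data partition was created as sdaf1 (check with 'sgdisk -p /dev/sdaf' and adjust per disk):

# ceph-deploy osd activate bd-2:/dev/sdaf1:/dev/sdaf2

or, directly on the OSD host:

# ceph-disk activate /dev/sdaf1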


2016-03-22 19:56 GMT+08:00 Markus Goldberg <goldberg@xxxxxxxxxxxxxxxxx>:
Hi desmond,
this seems like a lot of work for 90 OSDs, with plenty of opportunity for typing mistakes.
Every disk change would need extra editing, too.
This weighting was done automatically in former versions.
Do you know why and where this changed, or did I make a mistake at some point?

Markus

On 21.03.2016 at 13:28, 施柏安 wrote:
Hi Markus

You should define the "osd" and "host" entries to make the Ceph cluster work.
Use the types in your map (osd, host, chassis, ..., root) to design the crushmap according to your needs.
Example:
host node1 {
        id -2
        alg straw
        hash 0
        item osd.0 weight 1.00
        item osd.1 weight 1.00
}
host node2 {
        id -3
        alg straw
        hash 0
        item osd.2 weight 1.00
        item osd.3 weight 1.00
}
root default {
        id -1
        alg straw
        hash 0
        item node1 weight 2.00   # sum of its items
        item node2 weight 2.00
}
Then you can use the default ruleset; it is set to take the root "default".
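As an alternative to editing the decompiled map, the same layout can be built at runtime with the crush subcommands; a sketch, assuming the bucket names node1/node2 from the example above:

# ceph osd crush add-bucket node1 host
# ceph osd crush move node1 root=default
# ceph osd crush add osd.0 1.00 host=node1
# ceph osd crush add osd.1 1.00 host=node1

The bucket weights are then summed up for you as items are added.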


2016-03-21 19:50 GMT+08:00 Markus Goldberg <goldberg@xxxxxxxxxxxxxxxxx>:
Hi desmond,
this is my decompile_map:
root@bd-a:/etc/ceph# cat decompile_map
# begin crush map
tunable choose_local_tries 0
tunable choose_local_fallback_tries 0
tunable choose_total_tries 50
tunable chooseleaf_descend_once 1
tunable straw_calc_version 1

# devices

# types
type 0 osd
type 1 host
type 2 chassis
type 3 rack
type 4 row
type 5 pdu
type 6 pod
type 7 room
type 8 datacenter
type 9 region
type 10 root

# buckets
root default {
        id -1           # do not change unnecessarily
        # weight 0.000
        alg straw
        hash 0  # rjenkins1
}

# rules
rule replicated_ruleset {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step chooseleaf firstn 0 type host
        step emit
}

# end crush map
root@bd-a:/etc/ceph#

How should I change it?
I never had to edit anything in this area in former versions of Ceph. Has something changed?
Is any new parameter necessary in ceph.conf while installing?
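One setting that governs this automatic weighting is "osd crush update on start": when it is true (the default), each OSD adds itself to the crushmap with a size-derived weight at startup; when false, entries stay at weight 0. A snippet to make the default explicit in ceph.conf, offered as something to check rather than a known fix:

[osd]
osd crush update on start = true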

Thank you,
  Markus

On 21.03.2016 at 10:34, 施柏安 wrote:
It seems that no weight is set on any of your OSDs, so the PGs are stuck in creating.
You can use these commands to edit the crushmap and set the weights:

# ceph osd getcrushmap -o map
# crushtool -d map -o decompile_map
# vim decompile_map (then you can set the weight of each of your OSDs and of each host)
# crushtool -c decompile_map -o changed_map
# ceph osd setcrushmap -i changed_map

Then, it should work in your situation.
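For 90 OSDs a small loop saves the hand-editing; a sketch using 'ceph osd crush create-or-move', assuming (hypothetically) that osd.60 through osd.89 live on bd-2, with a placeholder weight of 1.0 (normally the disk size in TiB):

# for i in $(seq 60 89); do ceph osd crush create-or-move osd.$i 1.0 host=bd-2 root=default; done

Repeat per host with its own OSD range.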


2016-03-21 17:20 GMT+08:00 Markus Goldberg <goldberg@xxxxxxxxxxxxxxxxx>:
Hi,
root@bd-a:~# ceph osd tree
ID WEIGHT TYPE NAME    UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1      0 root default
 0      0 osd.0           down        0          1.00000
 1      0 osd.1           down        0          1.00000
 2      0 osd.2           down        0          1.00000
...    [all the other OSDs deleted from this listing, as they look the same]
...
88      0 osd.88          down        0          1.00000
89      0 osd.89          down        0          1.00000
root@bd-a:~#

bye,
  Markus

On 21.03.2016 at 10:10, 施柏安 wrote:
What does your crushmap show? Or the output of the command 'ceph osd tree'?

2016-03-21 16:39 GMT+08:00 Markus Goldberg <goldberg@xxxxxxxxxxxxxxxxx>:
Hi,
I have upgraded my hardware and installed Ceph completely fresh as described in http://docs.ceph.com/docs/master/rados/deployment/
The last step was creating the OSDs: http://docs.ceph.com/docs/master/rados/deployment/ceph-deploy-osd/
I used the create command; after that the OSDs should be in and up, but they are all down and out.
An additional 'osd activate' command does not help.
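A few checks on one of the OSD hosts may narrow this down (a sketch; the paths assume the defaults):

# ceph-disk list
# mount | grep /var/lib/ceph/osd
# tail -n 50 /var/log/ceph/ceph-osd.0.log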

Ubuntu 14.04.4 kernel 4.2.1
ceph 10.0.2

What should I do? Where is my mistake?

This is ceph.conf:

[global]
fsid = 122e929a-111b-4067-80e4-3fef39e66ecf
mon_initial_members = bd-0, bd-1, bd-2
mon_host = xxx.xxx.xxx.20,xxx.xxx.xxx.21,xxx.xxx.xxx.22
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public network = xxx.xxx.xxx.0/24
cluster network = 192.168.1.0/24
osd_journal_size = 10240
osd pool default size = 2
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1
osd_mkfs_type = btrfs
osd_mkfs_options_btrfs = -f -n 32k -l 32k
osd_mount_options_btrfs = rw,noatime,nodiratime,autodefrag
mds_max_file_size = 50000000000000


This is the log of the last OSD:
##########
bd-2:/dev/sdaf:/dev/sdaf2
ceph-deploy disk zap bd-2:/dev/sdaf
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.31): /usr/bin/ceph-deploy osd create --fs-type btrfs bd-2:/dev/sdaf:/dev/sdaf2
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  disk                          : [('bd-2', '/dev/sdaf', '/dev/sdaf2')]
[ceph_deploy.cli][INFO  ]  dmcrypt                       : False
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  dmcrypt_key_dir               : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f944e197488>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  fs_type                       : btrfs
[ceph_deploy.cli][INFO  ]  func                          : <function osd at 0x7f944e16b500>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  zap_disk                      : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks bd-2:/dev/sdaf:/dev/sdaf2
[bd-2][DEBUG ] connected to host: bd-2
[bd-2][DEBUG ] detect platform information from remote host
[bd-2][DEBUG ] detect machine type
[bd-2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to bd-2
[bd-2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host bd-2 disk /dev/sdaf journal /dev/sdaf2 activate True
[bd-2][INFO  ] Running command: ceph-disk -v prepare --cluster ceph --fs-type btrfs -- /dev/sdaf /dev/sdaf2
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is /sys/dev/block/65:242/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is /sys/dev/block/65:242/dm/uuid
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_btrfs
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_btrfs
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_cryptsetup_parameters
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_key_size
[bd-2][WARNIN] INFO:ceph-disk:Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_dmcrypt_type
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is /sys/dev/block/65:242/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:Journal /dev/sdaf2 is a partition
[bd-2][WARNIN] WARNING:ceph-disk:OSD will not be hot-swappable if journal is not the same device as the osd data
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf2 uuid path is /sys/dev/block/65:242/dm/uuid
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk -i 2 /dev/sdaf
[bd-2][WARNIN] WARNING:ceph-disk:Journal /dev/sdaf2 was not prepared with ceph-disk. Symlinking directly.
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:Creating osd partition on /dev/sdaf
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:c9486257-e53d-40b8-b7f6-3d228d0cb1f7 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be -- /dev/sdaf
[bd-2][DEBUG ] The operation has completed successfully.
[bd-2][WARNIN] DEBUG:ceph-disk:Calling partprobe on created device /dev/sdaf
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdaf
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] DEBUG:ceph-disk:Creating btrfs fs on /dev/sdaf1
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/mkfs -t btrfs -f -n 32k -l 32k -- /dev/sdaf1
[bd-2][WARNIN] Turning ON incompat feature 'extref': increased hardlink limit per file to 65536
[bd-2][DEBUG ]
[bd-2][DEBUG ] WARNING! - Btrfs v3.12 IS EXPERIMENTAL
[bd-2][DEBUG ] WARNING! - see http://btrfs.wiki.kernel.org before using
[bd-2][DEBUG ]
[bd-2][DEBUG ] fs created label (null) on /dev/sdaf1
[bd-2][DEBUG ]  nodesize 32768 leafsize 32768 sectorsize 4096 size 3.63TiB
[bd-2][DEBUG ] Btrfs v3.12
[bd-2][WARNIN] DEBUG:ceph-disk:Mounting /dev/sdaf1 on /var/lib/ceph/tmp/mnt.lW5X6l with options rw,noatime,nodiratime,autodefrag
[bd-2][WARNIN] INFO:ceph-disk:Running command: /bin/mount -t btrfs -o rw,noatime,nodiratime,autodefrag -- /dev/sdaf1 /var/lib/ceph/tmp/mnt.lW5X6l
[bd-2][WARNIN] DEBUG:ceph-disk:Preparing osd data dir /var/lib/ceph/tmp/mnt.lW5X6l
[bd-2][WARNIN] DEBUG:ceph-disk:Creating symlink /var/lib/ceph/tmp/mnt.lW5X6l/journal -> /dev/sdaf2
[bd-2][WARNIN] INFO:ceph-disk:Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lW5X6l/ceph_fsid.35649.tmp
[bd-2][WARNIN] INFO:ceph-disk:Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lW5X6l/fsid.35649.tmp
[bd-2][WARNIN] INFO:ceph-disk:Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lW5X6l/magic.35649.tmp
[bd-2][WARNIN] INFO:ceph-disk:Running command: /bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.lW5X6l
[bd-2][WARNIN] DEBUG:ceph-disk:Unmounting /var/lib/ceph/tmp/mnt.lW5X6l
[bd-2][WARNIN] INFO:ceph-disk:Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.lW5X6l
[bd-2][WARNIN] DEBUG:ceph-disk:get_dm_uuid /dev/sdaf uuid path is /sys/dev/block/65:240/dm/uuid
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/sdaf
[bd-2][DEBUG ] The operation has completed successfully.
[bd-2][WARNIN] DEBUG:ceph-disk:Calling partprobe on prepared device /dev/sdaf
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/partprobe /dev/sdaf
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm settle
[bd-2][WARNIN] INFO:ceph-disk:Running command: /sbin/udevadm trigger --action=add --sysname-match sdaf1
[bd-2][INFO  ] checking OSD status...
[bd-2][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json
[bd-2][WARNIN] there are 90 OSDs down
[bd-2][WARNIN] there are 90 OSDs out
[ceph_deploy.osd][DEBUG ] Host bd-2 is now ready for osd use.
root@bd-a:/etc/ceph#


root@bd-a:/etc/ceph# ceph -s
    cluster 122e929a-111b-4067-80e4-3fef39e66ecf
     health HEALTH_WARN
            64 pgs stuck inactive
            64 pgs stuck unclean
     monmap e1: 3 mons at {bd-0=xxx.xxx.xxx.20:6789/0,bd-1=xxx.xxx.xxx.21:6789/0,bd-2=xxx.xxx.xxx.22:6789/0}
            election epoch 6, quorum 0,1,2 bd-0,bd-1,bd-2
     osdmap e91: 90 osds: 0 up, 0 in
            flags sortbitwise
      pgmap v92: 64 pgs, 1 pools, 0 bytes data, 0 objects
            0 kB used, 0 kB / 0 kB avail
                  64 creating
root@bd-a:/etc/ceph#

--
Regards,
  Markus Goldberg

--------------------------------------------------------------------------
Markus Goldberg       Universität Hildesheim
                      Rechenzentrum
Tel +49 5121 88392822 Universitätsplatz 1, D-31141 Hildesheim, Germany
Fax +49 5121 88392823 email goldberg@xxxxxxxxxxxxxxxxx
--------------------------------------------------------------------------











_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
