On Tue, 27 Jul 2021 at 10:09, Marc <Marc@xxxxxxxxxxxxxxxxx> wrote:
>
> > Try to install a completely new ceph cluster from scratch on freshly
> > installed LTS Ubuntu following this doc:
> > https://docs.ceph.com/en/latest/cephadm/install/ . Many interesting
> > discoveries await you.
>
> On centos7 14.2.22, a manual install with no surprises (just installed, so
> not really fully tested), less than 100 lines. I prefer the online manuals
> to lay it out like this, especially the user permissions, because the
> syntax for those has changed a little over time.
>
> yum install python-werkzeug -y
> yum install ceph-osd ceph-mgr ceph-mon ceph-mds ceph-radosgw -y
  ^^^^^
I usually end up needing to add epel-release as well, otherwise some of the
dependencies don't work out.

> ========
> osd's
> ========
> sudo -u ceph ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring

This part will not work unless the ceph admin key is already in place on
all the OSD hosts, so if you are not running all services on the same host
as the rest, this step would fail. (A rough sketch of what I mean is at the
bottom of this mail.)

> ceph-volume lvm zap --destroy /dev/sdb
> ceph-volume lvm create --data /dev/sdb --dmcrypt
> systemctl enable ceph-osd@0

...and these (sdb and osd@0) will of course vary for everyone.

Radosgw was installed but not enabled, configured or started, nor was a
bootstrap key made for it (does anyone use bootstrap for rgw?).

Not trying to shoot you down or anything, just pointing out that there are
hidden assumptions every time anyone tries to write docs that are supposed
to "help" everyone.

-- 
May the most significant bit of your life be positive.
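
P.S. The sketch I mentioned, for the keyring part. This is only a rough
outline of one way to get the bootstrap-osd key onto a separate OSD host,
assuming an admin/mon host called "mon1" and an OSD host called "osd1" with
ssh between them (the hostnames are made up for the example, they are not
from Marc's notes):

  # on the admin/mon host: export the bootstrap-osd key
  ceph auth get client.bootstrap-osd -o /tmp/ceph.bootstrap-osd.keyring

  # copy the cluster config and the bootstrap key to the OSD host
  scp /etc/ceph/ceph.conf osd1:/etc/ceph/ceph.conf
  scp /tmp/ceph.bootstrap-osd.keyring osd1:/var/lib/ceph/bootstrap-osd/ceph.keyring

  # on the OSD host: make sure the ceph user can read it,
  # then the ceph-volume commands from the list above should work
  ssh osd1 chown ceph:ceph /var/lib/ceph/bootstrap-osd/ceph.keyring

Something along those lines is what the single-host version quietly assumes
has already happened.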