On Mon, 24 May 2021, 21:08 Marc, <Marc@xxxxxxxxxxxxxxxxx> wrote:
>
> > I'm attempting to use cephadm and Pacific, currently on debian buster,
> > mostly because centos7 ain't supported any more and centos8 ain't
> > supported by some of my hardware.
>
> Who says centos7 is not supported any more? Afaik centos7/el7 is
> supported until its EOL in 2024. By then maybe a good alternative to
> el8/stream will have surfaced.

Not supported by Ceph Pacific; it's our OS of choice otherwise.

My testing says the versions of podman, docker and python3 available do
not work with Pacific. Given that I've needed to upgrade docker on
buster, can we please have a list of versions that work with cephadm?
Maybe even have cephadm refuse with a "please upgrade" message unless
you're running the right version or newer. There's a rough sketch of the
kind of pre-flight check I mean at the end of this mail.

> > Anyway I have a few nodes with 59x 7.2TB disks, but for some reason
> > the osd daemons don't start; the disks get formatted and the OSDs get
> > created, but the daemons never come up.
>
> What if you try with
> ceph-volume lvm create --data /dev/sdi --dmcrypt ?

I'll have a go (I've noted below how I plan to verify the result).

> > They are probably the wrong spec for ceph (48GB of memory and only 4
> > cores)
>
> You can always start with just configuring a few disks per node. That
> should always work.

That was my thought too; see the spec sketch below.

Thanks
Peter

> > but I was expecting them to start and be either dirt slow or crash
> > later. Anyway, I've got up to 30 of them, so I was hoping to get at
> > least 6PB of raw storage out of them.
> >
> > As yet I've not spotted any helpful error messages.
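PS: here is a rough sketch of the kind of pre-flight check I mean,
assuming the usual tools are on PATH. If I remember right, Pacific's
cephadm already ships a check-host subcommand that validates most of the
host prerequisites:

    # Let cephadm validate the host (container engine, systemd, lvm, ...):
    cephadm check-host

    # Or eyeball the versions by hand:
    podman --version || docker --version
    python3 --version

What's missing is comparing those against a published list of known-good
versions and refusing to continue if the host falls short.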
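On the manual ceph-volume route: this is roughly how I intend to verify
the OSD actually comes up (/dev/sdi is just the example device from
above, and the osd id will of course differ):

    # Create the OSD by hand, bypassing the orchestrator:
    ceph-volume lvm create --data /dev/sdi --dmcrypt

    # List what ceph-volume thinks it created:
    ceph-volume lvm list

    # Check whether the new osd registered and came up in the cluster:
    ceph osd tree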
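On the "few disks per node" point, the arithmetic is grim: with the
default osd_memory_target of 4 GiB, 59 OSDs would want roughly 236 GiB
of RAM, so 48 GB leaves well under 1 GiB per OSD. If I read the drive
group docs right, a spec along these lines should cap how many disks
cephadm consumes per host (the limit of 4 is just a starting guess, and
the exact spec layout may differ by release):

    # osd_spec.yml - only consume 4 spinning data devices per host
    service_type: osd
    service_id: limited_osds
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        rotational: 1
        limit: 4

    # then apply it:
    ceph orch apply -i osd_spec.yml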
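And on the missing error messages: under cephadm the daemon logs land in
journald on the host rather than in /var/log/ceph, which I keep
forgetting. Where I'll be looking next (osd.12 and <fsid> are
placeholders):

    # Ask cephadm for a daemon's logs (a wrapper around journalctl):
    cephadm logs --name osd.12

    # Or query journald directly; unit names embed the cluster fsid:
    journalctl -u 'ceph-<fsid>@osd.12.service'

    # Recent orchestrator-level events:
    ceph log last cephadm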