Re: Why you might want packages not containers for Ceph deployments

> Setting up cephadm was pretty straightforward and doing the upgrade was
> also "easy". But I was not fond of it at all, as I felt that I lost
> control. I had set up a couple of machines with different hardware
> profiles to run various services on each, and when I put hosts into the
> cluster and deployed services, cephadm chose to put things on machines
> not well suited to handle that kind of work.

Indeed, using a containerized service requires knowledge of it; it is just ridiculous to assume you don't need any. For me, cephadm is a dead end.
Most will choose their own container environment, and what happens to Ceph then? Do you have to maintain two different environments? Allocating resources between two different container environments, I cannot even imagine what complications that brings.
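
For what it's worth, for those who do stay on cephadm, placement does not have to be left to the scheduler: you can label hosts and pin services to those labels. A rough sketch (hostnames here are placeholders):

    # label the hosts that are actually suited for each role
    ceph orch host label add bighost1 osd
    ceph orch host label add smallhost1 mon

    # pin the service to labelled hosts instead of letting cephadm pick
    ceph orch apply mon --placement="label:mon"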

> Furthermore, when running the upgrade you got only one line of text on
> the current progress, so I felt I was not in control of what happened.

There is nothing wrong with that. If you had five years of experience with podman/docker, you would feel more comfortable and in control. But you are here to use Ceph (like everyone else), not to acquire knowledge of a new container environment the cephadm team has chosen for you.
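
That said, the upgrade is not a complete black box; there is more visibility than the one-line progress if you ask for it, e.g.:

    ceph orch upgrade status   # target version and current progress
    ceph -W cephadm            # follow the cephadm log live
    ceph -s                    # overall cluster health while it runs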

> Currently, I run with the built packages for Debian and use the same
> operating system and packages on all machines, and upgrading a cluster
> is as easy as running apt update and apt upgrade. After reboot, that
> machine is done. By doing that in the correct order, you will have
> complete control.

Indeed, safe and sound. Have you read recently that some people were updating software, which caused dockerd to terminate all their Ceph daemons? That is what you get when you treat containers as an easy "I do not know what to do" deployment tool.
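
For the record, the package-based rolling upgrade described above is, per host, roughly this sketch (assuming Debian packages and that you wait for recovery between hosts):

    # on the admin node, once, before touching any host
    ceph osd set noout

    # then per host, in the correct order:
    apt update && apt upgrade
    reboot
    # after the host is back, check before moving on; proceed only on
    # HEALTH_OK (or the expected HEALTH_WARN from noout)
    ceph -s

    # when every host is done
    ceph osd unset noout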

> than ten hosts, as in my case. And it might not be feasible if you have
> a server park with 1000 servers, but then again, controlling and
> managing your cluster is part of the work, so perhaps you don't want an
> automatic solution there either.

There are organizations, like CERN and NASA, that have been running thousands of OSDs for years. They have been managing fine; the most important thing is to keep things simple.

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


