Re: Beginner's questions regarding Ceph Deployment with ceph-ansible

On Mon, Aug 6, 2018 at 3:08 PM Jörg Kastning <joerg.kastning@xxxxxxxxxxxxxxxx> wrote:
Hi @all,

Hi!
 

I'm very new to Ceph and trying to learn how to set up a testing
environment in which we could explore the possibilities of Ceph.

For a complete beginner the documentation (URL:
http://docs.ceph.com/docs/master/) leaves some questions open. So I hope
to find someone on this list who is willing to take me by the hand and
guide me through the deployment process.

To give you an idea on what we are trying to accomplish I'm going to
describe our goals and the testing environment in the following sections
and ask specific questions about the deployment process afterwards.

# Goals

We would like to set up an environment to provide block, file and object
storage for our customers and as a storage backend for an OpenStack
environment.

We are total beginners with Ceph and OpenStack and would like to learn
how to set up and operate a Ceph cluster before we take a look at OpenStack.

That's a good approach: have the storage topic figured out before the next step with OpenStack. As you may already know, Ceph is the most common storage option year after year in the OpenStack community surveys. That's because it is fully supported, well documented, and works like a charm if you follow the instructions.
 

# Testing Environment

Our testing environment contains four virtual machines (VMs) in a VMware
vSphere cluster. One VM should be used as an admin node and three should
serve as Ceph nodes for the different daemons like mon, osd and so on.

For a testing environment you can go with a low number of machines: at least 3 for HA mons, and 3 OSD nodes for a three-copy replicated configuration.
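As a sketch, a minimal ceph-ansible inventory for that layout could look like the following, where each of the three Ceph nodes doubles as a mon and an OSD host (the hostnames here are placeholders, not from the original thread):

```ini
# hosts -- example ceph-ansible inventory (hostnames are hypothetical)
[mons]
ceph-node1
ceph-node2
ceph-node3

[osds]
ceph-node1
ceph-node2
ceph-node3
```

Co-locating mons and OSDs like this is fine for a test cluster; for production you would usually separate them.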
 

I worked through the preflight checklist (URL:
http://docs.ceph.com/docs/master/start/quick-start-preflight/#) and
ensured connectivity from the admin node to the ceph nodes with the ceph
deploy user.

# Questions

## NTP

Section INSTALL NTP (URL:
http://docs.ceph.com/docs/master/start/quick-start-preflight/#install-ntp)
advises to install the 'ntp' package on each node. Does it have to be
'ntp' or could 'chrony' be used instead? If it has to be 'ntp', then why?

You can use whatever you want to keep the clocks stable and synchronized on all machines. There is no hard dependency on the 'ntp' package; the real requirement is that all nodes stay in sync.
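With chrony, for example, a minimal /etc/chrony.conf could look like the sketch below (the pool name is just an illustration; use whatever time servers your environment provides):

```
# /etc/chrony.conf -- minimal example (time source is an assumption)
pool pool.ntp.org iburst
driftfile /var/lib/chrony/drift
# Step the clock on large initial offsets instead of slewing slowly
makestep 1.0 3
rtcsync
```

After enabling the chronyd service, `chronyc tracking` on each node lets you verify that the clock is actually synchronized.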
 

## ceph-ansible

### Section Releases (URL:
http://docs.ceph.com/ceph-ansible/master/index.html#releases)

This section says that supported ansible versions are 2.4 and 2.5. Is
the current version 2.6.2 supported as well or do I have to use one of
the earlier versions instead?

You can use 2.6. 
 

### Playbook (URL:
http://docs.ceph.com/ceph-ansible/master/index.html#playbook)

I took a quick look into the playbook and saw a lot of host group names
describing services; I do not know whether I need them or what they are
used for. In my understanding, for a basic cluster I only need mons and
osds, and I could expand the cluster later with some metadata servers if
I want to use CephFS.

Yes, correct: a minimal cluster is mons and osds, plus mds for CephFS (the number of nodes may vary depending on HA, scalability and so on).
 

But what are agents, rgws, nfss, restapis, rbdmirrors, clients and
iscsi-gws? Where could I find additional information about them? What
are they for and how do I use them? Please point me to the right section
in the documentation.


As Ceph is a multi-protocol storage system, it can talk to HTTP/REST-based object storage clients and act as an NFS server or an iSCSI target. Those groups just configure the proper software stack and daemons to support that. If you are sure you will go with CephFS, don't bother with them. But please note that the most common deployment scenario for OpenStack on Ceph includes RBD block storage (covered by the base mon/osd setup) and object storage provided by RadosGW, so double-check whether you really don't want it.
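If you do decide you want RadosGW later, it is typically just one more host group in the same inventory (the hostname below is hypothetical):

```ini
# Added to the ceph-ansible inventory when object storage is needed
[rgws]
ceph-rgw1
```

The corresponding role in site.yml then deploys the RadosGW daemon on that node.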
 
I guess I can delete all lines from my site.yml that are not necessary
for a minimal setup and add them back later when expanding the cluster.


Yes, you can easily comment them out.
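As a sketch, the hosts list at the top of site.yml might then look like this, with the unused groups commented out (exact group names can differ slightly between ceph-ansible releases):

```yaml
# site.yml (excerpt) -- keep only the groups needed for a minimal cluster
- hosts:
  - mons
  - osds
#  - mdss        # uncomment when adding CephFS metadata servers
#  - rgws        # uncomment for RadosGW object storage
#  - nfss
#  - rbdmirrors
#  - clients
#  - iscsigws
```

Uncommenting a group later and re-running the playbook is the usual way to expand the cluster with that service.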
 
Best regards!
-- 
pawel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
