Re: Testing Ceph cluster for future deployment.

Hello,

On Mon, 15 Aug 2016 14:14:05 +0200 jan hugo prins wrote:

> Hello,
> 
> I'm currently in the phase of testing a Ceph setup to see if it will fit
> our need for a 3 DC storage solution.
> 
The usual warnings about performance (network latency) with a multi DC
Ceph cluster apply.
Do (re-)search the ML archives.

> I installed CentOS 7 with Ceph version 10.2.2
> 
> I have a few things that I noticed so far:
> 
[no RGW insights from me]
> 
> - I currently have 3 OSD nodes, each with 3 1TB SSD drives used as
> OSDs, so 9 OSD drives in total. Looking at the documentation this
> would give me a total of 512 PGs. The total number of pools that we
> are going to house on this storage is currently unknown, but I have
> started with the installation of S3, which gives me 12 pools to start
> with, so the pg_num and pgp_num per pool should be set to 32.
> Is this correct, or am I missing something here?

That is basically correct, however you want to allocate more PGs to busy
and large (data) pools and fewer to infrequently used and small pools.
Again, no RGW insights from me, but looking at http://ceph.com/pgcalc/
I'd say that most pools will be better off with 16 PGs and the buckets
data pool with the remainder.
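
For what it's worth, a rough sketch of what that could look like when
creating the RGW pools yourself (pool names are the Jewel defaults, the
PG counts just follow the pgcalc reasoning above, adjust to taste).
Keep in mind that pg_num can only ever be increased on an existing pool,
never decreased, so it pays to get this right at creation time:

  ceph osd pool create default.rgw.buckets.data 256 256
  ceph osd pool create default.rgw.buckets.index 16 16
  ceph osd pool create default.rgw.control 16 16

and to raise an existing pool (increase only):

  ceph osd pool set default.rgw.buckets.data pg_num 256
  ceph osd pool set default.rgw.buckets.data pgp_num 256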

> What if I create more
> pools over time and have more than 16 pools? Then my total number of
> PGs is higher than this number of 512.
You need to balance pools, PGs and OSDs.
The idea being that a pool which actually holds data will also consume
space and thus require more OSDs anyway, and more OSDs raise the number
of PGs your cluster can sensibly carry.
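
To put rough numbers on it (assuming the usual target of about 100 PGs
per OSD and size=3 pools, which is where your 512 figure comes from):

  9 OSDs * 100 PGs per OSD / 3 copies = 300 PGs, rounded up to the next
  power of two = 512

That 512 is the budget for all pools combined, not per pool, and it only
grows when you add OSDs.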

> I already see the message "too
> many PGs per OSD (609 > max 300)" and I could raise this warning
> threshold, but where are the limits?
> 
That's way too high and you should not be seeing this if all your 12 pools
have 32 PGs.
So you probably already have more pools, with a LOT more PGs (roughly 1500
beyond what your 12 RGW pools account for).
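
A quick sanity check you can do yourself (assuming size=3 everywhere):

  ceph osd dump | grep pg_num

and add up the pg_num values. 609 PGs per OSD times 9 OSDs divided by 3
copies works out to roughly 1800 PGs in total, versus the 384 that 12
pools at 32 PGs each would account for.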

> - I currently have a warning stating the following: pool
> default.rgw.buckets.data has many more objects per pg than average (too
> few pgs?)

See above and pgcalc.
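
If you want to see the actual numbers behind that warning:

  rados df

(or ceph df detail) shows the object count per pool; divide that by each
pool's pg_num and compare the buckets data pool with the mostly empty
control/meta pools to see how skewed things are.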

Christian
> Is it possible to spread the buckets in a pure S3 workload across
> multiple pools? Could I make a dedicated pool for a bucket if I expect
> that bucket to be very big, or split the buckets of different customers
> across pools? Or maybe have different protection levels for different
> buckets?
> 
> - I tried to follow this howto
> (http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw)
> on how to put a bucket in a specific placement pool so I can split data of
> different customers into different pools, but some commands return an error:
> 
> [root@blsceph01-1 ~]# radosgw-admin region get > region.conf.json
> failed to init zonegroup: (2) No such file or directory
> [root@blsceph01-1 ~]# radosgw-admin zone get > zone.conf.json
> unable to initialize zone: (2) No such file or directory
> 
> This could have something to do with the other error radosgw-admin is
> giving me.
> 
> 
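
Wild guess from someone who doesn't run RGW, so take it with salt: in
Jewel the old "region" commands were renamed to "zonegroup", so something
along these lines may get you further than the pre-Jewel syntax in that
howto:

  radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.conf.json
  radosgw-admin zone get --rgw-zone=default > zone.conf.json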


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


