Re: Testing Ceph cluster for future deployment.

[root@blsceph01-1 ~]# radosgw-admin user info --uid=testuser
2016-08-15 12:04:33.290367 7f7bea1f09c0  0 RGWZoneParams::create():
error creating default zone params: (17) File exists

You might not have a radosgw user named testuser. To see a list of users: radosgw-admin --name client.admin metadata list user
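
If the user simply doesn't exist yet, creating it first should make the
info call work. A minimal sketch (the display name is just an example):

radosgw-admin user create --uid=testuser --display-name="Test User"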

[root@blsceph01-1 ~]# radosgw-admin region get > region.conf.json
failed to init zonegroup: (2) No such file or directory
[root@blsceph01-1 ~]# radosgw-admin zone get > zone.conf.json
unable to initialize zone: (2) No such file or directory

You might not have a zone or region configured, as the zone params errors suggest.
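
Note that in Jewel (10.2.x) "region" was renamed to "zonegroup", and on a
fresh install the default realm/zone may not be committed yet. A possible
workaround, sketched here assuming the stock name "default", is to name
the zone and zonegroup explicitly:

radosgw-admin zonegroup get --rgw-zonegroup=default > region.conf.json
radosgw-admin zone get --rgw-zone=default > zone.conf.json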

Apart from the above, I have also noticed that not every command's output matches what its man page says. More often than not, the result is simply empty: []

Regarding PGs, OSDs and objects: my experience is that once you have built a cluster with its initial settings (such as PGs per pool), the more pools you create, the more PGs you get. Sooner or later you will hit a health warning like "too many PGs per OSD" or "too many objects per xxx", and at that point adding more OSDs to the cluster is what helps.
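
To see how the PGs are actually spread before and after adding OSDs,
"ceph osd df" prints a per-OSD PGS column; the warning threshold itself
is the monitor option mon_pg_warn_max_per_osd. A quick sketch (the mon id
blsceph01-1 below is only an assumption based on the hostname in this
thread):

ceph osd df
ceph daemon mon.blsceph01-1 config get mon_pg_warn_max_per_osd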

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of jan hugo prins <jprins@xxxxxxxxxxxx>
Sent: August 15, 2016 12:14:05
To: ceph-users@xxxxxxxxxxxxxx
Subject: Testing Ceph cluster for future deployment.
 
Hello,

I'm currently in the phase of testing a Ceph setup to see if it will
fit our need for a 3 DC storage solution.

I installed CentOS 7 with Ceph version 10.2.2.

I have a few things that I noticed so far:

- In radosgw-admin (S3) I see an error:

[root@blsceph01-1 ~]# radosgw-admin user info --uid=testuser
2016-08-15 12:04:33.290367 7f7bea1f09c0  0 RGWZoneParams::create():
error creating default zone params: (17) File exists

I have found some references to this error online, but those were
related to an upgrade issue (http://tracker.ceph.com/issues/15597).
My install is a fresh install of 10.2.2. I think someone else also
mentioned seeing this error, but I can't find a solution so far.


- I chose not to name my cluster simply "ceph", because we could end up
with multiple clusters in the future; instead I named my cluster blsceph01.
During installation I ran into the issue that the cluster wouldn't start:
the systemd unit files (/usr/lib/systemd/system/) contain a hard-coded
reference to the cluster name ceph (Environment=CLUSTER=ceph), and only
after changing this to my cluster name would everything work normally.
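
As a side note, edits under /usr/lib/systemd/system/ are overwritten on
package updates; a drop-in override is a safer sketch of the same fix
(assuming the mon units are among those affected, and reusing the cluster
name from this thread):

mkdir -p /etc/systemd/system/ceph-mon@.service.d
cat > /etc/systemd/system/ceph-mon@.service.d/cluster.conf <<EOF
[Service]
Environment=CLUSTER=blsceph01
EOF
systemctl daemon-reload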


- I currently have 3 OSD nodes, each with 3 1 TB SSD drives used as
OSDs, so in total I have 9 OSD drives. Looking at the documentation,
this would give me a total of 512 PGs. The total number of pools that
we are going to house on this storage is currently unknown, but I have
started with the installation of S3, which gives me 12 pools to start
with, so pg_num and pgp_num per pool should be set to 32. Is this
correct, or am I missing something here? What if I create more pools
over time and end up with more than 16 pools? Then my total number of
PGs would be higher than 512. I already see the message "too many
PGs per OSD (609 > max 300)"; I could raise this warning level, but
where are the limits?
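
Working the usual rule of thumb through these numbers (assuming 3x
replication): total PGs ≈ (9 OSDs × 100) / 3 replicas = 300, rounded up
to the next power of two gives 512. The per-OSD load is then roughly
sum(pg_num × size) / number of OSDs, so 12 pools at pg_num 32 with size 3
would give 12 × 32 × 3 / 9 = 128 PGs per OSD, well under the default
warning threshold of 300; a reading of 609 suggests the pools actually
carry far more PGs than 12 × 32.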

- I currently have a warning stating the following: pool
default.rgw.buckets.data has many more objects per pg than average (too
few pgs?)
Is it possible to spread the buckets of a pure S3 workload across
multiple pools? Could I make a dedicated pool for a bucket if I expect
that bucket to be very big, or split the buckets of different customers?
Or maybe have different protection levels for different buckets?

- I tried to follow this howto
(http://cephnotes.ksperis.com/blog/2014/11/28/placement-pools-on-rados-gw)
on how to put a bucket in a specific placement pool, so I can split the
data of different customers into different pools, but some commands
return an error:

[root@blsceph01-1 ~]# radosgw-admin region get > region.conf.json
failed to init zonegroup: (2) No such file or directory
[root@blsceph01-1 ~]# radosgw-admin zone get > zone.conf.json
unable to initialize zone: (2) No such file or directory

This could have something to do with the other error radosgw-admin is
giving me.
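
For reference, on Jewel the howto's region commands map to zonegroup
ones, and the per-customer split is done by editing the zonegroup/zone
JSON. A rough sketch, where "custom-placement" and the edits are purely
illustrative assumptions:

radosgw-admin zonegroup get --rgw-zonegroup=default > zonegroup.json
# add {"name": "custom-placement"} to "placement_targets" in zonegroup.json
radosgw-admin zonegroup set --rgw-zonegroup=default < zonegroup.json
radosgw-admin zone get --rgw-zone=default > zone.json
# add a matching "custom-placement" entry with its own data/index pools
# under "placement_pools" in zone.json
radosgw-admin zone set --rgw-zone=default < zone.json
radosgw-admin period update --commit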


--
Met vriendelijke groet / Best regards,

Jan Hugo Prins
Infra and Isilon storage consultant

Better.be B.V.
Auke Vleerstraat 140 E | 7547 AN Enschede | KvK 08097527
T +31 (0) 53 48 00 694 | M +31 (0)6 26 358 951
jprins@xxxxxxxxxxxx | www.betterbe.com

This e-mail is intended exclusively for the addressee(s), and may not
be passed on to, or made available for use by any person other than
the addressee(s). Better.be B.V. rules out any and every liability
resulting from any electronic transmission.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
