Hello Greg,
Output of 'ceph osd tree':
# id weight type name up/down reweight
-1 27.3 root default
-2 9.1 host stor1
0 3.64 osd.0 up 1
1 3.64 osd.1 up 1
2 1.82 osd.2 up 1
-3 9.1 host stor2
3 3.64 osd.3 up 1
4 1.82 osd.4 up 1
6 3.64 osd.6 up 1
-4 9.1 host stor3
7 3.64 osd.7 up 1
8 3.64 osd.8 up 1
9 1.82 osd.9 up 1
(The missing osd.5 is left over from an earlier test in which I removed an HDD from the running cluster, but I don't think that is relevant now.)
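For reference, a minimal sketch (plain Python, with the CRUSH weights copied by hand from the tree above; nothing here queries the cluster) of the share of data each OSD should receive if placement tracks weight:

# Expected data share per OSD if placement is proportional to CRUSH weight.
# Weights are copied by hand from the 'ceph osd tree' output above.
weights = {
    "osd.0": 3.64, "osd.1": 3.64, "osd.2": 1.82,
    "osd.3": 3.64, "osd.4": 1.82, "osd.6": 3.64,
    "osd.7": 3.64, "osd.8": 3.64, "osd.9": 1.82,
}
total = sum(weights.values())  # 27.3, matching the root weight
for osd, w in sorted(weights.items()):
    print(f"{osd}: expected ~{w / total:.1%} of the pool's data")

So an OSD with weight 3.64 should end up with about 13.3% of the data, and one with weight 1.82 about 6.7%.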
root@stor3:~# ceph osd pool get .rgw.buckets pg_num
pg_num: 250
root@stor3:~# ceph osd pool get .rgw.buckets pgp_num
pgp_num: 250
pgmap v129814: 514 pgs: 514 active; 818 GB data, 1682 GB used
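As a sanity check on pg_num, here is a minimal sketch of the commonly cited rule of thumb (roughly 100 PGs per OSD, divided by the replica count, rounded up to the next power of two). The replica count of 3 below is only an assumption, not something confirmed in this thread:

# Rule-of-thumb PG count: ~100 PGs per OSD / replica count,
# rounded up to the next power of two.
num_osds = 9    # from the osd tree above
replicas = 3    # assumption; check with 'ceph osd pool get .rgw.buckets size'
target = num_osds * 100 / replicas              # 300
pg_num = 1 << (int(target) - 1).bit_length()    # next power of two -> 512
print(pg_num)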
Thank you,
Mihaly
2013/9/16 Gregory Farnum <greg@xxxxxxxxxxx>
What is your PG count and what's the output of "ceph osd tree"? It's
possible that you've just got a slightly off distribution since there
still isn't much data in the cluster (probabilistic placement and all
that), but let's cover the basics first.
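To illustrate the "probabilistic placement" point, a toy sketch (plain Python, weighted random choice rather than real CRUSH, with the weights taken from the tree earlier in the thread) showing how the worst per-OSD deviation from the ideal share shrinks as the number of placements grows:

# Toy model of probabilistic placement: assign items at random in
# proportion to weight, then report the worst relative deviation from
# the ideal share. This is plain weighted random choice, not CRUSH.
import random
from collections import Counter

weights = {"osd.0": 3.64, "osd.1": 3.64, "osd.2": 1.82,
           "osd.3": 3.64, "osd.4": 1.82, "osd.6": 3.64,
           "osd.7": 3.64, "osd.8": 3.64, "osd.9": 1.82}
osds = list(weights)
total = sum(weights.values())

for n in (1000, 100000):
    placed = Counter(random.choices(osds, weights=list(weights.values()), k=n))
    worst = max(abs(placed[o] / n - weights[o] / total) / (weights[o] / total)
                for o in osds)
    print(f"{n} placements: worst relative deviation ~{worst:.1%}")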
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Mon, Sep 16, 2013 at 2:08 AM, Mihály Árva-Tóth
<mihaly.arva-toth@xxxxxxxxxxxxxxxxxxxxxx> wrote:
> Hello,
>
> I ran some tests on a 3-node Ceph cluster: I uploaded 3 million 50 KiB objects to a
> single container. Speed and performance were okay, but the data is not
> distributed evenly. Every node has two 4 TB HDDs and one 2 TB HDD.
>
> osd.0 41 GB (4 TB)
> osd.1 47 GB (4 TB)
> osd.3 16 GB (2 TB)
> osd.4 40 GB (4 TB)
> osd.5 49 GB (4 TB)
> osd.6 17 GB (2 TB)
> osd.7 48 GB (4 TB)
> osd.8 42 GB (4 TB)
> osd.9 18 GB (2 TB)
>
> All of the 4 TB and 2 TB HDDs are from the same vendor and the same model (WD RE SATA).
>
> I monitored IOPS with Zabbix during the test; you can see the graph here:
> http://ctrlv.in/237368
> (sda and sdb are the system HDDs.) The graph looks the same on all three nodes.
>
> Do you have any idea what is wrong, or what I should be looking at?
>
> I'm using ceph-0.67.3 on Ubuntu 12.04.3 x86_64.
>
> Thank you,
> Mihaly
>