Re: How to track one file stored in Ceph as an object

Thanks a lot for your reply.

I know that for [5,3], osd.5 is the primary OSD, since my replica size
is 2. And in my test cluster, test.txt is the only file.

I just did "mount -t cephfs 192.168.250.15:6789:/", so does that mean
the pool "data" is used by default?

##The acting OSDs however are the OSD numbers. So for this PG 5 is
primary and 3 secondary.##   I don't quite understand this part.
"up [5,3]" means osd.5 is the primary and osd.3 holds the replica;
but then what does "acting [5,3]" mean?
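My rough understanding so far (please correct me if I am wrong): "up" is
the OSD set that CRUSH currently computes for the PG, and "acting" is the
set actually serving the PG right now; they only differ during recovery,
backfill, or when an OSD is out. I assume the two sets can also be checked
per PG with something like:

    ceph pg map 3.8

which should print the osdmap epoch plus the up and acting sets for that PG.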

My purpose is just to understand how a file is mapped to its objects.
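
To make it concrete, here is the kind of check I have in mind (a rough
sketch; I am assuming the CephFS data ends up in the pool "data", and that
the data objects are named <inode-in-hex>.<object-number-in-hex>, so please
correct me if that naming is wrong):

    # the inode shown by "ls -i" in my earlier mail: 1099511627776
    printf '%x\n' 1099511627776          # -> 10000000000
    # list the RADOS objects that appear to belong to this inode
    rados -p data ls | grep '^10000000000\.'
    # map the first object of the file to its PG and OSDs
    ceph osd map data 10000000000.00000000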

I read some information from
http://www.ibm.com/developerworks/library/l-ceph/ , which is part of why
I am confused about it:

##Rather than rely on allocation lists (metadata to map blocks on a
disk to a given file), Ceph uses an interesting alternative. A file
from the Linux perspective is assigned an inode number (INO) from the
metadata server, which is a unique identifier for the file. The file
is then carved into some number of objects (based on the size of the
file). Using the INO and the object number (ONO), each object is
assigned an object ID (OID). Using a simple hash over the OID, each
object is assigned to a placement group. The placement group
(identified as a PGID) is a conceptual container for objects. Finally,
the mapping of the placement group to object storage devices is a
pseudo-random mapping using an algorithm called Controlled Replication
Under Scalable Hashing (CRUSH). In this way, mapping of placement
groups (and replicas) to storage devices does not rely on any metadata
but instead on a pseudo-random mapping function. This behavior is
ideal, because it minimizes the overhead of storage and simplifies the
distribution and lookup of data##
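
If I apply that description to the "ceph osd map" output from my earlier
mail (quoted below), I think the numbers line up roughly like this. pg_num
of the pool "volumes" is 256, a power of two, so a plain modulo should give
the same result as Ceph's stable_mod; again, correct me if I am off:

    # the hash shown in the "ceph osd map" output: 0x8b0b6108
    # PG = hash modulo pg_num (pg_num = 256), inside pool id 3 ('volumes')
    printf 'pg: 3.%x\n' $((0x8b0b6108 % 256))    # -> pg: 3.8
    # CRUSH then maps PG 3.8 to the OSD set, here up [5,3] / acting [5,3]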


========== your last reply ==========
On 08/15/2013 05:18 PM, Wido wrote:
> I mounted cephfs at /mnt/mycephfs on Debian 7, kernel 3.10
>
> e.g. I have one file:
> root@test-debian:/mnt/mycephfs# ls -i test.txt
> 1099511627776 test.txt
> root@test-debian:/mnt/mycephfs# ceph osd map volumes test.txt

So you used the pool volumes here when mounting instead of the pool "data" ?

> osdmap e351 pool 'volumes' (3) object 'test.txt' -> pg 3.8b0b6108 (3.8)
> -> up [5,3] acting [5,3]

I don't think it's that easy, since you also have to factor in the path
name of the file. I haven't done this myself, but your method would
imply there can only be one "test.txt" in that whole pool.

The acting OSDs however are the OSD numbers. So for this PG 5 is primary
and 3 secondary.

> root@test-debian:/mnt/mycephfs# ceph osd  pool get volumes size
> size: 2
> root@test-debian:/mnt/mycephfs# ceph osd stat
> e351: 7 osds: 7 up, 7 in
> root@test-debian:/mnt/mycephfs# ceph osd dump
>
> epoch 351
> fsid db32486a-7ad3-4afe-8b67-49ee2a6dcecf
> created 2013-08-08 13:45:52.579015
> modified 2013-08-15 02:39:45.473969
> flags
>
> pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins
> pg_num 192 pgp_num 192 last_change 1 owner 0 crash_replay_interval 45
> pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash
> rjenkins pg_num 192 pgp_num 192 last_change 1 owner 0
> pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins
> pg_num 192 pgp_num 192 last_change 1 owner 0
> pool 3 'volumes' rep size 2 min_size 1 crush_ruleset 0 object_hash
> rjenkins pg_num 256 pgp_num 256 last_change 220 owner 18446744073709551615
>
>
> How do I know where a file is stored in Ceph, i.e. which object and
> which PG? I want to understand how Ceph works.
>

In your example the data is on osd.3 and osd.5

Wido

> Please give me some guidance.
>
> Best wishes.



