Re: understanding PG count for a file

Thanks, Gregory.

If I give a file name to the 'ceph osd map' command, I still get only 2 OSD numbers, even though the file has many objects. Why is that? Can you please explain?

One more doubt: when a client writes an object to the primary OSD,
1. does that write complete before the primary OSD starts writing the object to the secondary OSD, or
2. does the primary OSD write the object to the secondary in parallel?

And another doubt: if a file has many objects, do the writes for all of its objects start at the same time, or one at a time?

Thanks in advance

Regards
Surya Balan 


On Mon, Aug 6, 2018 at 11:20 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
There seems to be a more fundamental confusion here. "ceph osd map" asks the cluster where a single *object* is located. On a pool of size 2, that will return 2 OSDs, but it DOES NOT check to see if the object actually exists — it just outputs the CRUSH mapping!
Files in CephFS are composed of many objects, if they are large. To find their location using "osd map" you'd need to query for those individual objects, which are named by inode number and position within the file.
(Once upon a time we had a cephfs utility that would map a file segment's OSD location for you, but I think it's gone now so you'll need to wrap it up yourself, sorry.)
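If you want to roll your own, something like this untested sketch may do; it assumes the default 4 MiB object size and a data pool named 'cephfs_data', both of which may differ on your cluster:

    #!/bin/bash
    # Print the OSD mapping for every RADOS object backing a CephFS file.
    file="$1"
    pool="cephfs_data"                              # assumed pool name
    obj_size=$((4 * 1024 * 1024))                   # assumed 4 MiB objects

    ino=$(stat -c %i "$file")                       # inode number (decimal)
    size=$(stat -c %s "$file")                      # file size in bytes
    nobj=$(( (size + obj_size - 1) / obj_size ))    # number of objects

    for ((i = 0; i < nobj; i++)); do
        # Objects are named <inode in hex>.<object index as 8 hex digits>
        obj=$(printf '%x.%08x' "$ino" "$i")
        ceph osd map "$pool" "$obj"
    done

Each "ceph osd map" call prints the PG for that one object along with that PG's own set of OSDs, so across a large file you should see many different OSDs.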
-Greg

On Thu, Aug 2, 2018 at 5:26 PM 赵贺东 <zhaohedong@xxxxxxxxx> wrote:
What is the size of your file? What about a large file?
If the file is big enough, it cannot be stored on only two OSDs.
If the file is very small, then since the object size is 4MB, it can be stored as a single object on one primary OSD and one secondary OSD.
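For example (illustrative numbers only, assuming the default 4MB object size):

    # ceil(10 MB / 4 MB) = 3 objects for a 10 MB file
    file_size=$((10 * 1024 * 1024))
    object_size=$((4 * 1024 * 1024))
    echo $(( (file_size + object_size - 1) / object_size ))   # prints 3

So a 10MB file is striped over 3 objects, and those objects (plus their replicas) can land on more than two OSDs.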


On Aug 2, 2018, at 6:56 PM, Surya Bala <sooriya.balan@xxxxxxxxx> wrote:

I understood your explanation.
The 'ceph osd map <poolname> <filename>' command always gives only 2 OSDs (1 primary, 1 secondary). But the objects are not necessarily stored on only 2 OSDs; they should be spread across many OSDs.

So my doubt is why the command gives this result.

Regards
Surya Balan


On Thu, Aug 2, 2018 at 1:30 PM, 赵贺东 <zhaohedong@xxxxxxxxx> wrote:
Hello,

file -> many objects -> many PGs (each PG has two copies, because your replication count is two) -> many OSDs
PGs can be distributed across many OSDs; there is no limit of only 2. A replication count of 2 only determines that each PG has 2 copies.
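For example, you can ask the cluster about two different objects of the same file (the pool and object names here are placeholders):

    # Consecutive objects of one file usually hash to different PGs,
    # and each PG has its own pair of OSDs when the pool size is 2.
    ceph osd map cephfs_data 10000000005.00000000
    ceph osd map cephfs_data 10000000005.00000001

Each command prints one PG and that PG's own 2-OSD set; the sets differ from object to object, so the file as a whole is spread over many OSDs.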

Hope this will help.

> On Aug 2, 2018, at 3:43 PM, Surya Bala <sooriya.balan@xxxxxxxxx> wrote:
>
> Hi folks,
>
> From the Ceph documents I understood what PGs are and why the PG count should be optimal. But I can't find any information about the point below.
>
> I am using the CephFS client in my Ceph cluster. When we store a file (with a replication count of 2), it is split into objects, each object is stored in a different PG, and each PG is mapped to an OSD. That means there can be many OSDs for a single file. But why do we get only 2 OSDs from the command 'ceph osd map'?
>
> file -> many objects -> many PGs -> many OSDs
>
> Will all objects of a file be stored on only 2 OSDs (when the replication count is 2)?
>
> Regards
> Surya Balan




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
