Re: Ceph OSD with OCFS2

Hi,

 

The Ceph journal works in a different way. It is a write-ahead journal: all data is persisted to the journal first and then written to its actual location. Journal data is encoded. The journal is a fixed-size partition/file and is written sequentially. So if you place the journal on HDDs, it will be overwritten in place; in the SSD case, it will be garbage-collected later. Therefore, if you measure the amount of data written to the device, it will be double. But if you are saying you wrote a 500MB file to the cluster and the actual file size shows as 10G, that should not be the case. How are you seeing this size, by the way?
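If you want to observe this yourself, here is a rough sketch; the device name is a placeholder, assuming the journal shares the OSD data device:

    # Per-device write throughput while a client writes; with journal and
    # data colocated, wMB/s should be roughly 2x the client write rate.
    iostat -x 5 /dev/sdb

    # Cluster-wide accounting: GLOBAL USED is raw space across all OSDs,
    # POOLS USED is logical data, so raw is about logical x replica count.
    ceph df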

 

Could you please tell us more about your configuration?

What replication policy are you using?

What interface did you use to store the data?
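
For reference, this can be read straight from the cluster; the pool name 'rbd' below is just an example:

    ceph -s                        # overall cluster health and status
    ceph osd pool get rbd size     # replica count (replication factor)
    ceph osd dump | grep 'pool'    # sizes and rulesets of all pools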

 

Regarding your other queries:

 

<< If I transfer 1GB of data, what will the size on the server (OSD) be? Will it be written in compressed format?

 

No, the actual data is not compressed. You don't want to fill up the OSD disks, and there are limits you can set; check the following link:

 

http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/

 

By default, an OSD will stop accepting writes once its disk is 95% full.
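
The relevant settings are mon_osd_nearfull_ratio (0.85 by default) and mon_osd_full_ratio (0.95 by default); for example, to inspect or adjust them on a running cluster:

    # Warnings start at 85% (nearfull); writes stop at 95% (full).
    ceph health detail | grep -i full

    # Adjust at runtime (example values), or set the equivalents
    # under [global] in ceph.conf:
    ceph tell mon.* injectargs '--mon-osd-nearfull-ratio 0.85'
    ceph tell mon.* injectargs '--mon-osd-full-ratio 0.95'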

 

<< Is it possible to take a backup of the compressed data from the server, copy it to another machine as Server_Backup, and then start a new client using Server_Backup?

For backups, check the following link to see if it works for you:

 

https://ceph.com/community/blog/tag/backup/

 

Also, you can use an RGW federated configuration for backup.
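
If the data lives on RBD, one approach covered on that blog is snapshot-based incremental backup with export-diff; a minimal sketch, with pool/image/host names as placeholders:

    # Initial full copy to the backup cluster.
    rbd snap create rbd/myimage@base
    rbd export rbd/myimage@base - | ssh backuphost rbd import - rbd/myimage
    ssh backuphost rbd snap create rbd/myimage@base

    # Later: ship only the changes since the last snapshot.
    rbd snap create rbd/myimage@snap1
    rbd export-diff --from-snap base rbd/myimage@snap1 - | \
        ssh backuphost rbd import-diff - rbd/myimage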

 

<< Data removal is very slow

 

How are you removing data? Are you removing an RBD image?

 

If you are removing an entire pool, that should be fast; pool deletion removes the underlying data asynchronously, I believe.
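
For comparison, with hypothetical names:

    # Removing a whole pool is a single operation; space is reclaimed
    # asynchronously in the background.
    ceph osd pool delete mypool mypool --yes-i-really-really-mean-it

    # Removing one RBD image touches every backing object, so it is
    # much slower on large images.
    rbd rm rbd/myimage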

 

Thanks & Regards

Somnath

 

From: gjprabu [mailto:gjprabu@xxxxxxxxxxxx]
Sent: Thursday, June 11, 2015 6:38 AM
To: Somnath Roy
Cc: ceph-users@xxxxxxxxxxxxxx; Kamala Subramani; Siva Sokkumuthu    
Subject: Re: RE: [ceph-users] Ceph OSD with OCFS2

 

Hi Team,

    Once the data transfer is completed, the journal should flush all of its data to the real locations, but in our case the size still shows as double after the transfer completes. Here everyone gets confused about what the real file and folder size is. Also, what will happen if I move the monitor from that OSD server to a separate machine? Might that solve the double-size issue?

    We also have the queries below.

1. Operations on hg/git repositories such as clone, pull, checkout, and update take an extra 2-3 minutes.

2. If I transfer 1GB of data, what will the size on the server (OSD) be? Will it be written in compressed format?

3. Is it possible to take a backup of the compressed data from the server, copy it to another machine as Server_Backup, and then start a new client using Server_Backup?

4. Data removal is very slow.

Regards

Prabu

 

 


---- On Fri, 05 Jun 2015 21:55:28 +0530 Somnath Roy <Somnath.Roy@xxxxxxxxxxx> wrote ----

Yes, Ceph writes twice: once to the journal and once for the actual data. Since you configured the journal on the same device, this is what you end up seeing if you are monitoring the device bandwidth.

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of gjprabu
Sent: Friday, June 05, 2015 3:07 AM
To: ceph-users@xxxxxxxxxxxxxx
Cc: Kamala Subramani; Siva Sokkumuthu
Subject: [ceph-users] Ceph OSD with OCFS2

 

Dear Team, 

   We are new to Ceph and are using it with two OSDs and two clients. Both clients mount an OCFS2 file system. If I transfer 500MB of data on a client, it shows double the size (1GB) after the data transfer finishes. Is this behavior correct, or is there a solution for it?

Regards

Prabu

 

 




 

 

