Re: Ceph Design


 



FYI, if you supply a block device partition as the journal, the osd_journal_size parameter is ignored; Ceph uses the entire partition.
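For illustration, that case looks roughly like this in ceph.conf (the partition path here is only a placeholder, not from this thread):

[osd.0]
osd journal = /dev/sdf1         # raw partition: its full size becomes the journal
#osd journal size = 10000       # not consulted when the journal is a block device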

 

Thanks & Regards

Somnath

 

From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of Dominik Zalewski
Sent: Wednesday, August 05, 2015 8:48 AM
To: SUNDAY A. OLUTAYO; ceph-users
Subject: Re: [ceph-users] Ceph Design

 

Yes, there should be a separate partition per OSD. You are probably looking at a 10-20 GB journal partition per OSD. If you are creating your cluster with ceph-deploy, it can create the journal partitions for you.
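If you would rather carve the SSD up by hand instead of letting ceph-deploy do it, something along these lines should work (/dev/sdf and the 20G size are only placeholders):

sgdisk --new=1:0:+20G --change-name=1:"ceph journal" /dev/sdf
sgdisk --new=2:0:+20G --change-name=2:"ceph journal" /dev/sdf
# ...and so on, one partition per OSD journal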

 

"The expected throughput number should include the expected disk throughput (i.e., sustained data transfer rate), and network throughput. For example, a 7200 RPM disk will likely have approximately 100 MB/s. Taking the min() of the disk and network throughput should provide a reasonable expected throughput. Some users just start off with a 10GB journal size". For example:

osd journal size = 10000
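For completeness, the full rule of thumb from the docs is (if I remember it right) osd journal size = 2 * (expected throughput * filestore max sync interval). A rough worked example with the default 5 s sync interval and a network faster than the disk:

expected throughput = min(100 MB/s disk, network) = 100 MB/s
osd journal size    = 2 * 100 * 5                 = 1000 (MB)

So roughly 1 GB already covers a single spinner; the 10000 MB (10 GB) above is simply generous headroom.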

 


On Wed, Aug 5, 2015 at 4:38 PM, SUNDAY A. OLUTAYO <olutayo@xxxxxxxxxx> wrote:

I intend to have 5-8 OSDs per 400GB SSD.

 

Should there be different partitions for each OSD on the SSD?

 

Thanks,

Sunday Olutayo


From: "Dominik Zalewski" <dzalewski@xxxxxxxxxxxxx>
To: "SUNDAY A. OLUTAYO" <olutayo@xxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Wednesday, August 5, 2015 3:38:20 PM
Subject: Re: [ceph-users] Ceph Design

 

I would suggest splitting OSDs across two or more SSD journals (depending on OSD write speed and the SSDs' sustained write speed limits).

 

e.g. 2x Intel S3700 400GB for 8-10 OSDs, or 4x Intel S3500 300GB for 8-10 OSDs (it may vary depending on the setup)
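As a sketch of how that mapping could look with ceph-deploy (hostname, data disks and journal partitions below are made up, not from this thread):

ceph-deploy osd prepare node1:sdb:/dev/sdf1 node1:sdc:/dev/sdf2   # journals on the first SSD (sdf)
ceph-deploy osd prepare node1:sdd:/dev/sdg1 node1:sde:/dev/sdg2   # journals on the second SSD (sdg)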

 

If you RAID-1 the SSD journals, they will potentially "wear out" at the same time, since the same writes happen on both of them.

 

You are only going to get a journal write performance penalty with RAID-1.

 

Dominik

 

 


On Tue, Aug 4, 2015 at 10:54 PM, SUNDAY A. OLUTAYO <olutayo@xxxxxxxxxx> wrote:

I am thinking of having the Ceph journal on a RAID1 SSD.

 

Kindly advise me on this: does a RAID1 SSD for the journal make sense?



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
