Re: ceph-deploy, osd_journal_size and entire disk partition for journal

Thanks for the note; yes, I'm aware of all of that. The journal SSD will be shared among 3-4 HDD OSD disks.

--
Deepak

On Jun 12, 2017, at 7:07 AM, David Turner <drakonstein@xxxxxxxxx> wrote:

Why do you want a 70GB journal?  You linked to the documentation, so I'm assuming you followed the formula stated there to figure out how big your journal should be: "osd journal size = {2 * (expected throughput * filestore max sync interval)}".  I've never heard of a cluster that requires such a large journal.  The default is there because it works for 99.999% of situations.  I actually can't think of a use case that would need a journal larger than 10GB, especially on an SSD.  The vast majority of the time the journal space on the SSD is practically empty; it doesn't fill up like a cache or anything.  It's just a place where writes land quickly before being flushed to the data disk.
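As a rough worked example (the throughput figure here is only an assumed number for illustration, not a measurement from any particular cluster): with an expected throughput of 500 MB/s and the default filestore max sync interval of 5 seconds, the formula gives

    osd journal size = 2 * (500 MB/s * 5 s) = 5000 MB, i.e. roughly 5 GB

which is right in line with the 5120 MB default.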

Using 100% of your SSD's capacity is also a bad idea because of how SSDs handle failing sectors: they mark them as dead and remap the data to an unused sector.  The manufacturer over-provisions the drive at the factory, but you can help by not using 100% of the available space yourself.  If you have a 70GB SSD and only use 5-10GB of it, you will drastically increase the life of the SSD as a journal.

If you really want to get a 70GB journal partition, then stop the osd, flush the journal, set up the journal partition manually, and make sure that /var/lib/ceph/osd/ceph-##/journal is pointing to the proper journal before starting it back up.
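A rough sketch of those steps, assuming OSD id 3 and a new journal partition of /dev/sdf1 (both made up here for illustration; adjust to your cluster, and note that on Jewel the journal device may also need to be owned by the ceph user):

# stop the OSD so the journal can be flushed safely
systemctl stop ceph-osd@3        # or "service ceph stop osd.3" on sysvinit

# flush the existing journal contents to the data disk
ceph-osd -i 3 --flush-journal

# repoint the OSD at the new journal partition
rm /var/lib/ceph/osd/ceph-3/journal
ln -s /dev/sdf1 /var/lib/ceph/osd/ceph-3/journal   # a /dev/disk/by-partuuid path is more robust

# create a fresh journal on the new partition and bring the OSD back up
ceph-osd -i 3 --mkjournal
systemctl start ceph-osd@3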

Unless you REALLY NEED a 70GB journal partition... don't do it.

On Mon, Jun 12, 2017 at 1:07 AM Deepak Naidu <dnaidu@xxxxxxxxxx> wrote:

Hello folks,

 

I am trying to use an entire SSD partition for the journal, e.g. the /dev/sdf1 partition (70GB). But when I look up the OSD config with the command below, I see that ceph-deploy set the journal size to 5GB. More confusingly, the OSD log in /var/log/ceph/ceph-osd.x.log shows the correct size in blocks.

So my question is: is Ceph using the entire disk partition for my OSD journal, or only 5GB (the ceph-deploy default)?
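For what it's worth, a quick way to see what the journal actually points at and how big the partition is (OSD id 3 and /dev/sdf1 are used here just as examples matching my setup) would be something like:

# where the OSD's journal symlink actually points
ls -l /var/lib/ceph/osd/ceph-3/journal

# size of the partition itself, in bytes
blockdev --getsize64 /dev/sdf1

# what the running OSD reported about its journal at startup
grep -i journal /var/log/ceph/ceph-osd.3.log | tail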

 

I know I can set the journal size per OSD or globally in ceph.conf (see the sketch below). I am using Jewel 10.2.7.
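For the record, this is the kind of ceph.conf setting I mean; the values are only illustrative:

# applies to all OSDs (size is in MB)
[osd]
osd journal size = 20480

# overrides the global value for one OSD only
[osd.3]
osd journal size = 71680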

 

ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok config get osd_journal_size
{
    "osd_journal_size": "5120"
}

 

I also tried what the documentation below suggests (I set osd_journal_size to 0), but config get osd_journal_size then just shows 0, i.e. the value I set, so I'm even more confused.

 

http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/


Any info is appreciated.


PS: I searched and found a thread describing a similar issue, but there was no response on it.

 

--

Deepak

 



_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
