Re: Bluestore OSD support in ceph-disk

> On Sep 15, 2016, at 11:54 AM, Sage Weil <sage@xxxxxxxxxxxx> wrote:
> 
> 
> The 128MB figure is mostly pulled out of a hat.  I suspect it will be 
> reasonable, but a proper recommendation is going to depend on how we end 
> up tuning rocksdb, and we've put that off until the metadata format is 
> finalized and any rocksdb tuning we do will be meaningful.  We're pretty 
> much at that point now...
> 
> Whatever it is, it should be related to the request rate, and perhaps the 
> relative speed of the wal device and the db or main device.  The size of 
> the slower devices shouldn't matter, though.
> 
> There are some bluefs perf counters that let you monitor what the wal 
> device utilization is.  See 
> 
>  b.add_u64(l_bluefs_wal_total_bytes, "wal_total_bytes",
> 	    "Total bytes (wal device)");
>  b.add_u64(l_bluefs_wal_free_bytes, "wal_free_bytes",
> 	    "Free bytes (wal device)");
> 
> which you can monitor via 'ceph daemon osd.N perf dump'.  If you 
> discover anything interesting, let us know!
> 
> Thanks-
> sage
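
For anyone who wants to try this, a minimal way to watch those counters from the shell might look like the sketch below. It assumes osd.0 is local, that you can reach its admin socket, and that jq is installed; the counter names are the bluefs ones quoted above.

    # One-off snapshot of the bluefs wal device counters (osd.0 is an example id).
    ceph daemon osd.0 perf dump | jq '.bluefs | {wal_total_bytes, wal_free_bytes}'

    # Rough wal utilization as a percentage, polled every 10 seconds.
    while sleep 10; do
        ceph daemon osd.0 perf dump |
            jq '.bluefs | 100 - 100 * .wal_free_bytes / .wal_total_bytes'
    done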

I was able to build and deploy the latest master (commit 9096ad37f2c0798c26d7784fb4e7a781feb72cb8) with partitioned bluestore. I struggled a bit to bring up the OSDs, as the available documentation for bringing up partitioned bluestore OSDs is still fairly primitive; once ceph-disk gets updated, this pain will go away. We will stress the cluster shortly, but so far I am delighted to see that, from ground zero, it stands up on its own feet and reaches HEALTH_OK without any errors. If I see any issues in our tests, I will share them here.
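
For reference, the manual bring-up boils down to something like the sketch below. This is only the generic manual-OSD flow adapted for bluestore, not exact steps; the device paths are placeholders, and on current master bluestore still has to be enabled as an experimental feature.

    # ceph.conf fragment (per OSD, e.g. under [osd.N]); device paths are examples.
    [osd]
        osd objectstore = bluestore
        enable experimental unrecoverable data corrupting features = bluestore rocksdb
        bluestore block path = /dev/sdb2      # main data partition
        bluestore block db path = /dev/sdc1   # rocksdb db partition
        bluestore block wal path = /dev/sdc2  # wal partition (e.g. the 128MB discussed above)

    # Create and initialize the OSD by hand:
    OSD_ID=$(ceph osd create)
    mkdir -p /var/lib/ceph/osd/ceph-$OSD_ID
    ceph-osd -i $OSD_ID --mkfs --mkkey
    ceph auth add osd.$OSD_ID osd 'allow *' mon 'allow profile osd' \
        -i /var/lib/ceph/osd/ceph-$OSD_ID/keyring
    ceph-osd -i $OSD_ID

You still need to add the OSD to the crush map (ceph osd crush add ...) before it takes data, as in the usual manual deployment docs.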

Thanks,
Nitin




