On 10/03/2017 07:59 AM, Alex Gorbachev wrote:
Hi Sam,
On Mon, Oct 2, 2017 at 6:01 PM Sam Huracan <nowitzki.sammy@xxxxxxxxx> wrote:
Anyone can help me?
On Oct 2, 2017 17:56, "Sam Huracan" <nowitzki.sammy@xxxxxxxxx> wrote:
Hi,
I'm reading this document:
http://storageconference.us/2017/Presentations/CephObjectStore-slides.pdf
I have 3 questions:
1. Does BlueStore write data (to the raw block device) and metadata
(to RocksDB) simultaneously, or sequentially?
2. In my opinion, BlueStore performance cannot match FileStore with an
SSD journal, because writing to the raw disk is slower than writing
through a buffer (which is the journal's purpose). What do you think?
3. Does putting the RocksDB and RocksDB WAL on SSD improve only write
performance, only read performance, or both?
Looking forward to your answer,
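(Editorial note on question 3: the usual way to place the RocksDB metadata and WAL on a faster device is at OSD creation time with ceph-volume. A minimal sketch follows; the device paths are hypothetical and the partitions must exist beforehand.)

```shell
# Create a BlueStore OSD with data on an HDD and the RocksDB
# metadata (block.db) and write-ahead log (block.wal) on
# separate SSD/NVMe partitions. Device names are examples only.
ceph-volume lvm create --bluestore \
    --data /dev/sdb \
    --block.db /dev/nvme0n1p1 \
    --block.wal /dev/nvme0n1p2
```

If --block.wal is omitted, the WAL lives on the same device as block.db; if both are omitted, everything stays on the data device.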
I am researching the same thing, but recommend you look
at http://ceph.com/community/new-luminous-bluestore
And also search for "BlueStore cache" to answer some of your questions.
My test Luminous cluster so far is not as performant as I would like,
but I have not yet put a serious effort into tuning it, and it does
seem stable.
Hth, Alex
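(Editorial note: the BlueStore cache Alex refers to is controlled by a few OSD options in ceph.conf. A minimal sketch for Luminous follows; the byte values are illustrative, not recommendations.)

```ini
[osd]
# BlueStore keeps its own cache (it does not use the page cache).
# Luminous defaults are roughly 1 GiB for HDD-backed and 3 GiB for
# SSD-backed OSDs; the values below are examples only.
bluestore_cache_size_hdd = 2147483648
bluestore_cache_size_ssd = 4294967296
```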
Hi Alex,
If you see anything specific please let us know. There are a couple of
corner cases where bluestore is likely to be slower than filestore
(specifically small sequential reads/writes with no client side cache or
read ahead). I've also seen some cases where filestore has higher read
throughput potential (4MB seq reads with multiple NVMe drives per OSD
node). In many other cases bluestore is faster (and sometimes much
faster) than filestore in our tests. Writes in general tend to be
faster and high volume object creation is much faster with much lower
tail latencies (filestore really suffers in this test due to PG splitting).
Mark
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
--
Alex Gorbachev
Storcium