Re: read and write speed.


 



From my (admittedly bad) memory, I've tried fsync, and the result was
still the same in the end, i.e., write 100GB+ and the numbers all
converge. I can't test it right now as the system is already running
other tasks.

I think a small bs (4-8k) with fsync results in throughput down at the
KB/s level, and a large bs (4m+) gets much higher; but with a large bs
the result eventually converges to the no-fsync numbers once you write
a continuous stretch of 100GB+.
Is this too obvious to anyone? (Other than the short-stroking effect,
which should account for at most a 20% difference.)
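For concreteness, the kind of comparison I mean (a rough sketch only;
/cephmount is the same mount path as in the dd lines quoted below, and
whether you sync on every write (oflag=dsync) or only once at the end
(conv=fsync) obviously changes the numbers):

# small bs, synced on every write: this is where I see KB/s-level rates
dd if=/dev/zero of=/cephmount/test bs=4k count=262144 oflag=dsync
# large bs, flushed at the end: fast at first, but over a continuous
# 100GB+ stretch it should converge on the no-fsync sustained rate
dd if=/dev/zero of=/cephmount/test bs=4M count=25600 conv=fsync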

How, then, does the journal size (e.g., 1GB, 2GB, etc.) set by Ceph
actually affect performance (other than reliability, latency, etc.)?
If no reliability is ever needed and, say, the journal is turned off,
what is the performance effect? From my bad memory again, running
without a journal either worsened performance or had no effect, but I
think that too was tested 'not long enough', i.e., beyond the point
where the journal flushes to disk (which shouldn't be a major
bottleneck, since, as Collin said, stuff in the journal gets
continuously flushed out to disk).
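(For reference, the journal size I mean is the usual ceph.conf knob; a
minimal sketch of the relevant [osd] lines, with the journal path just
an example:)

[osd]
        ; journal size is given in MB, so 1024 = 1GB, 2048 = 2GB, etc.
        osd journal size = 1024
        ; the journal can be a file or a raw device/partition
        ; (this path is only an example)
        osd journal = /data/osd$id/journal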

So in other words, set the journal to 10GB and dd just 10GB, and I
should get unbelievable performance; then change the dd to 100GB, and
how much does it drop? I haven't tried this, but from my very long
write/read tests, covering many ranges up to 5TB of actual content,
each disk does about 12.5MB/s at most. For small data sets, yes, I do
see high throughput there.
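Roughly this, that is (just a sketch; same /cephmount path as in the
quoted commands, with conv=fdatasync added per Henry's note below so
client-side caching doesn't inflate the result):

# about the journal size (10GB): expect the 'unbelievable' burst rate
dd if=/dev/zero of=/cephmount/file bs=1M count=10000 conv=fdatasync
# well past the journal size (100GB): expect it to drop toward the
# sustained per-disk rate
dd if=/dev/zero of=/cephmount/file bs=1M count=100000 conv=fdatasync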

Thanks

On Tue, May 31, 2011 at 14:48, Henry C Chang <henry.cy.chang@xxxxxxxxx> wrote:
> 2011/5/31 djlee064 <djlee064@xxxxxxxxx>:
>> A bit off, but can you Fyodor, and all of devs run
>>
>> dd if=/dev/zero of=/cephmount/file bs=1M count=10000   (10GB)
>> dd if=/dev/zero of=/cephmount/file bs=1M count=50000   (50GB)
>> dd if=/dev/zero of=/cephmount/file bs=1M count=100000  (100GB)
>>
>> and continue to 200gb,... 500gb
>>
>> see the MB/s difference. I expect an enormous drop, e.g., starting
>> at ~200MB/s and falling to well under 50MB/s or even lower
>> (depending on the number of disks, but about 12MB/s per disk is
>> what I have analyzed).
>> I feel that Fyodor and the rest of you are testing only a very
>> small part; the high rate at the start is likely due to the
>> journal size.
>
> It's the client-side caching effect. Without the option
> conv=fdatasync (or conv=fsync), dd reports the throughput without
> actually flushing the data.
>
> --Henry
>

