Re: Minimum Recommended Hardware

On Thu, May 19, 2011 at 1:27 AM, Dyweni - Ceph-Devel
<YS3fpFE2ykfB@xxxxxxxxxx> wrote:
> 4 x OSD
>  OSD1 =
>    P3 1GHz, 256KB cache
>    2GB RAM
>    80GB IDE (dd_rescue can read about 45MB/s sustained)
>    1Gb NIC (e1000 driver)
>  OSD2 =
>    P3 550MHz, 512KB cache
>    768MB RAM
>    40GB IDE (dd_rescue can read about 30MB/s sustained)
>    1Gb NIC (e1000 driver)
>  OSD3 =
>    Celeron 1GHz, 128KB cache
>    512MB RAM
>    18GB IDE (dd_rescue can read about 15MB/s sustained)
>    1Gb NIC (e1000 driver)
>  OSD4 =
>    P3 1GHz, 256KB cache
>    512MB RAM
>    20GB IDE (dd_rescue can read about 40MB/s sustained)
>    1Gb NIC (e1000 driver)

I'm pretty sure these drives are your problem -- they're just not very
fast. Remember that in general, each write to Ceph is going to hit
both the journal and the data store, which cuts your drives down to
22.5, 15, 7.5, and 20 MB/s in the best case (and likely a lot worse:
the journal is a streaming write, but the data store sees random
writes, and everything is slower at random writes). On top of that,
we've found that the slowest drives in a cluster can have a pretty
large impact on overall performance, since the faster OSDs end up
consistently waiting on the slow ones.
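
To put numbers on that, here's a quick back-of-the-envelope sketch in
Python. The only assumption is the 2x penalty from putting the journal
on the same disk as the data store; the speeds are the dd_rescue
figures you quoted, which are sequential reads and therefore
optimistic:

    # Effective write throughput when the journal shares the data
    # disk, so every client write hits the spindle twice.
    sustained_mb_s = {"osd1": 45, "osd2": 30, "osd3": 15, "osd4": 40}

    effective = {osd: mb / 2.0 for osd, mb in sustained_mb_s.items()}
    print(effective)  # {'osd1': 22.5, 'osd2': 15.0, 'osd3': 7.5, 'osd4': 20.0}

    # Replicated writes aren't acked until every replica has them, so
    # any PG that includes the slow drive is paced by it.
    print("worst case: %.1f MB/s" % min(effective.values()))

If you can scrounge up a second spindle per box (even a small one),
pointing "osd journal" at it in ceph.conf avoids the double write,
though with drives this slow it will only help so much.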

We have seen other issues with slow CPUs, but those generally only
show up during recovery, and we're actively improving them as we come
across them. :)
-Greg

