Re: Cache tier experiences (for ample sized caches ^o^)

Hello Udo,

On Wed, 07 Oct 2015 11:40:11 +0200 Udo Lembke wrote:

> Hi Christian,
> 
> On 07.10.2015 09:04, Christian Balzer wrote:
> > 
> > ...
> > 
> > My main suspect for the excessive slowness are actually the Toshiba DT
> > type drives used. 
> > We only found out after deployment that these can go into a zombie mode
> > (20% of their usual performance for ~8 hours if not permanently until
> > power cycled) after a week of uptime.
> > Again, the HW cache is likely masking this for the steady state, but
> > asking a sick DT drive to seek (for reads) is just asking for trouble.
> > 
> > ...
> does this mean you can reboot your OSD nodes one after the other, and
> your cluster should then be fast enough for approx. one week to bring
> the additional node in?
> 
It needs an actual shutdown (power cycle); a reboot won't "fix" that state,
since power to the backplane stays on.
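For reference, a rolling power-cycle across OSD nodes would look roughly
like the sketch below (not what we actually ran; the hostnames osd1..osd3
are placeholders, and the script only prints the commands unless DRY_RUN
is unset). Setting the `noout` flag first keeps Ceph from rebalancing
while each node is powered off:

```shell
#!/bin/sh
# Sketch of a rolling power-cycle across OSD nodes; hostnames are placeholders.
# With DRY_RUN=1 (the default) the commands are only printed, not executed.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

rolling_power_cycle() {
    run ceph osd set noout        # stop Ceph from rebalancing while nodes are down
    for node in osd1 osd2 osd3; do
        run ssh "$node" poweroff  # full power-off: a mere reboot keeps the backplane powered
        # ...power the node back on via IPMI/iLO, wait for its OSDs to rejoin...
    done
    run ceph osd unset noout      # allow normal recovery again
}

rolling_power_cycle
```

After each node comes back you would of course wait for HEALTH_OK before
moving on to the next one.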

And even if the drives were at full speed, at this point in time
(2x over planned capacity) I'm not sure that would be enough.

Six months and 140 VMs earlier I might just have tried that; now I'm looking
for something that is going to work 100%, no ifs and whens.

Christian
-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
