Re: Few questions.

On Mon, 20 Oct 2014 11:07:43 +0200 Leszek Master wrote:

> 1) If I want to use a cache tier, should I use it with SSD journaling,
> or can I get better performance using more SSD GB for the cache tier?
> 
From reading what others on this ML have experienced and what Robert
already pointed out, cache tiering is definitely too unpolished at this
point in time and not particularly helpful. Given the right changes and
more tuning abilities I'd expect it to be useful in the future (1-2
releases out, maybe?), though.

> 2) I've got a cluster made of 26x900GB SAS disks with SSD journaling,
> with 1024 placement groups. When I add a new OSD to the cluster, my VMs
> get I/O errors and get stuck, even with osd_max_backfills set to 1. If
> I change the PG count from 1024 to 4096, would the cluster be less
> affected by backfilling and recovery?
> 
You're not telling us nearly enough about your cluster, starting with
the Ceph and OS/kernel versions.
What are your storage nodes like (all the specs: CPU, memory, network,
type of SSDs, journal-to-OSD ratio, etc.)?
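If it helps, here's the kind of thing that's useful to collect and paste,
as a minimal Python sketch (it just shells out to the ceph CLI; "rbd" is
only an example pool name, substitute your own):

    import subprocess

    # Commands whose output is useful when reporting problems like this.
    # "rbd" is only an example pool name here -- use your own.
    cmds = [
        ["ceph", "--version"],                          # Ceph release
        ["uname", "-r"],                                # kernel version
        ["ceph", "-s"],                                 # overall cluster status
        ["ceph", "osd", "tree"],                        # OSD/host layout
        ["ceph", "osd", "pool", "get", "rbd", "size"],  # replica count
    ]

    for cmd in cmds:
        print("$ " + " ".join(cmd))
        print(subprocess.check_output(cmd).decode())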

If your replica size is 2 (risky!), then your PG and PGP count should be
2048. With a replica size of 3 your current number is fine as far as the
formula goes, but data distribution might still be better at 2048 as well.
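For reference, the usual rule of thumb is (number of OSDs * 100) /
replica size, rounded up to the next power of two. A quick sketch of that
calculation (a guideline only, not gospel):

    # Rule-of-thumb PG count: (num_osds * 100) / replica_size,
    # rounded up to the next power of two.
    def suggested_pg_count(num_osds, replica_size, pgs_per_osd=100):
        raw = num_osds * pgs_per_osd / float(replica_size)
        power = 1
        while power < raw:
            power *= 2
        return power

    print(suggested_pg_count(26, 2))  # 26*100/2 = 1300 -> 2048
    print(suggested_pg_count(26, 3))  # 26*100/3 =  867 -> 1024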

But changing those values from what you already have should have little
effect on the impact of data migration: in the end the same amount of
data needs to be moved when an OSD is added or lost, and your current PG
count isn't horribly wrong.
If your cluster is running close to capacity (monitor with atop!) during
normal usage, and all the tunables are already set to lowest impact, your
only way forward is to address its shortcomings, whatever they are (CPU,
IOPS, etc.).
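For completeness, the tunables I have in mind are osd_max_backfills,
osd_recovery_max_active and osd_recovery_op_priority. A sketch of
injecting the usual "lowest impact" values at runtime (you already have
osd_max_backfills at 1, so this is mostly the other two; adjust to taste):

    import subprocess

    # Inject low-impact recovery/backfill settings into all running OSDs.
    # Note that injectargs changes do not survive an OSD restart; put the
    # same values into the [osd] section of ceph.conf to make them stick.
    low_impact = ("--osd-max-backfills 1 "
                  "--osd-recovery-max-active 1 "
                  "--osd-recovery-op-priority 1")
    subprocess.check_call(["ceph", "tell", "osd.*", "injectargs", low_impact])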

Too high (way too high, usually) PG counts will cost you in performance
due to CPU resource exhaustion caused by Ceph internal locking/protocol
overhead.
Too few PGs, on the other hand, will not only cause uneven data
distribution but ALSO cost you in performance, as that uneven
distribution is prone to creating hotspots.
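And if you do decide to raise the PG count, remember that it is a
two-step operation (pg_num first, then pgp_num) and that it causes data
movement of its own. Roughly, assuming an example pool called "rbd" and a
target of 2048:

    import subprocess

    pool, new_pgs = "rbd", 2048  # example pool name and target PG count

    # pg_num creates the new placement groups; pgp_num then makes CRUSH
    # actually use them for placement (this is what moves data around).
    subprocess.check_call(["ceph", "osd", "pool", "set", pool,
                           "pg_num", str(new_pgs)])
    subprocess.check_call(["ceph", "osd", "pool", "set", pool,
                           "pgp_num", str(new_pgs)])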

> 3) When I was adding my last 6 drives to the cluster I noticed that the
> recovery speed had gone from 500-1000MB/s to 10-50MB/s. When I restarted
> the OSDs that I was adding, the transfers got back to normal. I've also
> noticed that when I then run a rados benchmark, the transfers drop to
> 0MB/s, even a few times in a row. Restarting the OSDs that I was adding,
> or one by one the ones that were already in the cluster, solved the
> problem. What can it be? There isn't anything weird in the logs. The
> whole cluster gets stuck until I restart them or even recreate their
> journals. How do I solve this?
> 
That is very odd; maybe some of the Ceph developers have an idea or a
recollection of having seen this before.

In general, in a situation like this you will want to monitor all your
cluster nodes with something like atop to spot potential problems such as
slow disks, CPU or network starvation, etc.
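If atop isn't available on some nodes, even a trivial sampling loop gives
a rough idea of where the starvation is. A purely illustrative sketch
using the third-party psutil module (atop itself will show you far more):

    import time
    import psutil

    # Print coarse CPU, disk and network rates every 10 seconds so a
    # saturated resource stands out while recovery/backfill is running.
    prev_disk = psutil.disk_io_counters()
    prev_net = psutil.net_io_counters()
    while True:
        time.sleep(10)
        disk, net = psutil.disk_io_counters(), psutil.net_io_counters()
        print("cpu %5.1f%%  disk r/w %6d/%6d KB/s  net rx/tx %6d/%6d KB/s" % (
            psutil.cpu_percent(),
            (disk.read_bytes - prev_disk.read_bytes) / 10240,
            (disk.write_bytes - prev_disk.write_bytes) / 10240,
            (net.bytes_recv - prev_net.bytes_recv) / 10240,
            (net.bytes_sent - prev_net.bytes_sent) / 10240,
        ))
        prev_disk, prev_net = disk, net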

Christian

> Please help me.
> 
> Best regards !


-- 
Christian Balzer        Network/Systems Engineer                
chibi@xxxxxxx   	Global OnLine Japan/Fusion Communications
http://www.gol.com/



