Re: Performance optimization

Hello


> >

> > >> - The one 6TB disk, per node?
> > >
> > > You get bad distribution of data. Why not move drives around between
> > > these two clusters, so you have more of the same in each?
> > >
> >
> > I would assume that this behaves exactly the other way around. As long
> > as you have the same number of block devices with the same size
> > distribution in each node you will get an even data distribution.
> >
> > If you have a node with 4x3TB drives and one with 4x6TB drives, Ceph
> > cannot use the 6TB drives efficiently.
> >
> He has 2 clusters, thus e.g. 3TB -> cluster 1, 6TB -> cluster 2.



Sorry for the bad information.

I have two clusters, but my question was about just one of them.


Yes, Robert is right: instead of this configuration:

| node1 | node2 | node3 | node4 | node5 | node6 | node7 | node8 |
| 1x1TB | 1x1TB | 1x1TB | 1x1TB | 1x1TB | 1x1TB | 1x1TB | 1x1TB |
| 4x2TB | 4x2TB | 4x2TB | 4x2TB | 4x2TB | 4x2TB | 4x2TB | 4x2TB |
| 1x6TB | 1x6TB | 1x6TB | 1x6TB | 1x6TB | 1x6TB | 1x6TB | 1x6TB |

it would be better to have this:
| node1 | node2 | node3 | node4 | node5 | node6 | node7 | node8 |
| 6x3TB | 6x3TB | 6x3TB | 6x3TB | 6x3TB | 6x3TB | 6x3TB | 6x3TB |


Would this even make a noticeable performance difference? If I'm not mistaken, Ceph will try to fill every disk on a node to the same percentage.
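
A rough back-of-the-envelope sketch of that point (plain Python, not Ceph code; the 10 TB of data per node is just an arbitrary example): with default CRUSH weights, data is placed proportionally to disk size, so every disk ends up at about the same fill percentage. The difference is that in the mixed layout the 6TB disk then holds six times the data of the 1TB disk and sees roughly six times the I/O, while the flat 6x3TB layout spreads the load evenly across the spindles.

# Illustrative only: assumes placement proportional to OSD weight
# (the default CRUSH weight is the disk capacity) and enough PGs.

def per_disk_share(disks_tb, data_tb):
    """Expected data per disk if placement is proportional to capacity."""
    total = sum(disks_tb)
    return [data_tb * size / total for size in disks_tb]

mixed = [1] + [2] * 4 + [6]   # 1x1TB + 4x2TB + 1x6TB per node
flat  = [3] * 6               # 6x3TB per node

for name, disks in (("mixed", mixed), ("flat", flat)):
    print(name)
    for size, share in zip(disks, per_disk_share(disks, data_tb=10.0)):
        print(f"  {size} TB disk -> {share:.2f} TB ({share / size:.0%} full)")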


And about erasure coding: what would be the recommended profile?
Since replicated pools use so much more raw storage, they weren't really an option until now.
We haven't had any problems with CPU utilization, and I can go to 32 GB of RAM on every node and 64 GB on the MDS nodes.
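
On the storage overhead point, a small worked comparison (again just a sketch; k=4/m=2 is only an example profile that would fit 8 hosts with a host failure domain, not a recommendation):

# Raw capacity needed per usable TB: replication vs. erasure coding.
# An EC profile with k data and m coding chunks uses (k+m)/k raw space and
# survives the loss of m failure domains; with host as the failure domain it
# needs at least k+m hosts, so k=4, m=2 fits an 8-node cluster.

def raw_needed(usable_tb, replicas=None, k=None, m=None):
    """Raw TB required to store usable_tb of data."""
    if replicas is not None:
        return usable_tb * replicas
    return usable_tb * (k + m) / k

print("replica size=3:", raw_needed(100, replicas=3), "TB raw per 100 TB usable")
print("EC k=4, m=2:   ", raw_needed(100, k=4, m=2), "TB raw per 100 TB usable")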


Thanks



________________________________
From: Marc <Marc@xxxxxxxxxxxxxxxxx>
Sent: Monday, 6 September 2021 13:53:06
To: Robert Sander; ceph-users@xxxxxxx
Subject: Re: Performance optimization

>
> >> - The one 6TB disk, per node?
> >
> > You get bad distribution of data. Why not move drives around between
> > these two clusters, so you have more of the same in each?
> >
>
> I would assume that this behaves exactly the other way around. As long
> as you have the same number of block devices with the same size
> distribution in each node you will get an even data distribution.
>
> If you have a node with 4x3TB drives and one with 4x6TB drives, Ceph
> cannot use the 6TB drives efficiently.
>
He has 2 clusters, thus e.g. 3TB -> cluster 1, 6TB -> cluster 2.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


