Re: Multicast communication compuverde


 




On 06/02/2019 11:14, Marc Roos wrote:
Yes indeed, but for OSDs writing the replication or erasure objects you
get a sort of parallel processing, no?



Multicast traffic from storage has a point in things like the old
Windows provisioning software Ghost, where you could netboot a room full
of computers, have them all listen to a multicast stream of the same
data/image and apply it at the same time, and perhaps re-sync any
missing pieces at the end, which is far less data overall than
having each client ask the server(s) for the same image over and over.
In the case of Ceph, I would say it is much less probable that many
clients ask for exactly the same data in the same order, so it would
just mean all clients hear all traffic (or at least more traffic than
they asked for) and need to skip past most of it.
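(As a minimal sketch of the Ghost-style setup described above, not anything from Ceph or Compuverde: each imaging client joins one shared multicast group and receives the same stream as every other host in the room. The group address 239.1.1.1 and port 5000 are arbitrary assumptions for the example.)

import socket
import struct

GROUP = "239.1.1.1"   # assumed administratively-scoped multicast group
PORT = 5000           # assumed port carrying the image stream

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IGMP join: tell the kernel (and any IGMP-snooping switches) that this
# host wants traffic sent to GROUP, on any local interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    chunk, sender = sock.recvfrom(65535)
    # apply chunk to the local image; any chunks missed would be
    # re-requested over unicast once the stream ends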


On Tue, 5 Feb 2019 at 22:07, Marc Roos <M.Roos@xxxxxxxxxxxxxxxxx> wrote:




	I am still testing with Ceph mostly, so my apologies for bringing up
	something totally useless. But I just had a chat about Compuverde
	storage. They seem to implement multicast in a scale-out solution.

	I was wondering if there is any experience here with Compuverde and
	how it compares to Ceph. And maybe this multicast approach could be
	interesting to use with Ceph?
	
	
	
	



Multicast could be used for sending cluster maps or other configuration in a push model; I believe corosync uses it by default. For sending actual data during write ops, a primary OSD could send to its replicas. They would not have to process all traffic, but could listen on a specific group address associated with that PG, which could be an increment from a defined base multicast address. Some additional erasure codes and acknowledgment messages would need to be added to account for errors/dropped packets. I doubt it would give an appreciable boost given that most pools use 3 replicas in total, and there could be issues getting multicast working correctly, like setting up IGMP, so all in all it could be a hassle.
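(Purely as an illustration of the "increment from a base multicast address" idea above, not Ceph code: a hypothetical pg_multicast_group() that maps a pool/PG pair to a group address, so replica OSDs would only join the groups for the PGs they actually host. BASE_GROUP, the /16 wrap, and the parameter names are all assumptions of this sketch.)

import ipaddress

BASE_GROUP = ipaddress.IPv4Address("239.192.0.0")  # assumed base group address

def pg_multicast_group(pool_id: int, pg_num: int, pg_seed: int) -> ipaddress.IPv4Address:
    """Map (pool, PG index) to a multicast group; pg_num is the pool's PG count."""
    # Offset keeps pools apart; wrap within a /16 so all groups stay inside
    # the administratively-scoped 239.192.0.0/14 range.
    offset = (pool_id * pg_num + pg_seed) % (1 << 16)
    return ipaddress.IPv4Address(int(BASE_GROUP) + offset)

# Example: pool 3 with 256 PGs, PG index 42
print(pg_multicast_group(3, 256, 42))   # 239.192.3.42

Each replica OSD would join only the groups for its own PGs, and the acknowledgment/retransmission layer mentioned above would still be needed on top, since UDP multicast gives no delivery guarantees.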

/Maged

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




