Re: anyone ever done multicast AF_UNIX sockets?

jamal wrote:
> Did you also measure throughput?
No.  lmbench doesn't appear to test UDP socket local throughput.

> You are overlooking the flexibility that already exists in IP based
> transports as an advantage; the fact that you can make them
> distributed instead of localized with a simple addressing change
> is a very powerful abstraction.
True. On the other hand, the same could be said about unicast IP sockets vs unix sockets. Unix sockets exist for a reason, and I'm simply proposing to extend them.

>> From userspace, multicast unix would be *simple* to use, as in
>> totally transparent.

> You could implement the abstraction in user space as a library today by
> having some server that muxes to several registered clients.
This is what we have now, though with a suboptimal solution (we inherited it from another group). The disadvantage with this is that it adds a send/schedule/receive iteration. If you have a small number of listeners this can have a large effect percentage-wise on your messaging cost. The kernel approach also cuts the number of syscalls required by a factor of two compared to the server-based approach.
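
To be concrete, the server-based scheme looks roughly like this (just a sketch with made-up names and no error handling, not our actual code): the server accepts registrations on a well-known datagram socket and re-sends every message to each registered client, which is where the extra send/schedule/receive hop comes from.

/* Sketch of a userspace mux server (hypothetical path, simplified).
 * Clients must bind their own address before sending "JOIN" so that
 * the server has somewhere to send packets back to. */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>
#include <unistd.h>

#define MAX_CLIENTS 16

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    struct sockaddr_un clients[MAX_CLIENTS];
    socklen_t client_len[MAX_CLIENTS];
    int nclients = 0;
    char buf[8192];

    strcpy(addr.sun_path, "/tmp/mux-server");   /* hypothetical path */
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);
    unlink(addr.sun_path);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    for (;;) {
        struct sockaddr_un from;
        socklen_t fromlen = sizeof(from);
        ssize_t n = recvfrom(fd, buf, sizeof(buf), 0,
                             (struct sockaddr *)&from, &fromlen);
        if (n <= 0)
            continue;
        if (n == 4 && memcmp(buf, "JOIN", 4) == 0) {
            /* registration message: remember the sender's address */
            if (nclients < MAX_CLIENTS) {
                clients[nclients] = from;
                client_len[nclients] = fromlen;
                nclients++;
            }
            continue;
        }
        /* fan the packet out: one extra copy and wakeup per listener */
        for (int i = 0; i < nclients; i++)
            sendto(fd, buf, n, 0,
                   (struct sockaddr *)&clients[i], client_len[i]);
    }
}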

> So whats the addressing scheme for multicast unix? Would it be a
> reserved path?
Actually I was thinking it could be arbitrary, with a flag in the unix part of struct sock saying that it is actually a multicast address. The API would be something like the IP multicast one, where you create and bind a normal socket and then use setsockopt to attach yourself to one or more multicast addresses. A given address could be multicast or not, but multicast and regular addresses would share the same namespace and would collide just as addresses do now. The only way to create a multicast address would be the setsockopt call--if the address doesn't already exist, the kernel would create a socket and bind it to the desired address.
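
To make that concrete, joining a group from userspace might look something like the following sketch. The SOL_UNIX / UNIX_ADD_MEMBERSHIP names are made up here for illustration (by analogy with IP_ADD_MEMBERSHIP); the real names and values would be whatever the patch ends up defining.

/* Sketch of what joining a multicast unix address might look like.
 * SOL_UNIX and UNIX_ADD_MEMBERSHIP are hypothetical placeholders. */
#include <sys/socket.h>
#include <sys/un.h>
#include <string.h>

#define SOL_UNIX            255    /* hypothetical level */
#define UNIX_ADD_MEMBERSHIP 1      /* hypothetical option */

int join_group(const char *self_path, const char *group_path)
{
    int fd = socket(AF_UNIX, SOCK_DGRAM, 0);

    /* bind a normal unix socket as usual */
    struct sockaddr_un self = { .sun_family = AF_UNIX };
    strcpy(self.sun_path, self_path);
    bind(fd, (struct sockaddr *)&self, sizeof(self));

    /* then attach to the multicast address; if it doesn't exist yet
     * the kernel would create a socket and bind it to that name */
    struct sockaddr_un group = { .sun_family = AF_UNIX };
    strcpy(group.sun_path, group_path);
    setsockopt(fd, SOL_UNIX, UNIX_ADD_MEMBERSHIP, &group, sizeof(group));

    return fd;    /* recvfrom() now also sees packets sent to group_path */
}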

To see if it's feasible I've actually coded up a proof-of-concept that seems to do fairly well. I tested it with a process sending an 8-byte packet containing a timestamp to three listeners, each of which checked the time on receipt and printed out the difference.
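
The measurement itself is nothing fancy; roughly this (simplified, with socket setup and error handling left out):

/* Sender stamps an 8-byte packet (struct timeval on 32-bit x86);
 * each listener computes and prints the one-way delay. */
#include <stdio.h>
#include <sys/socket.h>
#include <sys/time.h>

void send_stamp(int fd, struct sockaddr *dest, socklen_t destlen)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    sendto(fd, &tv, sizeof(tv), 0, dest, destlen);
}

void print_delay(int fd)
{
    struct timeval sent, now;
    recvfrom(fd, &sent, sizeof(sent), 0, NULL, NULL);
    gettimeofday(&now, NULL);
    long usec = (now.tv_sec - sent.tv_sec) * 1000000L
              + (now.tv_usec - sent.tv_usec);
    printf("delay: %ld usec\n", usec);
}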

For comparison I have two different userspace implementations, one with a server process (very simple for test purposes) and the other using an mmap'd file to store which process is listening to what messages.
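
The no-server variant works roughly like this (again just a sketch; the table layout and the process-shared mutex are illustrative assumptions, and the mapping/mutex setup is omitted): listeners register their bound address in a file that everyone mmaps, and senders walk that table and sendto() each listener directly while holding a lock on the table.

/* Sketch of the mmap'd registration table used by the no-server variant. */
#include <pthread.h>
#include <sys/socket.h>
#include <sys/un.h>

#define MAX_LISTENERS 16

struct reg_table {
    pthread_mutex_t    lock;               /* PTHREAD_PROCESS_SHARED */
    int                count;
    struct sockaddr_un addr[MAX_LISTENERS];
};

/* listener: add our already-bound address to the table */
void mcast_join(struct reg_table *tbl, const struct sockaddr_un *self)
{
    pthread_mutex_lock(&tbl->lock);
    if (tbl->count < MAX_LISTENERS)
        tbl->addr[tbl->count++] = *self;
    pthread_mutex_unlock(&tbl->lock);
}

/* sender: fan the packet out to every registered listener.  If this
 * process gets scheduled out between lock and unlock, everyone else
 * blocks -- the problem mentioned below. */
void mcast_send(struct reg_table *tbl, int fd, const void *buf, size_t len)
{
    pthread_mutex_lock(&tbl->lock);
    for (int i = 0; i < tbl->count; i++)
        sendto(fd, buf, len, 0,
               (struct sockaddr *)&tbl->addr[i], sizeof(tbl->addr[i]));
    pthread_mutex_unlock(&tbl->lock);
}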

The timings (in usec) for the delays to each of the listeners were as follows on my duron 750:

userspace server:     104 133 153
userspace no server:   72 111 138
kernelspace:           60  91 113

As you can see, the kernelspace code is the fastest, and since it's in the kernel it can be written to avoid being scheduled out while holding locks, something which is hard to avoid with the no-server userspace option.

If this sounds at all interesting I would be glad to post a patch so you could shoot holes in it, otherwise I'll continue working on it privately.

Chris

--
Chris Friesen             | MailStop: 043/33/F10
Nortel Networks           | work:  (613) 765-0557
3500 Carling Avenue       | fax:   (613) 765-2986
Nepean, ON K2H 8E9 Canada | email: cfriesen@nortelnetworks.com

-
To unsubscribe from this list: send the line "unsubscribe linux-net" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
