Re: TCP

Dear Rich;

On Dec 18, 2007, at 9:18 AM, Richard Carlson wrote:

Marshall;

As always, it really depends on what the scientist needs to accomplish.

The VLBI community is moving raw data from multiple radio telescopes to a single correlator machine to generate a new image. All the data is time stamped, and those multiple streams must be time-synced for the correlator to function. As with most real-time data streams, timeouts and retransmissions are bad because the timing gets hosed, so it's better to drop some packets than to retransmit them. In addition, the application itself has multiple data channels, so it is in the best position to know what to drop if congestion becomes a problem. This means that plain old UDP is the best protocol for this community.
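A minimal sketch of that fire-and-forget pattern in Python; the header layout, channel-shedding rule, host name, and port are illustrative assumptions, not the actual VLBI software:

    import socket
    import struct
    import time

    # Hypothetical wire format: 64-bit capture timestamp (ns), 16-bit
    # channel id, 32-bit per-channel sequence number, then raw samples.
    # Real VLBI formats (e.g. VDIF) differ; this only shows the pattern.
    HEADER = struct.Struct("!QHI")
    CORRELATOR = ("correlator.example.org", 50000)   # placeholder address

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_block(channel_id: int, seq: int, samples: bytes) -> None:
        # Fire and forget: no ACK, no retransmission. A lost datagram is
        # simply a gap; a late packet would be useless once its time slot
        # has been correlated, so nothing ever waits for one.
        datagram = HEADER.pack(time.time_ns(), channel_id, seq) + samples
        sock.sendto(datagram, CORRELATOR)

    def channels_to_send(channels: list[int], congested: bool) -> list[int]:
        # The application, not the transport, decides what to drop under
        # congestion; here we arbitrarily shed the back half of the list.
        return channels[: len(channels) // 2] if congested else channels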


I agree that UDP is the best for VLBI and have been saying so for some time. The most fundamental thing to consider is that the VLBI correlation coefficients are typically very low (that is indeed why such high data rates are needed) and the antennae need to be widely separated. Because the correlation coefficients are low, the value of any individual data packet is also low; it is literally better to use any excess bandwidth to send a new data packet than to retransmit an old one. So retransmissions are not helpful in this case.
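The receiving side of that argument, sketched with the same hypothetical header as above: anything arriving after its time slot has closed is discarded rather than re-requested, since the bandwidth is better spent on fresh samples. The 50 ms lateness bound and the delivery stub are made up for illustration:

    import socket
    import struct
    import time

    HEADER = struct.Struct("!QHI")    # same hypothetical header as above
    WINDOW_NS = 50_000_000            # accept packets up to 50 ms late (made-up bound)

    def deliver(channel: int, seq: int, samples: bytes) -> None:
        """Hand one block to the correlator input queue (stub)."""

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 50000))

    while True:
        datagram, _addr = sock.recvfrom(65535)
        ts_ns, channel, seq = HEADER.unpack_from(datagram)
        # VLBI stations run GPS/maser-disciplined clocks, so comparing a
        # remote capture timestamp against local time is meaningful here.
        if time.time_ns() - ts_ns > WINDOW_NS:
            continue    # too late to correlate: drop it, never retransmit
        deliver(channel, seq, datagram[HEADER.size:])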

In the new 2010 VLBI program, the locations of antennae will be limited primarily by the need for fiber optic access. (Why, oh why, doesn't French Polynesia have undersea fiber access?) It will not be possible to provide anything like a dedicated data network for this effort; sites will be at remote locations with hosts without a lot of resources, and the need for a UDP scavenger service is strong.

People interested in more details here can look at the proceedings of the 6th International e-VLBI Workshop :

http://www.mpifr-bonn.mpg.de/div/vlbi/6th_evlbi/

(Disclosure - I used to be chief scientist for the US Naval Observatory VLBI observing program.)

The LHC community is trying to move large amounts of stored data between compute sites. The data flowing out of the detector(s) runs through several triggers that filter out the uninteresting stuff, so only a fraction of events get recorded to local disks. This raw data then gets moved from CERN in Switzerland to one of a dozen sites scattered around the globe. These remote sites provide long-term storage of the raw data, provide some compute resources to post-process it, and support a second tier of distributed compute sites for further post-processing. This is really a classic bulk transport task, with tons of data being moved around the globe, and it must be done reliably (both the raw-data backup and the post-processed data). TCP is the protocol of choice for this community.
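A toy version of that bulk-transport pattern in Python; the real tier-0 to tier-1 movement uses dedicated tools (GridFTP and the like), so the destination, port, stream count, and the omitted reassembly metadata here are all assumptions. The point is simply that TCP's retransmission is exactly what you want when every byte of raw event data must arrive intact:

    import os
    import socket
    import threading

    DEST = ("tier1.example.org", 2811)   # placeholder destination
    STREAMS = 4                          # parallel TCP streams, GridFTP-style
    CHUNK = 1 << 20                      # 1 MiB reads

    def send_range(path: str, offset: int, length: int) -> None:
        # One reliable stream per byte range; the kernel retransmits as
        # needed. (A real tool would also send offset/length framing so
        # the far end can reassemble; that metadata is omitted here.)
        with socket.create_connection(DEST) as sock, open(path, "rb") as f:
            f.seek(offset)
            remaining = length
            while remaining > 0:
                data = f.read(min(CHUNK, remaining))
                if not data:
                    break
                sock.sendall(data)
                remaining -= len(data)

    def bulk_send(path: str) -> None:
        # Split the file across STREAMS parallel connections to help fill
        # a long fat pipe despite per-stream congestion-window limits.
        size = os.path.getsize(path)
        per = (size + STREAMS - 1) // STREAMS
        threads = [threading.Thread(target=send_range,
                                    args=(path, i * per, min(per, size - i * per)))
                   for i in range(STREAMS) if i * per < size]
        for t in threads:
            t.start()
        for t in threads:
            t.join()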


I know some particle physicists who feel that UDP would serve here too, for similar reasons, but the community has gone in a different direction. It is true that particle physics computation tends to occur mostly in places with good access to network resources, which I think may have influenced their thinking.

Regards
Marshall

The LHC community has used DSCP scavenger service in the past; they are moving to a new model where they lease their own infrastructure so they can meet the science demands.

Rich
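For reference, marking a flow for the DSCP scavenger class Rich mentions is a one-line socket option: set DSCP CS1 (codepoint 8, the conventional less-than-best-effort class) in the IP TOS byte, and any router configured with a scavenger queue will shed that traffic first under load. A minimal Linux/Python sketch; the destination is a placeholder:

    import socket

    DSCP_CS1 = 8              # conventional scavenger / lower-effort codepoint
    TOS = DSCP_CS1 << 2       # DSCP occupies the top six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS)
    # Every packet this socket sends now carries DSCP CS1, so it competes
    # only for bandwidth that best-effort traffic leaves unused.
    sock.connect(("tier1.example.org", 2811))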

At 07:29 PM 12/17/2007, Marshall Eubanks wrote:

On Dec 17, 2007, at 3:05 PM, Matthew J Zekauskas wrote:

On 12/17/2007 2:30 PM, Fred Baker wrote:

It is probably worth looking into the so-called "LAN Speed Records" and talking with those who have achieved them. An example of a news report

Internet2 has sponsored some, and as part of the award the contestant is required to say exactly how they did it so the experiment can be reproduced... the history list with pointers to contestant sites is here:

<http://www.internet2.edu/lsr/history.html>


Operationally, the guys who worry about this sort of thing the most are probably the astronomers, who routinely move sensor data from radio-telescopes across the research backbones for data reduction. In their cases, the sensors routinely generate in excess of 1

Actually, I believe the physicists worry more (or at least as much); there's lots of data to be moved around as part of the Large Hadron Collider that is starting up at CERN.


Note that for VLBI for sure, and particle physics IMO, fairly high packet loss rates could easily be accommodated with no need for retransmission, and so there is no reason to use TCP for these applications.

This situation cries out for some sort of "worse than best effort" scavenger service. If anyone else feels the same way, we should try to arrange a Bar BOF in Philadelphia.

Regards
Marshall


--Matt




------------------------------------



Richard A. Carlson                              e-mail: RCarlson@xxxxxxxxxxxxx
Network Engineer                                phone:  (734) 352-7043
Internet2                                       fax:    (734) 913-4255
1000 Oakbrook Dr; Suite 300
Ann Arbor,  MI  48104


_______________________________________________

Ietf@xxxxxxxx
https://www1.ietf.org/mailman/listinfo/ietf
