The IESG has approved the following document:

- 'Mobile IPv6 Fast Handovers'
  <draft-ietf-mipshop-fmipv6-rfc4068bis-07.txt> as a Proposed Standard

This document is the product of the Mobility for IP: Performance, Signaling and Handoff Optimization Working Group.

The IESG contact persons are Jari Arkko and Mark Townsley.

A URL of this Internet-Draft is:
http://www.ietf.org/internet-drafts/draft-ietf-mipshop-fmipv6-rfc4068bis-07.txt

Technical Summary

Mobile IPv6 enables a Mobile Node to maintain its connectivity to the Internet when moving from one Access Router to another, a process referred to as handover. During this time, the Mobile Node is unable to send or receive packets because of both the link switching delay and IP protocol operations. The "handover latency" resulting from standard Mobile IPv6 procedures, namely movement detection, new Care of Address configuration, and Binding Update, is often unacceptable to real-time traffic such as Voice over IP. Reducing the handover latency could be beneficial to non-real-time, throughput-sensitive applications as well. This document specifies a protocol to reduce the handover latency due to Mobile IPv6 procedures.

Working Group Summary

The document has gone through WGLC in the MIPSHOP WG.

Document Quality

This is a revision of an existing Experimental specification, RFC 4068. A few implementations of the proposed protocol are already available, and there has been one interop event at which two implementations were tested. This specification has been reviewed by Jari Arkko for the IESG.

Note to RFC Editor

Please insert "Obsoletes: 4068" into the header.

Change this text in Section 5.4:

OLD:

Whereas buffering can enable a smooth handover, the buffer size and the rate at which buffered packets are eventually forwarded are important considerations when providing buffering support. For instance, an application such as Voice over IP typically needs smaller buffers compared to high-resolution streamig video, which has larger packet sizes and higher arrival rates. This specification does not restrict implementations to providing buffering support for any specific application. However, the implementations should recognize that the buffer size requirements are dependent on the application characteristics (including the arrival rate, arrival process, perceived performance loss in the event buffering is not offered, and so on), and arrive at their own policy decisions. Particular attention must be paid to the rate at which buffered packets are forwarded to the MN once attachment is complete. Just as in any network event where a router buffers packets, forwarding buffered packets in a handover at a rate inconsistent with the policy governing the outbound interface can cause performance degradation to the existing sessions and connections. Implementations must take care to prevent such occurances, just as routers do with buffered packets on the Internet.

NEW:

Whereas buffering can enable a smooth handover, the buffer size and the rate at which buffered packets are eventually forwarded are important considerations when providing buffering support. There are a number of aspects to consider:

o Some applications transmit less data over a given period of time than others, and this implies different buffering requirements. For instance, Voice over IP typically needs smaller buffers compared to high-resolution streaming video, as the latter has larger packet sizes and higher arrival rates.
o When the mobile node re-appears on the new link, having the buffering router send a large number of packets in quick succession may overtax the resources of the router, the mobile node itself, or the path between the two. In particular, if a large number of packets are buffered, sending them out one after another may cause some of them to be dropped by routers on the path. Or they may stand in a queue, blocking new packets from reaching the mobile node. This would be problematic for real-time communications.

o The routers are not parties to the end-to-end communication, so they have no knowledge of transport layer conditions.

o The wireless connectivity of the mobile node may vary over time. It may achieve a lower or higher bandwidth on the new link, its signal strength may be weak when it has just entered the coverage area of the access point, and so on.

As a result, it is hard to design an algorithm that sends packets out of the buffer properly spaced under all circumstances. Note that draining the buffer too fast can exhaust resources and harm both the mobile node itself and other nodes using the same path. The purpose of fast handovers is to avoid packet loss; yet draining the buffer too fast can itself cause loss of the buffered packets, as well as blocking or losing other packets that are also trying to reach the mobile node.

This specification does not restrict implementations from providing specialized buffering support rules for specific situations. However, attention must be paid to the rate at which buffered packets are forwarded to the MN once attachment is complete. Routers implementing this specification MUST implement at least the default algorithm, which is based on the original arrival rates of the buffered packets. A maximum of 5 packets MAY be sent one after another, but all subsequent packets SHOULD use a sending rate that is determined by metering the rate at which packets have entered the buffer, potentially using smoothing techniques such as recent activity over a sliding time window and weighted averages [RFC3290]. It should be noted, however, that this default algorithm is crude and may not be suitable for all situations. Future revisions of this specification may provide additional algorithms, once enough experience of the various conditions in deployed networks is attained.

Also, add a new informative reference to RFC 3290.
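As an illustrative aside, a minimal sketch of the default drain-rate algorithm described in the NEW text above might look as follows, assuming an exponentially weighted moving average as the smoothing technique. The structure and function names (drain_state, on_packet_buffered, next_send_delay) and the constants (the burst size aside, which is the 5 packets named above, and the EWMA weight) are hypothetical, not taken from the specification.

    /*
     * Illustrative sketch, not normative: up to 5 packets may be sent
     * back to back, after which packets are paced at the metered
     * (smoothed) arrival rate.  Names and constants are hypothetical.
     */
    #include <stdio.h>

    #define INITIAL_BURST 5     /* packets that MAY be sent back to back       */
    #define EWMA_WEIGHT   0.2   /* assumed smoothing factor for the rate meter */

    struct drain_state {
        double avg_gap;        /* smoothed inter-arrival time, in seconds */
        double last_arrival;   /* timestamp of the previous arrival       */
        int    have_arrival;   /* non-zero once one arrival has been seen */
        int    sent_in_burst;  /* packets already sent without pacing     */
    };

    /* Meter the arrival rate: called for each packet entering the buffer. */
    static void on_packet_buffered(struct drain_state *s, double now)
    {
        if (s->have_arrival) {
            double gap = now - s->last_arrival;
            /* Exponentially weighted moving average over recent arrivals. */
            s->avg_gap = (s->avg_gap == 0.0)
                             ? gap
                             : EWMA_WEIGHT * gap
                               + (1.0 - EWMA_WEIGHT) * s->avg_gap;
        }
        s->have_arrival = 1;
        s->last_arrival = now;
    }

    /* Delay, in seconds, before the next buffered packet is forwarded
     * to the MN once attachment is complete. */
    static double next_send_delay(struct drain_state *s)
    {
        if (s->sent_in_burst < INITIAL_BURST) {
            s->sent_in_burst++;
            return 0.0;              /* part of the initial burst */
        }
        return s->avg_gap;           /* pace at the metered arrival rate */
    }

    int main(void)
    {
        struct drain_state s = { 0.0, 0.0, 0, 0 };
        /* Example: VoIP-like arrivals, one packet every 20 ms. */
        double arrivals[] = { 0.000, 0.020, 0.040, 0.060, 0.080,
                              0.100, 0.120, 0.140, 0.160, 0.180 };
        int i, n = (int)(sizeof(arrivals) / sizeof(arrivals[0]));

        for (i = 0; i < n; i++)
            on_packet_buffered(&s, arrivals[i]);

        for (i = 0; i < n; i++)
            printf("packet %2d: wait %.3f s before sending\n",
                   i + 1, next_send_delay(&s));
        return 0;
    }

Under these assumptions, the first five buffered packets would be released immediately and the remainder at roughly the 20 ms spacing observed on arrival; the EWMA is just one of the smoothing techniques the text alludes to via [RFC3290].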