What is MPLS?

MPLS stands for Multiprotocol Label Switching. "Multiprotocol" because it can be used with any Layer 3 network protocol, although almost all of the interest is in using MPLS with IP traffic. But that doesn’t actually give us any idea what MPLS does for you (we’ll get to that momentarily).

Depending on which vendor you ask, MPLS is the solution to any problem you might conceivably have. So the question “What is MPLS?” could have a lot of right answers. The presentations from this Spring’s MPLS Forum were all over the place on precisely this.

For me, MPLS is about gluing connectionless IP to connection-oriented networks. Six months ago, I’d have said “gluing IP to ATM”, but now there’s a big push on to use MPLS to mate IP to optical networks. The IETF draft documents refer to this as the “shim layer”, the idea that MPLS is something between Layer 2 and Layer 3 that makes them fit better (and perhaps carries a small amount of information to help that better fit).

MPLS started out as Cisco’s Tag Switching, although Ipsilon (remember them?) was the company that got the buzz started with its earlier IP Switching. Back then, there were perhaps two key insights. One was that there is no reason an ATM switch can’t have a router inside it (or a router have ATM switch functionality inside it). Another was that once you’ve got a router on top of your ATM switch, you can use dynamic IP routing to trigger virtual circuit (VC) or path setup. In other words, instead of using management software, or human configuration, or (gasp!) even ATM routing (PNNI) to drive circuit setup, dynamic IP routing might actually drive the creation of circuits. You might even have a variety of protocols for different purposes, each driving Label Switch Path establishment.

I’ve been thinking of this as avoiding hop-by-hop decision making: setting up a “Layer 2 fast path” using tags (think ATM or Frame Relay addressing) to move packets quickly along a pre-established path, without such “deep analysis”. The packet then needs to be examined closely exactly once, at entry to the MPLS network. After that, it is somewhere along the path, and forwarding is based on the simple tagging scheme, not on the more complex and variable IP headers. The U.S. postal system works much the same way: mail is forwarded to a regional center, handwriting recognition is done once, some sort of infrared or ultraviolet bar code is applied to the bottom edge of the envelope, and from there onwards the bar code alone is used to route the letter. When you start thinking about fast forwarding with Class of Service (CoS), the incoming interface, source address, port, and application information might all play a role in the forwarding decision. By rolling the results into one label per path the packet might take, subsequent devices do not need to repeat such complex decisions.

 

Configuring Basic MPLS

Turning on basic MPLS is pretty simple:
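For example, on a Cisco router it can be as little as the following (a sketch: the interface and addressing are illustrative, and older IOS releases use the equivalent tag-switching forms of these commands):

```
ip cef                          ! CEF switching is a prerequisite for MPLS
!
interface Serial0/0
 ip address 10.1.1.1 255.255.255.252
 mpls ip                        ! enable label switching on this interface
                                ! (older IOS: tag-switching ip)
```

Repeat the interface command on each interface that should participate in label switching, on each LSR in the path.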

MPLS Class of Service

 

Since the marketing and interest in MPLS is tied up with ATM coexisting with IP, the question of providing Quality of Service (QoS) always comes up. The focus in MPLS is more on differentiated Classes of Service than on ATM-like QoS, although with Traffic Engineering features, MPLS seems to come a long way towards ATM-style QoS.

Right now, the Cisco CoS features used for MPLS are CAR or CBWFQ, WRED, and WFQ.

We start by using CAR or CBWFQ (or a couple of other techniques, see the QoS articles) to classify or recognize traffic at the edge of the network. These techniques also let us mark the traffic, setting the 3 IP Precedence bits (or the 6 DSCP bits) in the IP Type of Service field. Recall that marking allows downstream (core) devices to provide appropriate service to the packet without having to delve as closely into headers to figure out what service the packet deserves.
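As a sketch of that edge classification and marking step, using the class-based (MQC) syntax (the class name and access list here are made up for illustration):

```
class-map match-all PREMIUM
 match access-group 101          ! traffic selected for better treatment
!
policy-map EDGE-MARKING
 class PREMIUM
  set ip precedence 5            ! mark so core devices need not re-classify
 class class-default
  set ip precedence 0
!
interface Ethernet0/0
 service-policy input EDGE-MARKING
```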

We also configure WRED or WFQ to provide differentiated service based on IP Precedence (or DSCP) in the downstream (core) routers. These queue management and scheduling techniques can be applied whether or not MPLS is in effect, if we’re operating MPLS over IP. The IP Precedence information determines the weights to be used (the ‘W’ in WRED or WFQ). Higher IP Precedence gets preferential treatment.

MPLS comes into the picture in two possible ways. One is by copying IP Precedence bits to the MPLS header (if desired). This MPLS header is used for MPLS over IP and has a field for such CoS information, the EXP field (3 bits). The second way MPLS can deal with CoS is by storing Precedence information as part of the Label Information Base (LIB). Each level of precedence is assigned a different Label Switch Path, so the label can be thought of as implicitly specifying the precedence. (If a Label Switch Router needs to know the precedence, it can look it up in the LIB.)
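The first approach might look something like the following sketch. By default Cisco IOS copies the IP Precedence bits into the EXP field at label imposition, but the value can also be set explicitly; the class name is illustrative and exact command syntax varies by IOS release:

```
class-map match-all PREMIUM
 match ip precedence 5
!
policy-map SET-EXP
 class PREMIUM
  set mpls experimental 5        ! write the CoS value into the MPLS
                                 ! shim header's EXP field
!
interface Serial0/0
 service-policy input SET-EXP
```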

So when a frame arrives at an LSR, the label is used to determine the outbound interface and new label, but the precedence or EXP field is then used to determine queuing treatment.

On ATM LSRs, the same thing happens. We’re dealing with a Label Virtual Circuit (LVC) for our Label Switch Path. The LIB determines outgoing interface, which happens to be an ATM interface. WFQ and WRED can then be applied on the outgoing ATM interface, along with WEPD (Weighted Early Packet Discard).

With a non-MPLS ATM core, the edge LSRs are interconnected by ATM PVCs through the core ATM switches. WFQ and WRED can be applied on a per-VC basis. The BPX 8650 also allows you to use different PVCs for different classes of service.

Configuring MPLS CoS

 

To use multiple VCs for MPLS CoS on an ATM interface, configure:
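Something like the following (a sketch using the older tag-switching command syntax; the subinterface shown is illustrative):

```
interface ATM1/0.1 mpls
 ip unnumbered Loopback0
 tag-switching atm multi-vc      ! one label VC per class of service
 tag-switching ip                ! enable label switching on the subinterface
```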

This creates four VCs for each MPLS destination. An alternative is to use fewer label VCs by configuring CoS mapping. See the documentation (basically, the above URL) for details and alternatives.

 

MPLS Traffic Engineering

 

The idea of MPLS Traffic Engineering is to use unidirectional tunnels to shift traffic off one path and onto another. The tunnels can be statically or automatically determined by the LSRs. Multiple tunnels can be used for load sharing when a traffic flow is too large for a single path.

Although the figure shows edge to edge tunnels, TE tunnels can be shorter. They can be used by a Service Provider to shift traffic off an overloaded trunk, until more capacity can be added.

The tunnel mechanism works because we can stack up the labels applied to IP packets. That is, an additional label is pushed temporarily onto the outside of the packet, ahead of the existing label, to shunt traffic into the tunnel. The tunnel LSP is followed until the end of the tunnel, where the outermost label is popped off. At that point the packet resumes following the original LSP to its destination.

A link state protocol (IS-IS or OSPF) is used with enhanced link state advertisements to track network capacity and to ensure that the tunnel does not create a routing loop. The actual signaling for dynamic tunnel establishment is based on RSVP, which acts to reserve bandwidth on a link.

The following example shows all of these factors at work. It sets up an explicit tunnel (where we statically specify the path) with a dynamic backup tunnel. This is a configuration snippet from the LSR at the entrance to the tunnel (top of the picture).
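A sketch of what that head-end configuration might look like (all addresses other than the 10.5.5.5 tunnel tail are illustrative, and exact syntax varies by IOS release):

```
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 10.5.5.5                        ! tunnel tail (BGP exit router)
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng bandwidth 1000             ! kbps to reserve via RSVP
 tunnel mpls traffic-eng path-option 1 explicit name PATH1
 tunnel mpls traffic-eng path-option 2 dynamic      ! fall back to a dynamic path
 tunnel mpls traffic-eng autoroute announce         ! route traffic for the tail
                                                    ! through the tunnel
!
ip explicit-path name PATH1 enable
 next-address 10.2.2.2                              ! hops of the static path
 next-address 10.3.3.3
```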

 

You also would have to enable tunnels on routers and interfaces the tunnel might traverse:
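Along these lines (a sketch; the interface and bandwidth figure are illustrative):

```
mpls traffic-eng tunnels          ! enable TE globally on the router
!
interface POS2/0
 mpls traffic-eng tunnels         ! enable TE on each interface a tunnel
                                  ! might traverse
 ip rsvp bandwidth 10000          ! bandwidth (kbps) RSVP may reserve here
```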

For the dynamic path establishment to work, we would also need to configure IS-IS for MPLS Traffic Engineering, and specify which traffic is to use the tunnel. The traffic to go through this tunnel is that exiting the BGP Autonomous System at router 10.5.5.5.
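The IS-IS portion might look like this sketch (the NET and level are illustrative):

```
router isis
 net 49.0001.0000.0000.0001.00
 metric-style wide                    ! wide metrics carry the TE information
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng level-2             ! flood TE information at this level
```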

The metric-style wide command allows IS-IS to carry the additional routing metric information needed for Traffic Engineering. There is a routing protocol migration issue here, and you should read all the relevant documentation before attempting this in a production network! See:

MPLS Traffic Engineering