Distribution Policy Configuration
Distributing endpoints equitably among the cluster members is the primary function of the SLB. The lb-policy configuration element allows you to control how the SLB distributes traffic based on matching criteria. Using inbound packet matching criteria, you control the assignment of users to OCSBCs. Matching uses data available up to and including the transport layer of the packet: source IP address and port, destination IP address and port, and transport protocol. The IP addresses and ports may also include bit masks.
Conceptually, the load balancer policy table, with sample data, looks akin to the following.
Source IP/Mask | Source Port/Mask | Destination IP/Mask | Destination Port/Mask | Transport Protocol Requirements (list) | Realm Identifiers (list)
---|---|---|---|---|---
192.168.7.22/32 | 0/0 | 10.0.0.1/32 | 5060/16 | | West
192.168.1.0/24 | 0/0 | 10.0.0.1/32 | 5060/16 | UDP, TCP | North, South, West
192.168.0.0/16 | 0/0 | 10.0.0.1/32 | 5060/16 | UDP, TCP | East, West
0.0.0.0/0 | 0/0 | 0.0.0.0/0 | 0/0 | | *
Policies are matched using a longest-prefix-match algorithm; the most specific policy is selected when comparing policies to received packets. One and only one policy is chosen per packet; if all of the next hops in that policy are unavailable, the next-best matching policy is not consulted (instead, the default policy may be consulted, as described below). This differs from the local-policy behavior on the OCSBC.
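The longest-prefix-match selection described above can be sketched as follows. This is a minimal illustration, not the SLB implementation: the policy names are hypothetical, and only the source-IP portion of the match is shown.

```python
import ipaddress

# Hypothetical policy table keyed by source prefix; names are illustrative.
POLICIES = [
    (ipaddress.ip_network("192.168.7.22/32"), "policy-1"),
    (ipaddress.ip_network("192.168.1.0/24"), "policy-2"),
    (ipaddress.ip_network("192.168.0.0/16"), "policy-3"),
    (ipaddress.ip_network("0.0.0.0/0"), "default"),
]

def match_policy(src_ip: str) -> str:
    """Return the single most-specific policy covering src_ip."""
    addr = ipaddress.ip_address(src_ip)
    candidates = [(net, name) for net, name in POLICIES if addr in net]
    # Most specific = longest prefix; one and only one policy is chosen.
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(match_policy("192.168.1.50"))  # the /24 wins over the /16 and /0
```

A packet from 192.168.1.50 matches the /24, /16, and /0 rows, but only the /24 policy is selected; the others are never consulted for that packet.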
Within each policy you may configure multiple next hops, where each next hop is a named group of OCSBCs. In the sample policy table, this is indicated in the second policy, with a source IP range of 192.168.1.0/24; its realm identifier list is North, South, West. Each of these realm identifiers represents a collection of zero or more OCSBCs; in OCSBC parlance, these are roughly analogous to session-agent groups. Each realm identifier is also assigned a priority (a value between 1 and 31, with 31 representing the highest priority) in the configuration, and the SLB sorts the possible destinations with the highest priority first. Upon receipt of a packet matching a policy with multiple configured realm identifiers, the SLB gives preference to OCSBCs from the realm identifier with the highest priority. Should no OCSBCs be available in that priority level (due to saturation, unavailability, and so on), the SLB moves on to the next priority level, and so on. Should no OCSBCs be available after traversing the entire list of OCSBCs within each priority level, the SLB either drops the packet or attempts to use the default policy.
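The priority-ordered walk through a policy's realm identifiers can be sketched as below. The realm names, priorities, and availability flags are hypothetical sample data, not values from the product; the point is the highest-priority-first traversal with fall-through.

```python
# Hypothetical realm table: priority 1-31 (31 = highest) and the OCSBCs
# currently able to accept traffic in each realm.
REALMS = {
    "North": {"priority": 31, "available_sbcs": []},         # saturated
    "South": {"priority": 20, "available_sbcs": ["sbc-3"]},
    "West":  {"priority": 10, "available_sbcs": ["sbc-5"]},
}

def pick_next_hop(realm_ids):
    """Walk realms highest-priority first; return the first available OCSBC."""
    ordered = sorted(realm_ids, key=lambda r: REALMS[r]["priority"], reverse=True)
    for name in ordered:
        sbcs = REALMS[name]["available_sbcs"]
        if sbcs:
            return name, sbcs[0]
    return None  # every realm exhausted: drop, or try the default policy

print(pick_next_hop(["North", "South", "West"]))
```

Here North is preferred (priority 31) but has no available OCSBCs, so the SLB falls through to South and selects sbc-3; West is never reached.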
The bottom row of the sample table shows this implicit, last-resort default policy. When enabled, the SLB reverts to the default policy when all of the potential next-hop realms referenced in the endpoint's distribution rule are unavailable. In that event, the default policy attempts to locate a clustered OCSBC that advertises support for the service interface on which the packet arrived. The realm is not considered when matching against the default policy. If such an OCSBC is found, the SLB forwards the packet to that OCSBC; if not, the SLB drops the packet.
It is not necessary to configure the default policy; it is simply intended as a catchall policy, and may be used when all that is required is a simple round-robin balancing scheme based on simple metrics (for example, CPU utilization and the number of registrations currently hosted by an OCSBC). If no policies are configured on the SLB, the default policy is used. The default realm is implied in the above table as * and is enabled by default for policy records.
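The default-policy fallback described above can be sketched as follows. This is a simplified illustration under stated assumptions: the cluster records, interface names, and the least-loaded tie-break on CPU and registration count are hypothetical, standing in for the SLB's own metrics.

```python
# Hypothetical cluster inventory: which service interfaces each OCSBC
# advertises, plus simple load metrics (illustrative values).
CLUSTER = [
    {"name": "sbc-1", "interfaces": {"sip-if-1"},            "cpu": 70, "regs": 4000},
    {"name": "sbc-2", "interfaces": {"sip-if-1", "sip-if-2"}, "cpu": 30, "regs": 1500},
    {"name": "sbc-3", "interfaces": {"sip-if-2"},            "cpu": 10, "regs": 200},
]

def default_policy(service_interface):
    """Ignore realms; pick the least-loaded OCSBC advertising the interface."""
    eligible = [s for s in CLUSTER if service_interface in s["interfaces"]]
    if not eligible:
        return None  # no supporting OCSBC: the SLB drops the packet
    return min(eligible, key=lambda s: (s["cpu"], s["regs"]))["name"]

print(default_policy("sip-if-1"))
```

For a packet arriving on sip-if-1, sbc-1 and sbc-2 are eligible regardless of realm, and the lighter-loaded sbc-2 is chosen; a packet on an interface no clustered OCSBC advertises is dropped.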
Use the following procedure to perform required lb-policy configuration.