Branch High Availability in The Distributed Enterprise: Implementation Guide
Table of Contents
Introduction
    Scope
    Target Audience
    Terminology and Concepts
Design Considerations
    Juniper Networks Distributed Enterprise Connectivity Architecture
    Branch Office High Availability Considerations
        Understanding Chassis Cluster (JSRP)
        Understanding Virtual Chassis Technology
Implementation and Configuration Guidelines
    Configuring Branch Office with a Single SRX Series Device
        Configuring Redundant IPsec Tunnels on a Single SRX Series Device
        Configuring Zones on an SRX Series Device
        Configuring NAT on SRX Series
    Configuring Branch Office with Redundant SRX Series Devices
        Configuring SRX Series Chassis Cluster (JSRP) For Device-Level WAN high availability
        Configuring Device-Level LAN high availability with EX Series Virtual Chassis
        Enable Virtual Chassis Uplink Ports
Connectivity Use Cases and Failover Scenarios
    Internet Backhaul Only
        Normal State
        Failover Scenarios
    Private WAN as Primary and Internet as Backup
        Normal State
        Failover Scenarios
    Internet and Private WAN Split Tunneling
        Normal State
        Failover Scenarios
Summary
About Juniper Networks
Table of Figures
Figure 1: Distributed enterprise connectivity architecture
Figure 2: Distributed enterprise high availability network design
Figure 3: SOHO or retail store link-level high availability architecture
Figure 4: Remote office link-level high availability architecture
Figure 5: Medium-to-large branch office device-level high availability architecture
Figure 6: Link-level redundant WAN connectivity architecture
Figure 7: Security zones on an SRX Series device
Figure 8: Device-level redundancy high availability architecture
Figure 9: SRX Series chassis cluster (JSRP) configuration
Figure 10: Configuring EX4200 Virtual Chassis in medium-to-large branch office
Figure 11: Connecting EX Series Virtual Chassis to redundant branch SRX Series Services Gateways
Figure 12: Traffic flow example in the Internet Backhaul Only use case
Figure 13: Traffic flow example in the Internet as Backup use case
Figure 14: Traffic example in the Internet and PWAN Split Tunneling use case
Introduction
Because there are various levels of high availability that can be deployed in the branch offices, enterprises need to
identify what level they want to achieve, and then deploy the appropriate level of device and link redundancy that
supports the high availability requirements.
Link-level redundancy essentially requires two links to operate in an active/active or active/backup setting so that if
one link fails, the other can take over (or likely reinstate) the forwarding of traffic that had been previously forwarded
over the failed link. Failure on any given access link should not result in a loss of connectivity. This only applies to
branch offices with at least two upstream links connected either to a private network or to the Internet.
Another level of high availability is device-level redundancy, effectively doubling up on devices to ensure that a
backup device can take over in the event of a failed device. Typically, the link redundancy and device redundancy are
coupled, and this coupling effectively ties failures together. With this strategy, no single device failure should result in
a loss of connectivity from the branch office to the data centers.
A true high availability design that provides assured connectivity for business-critical enterprise and branch office traffic should employ a combination of link and device redundancy, and should connect the branch to dual data centers.
Traffic from the branch office should be dual-homed to each data center so that in the event of a complete failure
in one of the data centers, traffic can be rerouted to a backup data center. Whenever failures occur (link, device, or
data center), traffic should be rerouted in less than 30 seconds. Within this period of time, packet loss might occur.
However, sessions will be maintained if the user applications can withstand these failover times. Branch offices with
redundant devices should provide session persistence so that in the event of a failure, established sessions will not
be dropped, even if the failed device was forwarding traffic.
This document provides configuration data and how-to information for deploying link-level as well as device-level high availability for each of the branch office network profiles in the distributed
enterprise environment.
Scope
This paper provides configuration parameters for each branch network device, including Juniper Networks
SRX Series Services Gateways and Juniper Networks EX Series Ethernet Switches, as part of a high availability
enterprise network. No security or unified threat management (UTM) features are detailed in this document. For
details about implementing UTM features in branch offices, see the Branch Office UTM Implementation Guide. For
details about deploying EX Series Ethernet Switches in the branch office, see Deploying EX Series Switches in
Branch Offices.
Target Audience
Security and IT engineers
Network architects
Terminology and Concepts
The high availability designs in this guide address the following connectivity and security requirements:

Connectivity
Link-level redundancy: Failure on any given access link should not result in a loss of connectivity. This only applies to distributed enterprise locations with at least two upstream links connected either to a private network or to the Internet.
Device-level redundancy
Session persistence
Load balancing: NAT is enabled so that machines in the trust and guest zones can access the Internet. During a failure, Internet sessions might not be preserved, because the translated addresses of that traffic might have to change when different service providers are used to connect to the Internet for higher availability.

Security
Traffic symmetry
Site-wide, real-time, and historical status monitoring and reporting of log files, device status, and configurations to a centralized location such as a network operations center (NOC)
Design Considerations
There are design considerations that can be applied universally, regardless of the branch office profile. The following
paragraphs discuss these considerations as they apply to high availability of network infrastructure, services, and
applications. Next is a discussion of how the Juniper Networks enterprise reference architecture applies to distributed
enterprises and all of their major locations: the branch offices, campus, and data centers.
Figure 1: Distributed enterprise connectivity architecture (branch offices with SRX Series devices and EX4200 Virtual Chassis, remote offices with SRX Series devices and EX2200/EX3200 switches, and SOHO sites with SRX Series devices and 3G wireless backup connect over the private WAN (managed services) and the public WAN (Internet) to the campus, the data centers, the enterprise's own core, and the NOC)
Juniper Networks offers a set of compelling solutions to meet the needs of distributed enterprise deployments. The
aforementioned connectivity architecture addresses the concerns of enterprises in each of the areas of connectivity,
security, end-user performance, visibility, and manageability. As detailed in the Branch Office Reference Architecture,
enterprises require a completely new approach in branch office networking to avoid services disruption, which
also impairs productivity in the branches and in turn negatively impacts the business. These requirements define a
new product category known as services gateways, which simplifies branch networking and lowers TCO by unifying
multiple services such as security, voice, data, and access servers into one remotely manageable platform.
As detailed in the Branch Office Reference Architecture, there are two types of high availability designs at most
distributed enterprise locations. Each of these design profiles is discussed in detail in the following considerations:
Link-level redundancy: This uses a single SRX Series device and either a single or dual private WAN or Internet
connection, as seen in many small branch offices. The SRX Series device provides integrated LAN switching
capability with 8 to 16 Ethernet ports.
Figure 3: SOHO or retail store link-level high availability architecture (a single SRX Series device connected to the private WAN, with PSTN and 3G wireless backup links and a WX Series client)
For connecting more devices, LAN connectivity in link-level high availability-only branch offices can be implemented
with a single fixed-configuration Juniper Networks EX2200 Ethernet Switch or Juniper Networks EX3200
Ethernet Switch. These Ethernet switches offer cost-efficient, complete Layer 2 and Layer 3 switching capabilities;
10/100/1000BASE-T copper port connectivity with either full or partial Power over Ethernet (PoE); and the full
JUNOS Software feature set.
Figure 4: Remote office link-level high availability architecture (a single SRX Series device connected to the Internet, with PSTN and 3G wireless backup links, and an EX2200/EX3200 switch providing PoE LAN connectivity for an access point, a local printer, and a WX client)
Device-level redundancy: This consists of two SRX Series devices; one connects to a private WAN, or a managed
services connection, while the other connects to the Internet, as seen in many medium-to-large branch offices.
Device redundancy is achieved through a chassis cluster (JSRP) and redundant Ethernet groups. LAN redundancy is
implemented with a Juniper Networks EX4200 Ethernet Switch connected to both edge devices to provide a high
availability configuration.
Figure 5: Medium-to-large branch office device-level high availability architecture (redundant SRX Series devices, one connected to the private WAN and one to the Internet with PSTN backup, and an EX Series Virtual Chassis providing PoE LAN connectivity for an access point, a local printer, and a WX client)
The solution profile types and the services they provide are derived from a basic reference architecture in which the
connectivity between distributed enterprise locations (branch/campus) and data centers is provided using the public
network (the Internet) and private WAN/MAN networks (either using point-to-point lines, metro Ethernet, managed
services, or MPLS Layer 2-/Layer 3-based VPNs).
It is not the purpose of this document to detail the different design decisions made at the branch or campus.
Instead, the intention is to use these designs as a starting point for building an IPsec-based VPN network.
Virtual Chassis configurations enable economical deployments of switches that deliver network availability in the
branch office locations where installation might otherwise be cost prohibitive or physically impossible. In a Virtual
Chassis configuration, all member switches are managed and monitored as a single logical device. This approach
simplifies the branch office network operations, allows the separation of placement and logical groupings of physical
devices, and provides efficient use of resources.
The Virtual Chassis solution also offers the same high availability features as other Juniper Networks chassis-based
switches and routers, including GRES for hitless failover.
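GRES itself is enabled with a single configuration statement. The following is a minimal sketch rather than a listing from this deployment; it would be entered on the Virtual Chassis master (or on any Juniper Networks platform with redundant routing engines):
{master:0}[edit]
set chassis redundancy graceful-switchover
# Enable graceful Routing Engine switchover so the backup routing engine stays
# synchronized and can take over without disrupting the forwarding plane
commit
# Commit the GRES configuration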
Hardware Requirements
Distributed enterprise branch secure routers: Juniper Networks SRX Series Services Gateways
Distributed enterprise branch secure switches: Juniper Networks EX Series Ethernet Switches
Software Requirements
JUNOS Software Release 9.5 or later is required.
Figure 6: Link-level redundant WAN connectivity architecture (the branch SRX Series device advertises the trust network prefix 10.16.2.0/24 over tunnel interfaces st0.0 (10.255.1.5, peer T1: 10.255.1.254) and st0.1 (10.255.2.5, peer T2: 10.255.2.254) across the Internet; the remote end advertises the prefixes 172.18.8.0/21, 172.18.0.0/18, 192.168.4.0/24, and 192.168.5.0/24, reaching the DC-A network 172.18.8.0/21 and the NOC network 192.168.4.0/24)
Figure 7: Security zones on an SRX Series device (the trust zone holds the 10.16.2.0/24 network; the untrust zone holds the ISP-facing interfaces ge-0/0/0.0 (1.2.1.219/24) and ge-0/0/1.0 (1.4.0.219/24); the VPN zone holds the tunnel interfaces st0.0 (10.255.1.5/24) and st0.1 (10.255.2.5/24) carrying IPsec to Data Center A)
A sample configuration for setting up security zones on an SRX Series device is listed in the following:
[edit]
set security zones functional-zone management interfaces fe-0/0/2.0
set security zones functional-zone management host-inbound-traffic system-services all
set security zones functional-zone management host-inbound-traffic protocols all
# Management zone configuration
set security zones security-zone trust host-inbound-traffic system-services all
set security zones security-zone trust host-inbound-traffic protocols all
set security zones security-zone trust interfaces fe-0/0/3.0 host-inbound-traffic system-services dhcp
set security zones security-zone trust interfaces fe-0/0/3.0 host-inbound-traffic system-services ping
# trust zone configuration
set security zones security-zone untrust host-inbound-traffic system-services all
set security zones security-zone untrust host-inbound-traffic protocols all
set security zones security-zone untrust interfaces lo0.0
set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic system-services dhcp
set security zones security-zone untrust interfaces ge-0/0/0.0 host-inbound-traffic system-services ping
set security zones security-zone untrust interfaces ge-0/0/1.0 host-inbound-traffic system-services dhcp
set security zones security-zone untrust interfaces ge-0/0/1.0 host-inbound-traffic system-services ping
# untrust zone configuration
set security zones security-zone VPN host-inbound-traffic system-services all
set security zones security-zone VPN host-inbound-traffic protocols all
set security zones security-zone VPN interfaces st0.0
set security zones security-zone VPN interfaces st0.1
# VPN zone configuration
Source NAT is then configured so that traffic leaving the branch toward the Internet is translated to the address of the egress interface, using source NAT rule-sets with a default translation rule.
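The following is a minimal sketch of interface-based source NAT, assuming the trust and untrust zones defined earlier; the rule-set name nat-default and the rule name rule1 are illustrative, not taken from the original listing:
[edit]
set security nat source rule-set nat-default from zone trust
set security nat source rule-set nat-default to zone untrust
# Apply the rule-set to traffic going from the trust zone to the untrust (Internet) zone
set security nat source rule-set nat-default rule rule1 match source-address 0.0.0.0/0
set security nat source rule-set nat-default rule rule1 then source-nat interface
# Translate all matching traffic to the address of the egress interface
commit
# Commit the source NAT configuration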
Figure 8: Device-level redundancy high availability architecture (redundant branch SRX Series devices, one connected through the private WAN and one through the Internet to the data centers, with PSTN as an additional backup and an EX Series Virtual Chassis providing PoE LAN connectivity for the local printer, WX client, and other devices)
Note: Each SRX Series device can terminate a pair of tunnels (one to each data center) because each is connected to
a different network.
Configuring SRX Series Chassis Cluster (JSRP) For Device-Level WAN high availability
The SRX Series devices are deployed as an active/active chassis cluster in such a way that whenever a device or
tunnel fails, JSRP fails over to the other SRX Series device with an active tunnel. In this way, the private WAN
is normally preferred over the Internet, as long as its tunnels are active. Whenever a tunnel fails to any of
the data centers, traffic is rerouted to the secondary SRX Series device. Whenever the primary SRX Series device
is active, Internet traffic can be routed either by backhauling to the data centers or by using the data link interface
connecting the SRX Series devices (see Figure 9 for detailed connections between SRX Series devices) to reach the
Internet connection on the secondary SRX Series device. Traffic, in turn, is translated to use the IP address of the
egress interface on the secondary SRX Series device as the source address. The private WAN is also used
to back up the Internet connection whenever the link between the secondary SRX Series device and the Internet fails.
One of the data centers advertises a default route over the IPsec tunnels transported on the private WAN. In this manner,
when the connection between the secondary SRX Series device and the Internet fails, the primary SRX Series device
selects the default route received through IPsec and consequently backhauls all of its Internet traffic to the data
center.
A sample configuration for setting up chassis cluster (JSRP) on an SRX Series device is listed in the following:
1. Prepare cluster mode on the first cluster member, SRX210-A.
Setting up the cluster and node ID is done in Operational mode and requires a reboot of the related devices:
root@SRX210-A> set chassis cluster cluster-id 1 node 0 reboot
The following message will appear after inputting the previous command.
Successfully enabled chassis cluster. Going to reboot now
It might take two to three minutes before the SRX210 device actually reboots. The following message will
appear when it reboots.
*** FINAL System shutdown message from root@SRX210-A ***
System going down IMMEDIATELY
2. After the reboot, prepare the cluster configuration on SRX210-A:
{primary:node0}
root@SRX210-A> edit
The following warning will appear after entering the Configuration mode:
warning: Clustering enabled; using private edit
warning: uncommitted changes will be discarded on exit
Entering configuration mode
Configure the following on cluster node 0:
{primary:node0}[edit]
set interfaces fab0 fabric-options member-interfaces ge-1/0/0
set interfaces fab1 fabric-options member-interfaces ge-3/0/0
# set up data link interfaces between two SRXs
set chassis cluster redundancy-group 0 node 0 priority 100
# set up higher cluster node priority for primary SRX
set chassis cluster redundancy-group 0 node 1 priority 1
# set up lower cluster node priority for secondary SRX
commit
# commit the chassis cluster (JSRP) configuration
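Cluster formation can then be checked from operational mode; the following is a brief sketch of standard verification commands, with the output omitted rather than reproduced from this deployment:
{primary:node0}
root@SRX210-A> show chassis cluster status
# Displays each redundancy group with node priorities and which node is currently primary
root@SRX210-A> show chassis cluster statistics
# Displays control link and fabric link heartbeat counters between the two nodes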
3. For SRX210 devices, you must connect devices back-to-back over a pair of Fast Ethernet connections. The
connection that serves as the control link must be the built-in controller port (fe-0/0/7 and fe-2/0/7 in this case)
on each device. The fabric link connection can be a combination of any pair of Gigabit Ethernet interfaces on the
devices, as illustrated in detail in Figure 9.
Figure 9: SRX Series chassis cluster (JSRP) configuration (SRX210-A and SRX210-B are connected back-to-back: the control link runs between fe-0/0/7 and fe-2/0/7, the fabric link between the fab0 and fab1 member interfaces, the WAN and management links use fe-0/0/0, ge-0/0/0, and ge-2/0/0, and the reth1 child links ge-0/0/1 and ge-2/0/1 connect down to the EX Series switch)
When you initialize an SRX Series device in chassis cluster mode, the system creates a redundancy group 0.
Redundancy group 0 manages the primacy and failover between the routing engines on each node of the cluster.
As is the case for all redundancy groups, redundancy group 0 can be primary on only one node at a time. The
node on which redundancy group 0 is primary determines which routing engine is active in the cluster. A node is
considered the primary node of the cluster if its routing engine is the active one.
The redundancy group 0 configuration specifies the priority for each node. Redundancy group 0 is primary, and
the routing engine is active on the node with the higher priority. By default, both nodes have the same priority
for redundancy group 0, but you can change the default setting to specify which node is primary for redundancy
group 0. Here is how redundancy group 0 primacy is determined:
If both nodes of a cluster are initialized at the same time and you have not changed the default setting for
redundancy group 0 node priority, node 0 takes precedence.
If you have not changed the default setting for redundancy group 0 node priority and one node of the cluster
is initialized before the other, the first node to be initialized takes precedence. In this case, the routing engine
on the first initialized node is the active one and the node is considered primary. (The primary node is not
necessarily node 0. If you boot node 1 before node 0, node 1's routing engine takes precedence.)
The other node is considered secondary. The secondary node's routing engine is synchronized with state
information from the primary node so that it is ready to take over if the primary node fails.
If you set the redundancy group 0 node priority, the routing engine on the node with the higher priority takes
precedence.
You cannot enable preemption for redundancy group 0. If you want to change the primary node for redundancy
group 0, you must do a manual failover.
6. Create redundant Ethernet interface for connecting to EX Series Ethernet Switches:
To create redundant Ethernet interfaces, one or more redundancy groups numbered 1 through 255 need
to be set up. Up to eight redundant Ethernet interfaces can be created on an SRX Series chassis cluster. Each
redundancy group acts as an independent unit of failover and is primary on only one node at a time.
Each redundancy group can contain one or more redundant Ethernet interfaces. A redundant Ethernet interface
is a pseudo interface that contains a pair of physical Gigabit Ethernet interfaces or a pair of Fast Ethernet
interfaces. A redundant Ethernet
interface has two child links, one from each node. If a redundancy group is active on node 0, then the child links
of all the associated redundant Ethernet interfaces on node 0 are active. If the redundancy group fails over to
node 1, then the child links of all redundant Ethernet interfaces on node 1 become active.
When you configure a redundancy group, you must specify a priority for each node to determine the node on
which the redundancy group is primary. The node with the higher priority is selected as primary. The primacy
of a redundancy group can fail over from one node to the other. When a redundancy group fails over to the other
node, its redundant Ethernet interfaces on that node are active and their interfaces are passing traffic.
A sample configuration for setting up the redundant Ethernet interface on the SRX Series is listed as follows:
For corresponding redundant interface configuration on EX Series switches, see the steps of the next section,
Configuring Device-Level LAN high availability with EX Series Virtual Chassis.
{primary:node0}[edit]
set chassis cluster reth-count 2
# Set the number of redundant interfaces in the cluster
set interfaces reth1 redundant-ether-options redundancy-group 1
# Create redundant group 1 for redundant ethernet interface reth1
set interfaces reth1 description Connect-to-EX-switch
set interfaces reth1 vlan-tagging
set interfaces reth1 unit 163 vlan-id 163
set interfaces reth1 unit 163 family inet address 10.16.3.1/24
# Configure the redundant ethernet interface reth1
set security zones security-zone trust interfaces reth1.163 host-inbound-traffic system-services all
# Assign interface reth1.163 to trust zone with services enabled
set interfaces ge-0/0/1 gigether-options redundant-parent reth1
set interfaces ge-2/0/1 gigether-options redundant-parent reth1
# Join the redundant group on the physical interfaces
set chassis cluster redundancy-group 1 interface-monitor ge-0/0/1 weight 255
set chassis cluster redundancy-group 1 interface-monitor ge-2/0/1 weight 255
# Track the physical interfaces in Redundancy Group 1
commit
# Commit the redundant Ethernet interface configuration
run show interface reth1 terse
run show chassis cluster interfaces
# Verify the redundant Ethernet interface is correctly defined
For more details about how to implement EX Series switches in branch offices, see the implementation guide
Deploying EX Series Switches in Branch Offices.
A single Virtual Chassis configuration allows up to 10 EX4200 switches to be interconnected and managed
as a single unit.
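Converting an uplink module port into a Virtual Chassis port (VCP) is done from operational mode. The following is a minimal sketch, assuming the uplink module sits in PIC slot 1 and port 0 is used:
{master:0}
request virtual-chassis vc-port set pic-slot 1 port 0
# Convert the uplink module port into a VCP; it then appears as vcp-255/1/0
show virtual-chassis vc-port
# Verify that the dedicated VCPs and the converted uplink port are up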
A Virtual Chassis configuration of this kind is formed through the dedicated VCPs (vcp-0) and the
front-panel uplink module port (vcp-255/1/0).
When EX4200 switches are deployed in a Virtual Chassis configuration, the member switches automatically elect
a master and backup routing engine. The master routing engine is responsible for managing the Virtual Chassis
configuration, while the backup is available to take over in the event a master fails. All other switches in a Virtual
Chassis configuration take on the role of a line card and are eligible as a master or backup routing engine if the
original master or backup were to fail.
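The election can also be steered explicitly. The following is a minimal sketch, assuming members 0 and 1 are intended to be the master and backup routing engines (255 is the highest mastership priority):
{master:0}[edit]
set virtual-chassis member 0 mastership-priority 255
set virtual-chassis member 1 mastership-priority 255
# Give members 0 and 1 the highest priority so they are elected master and backup;
# the remaining members stay in the line-card role
commit
run show virtual-chassis
# Verify each member's role (Master, Backup, or Linecard)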
3. Configure the interfaces on the EX Series Ethernet Switches for connectivity to the SRX Series cluster
The following figure shows an example connecting the EX4200 Virtual Chassis to redundant branch SRX Series
devices.
Figure 11: Connecting EX Series Virtual Chassis to redundant branch SRX Series Services Gateways (SRX210-A and SRX210-B are joined by the data/fabric link on fab0 and fab1, and the reth1 child links ge-0/0/1 and ge-2/0/1 connect each node to the EX Series switch)
A sample configuration for setting up Ethernet interfaces on an EX Series device is listed in the following:
{master:0}[edit]
set interfaces ge-0/0/0 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-0/0/0 unit 0 family ethernet-switching vlan members vlan163
set interfaces ge-1/0/0 unit 0 family ethernet-switching port-mode trunk
set interfaces ge-1/0/0 unit 0 family ethernet-switching vlan members vlan163
# Allow interconnect vlan on EX trunk interfaces connecting to SRX cluster
set interfaces ge-0/0/12 description "UC Machines in Remote Branch"
set interfaces ge-0/0/12 unit 0 family ethernet-switching port-mode access
set interfaces ge-0/0/12 unit 0 family ethernet-switching vlan members vlan163
# Configure EX access downlink port connecting to the host
set vlans vlan163 vlan-id 163
set vlans vlan163 interface ge-0/0/12.0
# Configure the vlan and associate with the downlink access port
run show interfaces terse
run show vlans vlan163
# Verify that the vlan and interfaces are correctly configured
Figure 12: Traffic flow example in the Internet Backhaul Only use case (SOHO and branch office SRX Series devices with EX Series Virtual Chassis connect over the Internet and the private WAN (managed services) to the data center and HQ/campus; the NOC hosts NSMXpress, STRM, SA Series, IC Series, ISG Series, and IDP Series platforms)
Failover Scenarios
The failover in the Internet Backhaul Only scenario is relatively simple and straightforward. In this use case, each
branch has two WAN connections and two IPsec VPN tunnels for each data center if there are multiple data
centers in the enterprise.
Traffic is load-balanced across each pair of tunnels through an equal-cost multipath (ECMP) routing configuration.
Whenever traffic is directed to a given data center, sessions are load-balanced in a round-robin fashion across each
IPsec tunnel going to that data center. If a link or tunnel fails, the failure is detected automatically and the route
whose next hop is the failed link or tunnel is withdrawn from the SRX Series device's routing table. Consequently, all
the traffic on the failed link or tunnel is forced through the remaining active tunnel to the data center. Because no
interface source NAT is deployed for this use case, sessions are normally retained and no traffic loss should occur.
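The per-flow load balancing described above can be sketched with two equal-cost routes and a forwarding-table export policy. The data center prefix 172.18.8.0/21 and the tunnel interfaces st0.0 and st0.1 are taken from the earlier examples, while the policy name load-balance is illustrative; the same export policy applies when the prefixes are learned dynamically over the tunnels:
[edit]
set routing-options static route 172.18.8.0/21 next-hop st0.0
set routing-options static route 172.18.8.0/21 next-hop st0.1
# Two equal-cost next hops toward the data center prefix, one per IPsec tunnel
set policy-options policy-statement load-balance then load-balance per-packet
set routing-options forwarding-table export load-balance
# Install both next hops in the forwarding table; traffic is hashed per flow, so a
# given session stays on one tunnel until that tunnel fails
commit
# Commit the ECMP load-balancing configuration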
Figure 13: Traffic flow example in the Internet as Backup use case (SOHO and branch office SRX Series devices with EX Series Virtual Chassis connect over the private WAN (managed services) as the primary path, with the Internet as backup, to the data center, HQ/campus, and NOC)
Failover Scenarios
In case the primary route fails in the Internet as Backup use case, the data center routes will be advertised through
the remaining tunnel on top of the Internet link. All the internal traffic from the branch office to the data center
will go through the IPsec tunnel on the Internet link and flow into the respective data center. Considering the
bandwidth limits that typically apply to the Internet backup link, it is recommended that the branch office send its
Internet traffic directly to the Internet to preserve performance in this failover situation. This design can be achieved
by configuring a default static route that uses the primary tunnel to the data center as a qualified-next-hop on the
SRX Series device, with a lower metric than the default route from the Internet interface. When the primary tunnel
to the data center fails, this default route will be withdrawn from the SRX Series device's routing table. All the
Internet traffic will then take the secondary default route and flow directly to the Internet. It is also recommended
that a certain level of security be implemented locally at the branch office because of this failover behavior.
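One way to express this route preference is with two candidate default routes distinguished by metric. The following is a minimal sketch, assuming st0.0 is the primary tunnel to the data center and 1.2.1.1 is a hypothetical ISP gateway address:
[edit]
set routing-options static route 0.0.0.0/0 qualified-next-hop st0.0 metric 5
# Preferred default route through the primary IPsec tunnel to the data center
set routing-options static route 0.0.0.0/0 qualified-next-hop 1.2.1.1 metric 10
# Backup default route through the local Internet interface (hypothetical ISP gateway, higher metric)
commit
run show route 0.0.0.0/0
# With the tunnel up, Internet traffic is backhauled over st0.0; when st0.0 goes down,
# the route through the ISP gateway takes over automatically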
Figure 14: Traffic example in the Internet and PWAN Split Tunneling use case (internal traffic uses IPsec tunnels over both the private WAN (managed services) and the Internet to reach the data center, HQ/campus, and NOC, while Internet traffic flows directly from the branch office to the Internet)
Failover Scenarios
In the normal state, all the internal traffic is load-balanced over all the tunnels on top of both the private WAN and
Internet connections, while the Internet traffic flows directly from the branch office to the Internet.
In case the tunnel on the private WAN fails, the data center routes will be advertised through the remaining tunnel
on top of the Internet link. All the internal traffic from the branch office to the data center will go through the IPsec
tunnel on the Internet link and flow into the respective data center. The Internet traffic remains on the same path and
flows directly from the branch office to the Internet.
In case the Internet connection fails, the internal traffic will go through the private WAN connection to the data
center, as the data center routes will only be advertised over this connection. The Internet traffic now takes the
private WAN connection and is also backhauled through the data center to the Internet. This design can be achieved
by configuring a default static route that uses the primary tunnel to the data center as a qualified-next-hop on the
SRX Series device, with a higher metric than the default route from the Internet interface. When the Internet
connection fails, the default route using the Internet connection as a next hop will be withdrawn from the SRX Series
device's routing table. All the Internet traffic will then take the default route to the data center and be backhauled
through the data center to the Internet.
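The same construct with the metrics reversed captures this split tunneling behavior; again st0.0 is the primary tunnel and 1.2.1.1 a hypothetical ISP gateway:
[edit]
set routing-options static route 0.0.0.0/0 qualified-next-hop 1.2.1.1 metric 5
# Preferred default route straight to the Internet (normal split tunneling state)
set routing-options static route 0.0.0.0/0 qualified-next-hop st0.0 metric 10
# Backup default route that backhauls Internet traffic over the tunnel when the
# local Internet connection fails
commit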
Summary
Juniper Networks offers a set of compelling solutions to meet the needs of branch office deployments. The different
levels of high availability design with Juniper Networks SRX Series Services Gateways and EX Series Ethernet
Switches, both of which run the industry-leading JUNOS Software, address the requirements of today's distributed
enterprises, while ensuring that businesses continue to gain efficiencies in CapEx and OpEx.
Implementing different levels of high availability at all kinds of branch offices can be achieved by using Juniper
Networks SRX Series Services Gateways as integrated security, VPN, routing, and switching devices, and by
using the EX Series with Virtual Chassis technology as branch local switches. By practicing the implementation
guidelines addressed here, network administrators can better understand the different levels of high availability
deployments, particularly with Juniper's innovative chassis cluster and Virtual Chassis technologies, in building
reliable branch office networks.
Copyright 2010 Juniper Networks, Inc. All rights reserved. Juniper Networks, the Juniper Networks logo, Junos,
NetScreen, and ScreenOS are registered trademarks of Juniper Networks, Inc. in the United States and other
countries. All other trademarks, service marks, registered marks, or registered service marks are the property of
their respective owners. Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper
Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
8010017-003-EN
Sept 2010