Configure routing for an additional network interface


This tutorial describes how to create a virtual machine (VM) instance with multiple network interfaces, each of which is attached to different Virtual Private Cloud (VPC) networks. Additionally, the tutorial provides an example of how to configure routing on a Linux VM so that you can successfully ping the nic1 interface.

VMs with multiple network interface controllers are referred to as multi-NIC VMs.

Costs

In this document, you use the following billable components of Google Cloud:

  • Compute Engine

To generate a cost estimate based on your projected usage, use the pricing calculator. New Google Cloud users might be eligible for a free trial.

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Make sure that billing is enabled for your Google Cloud project.

  4. Enable the Compute Engine API.

    Enable the API


Example configuration

The following diagram shows the VPC networks, subnets, and VMs that you create in this tutorial, along with example values that you can use for resource names and subnet IP address ranges:

Figure 1. In this tutorial, you create two VPC networks that each have two subnets. All subnets are in the same region. Additionally, you create three VMs: one multi-NIC VM that attaches to the first two subnets, and one VM in each of the two remaining subnets.

Create two VPC networks

Before you can create a multi-NIC VM, the VPC networks that you want to connect it to must already exist. Create two VPC networks; in this tutorial, each VPC network has two subnets.

To create the configuration shown in the example configuration, create your networks and subnets with the following values:

  • A network called network-1 that contains the following:
    • A subnet called subnet-1 that has a primary IPv4 address range of 10.10.1.0/24.
    • A subnet called subnet-3 that has a primary IPv4 address range of 10.10.3.0/24.
  • A network called network-2 that contains the following:

    • A subnet called subnet-2 that has a primary IPv4 address range of 10.10.2.0/24.
    • A subnet called subnet-4 that has a primary IPv4 address range of 10.10.4.0/24.
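
If you prefer to create the whole example configuration at once with the gcloud CLI, the following commands are a minimal sketch that uses the values above; the region us-central1 is an assumption, and any region works as long as you reuse it when you create the multi-NIC VM later. The Console and gcloud tabs that follow walk through the same steps, including the firewall rules.

# Create the two networks in custom subnet mode.
gcloud compute networks create network-1 --subnet-mode=custom
gcloud compute networks create network-2 --subnet-mode=custom

# Create the four subnets; us-central1 is an assumed region.
gcloud compute networks subnets create subnet-1 \
    --network=network-1 --range=10.10.1.0/24 --region=us-central1
gcloud compute networks subnets create subnet-3 \
    --network=network-1 --range=10.10.3.0/24 --region=us-central1
gcloud compute networks subnets create subnet-2 \
    --network=network-2 --range=10.10.2.0/24 --region=us-central1
gcloud compute networks subnets create subnet-4 \
    --network=network-2 --range=10.10.4.0/24 --region=us-central1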

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click Create VPC network.

  3. In the Name field, enter a name for the VPC network.

  4. Choose Custom for the Subnet creation mode.

  5. In the New subnet section, specify the following:

    1. Provide a Name for the subnet.
    2. Select a Region. Make sure that both VPC networks that you create use the same region for at least one of their subnets. Use this same region when you create the multi-NIC VM in the following section. The example configuration uses the same region for all subnets.
    3. Enter an IP address range. This is the primary IPv4 range for the subnet.

      If you select a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    4. Click Done.

  6. Click Add subnet to create a second subnet. You use this subnet to test pinging the VM's network interface from outside the interface's primary subnet range.

  7. In the Firewall rules section, select the allow-custom rule, and then click EDIT. Configure the rule as follows to ensure that you can test connectivity from the test VMs to multi-nic-vm:

    1. Under IPv4 ranges, keep the checkboxes selected for the subnets' IPv4 address ranges.
    2. Under Other IPv4 ranges, enter 35.235.240.0/20 so that you can connect to the test VMs using SSH. Including this range allows SSH connections using Identity-Aware Proxy (IAP) TCP forwarding. For more information, see Allow ingress SSH connections to VMs.
    3. Under Protocols and ports, select specified protocols and ports.
      1. Select TCP, and then enter 22, 3389 to allow SSH and RDP.
      2. Select Other, and then enter icmp to allow ICMP.
  8. Click Create.

  9. Repeat these steps to create a second VPC network. Make sure that its subnet IP address ranges don't overlap with those of your first network; for example, use the IP address ranges from the example configuration.

gcloud

  1. Use the networks create command to create a VPC network.

    gcloud compute networks create NETWORK --subnet-mode=custom
    

    Replace the following:

    • NETWORK: a name for the VPC network.
  2. Use the networks subnets create command to create a subnet for your VPC network.

    gcloud compute networks subnets create NAME \
      --network=NETWORK \
      --range=RANGE \
      --region=REGION
    

    Replace the following:

    • NAME: a name for the subnet.
    • NETWORK: the name of the VPC network.
    • RANGE: an IP address range. This is the primary IPv4 range for the subnet.

      If you enter a range that is not an RFC 1918 address, confirm that the range doesn't conflict with an existing configuration. For more information, see IPv4 subnet ranges.

    • REGION: a region. Make sure that both VPC networks that you create use the same region for at least one of their subnets. Use this same region when you create the multi-NIC VM in the following section. The example configuration uses the same region for all subnets.

  3. Repeat the previous step to create another subnet. You use this second subnet to test pinging the VM's network interface from outside the interface's primary subnet range.

  4. Create a firewall rule to allow SSH, RDP, and ICMP:

    gcloud compute firewall-rules create allow-ssh-rdp-icmp \
     --network NETWORK \
     --action=ALLOW \
     --direction=INGRESS \
     --rules=tcp:22,tcp:3389,icmp \
     --source-ranges=SOURCE_RANGE
    

    Replace the following:

    • NETWORK: enter the value that corresponds to the network you're creating:
      • For the first network, enter network-1.
      • When you repeat the steps in this section for the second network, enter network-2.
    • SOURCE_RANGE: enter the value that corresponds to the network you're creating:
      • For the first network, enter 10.10.3.0/24, 35.235.240.0/20. Including 10.10.3.0/24 ensures that you can test connectivity from test-vm-1 to the nic0 interface of multi-nic-vm. Including 35.235.240.0/20 allows SSH connections using Identity-Aware Proxy (IAP) TCP forwarding. For more information, see Allow ingress SSH connections to VMs.
      • When you repeat the steps in this section for the second network, enter 10.10.4.0/24, 35.235.240.0/20. Including 10.10.4.0/24 ensures that you can test connectivity from test-vm-2 to the nic1 interface of multi-nic-vm. Including 35.235.240.0/20 allows SSH connections using Identity-Aware Proxy (IAP) TCP forwarding.
  5. Repeat these steps to create a second VPC network. Use a different firewall rule name for the second network, because firewall rule names must be unique within a project. Make sure that the subnet IP address ranges don't overlap with those of your first network; for example, use the IP address ranges from the example configuration. A sketch with example values follows this list.
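
With the example configuration, the two firewall rules might look like the following sketch. The rule names are assumptions chosen so that they don't collide:

# Allow SSH, RDP, and ICMP into network-1 from subnet-3 and from IAP.
gcloud compute firewall-rules create allow-ssh-rdp-icmp-1 \
    --network=network-1 \
    --action=ALLOW \
    --direction=INGRESS \
    --rules=tcp:22,tcp:3389,icmp \
    --source-ranges=10.10.3.0/24,35.235.240.0/20

# Allow the same traffic into network-2 from subnet-4 and from IAP.
gcloud compute firewall-rules create allow-ssh-rdp-icmp-2 \
    --network=network-2 \
    --action=ALLOW \
    --direction=INGRESS \
    --rules=tcp:22,tcp:3389,icmp \
    --source-ranges=10.10.4.0/24,35.235.240.0/20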

Create a multi-NIC VM

Create a VM instance that has one interface for each VPC network that you created in the previous section.

To create a multi-NIC VM:

Console

  1. In the Google Cloud console, go to the Create an instance page.

    Go to Create an instance

  2. In the Name field, enter a name for the instance. This corresponds to multi-nic-vm in the example configuration.

  3. In the Region field, select the same region in which you created one subnet in each of your VPC networks. The VM instance must be in the same region as the subnets to which its interfaces connect. The example configuration uses the same region for all subnets.

  4. In the Zone field, select a zone.

  5. In the Advanced options section, expand Networking, and then do the following:

    1. Review the Network interfaces section. Google Cloud automatically populates the first network interface with a network and subnetwork. This corresponds to network-1 and subnet-1 in the example configuration.
    2. For Primary internal IPv4 address, select one of the following:
      • Ephemeral to assign a new ephemeral IPv4 address
      • A reserved static internal IPv4 address from the list
      • Reserve static internal IPv4 address to reserve and assign a new static internal IPv4 address. If you are using the example configuration, reserve 10.10.1.3.
    3. For External IPv4 address, select None.

    4. To add another interface, click Add network interface.

    5. For Network and Subnetwork, select the second network and subnetwork that you created. This corresponds to network-2 and subnet-2 in the example configuration.

    6. For IP stack type, select IPv4 (single-stack).

    7. For Primary internal IPv4 address, select one of the following:

      • Ephemeral to assign a new ephemeral IPv4 address
      • A reserved static internal IPv4 address from the list
      • Reserve static internal IPv4 address to reserve and assign a new static internal IPv4 address. If you are using the example configuration, reserve 10.10.2.3.
    8. For External IPv4 address, select None.

    9. To finish adding the network interface, click Done.

  6. Click Create.

gcloud

To create network interfaces on a new instance, use the instances create command.

Include the --network-interface flag for each interface, followed by any appropriate networking keys, such as network, subnet, and private-network-ip. Instead of assigning an external IP address, the following command specifies no-address.

gcloud compute instances create INSTANCE_NAME \
    --zone ZONE \
    --network-interface \
        network=NIC0_NETWORK,subnet=NIC0_SUBNET,private-network-ip=NIC0_INTERNAL_IPV4_ADDRESS,no-address \
    --network-interface \
        network=NIC1_NETWORK,subnet=NIC1_SUBNET,private-network-ip=NIC1_INTERNAL_IPV4_ADDRESS,no-address

Replace the following:

  • INSTANCE_NAME: the name of the VM instance to create. This corresponds to multi-nic-vm in the example configuration.
  • ZONE: the zone where the instance is created. Enter a zone in the same region in which you created one subnet in each of your VPC networks. The VM instance must be in the same region as the subnets to which its interfaces connect. The example configuration uses the same region for all subnets.
  • Values for the first interface:
    • NIC0_NETWORK: the network where the interface attaches. This corresponds to network-1 in the example configuration.
    • NIC0_SUBNET: the subnet where the interface attaches. This corresponds to subnet-1 in the example configuration.
    • NIC0_INTERNAL_IPV4_ADDRESS: the internal IPv4 address that you want the interface to have in the target subnet. If you are using the example configuration, enter 10.10.1.3. Omit if you just want any valid address assigned.
  • Values for the second interface:
    • NIC1_NETWORK: the network where the interface attaches. This corresponds to network-2 in the example configuration.
    • NIC1_SUBNET: the subnet where the interface attaches. This corresponds to subnet-2 in the example configuration.
    • NIC1_INTERNAL_IPV4_ADDRESS: the internal IPv4 address that you want the interface to have in the target subnet. If you are using the example configuration, enter 10.10.2.3. Omit if you just want any valid address assigned.
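
With the example configuration, the command might look like the following sketch; the zone us-central1-a is an assumption and must be in the same region as subnet-1 and subnet-2:

gcloud compute instances create multi-nic-vm \
    --zone us-central1-a \
    --network-interface \
        network=network-1,subnet=subnet-1,private-network-ip=10.10.1.3,no-address \
    --network-interface \
        network=network-2,subnet=subnet-2,private-network-ip=10.10.2.3,no-address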

Create two test VMs

Create two additional VM instances:

  • One in the same network as the nic0 interface of the multi-NIC VM that you created, but in a different subnet. This corresponds to test-vm-1 in subnet-3 in the example configuration.
  • One in the same network as the nic1 interface of the multi-NIC VM that you created, but in a different subnet. This corresponds to test-vm-2 in subnet-4 in the example configuration.

You use these VM instances to test pinging the multi-NIC VM from subnets outside the primary subnet ranges of its network interfaces.

To create the VM instances:

Console

  1. In the Google Cloud console, go to the Create an instance page.

    Go to Create an instance

  2. In the Name field, enter a name for the instance.

  3. In the Region field, select the region in which you placed the additional subnet in your first VPC network.

  4. In the Zone field, select a zone.

  5. In the Advanced options section, expand Networking, and then do the following:

    1. Review the Network interfaces section. Make sure that the subnet is different from the one used by the nic0 interface of your multi-NIC VM.
  6. Click Create.

  7. Repeat these steps to create an instance in your second VPC network, and in a subnet that is different from that of the nic1 interface of your multi-NIC VM.

gcloud

  1. Run the instances create command and include the --network-interface flag for each interface, followed by any appropriate networking keys, such as network, subnet, private-network-ip, or address.

    gcloud compute instances create INSTANCE_NAME \
      --zone ZONE \
      --network-interface \
           network=NIC0_NETWORK,subnet=NIC0_SUBNET,private-network-ip=NIC0_INTERNAL_IPV4_ADDRESS
    

    Replace the following:

    • INSTANCE_NAME: the name of the VM instance to create.
    • ZONE: the zone where the instance is created. Enter a zone in the region in which you placed the additional subnet in your first VPC network (the subnet that is not used by the multi-NIC VM).
    • NIC0_NETWORK: the network where the interface attaches.
    • NIC0_SUBNET: the subnet where the interface attaches.
    • NIC0_INTERNAL_IPV4_ADDRESS: the internal IPv4 address that you want the interface to have in the target subnet. Omit if you just want any valid address assigned.
  2. Repeat the previous step to create an instance in your second VPC network and in a subnet that is different from that of the nic1 interface of your multi-NIC VM.
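
With the example configuration, the two test VMs might be created as follows; the zone us-central1-a is an assumption, and the internal IP addresses are left for Google Cloud to assign:

# test-vm-1: same network as nic0 of multi-nic-vm, different subnet.
gcloud compute instances create test-vm-1 \
    --zone us-central1-a \
    --network-interface network=network-1,subnet=subnet-3

# test-vm-2: same network as nic1 of multi-nic-vm, different subnet.
gcloud compute instances create test-vm-2 \
    --zone us-central1-a \
    --network-interface network=network-2,subnet=subnet-4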

Test connectivity to the multi-NIC VM

Follow the steps in this section to test ping from the additional VM instances that you created to each interface of your VM instance with multiple network interfaces.

The following table shows whether each ping succeeds at this point in the tutorial, using the IP address values from the example configuration.

From: VM (test-vm-1) in the same network as the nic0 interface of multi-nic-vm, but in a different subnet
To: Internal IP address (10.10.1.3) of the nic0 interface of multi-nic-vm
Ping successful: Yes

From: VM (test-vm-2) in the same network as the nic1 interface of multi-nic-vm, but in a different subnet
To: Internal IP address (10.10.2.3) of the nic1 interface of multi-nic-vm
Ping successful: No, until you configure policy routing

Get the IP addresses of the multi-NIC VM

If necessary, get the interface IP addresses of your multi-NIC VM so that you can ping them in the following sections.

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. In the list of VM instances, find the multi-NIC VM that you created, and record these values so that you can ping them in the following steps:

    • The Internal IP addresses of its nic0 and nic1 interfaces

gcloud

  1. Run the instances list command:

    gcloud compute instances list
    
  2. Locate your multi-NIC VM and record the following from the output:

    • INTERNAL_IP: the first and second addresses correspond to the nic0 and nic1 network interfaces.
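
Alternatively, you can print only the internal addresses with a format expression. This sketch assumes the example VM name and zone:

gcloud compute instances describe multi-nic-vm \
    --zone us-central1-a \
    --format="value(networkInterfaces[].networkIP)"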

Ping the nic0 interface of your VM

  1. In the list of VM instances, locate the VM that you created in the same network as the nic0 interface of the multi-NIC VM, but in a different subnet.

    1. In the row of the instance, click SSH.
  2. Run the following command to ping the internal IP address of the nic0 interface of your multi-NIC VM:

    ping INTERNAL_IP_NIC0
    

    Replace INTERNAL_IP_NIC0 with the corresponding address that you recorded previously. If you are using the example configuration, enter 10.10.1.3.

    Note that the ping is successful.

  3. Run exit to close the terminal window.

Ping the nic1 interface of your VM

  1. In the list of VM instances, locate the instance that you created in the same network as the nic1 interface of the multi-NIC VM, but in a different subnet.

    1. In the row of the instance, click SSH.
  2. Run the following command to ping the internal IP address of the second interface of your multi-NIC VM:

    ping INTERNAL_IP_NIC1
    

    Replace INTERNAL_IP_NIC1 with the corresponding address that you recorded previously. If you are using the example configuration, enter 10.10.2.3.

    Note that the ping is unsuccessful.

  3. Run exit to close the terminal window.

Configure policy routing

The ping test in the preceding section failed because of asymmetric routing: traffic is sent to the nic1 interface of multi-nic-vm, but the VM's default route causes the replies to be sent out through nic0. For more information, see DHCP behavior with multiple network interfaces.
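
Optionally, you can observe this asymmetry on multi-nic-vm while test-vm-2 runs the failing ping. The following sketch assumes that tcpdump is installed (it isn't on every image) and that the guest OS named the interfaces ens4 (nic0) and ens5 (nic1); confirm the names with ip link list.

# Terminal 1 on multi-nic-vm: echo requests arrive on the nic1 interface.
sudo tcpdump -n -i ens5 icmp

# Terminal 2 on multi-nic-vm: the echo replies leave through the nic0
# interface, where they are dropped because their source address
# (10.10.2.3) doesn't belong to network-1.
sudo tcpdump -n -i ens4 icmp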

Follow the steps in this section to configure policy routing to make sure that egress packets leave through the correct interface.

This tutorial uses Linux VMs. Source-based policy routing is not supported by Windows operating systems.

Find the default gateway for the nic1 interface of the VM

You can find the default gateway for a VM instance's interface by querying the metadata server. If you are using the example configuration, the value is 10.10.2.1.

To find the default gateway for the nic1 interface's IPv4 address, make the following request from the multi-NIC VM:

curl http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/1/gateway -H "Metadata-Flavor: Google"

To find the default gateway for a different network interface, specify the appropriate interface number. Google Cloud names interfaces in the format nicNUMBER; this name is different from the interface name that the guest operating system assigns. To find the name that Google Cloud has assigned to an interface, see Get the IP addresses of the multi-NIC VM. In your request to the metadata server, enter only the number; for example, for nic2, specify 2.
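
For example, the following loop, run on the multi-NIC VM, prints the gateway for each of the two interfaces from the example configuration; this is a sketch that assumes the VM has exactly two interfaces, nic0 and nic1:

for i in 0 1; do
  echo -n "nic${i} gateway: "
  curl -s -H "Metadata-Flavor: Google" \
      "http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/${i}/gateway"
  echo
done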

Configure a new routing table on the multi-NIC VM

This section describes how to configure a new routing table on the multi-NIC VM.

  1. Enable the serial console by following the steps in Enabling access for a VM instance.

  2. To avoid losing connectivity to the VM while you change the default route, connect to the serial console.

  3. Run ip link list to list your VM's network interfaces, and then record the name of the nic1 interface, such as ens5.

  4. Run the following command to ensure that the nic1 interface is configured with an IP address.

    ip addr show NIC
    

    Replace NIC with the name of the nic1 interface from the previous step.

    If the nic1 interface has not been assigned an IP address automatically, you can manually assign an IP address by running the following command:

    sudo ip addr add IP_ADDRESS dev NIC
    

    Replace the following:

    • IP_ADDRESS: the internal IP address to configure on the interface. This corresponds to 10.10.2.3 in the example configuration.
    • NIC: the name of the nic1 interface from the previous step.
  5. Create a custom route table for the nic1 network interface.

    echo "1 ROUTE_TABLE_NAME" | sudo tee -a /etc/iproute2/rt_tables
    

    Replace ROUTE_TABLE_NAME with a name for the route table, such as route-nic1.

  6. In the custom route table, create a default route for the nic1 network interface, and create a route with a source hint for packets sent to the gateway.

    sudo ip route add default via GATEWAY dev NIC table ROUTE_TABLE_NAME
    sudo ip route add GATEWAY src IP_ADDRESS dev NIC table ROUTE_TABLE_NAME
    

    Replace the following:

    • GATEWAY: the default gateway IP address of the interface. This corresponds to 10.10.2.1 in the example configuration.
    • NIC: the interface that you want to add a route for. For example, ens5.
    • ROUTE_TABLE_NAME: the name of your route table.
    • IP_ADDRESS: the internal IP address configured on the interface. This corresponds to 10.10.2.3 in the example configuration.
  7. Create routing rules that instruct the VM to use the custom route table for packets with sources or destinations that match the primary internal IPv4 address assigned to the nic1 interface:

    sudo ip rule add from IP_ADDRESS/PREFIX_LENGTH table ROUTE_TABLE_NAME
    sudo ip rule add to IP_ADDRESS/PREFIX_LENGTH table ROUTE_TABLE_NAME
    

    Replace the following:

    • IP_ADDRESS: the internal IP address configured on the interface. This corresponds to 10.10.2.3 in the example configuration.
    • PREFIX_LENGTH: the prefix length for the configured IP address. For a single IPv4 address, use 32.
    • ROUTE_TABLE_NAME: the name of your route table.
  8. Run the following command to flush the route cache. This might be necessary if you are using an existing VM with previously configured route tables.

    sudo ip route flush cache
    
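Collected into a single script with the example configuration values, the preceding steps look like the following sketch. The interface name ens5 is an assumption; use the nic1 name that ip link list reported on your VM. Also note that these ip commands don't persist across reboots; to make the configuration permanent, add the commands to your distribution's network configuration or to a startup script.

# Assumed values from the example configuration.
NIC=ens5              # nic1 interface name from `ip link list`; may differ
GATEWAY=10.10.2.1     # nic1 default gateway from the metadata server
IP_ADDRESS=10.10.2.3  # internal IPv4 address of nic1
TABLE=route-nic1      # name for the custom route table

# Register the custom route table.
echo "1 ${TABLE}" | sudo tee -a /etc/iproute2/rt_tables

# Default route for nic1, plus a route with a source hint for
# packets sent to the gateway.
sudo ip route add default via "${GATEWAY}" dev "${NIC}" table "${TABLE}"
sudo ip route add "${GATEWAY}" src "${IP_ADDRESS}" dev "${NIC}" table "${TABLE}"

# Use the custom table for traffic from or to the nic1 address.
sudo ip rule add from "${IP_ADDRESS}/32" table "${TABLE}"
sudo ip rule add to "${IP_ADDRESS}/32" table "${TABLE}"

# Flush the route cache so the new configuration takes effect immediately.
sudo ip route flush cache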

Retest connectivity to the multi-NIC VM

Now that you have configured policy routing, both pings succeed, as the following table shows. Repeat the steps in Ping the nic1 interface of your VM to confirm that you can now ping both IP addresses successfully.

From: VM (test-vm-1) in the same network as the nic0 interface of multi-nic-vm, but in a different subnet
To: Internal IP address (10.10.1.3) of the nic0 interface of multi-nic-vm
Ping successful: Yes

From: VM (test-vm-2) in the same network as the nic1 interface of multi-nic-vm, but in a different subnet
To: Internal IP address (10.10.2.3) of the nic1 interface of multi-nic-vm
Ping successful: Yes

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

  1. In the Google Cloud console, go to the Manage resources page.

    Go to Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.
  3. In the dialog, type the project ID, and then click Shut down to delete the project.

Delete individual resources

If you don't want to delete the entire project, delete the VPC networks and VM instances that you created for the tutorial.

Before you can delete a network, you must delete all resources in all of its subnets, and all resources that reference the network.

Delete instances

To delete instances:

Console

  1. In the Google Cloud console, go to the VM instances page.

    Go to VM instances

  2. Select the checkbox for each instance that you want to delete.

  3. Click the Delete button.

gcloud

Use the gcloud compute instances delete command. When you delete an instance in this way, the instance shuts down and is removed from the list of instances, and all resources attached to the instance are released, such as persistent disks and any static IP addresses.

To delete an instance, use the following command:

gcloud compute instances delete example-instance [example-instance-2 example-instance-3 ...]
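
For example, to delete the three VMs from this tutorial in one command, assuming they share the zone us-central1-a:

gcloud compute instances delete multi-nic-vm test-vm-1 test-vm-2 \
    --zone us-central1-a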

Delete VPC networks

To delete a VPC network:

Console

  1. In the Google Cloud console, go to the VPC networks page.

    Go to VPC networks

  2. Click the name of a VPC network to show its VPC network details page.

  3. Click Delete VPC network.

  4. In the message that appears, click Delete to confirm.

gcloud

Use the networks delete command.

gcloud compute networks delete NETWORK

Replace NETWORK with the name of the network to delete.
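
For the example configuration, a conservative cleanup order deletes the firewall rules and subnets before each network; the rule names and region match the assumptions used in the earlier sketches:

# Clean up network-1 and its dependent resources.
gcloud compute firewall-rules delete allow-ssh-rdp-icmp-1
gcloud compute networks subnets delete subnet-1 subnet-3 --region=us-central1
gcloud compute networks delete network-1

# Clean up network-2 and its dependent resources.
gcloud compute firewall-rules delete allow-ssh-rdp-icmp-2
gcloud compute networks subnets delete subnet-2 subnet-4 --region=us-central1
gcloud compute networks delete network-2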

What's next