Oracle Cloud Infrastructure Documentation

Advanced Scenario: Transit Routing

This topic explains an advanced networking scenario called transit routing. This scenario enables communication between an on-premises network and multiple virtual cloud networks (VCNs) over a single Oracle Cloud Infrastructure FastConnect or IPSec VPN.

Warning

Avoid entering confidential information when assigning descriptions, tags, or friendly names to your cloud resources through the Oracle Cloud Infrastructure Console, API, or CLI.

Highlights

  • You can use a single FastConnect or IPSec VPN to connect your on-premises network with multiple VCNs in the same region, in a hub-and-spoke layout.
  • The VCNs must be in the same region but can be in different tenancies. For accurate routing, the CIDR blocks of the various subnets of interest in the on-premises network and VCNs must not overlap.
  • The hub VCN uses a dynamic routing gateway (DRG) to communicate with the on-premises network. The hub VCN peers with each spoke VCN. The hub and spoke VCNs use local peering gateways (LPGs) to communicate.
  • To enable the desired traffic from the on-premises network through the hub VCN to a peered spoke VCN, you implement route rules for the hub VCN's DRG attachment and LPG, and for the spoke VCN's subnets.
  • By configuring route tables that reside in the hub VCN, you can control whether a particular subnet in a peered spoke VCN is advertised to the on-premises network, and whether a particular subnet in the on-premises network is advertised to a peered spoke VCN.

Overview of Transit Routing

A basic networking scenario involves connecting your on-premises network to a VCN with either Oracle Cloud Infrastructure FastConnect or an IPSec VPN. These two basic scenarios illustrate that layout: Scenario B: Private Subnet with a VPN and Scenario C: Public and Private Subnets with a VPN.

There's an advanced networking scenario that lets you use your single FastConnect or IPSec VPN to communicate with multiple VCNs from your on-premises network. The VCNs must be in the same region but can be in different tenancies.

Here's a basic example of why you might use transit routing: you have a large organization with different departments, each with their own VCN. Your on-premises network needs access to the different VCNs, but you don't want the administration overhead of maintaining a secure connection from each VCN to the on-premises network. Instead you want to use a single FastConnect or IPSec VPN.

The scenario uses a "hub and spoke" layout, as illustrated in the following diagram.

This image shows the basic hub and spoke layout of VCNs connected to your on-premises network.

One of the VCNs acts as the hub (VCN-H) and connects to your on-premises network by way of FastConnect or an IPSec VPN. The other VCNs are locally peered with the hub VCN. The traffic between the on-premises network and the peered VCNs transits through the hub VCN. The VCNs must be in the same region but can be in different tenancies.

Gateways Involved in Transit Routing

The next diagram shows the gateways on the VCNs. The hub VCN has a dynamic routing gateway (DRG), which is the communication path with the on-premises network. For each locally peered spoke VCN, there's a pair of local peering gateways (LPGs) that anchor the peering connection. One LPG is on the hub VCN, and the other is on the spoke VCN.

This image shows the basic hub and spoke layout of VCNs along with the gateways required.

Tip

If you're already familiar with the Networking service and local VCN peering, these are the most important new concepts to understand:

  • For each spoke VCN subnet that needs to communicate with the on-premises network, you must update the subnet's route table with a rule that sets the target (the next hop) as the spoke VCN's LPG for all traffic destined for the on-premises network.
  • You must add a route table to the hub VCN, associate it with the DRG attachment, and add a route rule that sets the target (the next hop) as the hub VCN's LPG (for that spoke) for all traffic destined for that spoke VCN (or a specific subnet in that VCN).
  • You must add another route table to the hub VCN, associate it with the hub VCN's LPG (for that spoke), and add a route rule that sets the target (the next hop) as the DRG for all traffic destined for the on-premises network (or a specific subnet in that network).

See the instructions in Task 5: Add a route rule to the spoke VCN's subnet and Task 6: Set up ingress routing between the DRG and LPG on the hub VCN.

Example: Components and Routing for a Hub and Single Spoke

This example includes a hub VCN and only a single spoke VCN for simplicity. Each VCN also has a single subnet, again for simplicity. Notice that transit routing does not require a subnet in the hub VCN. However, you can create one if you like. That subnet can contain cloud resources that your on-premises network or the spoke VCN needs to use.

The following diagram shows the basic layout for this example.

Note

In a hub-and-spoke model, the hub VCN can have multiple spokes and therefore multiple LPGs (one per spoke). This topic uses the phrase the hub VCN's LPG, which could therefore be ambiguous. When the phrase is used here, it means the hub LPG for the particular spoke of interest. In the following diagram, it's LPG-H-1. Additional spokes would involve creation of an LPG-H-2, LPG-H-3, and so on.

This image shows the route tables and rules required when setting up the scenario.

The diagram also shows the required Networking service route tables and route rules for transit routing through the hub VCN. The diagram has four route tables, each associated with a different resource:

  • DRG attachment:

    • The route table belongs to the hub VCN and is associated with the DRG attachment. Why the attachment and not the DRG itself? Because the DRG is a standalone resource that you can attach to any VCN in the same region and tenancy as the DRG. The attachment itself identifies which VCN the DRG is attached to, and therefore which VCN the route table applies to.
    • The route table routes the inbound traffic that is from the on-premises network and destined for the spoke VCN (VCN-1). The rule sends that traffic to LPG-H-1.
  • LPG-H-1:

    • This route table belongs to the hub VCN and is associated with LPG-H-1.
    • The route table routes inbound traffic that is from VCN-1 and destined for the on-premises network. The rule sends that traffic to the DRG.
  • Subnet-H:

    • This route table belongs to the hub VCN and is associated with subnet-H.
    • It includes rules to enable traffic with the on-premises network and with subnet-1.
  • Subnet-1:

    • This route table belongs to the spoke VCN and is associated with subnet-1.
    • It includes rules to enable traffic with subnet-H and the on-premises network.
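To make the four route tables concrete, here's a minimal pure-Python sketch (this is a conceptual model, not the OCI API) that represents each route table as a map from destination CIDR to next-hop target and resolves a destination address with longest-prefix matching, the way the Networking service does. The VCN CIDRs come from the example diagram; the spoke-side LPG name LPG-1 is an assumption for illustration.

```python
import ipaddress

# CIDRs from the example diagram.
ON_PREM = "172.16.0.0/12"   # on-premises network
HUB     = "10.0.0.0/16"     # hub VCN (VCN-H)
SPOKE   = "192.168.0.0/16"  # spoke VCN (VCN-1)

# Each route table maps a destination CIDR to a next-hop target.
route_tables = {
    "drg_attachment": {SPOKE: "LPG-H-1"},                 # on-prem -> spoke
    "lpg_h_1":        {ON_PREM: "DRG"},                   # spoke -> on-prem
    "subnet_h":       {ON_PREM: "DRG", SPOKE: "LPG-H-1"}, # hub subnet's traffic
    "subnet_1":       {HUB: "LPG-1", ON_PREM: "LPG-1"},   # spoke subnet's traffic
}

def next_hop(table_name, dest_ip):
    """Return the next-hop target for dest_ip using longest prefix match."""
    ip = ipaddress.ip_address(dest_ip)
    matches = []
    for cidr, target in route_tables[table_name].items():
        net = ipaddress.ip_network(cidr)
        if ip in net:
            matches.append((net, target))
    if not matches:
        # No rule matched: traffic to the VCN's own CIDR is routed locally.
        return None
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# On-prem traffic arriving at the DRG attachment, destined for the spoke VCN:
print(next_hop("drg_attachment", "192.168.10.4"))  # LPG-H-1
# Spoke traffic arriving at LPG-H-1, destined for the on-premises network:
print(next_hop("lpg_h_1", "172.16.5.9"))           # DRG
```

Tracing a packet through two lookups (DRG attachment, then hub LPG, or the reverse) mirrors how traffic transits the hub VCN in this scenario.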

Here are some additional important details to note:

  • A route table that is associated with a DRG attachment can have only rules that target an LPG. Conversely, a route table that is associated with an LPG can have only rules that target a DRG. These rules route the traffic through the hub VCN and to the spoke VCN or on-premises network.
  • Even though the preceding statement is true, inbound traffic to subnets within the hub VCN is still allowed. You do not need to set up explicit rules for this inbound traffic in the DRG attachment's route table or hub LPG's route table. When this kind of inbound traffic reaches the DRG or the hub LPG, the traffic is automatically routed to its destination in the hub VCN. And in general: for any route table belonging to a given VCN, you can't create a rule that lists that VCN's CIDR (or a subset of it) as the rule's destination.
  • A DRG attachment can exist without a route table associated with it. However, after you associate a route table with a DRG attachment, there must always be a route table associated with it. But, you can associate a different route table. You can also edit the table's rules, or delete some or all of the rules.

About CIDR Overlap

In this example, the various networks do not have overlapping CIDR blocks (172.16.0.0/12 versus 10.0.0.0/16 versus 192.168.0.0/16). The Networking service does not allow local VCN peering between two VCNs with overlapping CIDRs. That means each spoke must not overlap with the hub.

However, the Networking service does not validate whether the spoke VCNs themselves overlap with each other, or if any of the VCNs overlap with the on-premises network. You must ensure that CIDRs for all the subnets that need to communicate with each other don't overlap. Otherwise, traffic may be dropped.
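Because the service validates only hub-to-spoke overlap, it can be worth checking the full set of CIDRs yourself before setting up peering. The following sketch uses Python's standard `ipaddress` module; the CIDRs for the hub, on-premises network, and first spoke come from the example, while the second spoke's CIDR is a deliberately overlapping hypothetical value.

```python
import ipaddress

# Hypothetical deployment: hub, on-premises network, and two spokes.
networks = {
    "on-prem": "172.16.0.0/12",
    "hub":     "10.0.0.0/16",
    "spoke-1": "192.168.0.0/16",
    "spoke-2": "192.168.0.0/24",  # overlaps spoke-1: traffic may be dropped
}

def find_overlaps(nets):
    """Return pairs of named networks whose CIDR blocks overlap."""
    parsed = {name: ipaddress.ip_network(cidr) for name, cidr in nets.items()}
    names = sorted(parsed)
    return [
        (a, b)
        for i, a in enumerate(names)
        for b in names[i + 1:]
        if parsed[a].overlaps(parsed[b])
    ]

print(find_overlaps(networks))  # [('spoke-1', 'spoke-2')]
```

Running a check like this across every subnet that needs to communicate catches the spoke-to-spoke and on-premises overlaps that the Networking service does not reject for you.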

A Networking service route table cannot contain two rules with the exact same destination CIDR. However, if two rules in the same route table have overlapping destination CIDRs, the most specific rule in the table is used to route the traffic (that is, the rule with the longest prefix match).
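The longest-prefix-match behavior can be illustrated with a short sketch (a conceptual model, not the OCI API); the two rules and the LPG target names are hypothetical.

```python
import ipaddress

# One route table with two rules whose destination CIDRs overlap.
rules = {"192.168.0.0/16": "LPG-H-1", "192.168.100.0/24": "LPG-H-2"}

def matching_target(dest_ip):
    """Pick the target of the most specific matching rule (longest prefix)."""
    ip = ipaddress.ip_address(dest_ip)
    candidates = [
        (ipaddress.ip_network(cidr), target)
        for cidr, target in rules.items()
        if ip in ipaddress.ip_network(cidr)
    ]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(matching_target("192.168.100.7"))  # LPG-H-2 (the /24 rule is more specific)
print(matching_target("192.168.50.7"))   # LPG-H-1 (only the /16 rule matches)
```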

Route Advertisement to the On-Premises Network and Spoke VCNs

From a security standpoint, you can control route advertisement so that only specific subnets in the on-premises network are advertised to the spoke VCNs. Similarly, you can control which subnets in the spoke VCNs are advertised to the on-premises network.

The routes advertised to the on-premises network consist of:

  • The rules listed in the route table associated with the DRG attachment (192.168.0.0/16 in the preceding diagram)
  • The individual subnets in the hub VCN

The routes advertised to the spoke VCN consist of:

  • The individual subnets in the hub VCN
  • The rules listed in the route table associated with the hub VCN's LPG for the spoke (172.16.0.0/12 in the preceding diagram)

Therefore, the administrator of the hub VCN alone can control which routes are advertised to the on-premises network and spoke VCNs.

In the preceding example, the relevant routes use the full CIDR block of the on-premises network and spoke VCN as the destination (172.16.0.0/12 and 192.168.0.0/16, respectively), but they could instead use a subnet of those networks to restrict routing to specific subnets.
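The two advertised route sets described above can be sketched as simple list unions (again a conceptual model, not the OCI API); the subnet-H CIDR 10.0.1.0/24 is a hypothetical value, while the other CIDRs come from the example.

```python
# Inputs the hub VCN administrator controls:
drg_attachment_rules = ["192.168.0.0/16"]  # rules in the DRG attachment's route table
hub_lpg_rules        = ["172.16.0.0/12"]   # rules in the hub LPG's route table
hub_subnets          = ["10.0.1.0/24"]     # subnet-H (hypothetical CIDR)

# Routes advertised to the on-premises network over FastConnect / IPSec VPN:
advertised_to_on_prem = drg_attachment_rules + hub_subnets

# Routes advertised to the spoke VCN across the peering:
advertised_to_spoke = hub_subnets + hub_lpg_rules

print(advertised_to_on_prem)  # ['192.168.0.0/16', '10.0.1.0/24']
print(advertised_to_spoke)    # ['10.0.1.0/24', '172.16.0.0/12']
```

Narrowing a rule's destination CIDR (for example, replacing 172.16.0.0/12 with a single on-premises subnet) shrinks the corresponding advertised set, which is how the hub administrator restricts what each side can reach.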

Details About Routing for Different Traffic Paths

To further illustrate how routing takes place in the preceding example, let's look more closely at different paths of traffic. Here's the same diagram again:

This image shows the route tables and rules required when setting up the scenario.

Traffic from the on-premises network to the spoke VCN
Traffic from the spoke VCN to the on-premises network
Traffic from the spoke VCN to a subnet in the hub VCN

Required IAM Policy

To use Oracle Cloud Infrastructure, you must be given the required type of access in a policy written by an administrator, whether you're using the Console or the REST API with an SDK, CLI, or other tool. If you try to perform an action and get a message that you don't have permission or are unauthorized, confirm with your administrator the type of access you've been granted and which compartment you should work in.

If you're a member of the Administrators group, you already have the required access to set up transit routing. Otherwise, you need access to the Networking service, and you need the ability to launch instances. See IAM Policies for Networking.

Setting Up VCN Transit Routing in the Console

Tip

You might already have many of the necessary Networking components and connections in this advanced scenario set up, so you might be able to skip some of the following tasks. If you already have a hub VCN connected to your on-premises network, and spoke VCNs locally peered with the hub VCN, then Task 5 and Task 6 are the most important. They enable traffic to be routed between your on-premises network and the spoke VCN.

Task 1: Set up the hub VCN
Task 2: Connect the hub VCN with your on-premises network
Task 3: Set up a spoke VCN with at least one subnet
Task 4: Set up a local peering between the hub VCN and the spoke VCN
Task 5: Add a route rule to the spoke VCN's subnet
Task 6: Set up ingress routing between the DRG and LPG on the hub VCN

If you need more spoke VCNs, here is the general process for each spoke VCN:

  1. Repeat Tasks 3-5 for the new spoke VCN.
  2. Repeat Task 6 with these changes:

    • For Step 1: Instead of creating a new route table for the DRG attachment, update the existing route table to include a new rule for the new spoke VCN. The destination CIDR is the spoke VCN's CIDR (or a subnet within). The target is the hub VCN's LPG for the new spoke.
    • For Step 2: Skip this step entirely because the DRG attachment is already associated with its route table.
    • For Step 3: Repeat as is. Name the new route table according to which spoke the route table is for (for example, Hub LPG-2 Route Table for the second spoke).
    • For Step 4: Repeat as is. Associate the new route table you created in Step 3 with the hub VCN's LPG for the new spoke.
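The Step 1 change for an additional spoke (adding a rule to the existing DRG-attachment route table rather than creating a new one) can be sketched as a data update (a conceptual model, not the OCI API); the second spoke's CIDR 10.1.0.0/16 is a hypothetical value, and the guard reflects the rule that a route table cannot contain two rules with the exact same destination CIDR.

```python
# Existing DRG-attachment route table from the single-spoke example.
drg_attachment_route_table = [
    {"destination": "192.168.0.0/16", "target": "LPG-H-1"},  # first spoke
]

def add_spoke_rule(route_table, spoke_cidr, hub_lpg_for_spoke):
    """Add a rule sending traffic for a new spoke to its hub LPG."""
    if any(rule["destination"] == spoke_cidr for rule in route_table):
        # A route table cannot have two rules with the same destination CIDR.
        raise ValueError("duplicate destination CIDR: " + spoke_cidr)
    route_table.append({"destination": spoke_cidr, "target": hub_lpg_for_spoke})

# Second spoke (hypothetical CIDR), routed to the hub's LPG-H-2:
add_spoke_rule(drg_attachment_route_table, "10.1.0.0/16", "LPG-H-2")
print(drg_attachment_route_table)
```

Steps 3 and 4 then add a separate route table for LPG-H-2 itself, targeting the DRG, just as in the single-spoke setup.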

Turning Off Transit Routing

To turn off transit routing, remove the rules from:

  • The route table associated with the DRG attachment.
  • The route table associated with each LPG on the hub VCN.

A route table can be associated with a resource but have no rules. Without at least one rule, a route table does nothing.

A DRG attachment or LPG can exist without a route table associated with it. However, after you associate a route table with a DRG attachment or LPG, there must always be a route table associated with it. But, you can associate a different route table. You can also edit the table's rules, or delete some or all of the rules.

Changes to the API

For information about changes to the Networking service API to support transit routing, see the transit routing release notes.