What Cisco Virtual Device Context (VDC) is, its features, and its caveats.

Concept

Virtual Device Context (VDC) is an NX-OS technology that allows you to virtually separate a physical chassis. It is pretty much Layer 1 virtualization, similar in concept to SDRs in IOS XR or contexts on the ASA. It allows you to partition a single physical device into multiple logical devices, enabling true isolation of the management, control, and data planes. This is especially handy for fault-domain isolation: if something is misconfigured on one VDC, it will not propagate to the other VDCs, because there is no intercommunication between them. The only way to have VDCs talk to each other is to run a physical cable between them. Fig.1 provides a visual representation of the VDC concept.

Fig.1 – VDC concept

Personally, I think it is one of the most powerful features you get from NX-OS, from both an architecture and a business (ROI) standpoint.

Architecture-wise, it allows you to establish multiple logical roles on a single chassis in order to meet your design requirements. Say you have purchased a single chassis as N+1/1+1 (Fig.2), but your requirement is to build a classical 3-tier design consisting of Core, Aggregation, and Access.

Fig.2 – Nexus 7000 chassis, i.e. a Nexus 7009

You can easily accomplish that with a single chassis (Fig.3). From a failover standpoint, you want redundancy at the supervisor level as well as on the line cards, to eliminate single points of failure. In a perfect world you would want dual chassis with dual supervisors and redundant line cards, but that is a different conversation ($).

Fig.3 – Single chassis, multiple VDCs
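
As a quick sketch, carving the chassis into those three roles is just three VDC definitions from the admin VDC (the AGG and ACCESS names here are illustrative; VDC creation is covered in more detail later in this post):

!assumption: AGG and ACCESS are hypothetical VDC names for the 3-tier roles
N7K1# conf t
N7K1(config)# vdc CORE
N7K1(config-vdc)# exit
N7K1(config)# vdc AGG
N7K1(config-vdc)# exit
N7K1(config)# vdc ACCESS
N7K1(config-vdc)# end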

From the business side, it enables you to buy less equipment and thus lowers your upfront expense. Additionally, with less equipment your MRC (monthly recurring cost) decreases, since you need less cooling, space, and power.

Since we are able to provide true separation, overlapping VLAN databases are no longer a problem, as shown in Fig.4.

Fig.4 – Overlapping VLAN IDs across VDCs
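
For instance, nothing stops you from defining the same VLAN ID independently in two VDCs (a minimal sketch; the VDC names match examples later in this post, the VLAN names are made up, and the prompts assume combined hostnames):

!assumption: VLAN 100 and its names are purely illustrative
N7K1# switchto vdc CORE
N7K1-CORE# conf t
N7K1-CORE(config)# vlan 100
N7K1-CORE(config-vlan)# name CORE-USERS
N7K1-CORE(config-vlan)# end
N7K1-CORE# switchback
N7K1# switchto vdc IAAS
N7K1-IAAS# conf t
N7K1-IAAS(config)# vlan 100
N7K1-IAAS(config-vlan)# name IAAS-TENANT-A
N7K1-IAAS(config-vlan)# end
N7K1-IAAS# switchback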

High Availability on VDC

Cisco NX-OS incorporates high-availability features such that failures should have minimal or no effect on the data plane. Three features that I personally like are as follows:

  1. In-Service Software Upgrade (ISSU) on Dual Supervisors – If you are lucky enough to afford dual supervisors in your chassis, ISSU will allow you to perform a firmware upgrade while forwarding traffic nondisruptively. It leverages the already existing Nonstop Forwarding (NSF) with Stateful Switchover (SSO) to perform the operation.
  2. Multichassis EtherChannel (MEC) – Assuming you have redundant line cards, it not only provides fault tolerance against single points of failure (i.e. a line card going bad) but also helps eliminate STP blocking by keeping redundant links forwarding, which provides more bandwidth.
  3. VDC Boot Priority – In case of a chassis failure, you can define which VDC's resources and forwarding will be brought up first. Simply configuring boot-order <number> under the particular VDC context assigns priorities within your shared architecture, as shown in the sketch after this list.
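
To make items 1 and 3 concrete, here is a minimal sketch (the image file names are placeholders for your target release, and the boot-order value is just an example; lower values are brought up first):

!assumption: kickstart/system image names are placeholders
N7K1# install all kickstart bootflash:n7000-s2-kickstart.bin system bootflash:n7000-s2-dk9.bin
!give the CORE VDC the highest bring-up priority
N7K1# conf t
N7K1(config)# vdc CORE
N7K1(config-vdc)# boot-order 1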

Layer 2 MAC Learning on VDC

MAC address learning and forwarding decisions are the responsibility of each line card. Once the ingress line card learns a MAC address, it forwards the update to all other line cards that are part of the same VDC. Cisco provides good flow logic for MAC address learning, represented in Fig.5. Once MAC address "A" is learned in the I/O Module 1 TCAM table (line card 1), it is forwarded to I/O Module 2, since they are both part of VDC10. Notice that the update is not extended to I/O Module 3, since that line card doesn't have any port-group associated with that VDC.

Fig.5

MAC Address Learning on VDC – Source: Cisco
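
A quick way to eyeball this behavior is to check the MAC table from within a given VDC (a minimal sketch; output omitted, and the prompt assumes combined hostnames):

!assumption: IAAS is the VDC from the resource example later in this post
N7K1# switchto vdc IAAS
N7K1-IAAS# show mac address-table dynamic
N7K1-IAAS# switchback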

Layer 3 Resources on VDC

As with Layer 2 MAC addresses, each line card controls the maximum number of entries it can hold. By default, if you are running a single VDC, all learned entries are copied among all I/O Modules. This enables each line card to have all the information stored locally for fast forwarding.

The topic becomes interesting as soon as you start splitting resources. When deploying multiple VDCs, you are able to allocate specific resources per VDC instance. This becomes extremely important once you get into traffic engineering and assigning a design purpose to each of your modules.

To check the resource allocation, you will need to log in to your admin VDC and verify the configuration of each VDC context. In the example below, taken from my N7K, you can see the difference in resource limits per VDC.

N7K1# sh run vdc

vdc CORE id 1
  cpu-share 5
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 96 maximum 96
  limit-resource u6route-mem minimum 24 maximum 24
  limit-resource m4route-mem minimum 58 maximum 58
  limit-resource m6route-mem minimum 8 maximum 8
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12

vdc IAAS id 2
  limit-resource module-type f2 f2e
  allow feature-set fabricpath
  cpu-share 5
  allocate interface Ethernet3/1-12
  allocate interface Ethernet6/1-12
  boot-order 1
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 8 maximum 8
  limit-resource u6route-mem minimum 4 maximum 4
  limit-resource m4route-mem minimum 8 maximum 8
  limit-resource m6route-mem minimum 5 maximum 5
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12

Bottom line: you can get creative when allocating resources to your line cards within VDCs.
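
For example, bumping the IPv4 unicast route memory for a nondefault VDC is a single limit-resource line under its context (a sketch; the value of 16 is illustrative, and route-memory changes may require a VDC restart to take effect):

!assumption: the value 16 is illustrative; check your platform's limits first
N7K1# conf t
N7K1(config)# vdc IAAS
N7K1(config-vdc)# limit-resource u4route-mem minimum 16 maximum 16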

Caveats with VDC

Now, let’s talk about some of the common caveats and “need to know” aspects of VDCs. These are especially crucial during the bill of materials (BOM) phase, to make sure you order the proper hardware/software in the proper quantities.

  1. VDC limit – Depending on the supervisor, there is a maximum number of VDCs that you can create on your chassis. Please note that without a license (LAN_ADVANCED_SERVICES_PKG) only the default VDC can exist.
    1. Supervisor 1 Module – Allows you to create three nondefault VDCs plus 1 default/admin VDC.
    2. Supervisor 2 Module – Allows you to create four nondefault VDCs plus 1 admin VDC. The admin VDC doesn’t count against the limit, so we have 4+1.
    3. Supervisor 2e Module – Allows you to create 8 nondefault VDCs plus 1 admin VDC. Again, as a rule, if the default VDC is used as a regular (non-admin) VDC, it subtracts one from the nondefault count. By default you get 4+1, and additional VDCs need to be purchased via license.
  2. Inter-switching between VDCs – There is no switching between VDCs via the backplane. If connectivity between two different VDCs is required, you will need physical cabling between them.
  3. Module Port-Groups – You cannot share an interface between different VDCs. This is not the same concept as subinterfaces. A physical interface is assigned to its appropriate VDC and can only be used by that instance. What is more interesting is the concept of port-grouping. When assigning an interface to a VDC, you need to be very careful about which Ethernet module you are using, because different line cards have different port-group assignments. For example, one of my line cards, the N7K-F248XP-25, has 12 different port-groups (4 ports per group; 4×12 = 48 ports), i.e. Port-Group 1 = Ports 1–4, Port-Group 2 = Ports 5–8, and so on.
    Port Groups for N7K-F248XP-25 – Source Cisco

    The reason I said to be careful: suppose you have Port-Group 1 assigned to VDC 1, but you want to assign Port 4 to VDC 2. If you proceed, VDC 2 takes over all of Port-Group 1’s ports (1–4), and any configuration on VDC 1 for these ports is erased. This is catastrophic, so make sure you know the port-groups of your line cards! See the allocation sketch after this list.

  4. Module Compatibility – You cannot mix and match arbitrary modules within your chassis, because they simply won’t work together or features will be degraded. This is something that needs to be considered during the design phase, prior to proceeding with the BOM. Please see the chart below for which modules play nicely with each other.


Table 1 VDC Module Type Compatibility for Release 8.0(1)
          F1     F2     M2XL   F2e(F2CR)  F3     M3
F1        True   False  True   False      False  False
F2        False  True   False  True       True   False
M2XL      True   False  True   True       True   True
F2e(F2CR) False  True   True   True       True   False
F3        False  True   True   True       True   True
M3        False  False  True   False      True   True
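
To make the port-group gotcha from item 3 concrete, here is a sketch of what that allocation looks like (VDC2 is a hypothetical VDC name; on the N7K-F248XP-25, Ethernet3/1-4 form one port-group). NX-OS will prompt you that the missing port-group members are included automatically and that their configuration in the source VDC gets removed:

!assumption: VDC2 is a hypothetical VDC name
N7K1# conf t
N7K1(config)# vdc VDC2
!allocating only port 3/4 still moves the entire port-group (3/1-4) out of its current VDC
N7K1(config-vdc)# allocate interface Ethernet3/4
N7K1(config-vdc)# end
!verify which VDC owns which interfaces afterwards
N7K1# show vdc membership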

Initial VDC Configuration

Creating a VDC within NX-OS is fairly straightforward. Under the admin VDC, in global config mode, define vdc <name> and start allocating resources (optional) and port-groups. You can also follow my post on allocating port-groups to find out more.

!create VDC
N7K1# conf t
N7K1(config)# vdc CORE
N7K1(config-vdc)# end
!verify VDCs
N7K1# show vdc detail

Switchwide mode is m1 f1 m1xl f2 m2xl f2e f3

vdc id: 1
vdc name: VDC 0
vdc state: active
vdc mac address: 84:78:ac:xx:xx:xx
vdc ha policy: RELOAD
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
CPU Share: 5
CPU Share Percentage: 25%
vdc create time: Sat Aug 20 21:34:57 2016
vdc reload count: 0
vdc uptime: 633 day(s), 13 hour(s), 57 minute(s), 6 second(s)
vdc restart count: 0
vdc type: Admin
vdc supported linecards: None

vdc id: 2
vdc name: VDC 1
vdc state: active
vdc mac address: 84:78:ac:xx:xx:xx
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
CPU Share: 5
CPU Share Percentage: 25%
vdc create time: Sat Aug 20 21:38:12 2016
vdc reload count: 0
vdc uptime: 633 day(s), 13 hour(s), 54 minute(s), 1 second(s)
vdc restart count: 0
vdc type: Ethernet
vdc supported linecards: f2 f2e

vdc id: 3
vdc name: VDC 2
vdc state: active
vdc mac address: 84:78:ac:xx:xx:xx
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
CPU Share: 5
CPU Share Percentage: 25%
vdc create time: Sat Aug 20 21:38:39 2016
vdc reload count: 0
vdc uptime: 633 day(s), 13 hour(s), 53 minute(s), 30 second(s)
vdc restart count: 0
vdc type: Ethernet
vdc supported linecards: f2 f2e

vdc id: 4
vdc name: CORE
vdc state: active
vdc mac address: 84:78:ac:xx:xx:xx
vdc ha policy: RESTART
vdc dual-sup ha policy: SWITCHOVER
vdc boot Order: 1
CPU Share: 5
CPU Share Percentage: 25%
vdc create time: Sat Aug 20 21:39:08 2016
vdc reload count: 0
vdc uptime: 633 day(s), 13 hour(s), 53 minute(s), 3 second(s)
vdc restart count: 0
vdc type: Ethernet
vdc supported linecards: f2 f2e

!verify VDCs resource allocation
N7K1# sh run vdc | b CORE
vdc CORE id 4
  limit-resource module-type f2 f2e
  allow feature-set fabricpath
  cpu-share 5
  allocate interface Ethernet3/21-28,Ethernet3/45-48
  allocate interface Ethernet5/45-48
  allocate interface Ethernet6/21-28
  boot-order 1
  limit-resource vlan minimum 16 maximum 4094
  limit-resource monitor-session minimum 0 maximum 2
  limit-resource monitor-session-erspan-dst minimum 0 maximum 23
  limit-resource vrf minimum 2 maximum 4096
  limit-resource port-channel minimum 0 maximum 768
  limit-resource u4route-mem minimum 8 maximum 8
  limit-resource u6route-mem minimum 4 maximum 4
  limit-resource m4route-mem minimum 8 maximum 8
  limit-resource m6route-mem minimum 5 maximum 5
  limit-resource monitor-session-inband-src minimum 0 maximum 1
  limit-resource anycast_bundleid minimum 0 maximum 16
  limit-resource monitor-session-mx-exception-src minimum 0 maximum 1
  limit-resource monitor-session-extended minimum 0 maximum 12
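
From here you would typically jump into the new VDC and bring it up like a fresh switch (a minimal sketch; the first login into a new VDC walks you through the initial setup dialog, and the prompt assumes combined hostnames):

!switch from the admin VDC into the newly created VDC
N7K1# switchto vdc CORE
!... complete the setup dialog, then configure it as a standalone device ...
N7K1-CORE# switchback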

I hope this has been informative. If you want to add anything, please drop a comment. Thanks, Bart.
