AVD 3.8 has been released

Arista AVD 3.8 is out!  It looks like it's been out for a few days now, so let's dig in....
This blog post will focus on two new features introduced in this release, Schema Validation and Automatic Variable Conversion, as well as the long anticipated L2 Leaf/Spine fabric buildout capability.  These new features are documented in detail in the latest AVD documentation located here:  https://avd.sh/en/stable/

Upgrade

The upgrade to AVD 3.8 is simple.  Run:

  • "ansible-galaxy collection install -U arista.avd" 

on your AVD server or container.  This will pull the latest 3.8.0 AVD code and update your collections.  The upgrade process also installs any dependencies, such as updated Ansible or Python libraries.  I expect the avd-all-in-one container will be updated in a few days.
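
Alternatively, if you manage collections with a requirements file, you can pin the release there (the filename below is just the usual convention) and install with "ansible-galaxy collection install -r requirements.yml":

```yaml
# requirements.yml - pin the AVD collection to this release
collections:
  - name: arista.avd
    version: 3.8.0
```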

Schema Validation and Automatic Variable Conversion

So right out of the gate, it looks like the schema validation/conversion feature is included in this release.  This looks like it was a massive amount of work, but it is really cool.  This feature allows AVD to validate your input variables against a predefined list of acceptable inputs, which should cut down on mistakes and time spent debugging when developing your .yml files.  It also serves as a quick and dirty cheat sheet when you are working on your .yml files: download the schema and you'll see exactly what you can and cannot configure for a given .yml key.

The type conversion functionality related to the schema converts input values from, say, a string to an integer where required.  This means your .yml files need less strict formatting to ensure that correct input types end up in the structured config files.
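
As a contrived illustration (using keys from the topology file shown later in this post), here is a value entered as a string where the schema expects an integer:

```yaml
# Hypothetical input: "1" is quoted, so YAML parses it as a string,
# but the node id is expected to be an integer.  With type conversion
# enabled (the default), AVD converts the string to an integer for you
# instead of failing or producing a malformed structured config.
spine:
  node_groups:
    USPLZ1_SPINE00:
      nodes:
        USPLZ1-CSW1001:
          id: "1"
```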

Currently the validate and type conversion tasks are only enabled by default in the eos_cli_config_gen role; however, you can enable them in the eos_designs role by adding --tags validate to your playbook run.

With AVD 3.8 installed, when you run a build and/or provision, AVD now presents warnings for any schema validation issues it finds during the following task executions:

  • TASK [arista.avd.eos_cli_config_gen : Generate eos intended configuration]
  • TASK [arista.avd.eos_cli_config_gen : Generate device documentation] 

By default these warnings are informative in nature only, i.e. they will not stop the run of the playbook; however, this behavior can be changed by modifying:

  • avd_data_validation_mode < "disabled" | "error" | "warning" | "info" | "debug" | default -> "warning" >
  • disabled = validation is bypassed, error = error messages and fails the task, warning = warning messages (default), info = normal log messages, debug = additional debug messages visible via -v

As an example, I modified my topology.yml file to include a spanning-tree mode input of "test".  "test" is clearly an unsupported STP mode, and it therefore fails the input validation steps for the two switches configured via that section of the topology file.
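
For reference, the offending input looked something like this in topology.yml (only the relevant keys shown):

```yaml
spine:
  defaults:
    # "test" is not a valid spanning-tree mode, so schema validation
    # flags this key for both Spine switches under this node type
    spanning_tree_mode: test
```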

AVD input type conversion behavior can also be changed by modifying:

  • avd_data_conversion_mode < "disabled" | "error" | "warning" | "info" | "debug" | default -> "debug" >
  • disabled = type conversion is bypassed (don't do this), error = error messages and fails the task, warning = warning messages, info = normal log messages, debug = additional debug messages visible via -v (default)

It is suggested NOT to disable input type conversion, as the conversion process also updates and modifies .yml structure, which may evolve over time as AVD grows and matures.  By disabling the conversion process, you may be building structured config files that AVD cannot process properly, resulting in configuration issues or playbook failure.
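
If you do want to adjust either behavior, both knobs are plain group_vars; a snippet like the following, placed at the fabric level, would make schema violations fail the play (the values here are just an illustration):

```yaml
# Fail the playbook on schema violations instead of just warning
avd_data_validation_mode: error
# Leave type conversion at its recommended default
avd_data_conversion_mode: debug
```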

Layer2 Leaf/Spine support

Also in this release is the long awaited L2 Leaf/Spine fabric builder capability.  Up until now, AVD built 3-stage or 5-stage L3 Leaf/Spine fabrics.  With release 3.8, AVD can now generate L2 Leaf/Spine topologies, which are geared mostly towards campus style deployments.  That's not meant to imply that you SHOULD do L2 L/S at your campus, but you now have the option of using AVD to roll clean fabrics to your offices if you need to deploy an L2 topology.

Let's try to build a simple L2 topology using AVD.  We will build a four node network, with two nodes acting as L2 Spines and two as L2 Leafs.

  • L2 Spines will perform routing for the vlans configured in the environment
  • L2 Spines will be an MLAG pair
  • Leaf and Spine tiers will connect using MLAG port-channels for redundancy
  • The Spine tier will peer OSPF with a WAN router configured outside of AVD

Looking at the default node types supported in AVD, it will be necessary to create a custom node type, considering the default spine node type is not Layer 3 enabled, i.e. it will not install L3 related commands by default.

Start by deciding on your AVD node_group structure for your site.  Since I have the Spine tier as an MLAG pair, it makes sense to build a node_group for the two Spine switches.  The two Leaf switches will be standalone nodes in the fabric.  My fabric name will be USPLZ1_FABRIC.

inventory.yml

---
all:
  children:
    CVP_group:
      hosts:
        cvaas:
          ansible_host: www.arista.io
          ansible_user: cvaas
          ansible_password: "{{ lookup('file', '../../cvaas.token')}}"
          ansible_connection: httpapi
          ansible_network_os: eos
          ansible_become: true
          ansible_become_method: enable
          ansible_httpapi_port: 443
          ansible_httpapi_host: www.arista.io
          ansible_httpapi_use_ssl: true
          ansible_httpapi_validate_certs: false
    USPLZ1:
      children:
        USPLZ1_FABRIC:
          children:
            USPLZ1_SPINE00:
              vars:
                type: spine
              hosts:
                USPLZ1-CSW1001:
                  ansible_host: 10.7.207.195
                  is_deployed: false
                USPLZ1-CSW1002:
                  ansible_host: 10.7.207.196
                  is_deployed: false
            USPLZ1_L2_LEAFS:
              vars:
                type: leaf
              hosts:
                USPLZ1-ASW1001:
                  ansible_host: 10.7.207.197
                  is_deployed: false
                USPLZ1-ASW1002:
                  ansible_host: 10.7.207.198
                  is_deployed: false

You can see that my site prefix is USPLZ1, with a child object called USPLZ1_FABRIC.  Under that we have a node_group USPLZ1_SPINE00, which contains both Spine nodes.  Also contained in the USPLZ1_FABRIC group is a node_group for the USPLZ1_L2_LEAFS, which contains the two Leaf nodes.  The node types are "spine" and "leaf" respectively.

group_vars

The resulting folder structure in the inventory/group_vars folder contains a folder per inventory group, with names mirroring the groups defined in inventory.yml:

  • USPLZ1_FABRIC
  • USPLZ1_SPINE00
  • USPLZ1_L2_LEAFS

Just like other versions of AVD, settings applied in the USPLZ1_FABRIC folder apply fabric wide, settings in USPLZ1_L2_LEAFS apply to the Leafs, and USPLZ1_SPINE00 applies to the Spine tier.

This blog will focus on the three files required to build the topology:
  • nodes.yml, contained in the USPLZ1_SPINE00 folder
  • topology.yml, contained in USPLZ1_FABRIC
  • tenants.yml, contained in USPLZ1_FABRIC

nodes.yml

AVD 3.8 has default node_types defined for L2 fabric deployments.  

  • l3spine
  • spine
  • leaf

Looking at these node types, and given the requirements for our new network, we see that the "leaf" node type will work; however, neither the "l3spine" nor the "spine" node type checks all our boxes, considering l3spine has a p2p uplink model and spine is not L3 enabled.  In cases like this, we need to create a new node type or modify an existing one.  To do this, you modify the node_type_keys yml object.

node_type_keys:
  spine:
    type: spine
    connected_endpoints: true
    default_evpn_role: none
    mlag_support: true
    network_services:
      l2: true
      l3: true
    underlay_router: false
    uplink_type: port-channel
    vtep: false

I created a nodes.yml file in my USPLZ1_SPINE00 folder and added the above content to modify the "spine" node type to include "l3: true".  This unlocks L3 capabilities for our Spine tier.

topology.yml

This file contains the core fabric connectivity related information.  It also contains the design type element, which we will set to "l2ls" (L2 Leaf/Spine).  The file contains:
  • uplink/downlink info
  • mlag port and ip info
  • STP info
  • v-mac info
  • node_type definitions and associated node_groups and nodes
It is here where we lay the groundwork for how our fabric will be connected and configured for core services.

---
fabric_name: USPLZ1_FABRIC

design:
  type: "l2ls"

spine:
  defaults:
    platform: default
    mlag_interfaces: [ Ethernet55/1, Ethernet56/1 ]
    mlag_peer_ipv4_pool: 10.7.207.0/29
    virtual_router_mac_address: 00:1c:73:00:dc:42
    spanning_tree_mode: rapid-pvst
    spanning_tree_priority: 16384
  node_groups:
    USPLZ1_SPINE00:
      nodes:
        USPLZ1-CSW1001:
          id: 1
          mgmt_ip: 10.7.207.195/24
        USPLZ1-CSW1002:
          id: 2
          mgmt_ip: 10.7.207.196/24

leaf:
  defaults:
    platform: default
    uplink_interfaces: ['Ethernet1/1', 'Ethernet1/2']
    uplink_switches: ['USPLZ1-CSW1001', 'USPLZ1-CSW1002']
    spanning_tree_mode: rapid-pvst
    spanning_tree_priority: 16384
  nodes:
    USPLZ1-ASW1001:
      id: 3
      mgmt_ip: 10.7.207.197/24
      uplink_switch_interfaces: [Ethernet1, Ethernet1]
    USPLZ1-ASW1002:
      id: 4
      mgmt_ip: 10.7.207.198/24
      uplink_switch_interfaces: [Ethernet2, Ethernet2]

tenants.yml

tenants.yml, placed in the USPLZ1_FABRIC folder, is what defines your vrfs (if applicable) as well as your SVI/vlan info and routing config; in our instance, OSPF.

For our network, we define four vlans:
  • vlan 50 - transit vlan to an active/passive firewall/router
  • vlan 100 - servers
  • vlan 200 - management
  • vlan 300 - VOIP
Vlan 50 is to be OSPF enabled, and we redistribute connected routes into the OSPF 100 process.

tenants:
  USPLZ1:
    vrfs:
      default:
        svis:
          50:
            name: SPINE_WAN_TRANSIT
            enabled: true
            nodes:
              USPLZ1-CSW1001:
                ip_address: 10.7.192.245/28
              USPLZ1-CSW1002:
                ip_address: 10.7.192.246/28
            ospf:
              enabled: true
              point_to_point: false
              area: 0
              cost: 100
          100:
            name: SERVER
            enabled: true
            ip_address: 10.7.192.1/24
            ip_helpers:
              10.2.2.2:
          200:
            name: Management
            enabled: true
            ip_address: 10.7.197.1/24
            ip_helpers:
              10.2.2.2:  
          300:
            name: VOIP
            enabled: true
            ip_address: 10.7.200.1/24
            ip_helpers:
              10.2.2.2:
        ospf:
          enabled: true
          process_id: 100
          redistribute_bgp:
            enabled: false
        raw_eos_cli: |
          router ospf 100 vrf default
            redistribute connected

We define our vrf, in our case default, and from there build out the relevant config: the transit vlan SVI addressing and its OSPF settings, the SVIs that are to be installed on our L3 enabled Spine tier, and the related OSPF configuration, which enables connected route redistribution and disables the default behavior of BGP redistribution when OSPF is enabled.

Now let's run a build

We will execute the playbook to run the initial build process on the AVD server.  This will generate our switch configs and related documentation.  Let's see what happens...
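
The playbook itself isn't shown in this post, but a minimal AVD build playbook generally looks something like the following (the filename and play name are my own; the group name matches the inventory above):

```yaml
---
# build.yml: generate structured configs, EOS configs and documentation
- name: Build configs for USPLZ1_FABRIC
  hosts: USPLZ1_FABRIC
  connection: local
  gather_facts: false
  tasks:
    - name: Generate the structured configuration
      ansible.builtin.import_role:
        name: arista.avd.eos_designs
    - name: Generate the EOS CLI configuration and device documentation
      ansible.builtin.import_role:
        name: arista.avd.eos_cli_config_gen
```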

We have a successful build!  Let's see what the configs look like:

USPLZ1-CSW1001

!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname USPLZ1-CSW1001
!
spanning-tree mode rapid-pvst
no spanning-tree vlan-id 4094
spanning-tree vlan-id 1-4094 priority 16384
!
no enable password
no aaa root
!
clock timezone CST6CDT
!
vlan 50
   name SPINE_WAN_TRANSIT
!
vlan 100
   name SERVER
!
vlan 200
   name Management
!
vlan 300
   name VOIP
!
vlan 4094
   name MLAG_PEER
   trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
   description USPLZ1-ASW1001_Po11
   no shutdown
   switchport
   switchport trunk allowed vlan 50,100,200,300
   switchport mode trunk
   mlag 1
!
interface Port-Channel2
   description USPLZ1-ASW1002_Po11
   no shutdown
   switchport
   switchport trunk allowed vlan 50,100,200,300
   switchport mode trunk
   mlag 2
!
interface Port-Channel47
   description WAN_RTR_1_WAN_RTR
   no shutdown
   switchport
   switchport trunk allowed vlan 50,601
   switchport mode trunk
   port-channel lacp fallback timeout 90
   port-channel lacp fallback static
!
interface Port-Channel551
   description MLAG_PEER_USPLZ1-CSW1002_Po551
   no shutdown
   switchport
   switchport trunk allowed vlan 2-4094
   switchport mode trunk
   switchport trunk group MLAG
!
interface Ethernet1
   description USPLZ1-ASW1001_Ethernet1/1
   no shutdown
   channel-group 1 mode active
!
interface Ethernet2
   description USPLZ1-ASW1002_Ethernet1/1
   no shutdown
   channel-group 2 mode active
!
interface Ethernet47
   description WAN_RTR_1_Eth1.7
   no shutdown
   speed forced 10000full
   channel-group 47 mode active
   lacp port-priority 8192
!
interface Ethernet48
   description WAN_RTR_1_Eth1.8
   no shutdown
   speed forced 10000full
   channel-group 47 mode active
   lacp port-priority 32768
!
interface Ethernet55/1
   description MLAG_PEER_USPLZ1-CSW1002_Ethernet55/1
   no shutdown
   channel-group 551 mode active
!
interface Ethernet56/1
   description MLAG_PEER_USPLZ1-CSW1002_Ethernet56/1
   no shutdown
   channel-group 551 mode active
!
interface Management1
   description oob_management
   no shutdown
   vrf MGMT
   ip address 10.7.207.195/24
!
interface Vlan50
   description SPINE_WAN_TRANSIT
   no shutdown
   ip address 10.7.192.245/28
   ip ospf area 0
   ip ospf cost 100
!
interface Vlan100
   description SERVER
   no shutdown
   ip helper-address 10.2.2.2
   ip address virtual 10.7.192.1/24
!
interface Vlan200
   description Management
   no shutdown
   ip helper-address 10.2.2.2
   ip address virtual 10.7.197.1/24
!
interface Vlan300
   description VOIP
   no shutdown
   ip helper-address 10.2.2.2
   ip address virtual 10.7.200.1/24
!
interface Vlan4094
   description MLAG_PEER
   no shutdown
   mtu 9000
   no autostate
   ip address 10.7.207.0/31
!
ip virtual-router mac-address 00:1c:73:00:dc:42
!
ip routing
no ip routing vrf MGMT
!
mlag configuration
   domain-id USPLZ1_SPINE00
   local-interface Vlan4094
   peer-address 10.7.207.1
   peer-link Port-Channel551
   reload-delay mlag 300
   reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 10.7.193.1
!
router ospf 100 vrf default
   passive-interface default
   no passive-interface Vlan50
!
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
router ospf 100 vrf default
  redistribute connected

!
end


USPLZ1-CSW1002

!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname USPLZ1-CSW1002
!
spanning-tree mode rapid-pvst
no spanning-tree vlan-id 4094
spanning-tree vlan-id 1-4094 priority 16384
!
no enable password
no aaa root
!
clock timezone CST6CDT
!
vlan 50
   name SPINE_WAN_TRANSIT
!
vlan 100
   name SERVER
!
vlan 200
   name Management
!
vlan 300
   name VOIP
!
vlan 4094
   name MLAG_PEER
   trunk group MLAG
!
vrf instance MGMT
!
interface Port-Channel1
   description USPLZ1-ASW1001_Po11
   no shutdown
   switchport
   switchport trunk allowed vlan 50,100,200,300
   switchport mode trunk
   mlag 1
!
interface Port-Channel2
   description USPLZ1-ASW1002_Po11
   no shutdown
   switchport
   switchport trunk allowed vlan 50,100,200,300
   switchport mode trunk
   mlag 2
!
interface Port-Channel47
   description WAN_RTR_2_WAN_RTR
   no shutdown
   switchport
   switchport trunk allowed vlan 50,601
   switchport mode trunk
   port-channel lacp fallback timeout 90
   port-channel lacp fallback static
!
interface Port-Channel551
   description MLAG_PEER_USPLZ1-CSW1001_Po551
   no shutdown
   switchport
   switchport trunk allowed vlan 2-4094
   switchport mode trunk
   switchport trunk group MLAG
!
interface Ethernet1
   description USPLZ1-ASW1001_Ethernet1/2
   no shutdown
   channel-group 1 mode active
!
interface Ethernet2
   description USPLZ1-ASW1002_Ethernet1/2
   no shutdown
   channel-group 2 mode active
!
interface Ethernet47
   description WAN_RTR_2_Eth1.7
   no shutdown
   speed forced 10000full
   channel-group 47 mode active
   lacp port-priority 8192
!
interface Ethernet48
   description WAN_RTR_2_Eth1.8
   no shutdown
   speed forced 10000full
   channel-group 47 mode active
   lacp port-priority 32768
!
interface Ethernet55/1
   description MLAG_PEER_USPLZ1-CSW1001_Ethernet55/1
   no shutdown
   channel-group 551 mode active
!
interface Ethernet56/1
   description MLAG_PEER_USPLZ1-CSW1001_Ethernet56/1
   no shutdown
   channel-group 551 mode active
!
interface Management1
   description oob_management
   no shutdown
   vrf MGMT
   ip address 10.7.207.196/24
!
interface Vlan50
   description SPINE_WAN_TRANSIT
   no shutdown
   ip address 10.7.192.246/28
   ip ospf area 0
   ip ospf cost 100
!
interface Vlan100
   description SERVER
   no shutdown
   ip helper-address 10.2.2.2
   ip address virtual 10.7.192.1/24
!
interface Vlan200
   description Management
   no shutdown
   ip helper-address 10.2.2.2
   ip address virtual 10.7.197.1/24
!
interface Vlan300
   description VOIP
   no shutdown
   ip helper-address 10.2.2.2
   ip address virtual 10.7.200.1/24
!
interface Vlan4094
   description MLAG_PEER
   no shutdown
   mtu 9000
   no autostate
   ip address 10.7.207.1/31
!
ip virtual-router mac-address 00:1c:73:00:dc:42
!
ip routing
no ip routing vrf MGMT
!
mlag configuration
   domain-id USPLZ1_SPINE00
   local-interface Vlan4094
   peer-address 10.7.207.0
   peer-link Port-Channel551
   reload-delay mlag 300
   reload-delay non-mlag 330
!
ip route vrf MGMT 0.0.0.0/0 10.7.193.1
!
router ospf 100 vrf default
   passive-interface default
   no passive-interface Vlan50
!
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
router ospf 100 vrf default
  redistribute connected

!
end


USPLZ1-ASW1001

!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname USPLZ1-ASW1001
!
spanning-tree mode rapid-pvst
spanning-tree vlan-id 1-4094 priority 16384
!
no enable password
no aaa root
!
clock timezone CST6CDT
!
vlan 50
   name SPINE_WAN_TRANSIT
!
vlan 100
   name SERVER
!
vlan 200
   name Management
!
vlan 300
   name VOIP
!
vrf instance MGMT
!
interface Port-Channel11
   description USPLZ1_SPINE00_Po1
   no shutdown
   switchport
   switchport trunk allowed vlan 50,100,200,300
   switchport mode trunk
!
interface Ethernet1/1
   description USPLZ1-CSW1001_Ethernet1
   no shutdown
   channel-group 11 mode active
!
interface Ethernet1/2
   description USPLZ1-CSW1002_Ethernet1
   no shutdown
   channel-group 11 mode active
!
interface Management1/1
   description oob_management
   no shutdown
   vrf MGMT
   ip address 10.7.207.197/24
!
ip routing
no ip routing vrf MGMT
!
ip route vrf MGMT 0.0.0.0/0 10.7.193.1
!
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
end


USPLZ1-ASW1002

!RANCID-CONTENT-TYPE: arista
!
vlan internal order ascending range 1006 1199
!
transceiver qsfp default-mode 4x10G
!
service routing protocols model multi-agent
!
hostname USPLZ1-ASW1002
!
spanning-tree mode rapid-pvst
spanning-tree vlan-id 1-4094 priority 16384
!
no enable password
no aaa root
!
clock timezone CST6CDT
!
vlan 50
   name SPINE_WAN_TRANSIT
!
vlan 100
   name SERVER
!
vlan 200
   name Management
!
vlan 300
   name VOIP
!
vrf instance MGMT
!
interface Port-Channel11
   description USPLZ1_SPINE00_Po2
   no shutdown
   switchport
   switchport trunk allowed vlan 50,100,200,300
   switchport mode trunk
!
interface Ethernet1/1
   description USPLZ1-CSW1001_Ethernet2
   no shutdown
   channel-group 11 mode active
!
interface Ethernet1/2
   description USPLZ1-CSW1002_Ethernet2
   no shutdown
   channel-group 11 mode active
!
interface Management1/1
   description oob_management
   no shutdown
   vrf MGMT
   ip address 10.7.207.198/24
!
ip routing
no ip routing vrf MGMT
!
ip route vrf MGMT 0.0.0.0/0 10.7.193.1
!
management api http-commands
   protocol https
   no shutdown
   !
   vrf MGMT
      no shutdown
!
end


Looking at these configs, it's clear that AVD correctly built our network.  We see the correct interfaces used for uplinks, and that they are added to an MLAG enabled port-channel on our Spine tier.  We see that SVIs were properly built on the Spine tier and the commensurate L2 vlans were created on both Leaf and Spine tiers.  We see correct trunking config applied to the uplink port-channels on both tiers, and we see the appropriate transit SVI and OSPF configuration created on our Spine tier.  I'd say we've got the beginnings of a pretty good L2 Leaf/Spine network!  Obviously we would still need to add configuration like TACACS/logging/ntp/dns/etc; however, this build hopefully demonstrates the core functionality of how AVD 3.8 builds L2 Leaf/Spine topologies.

To wrap it up

If you've followed this post to here, you can see that AVD 3.8 includes some great new features that really simplify how we can deploy standardized, Arista validated design based Leaf/Spine architectures, both at L2 and L3.  I will blog again to get into some more details related to AVD and some of the cool stuff you can do with L3 enabled fabrics.  I hope you've found this blog helpful.  If so, give it a like and subscribe to be notified when more of my thoughts are spilt out onto a page.

Cheers and whatever you do, take care of your shoes...   
$Matthias$

