DefaultDict for the Network Engineer

There might come a time as a Network Engineer when you need to build a data structure to process your information. A common task I've run into is creating dictionaries.

my_little_dictionary = {my_key: my_value}

As your use cases grow in complexity you will need to dig deeper into the Python ecosystem. One such handy feature is defaultdict. It's extremely versatile; I'm only going to cover one use case.

So let's decide on what our key is going to be. I started with the hostname, and I have a list of the hosts:

from collections import defaultdict
my_list = ['Host1', 'Host2', 'Host3']
d = defaultdict(list)

I'll pause here to point out the data structure of your default dictionary.

defaultdict(list, {})

https://docs.python.org/3/library/collections.html#collections.defaultdict – "Using list as the default_factory, it is easy to group a sequence of key-value pairs into a dictionary of lists."

This simplifies generating a dictionary of lists, and the resulting code is easy to read and follow.
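Here's a minimal sketch of that grouping pattern from the docs; the hostnames and config lines are made up for illustration:

from collections import defaultdict

pairs = [('Host1', 'int gi1/0/1'), ('Host1', ' no shut'), ('Host2', 'int gi1/0/2')]
grouped = defaultdict(list)
for host, line in pairs:
    grouped[host].append(line)   # a missing key silently starts life as an empty list

# grouped -> {'Host1': ['int gi1/0/1', ' no shut'], 'Host2': ['int gi1/0/2']}

Now back to our hostname example.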

for item in my_list:
    my_config = [f'hostname {item}', 'int gi1/0/1', ' no shut']
    d[item] = my_config

What I'm doing here is looping through my list item by item, Host1-3. For each host I generate a list of config lines, so the first one looks like ['hostname Host1', 'int gi1/0/1', ' no shut']. That list is then assigned to the defaultdict 'd' under the hostname key. Worth noting: a plain assignment like this would work in a regular dict too; the default factory earns its keep when you access a key that doesn't exist yet, because the missing key is created with an empty list you can append to immediately. The loop repeats until all the hosts are filled in. The results:

defaultdict(list,
{'Host1': ['hostname Host1', 'int gi1/0/1', ' no shut'],
'Host2': ['hostname Host2', 'int gi1/0/1', ' no shut'],
'Host3': ['hostname Host3', 'int gi1/0/1', ' no shut']})

It's a dictionary of lists: the key is the hostname and its value is the list. Very handy when you need something a little more complex, and scarily simple to implement. You can even pass this to pandas and have it work some magic. Running the following code generates a table like the one below.

import pandas as pd

df = pd.DataFrame.from_dict(d)
df2 = df.transpose()
df2.columns = ['Hostname', 'Interface', 'State']

        Hostname        Interface    State
Host1   hostname Host1  int gi1/0/1  no shut
Host2   hostname Host2  int gi1/0/1  no shut
Host3   hostname Host3  int gi1/0/1  no shut

Hopefully this helps. If you can think of anything I should include, or something I'm wrong about, drop me a line. I'm happy to learn.

Ansible – Infrastructure as Code Part 3 (Let’s do something interesting!)

We’ve collected our state data in Part 2.

Now let's do something interesting with that data. I've switched out the role I'll be using, so this will be a long blog post on how to set everything up and finally make something pretty with your data!

The Setup

What you'll need –

  • Python
  • Ansible
  • The ansible-pyats role (from GitHub)
  • Jupyter, pandas, and plotly for the analysis at the end

The reason I'm using a different role for this is that it will automatically grab a snapshot and format it into JSON. It has other features, like compare, that I won't get into. But for now, this will be good enough to start manipulating data.

The Playbook

---
- hosts: localhost
  connection: local
  gather_facts: no

- name: Interface Snapshot
  hosts: switches
  gather_facts: no
  connection: network_cli
  roles:
    - ansible-pyats

  tasks:
    - name: Gather Snapshot
      include_role:
        name: ansible-pyats
        tasks_from: snapshot_command
      vars:
        command: show interfaces
        file: "snapshots/{{ inventory_hostname }}_interface_snapshot.json"

This playbook is assigned to the switches in the inventory. It calls the role we downloaded from GitHub with the snapshot_command task, and I'm telling it to snapshot the show interfaces command.

Step 1 – Grab the state of the interfaces using your playbook and output to JSON.

Step 2 – Import the data into Jupyter Notebook and convert it into useable information using Pandas.

Now you may be asking: why pandas and why Jupyter? I have a blog post here on why Jupyter is easier to work with. As for pandas, it's a great tool for quickly putting information into a structure that's easier to analyze. Need to do math based on datetimes? Need to do some quick counting on cell values? Or maybe convert row data to columns? All very quick and easy in pandas. So let's get started.

Now this may seem a little denser than normal, but it's a direct export from Jupyter. It's using Python and the two modules below to make the magic happen. If you want to try it out, you can copy the code into Jupyter and use this JSON document.

import pandas as pd #import pandas for data manipulation
import plotly.express as px #For a quick pretty graph at the end

df = pd.read_json('/Users/**/Automation/ansible/snapshots/SW1_interface_snapshot.json') #import the JSON document
df.loc['arp_timeout':'bandwidth', 'FastEthernet0/1':'FastEthernet0/13'] #grab the first 5 columns and top 3 rows
  FastEthernet0/1 FastEthernet0/10 FastEthernet0/11 FastEthernet0/12 FastEthernet0/13
arp_timeout 04:00:00 04:00:00 04:00:00 04:00:00 04:00:00
arp_type arpa arpa arpa arpa arpa
bandwidth 100000 10000 10000 10000 10000
df2 = df.loc[['line_protocol', 'last_input'], :] #grab the desired rows
df2.loc['line_protocol':'last_input', 'FastEthernet0/1':'FastEthernet0/13']
  FastEthernet0/1 FastEthernet0/10 FastEthernet0/11 FastEthernet0/12 FastEthernet0/13
line_protocol up down down down down
last_input 00:00:01 never never never never
df2 = df2.transpose() #flip the columns and rows; by default the interfaces are the columns
df2.head(3)
  line_protocol last_input
FastEthernet0/1 up 00:00:01
FastEthernet0/10 down never
FastEthernet0/11 down never
df2.loc[(df2.last_input == "never"), 'last_input']='23:59:59' #convert never to time value
df2.head(3)
  line_protocol last_input
FastEthernet0/1 up 00:00:01
FastEthernet0/10 down 23:59:59
FastEthernet0/11 down 23:59:59
df2["last_input"] = pd.to_datetime(df2["last_input"]) #convert time values to datetime; pandas adds today's date
df2.head(3)
  line_protocol last_input
FastEthernet0/1 up 2020-06-18 00:00:01
FastEthernet0/10 down 2020-06-18 23:59:59
FastEthernet0/11 down 2020-06-18 23:59:59
df_value_counts = df2['line_protocol'].value_counts() #grab value counts and put into new dataframe
df_value_counts = df_value_counts.reset_index() #reset the index
df_value_counts.columns = ['State', 'Count'] #set column values
df_value_counts #display values
  State Count
0 down 26
1 up 6
fig = px.bar(df_value_counts,              # dataframe
       x="State",         # x will be the 'State' column of the dataframe
       y="Count",   # y will be the 'Count' column of the dataframe
       color="State", # color the bars by the State value
       title="Interface State",
       labels={"State": "Up/Down", "Count": "Count"}, # the axis names
       color_discrete_sequence=["red", "green"], # the colors used
       height=500,
       width=800)
fig.show()
Interface State Graph

Ansible – Infrastructure as Code Part 2 (State Data)

Now that you have backups running successfully, let's talk about abstracting state data!

The Why?

It's pretty important to differentiate between state and config data. Configuration only gets you half the picture, and in many cases less than half. How are relationships built on your network? Do you know who the primary neighbors of your devices are? That's state information, and many people keep it in their Visio diagrams. The critical piece here is that you need this information to build programmability. That's where state abstraction comes into play.

We will be using Ansible to gather this information and, with the help of the pyATS library, do some quick state abstraction to turn it into something useful.

State Abstraction –

What you’ll need –

  • Python
  • Ansible
  • PyATS – Ansible Galaxy Module

The Play –

---
- name: pyATS testing
  hosts: cisco
  gather_facts: no
  connection: network_cli
  roles:
    - parse_genie

  tasks:
  - name: Run the show ip interface command
    ios_command:
      commands:
        - show ip interface brief
    register: interface_output

  - name: Set fact with Genie filter plugin
    set_fact:
      pyats_showipinterface: "{{ interface_output['stdout'][0] | parse_genie(command='show ip interface brief', os='ios') }}"

  - name: Debug interface parse
    debug:
      var: pyats_showipinterface

  - name: Convert parse data to YAML
    template:
      src: templates/show_ip_interface_brief.j2
      dest: "reports/{{ inventory_hostname }}"

This playbook logs into each device and runs the "show ip interface brief" command. It then parses the output into a JSON structure using pyATS Genie. In the final step it calls a Jinja template and converts the data to a structured YAML format.

You'll need to install parse_genie. The directions are available on Ansible Galaxy. I did rename my role folder for easier reference; normally the role is named "(user).(module)".

Here’s the Jinja template –

---
interfaces:
{% for key1 in pyats_showipinterface %}
{% for nested_key in pyats_showipinterface[key1] %}
- name: {{ nested_key }}
  - IP address: {{ pyats_showipinterface[key1][nested_key]["ip_address"] }}
  - Status: {{ pyats_showipinterface[key1][nested_key]["status"] }}
{% endfor %}
{% endfor %}

Jinja is a Python templating engine; when you need a lot of data rendered quickly, hand it to Jinja. Could I simplify this and just loop over the one nested key? Yes, but this also shows you how to walk through multiple levels of nested keys. Very handy if, say, you have a route table -> VRFs -> routes and need to parse several layers of key-value structure.
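If you'd rather do that same walk in plain Python, it's just two nested loops over the parsed dictionary. A rough sketch, assuming the same structure the template consumes (the sample data here is hypothetical):

# hypothetical shape of the parse_genie output for 'show ip interface brief'
pyats_showipinterface = {
    'interface': {
        'GigabitEthernet1': {'ip_address': 'unassigned', 'status': 'up'},
        'Loopback0': {'ip_address': '150.1.1.1', 'status': 'up'},
    }
}

for key1 in pyats_showipinterface:                  # outer key, e.g. 'interface'
    for nested_key in pyats_showipinterface[key1]:  # each interface name
        attrs = pyats_showipinterface[key1][nested_key]
        print(nested_key, attrs['ip_address'], attrs['status'])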

You can take data that’s human readable-

R1#sh ip int bri
Interface              IP-Address      OK? Method Status                Protocol
GigabitEthernet1       unassigned      YES NVRAM  up                    up      
GigabitEthernet1.13    155.1.13.1      YES NVRAM  up                    up      
GigabitEthernet1.100   169.254.100.1   YES NVRAM  up                    up      
GigabitEthernet1.146   155.1.146.1     YES NVRAM  up                    up      
GigabitEthernet2       unassigned      YES NVRAM  administratively down down    
GigabitEthernet3       192.168.3.61    YES NVRAM  up                    up      
Loopback0              150.1.1.1       YES NVRAM  up                    up      
Tunnel0                155.1.0.1       YES NVRAM  up                    up

And present it in a format that a machine can consume and is human readable –

---
interfaces:
- name: GigabitEthernet1
  - IP address: unassigned
  - Status: up
- name: GigabitEthernet1.13
  - IP address: 155.1.13.1
  - Status: up
- name: GigabitEthernet1.100
  - IP address: 169.254.100.1
  - Status: up
- name: GigabitEthernet1.146
  - IP address: 155.1.146.1
  - Status: up
- name: GigabitEthernet2
  - IP address: unassigned
  - Status: administratively down
- name: GigabitEthernet3
  - IP address: 192.168.3.61
  - Status: up
- name: Loopback0
  - IP address: 150.1.1.1
  - Status: up
- name: Tunnel0
  - IP address: 155.1.0.1
  - Status: up

Conclusion –

Ansible, with the power of modules and Jinja, allows you to take your current deployment and put it into a consumable format. This is just a fraction of what's possible. I only picked interfaces since they're a common and easy item to pull state info from, but the possibilities are endless. Take a look at the Genie parsers for yourself and consider what you can accomplish!

https://pubhub.devnetcloud.com/media/genie-feature-browser/docs/#/parsers
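And if you want to try a parser outside of Ansible entirely, Genie can be driven straight from Python. A rough sketch, assuming you already have a pyATS testbed file and a device named R1 in it:

# assumes a pyATS testbed.yml describing your devices; 'R1' is a placeholder name
from genie.testbed import load

testbed = load('testbed.yml')
device = testbed.devices['R1']
device.connect()

parsed = device.parse('show ip interface brief')  # returns a nested dict
for name, attrs in parsed.get('interface', {}).items():
    print(name, attrs.get('ip_address'), attrs.get('status'))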

Ansible – Infrastructure as Code Part 1 (Backups)

In this series I plan on taking a look at how we can begin improving the reliability and reusability of infrastructure as code.

Backups – 

The old way of doing backups is pretty useless: a flat text file in a flat directory, with little version control other than timestamps. And we often back up a config whether it's changed or not. This can cause a bit of a headache when trying to track down a specific change, or even just auditing over time.

Backups 2.0 – 

What you’ll need – 

  • Python Virtual Environment
  • Ansible(Tower is optional)
  • Git
  • Your favorite text editor(I use Sublime)
  • Git Repository(I use Gogs)
  • Test Environment(I have ESXi with CSR1000v’s Running)

I'll show how to perform backups without Tower. With and without Tower require two different approaches, since Tower works in a sandbox and the working directory is deleted after a playbook run.

Local Backup – 

Step 1 – Clone Backups Repository

We will start off by cloning an empty repository. This will show how an initial backup commit and diff work in Git.

Before we run the playbook, we check for the backups directory; there is none yet. This playbook connects to the Git repository and clones its contents.

---
#This module runs to build git repository locally
- name: Clone Git Backups Repository
  hosts: 127.0.0.1
  connection: local
  gather_facts: no
  tasks:
    - name: "Clone Git Backups"
      git:
        repo: git@[your git repository here].git
        dest: ./backups
        update: yes
        version: master
      register: git
      ignore_errors: True
    - debug:
        var: git

Step 2 – Perform Device Backup

The next playbook to run is the device configuration backup. The repository is initially blank, so every device comes back in the "changed" state, since the playbook hasn't been run before. The playbook takes the backup, names the file after the device, and drops it in the backups folder. Navigating to backups, you can see the config files for the 10 routers, and that the directory is a Git repository.

---
#Device Backup
- name: Backup
  hosts: cisco
  connection: network_cli
  gather_facts: no
  tasks:
    - name: Backup
      ios_config:
        backup: yes
        backup_options:
          dir_path: "./backups"
          filename: "{{inventory_hostname}}.config"
      register: config_output
    - debug:
        var: config_output

Step 3 – Git Add Commit Push

This playbook is more complicated, as it steps through all the Git procedures to push to a repository. It adds all existing files in the directory, then commits with the Ansible date-time as the comment. Keep in mind that to use the date-time you will need gather_facts enabled for the localhost.

---
#Push the config to the git add and push to repository
- name: Add devices to Git and Push
  hosts: 127.0.0.1
  connection: local
  gather_facts: yes
  tasks:
    - name: Add to Git
      shell: "git add ."
      args:
        chdir: ./backups
      register: gitadd
    - debug:
        var: gitadd
    - name: Commit
      shell: git commit -m "{{ ansible_date_time.iso8601 }}"
      args:
        chdir: ./backups
      register: gitcommit
    - debug:
        var: gitcommit.stdout
    - name: Push
      shell: git push
      args:
        chdir: ./backups
      register: gitpush
    - debug:
        var: gitpush

Step 4 – Ansible Idempotency at Play

At the start of the play I push an interface change to Router 1. I then run the config backup and Git steps together. As the playbook runs through, you will notice that only R1 has changed. Buried within the wall of text there is also a summary of the number of lines changed, which can be quite handy for report generation.

Looking at the Git repository, we can see that only R1 has been updated. Browsing R1's history in Git shows the changes that have taken place.

As a final note, I used local SSH keys to push to Git. If you plan on using Tower, it's going to be a different process that I'll detail in a future post; it requires special credentials and roles to successfully run this playbook.

Jupyter Notebook

Make sure to install Jupyter using your favorite method. I installed it in my venv and run it from there.

Let's start off by launching Jupyter!

It’s as easy as that to get started.


Here's the value that Jupyter Notebook can add to your workflow.

Retains State –


Each cell shown is evaluated independently, so making an error and re-running that cell doesn't require re-running the whole script. This is great when you have to evaluate many config files and make a mistake: there's no reason to re-run everything.

You can even run the connect portion of your script and work on other cells while you wait for the pull to finish.

Export and Share – 

You can choose from many formats, from plain Python to another Jupyter notebook. Share your work easily with others.

It'll even retain variables, such as a show run you pulled with netmiko.
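As a rough illustration (the device details below are placeholders): run the slow connection cell once, then keep reworking the output in later cells without reconnecting.

# Cell 1 - connect once and pull the config (the slow part)
from netmiko import ConnectHandler

device = {
    'device_type': 'cisco_ios',
    'host': '198.18.0.1',   # placeholder
    'username': 'cisco',    # placeholder
    'password': 'cisco',    # placeholder
}
conn = ConnectHandler(**device)
running_config = conn.send_command('show running-config')

# Cell 2 - rerun this as often as you like; running_config is still in memory
print([l for l in running_config.splitlines() if l.startswith('interface')])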

It Uses iPython – 

That's cool, I guess. I'm not 100% sure of the benefits of IPython, but you do get some magic commands that don't come with regular Python – https://ipython.readthedocs.io/en/stable/interactive/magics.html.

It Easily Works – 

I haven’t run into a situation yet where calling a module hasn’t worked. Even something like getpass.getpass() works.

Debugging – 

No need to print in the middle of the script. Want to see what a variable contains? Just call it. I have found that printing inside a loop doesn't work for me. I may be doing something wrong, since it works fine in regular Python.

Undo – 

When hitting ctrl-z, it will only undo what you did in that cell!!!

Did I miss anything? Let me know! I probably haven't explored Jupyter's full potential yet!

Why Understanding Classful Routing is Still Important

What happens when you have classful routing enabled and are only advertising a default route downstream as a summary?


We can get some odd behavior with ip classless disabled. Everything may appear fine at first glance but remain in a broken state.

Let’s set up the test. We can start by disabling classless routing on R02

R02(config)# no ip classless

There's a static default route on R01 pointing to R02, and one on R02 pointing to R01.

ip route 0.0.0.0 0.0.0.0 10.1.12.X

R02 and R03 are allowed to dynamically exchange routes.

Troubleshooting – 

Let’s test connectivity to the loopback on R02.

R01#ping 10.2.2.2
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.2.2.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
R01#ping 10.2.2.2 source loop0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 10.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms

Now to test connectivity to R03.

R01#ping 10.3.3.3
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.3.3.3, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/2 ms
R01#ping 10.3.3.3 source loop0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.3.3.3, timeout is 2 seconds:
Packet sent with a source address of 10.1.1.1
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

Great, we now have full reachability! Everyone can go home. Mission accomplished!

But the next day you come into work and you can no longer reach your router! Oh no, it's a bug, it's a dead router, it's a TAC call!

You can successfully telnet into the router behind R02.

R01#telnet 10.3.3.3 /source-interface loop0
Trying 10.3.3.3 ... Open

User Access Verification

Username: cisco
Password:
R03#exit

But try as you might you can’t get into R02.

R01#telnet 10.2.2.2 /source-interface loop0
Trying 10.2.2.2 ... % Connection timed out; remote host not responding

You even go as far as to create an access-group on the inbound interface. 

R02(config)#do sh access-list
Extended IP access list TELNET_TEST
10 permit tcp any host 10.2.2.2 eq telnet (4 matches)
20 permit tcp any host 10.3.3.3 eq telnet (42 matches)
30 permit ip any any (590 matches)

The Why – 

But of course we already know the answer: it's classful routing. With ip classless disabled, a router that knows any subnets of a classful network will not fall back to the default route for an unknown subnet of that network. R02 knows subnets of 10.0.0.0/8 but has no route to 10.1.1.0/24, so its replies to the loopback-sourced telnet are dropped instead of following the default route to R01.

The router will accept the telnet packets and punt them to the CPU. You can see this using packet-trace debugging on newer IOS XE.

R02#show platform packet-trace packet 0
Packet: 0 CBUG ID: 49
Summary
Input : GigabitEthernet1.12
Output : internal0/0/rp:0
State : PUNT 11 (For-us data)
Timestamp
Start : 41385897469217 ns (08/12/2019 02:31:45.258831 UTC)
Stop : 41385897506086 ns (08/12/2019 02:31:45.258868 UTC)
Path Trace
Feature: IPV4(Input)
Input : GigabitEthernet1.12
Output :
Source : 10.1.1.1
Destination : 10.2.2.2
Protocol : 6 (TCP)
SrcPort : 44461
DstPort : 23

But since you can't disable CEF on the newer platforms, you can't see the full logic. Also keep in mind that ping still works, and so does transit traffic; I believe that's because transit traffic stays in CEF and never has to hit the general CPU. I'm not 100% sure yet why ping seems to work fine.

And with a full ip packet debug running –

FIBipv4-packet-proc: packet routing failed

https://www.cisco.com/c/en/us/support/docs/ip/enhanced-interior-gateway-routing-protocol-eigrp/8651-21.html

I highly recommend a read through.

Now you may be asking yourself: why do I need to know something like this? I'll just never enable it, problem solved. But there's a small chance that someone might, and recognizing the behavior will go a long way toward less hair pulling.

Wi-Fi – Circles Are Bad

Let’s start with the classic design guideline, the trope that you need overlapping circles. “Make sure they overlap 20%, that’s a proper design.” At least that’s what they say.

The question is, how do you achieve that? How do you know you have the percentage overlap that you need? OK, maybe we account for dB loss? Well then, how do you properly calculate that loss? That's easy: find a document that gives you the loss over the distance you need, correlate the two, call it a day! Well…
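For what it's worth, the textbook starting point is the free-space path loss formula. A quick sketch, with the usual caveat: it assumes free space, which is exactly the assumption an office hallway breaks.

import math

def fspl_db(distance_m, freq_mhz):
    """Free-space path loss in dB (distance in metres, frequency in MHz)."""
    return 20 * math.log10(distance_m / 1000) + 20 * math.log10(freq_mhz) + 32.44

# 2.4 GHz: doubling the distance costs about 6 dB in free space
print(round(fspl_db(10, 2400), 1))   # ~60.0 dB at 10 m
print(round(fspl_db(20, 2400), 1))   # ~66.1 dB at 20 m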

Let’s take a look at the design guide by Cisco.

Source : Mobility Design Guide 8.1

Great! So I've got my answer! Overlapping cells, with a 19 dB separation. That shouldn't be too hard. Right? Right?!?

Well let’s see here, let me put this AP down right here in the hallway.

That’s odd. It’s not a circle. It’s like the hallway reflected the signal down the hall instead of letting it spread evenly.

This is why your Wi-Fi isn't performing as expected. A long enough hallway will require you to place multiple APs in it to get the coverage you need, and the instant you have co-channel interference you've effectively reduced your AP count from 2 to 1.

How do we solve this, so the circles actually behave more like circles? You place the APs in rooms.

This reduces the chance of co-channel interference and lets you account for the amount of overlap you need. But look carefully: we still don't have circles. It's critical to find out the dB loss your walls create; this circle won't look the same in a different office.

Using multiple addresses from the same subnet

One of the first things anyone pursuing their CCNA learns is that you can’t configure multiple IP addresses in the same subnet on the same router.

The Problem 
I attempt to configure two addresses from the 198.18.0.0/24 subnet on two different interfaces.


R01(config)#int gi2 
R01(config-if)#ip add 198.18.0.2 255.255.255.0
!
R01(config)#int gi1
R01(config-if)#ip add 198.18.0.1 255.255.255.0
% 198.18.0.0 overlaps with GigabitEthernet2

Alternate Solutions
These solutions won't be covered in detail in this blog post. Each solves the problem and comes with its own unique trade-offs.

  1. HSRP
  2. IP Unnumbered
  3. Secondary IP Address

Alternate Vendor
Juniper allows you to configure multiple IP addresses in the same subnet.


    ge-0/0/1 {
        vlan-tagging;
        unit 0 {
            vlan-id 0;
        }
        unit 1 {
            vlan-id 1;
            family inet {
                address 198.18.0.65/24;
                address 198.18.0.101/24;
            }
        }
    }
    ge-0/0/2 {
        vlan-tagging;
        unit 0 {
            vlan-id 0;
        }
        unit 1 {
            vlan-id 1;
            family inet {
                address 198.18.0.90/24;
            }
        }
    }

After some quick testing, it appears that Juniper originates traffic from the lowest-numbered interface, and then the lowest IP address.

The Rub
Why is it that Juniper allows multiple addresses in the same subnet, while Cisco allows it only in specific use cases? CEF allows for multiple destinations, even unequal-cost load balancing.

Possibilities

  1. Historical
  2. RFC
  3. Other?

Historical
Looking through the mists of time I found this book – “Inside Cisco IOS Software Architecture.”

Unfortunately I don't have an AGS+ and can only infer its possible functions from the text. The Cisco AGS+ used autonomous switching for the line cards; it was very costly in bandwidth and CPU to send a packet to the route processor. From what I can tell, the individual line cards didn't retain a full copy of the routes. Any packet arriving with a destination unknown to the line card had to be passed up to the route processor, after which the destination could be cached on the line card. The book also mentions that the AGS+ was the basis of the 7000 router and IOS.

Could this have been an early form of control plane protection? Or was it used to prevent unnecessary transfers across the low-bandwidth bus?

RFC1009
My original theory was that it was based on the RFC for requirements for an Internet gateway. Here is the text in question –

“A different subnet address mask must be configurable for each interface of a given gateway. This will allow a subnetted gateway to connect to two different subnetted networks, or to connect two subnets of the same network with different masks.”

Unless I’m misreading it, it seems a pretty clear definition of what we are running into.

Other?
Is it a combination of the two above, or something completely different? I would love to know. Drop me a line! admin at solutions-haven.com


PPPoE

This post is intended for a quick analysis of the PPPoE protocol authentication and its relevant configuration on Cisco routers.

Base Configuration

R1 –


interface GigabitEthernet1.13
encapsulation dot1Q 13
pppoe enable group global
!
bba-group pppoe global
virtual-template 1
!
interface Virtual-Template1
ip address 198.18.0.1 255.255.255.0
!
end


R3-


interface GigabitEthernet1.13
encapsulation dot1Q 13
pppoe enable group global
pppoe-client dial-pool-number 1
!
interface Dialer1
ip address 198.18.0.3 255.255.255.0
encapsulation ppp
dialer pool 1
!
end


Key Points

  1. With a virtual-template, PPP is the default encapsulation.
  2. A connected route will be generated in the routing table for the PPP neighbor.
  3. This is a basic configuration. Other topics like DHCP are outside of this post.


Authentication and Analysis

In this section we will take a look at the means of authentication and analyze the packets involved.

There are two main modes for authenticating the line – PAP and CHAP.

PAP Configuration – Cisco Configuration Link; Cisco Configuration PDF

R1 –


username R3 password cisco
!
interface virtual-template 1
ppp authentication pap
!
end


R3 –


interface dialer1
ppp pap sent-username R3 password cisco
!
end


Key Point – The process is straightforward. The server tells the client in the configuration request that the authentication is PAP. The client responds with its username and password, which are authenticated against the local database.

PAP Packet Capture – PPP PAP PCAP


CHAP – Cisco Configuration Link; Cisco Configuration PDF

CHAP isn't as straightforward, and its configuration is misleading compared to PAP.

R1 –


username R3 password cisco
!
interface virtual-template 1
ppp authentication chap
!
end


R3 –


username R1 password cisco
!
end


Key Point – The functional difference between PAP and CHAP is that CHAP performs a semi-bidirectional authentication and never sends the password over the wire. The server tells the client that the authentication is CHAP, then sends a challenge along with its own hostname. The client looks up the password for that hostname in its local database, hashes the challenge with it, and sends the hash back along with its own username. The server performs the same lookup and hash against its database to confirm.
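To make the hashing concrete, here's a minimal sketch of the CHAP response computation from RFC 1994 (the identifier, secret, and challenge values below are made up):

import hashlib

def chap_response(identifier, secret, challenge):
    # RFC 1994: response = MD5(identifier || secret || challenge)
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

resp = chap_response(1, b'cisco', b'\x01\x02\x03\x04')  # sample values only
print(resp.hex())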

CHAP Packet Capture – PPP CHAP PCAP


F5 Edit Config File

I recently ran into an issue where I needed to edit monitors, pools, and rules on an F5, and I didn't want to delete each item and recreate it. I was able to achieve this by going into the bigip.conf file and editing it there. This is one way to do it; probably not the fastest way, but it is a way. Check with your local Linux admin for a better one.

  1. SSH into the load balancer.
  2. Run this command – ~ # vi /config/bigip.conf
  3. To enter a command in vi type <SHIFT> + :
  4. Run this command in vi – :%s/<text to search for>/<replacement text>/gc

% – apply the substitution to all lines
s – substitute
g – replace all occurrences on the line
c – confirm before each replacement
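For example, renaming a pool across the whole file might look like this (the pool names here are made up) –

:%s/pool_web_old/pool_web_new/gc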


  5. Save your work by going back to the vi command line (step 3) and entering w
  6. Quit by entering the command line again and entering q
  7. Load the edited config into the running config by typing #tmsh load sys conf

