Administrator Audit Made Easy – Create CSV of MDS user permissions

Darn auditors want to know who has what permissions in MDS……but want it in a spreadsheet! What’s up with that old technology?

Here it is, a matrix of users and their permissions.


Python Program #2: Adminparser

NOTE: Goes hand in hand with my Cparser module.

Hopefully this will be easier with the R80 REST interface.
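For illustration, here is a minimal sketch of the matrix-building idea, assuming you have already reduced the MDS admin data to a dict of user → permission set (that input shape is my invention for this sketch, not Adminparser's actual format):

```python
import csv

def permissions_matrix(user_perms):
    """Build a users-by-permissions matrix (list of rows) from a
    dict mapping user name -> set of permission names."""
    # Collect every permission seen across all users, sorted for stable columns
    all_perms = sorted(set().union(*user_perms.values()))
    rows = [["user"] + all_perms]
    for user in sorted(user_perms):
        rows.append([user] + ["X" if p in user_perms[user] else "" for p in all_perms])
    return rows

def write_csv(user_perms, path):
    """Dump the matrix to a CSV file the auditors can open in Excel."""
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(permissions_matrix(user_perms))

# Example: two admins with overlapping permissions
matrix = permissions_matrix({
    "alice": {"read", "write"},
    "bob": {"read"},
})
```

Feed it the real user/permission data from the parsed admin list and the auditors get their spreadsheet.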

Audit OUT!


Convert any CheckPoint .C file into Python List

Killing two birds with 1 pebble: learning some Python and automating our admin audits.

This is the core of it. It converts any .C file into a Python list, so you can use it to parse through your objects, rulebases, users, admin lists, etc. Once converted, you can build GUIs and other parsing tools (like I will use for admin user deltas).
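A minimal sketch of the idea, assuming the simple nested-parentheses shape of .C files; the real format has wrinkles (escaped quotes, multi-line values) this toy parser doesn't handle:

```python
import re

# Tokenize a Check Point .C file: parens, :names, quoted strings, bare atoms
_TOKEN = re.compile(r'\(|\)|"[^"]*"|[^\s()]+')

def parse_c(text):
    """Parse .C text into nested Python lists: each '(...)' becomes a list."""
    tokens = _TOKEN.findall(text)
    pos = 0

    def walk():
        nonlocal pos
        items = []
        while pos < len(tokens):
            tok = tokens[pos]
            pos += 1
            if tok == "(":
                items.append(walk())      # recurse into nested block
            elif tok == ")":
                return items              # close current block
            else:
                items.append(tok.strip('"'))
        return items

    return walk()

sample = '''
(
    :admin (dreez
        :type ("read only")
    )
)
'''
tree = parse_c(sample)
```

Once everything is a nested list you can walk it with plain Python to pull out users, rules, whatever.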

Download here:


Zero Downtime Upgrade between major versions WITH/OUT dynamic routing

Good news: It can be done.
Bad news: This is a work in progress; I hope to update it with pictures. If you call CP support, they might be able to fish up the document.


  1. Go through the CP steps for zero downtime upgrades. But don’t take them toooooo seriously or you will have surprises. Make sure you do the steps below.
  2. Run the upgrade on the standby – DO NOT REBOOT
  3. If you have to copy fwkern.conf from the ACTIVE member, do it now
  4. control_bootsec – installs the initial policy and makes sure that the default filter (which bricks the firewall) is not loaded. Run it from the UPGRADE file system, not the old file system.

    cd /opt/CPsuite-R77/fw1/bin



  5. Reboot standby
  6. Standby comes back up “Active Attention” – no problem, it has no cluster policy yet
  7. If you run dynamic routing and have “Wait for Clustering” enabled, disable it. Let routed start up without a cluster
  8. Stop/Start routed:
    tellpm process:routed         ##### stop
    tellpm process:routed t        #### start
  9. On the mgt server, change the policy version to the latest (R77.10/20/30) and push to the upgraded member only (uncheck the cluster checkbox in the policy install dialog). The upgraded member now knows it has to be part of a cluster. It will go to READY state, waiting for the failover
  10. Use this script to export the routes off the ACTIVE firewall onto the Standby firewall. It will turn them into STATIC routes. NOTE: There is no ‘save config’ at the end. These are only temporary until the system reboots and gets real OSPF routes. Make sure you differentiate between dynamic routes that will go away on reboot and real static routes that will be kept on reboot.
  11. Reboot the READY firewall just to clear out the cobwebs.
  12. Run the ospf script on the READY firewall. This will load all the OSPF and STATIC routes onto the firewall. NOTE: You will have to decide whether to keep or delete the STATIC routes. You might have to SAVE CONFIG on the static routes if you want to keep them.
  13. Do a netstat -an | wc -l and fw tab -t connections -s to get a baseline count of connections and state table entries
  14. Do a ‘cphaprob stat’ to get the IP and ‘number ID’ of the ACTIVE member.
  15. Now on the READY member, PULL the state table from the ACTIVE member. Use ‘cphaprob stat’ to retrieve the cluster member NUMBER and sync IP of the ACTIVE member, then run:
    cphacu start <Active Member IP> <Cluster member Number>
    So if the active member was number 2 in the cluster:
    cphacu start <Active Member IP> 2
    This will pull the state table from the ACTIVE onto the READY member. This is like the OLD fcu command…but snazzier somehow.
  16. Do a netstat and fw tab -t connections and make sure the numbers are about the same on both members
  17. On the ACTIVE member – drum roll.
    cphaprob stop
  18. On the DOWN member, STOP the routing daemon because you don’t want it to fight with the new ACTIVE member. This is where the CheckPoint cluster and routing teams never broke bread to coordinate cluster & routing activity, so you have to do it manually:
    tellpm process:routed
  19. The READY member will now go to ACTIVE
  20. On the ACTIVE member, check out the state tables and network tables again. OSPF should be populating. Check the neighbor status to see if OSPF neighbors are negotiating. If they are stuck, stop and restart routed. No worries, you have static entries until you reboot.
    clish> show ospf neighbors
    clish> show route ospf
    tellpm process:routed         ##### stop
    tellpm process:routed t        #### start
  21. You are over the hump, congrats
  22. Upgrade the OLD system
  23. Copy fwkern from the standby if required
  24. Reboot
  25. Push policy to both members
  26. Reboot both (to clear out static network entries and cobwebs)
  27. Done
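Step 10’s route-export script isn’t attached here, but the idea can be sketched. The ‘show route ospf’ output format below is an assumption; check it against your Gaia version before trusting the regex. Note there is deliberately no ‘save config’: these static routes should vanish on reboot.

```python
import re

# Assumed line shape from 'show route ospf' on Gaia; verify against your output:
#   O  10.20.0.0/16  via 192.168.1.1, eth1, cost 10, age 123
_ROUTE = re.compile(r'^\S+\s+(\S+/\d+)\s+via\s+([\d.]+)')

def ospf_to_static(route_output):
    """Turn dynamic-route lines into temporary clish static-route commands."""
    cmds = []
    for line in route_output.splitlines():
        m = _ROUTE.match(line.strip())
        if m:
            dest, nexthop = m.groups()
            cmds.append(f"set static-route {dest} nexthop gateway address {nexthop} on")
    return cmds

cmds = ospf_to_static("O  10.20.0.0/16  via 192.168.1.1, eth1, cost 10")
```

Paste the generated commands into clish on the standby (or feed them to `clish -f`).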

Modify firewall config without authentication – Recover admin password and much more

Yes I’m back from bumming around this summer and yes I had a great time knowing all you were working and paying taxes while I was playing on a beach and climbing in Finale Ligure Italy. Who’s the smart one now????

Meanwhile, I spent the summer and lately studying for my Amazon Web Services cert. The Cloud and SDN are changing the world as we know it, so you better get on the train….or apply at Walmart. $15/hour isn’t so bad.

So once upon a time Joe Bob decided to retire and forgot to give us all the passwords for our gateways. Fun times. Wish I would have known this little trick: how to recover a gateway admin and expert password without having to log in! Or boot the machine from a recovery DVD.

WARNING: This could be really dangerous. You can execute almost ANY command on ALL your gateways raining death and destruction. Logging is minimal and tying it back to a human user to blame could be very tricky. I would only use this for emergencies.

  1. Switch to the context of the involved Domain that manages your Security Gateway:

[Expert@HostName]# mdsenv <Domain_Name>

  2. Generate a hash for the new password – run the following command and save the generated hash string. It will prompt you for a password and give you back a hash.

[Expert@HostName]# /sbin/grub-md5-crypt

  3. Ensure that the Clish database is unlocked on the remote Security Gateway:

[Expert@HostName]# $CPDIR/bin/cprid_util -server <IP_of_Gateway> -verbose rexec -rcmd /bin/clish -s -c 'set config-lock on override'

  4. Change the admin user password:

[Expert@HostName]# $CPDIR/bin/cprid_util -server <IP_of_Gateway> -verbose rexec -rcmd /bin/clish -s -c 'set user admin password-hash <Password_Hash_from_Step_2>'

  5. You can also change the Expert password:

[Expert@HostName]# $CPDIR/bin/cprid_util -server <IP_of_Gateway> -verbose rexec -rcmd /bin/clish -s -c 'set expert-password-hash <Password_Hash_from_Step_2>'
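If you have a pile of gateways, you can script steps 3–4. This sketch only builds the cprid_util argument lists; nothing is executed, and `$CPDIR` is left as a placeholder (expand it via `os.environ` before a real `subprocess.run`):

```python
# Sketch: build the cprid_util command for each gateway instead of typing
# them by hand. $CPDIR and <hash> are placeholders; nothing is executed here.
CPRID = "$CPDIR/bin/cprid_util"

def remote_clish(gateway_ip, clish_cmd):
    """Argument list for running one clish command on a remote gateway."""
    return [CPRID, "-server", gateway_ip, "-verbose", "rexec",
            "-rcmd", "/bin/clish", "-s", "-c", clish_cmd]

def reset_admin_cmds(gateway_ip, password_hash):
    """Unlock the config database, then set the admin password hash."""
    return [
        remote_clish(gateway_ip, "set config-lock on override"),
        remote_clish(gateway_ip, f"set user admin password-hash {password_hash}"),
    ]

cmds = reset_admin_cmds("10.0.0.1", "<hash>")
```

Loop `reset_admin_cmds` over your gateway list and hand each argument list to `subprocess.run` from expert mode on the management server. Given the WARNING above, think twice before you do.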

Be careful out there!


2015 CPX Part Zwei – SDN

UPDATE: CheckPoint R80 and R77.20 (with updates) have announced integration with VMware 6.0, which is great. It’s called vSEC and clarifies many of the questions below. I haven’t seen it (because I’m sitting on a beach in Italy), but hope to do a pro/con when I get back.

========================================Date 5/10/2015 CPX Conference ===============

Summary: CheckPoint R80 is integrating with most of the other SDN players: NSX, ACI, OpenStack. Looks great so far.

Problem (heard this at CPX): a financial IT guy said his CIO called and asked for a 600-server farm to do some big data mining on confidential financial data. Classic physical deployment would take 6 months. They did it in 2 weeks – virtual world and scripting. How does/will CP protect this data mining farm?

BEGIN SDN Glossary:

  • North-South Traffic: data traffic in/out of a physical VMware/Virtual host
  • East-West Traffic: data traffic between virtual guests internally within a physical VMware/Virtual host
  • ESXi – VMware’s Hypervisor or operating system that operates on bare metal
  • vSphere – VMware’s total virtual package offering
  • vCenter – VMware’s management station component for managing servers
  • NSX – Networking component of VMware
  • Virtual Guest – An OS environment (Linux, Windows XP, MAC OS, OEM custom product) running in an emulated physical environment on top of a hypervisor (VMware, OpenStack, VirtualBox, KVM, etc). Virtual guests can commonly be paused and snapshotted, and have an API for automation/monitoring.

END SDN Glossary; BEGIN CP VE Glossary: CheckPoint VE is CP’s firewall product that runs in a VMware environment. It has two modes:

  • Network mode – Firewall as you know it; runs as a guest in a virtual environment and cannot see any other objects in the virtual environment
  • Hypervisor mode – runs inside the hypervisor, can see all objects in the virtual environment. This allows you to assign a L2 firewall to each virtual guest. So in the end, nothing more than host based firewalling….but saying the word ‘hypervisor’ sounds so much more cool.

END CP VE Glossary. So CP has a couple of problems with VMware right now:

  • Currently not integrated in the latest ESXi 6.0 release at the Hypervisor level (Hypervisor level is like being inside the Windows OS. In Windows if you want a list of all processes you must ask or be inside of the Windows OS to see all the processes. If you want a ‘firewall’ to protect process A from process B you have to be inside Windows OS. Same thing with Vmware Hypervisor.)
  • Management: R75.20 cannot grab VMware objects/IP addresses/network fabric
  • Enforcement: So right now CP is not integrated inside ESXi 6.0 VMware Hypervisor so CP cannot protect East-West Traffic.

The fuzzy details: CP has integrated with the old VMware 5.5 API, but not the current 6.0. In order to get into the real SDN game, the CP firewall must run inside the VMware Hypervisor, which is the VMware OS. Specifically, it must have access to NSX. Now, one CAN today manually spin up CP VE network mode instances (as guests) inside the 600 virtual server farm and manually connect them into the virtual network…..but a human being has to manually configure the firewalls as we do in the physical world, because only humans know the IP addresses and server names and protocols.

What R80 WILL do is use the VMware REST API (see my blah blah on REST) to grab all the VMware objects and their IP addresses. They appear as DataCenter objects (if I remember right) in Dashboard and can be referenced like any other object. Note that these objects are really pointers into the VMware environment, and R80 keeps sync with VMware, so if the object is deleted in VMware, it disappears from CP (a little scary, VMware modifying firewall policy, another discussion).

What R80 can’t do is enforce policy on east-west traffic today, because 1) there is no R80 firewall yet and 2) I’m not sure VMware has released the latest 6.0 API. So I saw demos of the management integration and it looks good. VMware objects look like any other objects, but note they are pointers into VMware and not managed by CP. If all goes as planned, the R80 firewall should be supported in the NSX 6.0 Hypervisor. What are the bells and whistles?

  • If a new VM is spun up, you can automatically generate a policy and a L2 firewall to protect it
  • If a VM vMotions from Fargo to Shanghai, the firewall follows it
  • At L2, you can redirect a service/port to the firewall for filtering (this host is infected, inspect all its port 80 traffic), and then back to its original route
  • You can quarantine a VM if it misbehaves and not let it talk or shut it down

All this looks good, just hope they can get it to work. You see some of this in ESXi 5.5.

So someone asked me, “Mike, what do you think ‘Software Defined Protection’ is?” Glad you asked. Firewall performance in a virtual world is a game changer. CheckPoint’s edge with Software Defined Protection is that it has been designed from the ground up in software. Performance is based on throwing more CPUs at the problem, not custom ASICs. Other vendors rely on custom ASICs for performance, so migrating their code to a software-based virtual world requires re-coding and/or loss of hardware-based performance gains.

In addition, in the virtual world security will become more dynamically scripted, with no expensive, slow humans in the chain. Firewalls, rules, and objects will be automatically created and destroyed, all through software. CheckPoint’s R80 has the API and the tools (so they say) to play in a scripted, automated world, all managed from a single-pane-of-glass centralized security management platform. Now THAT’s Software Defined Protection.

SDN – Part Vier

So I have hinted at how firewalls integrate into this new world. Up to now, firewalls were just virtual guests and you had to use network routing to direct traffic to them…just like you do in the real world. So you can take a stock off-the-shelf ISO image of a firewall, load it into VMware, and have it monitor traffic with no modifications. I actually do it all the time in my labs. So what has changed???

[QUALIFICATION: I have little experience with R80 or how PA or others operate in a VMware environment. This is just a gathering of thoughts from speaking with others, CPX, and reading documentation. So take this discussion with a grain of salt. As I gain experience I’ll update the blog]

So what is about to change is the integration between VMware and the SmartCenter database. Currently a firewall only knows about other VM guests if a user creates an object and types in the IP address of that VM guest. So if I create 1000000000 VM guests, I have to type them in by hand.

Well, in the new world SmartCenter will automatically keep track of the VMware objects through the REST interface. SmartCenter will poll vCenter (see, they even named them similarly) to keep track of what VMware objects exist. SmartCenter will put all the VMware objects into the DataCenter bin in SmartCenter. From the DataCenter bin, you can use them in rules and push the rules to the firewalls in Vmware.
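A sketch of the polling idea (not how R80 actually does it; the payload shape here mimics the vSphere REST `GET /rest/vcenter/vm` response, and the diffing logic is my invention):

```python
# Hypothetical sketch: a management station polls vCenter's REST inventory
# and diffs successive snapshots to add/remove its DataCenter objects.
def snapshot(rest_payload):
    """Flatten a /rest/vcenter/vm-style payload into {vm_id: name}."""
    return {vm["vm"]: vm["name"] for vm in rest_payload["value"]}

def diff_inventory(previous, current):
    """Compare two {vm_id: name} snapshots; return (added, removed) vm ids."""
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    return added, removed

old = snapshot({"value": [{"vm": "vm-1", "name": "web01"}]})
new = snapshot({"value": [{"vm": "vm-1", "name": "web01"},
                          {"vm": "vm-2", "name": "db01"}]})
added, removed = diff_inventory(old, new)
```

The interesting design question is what the manager does with `removed` – which is exactly the rulebase question below.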

(Question: If a VMware object is deleted, and you are using the object in a rulebase, does that mean the rulebase gets updated automatically??? Not sure. That would be bad.)

So we have this Borg Cube with 30,000 processors on it and tens of thousands of VM objects. Let’s say we get R80 going and it just sucks in all 30,000++++ objects and puts them into the DataCenter bin. Wouldn’t that be a mess? And it’s only going to get worse as the virtual world grows. Imagine what the naming scheme looks like; it will be all over the map.


But I digress…So let’s talk about why CheckPoint might have the edge in the virtual market.

[This is all by word of mouth, so make sure you ask your vendors. Email me if I’m right/wrong]

There is a facade that the firewall vendors want you to see, and it’s based on a VMware restriction, not a vendor restriction. Once a firewall is integrated into the hypervisor (currently CheckPoint, PA, Fortinet), it is like having a host-based firewall in each virtual guest. Well, The Reality is that you will have to run one (or many??) separate firewalls as ‘special’ virtual guests, and the hypervisor will direct traffic to that ‘special’ firewall, which will emulate being embedded into the individual virtual guest.

As I said, I have been told that this is a VMware restriction and not a firewall vendor restriction. I am not sure if this applies to the native VMware firewalls (basically IP chains, pretty primitive). But MAYBE, IF the native VMware firewall is actually embedded within each virtual guest, that is all you really need, and not all the whizbang that commercial firewall vendors offer. Ask your vendor.


So what does this architecture mean?

  1. Hopefully the ‘special’ firewall(s) will be tuned to utilize CPUs for performance, because they will need it if they are supposed to support a whole Borg Cube (CheckPoint SecureXL, CoreXL)
  2. Unfortunately there will be a performance hit as traffic has to be shuttled to a separate ‘special’ virtual guest to be filtered. Perhaps in the short term it makes sense to virtualize environments that do not have a performance requirement.
  3. Hopefully the management environment will be able to scale as the VMware environment scales (CheckPoint MDS – NOTE: R80 MDS details have not been released, only SmartCenter. So not sure how VMware will integrate into R80 MDS.)
  4. I am not sure how service chaining (traffic steering) will work. Recall that in VMware you can create a rule that says “HTTP traffic from vmguest A to vmguest B goes through firewall C”. In addition, I guess in R80 this can be dynamic, so admins can isolate vmguest A as a ‘bad guest’, change its security tag, and require that its traffic be ‘filtered’ by a firewall. So I am not sure how service chaining will integrate into this architecture.


So I am here in Germany drinking a really nice Weissbier, sunny, 6pm, my woman is cooking for me, and I’m running out of things to rant about. Maybe tomorrow; SDN can wait.

Play Time

Well, it’s that time of year again for my Hot German Babe – Gaby and myself to hit the road. 3 months of rock climbing in the NE USA and Europe while all the little people work and pay SSN taxes.

Little bummed because my SDN series is not finished and I feel it’s going to be a winner. BUT…I’m sure it will be there when I get back.

I’m sure the CP world will somehow continue without me…..

Dreez OUT!


SDN for Dummies – Part Drei

The fun is about to begin. Let’s look at a definition of SDN and some of the major components that make it unique.

Dreez Definition: Software Defined Networking (SDN): The ability to co-manage both the network and security components of a Cloud Infrastructure from a single centralized management platform through the use of automation (software scripting / orchestration).

I know…pretty deep….Let’s go through it one step at a time:

  1. Co-manage both network and security of a Cloud infrastructure from a single centralized management platform. SDN is a layer of software that provides transparency: it allows virtual guests to float between physical platforms with neither the guest nor the end user knowing what physical platform they are on. SDN will merge network management with security management. You see that a bit now in vCenter’s security manager. So imagine in the future if you click on ‘Security’ and an MDS/Security Management Station pops up and all the VMware objects exist in that view.
  2. Through the use of software scripting (orchestration). EVERY!!! SDN platform has APIs that enable scripting tools to do whatever you can do in the GUI. If there is an SDN that does not, then it is NOT SDN. What does this mean? Massive unemployment for most of you reading this. Think of what happens now if you have to move a subnet from Chicago to New York. The routing geeks have to touch several routers and switches one at a time. The firewall people have to redo their routing and rulesets on two firewalls. The Load Balancing people have to work magic. In the future with SDN, this subnet move will occur with scripts: one person will write and execute a script making the whole move happen transparently.

Security and Network Tags

So what is the mechanism that keeps the security world organized? The glue is called Security and Network Tags. Each virtual guest has tags that contain security and network information such as policy, encryption keys, and IP information. When you ‘orchestrate’ your virtual world and create policy and encryption keys for the guests, the information gets stuck into these tags. Now notice it doesn’t matter where these virtual guests are running…no one knows except VMware and the administrator. These tags are what I call the operational context of a virtual guest. They allow the guests to float between physical platforms and maintain their current state and environment.

VMware Security Groups

Now for the best part with regards to security. This next image is so innocuous but yet is the heart of SDN that will ravage networking as we know it and could be CheckPoint’s/Tufin’s strength if they could swing it politically. This will make most of you reading this unemployed boat anchors, Cisco routing geeks bankrupt, network gear makers bankrupt….I think you get the picture.


[ crickets….]

So in VMware you can gather hosts into security groups based on characteristics of the hosts – and not just IPv4 addresses. Not really a big deal, you can do that on almost any management platform. But in a virtual orchestrated world, grouping is the glue that keeps the management station from self-destructing exponentially. Imagine 10 scripting maniacs generating 10000000000’s of objects in seconds – and then they quit and you get a new batch of 20-year-old scripting maniacs, et al. After a couple of days your management environment will be out of control. You HAVE to use groups to keep the virtual world manageable.

Now in the future, groups will also change and be enhanced to manage the scope creep. There will be tagging, labels, etc., just like you see in CheckPoint’s R80 and Google Gmail. But in the end, there will be several forms of grouping. I know, I know….Doesn’t look so powerful, does it?

So this is the crux of my rant. Currently both network and security geeks provide separation (grouping) via subnets. Once we create a subnet, we then create a firewall rule to protect it. Notice below that all firewall rules are based on networking.
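The tag/group idea can be sketched in a few lines; all the guest names and tags here are invented for illustration:

```python
# Sketch of tag-based grouping: membership is computed from each guest's
# tags, not from its IP address.
guests = {
    "web01": {"tier": "dmz", "app": "shop"},
    "db01":  {"tier": "internal", "app": "shop"},
    "hr01":  {"tier": "internal", "app": "hr"},
}

def security_group(guests, **wanted_tags):
    """All guests whose tags match every key=value given."""
    return sorted(name for name, tags in guests.items()
                  if all(tags.get(k) == v for k, v in wanted_tags.items()))

dmz = security_group(guests, tier="dmz")    # members of the DMZ group
shop = security_group(guests, app="shop")   # everything running the shop app
```

Note no IP addresses anywhere: spin up a new guest with `tier: dmz` and it is in the DMZ group the instant it exists.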

Now I was never sure how a subnet alone prevents Himachi/HeartBleed/etc from spreading throughout your network, but that’s what we do, because that is what we have always done since the dawn of time (aside: subnets only stop broadcast storms). Like lemmings walking off a cliff, we are planning our future based on what we have done in the past.

But I claim that in the future we will provide separation based on SECURITY GROUPS (and not networking). A DMZ Security Group will be made up of DMZ machines and NOT a subnet. Remember from my Part Eins rant how networking will change and routing will become less important? Well, this is another nail in the coffin. We won’t need routing because the world is becoming L2 Ethernet (no IP addresses) and security groups (no IP addresses), and without IP addresses who needs routing?

And if The Cloud is hosted on a big Borg Cube, why do we even need classic IPv4 networks to transfer packets? It might just be some combination of virtual guest UIDs (instead of IPv4 addresses) and distributed shared memory communication. For example, in the old old days there was an operating system called Multics where all communication was performed via distributed shared files. Everything was a file, even networking.

Next Gen Firewalls

Now start thinking about NGF. Policy rules are based on USERs/Machines and Services. So the rule looks like:


  • “John Adams” – Do you see any IP addresses? Does it have to be an IP address?
  • HR – Is this a subnet or a group of people or a group of machines? In SDN – who cares? Could be UIDs.
  • Facebook – Is this an IPv4 port or a network protocol? In NGF it is a network protocol on ANY port

So even in NGF, you are starting to see the disappearance of the REQUIREMENT for IPv4 constructs. People can be AD credentials, Facebook is a protocol not a port, and HR could be a group of VMware objects imported from VMware – could be MAC addresses, UIDs, or VMware security tags created by NSX. Basically you don’t care what is underneath.

Service Chaining/Traffic Steering

Yet another nail in the Network coffin. In the virtual world, you don’t always depend upon IP routing to direct the flow of traffic; you can use Traffic Steering. Right now, a packet’s destination IP always determines its next-hop router.

In the virtual world, you can build rules that say “Traffic from Security Group X to Server Y will always go through FirewallZ”. Do you see any IPv4 routing in that statement? What if FirewallZ is in Siberia and not on the direct subnet? Doesn’t matter, NSX will direct it to Siberia somehow.
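A toy version of such a steering rule table; every name here is made up:

```python
# Sketch of traffic steering: traffic matching (source group, dest, service)
# is forced through a named firewall, regardless of IP routing.
STEERING_RULES = [
    # (source_group, destination, service, steer_through)
    ("quarantined", "any", "any", "FirewallZ"),
    ("web-tier", "db-tier", "mysql", "FirewallZ"),
]

def next_hop(src_group, dest, service):
    """Return the firewall to steer through, or None for the normal path."""
    for rule_src, rule_dst, rule_svc, fw in STEERING_RULES:
        if (rule_src == src_group
                and rule_dst in ("any", dest)
                and rule_svc in ("any", service)):
            return fw
    return None

hop = next_hop("web-tier", "db-tier", "mysql")
```

Notice the match is on group names and services, never on IP addresses or routing tables.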

[Diagram: traffic steering]


Notice that they are on the same subnet but in two different physical locations? How is this done? VXLAN!!!


VXLAN is the magic tunneling protocol SDN uses to make virtual guests float. In the physical world, subnets are usually in one physical location (e.g. the DMZ is physically connected to the firewall). With VXLAN tunneling, you can have virtual guests all over the world on the same subnet, so they can float and maintain their network/operational context.


How does it work? Well, in the above you can see 4 virtual guests in different geographic locations, all on the same subnet 1.1.1.X. NSX keeps track of where the subnet is by a VNID (Virtual Network ID); in this case it is VNID 12345. NSX builds VXLAN tunnels by encapsulating the original Ethernet frames inside UDP packets on port 4789.
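For the curious, the 8-byte VXLAN header itself (per RFC 7348) is simple to build; in real traffic it sits between the UDP header (port 4789) and the original Ethernet frame:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def vxlan_header(vni):
    """Build the 8-byte VXLAN header (RFC 7348): flags byte 0x08 marks a
    valid VNI, the 24-bit VNI sits in bytes 4-6, the rest is reserved."""
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", 0x08 << 24, vni << 8)

hdr = vxlan_header(12345)  # the VNID from the example above
```

The 24-bit VNI is why VXLAN scales to ~16 million virtual networks versus the 4094 of classic VLANs.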

Now tunneling is not the most efficient way of transporting network packets. If you have 2 high-bandwidth applications talking to each other over a 4000-mile encrypted tunnel, chances are there will be lots of latency. But technology moves on, and in time network bandwidth will be almost as free as water, so latency will scale. Historically it always has.

What will blow up????

Let’s review a couple things…

  1. The CLOUD is not SDN and SDN is not The CLOUD. The Cloud is where virtual guests are floating through time and space, not knowing or caring what physical platform they are on. SDN is the underlying infrastructure that magically allows them to float…securely. In the above picture, SDN is NSX in VMware land and ACI in Cisco land.
  2. SDN will change: With today’s technology you might assume the ‘network infrastructure’ will still have routers, switches, IP addresses, load balancers, etc. 25 years from now. WRONG. 25 years from now the ‘network infrastructure’ might be the backplane of an enormous gyrating Borg Cube with lights (aka ‘War Games’) with no network. All the virtual guests will be running in the Borg Cube and use distributed shared memory (vs. IPv4) to share data. Who knows? But those communication channels still exchange packets, hence I use the term ‘network infrastructure’.
  3. The Cloud – 10 years out: I am trying to decide what The Cloud will look like 10 years out. If we go the Google route with 1000000’s of generic Linux servers, then you have to transport packets between systems. If you pack it all into a gigantic Borg Cube, you’ll have a picosecond-latency backplane with oodles of terabit/second throughput….but you will be wedded to an evil OEM. I will guess an enterprise will buy multiple Borg Cubes for redundancy, because they want to be able to call 1-800-xxx-xxxxx and scream at someone if it blows up.
  4. Rapid Deployment – Rapid Destruction (a quote from CheckPoint’s Kelman). Scripts can deploy quickly and just as quickly destroy the whole environment. In addition, you now have an even more concentrated group of employees with super uber admin privileges administering the Borg Cube. One bad apple and your enterprise is gone.
  5. Single Point of Failure – The Borg Cube – Any failure in SDN internals or Cloud management will bring down the whole Borg. You know how firewall clusters only fail over in the perfect world? Same thing here.
  6. Orchestration – See Rapid Deployment Rapid Destruction
  7. Staffing – Need to know networking, security, scripting.
  8. Licensing – It will be interesting how licensing models change. Currently licensing is based largely on IP addresses and on the physical world. Deploying 400 firewalls is a big deal now, but imagine deploying 1000000000’s of firewalls with scripts. In addition, retiring 400 firewalls takes years, but now you can do it in a second. So what will licensing look like in this IP-less dynamic world? It is a nightmare now with many products.
  9. Compliance – Remember application weenies want to just fire and forget. The CIO will want you to deploy a 600 server farm today and worry about security later. So how do you ensure that these dynamic environments maintain a security compliant profile? Not sure what products can adapt to this dynamic environment at this point.
  10. Debug vs Deploy – Debugging will be a nightmare in this dynamic environment created by scripts. Have you ever debugged in a load balanced environment when packets are never following the same path? This will be even more fun with encrypted tunnels, floating guests, scripted deployments….
  11. VMware security architecture issue – Rumor has it (I heard this through the grapevine, have not verified details) that VMware-based firewalls (Palo, CheckPoint, etc) are not totally embedded within the hypervisor. When the hypervisor sees traffic that needs to be processed by a firewall, it forwards it to a virtual guest that is a firewall. So while the abstraction is that every virtual guest has a mini-firewall running inside of it, the reality is that there could be only 1 virtual guest firewall that manages security for all the virtual guests. So a Borg Cube with 10000000 virtual guests might have 1 actual firewall managing security for all 10000000 of them.

    So when talking to vendors ask them explicitly how firewall processing works and how it differs from VMware native firewalls. I will too.


  1. Classic IPv4 networking will go away over time as network speeds, bandwidth, and latency all improve. It will be replaced by The Borg, UIDs, shared file systems, or similar.
  2. Orchestration/Scripting/Automation will replace people, outsourcing
  3. Security will be an afterthought…always is in new technology
  4. Failures will be catastrophic and really cool to talk about over beers
  5. If CheckPoint/Tufin play it right, their management framework could win in this virtual world. Ideal would be if VMware bought CheckPoint for their management environment  intellectual property.

SDN For Dummies – Part Zwei

So Jacob and all the router geeks are still shaking their heads from Part Eins: “Who needs routing?” “You’ll have to pry my Nexus 7000 out of my cold dead hands,” they say. In fact, they point out, routing is becoming more important as we have to tunnel L2 virtual-world traffic over L3 (to make a subnet look geographically neutral) and for VLAN separation. (Hold on to these thoughts, old school.)

Before we dive into SDN, let’s review what the server side of the equation looks like and start defining some terms.

Back in 1991, this Dreez dinosaur used to play a Macintosh game called SpaceHO! I only had a Sun Workstation at the time, so to get this game running we had to use a Macintosh emulator software package. SpaceHO! was a multiplayer game, so it was able to network with other players. To get to the network, there was a virtual network cable that attached to the host’s physical network cable and used the host’s real IP address. This virtual network cable was Version 1 of SDN. And this Macintosh emulator was the forerunner of The Cloud…but it only hosted 1 virtual guest…a Macintosh environment.


Everyone is probably familiar with VMware Workstation (damn, I should have bought stock in them). The Mac emulator above had babies and now can run multiple guests in a virtual world, and they can all network with each other over virtual switches – all inside a single computer.


Enter today’s vSphere. Now you can have multiple physical hosts, and the virtual guests can run on any of them; you don’t even know where a virtual guest is running at any given moment. Virtual guests can even move between physical hosts (vMotion).

[begin music]


Dreez’s Cloud Definition: The ability of a virtual guest to execute on any piece of physical hardware without the application nor the end user knowing where it is executing.

[end music]

So in the diagram below, The Cloud is VMware’s vSphere…the total package that makes virtual guests execute and float throughout The Cloud. A portion of vSphere is NSX…the underlying SDN software that makes it all transparent to the physical world……


Enter Vmware’s version of SDN…NSX….

In this virtual world, VMware’s NSX is distributed across each VMware Hypervisor running on each physical platform…but it runs as though it is a single piece of software. NSX is the NETWORKING portion that supports The Cloud. NSX knows how to emulate switches/routers/routing protocols/spanning tree/etc….all in software. But most importantly…. when a guest moves between physical hosts (vMotion), NSX makes sure that the IP address, security context, peer communications, VPN, etc. never change – The Operational Context. NSX keeps track of all this internally, and when the guest moves, NSX keeps the contextual info floating with it.

Think of Google. Thousands of Linux PCs out there, and you never know or care which one you are executing on…and it may change moment to moment. All possible with their version of SDN.

Next up SDN…….

SDN for Dummies – Part Eins

I’ve been researching SDN, interviewing routing geeks, and going to presentations, and the one thing they all have in common is the blah blah. Like a bunch of music majors (FYI – I’m a music major) turned SDN marketing geeks who couldn’t find a job, heard about SDN, learned big words like ‘ecosystem’, ‘hypervisor’, ‘virtualize’, ‘east-west’, ‘tenant’, and ‘orchestration’, and now want to make a name for themselves. Ask them what SDN, Cloud, or Hypervisor mean and you will get 100 different blah-blah two-hour speeches. Cisco ACI is really complex – I still don’t understand it even after taking a class. VMware seems to make it simple via their GUI. Tufin’s Ruvi Kitov has had the best perspective to date on how to manage this beast.

So I decided to decode the SDN blah-blah into my own Dreez blah-blah that maybe even my mom (The Italian Tornado) would understand.

In my previous rant on SDN, I talked about how this baby will scale massively because scripts can generate 1000s of objects/rulesets/firewalls in seconds – so the problem is, who will manage this beast? CP and Tufin could capitalize on this next big hit.

But first let’s do a primer on SDN and compare where we are today to where we will be in 10 years.

Here is your basic boring enterprise network environment. Let’s start with a ‘campus’ type environment. A PC is connected to a Cisco 3800 switch in the campus building, which has a dedicated leased 10G fiber run to a Cisco 500 switch in the datacenter 1 or 2 miles away. At the campus, there is a firewall with max blades turned on. Several geographically co-located buildings are set up this way in a ‘campus’ network – high-speed (expensive) 10G fiber connecting them together. Data is routed between the campus sites and the data center (this is key – remember it).

Remote sites have MPLS connections. What is MPLS, you ask (this is key – remember it):

  1. You buy dedicated, guaranteed bandwidth to remote sites.
  2. If you want, you can run Layer 2 over these links. So in theory you could get rid of internal L3 routing (this is key), but current hardware and bandwidth limitations on handling broadcasts restrict this ability.

So remote sites are connected over this dedicated-bandwidth MPLS network, with speeds ranging from a 52K dial-up line in netherworlds like Zambia to 100 Mb Ethernet over Fiber in more civilized locales like England. These are connected with Cisco 6x00 routers.

My point in this discussion will be that you can predict the evolution of SDN based on network speeds and prices to remote locations. Just watch…..


Let’s go back in time…waaaay back…to when I had hair. Ethernet was 1–10 Mb 10BaseXXX, half duplex, and because of hardware limitations it could only support 10–100 devices, all within 500 ft of each other (OK, my numbers might be off, but you get the idea). One misbehaving PC sending out too many broadcast messages would bring the network to its knees. Ethernet broadcast storms are like a Wall Street trading floor: hundreds of people all yelling at the same time, slowing the whole thing down so you can’t hear anything. Thanks to advances in technology, today you can have hundreds of devices on 10G Ethernet around the world (if you had enough money), and there is more technology to handle broadcast storms.

With that in mind, let’s look at today’s remote sites. Today in Zambia the network link is so slow and unreliable that we have to keep some of our servers (file, database, Active Directory) locally in Zambia, or else production would stop. Now what will happen as network speeds increase and become more reliable (Fiber All Around)?


You guessed it – we will then pull those servers back to the data center so they are easier to manage. Many companies already have remote sites with decent, low-latency bandwidth, so at those sites you will not find any servers, just overpaid whining employees.

So let’s look at the campus infrastructure. This is what remote sites will look like in 2025. Campus buildings already have high-bandwidth, low-latency, reliable connectivity, so there is no need for local servers. So by 2025, the only real change will be faster connectivity, both fiber and wireless, between my PC and the data center. No big change.


Now the fun part. Let’s look at the data center. Today the data center is filled with OEM equipment – OEM firewalls, database servers, file servers, etc. – tied together via behemoth Nexus switch(es). Yes, we have dramatically started to virtualize this world into VMware, so the physical footprint is slowly shrinking. As remote network speeds increase, the price of bandwidth decreases, and applications migrate back from remote sites to the data center, will this evolution into the centralized virtual world hasten linearly?? No – I will say exponentially. (Stay tuned, it’s all about the scripts.)

Datacenter 2025 will look like this: either a room filled with 1000s of $100 gray-market PCs running Linux virtual systems/VMware, OR one big Borg Cube with 1000s of CPUs and tons of memory running VMware or something similar. You can see this now with some of Cisco’s products (more about this later). (I won’t talk about public/private cloud services at this time.) Why?


Think of Google. Their datacenters are all generic Linux systems. Google and companies like them are DONE being wedded to the greedy hands of a single vendor. Technology is changing so fast, deployment times are shrinking, and prices are dropping so dramatically in the virtual world that being wedded to an OEM is like Carrie Fisher married to Attila the Hut – just not pretty. (She was hilarious in that one Big Bang episode.)


(Sidenote: if you believe in The Borg, sell your Cisco stock….minimal need for network ports and routing…it’s all done inside The Borg.)

So basically, the network infrastructure is headed toward fiber and fast wireless all around – 255 TB/sec, check it out.


Ok, enough rambling – get to the point. L3/routing is currently required because:

  • Router geeks seem to feel that there are security advantages to subnetting.
  • Firewall technology requires us to subnet so we can protect ‘zones’ of IP addresses.
  • Broadcasts must be limited to a subnet – if every system in an enterprise could ARP to every other, the network would stop.
  • We route across WANs to remote sites (because you can’t ARP to find a peer system).
  • Networking’s legacy is based on L3: DNS and IP addresses embedded in apps.
  • Available network address space: IPv4 = 2**32   <   EUI-64 MAC = 2**64   <   IPv6 = 2**128
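
The address-space comparison in that last bullet is easy to sanity-check with a couple of lines of Python:

```python
# Address-space sizes from the last bullet: IPv4 < EUI-64 MAC < IPv6
ipv4_space = 2 ** 32
eui64_space = 2 ** 64
ipv6_space = 2 ** 128

print(f"IPv4:   {ipv4_space:,}")   # 4,294,967,296 addresses
print(f"EUI-64: {eui64_space:,}")
print(f"IPv6:   {ipv6_space:,}")
assert ipv4_space < eui64_space < ipv6_space
```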

But sorry to say, L3 people: notice how L2 is getting bigger and L3 is getting smaller as network bandwidth, speeds, and latency improve? Notice how L3 diminishes as you virtualize onto a single Borg Cube? No WAN routing is required inside a Borg Cube. No IP ‘zones’ are required if every virtual guest has its own firewall and guests are grouped by virtual host (stay tuned, more on this later). Fewer ARP issues as backplanes get faster and broadcast-dampening technology matures inside a virtual host.

Sorry to say, L3 people, routing is slowly disappearing. Imagine an enterprise with only L2 worldwide! Imagine being able to fire all your L3 router geeks!

  • What will happen to firewall rules? How do we separate networks?
  • What will happen to L2 broadcast storms?
  • Where does this leave Cisco/Juniper/Alcatel?

IP addresses exist because of routing, what happens if we don’t need routing?

Oh yes, I can see all you Cisco and security geeks rolling your eyes. How can your comfy little world disappear from under your feet when you have mortgages and boat loans to pay off?

[music stops]

But we still have mainframes

[music continues]

Well, you can relax – IP addresses will be around for a long time, just like COBOL is still out there….but you might want to think about sprucing up your resume.

In 2025, a CIO will wake up and decide he/she wants to spin up a 1000-server big-data mining site to find aberrations in health-care pricing. You get the phone call – what do you do? Do you call India and start hiring deployment geeks at $2/hour? NO! You write a Python/PHP/Perl script:

for server in range(1, 1001):
    server_farm[server] = windowsserver.create_new()          # create new server
    assign_networking(server_farm[server])                    # assign networking template to server
    assign_security_controls(server_farm[server])             # assign security template to server
    assign_application(application_ptr, server_farm[server])  # load application on server
    start_server(server_farm[server])                         # start the server
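
To be clear, every name in that sketch is made up – it’s not any real vendor’s API. Here’s a toy, self-contained Python version that actually runs, with stub functions standing in for the orchestration calls:

```python
# Toy, self-contained version of the deployment loop -- every name is hypothetical.

class Server:
    def __init__(self, server_id):
        self.id = server_id
        self.network = self.security = self.app = None
        self.running = False

def assign_networking(server):
    server.network = "datacenter-template"      # pretend networking template

def assign_security_controls(server):
    server.security = "server-farm-policy"      # pretend security template

def assign_application(app, server):
    server.app = app                            # pretend application install

def start_server(server):
    server.running = True

server_farm = {}
for n in range(1, 1001):
    server_farm[n] = Server(n)                              # create new server
    assign_networking(server_farm[n])                       # networking template
    assign_security_controls(server_farm[n])                # security template
    assign_application("health-data-miner", server_farm[n]) # load the app
    start_server(server_farm[n])                            # start it

print(len(server_farm), all(s.running for s in server_farm.values()))  # 1000 True
```

The point isn’t the stubs – it’s that 1000 servers cost six lines of loop, not six months of deployment geeks.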


Deployments will be like writing software…generate and destroy objects and constructs at line speed. On your management station you group these 1000 servers into a group, create a firewall, and build a policy that says:

# Allow users to connect
FROM: user_pc TO: server_farm ACTION: ACCEPT
# Nothing leaves the server farm
FROM: server_farm TO: NOT server_farm ACTION: DENIED
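
For fun, here’s one way a name-based policy like that could be modeled in plain Python. The rule format and the NOT handling are invented purely for illustration – this is not any CheckPoint syntax:

```python
# Illustrative only: a name-based policy with no IP addresses anywhere.
policy = [
    {"src": "user_pc",     "dst": "server_farm",     "action": "ACCEPT"},
    {"src": "server_farm", "dst": "NOT server_farm", "action": "DENIED"},
]

def match(rule_field, group):
    # "NOT x" matches any group except x; otherwise require an exact name match.
    if rule_field.startswith("NOT "):
        return group != rule_field[4:]
    return group == rule_field

def decide(src_group, dst_group):
    for rule in policy:
        if match(rule["src"], src_group) and match(rule["dst"], dst_group):
            return rule["action"]
    return "DENIED"  # implicit deny, like a firewall cleanup rule

print(decide("user_pc", "server_farm"))   # ACCEPT
print(decide("server_farm", "internet"))  # DENIED
```

Notice the rules match on group names, not addresses – when a guest vMotions to another host, nothing in the policy has to change.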

Do you see any IP addresses? Do you see teams of overpaid IT people running around plugging in cables and entering Cisco commands?

Welcome to 2025 Software Defined Networking……..

Michael Endrizzi's - St. Paul MN - CheckPoint blog on topics related to Check Point products and security in general.