Monthly Archives: March 2013

How NOT to choose a firewall


Over the 20+ years (ugh) I’ve been doing firewalls, I think I’ve seen them all, and I’ve answered the question “What firewall should we use?” a hundred times. For the last 5 years I’ve been back on the front line supporting Check Point and MDS, deploying and operating enterprise environments, and I now know the answer for how NOT to choose a firewall. It all goes back to my days developing Sidewinder back in the dark ages.

In those days we developers were so geeked out on building the most secure firewall that all the other important operational requirements were an afterthought… manageability, monitoring, DR, availability, scalability, etc. Only now do I appreciate how badly we designed from the bottom up, totally blowing off the enterprise.

I can still see this design in a lot of existing firewalls. Cisco ASA in particular has to be the laughing stock of the firewall market; I’m not sure why they don’t just hang it up. The GUI is like Notepad++ for a command-line-driven router with ACLs. Juniper’s WebGUI must have been designed by the same people that designed Check Point’s User Center – high school interns writing their first JavaScript. I could go on, but all of these have one thing in common: they are all designed from the bottom up. Hardware -> command line -> crappy GUI -> even crappier enterprise manager for multiple firewalls. Each layer worse than the previous.

These are the red herrings you should avoid when reading the marketing fluff:

  1. Security –
    You know what… these days security is the same across all firewalls. Ports, IPs, applications, identities, etc. Firewall X may not have the feature now, but it will shortly.
  2. Performance/ASIC support –
    In large enterprises there might be 4 or 5 ingress points that really need high-performance firewalls. If one were to spend some time tuning them, they could probably support the load… or just use load balancing… or architect the flows a little differently.
  3. Fancy GUI – GUIs are always pretty, but do they support thousands of rules? How does one search, delete, add, and manage huge rulesets with huge object sets? Day 1 looks like the marketing slide, but on day 1000, after 10 admins have cycled through, what do you have? A ruleset and object base that are too scary to touch, so you can’t remove anything and you just keep adding.
  4. GUI performance – Day 1 it looks like a marketing slide, but after the 50th rule it hangs, blows up, and corrupts the database.
  5. Hardware vs. software – Who cares? As long as you can set up a test lab in VMware. The biggest problem for software firewalls is making sure they have a base hardware configuration so you don’t have to play the pick-the-network-card-of-the-week game because the device drivers are not in the HCL.

These are the true factors you should consider when evaluating firewalls. Why? Because firewalls are the chockstones of our networks. In time of war the president asks four words: “Where are the carriers?” Same with firewalls. If anything goes wrong in the environment, the CEO screams “What are the firewalls doing?” (i.e., it’s a firewall problem). Firewalls are in a unique position in the network: they can not only police the flow of traffic at ALL levels of the network stack, they can also be a huge network debugging tool. When something blows up, the first thing everyone looks to for information is the firewalls.

  1. Centrally manage enterprise environments:
    1. Object sets: Large object sets with scoping rules, coloring, groupings, access controls, search capability, and the ability to add/modify/delete at all levels. Imagine tracking millions of objects and rules – how do you organize them all? You have to be able to group, label, sort, and search.
    2. Rule sets: Large rule sets with scoping rules and generous comment sections, indents, colors, access rules, and search capability.
    3. Admins: Large numbers of admins with various security controls, all tied to a centralized authentication database like RADIUS. Ability for multiple admins to administer concurrently with read/write access.
    4. Provisioning: Ability to set dates, passwords, routes, interfaces, debugging commands, and scripts across multiple firewalls with one command.
    5. Revision controls: Ability to store revisions of policy and objects so you can roll back/forward in case of problems.
  2. Monitoring and Logging
    1. Monitoring: Monitor the status of the whole environment at a high level, with the ability to drill down to details such as interface performance.
    2. Logging: Logging of the entire environment with the ability to drill down into a single log entry.
    3. SNMP Integration: Easy integration of product-specific parameters into the SNMP environment for real-time and historical monitoring of the whole environment.
    4. Detailed logs at the kernel level, not just network traffic. These are vital when you have to debug on your own because support is not answering the phone.
  3. Staging: Can you stage rules before you deploy them? In the staging environment, can you perform some sort of QA workflow before rules are deployed? Ability for separation of duties so different admins create, approve, and deploy rulesets.
  4. Cluster/Failover: Clustering should be brainless and flawless. Members should never fight for active-active control. You shouldn’t have to understand the minutiae or issue 1000 commands to fail the members over. Clustering should work transparently with dynamic routing on failovers so that the firewalls and supporting routes all fail over at the same time. Clustering should also work with upgrades so that you can upgrade members with no downtime as you fail over between the upgraded members.
  5. Test Labs: Can you create VM test labs of your environment to test deployment scenarios? Why VM? Because you can quickly snapshot and roll back. In hardware labs you never know what environment you are testing in, and you can’t roll back. I can’t tell you how many close calls I have filtered out in test environments that would have been a disaster in production.
  6. Command-Line Debugging: GUI debugging is limited because when things go wrong, you never know if the GUI is broken or you are looking at the real problem. I like debugging in Unix, where you have a rich set of tools and can debug inside the kernel.
  7. Easy-to-learn GUI: Assume that your company is cheap and will go through admins like toilet paper. When one admin quits, the next newbie will have to pick up the slack (of course we can’t send them to training!) and try to figure out where the last person left off. If the GUI is so complicated that you can never remove anything, the ruleset will only grow into a more complicated bowl of spaghetti.
  8. Network-level packet filtering, preferably CLI: Aside from Wireshark, I have not seen a good GUI packet sniffer on a firewall. TCPDUMP is mandatory for any firewall environment, and it has to be able to feed into Wireshark.
  9. Virtual Environments: Support for virtual firewalls for easy deployment.
  10. Upgrades and cloning: Ability to easily upgrade clusters and/or clone a firewall with minimal downtime
  11. Support: This is the tough one. Monitor blogs or user groups to see what people say about the support environment. You may want to stage a test: call support and see how long it takes to get a question answered. How extensive are their knowledge base and forums? Is there a knowledge leader out there you can email and ask their opinion?
  12. Licensing: An easy-to-support licensing model. If it is complex, it will not scale in an enterprise environment, and you will never figure out how many licenses you have and what they do.
  13. Backup/Restore: Yes, every product can be backed up, but how hard is it to restore? Cisco probably has the best restore capability here.
  14. Provisioning: What are you going to do if you need to set a new password on 1200 firewalls? You can’t do them one at a time. You need some sort of interface to execute local commands once across all 1200 firewalls.
  15. Remote upgrades: So you are upgrading that firewall in Botswana, you have no local console access, the local sales guy/IT guru who speaks Zamini had a bit too much firewater that morning, and it all blows up? Sure hope the firewall can revert to its previous image automatically, or else you are going to be on a plane.
  16. Fail open: OK, the security geeks just crapped in their pants. But there should be an option to fail open. How many millions of dollars have been lost because of cluster failures, memory leaks, etc.? Security geeks have usually never had to endure the pain of throwing away millions of dollars of product because a production run was spoiled when a cluster failed. Some firewalls are not the front line of the battle; many are nothing more than shopping-mall cops that are there to keep general order. When hell breaks loose they should just alert the admins, get the hell out of the way, and fail open.
  17. Market share: So you have firewall Product X, and your admin quits. Product X has 1% of the market, so you can’t find an admin with Product X experience. Of course you have no training budget, so you are forced to find a needle in the haystack and hope it doesn’t cost you $150K/year and tons of downtime as the new admin tries to figure out Product X on your dime.
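On the packet-capture point: the usual pattern is streaming a remote tcpdump into a local Wireshark over ssh. Here is a minimal sketch with a made-up hostname, interface, and filter; the script only prints the assembled command so it can be reviewed before running against a real firewall.

```shell
#!/bin/sh
# Hypothetical firewall host, interface, and capture filter --
# adjust all three for your environment before use.
FW=admin@fw-dmz-01
FILTER='host 10.1.2.3 and port 443'

# tcpdump: -nn skips name resolution, -s 0 grabs full packets, and
# -w - writes raw pcap to stdout so ssh can carry it back to us.
# Wireshark's "-k -i -" starts capturing immediately from stdin.
CMD="ssh $FW \"tcpdump -nn -s 0 -i eth1 $FILTER -w -\" | wireshark -k -i -"

# Print instead of executing, so this sketch is safe to dry-run.
echo "$CMD"
```

Drop the `echo` (run `eval "$CMD"`) to capture for real; the full-packet stream then lands straight in a live Wireshark window on your workstation.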
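On the provisioning point: absent a vendor tool, admins usually fall back on a loop over an inventory file. A minimal sketch with made-up hostnames and an illustrative command; it echoes the ssh invocations instead of running them, since the real syntax depends on the firewall OS.

```shell
#!/bin/sh
# Hypothetical inventory; in real life this would be a file with
# 1200 entries, e.g. HOSTS=$(cat firewalls.txt).
HOSTS="fw-nyc-01 fw-lon-02 fw-gab-03"

# Illustrative placeholder -- substitute whatever your firewall's
# CLI actually uses to set a password non-interactively.
SET_PW='set admin password ********'

for h in $HOSTS; do
  # echo instead of ssh so the whole run can be reviewed first;
  # remove the echo to execute for real (key-based auth assumed).
  echo ssh "admin@$h" "$SET_PW"
done
```

The point of the checklist item is that you shouldn’t have to write this loop yourself: the vendor’s central manager should do it, with error reporting, for all 1200 boxes.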

Hopefully this will save some of you from the firewall marketing machine.




So, I had a peek at Palo Alto today. I would say very, very nice. PA = SmartDashboard + MDM: that’s what PA is. It has all the features of SmartDashboard and P1 put together in one GUI. What I like about it is that conceptually they got centralized enterprise management designed correctly from the top down. All the other firewalls on the market never got this right: they would design firewalls from the hardware up and then try to throw a centralized manager on top of the mess as an afterthought (Cisco, Juniper, SonicWall, Sidewinder). I should know – this is what I did at Secure Computing working on the Sidewinder.

Here are some of the other features I noted that support centralized management:

They have 3 levels of policies and objects that make it easy to manage:

  1. Global – Like MDM
  2. Device Group – Group several firewalls into one group. Somewhat like a Domain/CMA
  3. Local – Local to device. Like a standalone Domain/CMA

These 3 levels of scoping apply to both objects and policies. The GUI does a great job of highlighting the different scopes in different colors, so it’s easy to mix and match. This flexibility allows one to build both enterprise and local objects and rules and manage them effectively.

Another nice feature: instead of SmartDashboard’s tabs of functions like IPS and AV across the top, they have one rulebase, and on each rule you can specify whether you want AV, IPS, malware protection, etc. There are icons on the right that you drag onto the rules you wish to apply IPS to.

The SE stated that it has a cool feature to detect dial-back control connections from internal zombies. He said it impresses clients during demos when they identify these connections in real time. Probably similar to CP’s Anti-Bot.

They have a global monitoring environment somewhat like Indeni. You can see all the logs, SNMP, etc., in one panel or individually. You can build filters to determine what you see. Very nice – I have been begging CP to include this feature for years.

The SE claimed they have a global provisioning environment that works. I did not see it, so I can’t comment much, but you can set passwords, dates, etc., across the whole environment. Somewhat like SmartProvisioning but global.

No SmartLog capability that I saw, but you can view all firewalls in one log and build filters. Not sure how fast it is compared to SmartLog.

Everything is GUI based, which is OK; not sure how it works on HUGE rulesets.

At the local level the hardware is custom chips (?) that map to their policy enforcement – I didn’t get this part. FYI, you do enforce policy on a per-interface basis like Juniper and Cisco. Not sure I like that, but OK.

The ruleset is application based but can also do port numbers; you lead with applications and users’ access to applications. I hope their IA works better than CP’s IA – I’ve experienced flaky IA behavior.

The CLI is all from the management station with a GAIA-like interface. You can’t get to Unix, which I absolutely don’t like. I guess you can’t log onto the firewalls themselves?

The SE talked about several ways to do port mirroring? I didn’t get all of that.


The verdict:

  1. Is PA a better centralized management product? Definitely yes.
  2. Is it more secure? Basically the same as CP.
  3. Is it easier to manage? From the little I saw, yes. One GUI, one ruleset, one global monitoring, logging, and provisioning environment.
  4. Is the management scalable? From the little I saw, conceptually yes, but I haven’t seen it in huge environments.
  5. Is it worth converting to a new firewall? This is the crux question. The product is sexy, but in the end what do you get? What is the delta? Is the delta big enough? The answer scales with environment size. If you have a small environment, possibly yes. If you have a large environment… I think the cost/benefit is not there at this point. The conversion, training, and support costs are so huge and the delta is so small.


My remaining questions:

  1. Will the GUI support huge environments? This has always been the breaking point for similar products.
  2. Performance in real environments? Unknown.
  3. Application control and Identity Awareness issues? New concepts always have new bugs, especially when they scale.
  4. Not software based, so it is very difficult to build labs to do testing in VMware. Huge for me personally.
  5. You can’t log into the CLI with Unix. Huge for me personally for debugging: I hate going through 10 layers of software to debug – you never know where the real problem lies.
  6. Is the delta worth it when you consider the conversion costs, etc.? No for huge shops.
  7. What is their support like? Unknown.

I would say PA is the best thing that ever happened to CP. CP has been resting on its laurels for far too long, and now the pressure of a true competitor exists to push MDM into a true single centralized management platform. I’m hearing rumors that CP has a new MDM in the making. Hopefully it can rise to the challenge.

Still an MDM nut,


Not Feeling the LOM

So I was working on our Smart-1 50 MDSs from home and had some problems. I had to get to the LOMs to reboot… and of course 5 of our 6 LOMs either hung at the web page or the console said it could not connect.

I don’t think anyone has used the LOMs for 6 months, so advanced aging set in. No excuse: when you need them, they aren’t there. Kinda like some ex-partners of mine… but that’s another drama blog, and I won’t go there.

Anyways, I had to either reset them in the web page, which was easy, or physically power off the box… unplug all the cables (network and power)… let it sit… chant my chakras while finding my inner child… ummmmmm… then reboot.

They came back up.

CP support was great. If this doesn’t work, they have some magic passwords and Phoenician chants they use to restore the LOMs.

Do you LOM me or NOT?


========= Random Updates Until I fix this ==================

On 12000s: Hit “TAB” to interrupt the boot. Then use the super secret password to get to the BIOS to reset the LOM.

2013 CPX Questions

So I’m heading out to 2013 CPX in DC this year again. If you ever read my posts from 2012, How to Attend CPX and CPX Review, the conference is mostly rah-rah, but the true value is getting to the developers! If you approach it right you will run into a gold mine of fantastic people and knowledge.

These are the questions I’ve been picking up from my customers and myself that I will hunt down this year. Please feel free to add to the list. If you’re going to the conference and want to meet up, just drop me a line.

  1. How does OPSEC SIC work in an MDM environment? Is there a global cert that enables LEA for all domains?
  2. How does packet processing work with multiple blades? Is it done in parallel?
  3. Compare PA global management against CP global management.
  4. Memory model – Why is 64-bit restricted to limited amounts of memory like 64 GB? Old 32-bit recognizes 64 GB of memory and so does 64-bit, but how do we know whether they are actually using it?
  5. Disk space – Why doesn’t the kernel recognize > 2 TB and 1K blocks?
  6. NAT and packet flow through VSX. How is it different than R65? How does the kernel keep the virtual spaces separate, if at all?
  7. Issues with R75.40 VSX we should know about or work around (advanced routing?)
  8. MDM enhancements?
  9. SmartLog enhancements?
  10. What is in VSX R75.40? New architecture? What are problems we should avoid?
  11. Kernel flow of a packet: firewall to IPS to application control to DLP, etc. – all separate processes?
  12. The world is going virtual, away from routed and switched fabric, yet CP is heading into appliances at the tail end of the appliance craze… chasing the market. What is the next step into the virtual world – integrating into VMware? “VMware should buy CP” seems like the answer.
  13. Why is Identity Awareness so hard? Why do identities keep dropping? What are some better debug tools? What do the existing tools really do? What is the underlying architecture?
  14. Why does clustering fail over when pushing policy?

Cool fw ctl debug notes cheat sheet

I found these in my archive and can’t find them anywhere else on the Internet so decided to share.

fw-ctl-debug-odd-pages and fw-ctl-debug-even-pages cheat sheets

Sergei Shir of CP Intl Support did a great job creating this. I’m going to see if he has an update.

Cluster Debug Notes

There are tons of these, but I wanted to keep my own copy from Sergei so I can update.

Enabling VMAC is not related to cluster failover
VMAC is intended to eliminate problems with ARP cache on L2/L3 networking devices

The issues with different values in ‘Required interfaces’ are solved in the following way:

A) Make sure the configuration of interfaces is identical on all cluster members (i.e., pairs of interfaces are assigned the same subnet mask, the total number of interfaces is identical, etc.)

NOTE: on GAIA you should double-check the configuration – the output of ‘show interfaces’ in CLISH must match the output of ‘ifconfig’ in Expert mode

B) SmartDashboard – cluster object – ClusterXL – Topology – Edit
— get the interfaces with topology from each member
— configure VIP addresses
— OK
— File menu – Save

C) SmartDashboard – install policy onto cluster object

D) on each cluster member check that the policy was installed
# cpstat -f policy fw

E) reboot each member

F) output of ‘cphaprob -a if’ must be identical on all cluster members

If these outputs differ on cluster members, then it is necessary to collect the debug of cluster configuration from each member

# fw ctl debug 0
# fw ctl debug -buf 32000
# fw ctl debug -m cluster + conf stat pnote if

# fw ctl kdebug -T -f 1>> /var/log/$(uname -n)_cluster_debug.txt 2>> /var/log/$(uname -n)_cluster_debug.txt

Install policy in SmartDashboard

press CTRL+C
# fw ctl debug 0

Send for analysis
— CPinfo file from each member
— /var/log/HOSTNAME_cluster_debug.txt from each member
— /var/log/messag* from each member
— CPinfo file from MGMT Server
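For convenience, the capture steps above can be collected into a small script. This is a sketch only: it echoes each command rather than executing it, so it can be reviewed before being run for real on a live cluster member.

```shell
#!/bin/sh
# Sketch of the cluster debug sequence from the notes above.
# Echo is used so the steps can be reviewed safely; remove the
# echo quoting to execute for real on a cluster member.
OUT="/var/log/$(uname -n)_cluster_debug.txt"

echo "fw ctl debug 0"                                # reset any existing debug flags
echo "fw ctl debug -buf 32000"                       # allocate a ~32 MB debug buffer
echo "fw ctl debug -m cluster + conf stat pnote if"  # enable the cluster debug topics
echo "fw ctl kdebug -T -f >> $OUT 2>&1"              # stream kernel debug until Ctrl+C
```

After Ctrl+C, finish with `fw ctl debug 0` to turn debugging off, then collect $OUT along with the CPinfo files listed above.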

=====================  Mike Notes ========================

fw ctl zdebug -m fw + drop (SK80520)

fw ctl zdebug -m cluster + select (SK35211)


Super detailed CP clustering info



Enable/Disable Sync: fw ctl setsync start/stop

Print out sync stats: fw ctl pstat

