
In 2016, DevOps is no longer a mysterious, edgy new concept – it’s a well-established set of principles and practices driving distributed IT application and systems development and delivery, from startups to Fortune 500 enterprises.

DevOps extends the tools and processes used in Agile development, QA automation and continuous integration, applying them down the stack to the IT systems responsible for application delivery. Contrast this with traditional IT development, where programmers, testers and sysadmins operate in siloed teams – a model that has proven inefficient and error-prone at Internet scale and Internet pace.

DevOps: Before & After

Infrastructure as code

This shift has been made possible by two key technology trends that have transformed IT over the last ten years: virtualization (e.g. VMware, libvirt) and Linux-based open source development tools (e.g. git, Python & Ruby, Jenkins).

Rather than thinking of IT servers as individually maintained physical boxes, virtualized servers are treated as disposable “instances” that can be automatically created, provisioned, updated and destroyed in response to load and changing business needs. System configuration is abstracted into high-level descriptions and templates that can be updated, tested, deployed and audited using the same revision control systems and workflow used to develop the applications they’re serving.
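As a minimal sketch of the idea (using only Python’s standard library, with hypothetical hostnames and addresses), a server’s network configuration can be expressed as a template plus per-instance parameters, both of which live in version control alongside the application code:

```python
from string import Template

# Hypothetical configuration template; in practice this would be a file
# tracked in the same git repository as the application it serves.
INTERFACE_TEMPLATE = Template(
    "hostname $hostname\n"
    "interface eth0\n"
    "  address $address/$prefix\n"
)

def render_config(hostname, address, prefix):
    """Render a concrete configuration from high-level parameters."""
    return INTERFACE_TEMPLATE.substitute(
        hostname=hostname, address=address, prefix=prefix
    )

print(render_config("web-01", "10.0.0.5", 24))
```

Because every concrete configuration is derived from the template, an instance can be destroyed and recreated identically, and a change to the template is reviewed, diffed and rolled back like any other code change.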

Barriers to adoption

At the next layer down the stack lies the network.

Historically, network engineering has been a highly specialized world, with vendor-specific tools, appliances, OS environments and certifications. While the same challenges of scale addressed by DevOps do exist, the preconditions haven’t been as conducive to rapid adoption of DevOps-like practices for the network (aka NetDevOps).

By contrast, network management practices can seem archaic: router and switch boxes are individually configured by typing console commands or pasting in configuration files (which may have been automatically generated, but by in-house or specialized systems rather than DevOps-centric, general-purpose configuration orchestration systems such as Puppet, Chef, Ansible and SaltStack). And while RANCID has proven a valuable tool for post-hoc configuration revision control, it’s a far cry from a git workflow where configuration change requests can be automatically tested, merged and pushed to production.
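The gap can be sketched concretely (a hypothetical Python example, not any particular vendor’s tooling): in a git-based workflow, a check like the one below would run automatically against every configuration change request, rejecting a bad change before it is merged, let alone pushed to a production device.

```python
def validate_config(config_text):
    """A deliberately simple pre-merge check: every 'interface' stanza
    must carry an 'address' line. Real pipelines would lint far more."""
    errors = []
    current_iface = None
    has_address = False
    for line in config_text.splitlines():
        stripped = line.strip()
        if stripped.startswith("interface "):
            if current_iface and not has_address:
                errors.append(f"{current_iface}: missing address")
            current_iface = stripped.split()[1]
            has_address = False
        elif stripped.startswith("address "):
            has_address = True
    if current_iface and not has_address:
        errors.append(f"{current_iface}: missing address")
    return errors

good = "interface eth0\n  address 10.0.0.5/24\n"
bad = "interface eth0\n  description uplink\n"
assert validate_config(good) == []
assert validate_config(bad) == ["eth0: missing address"]
```

The point is not the specific check but where it sits: before the merge, in the same pipeline that tests the application code, rather than in a post-hoc archive of what was already deployed.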

The culture of specialization also means that most network and security teams continue to reside behind ticket walls, with responsibilities that begin and end with their network SLAs. Network engineers are less likely than their IT and sysadmin counterparts to be familiar with the “well known” programming tools that make up the DevOps stack. And they are right to be wary of seismic shifts in controlling a system where a one-character typo in a netmask can bring down an entire data center.
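That wariness is precisely what automated pre-deployment checks are meant to address. As a sketch (Python standard library only, with hypothetical addresses), the ipaddress module will reject a malformed prefix before it is ever pushed to a device:

```python
import ipaddress

def check_interface_address(addr):
    """Return the validated interface address, or raise ValueError.

    ip_interface() rejects impossible prefix lengths and malformed
    addresses, turning a fat-fingered netmask into a failed test
    instead of an outage."""
    return ipaddress.ip_interface(addr)

check_interface_address("10.0.0.5/24")        # valid, passes
try:
    check_interface_address("10.0.0.5/42")    # typo'd prefix length
except ValueError as exc:
    print(f"rejected before deployment: {exc}")
```

A test like this costs nothing to run on every change, and catching the typo in a pipeline is a far gentler failure mode than catching it on the console of a dead router.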

The adoption of NetDevOps, while in our view inevitable, is therefore coming at a more considered pace – as the DevOps movement continues to mature, it will evolve to embrace network engineers as much as vice versa. At Opengear, we recognize the enormous benefits of NetDevOps adoption, as outlined by Ken Wilson in his presentation at AusNOG 2016, but we equally recognize the considerable challenges, and we continue to support our users to ease the transition.