The love/hate relationship collectively known as DevOps


Pretty much everyone in information technology has heard the various buzzwords attributed to the consolidation of technologies and practices between teams – the creation of an intersection of skills that is vital to success. By leveraging strategic and technical partnerships, businesses small and large can harness a greater pool of knowledge and talent to increase their scope and overall reach. Such partnerships between IT Operations and Developers (DevOps) or Security (SecOps) are commonplace, and an accepted standard in today’s world.

Unfortunately, the relationship between these two entities isn’t always the rose garden it’s painted to be. For a variety of reasons, not every personality or trait is appreciated (or wanted, or tolerated), and the mismatch can often be the cause of tension, arguments, and a general clash of interests. As a result, two entities that are supposed to be working together often pull in opposite directions, causing productivity chaos in their wake. Here, we’ll look at the most common source of irritation in technology pairing – DevOps.

For as long as I can remember, developers and operations have been expected to work together to define the application development timeline and, in a similar vein, the subset of technology that should underpin that roadmap. In most cases, however, frustration is borne out of multiple factors. On the one hand, developers typically adopt the view that the technology should support their deadlines and deliverables, and not vice versa. That is admittedly the way it’s supposed to work, but very frequently, the deep-seated root of an underlying technology issue is like a red rag to a bull for a developer. Let me explain. If a developer releases code that works fine in a test environment, passes UAT, and is promoted to production, the developer’s job is done, right? Unfortunately, no. It’s often the case (and I’ve been on the receiving end of this one several times) that the application performs poorly in the production environment – therefore, it must be a problem with the infrastructure.

In reality, that’s not always the case. Why? For a number of reasons.

  • The first is that developers do not fully understand (and in most cases do not need or want to) the underlying technologies that provide the necessary transport for their applications – they are simply informed by users that the experience is poor, and that it reflects badly on them. Clearly, this isn’t the level of service they want to provide, so a full code inspection gets under way. Most often, the developers certify their code to be clean, well written, and efficient. IT Operations are usually not in a position (or qualified) to argue the point, and begin the process of identifying a potential bottleneck. Depending on the level of control afforded to IT Operations, the analysis may not always yield an immediate smoking gun – particularly if the DBA function is managed by an external party who insists that nothing is wrong with the database itself.
  • The second is that developers have key deliverables and milestones to meet, and could really do without infighting or internal politics getting in the way of a particular objective. In addition, frequent DevOps meetings often descend into a platform for finger pointing – a “shit slinging” exercise where one department blames the other for failings or inefficiencies that caused a delay or deviation in progress. Developers take an agile approach to application evolution, and the 5:30pm deployment request to IT Operations often doesn’t go down well – particularly in a controlled environment (such as Sarbanes-Oxley 404 or PCI DSS).

Such an agile approach can lead to frustration and a “red tape” mentality, where application development blames IT Operations for a failure to release an update on time whenever they believe the infrastructure does not support their requirements.

On the flip side, IT Operations can easily be offended by constant negative remarks about the infrastructure they implement and support, and, human nature being what it is, will defend their castle – effectively blaming the developers for delivering a poorly written application that performs slowly because it contains bugs, or uses badly written SQL statements to retrieve data. This creates a feeling of contempt between the departments, and managers end up locking horns in a bid to defend their respective teams – and reputations. Arguments over who is at fault become commonplace, and the two departments generate so much negative energy in the workplace that going to work each day becomes less appealing as time goes by.

As if the relationship wasn’t damaged enough, there’s nothing worse than a developer who has knowledge of infrastructure. Rather than suggest improvements, this type of character often forces their views of how things should be done onto those who may not be receptive to the ideology, and who see it as unconstructive criticism. I’ve seen this myself, where developers with knowledge beyond what their role requires will exercise this “authoritative view” on whoever will listen – often their direct line manager, who in fairness is fed up with a constant barrage of developer complaints about the infrastructure and wants change. As a result, the network and server landscape becomes a “kaleidoscope” view of what it really is – and of what is achievable. You wouldn’t expect 0–60mph in under 3 seconds from a low-performance car, so it’s unrealistic to expect 1Gbps out of a 10Mbps MPLS circuit.

Now let’s look at the other side of the coin. How would a developer like it if their applications and associated code were under constant external scrutiny? I know how to write code, and am proficient across the full LAMP stack – but I would never interfere with the developers or force my opinions on them; it’s not my responsibility, or my business, to do so. Yet buttons that are typically reserved for nuclear warfare – and really shouldn’t be touched – are often pushed, which results in bad feeling, angry staff, and a general unwillingness to offer any sort of service or cooperation – particularly if someone goes out of their way to find fault with literally everything.

  • Slow code? Blame the antivirus.
  • Slow performance? Blame the endpoint software.
  • Slow processing speeds? Blame the network.

…even if the network was never designed to handle what’s being pushed down it, was implemented years ago, and has never been reviewed or upgraded. The fact that you’re trying to push a football down a hosepipe is irrelevant. Or is it?
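The hosepipe point is easy to quantify. As a back-of-the-envelope sketch (illustrative figures only, ignoring protocol overhead and contention), here is roughly how long a 1 GB transfer takes at different link speeds:

```python
# Rough transfer-time comparison: why a 10 Mbps circuit feels like a hosepipe.
# Figures are illustrative; real links lose further capacity to overhead.

def transfer_seconds(size_gb: float, link_mbps: float) -> float:
    """Time to move size_gb gigabytes over a link of link_mbps megabits/sec."""
    megabits = size_gb * 8000  # 1 GB = 8000 megabits (decimal units)
    return megabits / link_mbps

for mbps in (10, 100, 1000):
    print(f"1 GB over {mbps:>4} Mbps: {transfer_seconds(1, mbps):>6.1f} s")
```

At 10 Mbps, the same payload that clears a gigabit circuit in about eight seconds ties up the line for over thirteen minutes – the football-down-a-hosepipe effect in numbers.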

What can also make matters worse is an infrastructure department set in its ways – one that doesn’t so much refuse to accept or foster change as doubt that the suggested technology is good for the business, from both a security and a reliability angle. As they tend to have the final authority on what goes in and what stays out, the developer’s desire for a blazing Ferrari often turns out to be nothing more than an ageing horse and cart. Whilst this can easily be seen as obstructive – a deliberate misuse of power to prevent progress – there are good reasons why an infrastructure department will refuse a technology change they consider unsuitable. It’s not always malice.

Here’s an example. If you have been using a private managed network for years, and it’s been “stable” (a term used loosely here, as stability can also come at the cost of productivity), why would you consider moving to a much faster unmanaged internet VPN that could be subject to an unprecedented (and unwanted) DDoS attack? Now look at the other side of the argument. If the current topology does not permit a flexible approach to application deployment, it imposes limits and restrictions on what should be agile. It may serve the business well from an infrastructure perspective – basic site links, centralised services such as telephony and thin frame (there goes my age again) technology – but the evolution wheel is constantly spinning. A frustrated developer will then ask the blisteringly obvious question:

If my internet connection at home is 100Mb, why are we running operations on a line one tenth of the speed for double the cost ?

That’s a good question, and it deserves a satisfactory response. Bandwidth is cheap these days, and with gigabit circuits competitively priced, the technology is much more accessible. The downside of moving from private to public networks is security – a topic that deserves an article of its own. The real issue is the immediate trade-off between a dramatic increase in bandwidth and the exposure to threat: moving from private to public networks attracts threats like iron filings to a magnet, and without adequate preparation and protection, your security posture will suffer as a result. Cost savings on internet circuits compared with managed lines are quickly realised, but any project to move to faster circuits could turn out cost-neutral, or even negative, once you factor in the additional security requirements. Finally, if you are running time-sensitive services such as voice or video, the general lack of CoS/QoS control could make a move to an unmanaged circuit prohibitive.
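To make that trade-off concrete, here is a minimal sketch using entirely hypothetical pricing that echoes the developer’s complaint: a managed line at a tenth of the speed for double the monthly cost, compared on cost per megabit:

```python
# Hypothetical monthly pricing, echoing the question above: the managed line
# is a tenth of the speed at double the cost. Figures are invented.
managed_line = {"mbps": 10, "monthly_cost": 500.0}    # assumed MPLS pricing
internet_line = {"mbps": 100, "monthly_cost": 250.0}  # assumed internet pricing

def cost_per_mbps(link: dict) -> float:
    """Monthly cost per megabit per second of bandwidth."""
    return link["monthly_cost"] / link["mbps"]

print(f"Managed:  {cost_per_mbps(managed_line):.2f} per Mbps")
print(f"Internet: {cost_per_mbps(internet_line):.2f} per Mbps")

# The raw 20x difference in unit cost shrinks once the public circuit's extra
# security spend (firewalls, VPN termination, DDoS mitigation) is budgeted in,
# which is why such projects can end up cost-neutral.
```

The per-megabit arithmetic is the easy half of the business case; the security line items are what decide whether the move actually saves money.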

The above justification isn’t enough to hold back the river, though. Change is inevitable, but sometimes needs a catalyst to accelerate the process. What I’m alluding to here is the need for both developers and infrastructure to reach a compromise – one that does not negate security or productivity. Striking a balance is difficult, but as long as both departments realise they won’t get their way with everything, a mutual agreement can often be reached. Over the years, I’ve been exposed to a number of tactics to improve the relationship between developers and operations, ranging from going out for drinks, to sitting together (or near each other) in the same bank of desks, to team-building activities, and much more. Whilst most of these attempts have their merits, they don’t always work. Establishing a decent relationship between developers and IT Operations is not an easy task – you need to work at it, learning to appreciate the individuals and their skill sets whilst, at the same time, learning not to bite when something is said that pushes all the wrong buttons. The last point here is that the relationship really can work – if both departments actually want it to. There is a need for some adjustment, sometimes even on the personnel front, but this should be handled on a case-by-case basis.

Have you ever been in a similar situation? Over to you…

About the Author
Mark Cutting



Mark Cutting is the founder of Phenomlab.com and Inocul8r.net. He is a network, security and infrastructure expert with more than 27 years’ service in the Information Technology sector. Mark has a significant eye for detail, coupled with an extensive skill set. Having worked in numerous industries including trading, finance, hedge funds, marketing, manufacturing and distribution, he has been exposed to a wide variety of environments and technologies alike.

Comments
Mandy Robinson:

I haven’t been and I feel lucky that I haven’t. Thank goodness!

Lucas Baines:

Well, I’m new to this game and reading this for the first time. Thanks to Mark, I will surely come back to read more articles of yours!