DevOps Report 2018 notes
  • It’s in landscape.
  • No longer at Puppet, eh? Also, Puppet isn’t sponsoring it - what kind of politics could be behind that!! But Pivotal’s all up in it!
  • Datical PR quote!
  • First, how does this “cluster” stuff work? Coté has issues:
  • I guess it “dynamically, machine lurn” groups people together based on some characteristics.
  • It seems like, if you select those characteristics right, then you end up with the clusters you expect.
  • Like, “I want to find all people who deliver 5 times a day. HEY LOOK! There are people who deliver 5 times a day! And they use open source.”
  • Versus: “when we look at organizations that deliver 5 times a day, they use open source.”
  • All the cluster-talk(tm) makes it seem like there’s some magic discovery, therefore more science, and therefore more truth in it. (A toy sketch of this feature-selection worry is below, after these notes.)
  • And then the demographics are 40% from “technology” and 15% from banking…so like half of respondents are the elite of the tech world already, with the rest (except “other”) in single digits.
  • “This year we examine the impact that cloud adoption, use of open source software, organizational practices (including outsourcing), and culture all have on software delivery performance.”
  • Basically composed of “technology” (40%) and “financial services” (15% - banking and insurance, I’d guess) people, 50% from the US, 22% from Europe. Heavy on large companies: 38% work at 10,000+ person companies - probably banks and Google, then? (But the # of servers breakdown…?)
  • Clarifies that it’s been custom software all along: “What we referred to as IT performance in earlier research is now referred to as software delivery performance to differentiate this work from IT helpdesk and other support functions.”
  • This might be a good source for toil benchmarks (that is, how much toil you could expect to be doing to get by): 
  • “In our survey, 67 percent of respondents said the primary application or service they were working on was hosted on some kind of cloud platform” (pg. 34) - this could be something, or nothing. Maybe “hosted on some kind of cloud platform” means public or private, but then “hosted” is an annoying word choice. Anecdotally, most organizations I talk with are hosting their primary apps in private cloud/no cloud… so… demographics, here? That said, this is supposed to be caus-o-lation: the people doing well are in public cloud, which seems more believable.
  • Next page shows that 32% of respondents hosted their “primary service or product” in private cloud (not exclusively).
  • “Only 24 percent of respondents report using a PaaS. However, respondents that do most of their work on a PaaS are 1.5 times more likely to be in the elite performance group.”
  • Pg. 44: “Analysis shows that low-performing teams are 3.9 times more likely to use functional outsourcing (overall) than elite performance teams, and 3.2 times more likely to use outsourcing of any of the following functions: application development, IT operations work, or testing and QA. This suggests that outsourcing by function is rarely adopted by elite performers.”
  • Good story here (pg. 45): “Outsourcing tends to lead to batching work—and thus long lead times— because the transaction cost of taking work from development to QA to operations is so high when these are held in outsourced groups. When work is batched into projects or releases, high-value and low-value features get lumped together into each release, meaning that all of the work—whether high or low value—is delivered at the same speed.”
  • And: “Important and critical features are forced to wait for low-value work because they are all grouped together into a single release.”
  • Monitoring vs. observability (pg. 53) - a toy sketch of the difference is below, after these notes:
  • Monitoring: “is tooling or a technical solution that allows teams to watch and understand the state of their systems and is based on gathering predefined sets of metrics or logs.”
  • Observability: “is tooling or a technical solution that allows teams to actively debug their system and explore properties and patterns they have not defined in advance.”
  • WORD CLOUD! lulz.
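
A toy sketch of the clustering worry above. This is not the report’s actual method or data - just an illustration, with made-up numbers, of how the features you hand a clustering algorithm largely determine the clusters it hands back:

```python
# Toy sketch only: fake survey data, not the report's method or dataset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Pretend survey answers: deploys per day and lead time (hours) for 300
# respondents. Two populations are baked into the fake data on purpose.
elite = np.column_stack([rng.normal(5, 1, 150),      # ~5 deploys/day
                         rng.normal(2, 0.5, 150)])   # ~2 hour lead time
low   = np.column_stack([rng.normal(0.1, 0.05, 150), # rare deploys
                         rng.normal(200, 50, 150)])  # weeks of lead time
X = np.vstack([elite, low])

# Cluster on exactly those features...
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# ...and, surprise, the clusters are "deploys a lot" and "deploys rarely."
for k in range(2):
    members = X[labels == k]
    print(f"cluster {k}: n={len(members)}, "
          f"mean deploys/day={members[:, 0].mean():.2f}, "
          f"mean lead time={members[:, 1].mean():.1f}h")
```

The point being: feed “deploys per day” in as a feature and you’ll get a “deploys many times a day” cluster back out. Whether that cluster also shares other traits (open source use, say) is the more interesting claim, and it reads as correlation, not magic discovery.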

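And since the monitoring vs. observability definitions above are abstract, here’s a toy sketch of the distinction - not from the report and not any particular tool’s API, just the general shape: monitoring answers questions you defined in advance, observability lets you ask ones you didn’t.

```python
# Toy sketch: "monitoring" = predefined metrics; "observability" = keep rich
# events so you can ask questions you didn't think of in advance.
from collections import Counter

# Monitoring: a metric decided before anything runs (requests by status code).
request_count = Counter()

def record_metrics(event):
    request_count[event["status"]] += 1

# Observability: hang on to the structured events themselves.
events = []

def record_event(event):
    events.append(event)
    record_metrics(event)

# Simulated traffic.
for e in [
    {"path": "/checkout", "status": 500, "region": "eu", "ms": 1200},
    {"path": "/checkout", "status": 200, "region": "us", "ms": 80},
    {"path": "/search",   "status": 200, "region": "eu", "ms": 95},
]:
    record_event(e)

# The monitoring question, defined up front: how many requests by status?
print(dict(request_count))  # {500: 1, 200: 2}

# An observability-style question, invented after the fact:
# "are the slow checkout errors all coming from one region?"
print([e["region"] for e in events
       if e["path"] == "/checkout" and e["status"] >= 500 and e["ms"] > 1000])
```
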
Summary of how to be DevOps-cool

So. We’re left with the stuff on the left as things you should do if you want to do software well, right?



Basically, these can be summarized as:

  • Small batches of code per release - both in the amount of features (a few lines of code if releasing many times a day, up to 2-5 “stories” if releasing weekly, the latter based on Pivotal anecdotes) and in release cadence (multiple times a day to, at most, once a week).
  • Automated release management and testing, but across all the things: the app, DB, networking/infrastructure. (A rough, hypothetical pipeline sketch follows this list.)
  • Arguably, you could make “check everything into version control and just mainline” a part of that?
  • Be able to manage and diagnose problems in production (monitoring and observability).
  • Autonomous, multi-role teams.
  • Retrospective-driven learning (and, we’d assume, changing based on those learnings).
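
As a rough sketch of the “automate the whole release path” bullet above: the stage names and make targets below are placeholders I made up, not anything from the report - the point is just the shape of it, where every small batch goes through the same automated tests, DB migration, infrastructure change, deploy, and post-deploy check, and the pipeline stops on the first failure.

```python
# Hypothetical release pipeline driver. The make targets are placeholders,
# not real project tooling -- in practice this lives in a CI/CD system.
import subprocess
import sys

STAGES = [
    ("unit + integration tests", ["make", "test"]),
    ("database migration",       ["make", "db-migrate"]),
    ("infrastructure change",    ["make", "infra-apply"]),
    ("deploy app",               ["make", "deploy"]),
    ("post-deploy smoke test",   ["make", "smoke-test"]),
]

def run_pipeline():
    for name, cmd in STAGES:
        print(f"==> {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            # Fail fast: a small batch that breaks here is cheap to diagnose.
            print(f"stage failed: {name}", file=sys.stderr)
            return 1
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```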

How does this differ from “not DevOps”? (I hesitate to call it “waterfall” or “traditional,” but that’s likely accurate.) “Not DevOps” is (negatively) characterized by:

  1. Long release cycles - they might have monthly developer builds/sprints, but releasing to production is done every 2 to 120+ months.
      • This usually implies - though does not always mean - big up-front requirements rather than “backlogs” that can easily change as user needs/priorities shift. This leads to “you built exactly the right software for two years ago, which the user doesn’t need now.”