DevQoSOps at PREVAIL2021

At @IBM-PREVAIL 2020 we coined the term ‘DevQoSOps’ as the title for a hands-on workshop in which learners built a DevOps pipeline for a containerized application on the cloud.

The concept of DevQoSOps is, however, much broader than the scope of the workshop. DevQoSOps refers to the enrichment of your DevOps processes and pipelines with ‘non-functional’, ‘quality-of-service’ stages and aspects. It addresses not only security, but also resilience, availability, performance, capacity and any other service level quality that your stakeholders deem important and expect implicitly.

Although these service level qualities are expected implicitly, they are mostly not built in by default. Ensuring that a software service meets its service level expectations requires, today as in the past, specialist skill and knowledge.

The big difference with traditional software development is that today scarce specialist skill and knowledge can be made widely available through automation. And that is where the DevQoSOps pipeline comes into the picture!

At PREVAIL 2021 we will offer version 2 of the DevQoSOps workshop, now with a more interesting application, a more elaborated performance stage and a NEW availability stage! Don’t miss it! Join the PREVAIL LinkedIn event to indicate your interest and we will keep you updated on the latest news…

To join the LinkedIn event:

To submit a paper for PREVAIL 2021:

To find more information on PREVAIL:

PREVAIL2020x24: the place to be if you’re passionate about IT resilience!

Dear IT practitioners, technologist(a)s and Academy members,

I hope that you and your loved ones are safe and that you are finding new ways to be essential while working from home.

More than anything else, the corona crisis is teaching us the importance of resilience; resilience of our society as a whole and resilience of the IT that supports it. IT has become one of the foremost enablers keeping society moving today; it has become a utility like water or electricity…

Now that the majority of the working and school-going population is forced to stay at home, the traffic on the internet and home networks has dramatically increased. The IBM business resilience team has provisioned 60,000 employees in India with remote access, on top of the 30,000 that it had originally planned for. Various companies are stepping up to provide free laptops and internet access to school children. Online sellers and home deliverers are doing well, whereas hotels, restaurants and cafes are struggling.

Always ON, fast and secure IT for everyone is no longer a vision for some far-away future; it is essential here and now!

PREVAIL2020x24 is the place to be if, like us, you are passionate about IT resilience!

PREVAIL is a conference by and for technical professionals, empowered by the IBM Academy of Technology.

Unlike Tokyo2020, PREVAIL2020x24 will NOT be rescheduled! After all, resilience is about keeping business running, and we are adapting to keep PREVAIL online, online.
Our goal is to give you plenty of opportunity to share your experiences with and your views on resilience, performance, security, SRE and related topics…

To make the event more accessible to a large worldwide audience, PREVAIL2020x24 will be hosted as an online, follow-the-sun event. It will run for 24 hours in 4 tracks (Performance, Availability, Security, SRE/Resilience), starting on September 15 at 9:00 Sydney time and ending at 15:00 San Francisco time. It is open to IBMers and IBM customers.

We are sure you’ll have lots of good stories to share. We are interested in hearing how you kept your services available in the light of overwhelming volumes, how your careful resilience engineering over the last decade ensured your systems would cope with unexpected demand, how your cloud app was engineered to scale and scaled when the time came.

The call for papers is still open, so do not hesitate to share your abstracts.
More information can be found in EasyChair:

Prevail 2020 Toronto


Are you interested in performance, such as good response times and high throughput in your application? You don’t want your application to be hacked? You don’t want your work to end up in the press because it was not available? Then the PREVAIL 2020 conference is for you!

Announcing the September 2020 Prevail conference in Toronto, Canada, presented by the IBM Academy of Technology Performance and Availability Community of Practice and IBM CATA (Canadian Academy of Technology Affiliate). This is a conference for and by technical professionals, where you can build your eminence by presenting a lecture or leading a discussion or workshop, and (for IBM speakers) earn a teaching badge.

Resilience is booming. Clients are showing renewed interest. New ways of working such as Agile and Cloud are yielding new resilience capabilities. Always in the background there are complex problems to be solved.

IBM has a heritage of engineering and managing for Performance, Availability and Security. Most of our clients are now heavily using some kind of cloud or containers or micro-services and may be enjoying the benefits, for example flexibly scaling out to support peak load or to manage availability incidents or to face cyber threats.

What does it mean to engineer resilience in today’s technology landscape? Do our customers still need resiliency engineering? How do you help them enjoy the benefits of new capabilities while avoiding unforeseen pitfalls? What does it mean to engineer resilience for traditional business-critical systems, while other applications in the ecosystem are moving to microservices, Cloud and DevOps? Are you solving complex resilience problems in Cloud, IoT, Quantum or Blockchain? Have you successfully integrated resilience engineering into Agile?

I would be glad if many of you could submit presentations for the conference. In 2019 we had a great event in Munich; this year it’s Toronto, so it is a great opportunity to not only expand your knowledge, but also your network!

See you in Toronto!


On October 14-16 the IBM Academy of Technology will host the PREVAIL 2019 conference at IBM’s Watson IoT Center in Munich.

The conference is devoted to the non-functional or quality-of-service aspects of designing and delivering IT services. Its subtitle is ‘Delivering chapter 2 of the journey to cloud fast, always on and secure’.

After an igniting welcome from Richard Hopkins, president of the Academy of Technology, we are offering many interesting keynotes and a large variety of lectures, posters, panels, unconference and hands-on workshops in three parallel tracks. Track topics are performance, availability and security… You may register for one track or hop between tracks as you like. 

We have contracted many renowned speakers, such as John D Vasquez, senior software engineer and cybersecurity specialist at the IBM Watson IoT Center, Chris Winter, emeritus IBM fellow and founder of the worldwide performance community, Simon Whelband, chief system architect and always-on advocate at A.P. Moeller Maersk, Ingo Averdunk, distinguished engineer and SRE advocate, Surya Duggirala, cloud performance guild leader, and many other good speakers.

The Watson IoT Center was chosen on purpose as the event’s location, and we are very grateful to its leader, past Academy president Andrea Martin, for her hospitality! Take the virtual tour of the site and you’ll understand what is so inspiring about this location.

The main conference is open to all IBM employees and invited IBM customers. The meetup on Monday evening is open for everyone who is interested in resilient, fast and secure IT services in the era of Agile, DevOps, SRE, cloud and containers. 

There is no entrance fee. We only expect you to contribute to the challenges and questions raised, through a poster, a blog or tweet, or your active participation in the event.

This event is organised by and for technical professionals from the worldwide performance and availability community and TEC DACH. We strive to look beyond the facades of marketing slogans in an attempt to identify today’s (very) real technical IT resilience challenges and find answers. You can be part of that discussion!

More information on the conference:

Virtual tour of the Watson IoT center:

Twitter hashtags: #ibmaot #wwpacop #tecdach #ibmprevail2019

My DEVOPS journey [4] join a workshop

As my colleague and mentee Mark had to travel to a faraway destination to attend to important private business, I replaced him in a DevOps SPoC workshop that had been planned several months earlier and for which he was registered.

He told me that the workshop would take place in Copenhagen on the 4th and 5th of July. I was looking forward to enjoying a couple of days in “CPH”, one of my favourite destinations in Europe, and told him not to worry – I would be happy to help him out and take his place.

He subsequently asked me if I could deliver a 30-minute presentation on the relation between DevOps and resilience. I said ‘Okay’, planning to share the outcomes of the ‘resilient DevOps’ initiative that I discussed in ‘my DEVOPS journey [3]’ and that we had conducted in the first months of the year.

Two weeks before the workshop the location was changed to Berlin, also an interesting European city, making it difficult to book non-stop flights for a decent price…but that did not stop workshop registrants from joining…

I arrived late on Wednesday night, not knowing what to expect. We started the next morning at 9:00 in one of the conference rooms in the business center of the hotel. I had only met three of the participants before; the rest of the group were new to me…

Although this international team had been working together for almost two years and had met in various European cities such as Rome, Copenhagen and Zürich, they were very welcoming and it wasn’t difficult at all to blend in and contribute. We started with presentations from various team members. 

Especially the presentation from Maite Gonzalez helped me understand how IBM sees the business perspective and the positioning of DevOps amongst its service offerings. 

The question to what extent DevOps (as well as the related topics of Agile and SRE) lends itself to clear-cut offerings is, in my view, an interesting one. One could argue that DevOps (and Agile or SRE) is just a ‘way of working’; the end goal of an IT project is always to deliver some functionality, i.e. one or more (micro-)services, and DevOps (and Agile or SRE) is one approach towards achieving that goal. In theory there could be other approaches… Looking at it from that angle, DevOps falls into the category ‘method and tools’ in terms of offering positioning.

My presentation was scheduled for later that afternoon and gave rise to a good, constructive discussion. The point that I made was that best practices to engineer IT solutions for resilience (and performance) remain valid, irrespective of the ‘way of working’ chosen, and that with ‘chapter 2 of the journey to cloud’ resilience (and performance) are becoming more important rather than less…

The bulk of the work was done in the breakout sessions. I decided to join the team that was preparing education materials for executives. Our first task was to finalize the agenda of the new education module. I figured that my experience as leader of the operational resilience education workstream in a UK-based financial institution would come in handy.

The first question to be asked was: ‘Who are the intended audience, what are the messages that they need to pick up and what is their attention span?’ The previous experience of team members Maite and José as executives helped us impersonate the audience and make the right choices in terms of the agenda and length of the education module. Jan-Paul, our team leader, guarded the quality of the content that we were planning to share and walked us step by step through the draft agenda.

While we were chewing on the setup of the executive education module, the other teams were investigating DevOps tools and fine-tuning deep-dive education for technical professionals.

Another topic that received plenty of attention was the DevOps learning path and the badges that technical professionals can earn to formally certify their knowledge and experience in this area. We investigated the possibility of reaching out to IBM’s large community of certified architects and exploring common ground to find fast paths to increase skills in DevOps architecture.

All in all it was a positive experience to work with like-minded technical professionals and some executives as well.

Thanks Frank Hollenberg for organising and leading this initiative – I think it is a good example of technical community empowerment and I am happy to help, going forward! And, by the way, apologies for finishing this ‘July Proudmoments’ blog on August 3rd, i.e. three days late…

Agile! Read and reflect…!

You have not heard from me for a long time – I know: I broke all the rules for effective blogging… but it took an exceptional book to break the silence. “Agile! The good, the hype and the ugly” by Bertrand Meyer is such a book. This book is a must read for all IT professionals who call themselves Agilists or are bewilderedly trying to understand and adapt to agile ways of working:

What makes this book so great? It is a razor-sharp analysis of agile principles, practices, techniques and artefacts. Having been actively engaged in software engineering since the early days of object orientation, the author has a deep understanding of and experience with old and new software engineering methods. His classification of agile practices into the four categories good-and-new, good-and-not new, not good-and-new, not good-and-not new has helped me understand much better what Agile really is about, what to keep and what to avoid.

Obviously only the practices in the top of the quadrant need to be remembered!

  • Good but not new are iterative development in short iterations, the recognition that change plays an important role in software engineering and the central role of code
  • Good and new are team empowerment, the daily meeting, freezing requirements during iterations, time-boxed iterations and the practical importance of testing.

Meyer convincingly exposes a number of rhetorical traps that the more evangelistic Agile texts are guilty of. The examples he gives of ‘proof by anecdote’, ‘slander by association’ and other tricks are sometimes quite funny.

Stating that a principle must be both abstract and falsifiable, he dissects the ‘principles’ from the Agile Manifesto, rejects some of them as ‘not really a principle’ and derives the (in his view) real underlying principles.

This leads to the following “usable list”:

In his agile-sceptic and ‘strict but righteous’ manner he walks through all the principles on his list and discusses their pros and cons. Regarding the principle to put the customer at the center, he points out that the best end users are probably also the busiest. As it is unlikely that any software development team can get their full-time attention, clever ways must be devised to make optimal use of the expert user’s scarce time.

But a fundamental danger of basing requirements on user stories alone is that they are limited to one or a few users’ views and do not necessarily uncover the underlying capabilities that the software needs to deliver. It takes deep thinking to do this type of fundamental requirements analysis!

Similar disadvantages apply to the principle to build no more than the bare working minimum and keep adding on to that so-called ‘minimum viable product’ (MVP) until it is complete. How would you like it if the building company that you hired to build your new house started by building a shed without any foundations, just to be able to show you something that remotely looked like a house? I guess the answer is clear.

Meyer points out that there is a basic difference between two types of complexity and illustrates that difference neatly with pictures of his favourite pasta: lasagne to illustrate additive complexity and linguine to illustrate multiplicative complexity. Unfortunately the MVP approach does not really help to solve problems that are characterized by multiplicative complexity.

Meyer argues that adding on functionality only works provided that the core application architecture is sound. So, fellow application architects, don’t despair! There is still hope for us. The architect profession has not yet become entirely obsolete because most business problems that we try to solve by IT suffer from multiplicative complexity and it takes a sound architecture to tackle these problems.

Meyer makes short work of agilists’ preference for open spaces. He argues that all programmers are different and that some are more productive when they can quietly focus on the job at hand.

And there is more… One by one Meyer discusses the agile roles and the agile artefacts and gives his learned view on them.

The book concludes with the assessment of what is good, hype and ugly. As that assessment deserves my and your full attention I will come back to it in a follow-on blog!

“Agile! The good, the hype and the ugly” by Bertrand Meyer.

There’s got to be a better way

Innovation in IT, there’s got to be a better way

Last year I installed Kerberized Kafka, with Ranger for authorisation and Solr for auditing, by manually installing hand-picked versions of the components it needs.

Working through the documentation took me a few months. And yes, I also did installations of Kerberized Kafka via open-source Ambari, which were not to my full satisfaction.

It took quite some time to get it all sorted out.

Before you have something running at a customer site, there are also the product selection and license negotiations to go through.

Though it is possible to innovate with such an approach, it takes the speed out of innovation, not to mention scaling an installation up or down.

It was not the first time that I worked through the documentation on how to install a product. New products come out frequently and the rate accelerates each year.

Before you know it, another quarter has passed and what if your customer does not want to adopt the product that you have prepared for?

Then, early in 2018, IBM offered me the chance to attend a training on IBM Cloud Private (with a focus on Kubernetes), where I learned how to install containerized IBM middleware.

Installing containerized middleware can now be done in minutes. And with a catalogue of about 50 containerized products in IBM Cloud Private at this moment, and more coming, it made sense to me that this is the route to go.

If you have IBM Cloud Private you can have a new middleware platform in an afternoon even with your own customized container images if you want that. Good job, IBM!
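For readers who have not used such a catalogue: under the covers it is a Helm chart repository, so an installation boils down to a couple of commands. The repository URL, chart name and release name below are illustrative placeholders, not actual ICP catalogue entries, and the commands use Helm 3 syntax (Helm 2, current at the time, used `helm install --name` instead):

```shell
# Register the chart repository that hosts the containerized middleware.
# The URL is a placeholder; ICP exposes its catalogue as a Helm repository.
helm repo add middleware-repo https://example.com/charts
helm repo update

# Install one release of a (hypothetical) middleware chart into its own namespace.
helm install my-mq middleware-repo/ibm-mq \
  --namespace middleware --create-namespace

# Check that the release was deployed.
helm status my-mq --namespace middleware
```

Scaling the platform up or down then becomes a matter of changing chart values, rather than re-reading installation manuals.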

And so, I started my Kubernetes journey somewhere during my preparations for the IBM Cloud Private boot camp early this year.

I bought Marko Lukša’s ‘Kubernetes in Action’, which is a very good buy.

And therefore, I needed a Kubernetes environment.


What do you need?

For those who are pressed for time and understand that time is money: 64 GB of RAM, 16 cores and 1 TB of SSD will do fine for starters. But please check the hardware prerequisites.

I started off by installing minikube, as well as a regular Kubernetes cluster with 1 master node and 3 worker nodes, in VirtualBox. It did work, although such an installation is very basic. For example, when you want a dashboard, you need to install it yourself. I had set it up with bridged NICs on my home network and all was fine and dandy, with very limited resource requirements.
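The minikube part of that setup takes only a few commands; flag names vary between minikube releases (older versions use `--vm-driver` instead of `--driver`), so treat this as a sketch rather than a recipe:

```shell
# Start a single-node cluster inside a VirtualBox VM.
minikube start --driver=virtualbox --memory=4096 --cpus=2

# The dashboard is not part of the basic install; minikube can
# enable the add-on and open the dashboard in a browser for you.
minikube dashboard

# Verify that the node is up and Ready.
kubectl get nodes
```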

During the ICP boot camp we installed a standard ICP installation consisting of 1 master node, 1 proxy node, 3 worker nodes and an NFS server in an afternoon.

After the training I attempted an installation of IBM Cloud Private Community Edition via the vagrant approach on my Lenovo W5210 with 16 GB of RAM in VirtualBox, which did not work out for me.

Next, I performed a single-node install, which succeeded. For this I used a VirtualBox guest with a NAT NIC. It turned out that I could only access the console from inside the image; port forwarding to the console did not work. Also, the 10 GB VM was severely short of memory.
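For reference, the kind of NAT port-forwarding rule I tried looks roughly like this (the VM name and rule name are illustrative); in my setup the console still remained unreachable from the host:

```shell
# Forward host port 8443 to guest port 8443 on the VM's first NAT interface.
# "icp-vm" is a placeholder VirtualBox VM name; adjust it to your own.
VBoxManage modifyvm "icp-vm" --natpf1 "icp-console,tcp,,8443,,8443"

# List the forwarding rules to verify they were registered.
VBoxManage showvminfo "icp-vm" | grep -i "Rule"
```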

After redoing the install with a bridged NIC I had something up and running, but it left little room on my 16 GB laptop with 8 cores. You might be lucky if your laptop has 32 GB.

I then managed to acquire an old computer with a 4-core i3 CPU, on which I installed a single-node ICP cluster directly on bare metal to get the most out of its 16 GB of RAM. The machine thus ran the master, the proxy and a single worker node. I had to exclude the management and vulnerability advisor nodes from the installation because of the lack of compute resources available.

I did manage to install a Jenkins pipeline on it to deploy the Blue Compute shop. It did work, but the amount of surplus memory was not a lot.

At the time that ICP came out I managed to get a new computer with an 8-core i7 CPU. I decided to set up a VM containing the master and the proxy on the i3, and 2 VMs containing worker nodes on the i7. I used RHEL’s KVM instead of Windows 10 VirtualBox running Ubuntu, and I must say that that was a very positive experience in terms of start-up times. I can recommend it.

The installation took almost 3.5 hours, and in retrospect I believe that this was because the i3 machine has an old-fashioned HDD. After all, the installation in the IBM Skytap environment took about 20 minutes or so, if I recall correctly.

I performed the “pod auto scaling” exercise from chapter 15 of Marko’s book. I used my 8-core i7 laptop to send calls to the i3 running the master and the proxy, and then I discovered that sometimes the auto-scaling did not work. The machine containing the master (controller) and the proxy (CNI) was overloaded during the test, as evidenced by a Linux utility called atop. When I throttled the rate down to 250 calls per second, the master had sufficient compute power to scale the deployment up.
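For those who do not have the book at hand, the autoscaling exercise boils down to a handful of kubectl commands. The deployment name and image below follow the book’s kubia example, but take them as an approximation from memory rather than a verbatim copy:

```shell
# Create a deployment and expose it as a service.
# Note: the pods need CPU resource requests, otherwise the
# autoscaler cannot compute a utilization percentage.
kubectl create deployment kubia --image=luksa/kubia:v1
kubectl expose deployment kubia --port=80 --target-port=8080

# Let the horizontal pod autoscaler add replicas when the average
# CPU utilization across the pods exceeds the target.
kubectl autoscale deployment kubia --cpu-percent=30 --min=1 --max=5

# Drive load against the service and watch the autoscaler react.
kubectl run -it load-generator --image=busybox --restart=Never -- \
  sh -c 'while true; do wget -q -O- http://kubia; done'
kubectl get hpa --watch
```

When the master is overloaded, as in my test, the `kubectl get hpa` output simply stops showing new replicas even though the CPU target is exceeded.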

Now, you understand why IBM has chosen a different topology for real life situations. IBM has service offerings to get you on the right track with adopting ICP from the start, which makes sense in a professional engagement.

I looked at my laptop and asked myself: where do I get a new computer to replace the i3? Oh, … well, … I bought a Lenovo from the X series (light to carry), moved the KVM running the master/proxy to my laptop and did the test again, using the Lenovo X as the load driver.

The throughput increased, but although the horizontal pod autoscaler did allow for scaling up to 6 replicas (with a target CPU utilization of 5%), it did not scale the number of pods higher than 4. None of the i7 nodes were the constraint, … well, you have guessed it: now the X had become the bottleneck.

The current setup looks as follows:


In September 2018 ICP 3.1.0 comes out.  I have a cunning plan, …

My DEVOPS journey [3] start an initiative

After having attended a class and having read a couple of books, it was time to start an initiative. Next to that, the focus needed to shift back to the core subject of this blog – IT performance – and the question had to be asked: “How does DEVOPS affect IT performance (and other qualities of service such as availability and security), and vice versa?” That question led to a number of derived questions…

  1. Should we focus on the IT that supports the DEVOPS processes or on the IT that supports the target solution?
  2. Should we look into performance engineering, performance testing or performance management and should we do so in parallel or in sequence?
  3. Should we write a point of view paper, do a proof of technology (PoT), produce education materials, blog or all of the aforementioned?

An IBM Academy of Technology initiative is an excellent way to work with a group of colleagues (business partners and/or customers can also be invited if and when interested) on some innovative technical topic outside the boundaries of one’s day job.

In 2017 such an initiative led to the publication of a whitepaper on ‘Agile performance engineering’ in which we folded proven performance engineering and management practices (PEMMX) into Agile ways of working. Many practitioners who reviewed that work have asked for an extension to DEVOPS with more ‘practical’ guidance on tools and techniques.

The recently started 2018 initiative will therefore dive into a number of practical questions related to DEVOPS and non-functional aspects. That means that we will not just produce a white paper; we have also stood up an (albeit limited) PoT environment, in which we installed a Kubernetes master and two worker nodes, with which we can do some experiments.
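For the curious: a master with two workers like ours can be stood up with kubeadm along the following lines. The pod network CIDR, the add-on manifest URL and the join parameters are placeholders; `kubeadm init` prints the exact join command to use:

```shell
# On the master node: initialize the control plane.
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a pod network add-on, e.g. flannel (the manifest URL changes
# between releases; check the flannel project for the current one).
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml

# On each of the two worker nodes: join the cluster using the token
# and CA hash printed by 'kubeadm init' on the master.
kubeadm join <master-ip>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```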

Stay tuned for updates on our progress and findings!

My DEVOPS journey [2] Read a book (or two)

In my previous blog I shared my experiences from the DEVOPS workshop at IBM Hursley. I am now inviting you to stay with me as my journey continues…

I was pleasantly surprised by being given a book at the end of the DEVOPS class. The title of the book ‘The Phoenix project’ by Gene Kim et al. did not suggest any direct connection to the hands-on learning experiences in the class.

Intrigued, I started reading and was immediately caught by the accessible style of writing and the recognizability of the story.

The main character of the book is Bill, an operations manager who is given an unexpected promotion and then sees himself faced with the impossible task of saving his department from being outsourced. I am not going to give away the storyline – you can (and should!) read the book for yourself!

The short summary is that the project and the department are saved by simplifying and improving processes and by implementing much more efficient collaborations between teams.

This brings me to the point that I made at the end of my previous blog. For IT professionals it is tempting to focus on the technology dimension. It is fun, it inspires innovation, it does not talk back, it does not get angry or shout and it does not protect its job! However, without a mindshift and serious changes in processes and organisational culture, DEVOPS is never going to be successful!

In ‘The Phoenix project’ this is explained by means of the three ways. The three ways are introduced in a playful manner in the novel and at the end of the book summarized in the epilogue.

The first way optimizes the left-to-right flow of work from DEV to OPS by introducing small batch sizes and short intervals of work. The key practices are continuous build, integration and deployment. The second way focuses on a constant feedback loop from right to left, from OPS to DEV. The key practices are automated testing, failing fast and retracing your steps when quality goals are not met. The third way aims at creating a culture of continuous experimentation and risk taking, based on trust.

Although the technology dimension is important to support the three ways of DEVOPS, the process and people dimensions are critical to make them succeed! Particularly the third way is a challenge, especially in large organisations with an established company culture.

Another important insight that is highlighted by the authors is the existence of four types of work that are competing for the same scarce resources in IT organisations. The four types of work are business projects, internal projects, operational change and unplanned work. All these types of work are important in their own right; they have different stakeholders; priorities are often unclear making it hard to strike the right balance.

All these insights are derived from process optimization methods developed in the 80s, in particular “the theory of constraints”, which is explained in another famous book, “The Goal” by Eli Goldratt.

The key message is that IT can learn a lot from optimization practices developed for manufacturing. In IT as well as in manufacturing we have to look for the “constraints”, the work centers that have limited capacity but are on the critical path to production. The batches and sequence of work have to be adapted to the constraints to achieve a continuous workflow and as little ‘work in process’ (stock) as possible.

It requires a thorough analysis of IT design-to-delivery value streams to identify the weak spots and improvement points. And buy-in of professionals throughout these value streams must be obtained to be able to make changes.

This probably explains why many organisations that claim to practice DEVOPS often, in reality, do so only in small experimental teams working on isolated projects. Implementing DEVOPS practices in a complete IT shop is not at all simple…!

My DEVOPS journey [1] take a class

In February of this year I finally managed to attend a DEVOPS class that I never found the time for before. The joining instructions contained a large pdf with a detailed stepwise description of the installation of a virtual machine and the download of a very large file with an image in it that needed to be deployed in that virtual machine.

Fortunately I had just acquired a new and completely empty laptop, so disk space was no issue. The process took me one Sunday afternoon of waiting, hitting keys and waiting again. It has been a while since I installed machines and compiled programs on a day-to-day basis, hence I was truly proud to see my image work on Sunday evening!

Having successfully passed that first step, I headed for Hursley with three other colleagues from the Netherlands. My first impression of the teachers was that they are millennials, not much older than my two sons, and I prepared myself for three days of hard work!

We started by agreeing social contracts in our team and by drawing up the value stream map for a process that we are all familiar with. [Our millennial teacher commented that a process containing more than 10 steps is far too complex and that one should aim for no more than 5 steps, which is easier said than done…!]

On the back wall of the classroom the teachers started to build up three lists: the TO-DO (backlog), the DOING and the DONE list. At the start of day one all the coloured post-its were glued onto the TO-DO space, and in the course of three days they gradually moved from TO-DO to DOING to DONE! [Agile backlog management is put into practice very effectively in this class – to get rid of your backlog you just have to wait until it drops off the wall as the glue wears off the post-its…]

After the introductions the hard work began – the first hands-on exercises in our virtual machines had to be completed.

In the meantime a colourful landscape of open-source tools, called THE BIG PICTURE, unfolded on another part of the back wall of the classroom as we plodded along. With nostalgia I remembered the days when operating systems, programming languages and software products were called by uninspiring but meaningful acronyms. After having deciphered the acronym, one had a good chance of figuring out from the name what the software was supposed to do. Not so in the Agile and DEVOPS era! Nerdy names seem to be standard for open-source tools (‘Bower’, ‘Puppet’, ‘Jenkins’) and there is no way to guess the purpose of each tool – one has to learn by doing.

And learning by doing is what we did for three days… we wrote simple code, deployed it and tested it automatically to get a feeling of DEVOPS. The experience reminded me of the programming labs that I took during my computer science education, just with different tools.

Being so deeply submerged in the code, builds, deployment and test runs, it became more and more difficult to maintain the necessary helicopter view and keep track of the generic structure behind the BIG PICTURE. I therefore felt the need to take a step back and reflect on it here.

The BIG PICTURE unfolding on the wall was obviously very focused on the technology dimension of DEVOPS.

Needless to say that, however important good technological support is, to make DEVOPS successful the process and people dimensions have to be addressed as well!

Slightly rearranging THE BIG PICTURE and adding my own thoughts to it: there clearly is a need for DEVelopment tools on the one hand and OPerationS tools on the other, as the name DEVOPS suggests. Additionally, test and collaboration tools need to be included to provide the glue between the teams and to ensure a seamless automated process. And last but not least, there is the target system solution stack with its technology footprint.

Summarizing, the architecture of the DEVOPS technology dimension could look like the mind map included below, in which ideally all tools interconnect nicely without any overlaps or gaps [!]