In a recent blog we highlighted the need for a different set of metrics to assure the enterprise that DevOps is delivering genuine business value. While the focus has been on producing more end-user-related metrics, the blurring of the lines between development and operations, the increased volume of releases and ever-decreasing development cycle times all pose a risk to existing governance and security tools and controls.
In response to the need to monitor the pace of deployment, the accuracy of delivered code, the security and performance of the system and end-user satisfaction, a range of new monitoring tools has emerged. These tools operate at the application performance layer, the virtualized network layer, the software-defined infrastructure layer and the virtualized storage layer, and they differ in the data they collect: some focus on metrics, others on logs.
The amount of data collected for analysis from the delivery pipeline, the physical infrastructure and the application is enormous. Each tool often has its own reporting mechanisms and dashboard, but the sheer volume of data and the lack of visibility across the development and operations lifecycle make it difficult to spot problems in a timely manner. Trends and correlations between issues at different stages of the DevOps process are also hard to identify, which makes optimising the end-to-end process much harder.
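As a simple illustration of the correlation problem, the sketch below (plain Python, with invented sample data) pairs deployment events from a delivery pipeline with production incidents that occur shortly afterwards for the same service. This is exactly the kind of cross-tool link that is hard to spot when each tool reports only into its own dashboard; the service names, timestamps and one-hour window here are all hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical sample data: deployment events exported from a CI/CD tool
# and incidents exported from a monitoring tool, with shared timestamps.
deployments = [
    {"service": "payments", "time": datetime(2017, 3, 1, 10, 0)},
    {"service": "payments", "time": datetime(2017, 3, 2, 14, 30)},
]
incidents = [
    {"service": "payments", "time": datetime(2017, 3, 2, 14, 50)},
    {"service": "payments", "time": datetime(2017, 3, 5, 9, 0)},
]

def correlate(deploys, incs, window=timedelta(hours=1)):
    """Pair each incident with any deployment of the same service
    that happened within `window` before the incident occurred."""
    pairs = []
    for inc in incs:
        for dep in deploys:
            gap = inc["time"] - dep["time"]
            if dep["service"] == inc["service"] and timedelta(0) <= gap <= window:
                pairs.append((dep["time"], inc["time"]))
    return pairs

# Only the 14:30 deployment correlates with the 14:50 incident;
# the other incident has no deployment within the window.
print(correlate(deployments, incidents))
```

A real end-to-end dashboard does far more than this, of course, but the same principle applies: events from separate tools only become actionable once they are joined on a common key such as service and time.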
Continuous delivery and continuous integration have led to much greater automation. This improves consistency, drives greater speed and is key to the successful growth of DevOps, but at the same time it generates even more data. Automation and configuration management tools such as Puppet and Chef provide some of the answers, and there is no doubt that they, along with more traditional systems management vendors like CA, will continue to enhance their management platforms to provide more visibility and control. Interestingly, a notable development in comprehensive end-to-end dashboards has come not from the vendor community but from an end-user organisation. The financial services company Capital One has developed an open-source dashboard called Hygieia, which correlates and displays data from multiple sources and helps organisations understand their delivery processes from multiple perspectives. Have a look at this article from TechRepublic, or at the project on GitHub if you are a little more technically inclined.
The sheer amount of data that needs to be analysed and correlated to spot trends and problems early is also leading analyst firms like Forrester to predict that Artificial Intelligence (AI), machine learning and big data practices will combine to provide correlations and insights that humans cannot easily glean from what appears to be random data. This will help DevOps teams understand problems and spot improvement opportunities at their origin.
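To make the idea concrete, here is a minimal sketch (plain Python, with invented figures) of the kind of statistical baseline such tooling starts from: flagging a deployment whose duration deviates sharply from the rest of a sample, something easy to miss when scanning raw logs by hand. The durations and the two-standard-deviation threshold are assumptions for illustration only; production ML systems use far more sophisticated models.

```python
import statistics

# Hypothetical deployment durations (minutes) from a delivery pipeline.
durations = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 25.4]

def flag_anomalies(samples, threshold=2.0):
    """Flag samples more than `threshold` population standard
    deviations from the mean -- a crude baseline that real
    machine-learning tooling would refine considerably."""
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    return [x for x in samples if abs(x - mean) > threshold * stdev]

# The 25.4-minute deployment stands out against the ~12-minute norm.
print(flag_anomalies(durations))
```

Even this naive approach highlights the outlier immediately; the promise of AI-driven analytics is to do the same across millions of correlated data points from every layer of the stack.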
IBM are looking at how Watson, their AI and machine-learning system, can be used in conjunction with their Application Delivery Intelligence (ADI) tool to provide additional insights and predictions. IBM admit this is something for the near future, and Forrester suggest we should expect a three-to-five-year timescale for AI and machine-learning capabilities to become embedded in existing tools. In the meantime, our understanding and experience in helping organisations on the journey to DevOps and Agile application delivery can help you map out and start your own journey.
To book a meeting with Peter Borner directly, please do so here.