looking at DevOps tooling: facts about automated monitoring and operating tools

running and automatically monitoring your DevOps systems

Entering the operations/running side of the PLCM (Product Life Cycle Management), the engineer who monitors the reliability of a site, solution or product needs to understand which services can be monitored and measured, so that incident management processes can be initiated when something goes wrong. Without a DevOps toolchain that ties all these processes together, you are left with an untidy, uncorrelated environment (chaos). With a well-integrated toolchain, you have far better context on what is going on.

Different types of monitors need to be implemented during development: monitors for errors, transactions, synthetic checks, heartbeats, alarms (thresholds for workload, etc.), infrastructure, capacity, and security.
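To make this concrete, here is a minimal Python sketch of a synthetic/heartbeat monitor with a simple workload threshold; the endpoint URL, thresholds and the alert hook are illustrative assumptions, not part of any particular product.

```python
import time
import requests  # third-party HTTP client

# Hypothetical endpoint and thresholds -- adjust to your own service.
HEALTH_URL = "https://example.internal/health"
LATENCY_THRESHOLD_S = 2.0   # alarm threshold for workload/latency
CHECK_INTERVAL_S = 60       # heartbeat interval

def raise_alert(kind: str, detail: str) -> None:
    """Placeholder: hand the alert to your alerting/incident tooling."""
    print(f"[ALERT:{kind}] {detail}")

def run_heartbeat_check() -> None:
    """Synthetic probe: call the endpoint, measure latency, classify the result."""
    started = time.monotonic()
    try:
        response = requests.get(HEALTH_URL, timeout=10)
        latency = time.monotonic() - started
        if response.status_code >= 500:
            raise_alert("error", f"HTTP {response.status_code} from {HEALTH_URL}")
        elif latency > LATENCY_THRESHOLD_S:
            raise_alert("threshold", f"latency {latency:.2f}s exceeds {LATENCY_THRESHOLD_S}s")
    except requests.RequestException as exc:
        raise_alert("heartbeat", f"probe failed: {exc}")

if __name__ == "__main__":
    while True:
        run_heartbeat_check()
        time.sleep(CHECK_INTERVAL_S)
```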

These monitors need to be implemented based on the requirements of each application/solution and are often application/solution specific. Ensure that all team members in these areas are trained on the monitors.
An alert-and-incident management system needs to integrate seamlessly with your team’s tools (log management, outage reporting, etc.) so that it fits naturally into your development and operational processes. It should also be able to group notifications and alerts and filter the flood of alerts, especially when several alerts are generated by a single error or failure!
The monitoring tools should send important alerts with the lowest possible latency and deliver them to your preferred notification channel(s) as a priority. Critical alerts should also integrate with your incident management solution to auto-create the relevant incident tickets for the relevant resolver teams, with impact, criticality, and routing to the correct resolver team all predefined.
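As a rough, product-agnostic illustration of the grouping-and-routing idea above, the following Python sketch collapses alerts that stem from the same error and routes one representative per group to a predefined resolver team; the team names and grouping key are assumptions for the example.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g. "payments-api"
    error_code: str   # e.g. "DB_CONN_TIMEOUT"
    severity: str     # e.g. "critical", "warning"
    message: str

# Predefined routing: impact/criticality decides the resolver team (assumed names).
ROUTING = {
    ("payments-api", "critical"): "payments-oncall",
    ("payments-api", "warning"): "payments-backlog",
}
DEFAULT_TEAM = "platform-oncall"

def group_and_route(alerts: list[Alert]) -> dict[str, list[Alert]]:
    """Collapse alerts sharing source+error_code, then route one
    representative alert per group to its resolver team."""
    groups: dict[tuple[str, str], list[Alert]] = defaultdict(list)
    for alert in alerts:
        groups[(alert.source, alert.error_code)].append(alert)

    routed: dict[str, list[Alert]] = defaultdict(list)
    for (source, _), members in groups.items():
        representative = members[0]   # one notification per failure, not N
        team = ROUTING.get((source, representative.severity), DEFAULT_TEAM)
        routed[team].append(representative)
    return routed
```

A real alert manager adds time windows, suppression and escalation on top of this, but the core idea is the same: one failure should produce one actionable notification for the right team.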
Application performance monitoring is essential to ensure that application-specific performance indicators, such as time to load a page, latencies of downstream services, or transactions, are monitored in addition to basic system metrics such as CPU, memory, and storage utilization.
For example, tools such as “OpsGenie” (from Atlassian), “SignalFx” (Splunk) and “New Relic” are well suited to observing metrics data in real time. Tools like “WhatsUp Gold” are also comprehensive, but they sit on the “paid” side of the price spectrum.
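As a sketch of how application-specific indicators can be exposed alongside basic system metrics, the following uses the open-source prometheus_client and psutil Python libraries; the metric names and port are assumptions, and this is generic instrumentation rather than an example of any of the commercial tools above.

```python
import time

import psutil                                   # system metrics (CPU, memory)
from prometheus_client import Gauge, Histogram, start_http_server

# Application-specific indicators (metric names are illustrative).
PAGE_LOAD_SECONDS = Histogram("page_load_seconds", "Time taken to build a page")
DOWNSTREAM_LATENCY = Histogram("downstream_latency_seconds", "Latency of downstream service calls")

# Basic system metrics exposed alongside them.
CPU_PERCENT = Gauge("host_cpu_percent", "Host CPU utilisation")
MEMORY_PERCENT = Gauge("host_memory_percent", "Host memory utilisation")

@PAGE_LOAD_SECONDS.time()                       # records how long each call takes
def render_page() -> str:
    with DOWNSTREAM_LATENCY.time():             # pretend downstream call, timed separately
        time.sleep(0.02)
    return "<html>ok</html>"

if __name__ == "__main__":
    start_http_server(8000)                     # metrics scrapeable at :8000/metrics
    while True:
        render_page()
        CPU_PERCENT.set(psutil.cpu_percent())
        MEMORY_PERCENT.set(psutil.virtual_memory().percent)
        time.sleep(5)
```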

Many organisations live in hybrid environments with an ongoing mix of on-prem and cloud, so it is important to find a tool that suits your organisation accordingly.

Consider, therefore, a tool that can manage and monitor both on-prem and cloud-based environments.

 

 

Q: How well do these monitoring solutions integrate with the three biggest “clouds” (AWS, Google Cloud and Microsoft Azure)?
The big cloud providers offer their own measuring and monitoring tools, but these are not necessarily “good enough” or “fit for purpose” for enterprise customers, so such customers often consider additional tools.

Answer:

  • Sensu
    1. A top DevOps monitoring tool.
    2. Used for monitoring infrastructure and application solutions.
    3. Combines dynamic, static, and temporary (ephemeral) infrastructure to address the challenges of modern infrastructure platforms.
    4. It is not offered as software-as-a-service (SaaS), but you can monitor your system just the way you want.
    5. Open source, but with commercial support.

(You can also purchase it on the Sensu, Red Hat, VMware and Atlassian marketplaces.)

    6. Provides dynamic registration and de-registration of clients.
    7. Can also be used for automating processes.
    8. Sends alerts and notifications.
    9. Works reliably even in environments with mission-critical applications and multi-tiered networks.
  • PagerDuty
    1. PagerDuty is an operations performance platform designed to work closely with operations staff to assess the reliability and performance of apps and to address errors as early as possible.
    2. With timely alerts flowing from the development environment through to production, the operations team can detect, triage, and resolve issues faster.
    3. Offers an excellent, easy-to-use incident response and alerting system.
    4. The intuitive alerting API of PagerDuty makes it very popular among developers (a minimal sketch using its Events API follows this list).
    5. If an alert is not responded to within a set amount of time, the system auto-escalates based on the originally established SLA.
    6. It contains a powerful GUI tool for scheduling and escalation policies.
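As referenced in item 4 above, here is a minimal, hedged sketch of triggering an incident through PagerDuty’s Events API v2 from Python; the routing key is a placeholder that comes from your own service’s integration settings.

```python
import requests  # third-party HTTP client

PAGERDUTY_EVENTS_URL = "https://events.pagerduty.com/v2/enqueue"
ROUTING_KEY = "<your-integration-routing-key>"  # placeholder, per-service in PagerDuty

def trigger_incident(summary: str, source: str, severity: str = "critical") -> None:
    """Send a trigger event so PagerDuty opens (or dedups into) an incident."""
    event = {
        "routing_key": ROUTING_KEY,
        "event_action": "trigger",
        "payload": {
            "summary": summary,     # what went wrong
            "source": source,       # which host/service raised it
            "severity": severity,   # critical / error / warning / info
        },
    }
    response = requests.post(PAGERDUTY_EVENTS_URL, json=event, timeout=10)
    response.raise_for_status()

# Example: trigger_incident("Checkout error rate above 5%", "payments-api")
```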
  • Datical Deployment Monitoring Console
  1. The Datical Deployment Monitoring Console is the solution you would use to automatically track the deployment status of each database across the enterprise.
  2. The software receives and records SQL script execution events across the entire deployment environment, which helps minimize human error.
  3. A significant advantage that the Datical Deployment Monitoring Console (DDMC) offers is simplified database auditing and deployment monitoring.
  4. It tracks deployments and errors automatically.
  5. It provides access to deployment information on demand.
  6. In addition, it simplifies release processes so that both users and administrators can automatically track, audit, and resolve all deployment-specific database issues.
  • Librato
    1. With Librato, you can track and understand, in real-time, the metrics that affect your business at every level of the stack.
    2. Offers all the features that are needed to monitor a solution including visualizations, analyses, and alerts on all the metrics discussed above.
    3. Provides notifications on various metrics on completion of activity processing.
    4. This tool is capable of aggregating and also transforming real-time data from virtually any source.
    5. Is a complete solution that monitors and analyses data.
    6. Offers a variety of services that help in data monitoring and providing data visualizations.
    7. Does not require any installation.
    8. Has an easy-to-use user interface.
    9. The reliable alerts received from Librato help you to take necessary actions based on a possible situation in your production environment.

 

 

  1. By monitoring changes to artifacts, this software runs with the lowest possible footprint and reduces the load on other tools.
  2. It provides a secure login via a web-based interface.
  3. Incorporates all tools within an organization into a single application to offer value to the organization.
  4. A single-point solution that handles all software delivery integration requirements without referring to another tool.
  5. A powerful tool for delivering the right information to the right people at the right time, using the right tool with the right interface.
  6. Fully functional integrations are available for 57 tools (from Atlassian to Zendesk).
  7. It allows new tools to be added to an existing software integration quickly.
  8. You can route artifacts as well as specific field updates according to a filter that complies with customer requirements around frequency and direction.
  • Prometheus
    1. A metrics-based, time-series database aimed at white-box monitoring.
    2. Community-driven open-source system monitoring and alerting solution with a thriving ecosystem.
    3. Since its debut, numerous organizations and businesses have integrated this tool into their ecosystems, allowing the user and developer communities to interact.
    4. Is a tool developed in the Go programming language, making it an excellent contender for future advancements.
    5. Can gather time-series data for your organization and enable easy connection with PagerDuty.
    6. Has no external dependencies and provides a rich web API for custom development (a query sketch follows this list).
    7. The information gathered by this tool is useful in the field of business intelligence.
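As referenced in item 6 above, here is a minimal Python sketch that calls Prometheus’ HTTP API to run an instant PromQL query; the server address and the example metric name are assumptions that depend on your setup and exporters.

```python
import requests  # third-party HTTP client

# Assumes a Prometheus server reachable at this address (adjust for your setup).
PROMETHEUS_URL = "http://localhost:9090"

def instant_query(promql: str) -> list[dict]:
    """Run an instant PromQL query via Prometheus' HTTP API and return the result set."""
    response = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": promql},
        timeout=10,
    )
    response.raise_for_status()
    body = response.json()
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body}")
    return body["data"]["result"]

# Example: non-idle CPU usage per instance (metric name may differ per exporter).
# for series in instant_query('rate(node_cpu_seconds_total{mode!="idle"}[5m])'):
#     print(series["metric"], series["value"])
```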

Free Tutorial – YouTube video.

  • New Relic
  1. It uses a pay-as-you-go pricing model.
  2. Users get 100 GB of free data ingest per month.
  3. It provides automatic correlation between logs, errors, and traces to accelerate root cause analysis.
  4. A continuous monitoring tool that offers complete observability of the entire software stack.
  5. One of its biggest advantages is that DevOps teams benefit from a single platform that brings together four types of telemetry data: events, logs, metrics, and traces.
  6. Key features include browser and mobile session monitoring, visibility into servers, on-prem VMs and cloud-native infrastructure, real user monitoring, and synthetic monitoring capabilities.
  7. “DevOps Without Measurement Is a Fail” – free New Relic e-book.
  • Kibana
  1. An open-source analytics and visualization tool created specifically to work with Elasticsearch. The most common uses of Kibana are searching, viewing, and interacting with data stored in Elasticsearch indices.
  2. Advanced data analysis and visualization can be accomplished using charts, tables, and maps.
  3. Quick and uncomplicated setup procedure.
  4. You can view the data in the logs to find solutions to your problems in production.
  5. Provides an auto-highlighting function for search fields to identify problems in your log files quickly.
  6. It allows you to visualize log files and display the necessary data and real-time statistics graphically.
  • Splunk
  1. A sophisticated platform for analysing machine data, especially logs that are generated frequently but seldom used effectively.
  2. Used for searching, monitoring, and analysing machine-generated data through a web-based interface. It compiles all pertinent data into a central index that lets users find the required information quickly.
  3. Enables the examination of data from networks, servers, apps, and various other data sources.
  4. Is simple enough to deploy in a production environment – Free Splunk course.
  5. Provides components such as Splunk Light to transfer data from many servers to the main Splunk engine for analysis.
  6. Indexes data in a way that produces powerful analytic insights.
  • Nagios
  1. A widely used open-source DevOps tool for continuous monitoring.
  2. In a true DevOps culture, Nagios can help monitor systems, applications, services, and business processes.
  3. It notifies users when something goes wrong with the infrastructure and again when the problem has been resolved.
  4. It is a great product that runs rapid tests and is easy to configure from both the client and server sides.
  5. You can develop custom plug-ins that match your requirements and check the most critical production environment requirements (a minimal plug-in sketch follows this list).
  6. The documentation on the Nagios website is comprehensive and can be used for specific reference, with self-paced training videos also available.
  7. It has a feature that allows you to set up services to ping devices in an organization.
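As referenced in item 5 above, a custom Nagios-style plug-in is simply a script that prints a status line and exits with 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). Below is a minimal Python sketch of a disk-usage check; the path and thresholds are assumptions.

```python
#!/usr/bin/env python3
"""Minimal custom Nagios-style check: disk usage on a path (thresholds are examples)."""
import shutil
import sys

# Nagios plugin exit codes
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

PATH = "/"            # what to check
WARN_PCT = 80         # warning threshold (assumed)
CRIT_PCT = 90         # critical threshold (assumed)

def main() -> int:
    try:
        usage = shutil.disk_usage(PATH)
        used_pct = 100 * usage.used / usage.total
    except OSError as exc:
        print(f"DISK UNKNOWN - {exc}")
        return UNKNOWN

    if used_pct >= CRIT_PCT:
        print(f"DISK CRITICAL - {used_pct:.1f}% used on {PATH}")
        return CRITICAL
    if used_pct >= WARN_PCT:
        print(f"DISK WARNING - {used_pct:.1f}% used on {PATH}")
        return WARNING
    print(f"DISK OK - {used_pct:.1f}% used on {PATH}")
    return OK

if __name__ == "__main__":
    sys.exit(main())
```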
  • ChaosSearch (monitoring, search and analytics at scale)
  1. Offers lower TCO when compared to other alternatives.
  2. This tool helps DevOps teams to ingest log and event data from multiple sources into Amazon S3 or Google Cloud Storage buckets (a small ingest sketch follows this section).
  3. An innovative approach to continuous monitoring tools that combines data indexing and querying capabilities with data lake economics for a best-in-class log management solution.
  4. It also allows you to index the data with proprietary technology and rapidly achieve insights with no additional data movement or ETL process.
  5. It supports full-text search and SQL queries, with Machine Learning (ML) support.
  6. It can replace tools like Elasticsearch/OpenSearch while reducing TCO.
  7. It can complement Splunk with a Security Data Lake.
  8. Index, transform, and visualize data with no data movement, directly on cloud object storage.
  9. It is not free, but you get significant discounts at scale (up to petabyte scale), well below your cloud provider cost.
    • This solution could be cost prohibitive for some organisations and could be very challenging for Private Cloud environments having to pump vast data lakes into a cloud.
      • Data Jurisdiction for compliance could also be a challenge if the data needs to remain “local”.
  10. It can optimize the Datadog log process, because significant Datadog log management challenges arise at scale.
    • These challenges cause complexity when ingesting logs, their retention and also log rehydration.
    • Alternatives exist to complement and optimize this complex Datadog log management process.
    • By combining Datadog as a monitoring service with ChaosSearch as a forensics tool, teams can achieve true observability at scale.
    • ChaosSearch is a fully managed service that provides unlimited data retention with a starting price of $0.80/GB – (Q1-2023)
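As a hedged illustration of the ingest-into-object-storage pattern mentioned in item 2 above, here is a minimal Python sketch that lands a compressed batch of log events in an S3 bucket using the standard boto3 SDK; the bucket name and key layout are assumptions, and the indexing and querying themselves would be done by ChaosSearch against that bucket.

```python
import datetime
import gzip
import json

import boto3  # AWS SDK for Python

# Placeholder bucket; ChaosSearch would be pointed at this bucket to index its contents.
LOG_BUCKET = "my-company-log-archive"

def ship_log_batch(service: str, events: list[dict]) -> str:
    """Compress a batch of log events as JSON lines and land them in S3."""
    now = datetime.datetime.now(datetime.timezone.utc)
    key = f"{service}/{now:%Y/%m/%d}/{now:%H%M%S}.json.gz"
    body = gzip.compress("\n".join(json.dumps(e) for e in events).encode("utf-8"))
    boto3.client("s3").put_object(Bucket=LOG_BUCKET, Key=key, Body=body)
    return key

# Example:
# ship_log_batch("payments-api", [{"level": "ERROR", "msg": "db timeout"}])
```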

Free DataDog Fundamentals Video

  • Akamai mPulse
  1. Ideal for use with hybrid cloud.
  2. A real user monitoring tool that allows DevOps teams to collect and analyse experience and behaviour data from users who visit their website or application.
  3. With the Akamai mPulse tool, developers can capture over 200 business and performance metrics from each user session by installing the mPulse snippet on the target webpage or app.
  4. It also captures application performance and UX metrics, including session and user agent data, bandwidth, latency, loading times, and much more.
  5. It is easy to deploy.
  6. It is easy to scale for demand spikes, delivering a quality digital experience even during peak bandwidth times.
  7. Can be used for application monitoring for websites and native applications.
  8. Helps in creating and monitoring custom metrics and building custom dashboards.
  9. Provides credible performance data and feedback.
  10. System dashboards offer real-time insights into user activity.
  • AppDynamics
  1. A continuous monitoring tool that supports infrastructure, network, and application monitoring of both cloud and on-premises computing environments.
  2. DevOps teams can capture data from infrastructure components, database transactions, applications, end-user sessions, and other sources to maintain complete visibility into the tech stack and respond rapidly to performance issues.
  3. This helps avoid situations that may negatively impact the customer experience.
  4. AppDynamics supports multiple platforms such as Microsoft Azure, IBM, Kubernetes, AWS, and more.
  5. Instant root cause diagnostics driven by machine learning.
  6. It has a pay-per-use pricing model.
  7. Can easily monitor a hybrid environment.
  • BMC Helix Operations Management
  1. Following a SaaS business model, it is easy to deploy.
  2. Offers customizable dashboards and reports that streamline data access.
  3. An award-winning intelligence and automation tool.
  4. BMC Helix Operations Management uses predictive analytics to effectively monitor the availability and performance of IT services across cloud, on-premises, and hybrid environments.
  5. To improve performance and availability, the tool uses service-centric monitoring, root cause isolation, intelligent automation and advanced event management.
  6. AI-driven proactive alerting capabilities and probable cause analysis give DevOps teams a head start when responding to potential events.
  7. Provides the benefit of predictive alerts through machine learning and advanced analytics.
  • Dynatrace
  1. Dynatrace runs as a lightweight library inside the application process, consuming no more than 10 MB of server memory.
  2. As a result, application logs are monitored without causing conflicts on the server, resulting in lower server overhead.
  3. Adding or removing Dynatrace agents from application servers does not even require a restart of the application servers.
  4. You can see how long each stage of an application took as well as the transaction flow.
  5. Indicates clearly where problems or errors arise in production workflows.
  6. Identifies deviations from a standard baseline once the metrics have been benchmarked.
  7. Detects any unusual activity in the application or network and communicates this information to you.
  8. Gives non-technical users a clear picture of an application’s performance.
  • Elastic Observability
  1. Uses a pay-as-you-go pricing model.
  2. It is a single unified platform for APM data.
  3. Leverages the ELK (Elasticsearch, Logstash, Kibana) stack to combine logs and metrics, APM traces, uptime, UX data, and feedback from synthetic monitoring activities into a single solution that gives DevOps teams improved visibility of application performance in the production environment.
  4. ELK delivers several capabilities such as log aggregation, indexing, and dashboards/visualization.
  5. A popular tool that can be used for application performance monitoring, real user monitoring, and log analytics.
  • Sumo Logic
  1. It is a multi-cloud capable solution.
  2. Makes it easier for DevOps teams to monitor microservice-based applications from a single platform covering performance metrics, log and event data, and distributed transaction tracing.
  3. In addition to its APM capabilities, Sumo Logic also offers a cloud-native SIEM tool with correlation-based threat detection, supported by the company’s own cyber threat hunting team.
  4. The tool is used for monitoring application performance and cloud security, and offers BI features.
  5. Provides free product training and certifications.
  6. Easy configuration of real-time metrics and alerts.
  7. Offers visually appealing dashboards and graphs.
  8. It provides the insights you need, such as real user monitoring, end-user monitoring, full-stack application monitoring and observability, proactive infrastructure monitoring, and log analytics at scale.

optimisation of your cloud computing environment

In cloud-native worlds, incidents abound, much like bugs in code. These incidents range from network failures and misconfiguration to data inconsistencies, software bugs, hardware failures and resource exhaustion. DevOps teams should anticipate and embrace incidents/issues, and high-quality monitors must be in place to respond to them.
E.g.,“BMC Helix Operations Management” uses predictive analytics to effectively monitor the availability and performance of IT services across the cloud, on-premises, and hybrid environments.
“New Relic” is also a strong player in the hybrid-cloud arena.

Some of the best practices to help with this are:

  • Encourage and build a collaboration culture, where monitoring is used during development along with feature/functionality and automated tests.
  • For custom-built solutions, build appropriate high-quality alerts into the code during development; this minimizes mean time to detect (MTTD) and mean time to isolate (MTTI).
  • Build monitors to ensure dependent services operate as expected.
  • Build required dashboards and train team members to use them by allocating time for this.
  • “War Games” can be planned for the services to ensure that monitors operate as expected and missing monitors are uncovered.
  • Close actions from previous incident reviews by planning their closure at the start of the meetings, especially actions relating to building missing monitors and automation.
  • Build security detectors for rolling credentials, patches and upgrades.
  • Initiate and cultivate a “measure-and-monitor-everything” mindset, with automation determining the response to detected alerts (a small sketch of this idea follows).
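As referenced in the last bullet, here is a minimal Python sketch of automation determining the first response to a detected alert; the alert types, playbook actions and service names are illustrative assumptions.

```python
from typing import Callable

# Automated first responses (illustrative; a real playbook would call orchestrator APIs).
def restart_service(alert: dict) -> None:
    print(f"restarting {alert['service']} ...")

def scale_out(alert: dict) -> None:
    print(f"adding capacity for {alert['service']} ...")

def page_oncall(alert: dict) -> None:
    print(f"paging on-call for {alert['service']}: {alert['detail']}")

# Map detected alert types to automated responses.
PLAYBOOK: dict[str, Callable[[dict], None]] = {
    "heartbeat_missed": restart_service,
    "capacity_threshold": scale_out,
}

def respond(alert: dict) -> None:
    """Automation decides the first response; unknown alert types go to a human."""
    PLAYBOOK.get(alert["type"], page_oncall)(alert)

# Example:
# respond({"type": "capacity_threshold", "service": "payments-api", "detail": "CPU > 85%"})
```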

The ever-increasing requirements for always-on services and applications, as well as comprehensive and strict SLA commitments, can make applications vulnerable. Development teams need to ensure they define SLAs as well as ELAs (Enterprise Level Agreements), service-level objectives (SLOs) and service-level indicators (SLIs) that are monitored and acted on.
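As a small worked example of the SLI/SLO relationship, the following Python snippet computes an availability SLI and the share of the error budget consumed; the request counts and the 99.9% target are assumed figures.

```python
# Availability SLI over a window: good requests / total requests.
TOTAL_REQUESTS = 1_000_000      # assumed monthly volume
FAILED_REQUESTS = 420           # assumed failures (5xx, timeouts, ...)
SLO_TARGET = 0.999              # e.g. "99.9% of requests succeed" from the SLA/SLO

sli = (TOTAL_REQUESTS - FAILED_REQUESTS) / TOTAL_REQUESTS
error_budget = 1 - SLO_TARGET                       # fraction of requests allowed to fail
budget_used = (FAILED_REQUESTS / TOTAL_REQUESTS) / error_budget

print(f"SLI (availability): {sli:.5f}")
print(f"Error budget used:  {budget_used:.0%}")
print("SLO met" if sli >= SLO_TARGET else "SLO breached")
```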

The four main DevOps metrics

  1. Deployment Frequency (DF) measures how often a team successfully releases to production.
  2. Lead Time For Changes (LTFC) measures the amount of time it takes for committed code to get into production.
  3. Change Failure Rate (CFR) measures the percentage of deployments that result in a production failure requiring a bug fix, a patch or a rollback.
  4. Mean Time To Restore service (MTTR) measures how long it takes an organization to recover from a production failure.
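As a rough illustration of how these four metrics can be derived from deployment and incident records, here is a hedged Python sketch over a small, made-up data set.

```python
from datetime import datetime, timedelta
from statistics import mean

# Illustrative records: (commit time, deploy time, failed?, time to restore if failed)
deployments = [
    (datetime(2023, 3, 1, 9), datetime(2023, 3, 1, 15), False, None),
    (datetime(2023, 3, 3, 10), datetime(2023, 3, 4, 11), True, timedelta(hours=2)),
    (datetime(2023, 3, 6, 8), datetime(2023, 3, 6, 9), False, None),
    (datetime(2023, 3, 8, 14), datetime(2023, 3, 9, 10), False, None),
]
window_days = 10  # reporting window for the sample data

deployment_frequency = len(deployments) / window_days           # deploys per day
lead_time_hours = mean(
    (deploy - commit).total_seconds() / 3600                    # commit -> production, in hours
    for commit, deploy, _, _ in deployments
)
change_failure_rate = sum(failed for _, _, failed, _ in deployments) / len(deployments)
restore_times = [restore for *_, failed, restore in deployments if failed]
mttr_hours = mean(r.total_seconds() / 3600 for r in restore_times)

print(f"DF:   {deployment_frequency:.1f} deployments/day")
print(f"LTFC: {lead_time_hours:.1f} h")
print(f"CFR:  {change_failure_rate:.0%}")
print(f"MTTR: {mttr_hours:.1f} h")
```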

 

Free DevSecOps Introduction Training – (The What, The Why & The How)

Click on these links for more free courses (credit to GitHub – ann-afame/DEVOPS-WORLD):

Alison, DevOps on AWS, Edureka, FreeCodeCamp, Intellipaat, Learnvern, My Great Learning, Udemy,
Microsoft Certified: DevOps Engineer Expert, Exam Readiness: AWS Certified DevOps Engineer

 

 

In WiRD’s experience it is important to be clear on which tools will be used for each DevOps category. What has your experience been with regard to deploying DevOps testing tools and DevOps automation? Please share your experiences with WiRD by commenting or interacting on LinkedIn.

 
