Webinar Recap: How Observability Impacts SRE, Development, and Security Teams

    In today’s fast-paced, constantly evolving digital landscape, observability has become a critical component of effective software development. Companies increasingly rely on machine and telemetry data to fix customer problems, refine software and applications, and enhance security. Yet while more data has given teams more insights, the value derived from that data isn’t keeping pace with its growth.

    So how can these teams derive more value from telemetry data?

    In the webinar, Harris Poll Research Director John Campbell and Mezmo CMO Ajay Khanna unveiled the key findings and practical insights into how teams use, and are affected by, observability data.

    If you missed the webinar, you can watch the recording here.

    The Research

    Mezmo partnered with the Harris Poll to conduct a survey among 100 developers, 100 security engineers, and 100 SREs in the United States. The research serves as a foundation to better understand how team members interact with telemetry data and how they use observability pipeline tools. 

    Key Findings

    The research showed that troubleshooting, uptime and stability monitoring, cybersecurity, and firewall integrity top the list of machine data use cases. More than half of respondents use and interact with observability data regularly, while the rest average 2-3 uses per week. On average, each role uses 2-3 tools to interact with the data at any given time.

    Despite organizations’ growing reliance on observability data, the budgets needed to manage and leverage that data aren’t keeping pace. All three roles reported that the cost of aggregating such large volumes of data in one place can be nightmarish. Even when pricing is predictable, allocated budgets simply can’t keep up with rising data volumes.

    Additionally, the increasing volume of data can make it harder for each role to do its job, affecting each team in different ways:

    • Security engineers face more potential vulnerabilities and threats, making it harder to identify and respond to incidents promptly while adding strain on security systems and tools
    • Developers spend more time adjusting applications and systems to handle the increased volume while maintaining optimal performance
    • SREs invest additional resources, such as storage and network capacity, to handle the increased data, which can be costly and time-consuming

    Having access to the right data, managing it at a feasible cost, and doing so promptly and efficiently are major concerns for these teams. As a result, more organizations are looking to integrate observability pipelines into their tech stacks to support a variety of sources, enable transformation for a range of use cases, and provide complete access to and control of their telemetry data.
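
    To make that idea concrete, here is a minimal sketch of the kind of normalization step a pipeline performs. The source names, field names, and mapping logic below are hypothetical, invented for illustration; they do not represent Mezmo’s product or any particular vendor’s schema.

        import json
        from datetime import datetime, timezone

        # Hypothetical raw events from two different sources, each with
        # its own field names and timestamp format.
        RAW_EVENTS = [
            {"source": "nginx", "msg": "192.0.2.10 GET /login 200", "ts": "2023-05-01T12:00:00Z"},
            {"source": "app", "level": "ERROR", "message": "db timeout", "time": 1682942400},
        ]

        def normalize(event):
            """Map source-specific fields onto one common shape so every
            downstream consumer (dashboards, SIEM, dev tooling) reads the
            same keys."""
            if event["source"] == "nginx":
                return {"source": "nginx", "body": event["msg"],
                        "timestamp": event["ts"], "severity": "INFO"}
            # Assume app events carry a Unix epoch and an explicit level.
            ts = datetime.fromtimestamp(event["time"], tz=timezone.utc)
            return {"source": event["source"], "body": event["message"],
                    "timestamp": ts.isoformat(), "severity": event.get("level", "INFO")}

        for raw in RAW_EVENTS:
            print(json.dumps(normalize(raw)))

    Once every event shares the same shape, routing it to multiple destinations and teams becomes a matter of configuration rather than per-tool glue code.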

    Key Takeaways

    Mezmo’s research with The Harris Poll has yielded a key takeaway for organizations: observability pipelines play an outsized role in managing telemetry data across teams because they: 

    • Allow for better control of escalating costs by removing low-value data to optimize spend without losing observable surface area
    • Give teams the ability to transform data into consumable formats, increasing the value of data to power workflows
    • Help with compliance use cases, such as timely identification of risks from new deployments, or scrubbing, masking, or redacting sensitive data (illustrated in the sketch after this list)
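
    As a rough illustration of the first and third points, the sketch below drops low-value debug events before they reach metered downstream tools and masks email addresses in what remains. The severity rule and the regex are assumptions made for this example, not a complete cost or compliance policy.

        import re

        # Hypothetical rule: debug-level chatter is "low value" for this
        # team and is dropped before it reaches billed destinations.
        LOW_VALUE_LEVELS = {"DEBUG", "TRACE"}

        # Simple PII pattern for the example; real redaction rules would
        # cover far more (names, card numbers, tokens, and so on).
        EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

        def process(event):
            """Return a scrubbed copy of the event, or None to drop it."""
            if event.get("severity") in LOW_VALUE_LEVELS:
                return None  # cost control: never ship low-value data
            scrubbed = dict(event)
            scrubbed["body"] = EMAIL_RE.sub("[REDACTED]", event["body"])
            return scrubbed

        events = [
            {"severity": "DEBUG", "body": "cache miss for key 42"},
            {"severity": "ERROR", "body": "password reset failed for jane@example.com"},
        ]
        for event in events:
            scrubbed = process(event)
            if scrubbed is not None:
                print(scrubbed)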

    Watch a recording of the webinar here to see how you can sharpen your strategy and get more value from your data.
