3 Log Management Use Cases

4 MIN READ

The versatility of logs allows them to be used across the development life cycle to solve a wide range of challenges within an organization. Let’s look at three logging use cases from leading organizations at various stages of the product life cycle, and see what we can learn from them.

Transferwise - Improving Mobile App Reliability

Transferwise is an online payments platform for transferring money across countries easily, and its app runs across multiple platforms. One of the challenges the team faces with the mobile app is analyzing crashes. Crashes are particularly difficult to reproduce on mobile because there are many more possible causes - device-specific quirks, carrier networks, memory pressure, battery drain, interference from other apps, and more - and a stack trace alone doesn’t carry enough information to troubleshoot the issue.

To deal with this, Transferwise uses logs to better understand crashes: they attach a few lines of log output to each crash report, which gives them vital context on the crash. They implement this with the open source tool CocoaLumberjack, which lets them print log messages to the console and transmit crash logs to external loggers for further analysis. Log messages can be saved to the cloud or included in a user-generated bug report, and as soon as a report is sent, the user is notified that Transferwise is already working on fixing the issue. This is much better than being unaware of a crash, or ignoring it because the root cause can’t be found. One caution: make sure sensitive data is excluded from log messages.

For more control over how log messages are reported and classified, Transferwise uses a logging policy that sorts logs into five categories - error, warning, info, debug, and verbose - each with its own priority level and reporting behavior. While CocoaLumberjack works only on Mac and iOS, similar tools such as Timber or Hugo exist for Android. The key takeaway from this case study is that logging can give you additional insight into crashes, especially in challenging environments like mobile platforms. It takes a few purpose-built tools, plus processes and policies to keep sensitive data safe, but the payoff is increased visibility into application performance that you can use to improve the user experience. [Read more here.]
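
To make this concrete, here’s a minimal sketch of what such a setup might look like using CocoaLumberjack’s Swift API. The function names and message strings are illustrative, not Transferwise’s actual code.

```swift
import CocoaLumberjackSwift

/// One-time logger setup, e.g. at app launch.
func configureLogging() {
    // Console logger, so messages are visible while debugging.
    DDLog.add(DDOSLogger.sharedInstance)

    // File logger whose recent lines can be attached to a crash
    // report or a user-generated bug report.
    let fileLogger = DDFileLogger()
    fileLogger.rollingFrequency = 60 * 60 * 24            // roll files daily
    fileLogger.logFileManager.maximumNumberOfLogFiles = 7
    DDLog.add(fileLogger)
}

/// The five levels from the logging policy. Keep sensitive data
/// (account numbers, tokens, personal details) out of every message.
func demoLogLevels(elapsedMs: Int) {
    DDLogError("Transfer submission failed")
    DDLogWarn("Quote request timed out; retrying")
    DDLogInfo("User confirmed transfer")
    DDLogDebug("Quote response parsed in \(elapsedMs) ms")
    DDLogVerbose("Exchange-rate cache hit")
}
```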

Wealthfront - Enhancing User Experience with A/B Tests

Wealthfront is a wealth management solution that uses data analytics to help its users invest wisely and earn more over the long term. Though the Wealthfront web app is the primary interface for making transactions, the mobile app sees more active engagement and is an important part of the solution.

Wealthfront is a big believer in A/B testing to improve the UI of its applications. While they had a mature A/B testing process for the web app, they had no equivalent for mobile, so they simply applied the same learnings across both. This is not the best strategy: mobile users behave differently from web users, and the same results won’t hold across both platforms. They needed an A/B testing process for their mobile apps too. For inspiration, they looked to Facebook, which had built something similar with Airlock, its framework for A/B testing on mobile.

Wealthfront focused its efforts on four areas - backend infrastructure, API design, the mobile client, and experiment analysis. They found logs essential for the fourth: experiment analysis. Logs are a much more accurate representation of an experiment’s performance and results than a backend database, because on mobile the backend infrastructure is very loosely coupled with the frontend client, and reporting that relies on backend numbers can be inaccurate. With logs, you gain visibility into user actions and into each step of a process as it executes. One reason logging is more accurate is that it is coded along with the experiment itself. Logging thus brings deeper visibility into A/B testing and enables you to deliver a better user experience. This is what companies like Facebook and Wealthfront have realized, and it can work for you too. [Read more here.]
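
As an illustration of that last point - the experiment emitting its own analysis data - here’s a hypothetical Swift sketch. The ExperimentLogger type, its fields, and the event names are invented for this example; they are not Wealthfront’s actual framework.

```swift
import Foundation

/// Hypothetical event logger for mobile A/B tests: the experiment code
/// itself emits the log lines that analysis will later consume, rather
/// than relying on backend database state.
struct ExperimentLogger {
    let experiment: String
    let variant: String

    /// Emit a structured record; in a real app this would go to the
    /// device log and be shipped to a log analysis platform.
    func log(_ event: String, _ fields: [String: String] = [:]) {
        var record: [String: String] = [
            "experiment": experiment,
            "variant": variant,
            "event": event,
            "ts": ISO8601DateFormatter().string(from: Date()),
        ]
        record.merge(fields) { current, _ in current }
        print(record)  // stand-in for the real log transport
    }
}

// Usage: log exposure and outcome from the same code path that renders
// the variant, so the reported numbers can't drift from what users saw.
let logger = ExperimentLogger(experiment: "onboarding_cta", variant: "B")
logger.log("exposed")
logger.log("converted", ["screen": "deposit"])
```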

Twitter - Achieving Low Latencies for Distributed Systems

At Twitter, where distributed systems manage data at very large scale, the team uses high-performance replicated logs to solve various challenges brought on by distributed architectures. Leigh Stewart of Twitter comments that “Logs are a building block of distributed systems and once you understand the basic pattern you start to see applications for them everywhere.”

To implement this replicated log service they use two tools. The first is the open source Apache BookKeeper, a low-level log storage system chosen for its low latency and high durability even under peak traffic. The second is DistributedLog, a tool Twitter built to provide higher-level features on top of BookKeeper, such as naming and metadata for log streams and data management policies like log retention and segmentation. With this combination they achieved write latencies of around 10ms, with even the slowest writes staying under 20ms. That efficiency comes from combining the right open source and in-house tools. [Read more here.]
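
The layering Twitter describes - a durable low-level log store, with stream naming, metadata, and retention handled one level up - is a general pattern. Here’s a hypothetical Swift sketch of that separation; the protocol and type names are invented for illustration and are not the actual BookKeeper or DistributedLog APIs (which are Java).

```swift
import Foundation

/// Hypothetical low-level storage layer (the role BookKeeper plays):
/// a durable, replicated, append-only store of raw log entries.
protocol LogSegmentStore {
    /// Appends an entry and returns its sequence number.
    func append(_ entry: Data, to segment: String) throws -> UInt64
}

/// Hypothetical higher-level layer (the role DistributedLog plays):
/// named streams with metadata and retention, built on raw segments.
struct LogStream {
    let name: String
    let retentionDays: Int
    let store: any LogSegmentStore

    /// Writers see a simple named stream; segment naming and retention
    /// policy are handled behind this call.
    func write(_ record: Data) throws -> UInt64 {
        try store.append(record, to: "\(name)-current")
    }
}

/// Toy in-memory store, standing in for a replicated backend.
final class InMemoryStore: LogSegmentStore {
    private var segments: [String: [Data]] = [:]
    func append(_ entry: Data, to segment: String) throws -> UInt64 {
        segments[segment, default: []].append(entry)
        return UInt64(segments[segment]!.count - 1)
    }
}

// Toy usage; a real store's append could fail, hence the throwing API.
let stream = LogStream(name: "events", retentionDays: 7, store: InMemoryStore())
let seq = try! stream.write(Data("hello".utf8))
```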

As the above examples show, logs play a vital role in a wide range of situations across multiple teams and processes. They can be used to make apps more reliable by understanding crashes, to improve the user interface through A/B tests, and to achieve low latencies in large-scale distributed systems. As you look to improve your applications in these areas, the way these organizations have made use of logs is worth noting and implementing in a way that’s specific to your organization. You also need a capable log analysis platform like LogDNA to collect, process, and present your log data in a way that’s usable and actionable. Working with log data is challenging, but with the right goals, the right approach, and the right tools, you can gain a lot of value from it to improve many aspects of your application’s performance.