LogDNA Helps Developers Adopt the AWS Billing Model for Cost-Effective Logging

4 MIN READ

(LogDNA is now Mezmo but the product you know and love is here to stay.)

Amazon Web Services (AWS) uses a large-scale pay-as-you-go model for billing and pricing its seventy-plus cloud services. LogDNA has taken a page from that same playbook and offers similar competitive scaling for our log management system. For most companies, managing data centers and pricey infrastructure is a thing of the past. Droves of tech companies have transitioned to cloud-based services. This radical shift in where backend data and critical foundations live has completely revolutionized the industry and created a whole new one in the process.

Logging Prices

For such an abrupt change, one would think that an equally intelligent shift in pricing methods would have followed. For the majority of companies, that is simply not the case.

New industries call for new pricing arrangements. Dynamically scalable pricing is practically a necessity for data-based SaaS companies. Flexible pricing just makes sense and accounts for vast and variable customer data usage.

AWS, and for that matter LogDNA, have taken the utilities approach to a complex problem: the end user pays only for what they need and use. Adopting this model comes with a set of new challenges and advantages that can be turned into actionable solutions. There is no set precedent for a logging provider using the AWS billing model. We are on the frontier of both pricing and innovation in cloud logging.

LogDNA Pricing Versus a Fixed System

The LogDNA billing model is built on a pay-per-GB foundation: each GB ingested is charged individually and totaled at the end of the month. What follows for every plan is low minimums, no daily cap, and scaling functionality.

For contrast, here is an example of a fixed, tiered system with a daily cap. For simplicity’s sake, consider a four-day usage log (no pun intended) for a log management system with a 1 GB/day cap:

Monthly Plan: 30 GB per month for $99
Day 1: 0.2 GB
Day 2: 0.8 GB
Day 3: 1.0 GB
Day 4: 0.5 GB

Those four days add up to 2.5 GB logged. That is a considerable amount of waste caused by the combination of a daily cap and variable usage. Let’s dig into how much money is wasted compared to our lowest-priced plan. LogDNA’s Birch plan charges $1.50 per GB, so the same 2.5 GB of usage would cost roughly $3.75 with our pricing system. The fixed system doesn’t show a price per GB, but some simple math lets us compare it to LogDNA: if a fixed $99 per month buys 30 GB of usage, then each GB effectively costs about $3.30. Can you spot the difference, not only in pricing but in cloud waste as well? With a daily cap, the end user can’t even consume the full plan. A majority of cloud users underestimate how much they’re wasting. Along with competitive pricing, our billing model cuts down tremendously on wasted cloud spend.
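To make the math concrete, here is a minimal sketch in Python that prices the same four days of usage under both models. The numbers are simply the figures from the example above, not real customer data.

```python
# Illustrative comparison of a fixed monthly plan vs. per-GB billing,
# using the example figures from this post (not real customer data).

daily_gb = [0.2, 0.8, 1.0, 0.5]   # four days of variable log volume

# Fixed, tiered plan: $99 for 30 GB per month, capped at 1 GB/day.
fixed_monthly_price = 99.00
fixed_monthly_gb = 30
effective_price_per_gb = fixed_monthly_price / fixed_monthly_gb  # ~$3.30/GB

# Pay-per-GB plan (Birch): $1.50 per GB actually ingested.
birch_price_per_gb = 1.50
usage_gb = sum(daily_gb)                          # 2.5 GB
pay_per_gb_cost = birch_price_per_gb * usage_gb   # $3.75

print(f"Usage: {usage_gb} GB")
print(f"Fixed plan effective rate: ${effective_price_per_gb:.2f}/GB")
print(f"Pay-per-GB cost for this usage: ${pay_per_gb_cost:.2f}")
```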

Challenges of the Model

It’s important to note again that our model is unique among logging providers, which unearths a number of interesting challenges. AWS itself has set a great example by publishing a number of guides and guidelines. The large swath of AWS services (which seems to grow by the minute) are all available on demand. For simple operations, that means only a few services are needed, without any contracts or licensing. Scaled pricing allows a company to grow at whatever rate it can afford, without having to adjust its plan, which lessens the risk of provisioning too much or too little. Simply put, we scale right along with you, so there’s no need to contact a sales rep.

LogDNA, as an all-in-one system, deals with many of these same challenges. The ability to track usage is a major focus area for us, so that you have full transparency into what your systems are logging with us. Our own systems track and bill down to the MB, giving the end user an accurate picture of spend compared to usage. This is not only helpful, it also allows us to operate transparently with no hidden fees. Though it is powered by a complex mechanism internally, it provides a simplified, transparent billing experience for our customers.

LogDNA users have direct control over their billing. While this may seem like just another thing to keep track of, it is really a powerful form of agency you can use to take control of your budget. Users can take their methodical logging mentality and apply it to their own billing process, allowing greater control over budgets and scale.

Say, for example, that there is an unexpected spike in data volume. Your current pricing tier will handle the surge without any changes to your LogDNA plan. As an added bonus, we also notify you in the event of a sudden increase in volume. Because log data is an ever-changing stream, we also offer ingestion controls so you can exclude logs you don’t need and not be billed for them. Our focus on transparency as part of the user experience not only builds trust, it also fosters a sense of partnership.
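As a rough illustration of the idea behind those volume notifications (a sketch of the concept, not our actual alerting implementation), a simple check might compare the latest day’s ingestion against the average of the days before it:

```python
# Hypothetical sketch of a volume-spike check: flag the latest day if it
# exceeds a multiple of the average of the preceding days. This illustrates
# the concept only; it is not LogDNA's internal alerting logic.

def spiked(daily_mb, threshold=2.0):
    """Return True if the most recent day exceeds `threshold` times
    the average of all prior days."""
    *history, latest = daily_mb
    if not history:
        return False
    baseline = sum(history) / len(history)
    return latest > threshold * baseline

recent_usage_mb = [220, 310, 280, 950]   # made-up daily ingestion figures
if spiked(recent_usage_mb):
    print("Ingestion spike detected: review new log sources or exclusion rules.")
```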

Scaling for All Sizes & Purposes

Our distinctly tiered system takes into account how many team members (users) will be using LogDNA on the same instance and the length of retention (how long historical log data remains accessible for metrics and analytics). We also offer a HIPAA-compliant scaled pricing tier for protected health information, which includes a Business Associate Agreement (BAA) for handling sensitive data. Below is a brief chart of sample scaled prices for our three initial individual plans; the full scope of the plans is listed here. It is a visualization of a sample plan for each tier.

Plan Estimator

BIRCH - $1.50/GB - Retention: 7 Days - Up to 5 Users - Monthly Minimum: $3.00
  Monthly GB Used:  1 GB    4 GB    16 GB    30 GB
  Cost Per Month:   $1.50   $6      $24      $45

MAPLE - $2.00/GB - Retention: 14 Days - Up to 10 Users - Monthly Minimum: $20.00
  Monthly GB Used:  10 GB   30 GB   120 GB   1 TB
  Cost Per Month:   $20     $60     $240     $2,000

OAK - $3.00/GB - Retention: 30 Days - Up to 25 Users - Monthly Minimum: $100.00
  Monthly GB Used:  50 GB   60 GB   150 GB   1 TB
  Cost Per Month:   $150    $180    $450     $3,000
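In formula terms, a month’s bill on any of these tiers is the per-GB rate multiplied by GB used, floored at the plan’s monthly minimum. Here is a minimal estimator sketch using the rates and minimums from the chart above:

```python
# Minimal plan-estimator sketch using the per-GB rates and monthly minimums
# listed above. The bill is rate * usage, floored at the plan's minimum.

PLANS = {
    "birch": {"rate_per_gb": 1.50, "monthly_minimum": 3.00},
    "maple": {"rate_per_gb": 2.00, "monthly_minimum": 20.00},
    "oak":   {"rate_per_gb": 3.00, "monthly_minimum": 100.00},
}

def monthly_cost(plan_name, gb_used):
    plan = PLANS[plan_name]
    return max(plan["rate_per_gb"] * gb_used, plan["monthly_minimum"])

print(monthly_cost("birch", 30))    # 45.0
print(monthly_cost("maple", 1000))  # 2000.0 (roughly 1 TB)
print(monthly_cost("oak", 50))      # 150.0
```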

Custom Solutions for All & Competitive Advantages

Many pricing systems attempt to offer a one-size-fits-all model. Where they miss the mark, we succeed with usability that scales from small shops to large enterprise deployments. Our Willow (Free) plan is a single-user system that lets an individual see whether a log management system is right for their project before bringing a collaborative team onto a paid tier. High-data plans are also customized while still retaining the AWS billing model, and we offer a full-featured 14-day trial.

The adoption of this model creates a competitive advantage for both parties. LogDNA can provide services to individuals and companies of every type with a fair, transparent pricing structure. The end user is given all relevant data usage and pricing information along with useful tools to manage it as they see fit. For example, imagine you are logging conservatively, focusing only on essentials like poor performance and exceptions. In the middle of putting out a fire, your engineering team realizes it is missing crucial diagnostic information. Once the change is made, those new log lines start flowing into LogDNA without anyone having to spend time mulling over how to adjust your plan. Having direct control over your usage and spending without touching billing is enormously beneficial to our customers, and it also reduces our own internal overhead for managing billing.

Competitive Scenario - Bridging the Divide Between Departments

Picture this scenario: there has been an increasing flux of users experiencing difficulty with your app, and the support team has been receiving ticket after ticket. Somewhere there is a discrepancy between what the user is doing and what the app is returning, and the support team needs to figure out why these users are struggling. The inquiries have stumped the department, so the director asks the engineering team how they can retrieve the pertinent information needed to produce a fix.

LogDNA helps bridge the divide by providing the support team with information relevant to the problem at hand. In this example, the engineering team instruments new code to log all customer interactions with API endpoints (a sketch of what that might look like follows below). The support team now has a broader view of how users are interacting with the interface. They have been equipped with a new tool in their arsenal by the engineers, and nothing was lost in translation between the departments during the exchange.

After looking through the newly logged information, the support team is able to solve the problem many of its users were experiencing. The support team has served its purpose by responding to these inquiries and making the end user happy, and all it took was some collaboration between two departments.

Log volume has increased because new logs are being funneled through the system, but the correlation between increased log volume and better support is worth it. During this whole process, no changes are required to your current LogDNA plan, and future issues will be easier to fix because this diagnostic information is readily available. The cost of losing users outweighs the cost of extra logs.

LogDNA places the billing model on an equal level of importance with the log management software itself. It can be used to inform decisions across the board: our billing model adapts to budgetary concerns, the user experience, and a better grasp of your own data, all at once.
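As a rough sketch of what that instrumentation might look like (the framework choice and field names here are illustrative assumptions, not part of the scenario above), a small piece of middleware can log every API interaction:

```python
# Hypothetical sketch: log each API interaction (method, path, status,
# duration) so support can see how users are exercising the endpoints.
# Flask and the log fields are illustrative choices.

import logging
import time

from flask import Flask, g, request

app = Flask(__name__)
logger = logging.getLogger("api.access")
logging.basicConfig(level=logging.INFO)

@app.before_request
def start_timer():
    g.start_time = time.monotonic()

@app.after_request
def log_interaction(response):
    elapsed_ms = (time.monotonic() - g.start_time) * 1000
    # Each line emitted here is one more log event flowing to the log manager.
    logger.info(
        "method=%s path=%s status=%s duration_ms=%.1f",
        request.method, request.path, response.status_code, elapsed_ms,
    )
    return response

@app.route("/api/orders")
def orders():
    return {"orders": []}
```

Every request now produces a structured line the support team can search, which is exactly the kind of incremental volume a per-GB model absorbs without a plan change.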
