How to Use LogDNA Views to Manage Logs Effectively

4 MIN READ

Views may seem straightforward at first, but they hide a lot of power. At the most basic level, a view is a shortcut to a specific search query or filter. You can use views to display only a subset of logs, create alerts and graphs, export specific events, and even embed your log event feed on another website. In this post, we'll present several tips and tricks for making the most of them.

1. Name Your Views Using a Standard Convention

Naming conventions are a way of giving your views relevant and useful names. A good naming convention makes views easier to manage, identify, and search for. Names become more important as the count increases; simple names work fine for five views, but can quickly become unwieldy with 100. A well-defined naming scheme can tell you everything you need to know about a view at a glance, and if you have a large collection, it also helps with filtering and searching. When devising a naming convention, start by identifying the most important aspects of your deployment. These might include:

  • The environment (dev, qa, prod, etc.)
  • The tenant (if you are using a multitenant architecture)
  • The application name
  • The target log level (info, debug, error, etc.)

Let's look at an example. Imagine a small SaaS provider hosting a web application on a private cloud server. The team logs the application to LogDNA and uses basic searches and filters to find the logs they're looking for. To help flag errors in production, the DevOps team created two new views: one for error-level and higher logs, and one for all other logs (called "Error Logs" and "Info Logs" respectively). For a while, this setup works fine.

Now, let's say QA wants a separate environment for testing new builds. Not to be outdone, development also asks for an environment to try out changes. Each of these environments also generates logs. Creating views for these is easy enough, but they can't just be called "Error Logs 2" and "Info Logs 2"; there needs to be a way of uniquely identifying each view.

A basic convention could use a pattern of "department_instance_level." Instead of "Error Logs" and "Info Logs," the views would be named "dev_app1_info," "qa_app2_error," and so on. Not only does this clearly define the contents of each view, but since LogDNA sorts views alphabetically, each view is naturally grouped with similar views. This makes it easy to scan for specific views and search for those containing specific keywords. In the screenshot below, searching for "prod" narrows the list to views containing production environment logs.

Complete List
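To keep a convention like this consistent as the view count grows, some teams generate names programmatically instead of typing them by hand. Here's a minimal Python sketch of the "department_instance_level" pattern described above; the helper function is purely illustrative and not part of any LogDNA API.

```python
# A minimal sketch of the "department_instance_level" naming convention.
# This helper is illustrative only; it is not a LogDNA API.
def view_name(department: str, instance: str, level: str) -> str:
    """Build a view name so alphabetical sorting groups related views."""
    return f"{department}_{instance}_{level}".lower()

print(view_name("dev", "app1", "info"))    # dev_app1_info
print(view_name("qa", "app2", "error"))    # qa_app2_error
print(view_name("prod", "app1", "error"))  # prod_app1_error
```

Because the most significant component (the environment) comes first, related views sort next to each other automatically.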

2. Organize Views Into Categories

By default, newly created views appear in a single list. LogDNA lets you group similar views into categories, which collects them under a common heading. Categories appear in the list, and you can click a category name to show or hide the views in it. Any view not assigned to a category appears under "Uncategorized."

Imagine our SaaS provider onboards five new clients, each with their own private instance. In addition to a private production environment, each client also requires separate development and QA instances for building custom configurations. With two views per environment, this adds 30 views to the list!

Categories not only reduce this visual clutter, but also group similar views together. For example, the team might categorize each view by client or environment. You can still perform searches while using categories, as shown below.

Categories don't need to follow a specific approach. You might find it more productive to categorize views by log level, log type, or application, or by using a combination of factors. With LogDNA, you can easily move views between categories at any time, and even assign a single view to multiple categories.

Category Search

3. Change How Events Are Displayed

Logs can pack a significant amount of information into a single message, and while this information is useful, having all of it on-screen at once is not. Custom line templates let you control how logs are displayed in each view. You can change the formatting of log entries, show or hide different fields, display metadata, and more. Using custom line templates can help you reduce clutter in your log feed and show only the information essential to your team. Note that custom line templates only change how logs are displayed in a particular view and do not affect parsing or searching.

Custom line templates let you specify which fields to show in the log feed. For this reason, they work best with easily parsable formats such as JSON, and with views containing logs that share the same fields or structure.

Let's say we have a view containing Nginx server access logs. Nginx logs a lot of data by default, but to make the view more readable, we want to pare it down to the client's IP address, the HTTP verb, the URL, and the response size (in bytes). Since LogDNA automatically parses Nginx logs, we already have access to these fields:

Nginx Parsed

We'll start by clicking the name of the view and selecting Edit view properties. The Edit View Properties dialog contains a text box named Custom %LINE Template, where we'll specify the formatting of each log line. Curly braces denote fields, and any other strings are interpreted as literals. For this example, we'll use the following template: {{clientip}} - {{verb}} - {{request}} - {{bytes}}.

Now, our view looks like this (the line shown above is highlighted):

Nginx Custom Line Template
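To make the substitution behavior concrete, here's a short Python sketch that mimics how a %LINE template maps parsed fields onto a rendered line. This is an illustration of the idea, not LogDNA's actual implementation, and the sample event values are made up.

```python
import re

# The template from the example above: {{field}} placeholders plus literals.
TEMPLATE = "{{clientip}} - {{verb}} - {{request}} - {{bytes}}"

# A parsed Nginx access-log event, using the field names from the article.
# Values are made up for illustration.
event = {
    "clientip": "203.0.113.42",
    "verb": "GET",
    "request": "/index.html",
    "bytes": 512,
}

def render(template: str, fields: dict) -> str:
    """Replace each {{field}} with its value; unknown fields render empty."""
    return re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(fields.get(m.group(1), "")),
                  template)

print(render(TEMPLATE, event))  # 203.0.113.42 - GET - /index.html - 512
```

Any field not listed in the template (timestamps, user agents, referrers, and so on) simply drops out of the rendered line, which is what trims the feed down to the essentials.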

4. Share Logs With Outside Users

In some cases, you might need to share your logs outside of LogDNA. Going back to our SaaS provider, imagine one of their larger clients requests their private instance logs in order to do on-site troubleshooting. Granting the client access to your LogDNA organization creates a security risk, and manually exporting or re-routing logs is too time-consuming.

LogDNA provides embedded views, which let you mirror a view onto any HTML page. This lets you create internal dashboards, live tail logs remotely, and control visibility into log data for certain users. Embedded views can either be static and display logs exactly as they appear in the view, or dynamic and limit the logs shown based on a specific query. For example, you could use a dynamic view to automatically filter logs to a specific client based on the identity of the logged-in user.

Embedded views offer a safe and flexible way of providing logs to users outside of your organization without exposing you or your organization to potential risk.
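As a rough illustration, an internal dashboard page might drop a dynamic embedded view in with an iframe and pass the client identity through as a filter. The embed URL and query parameter below are hypothetical placeholders; use the actual snippet LogDNA generates when you create the embedded view.

```python
# A minimal sketch of serving a dynamic embedded view on an internal page.
# EMBED_URL and the "query" parameter are hypothetical placeholders; copy
# the real embed snippet from LogDNA when you create the view.
EMBED_URL = "https://example.com/embed/your-view-token"

def render_embed(client_id: str) -> str:
    """Return an iframe that shows only the given client's logs."""
    return (
        f'<iframe src="{EMBED_URL}?query=client:{client_id}" '
        'width="100%" height="600"></iframe>'
    )

# An internal page resolves the logged-in user to a client ID and renders
# only that client's log feed.
print(render_embed("acme-corp"))
```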

Conclusion

Views make it easier to manage, organize, search, and of course, read your log data. With the right practices in place, they can quickly become an essential part of your team's workflow. LogDNA offers unlimited saved views for all accounts, including free accounts, so there are no limits to how you can organize your logs. To get started, sign up for a LogDNA account or log into your existing account and start using views like a pro.
