The Runbook Problem: How AURA Documents What Teams Don’t Have Time to Write

Runbooks are rarely missing because teams don't value them. They're usually missing because incident response, follow-up, and platform work compete for the same limited time. By the time an issue is resolved, the knowledge is fresh, but the window to document it is already closing.

That gap creates familiar failure modes: over-reliance on senior engineers, slower handoffs, and less confidence for whoever is on call next. One practical use case for AURA, an open-source agentic harness for production AI, is closing that gap by turning incident investigation into draft documentation.

When a runbook already exists, AURA can use it during investigation

When a runbook exists, AURA can pull it into an active investigation. An agent configured for SRE workflows — like the reference example in the AURA repo — receives the PagerDuty incident, reads the runbook link from the alert body, and pulls the document from whatever documentation source you've connected. At Mezmo, for example, the sre-agent reads the runbook through the GitHub MCP, then uses the Mezmo MCP server to investigate logs and system state with that runbook as working context.

The result isn't a generic answer based on model priors. It's an investigation grounded in the service's actual procedures, dependencies, and operating assumptions, which is what makes a recommendation easier to trust under pressure.
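For concreteness, the sketch below isolates the first step of that flow: pulling a runbook link out of the alert body. Every name in it is a hypothetical stand-in, not AURA's actual interface; the GitHub and Mezmo MCP calls are represented only by comments.

```python
import re
from dataclasses import dataclass

# Hypothetical stand-ins for the pieces AURA wires together. None of these
# names come from AURA itself; they only make the data flow concrete.

@dataclass
class Incident:
    title: str
    alert_body: str  # PagerDuty alert body, which may embed a runbook link

def extract_runbook_link(alert_body: str) -> str | None:
    """Return the first runbook-looking URL in the alert body, if any."""
    match = re.search(r"https?://\S*runbook\S*", alert_body, re.IGNORECASE)
    return match.group(0) if match else None

incident = Incident(
    title="api-gateway: elevated 5xx rate",
    alert_body="Runbook: https://github.com/acme/ops/blob/main/runbooks/api-gateway.md",
)

link = extract_runbook_link(incident.alert_body)
# With a link in hand, the agent would fetch the document through the GitHub
# MCP and query logs and system state through the Mezmo MCP, keeping the
# runbook as working context for the rest of the investigation.
print(link)
```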

More useful, however, is what happens when a runbook doesn't exist yet.

When a runbook does not exist, AURA can turn the investigation into a draft

Point AURA at the GitHub location where the runbook should live and define the fallback behavior in the prompt: if no document exists, investigate the issue with the on-call engineer, then open a pull request with a draft runbook based on the actions taken during the incident.
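A prompt fragment expressing that fallback might read something like the following. The wording is illustrative, not the reference prompt from the AURA repo, and the path is a placeholder:

```
The runbook for this service should live at runbooks/<service>.md in the
ops repository. If it exists, load it and use it as working context for
the investigation. If it does not exist, investigate the incident with
the on-call engineer, record the actions taken, and open a pull request
containing a draft runbook at that path.
```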

From there, AURA pulls telemetry and system context through Mezmo MCP, assembles a first pass, and opens a PR for review. That draft typically covers incident symptoms, likely root cause, validation steps, remediation guidance, relevant dashboards or queries, and follow-up checks for the next responder: structured enough to review, not polished enough to publish without one.
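As a rough illustration, a draft in that shape might look like the skeleton below. The headings and details are hypothetical, not a fixed AURA output format:

```markdown
# Runbook: api-gateway elevated 5xx rate (draft, needs review)

## Symptoms
- 5xx rate above 2% on /v1/* routes; error-rate alert fires

## Likely root cause
- Upstream connection pool exhaustion following a dependency slowdown

## Validation steps
- Check the gateway error-rate dashboard for correlated upstream latency
- Confirm pool saturation in the connection metrics

## Remediation
- Restart the affected gateway pods
- If saturation persists, raise the pool limit and page the owning team

## Follow-up checks
- Verify the error rate returns to baseline within 15 minutes
- Note any steps that differed from this draft and update the runbook
```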

In practice, that distinction is what makes the workflow useful. An engineer reviews the branch, fills in service-specific nuance, and merges. The only thing left is to update the link in the alert body to point to the new runbook. Documentation that otherwise wouldn't get written now exists as part of the work already being done during investigation.

For existing runbooks, results tend to be stronger still: AURA can follow the structure, language, and terminology already in place, which makes updates easier to review and easier to merge.

Incident response becomes institutional memory

The value isn't just in producing one draft. It's in creating a repeatable loop.

Investigate an issue. Reference the runbook if it exists. Surface what's missing. Update the document. Merge the PR. The next responder starts from a better baseline than the last one did.

That's a more realistic model for documentation maintenance than asking engineers to create and refresh runbooks as a separate project. The work improves as a byproduct of incident response instead of competing with it. Over time, runbooks become something closer to living operational memory instead of an outdated page no one trusts.

Why this workflow is practical for SRE and Platform teams

The value here is that it changes the job from authorship to review.

Ideally, an SRE or platform engineer shouldn't have to start from a blank page after an incident. Reviewing a draft, correcting edge cases, and approving the final version is a much more realistic ask. AURA handles the repetitive first pass; engineers keep control over judgment, accuracy, and approval.

The goal isn't to auto-publish procedures into production workflows. It's to reduce the friction that keeps useful documentation from existing at all.

Templates help here too. Bringing your own runbook template into the prompt improves consistency across services and raises the quality of the initial draft by giving the system a clearer target structure.

Runbook generation is one useful pattern, not the whole story

Runbook generation is a good starting point for AURA because the value is immediate: investigation produces something the team can actually reuse. But it's one use case in a broader set of operational workflows.

To learn more or get started, check out the following: 

AURA on GitHub
Explore the Quickstart
Give AURA a star
