How One Software Engineer Jumped to a Remote Startup

4 MIN READ

LogDNA is now Mezmo but the people you know and love are here to stay.

As a recent hire, I’m in a unique position to share my perspective about my experience at LogDNA. In this post, I’ll talk about how I discovered LogDNA and what it’s been like since I joined in June.

First, let's rewind to 2019 when I was just another tech worker bee. I’d wake up at 7:00 am, grab an energy bar and a cup of coffee, and pay the tolls to bypass as many cars as possible in hopes of making my 8:30 am meeting. Work. Then I’d spend another hour heading home. The commute was part of the deal, and most of my co-worker bees were doing the same. It was just part of work life.

Fast forward to the beginning of 2021. Life was good without having to commute, pandemic lockdowns began to ease, and the talk of getting worker bees back to the office was buzzing in the air. I knew that every CDC announcement could be the automatic end of my remote work experience and I wanted to find a permanent remote job.

Having worked in tech for more than a decade, I knew what I wanted on my checklist while job searching. Everyone has a list of their own; here’s mine:

  • Dedicated to a remote culture.
  • A good engineering culture.
  • A startup with a good tech stack and a shared future with its employees.
  • A MacBook Pro as my primary work machine and less corporate red tape for engineering activities.
  • A product-led company that has a direct impact on users.

Company Culture

Before Joining

It’s interesting that one can sense aspects of a company’s culture even before talking to anyone who works there. The good, bad, and ugly are captured in sites like Glassdoor, which was my first stop to learn about LogDNA. I know it’s all happy faces and a bit of marketing by the company, but it’s a way to get a snapshot of the culture. While looking at the photos, I was asking myself: Are they having fun? Do I see people like me? Where are they, and what are they doing? Do I like my future manager and the team I’ll be working with?

Now

  • I got very interested in LogDNA after talking with the internal recruiter, Don. He painted a nice picture of working at LogDNA by focusing on the people, the tech stack, and the emphasis on work-life balance. Everything he told me checked out.
  • Two of our company values are to jump in and have fun. When there’s an opportunity or challenge, whether within your team or outside it, anyone can jump in to contribute. In the process, we have some fun giving and receiving help. There’s a spirit I can feel in the air: we are a startup, and we feel and act like one!
  • Management structure is flat. Effort goes into getting work done right, not into reporting, time tracking, and bureaucratic process. Instead of letting deadlines run the show, we hold bi-weekly demos to show off what’s in progress. Yes, there are deadlines to meet, but I can see the team focusing on delivering value periodically, setting proper expectations, and executing against them.
  • As an SDET, I’m encouraged to shift left (collaborate with developers) and shift right (work with release and cloud infrastructure engineers). Testing is a team sport.
  • Move fast and break things, backed by automated tests. Things move fast, and when they break, I expect them to break in a non-production environment.
  • I see a good test-writing culture within all layers of engineering. I’m glad to see the front-end engineers write proper element selectors, which makes front-end test automation easier.
  • When we do an upgrade, tons of tests are written beforehand so that the upgrade doesn’t break production.
  • Release and Ops teams make sure a new feature’s impact radius is controlled during an alpha release and that it’s rolled out to more customers only when everything is running as expected. We release and roll back with proper automation and playbooks.
  • Whenever we want to make sure something behaves as expected, we write a test. It doesn’t all have to be web front-end testing. For example, we can write Python tests, schedule the run on the CI server, and use an AWS API to verify file-archiving features. When we want to verify that agents on servers behave correctly, we write some Ansible scripts. Opportunities to gain the skills I want are everywhere, which is a big deal for me personally. (I didn’t write that Ansible script, but I will someday when similar solutions are needed.)

Tech Stack

Before Joining

I always include a few keywords from the tech stack I want to work with in a job search. It’s important for me to use my favorite languages, tools, and ecosystems. One may argue that languages, frameworks, and tools shouldn’t matter to engineers who know what they’re doing. My take is that as long as the languages and tools make an engineer happy, productive, and motivated, it’s worth being picky. I saw a bucket of buzzwords: Node, JavaScript, Vue, Cypress, Docker, Kubernetes, CI/CD, Elastic, Jenkins, AWS, Python, Rust, Terraform, etc. I don’t know all of them, but most of them got me excited.

Now

  • Engineers at LogDNA can expect to see and work with (code review included) multiple languages, including JavaScript, Python, and Rust.
  • Infrastructure-as-code practice (e.g., Ansible) is not just for cloud infrastructure teams. All engineers are encouraged to work with each other to design and implement automation solutions.
  • All stacks are open to anyone. I can work on items outside my defined tasks and team. I don’t use every tool in the bucket of buzzwords mentioned above, but I’m encouraged to contribute when ready. I’m defined not by my job title and department, but by my impact, responsibilities, and passion.
  • Code review is not a rubber-stamp exercise. I’m encouraged to take the time to discuss all aspects of the code and approach.

MacBook Pro

Before Joining

A friend of mine who has been through tons of interviews lately mentioned that she uses the MacBook Pro as a marker to guesstimate whether an interview process is worth going through. I’m somewhat in the same boat.

It’s not just business; it’s personal. I generally have not had a good user experience with Windows over the last couple of decades. One can argue, “...but you can use Windows Subsystem for Linux 2...” to which my reply is, “Why not just get rid of Windows and install Linux directly?” I have a feeling the true reason engineers stick with Windows is games. I know my take is controversial, but it has some valid points.

Now

  • A nice annual home-office stipend allowed me to get a great standing desk. Little perks here and there help me improve my home office.
  • Most of the tools come straight out of the cloud, and more and more services are moving to AWS to keep things simple. I have full control over my machine without having to wait for corporate approval for yet another tool to install.
  • No more complaining to corporate about why I need extra RAM and why I need to be the admin of my work machine.

Product

Before Joining

I knew I wanted to join a product-led company instead of a service or consulting company. LogDNA’s logging product is especially interesting to me. I knew it would have a great impact on the software industry, and the product’s direct users are engineers.

Now

I like that engineers are exposed to what marketing, finance, and sales are doing. I know the product roadmap and strategies are based on deep market research and understanding of the industry. I believe we are on the right track. We are also really transparent. Product roadmaps, features, and prioritization are communicated clearly, and I am confident in what we are building.

What’s Next

I enjoy working with my co-workers and my manager (I know he might read this, but it’s true). One of the things I’m looking forward to as a new engineer is to meet more people face to face during the holiday party and beyond. 

In the next few months, I will be writing front-end regression tests for new enterprise features while they are being developed. I’ll also add visual testing capabilities through Percy.io and Jenkins integration. Why do we need visual testing? That’s another blog...
