Originally published on DevOps.com by Mike Vizard
LogDNA is making streaming data processing available on a limited basis in its observability platform as part of an effort to give DevOps teams more control over log data.
Tucker Callaway, LogDNA CEO, said the data ingestion pipeline that LogDNA already provides can ingest, parse and normalize massive amounts of structured and unstructured log data. LogDNA Streaming adds the ability to automatically send log data to any application or analysis tool.
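To make the ingest-parse-normalize step concrete, here is a minimal, hypothetical sketch (not LogDNA's actual pipeline) of reducing structured (JSON) and unstructured (plain-text) log lines to one common record shape:

```python
import json
import re
from datetime import datetime, timezone

# Illustrative only: map mixed log lines onto a shared
# {timestamp, level, message} schema.
SYSLOG_RE = re.compile(r"^(?P<level>[A-Z]+)\s+(?P<message>.*)$")

def normalize(line: str) -> dict:
    """Parse one raw log line into a normalized record."""
    try:
        # Structured case: the line is already JSON.
        record = json.loads(line)
        return {
            "timestamp": record.get("ts", ""),
            "level": record.get("level", "INFO").upper(),
            "message": record.get("msg", ""),
        }
    except json.JSONDecodeError:
        pass
    # Unstructured case: best-effort parse of "LEVEL message" text.
    match = SYSLOG_RE.match(line)
    if match:
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "level": match.group("level"),
            "message": match.group("message"),
        }
    # Fall back to keeping the raw line so nothing is dropped.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "level": "UNKNOWN",
        "message": line,
    }

print(normalize('{"ts": "2022-01-01T00:00:00Z", "level": "error", "msg": "db timeout"}'))
print(normalize("WARN disk usage at 91%"))
```

Once every line shares one schema, a single copy of the data can be routed to any downstream application or analysis tool, which is the storage-cost point Callaway makes above.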
That capability also helps reduce storage costs by limiting the number of copies of log data that might otherwise be created, he added.
LogDNA also expects DevOps teams to be able to analyze data in flight, making it simpler to surface issues even before log data is stored, noted Callaway.
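In-flight analysis of this kind can be pictured as a pass-through stage that inspects each line as it streams by, rather than querying data at rest. A hypothetical sketch, not tied to any LogDNA API:

```python
# Illustrative only: flag issues while log lines stream through,
# before anything reaches storage.

def alert_on_errors(stream):
    """Yield every line unchanged, surfacing ERROR lines as they pass."""
    for line in stream:
        if "ERROR" in line:
            print(f"alert: {line}")  # raise the issue immediately
        yield line                   # pass the line downstream unchanged

# The downstream consumer (here, storage) still sees every line.
stored = list(alert_on_errors(["INFO ok", "ERROR db timeout", "INFO done"]))
```

Because the alert fires inside the pipeline, the issue surfaces at ingestion time instead of after a later batch query.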
While there are multiple streaming data platforms available, Callaway said LogDNA opted to build its own to meet the requirements of an observability platform. Ultimately, the goal is to shift observability left at the data level to provide more granular insights in real time, he said.
That capability will also lay the foundation for a wide range of autonomous processes that would be driven by the observability data collected via LogDNA, added Callaway.
Observability has always been a core tenet of any DevOps best practice. Most DevOps teams typically focus on some form of continuous monitoring to more proactively manage application environments. However, it can still take days, sometimes weeks, to discover the root cause of an issue. Monitoring tools are designed to consume predefined metrics to identify when a specific platform or application is performing within expectations. The metrics tracked generally focus on, for example, resource utilization.
Observability combines metrics, logs and traces—a specialized form of logging—to instrument applications in a way that makes it simpler to troubleshoot issues without having to rely solely on a limited set of metrics that have been predefined to monitor a specific process or function. DevOps teams can more easily employ queries to interrogate data in a way that makes it easier to discover the root cause of an issue. An observability platform correlates events in a way that makes it easier for analytics tools to identify anomalous behavior indicative of an IT issue.
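The kind of ad hoc query described above can be as simple as counting error events per service to find an anomalous spike no predefined metric was watching for. A hypothetical example over already-correlated events:

```python
from collections import Counter

# Illustrative only: sample events as an observability platform might
# correlate them (service name, severity, trace ID).
events = [
    {"service": "checkout", "level": "ERROR", "trace_id": "a1"},
    {"service": "checkout", "level": "ERROR", "trace_id": "a2"},
    {"service": "search",   "level": "INFO",  "trace_id": "b1"},
    {"service": "checkout", "level": "ERROR", "trace_id": "a3"},
]

# Ad hoc query: which service is producing the most errors right now?
errors_by_service = Counter(
    e["service"] for e in events if e["level"] == "ERROR"
)
suspect, count = errors_by_service.most_common(1)[0]
print(suspect, count)  # checkout 3
```

The trace IDs attached to the suspect service's errors then give the team a direct path from the anomaly to the individual requests behind it.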
It’s now only a matter of time before observability platform providers widely employ machine learning algorithms to analyze machine data faster. There may even come a day when so-called “war room” meetings—which today must be convened to identify the cause of an IT issue via a painstaking process of elimination—are no longer required.
In the meantime, the percentage of applications that are instrumented in an IT environment needs to increase to achieve that goal. Fortunately, thanks to the rise of open source agent software, the cost of instrumenting applications is starting to decline. At the same time, DevOps teams should expect to see a marked increase in the volumes of incoming data, and they will need to find somewhere to store that data as well as determine the length of time it should be retained.