Quickwit vs. Elasticsearch: Choosing the Right Search Tool
Learning Objectives
This article covers the key features of Quickwit and Elasticsearch, how to choose the right solution, and how each tool integrates with Mezmo.
Overview of Quickwit
Quickwit is a high-performance, cost-effective, and scalable search engine purpose-built for massive, append-only datasets. It is a compelling alternative to Elasticsearch, Loki, or Tempo for cloud-native log and trace analytics, especially when you want to leverage cheap object storage and elastic compute.
What is Quickwit?
Quickwit is an open‑source, cloud‑native search engine built for logs, traces, and other append‑only data. It lets you run complex search and analytics queries directly on object storage (like S3, GCS, Azure Blob) with sub‑second latency thanks to its efficient Rust implementation and architecture.
Quickwit has a true cloud-native design: it decouples compute from storage, stores indexes in object storage, and keeps searchers stateless so they can scale independently. It is built on Tantivy, a fast Rust search-engine library, which provides its indexing and query performance. Quickwit is tailored specifically for log management and distributed tracing, with native support for OpenTelemetry, Jaeger, and Elasticsearch-compatible query APIs. It also allows flexible ingestion of JSON documents without upfront schema constraints.
Quickwit is ideal for log management, distributed tracing, and append-only analytics.
Key features of Quickwit
Here are the key features of Quickwit:
1. Cloud-Native Architecture
Quickwit separates compute and storage: indexes live in object storage, while query nodes are stateless and independently scalable. These stateless components make deployments simpler and more cost-effective in large-scale environments.
2. Append-Only Data Support
Quickwit is optimized for append-only workloads like logs, traces, and events. It is not designed for real-time document updates or deletions, which simplifies architecture and improves performance.
3. Fast Search on Object Storage
Quickwit delivers sub-second search latency even on data stored in low-cost object storage, achieved through smart indexing and distributed query execution.
4. High Ingestion Throughput
Quickwit supports high ingestion rates from sources like Kafka, Fluent Bit, or HTTP. It contains efficient batching and compression mechanisms for large-scale ingestion pipelines.
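As a rough sketch of what a batched HTTP ingest payload looks like, Quickwit's ingest API accepts newline-delimited JSON (NDJSON), one document per line. The index id, endpoint path, and field names below are illustrative assumptions, not taken from the article:

```python
import json

# Hypothetical log records; field names are illustrative.
records = [
    {"timestamp": "2024-01-15T10:00:00Z", "level": "INFO", "message": "service started"},
    {"timestamp": "2024-01-15T10:00:01Z", "level": "ERROR", "message": "connection refused"},
]

# Build the NDJSON payload that would be POSTed to an endpoint like
# /api/v1/<index-id>/ingest (the index id here is an assumption).
payload = "\n".join(json.dumps(r) for r in records)

print(payload.count("\n") + 1)  # → 2 (documents in the batch)
```

In practice a shipper such as Fluent Bit or a Kafka source would build and send these batches for you; the point is only that documents travel as one JSON object per line.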
5. Elasticsearch-Compatible Query API
Quickwit supports Elasticsearch-style queries (the query DSL), making it easy to integrate with tools such as Kibana, Grafana, Fluent Bit, and Logstash.
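To make the compatibility concrete, here is a minimal Elasticsearch-style query body that could be reused against either backend. The index name and field names are hypothetical, and the Quickwit endpoint path is an assumption based on its Elasticsearch-compatible API (recent versions expose it under `/api/v1/_elastic`):

```python
import json

# An Elasticsearch query DSL body: full-text match on `message`,
# filtered to ERROR-level records. Field names are illustrative.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"message": "timeout"}}],
            "filter": [{"term": {"level": "ERROR"}}],
        }
    },
    "size": 20,
}

# The same body could be POSTed unchanged to either engine, e.g.:
#   Elasticsearch: POST /app-logs/_search
#   Quickwit:      POST /api/v1/_elastic/app-logs/_search
print(json.dumps(query, indent=2))
```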
6. Built with Tantivy (Rust)
Tantivy, a fast full-text search-engine library written in Rust, is Quickwit’s core indexing engine, providing high performance and memory safety with low overhead.
7. Flexible Schema Management
Quickwit supports both schemaless ingestion (e.g., for JSON logs) and strict schemas for structured data, allowing dynamic fields and automatic schema discovery.
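As a sketch of what this looks like in practice, a Quickwit index configuration can pin down a few structured fields while leaving the rest dynamic. The index id and fields below are hypothetical, and exact configuration keys vary by Quickwit version, so treat this as illustrative rather than copy-paste ready:

```yaml
version: 0.7
index_id: app-logs
doc_mapping:
  mode: dynamic          # JSON fields not listed below are still indexed
  field_mappings:
    - name: timestamp
      type: datetime
      fast: true         # columnar storage for fast range queries
    - name: level
      type: text
      tokenizer: raw     # exact-match field, not tokenized
  timestamp_field: timestamp
```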
8. Distributed and Scalable
Designed for cloud-native deployments, Quickwit offers horizontal scalability for both indexing and searching.
9. Built-in Connectors and Integrations
Quickwit has native support for Kafka, S3, OpenTelemetry, Jaeger Traces, and Prometheus/Grafana.
10. Multi-Tenancy and Partitioning
The search solution supports logical indexes per tenant or application. Indexes can be sharded and partitioned by time for better query performance and management.
11. Low TCO (Total Cost of Ownership)
Quickwit reduces storage costs by indexing directly on object storage, and the stateless compute minimizes always-on resource requirements.
12. Open Source and Extensible
Quickwit is licensed under Apache 2.0 and has an extensible plugin architecture for custom indexing or querying logic.
Quickwit architecture
Quickwit’s architecture is cloud-native, distributed, and purpose-built for append-only data like logs and traces. It separates compute from storage, enabling scalability, cost efficiency, and fast search over data stored in object storage systems like Amazon S3.
Core Components
Quickwit’s data flow has three stages: ingestion, search, and index lifecycle management. In the ingestion phase, data arrives via Kafka, HTTP, or file connectors. The Indexer processes and batches the data into index splits, which are serialized, compressed, and uploaded to object storage; metadata about each split is registered in the Meta Service. In the search phase, a Searcher receives a query and asks the Meta Service for the relevant index metadata. The Searcher fetches only the needed index splits from object storage, executes the query in parallel across splits, and returns results with low latency (often sub-second). Finally, index lifecycle management automates maintenance tasks such as merging splits and enforcing retention policies.
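The "fetch only the needed splits" step of the search phase can be sketched as a simple time-range overlap check over split metadata. The metadata shape here is a deliberate simplification of what the Meta Service tracks:

```python
from dataclasses import dataclass

@dataclass
class SplitMeta:
    split_id: str
    min_ts: int  # earliest timestamp in the split (epoch seconds)
    max_ts: int  # latest timestamp in the split

def prune_splits(splits, query_start, query_end):
    """Keep only splits whose time range overlaps the query window,
    so the searcher downloads nothing else from object storage."""
    return [s for s in splits if s.max_ts >= query_start and s.min_ts <= query_end]

splits = [
    SplitMeta("split-a", 100, 200),
    SplitMeta("split-b", 200, 300),
    SplitMeta("split-c", 400, 500),
]
print([s.split_id for s in prune_splits(splits, 150, 250)])  # → ['split-a', 'split-b']
```

Because pruning happens against metadata alone, a query over a narrow time window touches only a small fraction of the data sitting in object storage.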
Overview of Elasticsearch
What is Elasticsearch?
Elasticsearch is a distributed, RESTful search and analytics engine designed for fast, scalable full-text search, structured search, and data analytics across large volumes of data in near real time.
It is the core component of the Elastic Stack (formerly the ELK Stack), which includes Elasticsearch (the search and analytics engine), Logstash (data collection and transformation), Kibana (visualization and dashboarding), and Beats (lightweight data shippers).
Elasticsearch makes it possible to ingest and store data (logs, metrics, documents, JSON data, etc.), index and search that data very quickly, and analyze and visualize trends, patterns, and anomalies.
Elasticsearch is typically used for log and event data analytics, application search, security information and event management (SIEM), business intelligence/dashboards, monitoring and observability and geospatial data analysis.
Key features of Elasticsearch
Elasticsearch’s key features include full-text search with relevance scoring, near-real-time indexing and search, a rich query DSL (including fuzzy search, filtering, and aggregations), a distributed and horizontally scalable architecture, tight integration with Kibana for visualization, and commercial features such as machine learning and advanced security.
Elasticsearch architecture
Elasticsearch has a distributed, horizontally scalable architecture designed for high-performance search, analytics, and data ingestion across large volumes of structured and unstructured data. Its architecture ensures fault tolerance, real-time performance, and easy scalability.
The Elasticsearch architecture is made up of the following:
A cluster is a collection of one or more nodes (servers) that together hold the entire dataset and provide indexing and search capabilities. Each cluster is identified by a unique cluster name.
A node is a single server that is part of the cluster; it stores data and performs search and indexing. Node types include Master, Data, Ingest, Coordinating, and Machine Learning.
An index is similar to a table in a relational database. An index contains documents, which are JSON objects, and each index is divided into shards for distributed storage and search.
A shard is a low-level unit of storage and computation. Elasticsearch distributes data by splitting each index into multiple primary shards and replica shards. Shards are distributed across nodes for parallelism and fault tolerance.
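Elasticsearch decides which primary shard receives a document by hashing its routing key (by default the document id), roughly `shard = hash(_routing) % number_of_primary_shards`, which is why the primary shard count is fixed at index creation. A simplified sketch of that idea (Elasticsearch actually uses a murmur3 hash internally; `md5` stands in here for illustration):

```python
import hashlib

NUM_PRIMARY_SHARDS = 5  # fixed when the index is created

def route_to_shard(doc_id: str, num_shards: int = NUM_PRIMARY_SHARDS) -> int:
    """Pick a primary shard from the document id. The hash is stable,
    so the same id always lands on the same shard."""
    h = int(hashlib.md5(doc_id.encode()).hexdigest(), 16)
    return h % num_shards

shard = route_to_shard("log-0001")
assert 0 <= shard < NUM_PRIMARY_SHARDS
```

Changing `num_shards` would send existing ids to different shards, which is exactly why resizing primaries requires reindexing.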
The document is the basic unit of information stored in Elasticsearch (in JSON format). Each document has a unique ID and resides in an index.
Elasticsearch is built on Apache Lucene, a high-performance search engine library. Lucene handles the underlying indexing and search mechanics using inverted indexes and segment files.
Data flow in Elasticsearch consists of indexing and searching. In the indexing stage, data is sent to Elasticsearch as JSON, routed through a coordinating node (if applicable), and delivered to the appropriate primary shard. The ingested data is analyzed, tokenized, and indexed by Lucene, and replica shards are asynchronously updated for fault tolerance. When searching, a query is sent to any node, which acts as the coordinating node. The coordinating node forwards the query to the relevant shards (primary or replica), and each shard executes the query locally. Results are gathered and merged by the coordinating node, then returned to the user.
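The merge step of the search flow can be sketched as the coordinating node combining per-shard top-k results into a single globally ranked list. The scores and document ids below are made up for illustration:

```python
import heapq

# Each shard returns its own top hits as (score, doc_id) pairs.
shard_results = [
    [(9.1, "doc-3"), (7.4, "doc-8")],   # shard 0
    [(8.2, "doc-1"), (6.0, "doc-5")],   # shard 1
    [(9.8, "doc-7"), (5.5, "doc-2")],   # shard 2
]

def merge_top_k(shard_results, k):
    """Coordinating-node step: merge per-shard hits, keep the global top k."""
    all_hits = [hit for hits in shard_results for hit in hits]
    return heapq.nlargest(k, all_hits)  # sorted by score, descending

print(merge_top_k(shard_results, 3))  # → [(9.8, 'doc-7'), (9.1, 'doc-3'), (8.2, 'doc-1')]
```

Each shard only needs to ship its own top k hits, not its full result set, which keeps the fan-in cheap even across many shards.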
Quickwit vs Elasticsearch: Feature Comparison
Here’s a feature-by-feature comparison of Quickwit vs. Elasticsearch, highlighting their architectural differences, core use cases, performance, and cost considerations.
Key feature comparisons
Purpose/Use case
In short, choose Quickwit for:
- Large-scale, immutable log storage
- Cost-sensitive environments (especially cloud-native)
- Append-only event pipelines (Kafka, OpenTelemetry)
- Fast search over data stored in S3, GCS, or Azure Blob
- Simple observability stacks without full analytics
Choose Elasticsearch for:
- Application or site search with full-text relevance
- Real-time log and metric analysis with alerting
- Dashboards, visualizations, and user-friendly exploration
- Security analytics, threat detection, SIEM
- Use cases that require frequent data updates or complex querying
Data Storage
Quickwit stores its indexes in object storage; Elasticsearch uses local disks, EBS volumes, or shared block storage.
Speed & Performance
Quickwit does not have built-in machine learning, while Elasticsearch does. When it comes to query latency, Quickwit offers sub-second queries, versus millisecond to sub-second for Elasticsearch. Quickwit excels at fast batch search, filtering, and full-text queries, compared with Elasticsearch’s full-text, fuzzy search, filtering, relevance scoring, and complex queries.
Scalability
Quickwit offers elastic scale, while Elasticsearch scales horizontally via nodes and shards but requires more operational management.
Security & Compliance
Quickwit’s security is limited in open source, while Elasticsearch offers extensive security and compliance features.
Ease of Use
Quickwit’s stateless components make it comparatively easy to deploy and operate, while Elasticsearch’s cluster, shard, and node management adds operational complexity and a steeper learning curve.
Pros & Cons of Quickwit
On the plus side for Quickwit:
- Cloud native architecture
- Cost efficiency
- High-performance search
- Purpose-built for append-only data
- Easy to operate
- Elasticsearch-compatible query API
- Fully open source
- Scalable and fault tolerant
The cons of Quickwit include:
- Append-only limitation
- No native visualization layer
- Narrow use case focus
- Smaller ecosystem
- Limited security features (out of the box)
- Learning curve for custom pipelines
Pros and Cons of Elasticsearch
The pros of Elasticsearch include:
- Powerful full-text search engine
- Versatile data support
- Rich query capabilities
- Real-time indexing and search
- Scalable and distributed architecture
- Kibana integration
- Extensive ecosystem and community
- Advanced features (commercial)
- Multi-tenant capabilities
The cons of Elasticsearch are:
- Operational complexity
- High resource consumption
- Storage costs
- Complex licensing model
- Steep learning curve
- Not ideal for immutable-only workloads
How to choose the right solution
Choosing between Quickwit and Elasticsearch depends on your use case, data characteristics, infrastructure requirements, and cost constraints.
Choose Quickwit if:
- You need fast, cost-efficient search on append-only logs or traces
- You're building in the cloud, want to use object storage, and need stateless, scalable compute
- You're focused on observability/log pipelines with minimal overhead
Choose Elasticsearch if:
- You need a general-purpose search and analytics engine
- You rely on real-time indexing, rich queries, dashboards, or machine learning
- You're already using the Elastic Stack (Kibana, Logstash, Beats, etc.)
How each tool integrates with Mezmo
Elasticsearch was Mezmo’s original search backend, used to power log ingestion, indexing, and querying for their observability platform. While Mezmo has since migrated to Quickwit for improved scalability and cost efficiency, it’s still valuable to understand how Elasticsearch previously integrated with Mezmo, as it reflects a common architecture for log analytics platforms.
Mezmo transitioned their backend search engine from Elasticsearch to Quickwit in production, handling multi-petabyte telemetry data across thousands of customers. The integration maintains the same user-facing experience in Mezmo’s UI and APIs, preserving functionality while improving performance and lowering costs.
The company also used a parallel deployment model: Quickwit ran alongside Elasticsearch, allowing the team to run a proof of concept, shift traffic incrementally, monitor performance, and fall back easily if needed. After cut-over, Quickwit became the primary search backend for Mezmo’s cloud observability platform. Mezmo sends logs via standard agents into Quickwit’s ingest pipeline, which builds and stores index splits in object storage. Quickwit then serves search queries directly from storage, with distributed searchers fetching only the needed index splits.
Watch a webinar of Mezmo’s entire migration journey.