Frequently Asked Questions

Don't see your question here? Ask us.

Is Druid in-memory?

Given that it is impossible for a modern processor to process data without first loading it into memory, yes, Druid is in-memory. The earliest iterations of Druid did not page data between memory and disk, so we often called it an “in-memory” system. However, we quickly realized that RAM has not become cheap enough to store all data in RAM and still sell a product at a price point customers are willing to pay. Since the summer of 2011, we have leveraged memory-mapping to page data between disk and memory, extending the amount of data a single node can load up to the size of its disks.

That said, as we made the shift, we didn’t want to compromise the ability to configure the system so that everything is essentially in memory. To this end, individual Historical nodes can be configured with the maximum amount of data they should be given. Couple that with the Coordinator’s ability to assign data segments to different “tiers” based on differing query requirements, and Druid becomes a system that can be configured across the whole spectrum of performance requirements: all data can fit in memory; data can be heavily over-committed relative to the amount of memory available; or the most recent month of data can be in memory while everything else is over-committed.
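
As a sketch of how this looks in practice, assuming Druid's usual `runtime.properties` conventions for Historical nodes (property names `druid.server.maxSize` and `druid.server.tier`; the values and the "hot" tier name are illustrative):

```properties
# Sketch of a Historical node's runtime.properties; values are illustrative.

# Maximum bytes of segment data this node should be assigned (~300 GB here).
# Keep this at or below available RAM for an effectively in-memory node,
# or well above it to over-commit against disk via memory-mapping.
druid.server.maxSize=300000000000

# Tier label; Coordinator rules can, e.g., keep the most recent month of
# data on "hot" nodes and everything else on cheaper, over-committed nodes.
druid.server.tier=hot
```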

What external dependencies does Druid have?

Druid has three external dependencies that must be running in order for the Druid cluster to operate:

  1. Deep Storage
  2. Metadata Storage (MySQL or Postgres)
  3. ZooKeeper

What is “deep storage”?

Simply put, deep storage is some storage infrastructure that Druid depends on for data availability. If this infrastructure goes down, then Druid cannot load new data into the system, nor can it recover from failures in the cluster. As long as this infrastructure is operational, Druid Historical nodes cannot lose data.

A more technical description of deep storage and the options available exists in our docs:

Deep Storage

Can Druid accept non-structured data?

Druid requires some structure in the data it ingests. In general, data should consist of a timestamp, dimensions, and metrics. These are discussed in a bit more detail in our original blog post: Introducing Druid: Real-Time Analytics at a Billion Rows Per Second
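
A minimal sketch of what such a row might look like (the field names here are illustrative, not required by Druid; only the presence of a timestamp is mandatory):

```python
# One Druid-ingestible row: a timestamp, dimensions (string attributes to
# filter and group by), and metrics (numeric values to aggregate).
row = {
    "timestamp": "2014-03-01T00:00:00Z",  # required: every row needs a timestamp
    "page": "Justin_Bieber",              # dimension
    "country": "US",                      # dimension
    "added": 57,                          # metric
    "deleted": 13,                        # metric
}

# Druid rolls such rows up at ingestion time; here we just show the shape.
print(sorted(row.keys()))
```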

Does Druid only accept data with a timestamp?

Yes. Data that is ingested into Druid must have a timestamp.

Isn’t ZooKeeper a single point of failure?

No, ZooKeeper can be deployed to withstand a configurable number of individual node failures. And even if ZooKeeper goes down entirely, the cluster will continue to operate.

Losing ZooKeeper does, however, mean that the cluster cannot add new data segments, nor can it effectively react to the loss of one of its nodes. So, while it is safe to lose access to ZooKeeper, the cluster is definitely in a degraded state.

Isn’t the Coordinator a single point of failure?

No, the “Coordinator” node is merely for coordination; it is not involved in the query path. Losing all Coordinators means that new segments will not be loaded by the Historical nodes, i.e. no new data will enter the cluster. Segment balancing decisions will also stop being made, so the cluster loses the ability to effectively scale up and down. Just like with ZooKeeper, losing the Coordinators puts the cluster in a degraded state, but it will keep serving queries just fine.

Also, multiple Coordinator nodes can be deployed. The extra Coordinators act as “hot spares” that take over if the active Coordinator is lost.

Do all queries go through the coordinator?

Queries never touch the coordinator. Ever.

Can I query Druid even after the coordinator dies?

Yes, the coordinator has no impact on queries.
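
Queries go to a Broker node instead. As a hedged sketch, a native Druid query is a JSON document POSTed to the Broker's query endpoint; the datasource name, interval, and metric below are illustrative assumptions:

```python
import json

# A native Druid timeseries query. In a live cluster this JSON would be
# POSTed to a Broker (e.g. http://<broker-host>:8082/druid/v2) -- never
# to the Coordinator, which plays no part in the query path.
query = {
    "queryType": "timeseries",
    "dataSource": "wikipedia",                     # illustrative datasource
    "granularity": "day",
    "aggregations": [
        {"type": "longSum", "name": "added", "fieldName": "added"}
    ],
    "intervals": ["2014-03-01/2014-03-08"],        # illustrative interval
}

print(json.dumps(query, indent=2))
```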

How do I update data?

Updating data means re-generating the segments for the time period you need to update, which can be done by reindexing the data for that period. Once the indexing job finishes, the Druid cluster swaps the new segments in and stops serving the old ones.
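
The swap works because segments carry a version per time interval, and the newest version for an interval shadows older ones. A toy sketch of that shadowing rule (not Druid's actual code; the intervals and version labels are illustrative):

```python
# Toy sketch (not Druid's implementation): for each time interval, the
# segment with the highest version "shadows" older versions, which is what
# lets reindexed segments replace old ones atomically.
segments = [
    {"interval": "2014-03-01/2014-03-02", "version": "v1"},
    {"interval": "2014-03-01/2014-03-02", "version": "v2"},  # reindexed data
    {"interval": "2014-03-02/2014-03-03", "version": "v1"},
]

served = {}
for seg in segments:
    current = served.get(seg["interval"])
    if current is None or seg["version"] > current["version"]:
        served[seg["interval"]] = seg  # newer version wins for this interval

print({iv: s["version"] for iv, s in served.items()})
# → {'2014-03-01/2014-03-02': 'v2', '2014-03-02/2014-03-03': 'v1'}
```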

What is the Indexing Service, do I need that?

The Indexing Service is a job scheduling service with tasks that operate on segments. Long-term, we intend for indexing activities to be fronted by this service.

It is not a requirement yet, but will become one as time goes on.

More information can be found in the docs:

Indexing Service

How does Druid compare to Elasticsearch?

Druid vs Elasticsearch

How does Druid compare to Key/Value Stores (HBase/Cassandra)?

Druid vs Key/Value Stores

How does Druid compare to Kudu?

Druid vs Kudu

How does Druid compare to Redshift?

Druid vs Redshift

How does Druid compare to Spark?

Druid vs Spark

How does Druid compare to SQL-on-Hadoop (Impala/Drill/Spark SQL/Presto)?

Druid vs SQL-on-Hadoop