Druid (open-source data store)

Druid is a column-oriented, open-source, distributed data store written in Java. Druid is designed to quickly ingest massive quantities of event data, and provide low-latency queries on top of the data. The name Druid comes from the shapeshifting Druid class in many role-playing games, to reflect the fact that the architecture of the system can shift to solve different types of data problems.

Druid is commonly used in business intelligence/OLAP applications to analyze high volumes of real-time and historical data. It is used in production by technology companies such as Alibaba, Airbnb, Cisco, eBay, Netflix, PayPal, Yahoo, and the Wikimedia Foundation.
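
In such applications, analysts typically reach Druid through its HTTP query APIs. The sketch below is a minimal illustration of a slice-and-dice style aggregation issued over Druid's SQL endpoint; the broker address (localhost:8082), the "pageviews" datasource, and the "channel" column are assumptions for the example, not part of any real deployment.

    # Minimal sketch: post a Druid SQL aggregation to a broker's /druid/v2/sql endpoint.
    # The broker host, datasource, and column names below are hypothetical.
    import requests

    query = """
    SELECT channel, COUNT(*) AS views
    FROM pageviews
    WHERE __time >= CURRENT_TIMESTAMP - INTERVAL '1' DAY
    GROUP BY channel
    ORDER BY views DESC
    LIMIT 10
    """

    response = requests.post(
        "http://localhost:8082/druid/v2/sql",
        json={"query": query},
        timeout=30,
    )
    response.raise_for_status()

    # The default result format is a JSON array of row objects.
    for row in response.json():
        print(row["channel"], row["views"])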


History

Druid was started in 2011 to power the analytics product of a company named Metamarkets. The project was open-sourced under the GPL license in October 2012, and moved to an Apache License in February 2015.

Over time, a number of organizations and companies have integrated Druid into their backend technology, and committers have been added from numerous different organizations.

In October 2015, the commercial company Imply launched to provide an enterprise product built around Druid.


Architecture

Fully deployed, Druid runs as a cluster of specialized processes (called nodes in Druid) to support a fault-tolerant architecture where data is stored redundantly, and there is no single point of failure. The cluster includes external dependencies for coordination (Apache ZooKeeper), metadata storage (e.g. MySQL, PostgreSQL, or Derby), and a deep storage facility (e.g. HDFS or Amazon S3) for permanent data backup.
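
These external dependencies are declared in Druid's common runtime properties. The snippet below is a minimal configuration sketch, assuming PostgreSQL for metadata storage and Amazon S3 for deep storage; the host names, credentials, and bucket names are placeholders, not recommendations.

    # Extensions needed for the choices made in this sketch
    druid.extensions.loadList=["postgresql-metadata-storage", "druid-s3-extensions"]

    # ZooKeeper ensemble used for coordination (placeholder hosts)
    druid.zk.service.host=zk1.example.com:2181,zk2.example.com:2181

    # Metadata storage (PostgreSQL here; MySQL or Derby are also supported)
    druid.metadata.storage.type=postgresql
    druid.metadata.storage.connector.connectURI=jdbc:postgresql://db.example.com:5432/druid
    druid.metadata.storage.connector.user=druid
    druid.metadata.storage.connector.password=change-me

    # Deep storage for permanent segment backup (S3 here; HDFS is configured similarly)
    druid.storage.type=s3
    druid.storage.bucket=example-druid-segments
    druid.storage.baseKey=segments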

Query management

Client queries first hit broker nodes, which forward them to the appropriate data nodes (either historical or real-time). Since Druid segments may be partitioned, an incoming query can require data from multiple segments and partitions (or shards) stored on different nodes in the cluster. Brokers are able to learn which nodes have the required data, and also merge partial results before returning the aggregated result.
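
The broker's role can be pictured as a scatter-gather step. The sketch below is not Druid's implementation, only a small Python illustration of the pattern described above: partial aggregates are fetched from the nodes holding the relevant segments and merged before the combined result is returned. The node list and fetch function are hypothetical.

    # Illustrative scatter-gather merge, loosely modeling what a broker does.
    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    def fetch_partial(node, query):
        """Stand-in for querying one historical/real-time node for partial counts."""
        # In a real cluster this would be an HTTP call to the data node.
        return node["partial_results"]

    def scatter_gather(nodes, query):
        merged = defaultdict(int)
        # Scatter: send the query to every node holding a relevant segment/shard.
        with ThreadPoolExecutor() as pool:
            partials = list(pool.map(lambda n: fetch_partial(n, query), nodes))
        # Gather: merge the per-segment partial aggregates into one result.
        for partial in partials:
            for key, count in partial.items():
                merged[key] += count
        return dict(merged)

    nodes = [
        {"name": "historical-1", "partial_results": {"en": 120, "de": 30}},
        {"name": "realtime-1", "partial_results": {"en": 15, "fr": 7}},
    ]
    print(scatter_gather(nodes, query={"groupBy": "channel"}))
    # -> {'en': 135, 'de': 30, 'fr': 7}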

Cluster management

Operations relating to data management in historical nodes are overseen by coordinator nodes. Apache ZooKeeper is used to register all nodes, manage certain aspects of internode communications, and provide for leader elections.
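
To make the ZooKeeper roles concrete, the sketch below uses the Python kazoo client to show the two primitives mentioned above: registering a node with an ephemeral znode (which disappears automatically if the process dies) and running a leader election among coordinator candidates. Druid itself does the equivalent from Java, and the znode paths and identifiers here are illustrative assumptions, not Druid's actual layout.

    # Conceptual sketch of ZooKeeper-based registration and leader election.
    from kazoo.client import KazooClient

    zk = KazooClient(hosts="zk1.example.com:2181")
    zk.start()

    # Register this node: the ephemeral znode vanishes if the process dies,
    # so the rest of the cluster notices the failure.
    zk.create(
        "/example/announcements/historical/host1:8083",
        b"{}",
        ephemeral=True,
        makepath=True,
    )

    def act_as_leader():
        # Only the elected leader runs this, e.g. a coordinator assigning
        # segments to historical nodes.
        print("elected leader, managing segment assignments")

    # Leader election: all candidates contend, exactly one leads at a time.
    election = zk.Election("/example/coordinator/leader", identifier="coordinator-1")
    election.run(act_as_leader)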


Features

  • Low latency (streaming) data ingestion
  • Arbitrary slice and dice data exploration
  • Sub-second analytic queries
  • Approximate and exact computations (see the query sketch after this list)
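
The last feature refers to Druid's support for both approximate algorithms (such as sketch-based distinct counts) and exact equivalents. A minimal Druid SQL illustration, reusing the hypothetical broker address, "pageviews" datasource, and a "user_id" column from the earlier sketch:

    # Approximate vs. exact distinct counts over a hypothetical "user_id" column.
    import requests

    SQL_URL = "http://localhost:8082/druid/v2/sql"  # broker address is an assumption

    # Approximate: backed by a sketch, fast and memory-bounded.
    approx = {"query": "SELECT APPROX_COUNT_DISTINCT(user_id) AS users FROM pageviews"}

    # Exact: precise but heavier; the context flag asks Druid not to approximate
    # COUNT(DISTINCT ...).
    exact = {
        "query": "SELECT COUNT(DISTINCT user_id) AS users FROM pageviews",
        "context": {"useApproximateCountDistinct": False},
    }

    for body in (approx, exact):
        print(requests.post(SQL_URL, json=body, timeout=30).json())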

External links

  • Official website
