Cassandra vs MongoDB vs CouchDB vs Redis vs Riak vs HBase comparison


Here is a comparison of Cassandra, MongoDB, CouchDB, Redis, Riak, Membase, Neo4j and HBase:

 

MongoDB

  • Written in: C++
  • Main point: Retains some friendly properties of SQL. (Query, index)
  • License: AGPL (Drivers: Apache)
  • Protocol: Custom, binary (BSON)
  • Master/slave replication (auto failover with replica sets)
  • Sharding built-in
  • Queries are javascript expressions
  • Run arbitrary javascript functions server-side
  • Better update-in-place than CouchDB
  • Uses memory mapped files for data storage
  • Performance over features
  • Journaling (with --journal) is best turned on
  • On 32-bit systems, limited to ~2.5 GB of data
  • An empty database takes up 192 MB
  • GridFS to store big data + metadata (not actually an FS)
  • Has geospatial indexing

Best used: If you need dynamic queries. If you prefer to define indexes, not map/reduce functions. If you need good performance on a big DB. If you wanted CouchDB, but your data changes too much, filling up disks.

For example: For most things that you would do with MySQL or PostgreSQL, but where having predefined columns really holds you back.
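
As a rough illustration of the "dynamic queries plus indexes" style, here is a minimal pymongo sketch; the database, collection and field names are made up, and it uses the same Python 2-era pymongo Connection API as the benchmark snippets later in this post.

# Hypothetical example: ad-hoc queries and an index, no predefined schema.
import pymongo

con = pymongo.Connection()                 # localhost:27017 by default
db = con.example_db

# Documents can have any shape
db.articles.insert({'title': 'NoSQL overview', 'tags': ['nosql', 'db'], 'views': 10})

# Dynamic query on arbitrary fields, no map/reduce needed
for doc in db.articles.find({'tags': 'nosql', 'views': {'$gt': 5}}):
    print(doc['title'])

# Declare an index instead of writing a view
db.articles.ensure_index('tags')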

Riak (V1.0)

  • Written in: Erlang & C, some Javascript
  • Main point: Fault tolerance
  • License: Apache
  • Protocol: HTTP/REST or custom binary
  • Tunable trade-offs for distribution and replication (N, R, W)
  • Pre- and post-commit hooks in JavaScript or Erlang, for validation and security.
  • Map/reduce in JavaScript or Erlang
  • Links & link walking: use it as a graph database
  • Secondary indices: but you can only query one at a time
  • Large object support (Luwak)
  • Comes in “open source” and “enterprise” editions
  • Full-text search, indexing, querying with Riak Search server (beta)
  • In the process of migrating the storage backend from “Bitcask” to Google’s “LevelDB”
  • Masterless multi-site replication and SNMP monitoring are commercially licensed

Best used: If you want something Cassandra-like (Dynamo-like), but you don't want to deal with its bloat and complexity. If you need very good single-site scalability, availability and fault-tolerance, but you're ready to pay for multi-site replication.

For example: Point-of-sale data collection. Factory control systems. Places where even seconds of downtime hurt. Could also be used as an easily updatable web server.
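
To make the N/R/W tuning concrete, here is a hedged sketch of Riak's HTTP interface using the Python requests library; the bucket name, key and quorum values are invented for illustration (Riak's HTTP port defaults to 8098).

# Hypothetical example: store and fetch an object with explicit write/read quorums.
import requests

base = 'http://localhost:8098/riak/sales'

# Write, waiting for 2 replicas to acknowledge (w=2)
requests.put(base + '/receipt-42',
             data='{"total": 19.99}',
             headers={'Content-Type': 'application/json'},
             params={'w': 2})

# Read, requiring 2 replicas to answer (r=2)
resp = requests.get(base + '/receipt-42', params={'r': 2})
print(resp.content)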

CouchDB (V1.1.1)

  • Written in: Erlang
  • Main point: DB consistency, ease of use
  • License: Apache
  • Protocol: HTTP/REST
  • Bi-directional (!) replication,
  • continuous or ad-hoc,
  • with conflict detection,
  • thus, master-master replication. (!)
  • MVCC – write operations do not block reads
  • Previous versions of documents are available
  • Crash-only (reliable) design
  • Needs compacting from time to time
  • Views: embedded map/reduce
  • Formatting views: lists & shows
  • Server-side document validation possible
  • Authentication possible
  • Real-time updates via _changes (!)
  • Attachment handling
  • thus, CouchApps (standalone js apps)
  • jQuery library included

Best used: For accumulating, occasionally changing data, on which pre-defined queries are to be run. Places where versioning is important.

For example: CRM, CMS systems. Master-master replication is an especially interesting feature, allowing easy multi-site deployments.
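
A small sketch of the HTTP/REST workflow: create a database, store a JSON document, define a map view and query it. The database, document and view names are made up, and this assumes a local CouchDB on the default port 5984.

# Hypothetical example: documents and a predefined map view over plain HTTP.
import json, requests

base = 'http://localhost:5984/crm'
requests.put(base)                                   # create the database

# Store a document (plain JSON over HTTP)
requests.post(base, data=json.dumps({'type': 'contact', 'name': 'Alice'}),
              headers={'Content-Type': 'application/json'})

# Views are map (and optional reduce) functions written in JavaScript
design = {'views': {'by_name': {'map':
    "function(doc) { if (doc.type == 'contact') emit(doc.name, doc); }"}}}
requests.put(base + '/_design/contacts', data=json.dumps(design),
             headers={'Content-Type': 'application/json'})

# Query the predefined view
print(requests.get(base + '/_design/contacts/_view/by_name').content)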

Redis (V2.4)

  • Written in: C
  • Main point: Blazing fast
  • License: BSD
  • Protocol: Telnet-like
  • Disk-backed in-memory database,
  • Currently without disk-swap (VM and Diskstore were abandoned)
  • Master-slave replication
  • Simple values or hash tables by keys,
  • but complex operations like ZREVRANGEBYSCORE.
  • INCR & co (good for rate limiting or statistics)
  • Has sets (also union/diff/inter)
  • Has lists (also a queue; blocking pop)
  • Has hashes (objects of multiple fields)
  • Sorted sets (high score table, good for range queries)
  • Redis has transactions (!)
  • Values can be set to expire (as in a cache)
  • Pub/Sub lets one implement messaging (!)

Best used: For rapidly changing data with a foreseeable database size (should fit mostly in memory).

For example: Stock prices. Analytics. Real-time data collection. Real-time communication.
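
A few of the features above in one short redis-py sketch; the key names and the 100-requests-per-minute limit are invented for illustration.

# Hypothetical example: INCR/EXPIRE rate limiting, a list as a queue, and Pub/Sub.
import redis

r = redis.Redis(host='localhost', port=6379, db=0)

# INCR + EXPIRE as a crude per-minute rate limiter
hits = r.incr('ratelimit:user42')
if hits == 1:
    r.expire('ratelimit:user42', 60)    # counter disappears after 60 seconds
allowed = hits <= 100

# A list used as a work queue (a worker would BLPOP on the same key)
r.rpush('jobs', 'resize:img_1001.png')

# Pub/Sub for real-time messaging
r.publish('ticker', 'AAPL 389.70')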

HBase (V0.92.0)

  • Written in: Java
  • Main point: Billions of rows X millions of columns
  • License: Apache
  • Protocol: HTTP/REST (also Thrift)
  • Modeled after Google’s BigTable
  • Uses Hadoop’s HDFS as storage
  • Map/reduce with Hadoop
  • Query predicate push down via server side scan and get filters
  • Optimizations for real time queries
  • A high performance Thrift gateway
  • HTTP supports XML, Protobuf, and binary
  • Cascading, Hive, and Pig source and sink modules
  • JRuby-based (JIRB) shell
  • Rolling restart for configuration changes and minor upgrades
  • Random access performance is like MySQL
  • A cluster consists of several different types of nodes

Best used: Hadoop is probably still the best way to run Map/Reduce jobs on huge datasets. Best if you use the Hadoop/HDFS stack already.

For example: Analysing log data.
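
From Python, one way to reach HBase is through the Thrift gateway with the third-party happybase client; this is only a sketch, and it assumes a table named access_logs with a 'cf' column family already exists.

# Hypothetical example: put a cell and scan a row-key range via the Thrift gateway.
import happybase

connection = happybase.Connection('localhost')   # Thrift gateway, port 9090 by default
table = connection.table('access_logs')

# Row keys are commonly designed for range scans, e.g. "<host>|<timestamp>"
table.put('web01|20120101120000', {'cf:status': '200', 'cf:bytes': '5120'})

# Scan one host's logs for one day
for key, data in table.scan(row_start='web01|20120101', row_stop='web01|20120102'):
    print(key, data)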

Neo4j (V1.5M02)

  • Written in: Java
  • Main point: Graph database – connected data
  • License: GPL, some features AGPL/commercial
  • Protocol: HTTP/REST (or embedding in Java)
  • Standalone, or embeddable into Java applications
  • Full ACID conformity (including durable data)
  • Both nodes and relationships can have metadata
  • Integrated pattern-matching-based query language (“Cypher”)
  • Also the “Gremlin” graph traversal language can be used
  • Indexing of nodes and relationships
  • Nice self-contained web admin
  • Advanced path-finding with multiple algorithms
  • Indexing of keys and relationships
  • Optimized for reads
  • Has transactions (in the Java API)
  • Scriptable in Groovy
  • Online backup, advanced monitoring and High Availability are AGPL/commercial licensed

Best used: For graph-style, rich or complex, interconnected data. Neo4j is quite different from the others in this sense.

For example: Social relations, public transport links, road maps, network topologies.
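
To give a flavour of Cypher, here is a sketch that posts a friends-of-friends query to Neo4j's REST interface from Python; the /db/data/cypher endpoint and the "people" index are assumptions, and both the endpoint path and the Cypher syntax vary between Neo4j versions.

# Hypothetical example: a Cypher friends-of-friends query over HTTP/REST.
import json, requests

query = """
START alice=node:people(name = 'Alice')
MATCH alice-[:KNOWS]->friend-[:KNOWS]->fof
RETURN fof.name
"""
resp = requests.post('http://localhost:7474/db/data/cypher',
                     data=json.dumps({'query': query, 'params': {}}),
                     headers={'Content-Type': 'application/json'})
print(resp.content)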

Cassandra

  • Written in: Java
  • Main point: Best of BigTable and Dynamo
  • License: Apache
  • Protocol: Custom, binary (Thrift)
  • Tunable trade-offs for distribution and replication (N, R, W)
  • Querying by column, range of keys
  • BigTable-like features: columns, column families
  • Has secondary indices
  • Writes are much faster than reads (!)
  • Map/reduce possible with Apache Hadoop
  • All nodes are similar, as opposed to Hadoop/HBase

Best used: When you write more than you read (logging). If every component of the system must be in Java. (“No one gets fired for choosing Apache’s stuff.”)

For example: Banking and the financial industry (though not necessarily for the financial transactions themselves; these industries are much bigger than that). Writes are faster than reads, so one natural niche is real-time data analysis.
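
A short sketch of the write-heavy pattern from Python, using the pycassa Thrift client; the keyspace, column family and row key names are made up and assumed to already exist.

# Hypothetical example: wide rows of time-stamped columns, written fast and sliced back.
import time
import pycassa

pool = pycassa.ConnectionPool('Metrics', ['localhost:9160'])
events = pycassa.ColumnFamily(pool, 'Events')

# Writes are cheap: one row per sensor, one column per timestamp
events.insert('sensor-17', {str(int(time.time())): '42.1'})

# Read back a slice of columns from that row
print(events.get('sensor-17', column_count=10))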

Membase

  • Written in: Erlang & C
  • Main point: Memcache compatible, but with persistence and clustering
  • License: Apache 2.0
  • Protocol: memcached plus extensions
  • Very fast (200k+ ops/sec) access of data by key
  • Persistence to disk
  • All nodes are identical (master-master replication)
  • Provides memcached-style in-memory caching buckets, too
  • Write de-duplication to reduce IO
  • Very nice cluster-management web GUI
  • Software upgrades without taking the DB offline
  • Connection proxy for connection pooling and multiplexing (Moxi)

Best used: Any application where low-latency data access, high concurrency support and high availability are requirements.

For example: Low-latency use-cases like ad targeting or highly-concurrent web apps like online gaming (e.g. Zynga).
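
Because the protocol is memcached-compatible, any memcached client works; here is a quick sketch with the python-memcached library (the key names are illustrative, and 11211 is the usual memcached-protocol port).

# Hypothetical example: plain memcached API calls, but the data is persisted and clustered.
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

# Low-latency get/set by key
mc.set('session:abc123', {'user': 'alice', 'level': 12})
print(mc.get('session:abc123'))

# Atomic counters (e.g. per-player scores)
mc.set('score:alice', 0)
mc.incr('score:alice', 50)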

 


Redis and MongoDB insertion performance analysis


Below are the three benchmark snippets: Redis RPUSH onto a list, MongoDB inserting one document per value, and MongoDB $push-ing each value into a single document's array.

# Benchmark 1: Redis -- RPUSH each value onto a list
import redis, time
MAX_NUMS = 1000

r = redis.Redis(host='localhost', port=6379, db=0)
r.delete('list')    # start from an empty list

nums = range(0, MAX_NUMS)
clock_start = time.clock()
time_start = time.time()
for i in nums:
    r.rpush('list', i)
time_end = time.time()
clock_end = time.clock()

print 'TOTAL CLOCK', clock_end-clock_start
print 'TOTAL TIME', time_end-time_start

# Benchmark 2: MongoDB -- insert one small document per value
import pymongo, time
MAX_NUMS = 1000

con = pymongo.Connection()
db = con.test_db
db.testcol.remove({})
db.testlist.remove({})

nums = range(0, MAX_NUMS)
clock_start = time.clock()
time_start = time.time()
for i in nums:
    db.testlist.insert({'v':i})
time_end = time.time()
clock_end = time.clock()

print 'TOTAL CLOCK', clock_end-clock_start
print 'TOTAL TIME', time_end-time_start

# Benchmark 3: MongoDB -- $push each value into an array on a single document
import pymongo, time
MAX_NUMS = 1000

con = pymongo.Connection()
db = con.test_db
db.testcol.remove({})
db.testlist.remove({})
oid = db.testcol.insert({'name':'list'})

nums = range(0, MAX_NUMS)
clock_start = time.clock()
time_start = time.time()
for i in nums:
    db.testcol.update({'_id':oid}, {'$push':{'values':i}})
time_end = time.time()
clock_end = time.clock()

print 'TOTAL CLOCK', clock_end-clock_start
print 'TOTAL TIME', time_end-time_start

How fast is Redis?


Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]

-h <hostname>      Server hostname (default 127.0.0.1)
-p <port>          Server port (default 6379)
-s <socket>        Server socket (overrides host and port)
-c <clients>       Number of parallel connections (default 50)
-n <requests>      Total number of requests (default 10000)
-d <size>          Data size of SET/GET value in bytes (default 2)
-k <boolean>       1=keep alive 0=reconnect (default 1)
-r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
  Using this option the benchmark will get/set keys
  in the form mykey_rand:000000012456 instead of constant
  keys, the <keyspacelen> argument determines the max
  number of values for the random number. For instance
  if set to 10 only rand:000000000000 - rand:000000000009
  range will be allowed.
-P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
-q                 Quiet. Just show query/sec values
--csv              Output in CSV format
-l                 Loop. Run the tests forever
-t <tests>         Only run the comma separated list of tests. The test
                    names are the same as the ones produced as output.
-I                 Idle mode. Just open N idle connections and wait.

[Charts omitted: data size impact, NUMA placement, number of connections]

Latency percentiles

$ redis-benchmark -n 100000

====== SET ======
  100007 requests completed in 0.88 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

58.50% <= 0 milliseconds
99.17% <= 1 milliseconds
99.58% <= 2 milliseconds
99.85% <= 3 milliseconds
99.90% <= 6 milliseconds
100.00% <= 9 milliseconds
114293.71 requests per second

====== GET ======
  100000 requests completed in 1.23 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

43.12% <= 0 milliseconds
96.82% <= 1 milliseconds
98.62% <= 2 milliseconds
100.00% <= 3 milliseconds
81234.77 requests per second

====== INCR ======
  100018 requests completed in 1.46 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

32.32% <= 0 milliseconds
96.67% <= 1 milliseconds
99.14% <= 2 milliseconds
99.83% <= 3 milliseconds
99.88% <= 4 milliseconds
99.89% <= 5 milliseconds
99.96% <= 9 milliseconds
100.00% <= 18 milliseconds
68458.59 requests per second

====== LPUSH ======
  100004 requests completed in 1.14 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

62.27% <= 0 milliseconds
99.74% <= 1 milliseconds
99.85% <= 2 milliseconds
99.86% <= 3 milliseconds
99.89% <= 5 milliseconds
99.93% <= 7 milliseconds
99.96% <= 9 milliseconds
100.00% <= 22 milliseconds
100.00% <= 208 milliseconds
88109.25 requests per second

====== LPOP ======
  100001 requests completed in 1.39 seconds
  50 parallel clients
  3 bytes payload
  keep alive: 1

54.83% <= 0 milliseconds
97.34% <= 1 milliseconds
99.95% <= 2 milliseconds
99.96% <= 3 milliseconds
99.96% <= 4 milliseconds
100.00% <= 9 milliseconds
100.00% <= 208 milliseconds
71994.96 requests per second

Notes: changing the payload from 256 to 1024 or 4096 bytes does not change the numbers significantly (but reply packets are glued together up to 1024 bytes so GETs may be slower with big payloads).

Example of benchmark results with optimized high-end server hardware

$ numactl -C 6 ./redis-benchmark -q -n 100000 -s /tmp/redis.sock -d 256
PING (inline): 200803.22 requests per second
PING: 200803.22 requests per second
MSET (10 keys): 78064.01 requests per second
SET: 198412.69 requests per second
GET: 198019.80 requests per second
INCR: 200400.80 requests per second
LPUSH: 200000.00 requests per second
LPOP: 198019.80 requests per second
SADD: 203665.98 requests per second
SPOP: 200803.22 requests per second
LPUSH (again, in order to bench LRANGE): 200000.00 requests per second
LRANGE (first 100 elements): 42123.00 requests per second
LRANGE (first 300 elements): 15015.02 requests per second
LRANGE (first 450 elements): 10159.50 requests per second
LRANGE (first 600 elements): 7548.31 requests per second

Using the TCP loopback:

$ numactl -C 6 ./redis-benchmark -q -n 100000 -d 256
PING (inline): 145137.88 requests per second
PING: 144717.80 requests per second
MSET (10 keys): 65487.89 requests per second
SET: 142653.36 requests per second
GET: 142450.14 requests per second
INCR: 143061.52 requests per second
LPUSH: 144092.22 requests per second
LPOP: 142247.52 requests per second
SADD: 144717.80 requests per second
SPOP: 143678.17 requests per second
LPUSH (again, in order to bench LRANGE): 143061.52 requests per second
LRANGE (first 100 elements): 29577.05 requests per second
LRANGE (first 300 elements): 10431.88 requests per second
LRANGE (first 450 elements): 7010.66 requests per second
LRANGE (first 600 elements): 5296.61 requests per second

JavaScript document.clear()

A few ways to clear a document's content from JavaScript (document.clear() itself is deprecated and effectively a no-op in modern browsers).

Clear the body but keep the head:

document.documentElement.replaceChild(document.createElement("body"), document.body);

Clear everything:

document.replaceChild(document.createElement("html"), document.documentElement);

The same thing works with XML documents as well:

document.replaceChild(document.createElement(document.documentElement.tagName), document.documentElement);
Or simply empty out the body's contents:

document.getElementsByTagName('BODY')[0].innerHTML = '';

Northern Goshawk in flight


playing cards …
