PostgreSQL goes after MongoDB; benchmarks show PostgreSQL in the lead

PostgreSQL, the much-loved open source relational database, is ramping up its NoSQL game with a new developer kit that gives access to NoSQL features that beat MongoDB in benchmarks.
The PostgreSQL Project, which EnterpriseDB supports, added NoSQL-style JSON processing features back in 2012. Now, the company is encouraging further work around that feature set by providing a developer kit to make it easier for programmers to leverage PostgreSQL's JSON functions and build applications around them. ... The PGXDK (Postgres Extended Datatype Developer Kit) is designed to allow developers "to use Postgres for the kinds of applications that until recently required a specialized NoSQL-only solution," as EnterpriseDB describes it. A sample application is also included to give developers a leg up on working with the product. The whole package will be made available through AWS as a machine image (PostgreSQL has long been a staple Amazon offering).
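For readers who haven't touched Postgres's JSON support, here is a minimal sketch (not part of the kit or the article) of the kind of workflow it enables: storing schemaless documents in a JSONB column and querying inside them with PostgreSQL's JSON operators via psycopg2. The database name, table, and documents are hypothetical.

```python
# Minimal sketch: document-style storage in PostgreSQL using a JSONB column.
import json
import psycopg2

conn = psycopg2.connect("dbname=demo")  # hypothetical database
cur = conn.cursor()

# A JSONB column holds arbitrary documents, much like a MongoDB collection.
cur.execute("""
    CREATE TABLE IF NOT EXISTS events (
        id  serial PRIMARY KEY,
        doc jsonb NOT NULL
    )
""")

# Insert a document as JSON text; PostgreSQL parses and stores it as jsonb.
cur.execute(
    "INSERT INTO events (doc) VALUES (%s)",
    (json.dumps({"user": "alice", "action": "login", "ok": True}),),
)

# Query inside the document: ->> extracts a field as text.
cur.execute(
    "SELECT doc ->> 'user' FROM events WHERE doc ->> 'action' = %s",
    ("login",),
)
print(cur.fetchall())

conn.commit()
cur.close()
conn.close()
```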
Vibhor Kumar has published a set of benchmarks here that show PostgreSQL eating MongoDB's lunch on disk space usage, bulk loading, and INSERT performance.
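For a sense of what a bulk-loading test exercises, here is a rough sketch of batched JSONB inserts into the hypothetical `events` table from the example above. This is illustrative only, not the benchmark code; batch size and document shape are made up.

```python
# Rough sketch: batched JSONB inserts, the pattern a bulk-load benchmark stresses.
import json
import psycopg2
from psycopg2.extras import execute_values

conn = psycopg2.connect("dbname=demo")  # hypothetical database
cur = conn.cursor()

# Generate a batch of small JSON documents.
rows = [(json.dumps({"seq": i, "payload": "x" * 32}),) for i in range(100_000)]

# execute_values sends the rows as large multi-row INSERT statements,
# which is far faster than issuing one INSERT per document.
execute_values(cur, "INSERT INTO events (doc) VALUES %s", rows, page_size=1000)

conn.commit()
cur.close()
conn.close()
```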

Re: Quick, Dirty (Score: 2, Informative)

by evilviper@pipedot.org on 2014-09-26 01:24 (#2SYC)

Well yeah, except their primary use case (literally, it's the first use case listed) is BIG DATA. It's fair to say that if you're routinely pushing and parsing terabytes of structured data, you probably can and should take a day or two to get the database optimized, no?
No. You're simply stuck in a mindset of high-value databases. Try low-value data, on a large scale, instead... Turn up your syslog logging to the maximum amount of debug, then expand that out to hundreds and hundreds of heavily loaded servers, then log it all to a central system desperately trying to write it to a database for eventual aggregation and reporting... Or consider something like an IDS or other monitoring on high-speed data networks, trying to keep track of data usage, in detail, on those gigabit-speed lines around the clock.

Or just consider the cost of an extra server (with SSDs) versus the cost of hours of a DBA's time... For non-critical data in general, you're going to expand the cluster, rather than spend time and effort to tune things.