Comment 2SYC Re: Quick, Dirty

Story

PostgreSQL goes after MongoDB; benchmarks show PostgreSQL in the lead

Preview

Quick, Dirty (Score: 0)

by Anonymous Coward on 2014-09-25 22:03 (#2SY1)

Very quick and dirty test. He says right off the bat that he's comparing default, out-of-the-box performance. There are so many possible tunings and usage scenarios that the only fair comparison is one made after tuning by experts in each product. While interesting, doesn't this test end up meaning very little?

Re: Quick, Dirty (Score: 2, Interesting)

by evilviper@pipedot.org on 2014-09-25 22:31 (#2SY4)

Performance out-of-the-box shouldn't be underestimated. Not every database is a huge, mission-critical system deserving hours of your DBA's time to tune. In fact, the overwhelming majority of database uses are quite the opposite: mundane back-end tasks for storing and collecting data.

If tuned performance were what mattered to people, MySQL would never have caught on as the M in LAMP... Instead, MySQL got popular because any idiot could install it and it would seem to work at decent speeds right away. The rude awakening only came much later...
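For context, a minimal sketch of what "default out of the box" means in PostgreSQL terms: reading back the live settings with the standard SHOW command via psycopg2. The DSN "dbname=test" is an assumption, and the tuning notes in the comments are the conventional first-pass advice, not anything from the benchmark itself.

    # A minimal sketch, not from the article: read back the settings an
    # out-of-the-box PostgreSQL instance is actually running with.
    # The DSN "dbname=test" is assumed.
    import psycopg2

    conn = psycopg2.connect("dbname=test")  # assumed local instance
    cur = conn.cursor()

    # Conventional first-pass tuning knobs: shared_buffers is often raised
    # to roughly a quarter of RAM, work_mem sized to expected sorts, and
    # effective_cache_size set to reflect the OS page cache.
    for name in ("shared_buffers", "work_mem", "effective_cache_size"):
        cur.execute("SHOW " + name)  # SHOW takes no bind parameters
        print(name, "=", cur.fetchone()[0])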

Re: Quick, Dirty (Score: 1, Interesting)

by Anonymous Coward on 2014-09-26 00:59 (#2SYB)

Well yeah, except their primary use case (literally, it's the first use case listed) is BIG DATA. It's fair to say that if you're routinely pushing and parsing terabytes of structured data, you probably can and should take a day or two to get the database optimized, no?

That's why an OOB test doesn't really seem to apply here. Mongo is supposed to be enterprise stuff. Anyhow, glad to see PostgreSQL is still in the game, and agreed it's too bad MySQL is still unavoidable.

Re: Quick, Dirty (Score: 2, Informative)

by evilviper@pipedot.org on 2014-09-26 01:24 (#2SYC)

> Well yeah, except their primary use case (literally, it's the first use case listed) is BIG DATA. It's fair to say that if you're routinely pushing and parsing terabytes of structured data, you probably can and should take a day or two to get the database optimized, no?

No. You're simply stuck in a mindset of high-value databases. Try low-value data, on a large scale, instead... Turn your syslog logging up to maximum debug verbosity, expand that out to hundreds and hundreds of heavily loaded servers, log it all to a central system, and desperately try to write it all to a database for eventual aggregation and reporting... Or consider something like an IDS or other monitoring on high-speed networks, trying to keep detailed track of data usage on those gigabit lines around the clock.

Or just consider the cost of an extra server (with SSDs) versus the cost of hours of a DBA's time... For non-critical data in general, you're going to expand the cluster, rather than spend time and effort to tune things.
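To make that scenario concrete, here is a minimal sketch of the firehose pattern described above, assuming a hypothetical syslog_raw(message text) table and a local PostgreSQL instance at "dbname=logs". Batching writes through COPY rather than per-row INSERTs is the usual way to keep up with this kind of low-value, high-volume load.

    # A minimal sketch of central log ingestion: buffer incoming syslog lines
    # and ship them to PostgreSQL in bulk with COPY. Table name and DSN are
    # assumptions, not from the article.
    import io
    import sys
    import psycopg2

    conn = psycopg2.connect("dbname=logs")  # assumed DSN
    cur = conn.cursor()

    BATCH = 10000  # rows per COPY round-trip

    def flush(buf):
        """Ship one buffered batch to the server with COPY."""
        buf.seek(0)
        cur.copy_from(buf, "syslog_raw", columns=("message",))
        conn.commit()

    buf, count = io.StringIO(), 0
    for line in sys.stdin:  # e.g. piped from a central rsyslog/syslog-ng
        # COPY text format: escape backslashes, flatten tabs in the raw message
        msg = line.rstrip("\n").replace("\\", "\\\\").replace("\t", " ")
        buf.write(msg + "\n")
        count += 1
        if count >= BATCH:
            flush(buf)
            buf, count = io.StringIO(), 0

    if count:  # final partial batch
        flush(buf)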

Moderation

Time              Reason       Points  Voter
2014-09-26 12:44  Informative  +1      zafiro17@pipedot.org

Junk Status

Not marked as junk