Thanks for the links! In the December 2015 article the 1.2 million number comes with this condition: "This workload is purely in-memory using SCHEMA_ONLY tables, which are not logged and not persisted to disk". The MySQL 1.5 million number is for a database that is on disk and logged (durable). For a durable table, the MSSQL QPS number seems closer to 250k, which is still impressive: it's within an order of magnitude of MySQL.
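For anyone curious, the SCHEMA_ONLY vs durable distinction is a per-table DURABILITY option on SQL Server's memory-optimized (In-Memory OLTP) tables. A minimal sketch of the two variants (table names are made up for illustration; the database also needs a MEMORY_OPTIMIZED_DATA filegroup set up beforehand):

    -- Non-durable: schema only, nothing logged, data lost on restart (the 1.2M QPS case)
    CREATE TABLE dbo.bench_nondurable (
        id  INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        val INT NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

    -- Durable: changes are logged and persisted to disk (the ~250k QPS case)
    CREATE TABLE dbo.bench_durable (
        id  INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        val INT NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

So the two benchmark numbers are measuring tables with genuinely different guarantees, which is why comparing the 1.2 million figure directly against MySQL's durable number is misleading.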
PostgreSQL has caught up to MySQL as well; it seems to hit 1.6 million. Not sure if that's against a single table or multiple tables (MySQL does 1.6 million when hitting multiple tables) - http://akorotkov.github.io/blog/2016/05/09/scalability-towards-millions-....
All 3 tests (MSSQL, MySQL, PostgreSQL) seem to use a 4-socket board (MySQL & PostgreSQL both 72 cores total; MSSQL doesn't say), so I'd guess that the hardware is fairly even.