
Writing a well-documented article - with references - takes some time, which I am unfortunately quite short of lately. But your concerns are valid, and I understand that a plain "x15 improved performance" statement is poor without context.

In-memory OLTP was introduced in SQL Server 2014, but had too many limitations to serve as a drop-in, headache-free replacement for existing database storage:

http://www.sqlpassion.at/archive/2015/03/11/dont-yet-recommend-memory-ol...

https://www.simple-talk.com/sql/performance/the-promise---and-the-pitfal...

In SQL Server 2016 they made it faster, more robust and - most importantly - removed those limitations.

https://blogs.technet.microsoft.com/dataplatforminsider/2015/12/10/speed...

This article claims a single server sustained a throughput of 1.2 million batch requests per second (slightly lower than what you posted for MySQL) at 4 KB per request, resulting in a total throughput of 4.8 GB per second.

But... in-memory OLTP is not just about throughput. The claim is a x30 performance improvement for transactional workloads, which is the actual intended performance gain of in-memory OLTP: making things fast - very fast - while still being ACID. Making things fast for read-only, single-table, non-relational scenarios is - sort of - "easy"; the challenge is bringing this performance to complex relational data models in transactional scenarios.
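To give a rough idea of what this looks like in practice, here is a minimal sketch of a memory-optimized table paired with a natively compiled stored procedure in SQL Server 2016 (the table and procedure names are mine, purely illustrative; it assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup):

```sql
-- Memory-optimized table: resides entirely in memory.
-- DURABILITY = SCHEMA_AND_DATA keeps it fully durable, so transactions stay ACID.
CREATE TABLE dbo.Orders
(
    OrderId  INT IDENTITY(1,1) NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Customer INT   NOT NULL,
    Amount   MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- Natively compiled procedure: compiled to machine code when created,
-- which is where much of the transactional speedup comes from.
CREATE PROCEDURE dbo.InsertOrder
    @Customer INT,
    @Amount   MONEY
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.Orders (Customer, Amount)
    VALUES (@Customer, @Amount);
END;
```

The point is that this is a normal durable table with a primary key, updated inside a real transaction - not an unlogged in-memory cache.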

I have attached to this article a white paper detailing the internals of the new in-memory OLTP in SQL Server 2016.

Unfortunately, SQL Server 2016 is still at RC3, so there is very little in-the-wild usage or benchmark data.