
When you run a SELECT COUNT(*), the speed of the results depends a lot on the structure & settings of the database. Let’s do an exploration of the Votes table in the Stack Overflow database, specifically the 2018-06 ~300GB version where the Votes table has 150,784,380 rows taking up ~5.3GB of space.

Here’s what I’m measuring for each approach (there’s a quick sketch of the harness right after this list):

- How many pages it reads (gauged with SET STATISTICS IO ON)
- How much CPU time it uses (gauged with SET STATISTICS TIME ON)
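In practice, that harness is just a couple of SET options wrapped around the query. Here’s a minimal sketch; StackOverflow2018 is my assumed name for however you’ve restored the 2018-06 copy of the database:

```sql
/* Minimal measurement harness sketch.
   StackOverflow2018 is an assumed name for the restored 2018-06 database. */
USE StackOverflow2018;
GO

SET STATISTICS IO ON;    /* reports logical reads (pages) per table */
SET STATISTICS TIME ON;  /* reports CPU time and elapsed time */

SELECT COUNT(*) FROM dbo.Votes;
GO

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
```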
Don’t obsess over small differences between the operations – I’m writing this blog post fast & furious to show you the big-picture differences, and to show you how my thought process works when comparing the different operations. In your own environment, for the tables you’re trying to count and the hardware you’re using and the version you’re on and the phase of the moon, you’re going to get different results, and that’s fine. There are also other ways to measure these methods depending on your own performance requirements: memory grants, ability to run without blocking, and even the accuracy of the results under concurrency. For the sake of these tests, I’m not going to talk about isolation levels or blocking.

I’m running these tests on SQL Server 2019 (.41) on an 8-core VM with 64GB RAM.

1: Plain ol’ COUNT(*) with only a clustered rowstore index, compatibility level 2017 & prior
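If you want to reproduce that first test yourself, a sketch like this is the general idea: run the plain count with the database parked at compatibility level 140, which corresponds to SQL Server 2017 (the database name is an assumption):

```sql
/* Sketch of test 1: plain COUNT(*) against the clustered rowstore index,
   with the database at compatibility level 140 (SQL Server 2017 behavior).
   StackOverflow2018 is an assumed database name. */
ALTER DATABASE StackOverflow2018 SET COMPATIBILITY_LEVEL = 140;
GO

SELECT COUNT(*) FROM dbo.Votes;
GO
```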
With a columnstore index in place, the execution plan has our fancypants new columnstore index scan operator, and all of the operators in the plan are in batch mode. I have to change my units of measure here: the counts come back that much faster. Let’s put that in perspective: I know some developers who try to hit system tables in order to count rows quickly, and they can’t even generate speedy results like this.
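If you want to try that yourself, a nonclustered columnstore index along these lines is one way to get there. The index name and column list below are illustrative assumptions, not the exact index from these tests:

```sql
/* Sketch: add a nonclustered columnstore index to the Votes table.
   Index name and column choices are illustrative assumptions. */
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Votes
    ON dbo.Votes (Id, PostId, VoteTypeId, CreationDate);
GO

/* The optimizer can now satisfy the count by scanning the much smaller
   columnstore index instead of the clustered rowstore index. */
SELECT COUNT(*) FROM dbo.Votes;
GO
```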
So to make SELECT COUNT(*) queries fast, here’s what to do, in descending order of preference & speed, with the best results first:

- Get on SQL Server 2017 or newer, and put a columnstore index on the table.
- Get on any version that supports batch mode on columnstore indexes, and put a columnstore index on the table – although your experiences are going to vary dramatically depending on the kind of query you have. To learn more about the specifics, read Niko’s series on columnstore indexes, specifically the posts with the word “batch” in the title.
- Get on SQL Server 2019 or newer, and put your database in compat level 150 (2019) – even with rowstore indexes, you can still cut your CPU usage dramatically thanks to batch mode on rowstore. This one’s really easy to do – it probably requires the least changes to your application and database schema – but it’s just that you won’t have the amazingly millisecond-fast responses that a columnstore index can get you. (There’s a sketch of that change right after this list.)
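For that last option, the change really is just the database’s compatibility level. A sketch, again assuming the database name:

```sql
/* Sketch of the easy option: compatibility level 150 (SQL Server 2019)
   makes batch mode on rowstore available for queries that qualify.
   StackOverflow2018 is an assumed database name. */
ALTER DATABASE StackOverflow2018 SET COMPATIBILITY_LEVEL = 150;
GO

SELECT COUNT(*) FROM dbo.Votes;
GO
```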