Thursday, September 18, 2025

MySQL automatically indexes materialized CTEs

Did you know -

When you use a CTE in MySQL and the optimizer chooses to materialize it, the optimizer may also create an index on that materialization, on the fly, if it thinks your query could benefit from one?

Let's dig a bit into this.

First, a little context in case anyone is not aware: the materialization approach to CTE processing is when MySQL executes the CTE query once and stores its results in an internal temporary table, which is then reused by the parent query. This tends to be a useful approach when the CTE is referenced multiple times.

The alternate approach is to merge the CTE into the parent query. I won't discuss the merge approach in any depth for now, beyond the rough sketch below.
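For intuition only, here's a hedged sketch of what merging means, using a hypothetical single-reference CTE (some_table is a made-up name; this is not the query below, which is a better fit for materialization):

WITH recent AS (
    SELECT group_id, x FROM some_table WHERE x > 100
)
SELECT group_id FROM recent;

-- Conceptually, the optimizer can fold ("merge") the CTE into the parent query,
-- roughly as if you had written:
SELECT group_id FROM some_table WHERE x > 100;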

For example, say you ran the following query:

WITH data_cte AS (
    /* expensive query */ 
    SELECT group_id, AVG(x) AS avg_x, MAX(x) AS max_x FROM table_100m GROUP BY group_id
)
/* "parent" part of the query; notice multiple references to data_cte
SELECT group_id, avg_x FROM data_cte WHERE max_x = 100000 -- search by max_x
UNION ALL
SELECT group_id, avg_x FROM data_cte WHERE max_x = 99999
UNION ALL
SELECT group_id, avg_x FROM data_cte WHERE max_x = 99998
;


For this query, MySQL's optimizer may be able to materialize the CTE. In addition to that, because the parent query searches by max_x three times, the optimizer may decide, "hey, I'd better create an index on max_x in the materialization."

The execution plan shows that:

1. The CTE called data_cte was indeed materialized

...and...

2. For all three lookups by max_x in the parent query, the plan shows an index lookup occurred, e.g. Index lookup on data_cte using <auto_key0> (max_x = 99998). MySQL's optimizer automatically created an index to assist in these searches!

Execution plan:

 | -> Append  (cost=10.5 rows=30) (actual time=16912..16912 rows=2115 loops=1)
    -> Stream results  (cost=3.5 rows=10) (actual time=16912..16912 rows=419 loops=1)
        -> Filter: (data_cte.max_x = 100000)  (cost=3.5 rows=10) (actual time=16912..16912 rows=419 loops=1)
            -> Index lookup on data_cte using <auto_key0> (max_x = 100000)  (cost=0.35..3.5 rows=10) (actual time=16912..16912 rows=419 loops=1)
                -> Materialize CTE data_cte if needed  (cost=0..0 rows=0) (actual time=16912..16912 rows=10000 loops=1)
                    -> Table scan on <temporary>  (actual time=16909..16909 rows=10000 loops=1)
                        -> Aggregate using temporary table  (actual time=16909..16909 rows=10000 loops=1)
                            -> Table scan on table_100m  (cost=10.2e+6 rows=99.8e+6) (actual time=2.2..8625 rows=100e+6 loops=1)
    -> Stream results  (cost=3.5 rows=10) (actual time=0.0248..0.117 rows=786 loops=1)
        -> Filter: (data_cte.max_x = 99999)  (cost=3.5 rows=10) (actual time=0.014..0.0798 rows=786 loops=1)
            -> Index lookup on data_cte using <auto_key0> (max_x = 99999)  (cost=0.35..3.5 rows=10) (actual time=0.0129..0.0546 rows=786 loops=1)
                -> Materialize CTE data_cte if needed (query plan printed elsewhere)  (cost=0..0 rows=0) (never executed)
    -> Stream results  (cost=3.5 rows=10) (actual time=0.00196..0.108 rows=910 loops=1)
        -> Filter: (data_cte.max_x = 99998)  (cost=3.5 rows=10) (actual time=0.00183..0.0774 rows=910 loops=1)
            -> Index lookup on data_cte using <auto_key0> (max_x = 99998)  (cost=0.35..3.5 rows=10) (actual time=0.00158..0.0493 rows=910 loops=1)
                -> Materialize CTE data_cte if needed (query plan printed elsewhere)  (cost=0..0 rows=0) (never executed)

Notice the three instances of Index lookup on data_cte using <auto_key0> - that's the automagic index goodness in action.

There's some meta-context here. There's a recurring debate in internet SQL discussions about which is better: CTEs or temp tables. In my view, the two apply to only partially overlapping use cases, but the idea is that you could rewrite the above query by splitting it into two statements, the first of which substitutes a temp table for the CTE, like so:

CREATE TEMPORARY TABLE data_tmp AS 
    SELECT group_id, AVG(x) AS avg_x, MAX(x) AS max_x FROM table_100m GROUP BY group_id;

Then query the temp table, like so:

SELECT group_id, avg_x FROM data_tmp WHERE max_x = 100000 -- search by max_x
UNION ALL
SELECT group_id, avg_x FROM data_tmp WHERE max_x = 99999
UNION ALL
SELECT group_id, avg_x FROM data_tmp WHERE max_x = 99998
;


Unfortunately, MySQL presents a blocker to this query: a limitation that prevents me from even testing this approach, because you can't reference a temporary table multiple times in one query.

ERROR 1137 (HY000): Can't reopen table: 'data_tmp'

But I could instead try creating a "normal" table that is only used temporarily, to serve this query, then drop it after.
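A sketch of that approach, hedged in that I'm reconstructing it from the execution plans below (the table name data_tbl matches what appears in them):

CREATE TABLE data_tbl AS
    SELECT group_id, AVG(x) AS avg_x, MAX(x) AS max_x FROM table_100m GROUP BY group_id;

/* ...run the same three-branch UNION ALL query against data_tbl... */

DROP TABLE data_tbl; -- clean up when finished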

Below is the execution plan from this approach. Observe that it did NOT use an index - instead it ran Table scan on data_tbl three times - because we didn't explicitly create an index:

| -> Append  (cost=3011 rows=2989) (actual time=0.0747..15.2 rows=2115 loops=1)
    -> Stream results  (cost=1004 rows=996) (actual time=0.0734..6.31 rows=419 loops=1)
        -> Filter: (data_tbl.max_x = 100000)  (cost=1004 rows=996) (actual time=0.0675..6.21 rows=419 loops=1)
            -> Table scan on data_tbl  (cost=1004 rows=9963) (actual time=0.0488..5.69 rows=10000 loops=1)
    -> Stream results  (cost=1004 rows=996) (actual time=0.0383..4.23 rows=786 loops=1)
        -> Filter: (data_tbl.max_x = 99999)  (cost=1004 rows=996) (actual time=0.0304..4.15 rows=786 loops=1)
            -> Table scan on data_tbl  (cost=1004 rows=9963) (actual time=0.0219..3.74 rows=10000 loops=1)
    -> Stream results  (cost=1004 rows=996) (actual time=0.025..4.56 rows=910 loops=1)
        -> Filter: (data_tbl.max_x = 99998)  (cost=1004 rows=996) (actual time=0.021..4.47 rows=910 loops=1)
            -> Table scan on data_tbl  (cost=1004 rows=9963) (actual time=0.0122..3.99 rows=10000 loops=1)


But we could explicitly create an index on max_x.
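A sketch of the index creation (my_idx is the name that shows up in the plan below; I'm assuming data_tbl was created as shown above):

CREATE INDEX my_idx ON data_tbl (max_x);

After creating the index and rerunning the query, we see an execution plan similar to the search portion of the CTE query's plan: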

| -> Append  (cost=277 rows=2115) (actual time=0.0567..3.73 rows=2115 loops=1)
    -> Stream results  (cost=63.7 rows=419) (actual time=0.056..0.928 rows=419 loops=1)
        -> Index lookup on data_tbl using my_idx (max_x = 100000), with index condition: (data_tbl.max_x = 100000)  (cost=63.7 rows=419) (actual time=0.0504..0.88 rows=419 loops=1)
    -> Stream results  (cost=100 rows=786) (actual time=0.017..1.27 rows=786 loops=1)
        -> Index lookup on data_tbl using my_idx (max_x = 99999), with index condition: (data_tbl.max_x = 99999)  (cost=100 rows=786) (actual time=0.0151..1.19 rows=786 loops=1)
    -> Stream results  (cost=113 rows=910) (actual time=0.0143..1.42 rows=910 loops=1)
        -> Index lookup on data_tbl using my_idx (max_x = 99998), with index condition: (data_tbl.max_x = 99998)  (cost=113 rows=910) (actual time=0.012..1.32 rows=910 loops=1)


When this debate arises on the r/sql subreddit, I see temp tables getting all the upvotes over CTEs, and I'm not always in agreement with the reasoning. Probably the single biggest fallacy I notice is that people discuss how CTEs vs. temp tables are processed under the hood without specifying a DBMS. But you've heard my rants on that subject about 100 times.

One of the reasons sometimes offered is that you can index a temp table, but (supposedly) can't do that with a CTE. As we've seen just now, this is not true in MySQL. In fact one could argue the CTE is more automagic and therefore preferable. 

Also consider that merging may be a better approach than materialization, and choosing between them is also part of the optimizer's automagic! If you create a temp table, you're forcing a materialization, whether that's better or not. If the MySQL optimizer is doing its job right - which it usually does - we should let it do its thing and make the right decision. To be fair, it's far from perfect.

I'd also wager that at least 80% of developers wouldn't think to make the explicit index on their temp table.

Automagic ftw, in my opinion. Unless the automagic fails, then take other measures.

Here's one example of a Reddit comment on the debate. This is just a fundamental misunderstanding. (I'd feel better about it if it didn't get piles of upvotes.)

 

A classic r/sql error: sweeping statements about SQL engine behavior that are 1) not correct (at least not always, and far from always), 2) not qualified with a specific DBMS, and 3) somehow upvoted heavily

Contradicting the Reddit commenter, and their 26 upvoters, we've already observed that the CTE materialization MySQL can perform means the CTE query is NOT executed once per reference. While it's possible they are correct about *some* DBMS out there (and perhaps in some querying scenarios but not all), every one of the several other DBMSs for which I've tested this subject is also capable of materializing CTEs.

It used to be the case in Postgres that CTEs were always materialized! That isn't the best approach 100% of the time.

Funny thing is, in MySQL, the materialized CTE literally becomes a temp table under the hood! So it almost turns into an argument of, "My temp table is better than your temp table!" You just don't have to manage it yourself using the CTE approach, and again, MySQL might choose the merge approach which may run better. Automagic ftw!

But I may be digressing a bit. I'll cut the discussion here unless anyone has any questions/comments. 


Monday, September 01, 2025

Division in MySQL vs. Postgres: SQL is SQL so it's all interchangeable am-I-right?

Did you know -

The integer division operator in MySQL is DIV? The operator we all know and love, /, produces a DECIMAL when its operands are integers (or other exact-value types), so if you want an integer result, use DIV:


mysql> CREATE TABLE T AS SELECT 4 DIV 3 AS DUMMY1, 4/3 AS DUMMY2, 4/4 AS DUMMY3;
Query OK, 1 row affected, 1 warning (0.03 sec)
Records: 1  Duplicates: 0  Warnings: 1

mysql> SELECT * FROM T; /* observe the results */
+--------+--------+--------+
| DUMMY1 | DUMMY2 | DUMMY3 |
+--------+--------+--------+
|      1 | 1.3333 | 1.0000 |
+--------+--------+--------+
1 row in set (0.01 sec)

mysql> DESC T; /* observe the data types each kind of division yielded */
+--------+--------------+------+-----+---------+-------+
| Field  | Type         | Null | Key | Default | Extra |
+--------+--------------+------+-----+---------+-------+
| DUMMY1 | int          | YES  |     | NULL    |       |
| DUMMY2 | decimal(5,4) | YES  |     | NULL    |       |
| DUMMY3 | decimal(5,4) | YES  |     | NULL    |       |
+--------+--------------+------+-----+---------+-------+
3 rows in set (0.00 sec)

How about other DBMSs? Well, in Postgres there is no DIV, but there is of course / - and the way it works is: divide integers and you get an integer; divide floats or numeric types and you get floats or numeric types.


postgres=# CREATE TABLE T AS SELECT 4/3 AS DUMMY1, 4::float/3::float AS DUMMY2, 4.0/3.0 AS DUMMY3, 4/4 AS DUMMY4;
SELECT 1

postgres=# SELECT * FROM T;
 dummy1 |       dummy2       |       dummy3       | dummy4
--------+--------------------+--------------------+--------
      1 | 1.3333333333333333 | 1.3333333333333333 |      1
(1 row)


postgres=# \d T
                      Table "public.t"
 Column |       Type       | Collation | Nullable | Default
--------+------------------+-----------+----------+---------
 dummy1 | integer          |           |          |
 dummy2 | double precision |           |          |
 dummy3 | numeric          |           |          |
 dummy4 | integer          |           |          |
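As an aside, if you want MySQL-DIV-style integer division in Postgres, here's a quick sketch of a few options (hedged; double-check the truncation vs. rounding behavior for your own use case):

SELECT 7 / 3          AS int_div,   -- integer / integer already truncates: 2
       div(7, 3)      AS div_func,  -- div() returns the integer quotient: 2
       trunc(7.9 / 3) AS truncd;    -- truncate a numeric result toward zero: 2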

 

This is something like exhibit #5001 as to why, when the topic of database interchangeability comes up - because “SQL is SQL” - I think people tend to underestimate the huge number of “gotchas” that can be encountered. 

SQL here != SQL there. 







Thursday, August 07, 2025

The LENGTH() function - or - SQL is SQL so it's all interchangeable am-I-right?

Let's test the LENGTH() vs. CHAR_LENGTH() function on some emoji characters in MySQL and Postgres: 

mysql> SELECT x FROM emoji;
+--------------+
| x |
+--------------+
| 😄👋👍 |
+--------------+
1 row in set (0.02 sec) 

mysql> SELECT LENGTH(x) AS len, CHAR_LENGTH(x) AS char_len FROM emoji;
+------+----------+
| len  | char_len |
+------+----------+
|   12 |        3 |
+------+----------+
1 row in set (0.02 sec)

---

postgres=# SELECT x FROM emoji;
x
--------
😄👋👍
(1 row) 

postgres=# SELECT LENGTH(x) AS len, CHAR_LENGTH(x) AS char_len FROM emoji;
 len | char_len
-----+----------
   3 |        3
(1 row)

 

Expanded thoughts on the subject:

LENGTH() is the most common function we tend to reach for when we want the length of a string - and typically we want the number of characters, not bytes, at least in my experience. But LENGTH() returns the number of bytes in MySQL, and the number of characters in Postgres.

How could they be so different?!? Here's the thing: LENGTH() is not part of standard SQL, if that even matters since implementations deviate from it so much.

At least, I looked into the few documents freely available, and there IS no LENGTH() function in standard SQL. Rather, there are: CHARACTER_LENGTH() (aka CHAR_LENGTH()), OCTET_LENGTH(), and BIT_LENGTH().

If you test ONLY these standard functions (or "length expressions" as standard SQL puts it), you find the results do match between MySQL and Postgres:

mysql> SELECT CHAR_LENGTH(x), CHARACTER_LENGTH(x), BIT_LENGTH(x), OCTET_LENGTH(x) FROM emoji;
+----------------+---------------------+---------------+-----------------+
| CHAR_LENGTH(x) | CHARACTER_LENGTH(x) | BIT_LENGTH(x) | OCTET_LENGTH(x) |
+----------------+---------------------+---------------+-----------------+
|              3 |                   3 |            96 |              12 |
+----------------+---------------------+---------------+-----------------+
1 row in set (0.02 sec)


and:

postgres=# SELECT CHAR_LENGTH(x), CHARACTER_LENGTH(x), BIT_LENGTH(x), OCTET_LENGTH(x) FROM emoji;
 char_length | character_length | bit_length | octet_length
-------------+------------------+------------+--------------
           3 |                3 |         96 |           12
(1 row)


More fun facts:

  • Microsoft loves using its own special Microsofty style functions, for Microsofty consistency purposes, so SQL Server has no LENGTH(), only LEN(), just like the Len() function you might have used in Excel, or in Visual Basic in 1995. (See also DateDiff() and others.)
  • Oracle has LENGTH() (number of chars in the string), and LENGTHB() (number of bytes in the string).
  • In Snowflake and Databricks/Spark SQL, LENGTH() returns number of chars.
  • So, it appears that MySQL is the odd one out.


This can be a particularly hairy gotcha if you are porting SQL queries from MySQL to another DBMS or vice versa. It may even elude testing, since most of our strings, generally speaking, tend not to include "special" characters such as Asian characters or emoji. (I've only worked for US-based companies, so I may be biased.) But they do crop up on occasion, and those occasions may potentially break things.






 

 

Saturday, July 19, 2025

Playing SQL Mythbuster - or - I really ought to lay off the SQL on social media

Often, when I visit SQL-related forums and social media, I encounter posts that are half-truths, overgeneralizations, and sometimes, outright myths. This is especially true on the subject of SQL performance. 

The problem is I can't just scroll past. I have this compulsion to play SQL Mythbuster. If the posts had, say, 2 likes and that's it, I'd probably move on. But they often have hundreds of positive reactions and comments in response to some dubious, unsupported, or at least badly qualified claims. 


 

For example, here's a recent LinkedIn post:



Here's why I'm not a fan of this post (no offense to the author):

  1. The proposed optimization is a magic performance trick. It is essentially saying, "Hey, instead of writing this straightforward query, you can rearrange it in this magic way to make it go fast!" The post doesn't attempt to justify the claim other than by unconvincingly stating that the more straightforward query (hereafter called Query 1) causes "poor index utilization," while the supposedly optimized query (hereafter called Query 2) presumably does not.

  2. The post doesn't even state which of the 100+ DBMSs* in the world he's running, each of which has unique performance characteristics. Always be skeptical of claims resembling, "In SQL, always do this thing to make it fast," without qualifying the claim with a specific DBMS. It is like saying, "In vehicles with an internal combustion engine, always do this to make them run fast." For starters, what kind of vehicle: car? truck? plane?

    It is worth mentioning, despite my skepticism, that it's quite possible this trick did work for some DBMS. But which one? Tell us! My intuition tells me there's no way the claim is universally true, but it could be accurate for some specific version of some specific DBMS (in a specific schema, specific data set, specific configuration, etc.) The problem I have is that it's presented as if the trick applies to all of "SQL." (Whatever that is; SQL doesn't exist in a vacuum, remember!) To be fair, it didn't explicitly say "all of SQL," but IMHO, it is implied.

    Also worth mentioning, a commenter asked which DBMS, but the original poster didn't reply.

    *I am using "DBMS" as a generic, catch-all term of choice to refer to a relational DBMS or any other SQL engine

  3. There is no evidence presented to convince the reader that Query 2 is faster: no timing data; no execution plans. There is no demonstration of the "poor index utilization" of Query 1. The poster just said it's real and left it at that. 🤷 One commenter did ask to see some evidence, and the poster pushed back. 🤷🤷

  4. Although we can make some reasonable assumptions - e.g. id is probably the sole primary key column, and it's an integer type - nothing is mentioned about the table definition, number of rows, width of the rows, the table's physical structure (i.e. is it a clustered index?), whether it has indexes, and so on.
     
  5.  The final, and perhaps most important, reason I'm not a fan of the post is this:

    Great (/s) - so 926 reactions, most of which are positive. In other words, a lot of folks accepting this performance trick as factual. Also notice the 51 reposts. Way to spread yet another new myth. 🤦🏻‍♂️

That said, is it actually a myth? Could it be possible that despite being not the best-written post, Query 2 does actually run faster than Query 1 on some DBMSs?

When in doubt, test it out!

But here's the thing - I can't test it on all 100+ SQL DBMSs in the world. Who can? (Maybe somebody with a really cool test suite set up?) What I can do is test it on a recent version of an extremely popular DBMS.

Below is a test case on MySQL 8.0, using reasonable assumptions, that demonstrates Query 2 is not faster, so it disproves the claim...at least for my specific test in MySQL 8.0. The test below, of course, does not prove anything about Oracle or Snowflake, or Microsoft or any other DBMS.

But all it really takes to disprove a universal claim is a single counterexample, and MySQL 8.0 isn't exactly an obscure DBMS. So I consider the claim to be a myth. ("BUSTED," if you want. :)) 

I should put my money where my mouth is and show my tests. So, here they are! 

MySQL 8.0 Tests: 

mysql> /* setup */
mysql> CREATE TABLE users (id INT AUTO_INCREMENT PRIMARY KEY, name VARCHAR(100), salary INT);
Query OK, 0 rows affected (0.01 sec)


mysql> /* generate 10 million dummy rows */
mysql> INSERT INTO users (name, salary)
    -> SELECT RAND(), RAND()*10000 FROM information_schema.columns c1, information_schema.columns c2, information_schema.columns c3
    -> LIMIT 10000000;
Query OK, 10000000 rows affected (1 min 39.74 sec)
Records: 10000000  Duplicates: 0  Warnings: 0

For the purpose of this demonstration, I am searching by 10,000 IDs, from a 10 million row table. If I search by a mere five IDs, both queries execute so fast that the MySQL CLI (my client) reports both as running in 0.00 seconds flat. So I needed to scale up the search to the point at which there can be a measurable difference.

Query 1:

mysql> EXPLAIN ANALYZE
    -> SELECT * FROM users WHERE id IN (9462823,6458594,3326348, <snip>); 

Query 1 Average Time to Execute (6 trials): 122 ms 

Show details
  • Trial 1: 200 ms
  • Trial 2: 160 ms
  • Trial 3: 140 ms
  • Trial 4: 80 ms
  • Trial 5: 90 ms
  • Trial 6: 60 ms
  • (MySQL cli used, which rounds to hundredths of a second)

 

Query 1 Sample Execution Plan:

-> Filter: (users.id in (9462823,6458594,3326348,<snip>))  (cost=10122 rows=9994) (actual time=0.0575..43.9 rows=10000 loops=1)
    -> Index range scan on users using PRIMARY over (id = 741) OR (id = 1245) OR (9998 more)  (cost=4502 rows=10000) (actual time=0.0525..41.9 rows=10000 loops=1)

 
Note MySQL used an Index range scan on users using PRIMARY. That means it searched for the whole batch of id values using the primary index in essentially one scan, which you can't really hope to do much better than. Absolutely not "poor index utilization."

Query 2:

mysql> WITH ids(id) AS (
    ->     /* should use row constructors IMO, but this is how the OP did it */
    ->     SELECT 9462823 UNION ALL
    ->     SELECT 6458594 UNION ALL
    ->     SELECT 3326348 UNION ALL
    ->     <snip>
    -> )
    -> SELECT u.*
    -> FROM users u
    -> JOIN ids ON u.id = ids.id;


Query 2 Average Time to Execute (6 trials run): 197 ms 

Show details
  • Trial 1: 240 ms
  • Trial 2: 230 ms
  • Trial 3: 220 ms
  • Trial 4: 200 ms
  • Trial 5: 150 ms
  • Trial 6: 140 ms
  • (MySQL cli used, which rounds to hundredths of a second)

 

Query 2 Sample Execution Plan:

-> Nested loop inner join  (cost=11516 rows=10000) (actual time=7.88..70.8 rows=9955 loops=1)
    -> Table scan on ids  (cost=2304..2432 rows=10000) (actual time=7.71..8.19 rows=10000 loops=1)
        -> Materialize union CTE ids  (cost=2304..2304 rows=10000) (actual time=7.71..7.71 rows=10000 loops=1)
            -> Rows fetched before execution  (cost=0..0 rows=1) (actual time=166e-6..208e-6 rows=1 loops=1)
            -> Rows fetched before execution  (cost=0..0 rows=1) (actual time=167e-6..209e-6 rows=1 loops=1)
            -> Rows fetched before execution  (cost=0..0 rows=1) (actual time=42e-6..84e-6 rows=1 loops=1)
            <snip>
            -> Rows fetched before execution  (cost=0..0 rows=1) (actual time=0..41e-6 rows=1 loops=1)
    -> Filter: (u.id = ids.id)  (cost=0.25 rows=1) (actual time=0.00218..0.00226 rows=1 loops=10000)
        -> Single-row index lookup on u using PRIMARY (id = ids.id)  (cost=0.25 rows=1) (actual time=0.00207..0.0021 rows=1 loops=10000)

What this plan tells us in plain English:

  1. Materialize the CTE ids (think of it basically as making a temp table) based on all those UNION ALL'ed constants (each of those tiny UNION ALL'ed SELECTs is represented by "Rows fetched before execution")
  2. Table scan ids, and for each id, do a single-row index lookup on the primary index. Notice this is done in a Nested loop inner join. Essentially: for each id in ids, do a single-row index lookup on primary index.

Conclusion: 

All in all, Query 2 didn't perform badly (avg 197 ms), but it was definitely worse than Query 1 (avg 122 ms) - Query 1 was about 38% faster relative to Query 2, which is pretty significant. (I actually wonder: if I replaced all those UNION ALLs in Query 2 with row constructors, could that help the performance? A rough sketch of that variant is below, but I'm not going to benchmark it today.)
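For the curious, here's an untested sketch of what that row-constructor variant might look like on MySQL 8.0.19+ (same id values as before, heavily snipped; column_0 is the auto-generated column name the VALUES statement produces):

SELECT u.*
FROM users u
JOIN (VALUES ROW(9462823), ROW(6458594), ROW(3326348) /* <snip> */) AS ids
    ON u.id = ids.column_0;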

Aside from performance claims, the post also claims Query 2 provides:

  • "single query execution": sure it's one query, but so is Query 1. Huh???
  • Readability: subjective, but I'm not seeing it in this instance, and I say this as a CTE fanboy. (Query 2 uses a CTE, i.e. the WITH clause.)
  • Portability: Either of these queries should successfully parse and execute on pretty much any modern SQL engine, but if you came up to me and said, "I know a SQL DBMS that can only handle one of these two queries" -- which is it? Dollars to doughnuts it would be Query 1. (I think of how MySQL didn't even support CTEs until MySQL 8!)

OK, end of rant for today! But yeah, sometimes I need to stay away from SQL chatter on the internet. There's all kinds of this stuff that drives me crazy. 

Edit Weeks Later: Now Let's Test PostgreSQL 17.4! 

I thought I might enhance this post by repeating the same MySQL tests, but on Postgres 17.4 instead! See below.
 
postgres=# CREATE TABLE users (id INT GENERATED BY DEFAULT AS IDENTITY PRIMARY KEY, name VARCHAR(100), salary INT);
CREATE TABLE

postgres=# INSERT INTO users (name, salary)
postgres-# SELECT RANDOM(), RANDOM()*10000
postgres-# FROM generate_series(1, 10000000);
INSERT 0 10000000
Time: 38705.863 ms (00:38.706)
 

Query 1, Postgres:

postgres=# EXPLAIN ANALYZE
postgres-# SELECT * FROM users WHERE id IN (1322614, 8993900, 6548546, <snip>);
 

Query 1, Postgres, Average Time to Execute (6 trials): 103 ms 

Show details
  • Trial 1: 11.5 ms (planning) + 99.0 ms (execution) = 110.5 ms
  • Trial 2: 7.9 ms (planning) + 156.6 ms (execution) = 164.5 ms
  • Trial 3: 8.6 ms (planning) + 60.7 ms (execution) = 69.3 ms
  • Trial 4: 11.1 ms (planning) + 92.5 ms (execution) = 103.6 ms
  • Trial 5: 6.4 ms (planning) + 70.4 ms (execution) = 76.8 ms
  • Trial 6: 11.6 ms (planning) + 88.1 ms (execution) = 99.7 ms

Query 1, Postgres, Sample Execution Plan:

 Index Scan using users_pkey on users  (cost=0.43..35565.89 rows=10000 width=27) (actual time=3.200..88.645 rows=10000 loops=1)
   Index Cond: (id = ANY ('{1322614,8993900,6548546,3547394,9242769,4281742,4334656, <snip>}'::integer[]))

Query 2, Postgres:

postgres=# EXPLAIN ANALYZE
postgres-# WITH ids(id) AS (
postgres(#     SELECT 1322614 UNION ALL
postgres(#     SELECT 8993900 UNION ALL
postgres(#     SELECT 6548546 UNION ALL
postgres(# <snip>
postgres(# )
postgres-# SELECT u.*
postgres-# FROM users u
postgres-# JOIN ids ON u.id = ids.id;


Query 2, Postgres, Average Time to Execute  (6 trials): 415.5 ms 

Show details
  • Trial 1: 170.6 ms (planning) + 244.0 ms (execution) = 414.6 ms
  • Trial 2: 180.9 ms (planning) + 284.4 ms (execution) = 465.3 ms
  • Trial 3: 169.9 ms (planning) + 220.1 ms (execution) = 389.1 ms
  • Trial 4: 168.5 ms (planning) + 214.2 ms (execution) = 382.7 ms
  • Trial 5: 176.6 ms (planning) + 268.1 ms (execution) = 444.7 ms
  • Trial 6: 174.8 ms (planning) + 219.3 ms (execution) = 394.1 ms

Query 2, Postgres, Sample Execution Plan:

 Gather  (cost=1000.43..2057.59 rows=10000 width=27) (actual time=25.238..216.174 rows=10000 loops=1)
   Workers Planned: 2
   Workers Launched: 2
   ->  Nested Loop  (cost=0.43..57.59 rows=4167 width=27) (actual time=0.052..87.485 rows=3333 loops=3)
         ->  Parallel Append  (cost=0.00..50.00 rows=1 width=4) (actual time=0.002..3.282 rows=3333 loops=3)
               ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
               ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.000..0.000 rows=1 loops=1)
               ->  Result  (cost=0.00..0.01 rows=1 width=4) (actual time=0.001..0.001 rows=1 loops=1)
               <snip>
         ->  Index Scan using users_pkey on users u  (cost=0.43..7.58 rows=1 width=27) (actual time=0.025..0.025 rows=1 loops=10000)
               Index Cond: (id = (1322614))



⚠️ On Postgres, my first attempt at running Query 2 failed because:

ERROR:  stack depth limit exceeded
HINT:  Increase the configuration parameter "max_stack_depth" (currently 2048kB), after ensuring the platform's stack depth limit is adequate.


This is an issue when stringing all those SELECTs of literals separated by UNION ALLs, which nobody should be doing in 2025 unless your DBMS of choice does not support row constructors (Postgres does). My workaround for the purposes of running these tests was to add this to the top of my script:

set max_stack_depth = '7664kB';
 
7664kB is the max allowed by Postgres, and it isn't that much larger than the default of 2048kB. I got lucky in this particular use case, but if that's the ceiling, the Query 2 approach appears to be much less scalable on Postgres.
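For reference, here's a sketch of the row-constructor form in Postgres, which sidesteps the UNION ALL stack-depth issue entirely (untested at the full 10,000-id scale, and heavily snipped):

SELECT u.*
FROM users u
JOIN (VALUES (1322614), (8993900), (6548546) /* <snip> */) AS ids(id)
    ON u.id = ids.id;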

 

Conclusion for Postgres: 

So in summary, this magic performance trick was also BUSTED on Postgres 17.4. Query 2 is actually much slower than Query 1 for this given test; it took almost exactly 4 times as long to run!
 
Notice the Postgres planner tried to use parallel processing for Query 2, which my Postgres instance is configured to cap at 2 workers at the moment. It's possible that if I tuned parallel query settings, it could do better than presented here. It's not impossible that it could beat Query 1 even, but I have to go with the results I have here. This is why "for this particular configuration" is important to qualify your tests with, in addition to DBMS/version/etc.
 
It may even be worth mentioning all the tests in this post were run on my 2021 M1 Macbook Pro. The results could vary if I ran it on another machine, or on some cloud database service such as Amazon RDS.
 
 

Overall Conclusion: 

The major point I'm trying to make here is not "they are wrong and I am right," but rather, it is important to communicate about SQL performance more carefully! Always, with respect to performance quirks, qualify a post with your DBMS of choice. Ideally, talk about version and more detail about the schema and data as well. 

If some performance trick - say, approach A running faster than approach B - holds on your specific database running on, say, Oracle 19c, your experience is valid. But absolutely do not take away from your experience a vast overgeneralization such as: "In SQL, you should always do A because it's faster than B."
 
And: always show the proof that one approach was slower or faster than the other! Show the respective timings, and ideally, the respective execution plans. Don't just say it's true. Convince us with evidence!

Always be skeptical of these magic performance tricks you see on the internet. If they can't even specify a DBMS, do not believe them! At least not by default. Make them put up or shut up. :) If they present the full test case and it supports their claim, great! But even then, the test shouldn't be generalized to all of "SQL." Rather, the test demonstrates something about a specific data set in a specific schema on a specific version of a specific DBMS product with a specific configuration, etc.
 
Question to the readers: what do you guys think? Should I test the claim on more DBMSs? Am I being fair? Do the tests seem valid? Let me know what you think in the comments!
 
 

Monday, May 26, 2025

MySQL metadata locks on child tables in foreign key relationships, and how Postgres compares

Probably the single biggest misconception I hear in SQL discussions is that there's this monolithic thing called the "RDBMS," or perhaps "SQL Database," and that they all work exactly (or "pretty much exactly") the same.

The logical chain usually goes something like:

MySQL-specific problem occurs => The problem is incorrectly generalized to all of "RDBMS" or all of "SQL" => "RDBMS"/"SQL" is bad and we should throw it out

My motto is "SQL here != SQL there." (My other one is "When in doubt, test it out.")

Let's talk about this particular MySQL issue. (Assume the InnoDB storage engine whenever I talk about MySQL.) The issue is, in plain-if-not-technically-perfect language, that if a table, let's call it parent, has so much as a single row modified, all child tables (i.e. tables with a foreign key referencing parent) are blocked from running any DDL. 
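For reference, here's a minimal sketch of the parent/child schema I'm assuming for the demo below (column names taken from the statements that follow):

CREATE TABLE parent (
    id INT PRIMARY KEY,
    x  INT
);

CREATE TABLE child (
    id        INT PRIMARY KEY,
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES parent (id)
);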

This is because MySQL, when you update parent, acquires a shared metadata lock on all child tables. Observe a test between two concurrent sessions in MySQL:

MySQL Session 1:  do an update on one single row in parent, and don't commit (not committing just makes the timing issue easier to recreate for demo purposes):

mysql> BEGIN;
mysql> UPDATE parent SET x=999 WHERE id = 1;
Query OK, 1 row affected (0.01 sec)
Rows matched: 1  Changed: 1  Warnings: 0
 

MySQL Session 2 (I'm switching to another terminal tab that has a separate connection to the database):

mysql> ALTER TABLE child ADD COLUMN new_col INT;

This statement hangs. This is because the ALTER TABLE statement needs to acquire an exclusive metadata lock on  child. Exclusive means just that -- exclusive. As long as a shared metadata lock is held on a table, an exclusive metadata lock cannot be acquired. On the flip side -- as long as an exclusive metadata lock is held on a table, a shared metadata lock cannot be acquired.

Perhaps a diagram can explain better than words:

 

 

Let's take a brief look at the locking situation in yet another session:

mysql> SELECT * FROM performance_schema.metadata_locks
    -> WHERE LOCK_STATUS = 'PENDING';
+-------------+---------------------+-------------+-------------+-----------------------+-----------+---------------+-------------+-------------+-----------------+----------------+
| OBJECT_TYPE | OBJECT_SCHEMA       | OBJECT_NAME | COLUMN_NAME | OBJECT_INSTANCE_BEGIN | LOCK_TYPE | LOCK_DURATION | LOCK_STATUS | SOURCE      | OWNER_THREAD_ID | OWNER_EVENT_ID |
+-------------+---------------------+-------------+-------------+-----------------------+-----------+---------------+-------------+-------------+-----------------+----------------+
| TABLE       | my_schema | child       | NULL        |       281469012339360 | EXCLUSIVE | TRANSACTION   | PENDING     | mdl.cc:3753 |           17282 |            893 |
+-------------+---------------------+-------------+-------------+-----------------------+-----------+---------------+-------------+-------------+-----------------+----------------+
1 row in set (0.00 sec)

The above result confirms that a metadata lock is pending.

When I finally issue a COMMIT or ROLLBACK in Session 1, Session 2 will complete. But think about what this means -- if you try to run DDL on a table in pretty much any way, and it has a foreign key reference to a table that is heavily updated, it may be a while. Furthermore, since the exclusive metadata lock blocks shared metadata locks, once the DDL on child finally does run, all updates on parent are blocked!

So does this mean you should always throw foreign keys in the garbage when using any RDBMS, as I sometimes hear? Not if you're thinking about doing so for this particular reason. That's because not every DBMS locks in this way!

Let's repeat the test on Postgres.

Postgres Session 1:

postgres=# BEGIN;
BEGIN
postgres=*# UPDATE parent SET x = 999 WHERE id = 1;
UPDATE 1
postgres=*#

Postgres Session 2:

postgres=# ALTER TABLE child ADD COLUMN new_col INT; -- it completes instantly, no blocking involved
ALTER TABLE
postgres=# \d child -- look, the column was added
                 Table "public.child"
  Column   |  Type   | Collation | Nullable | Default
-----------+---------+-----------+----------+---------
 id        | integer |           | not null |
 parent_id | integer |           |          |
 new_col   | integer |           |          |
Indexes:
    "child_pkey" PRIMARY KEY, btree (id)
Foreign-key constraints:
    "child_parent_id_fkey" FOREIGN KEY (parent_id) REFERENCES parent(id)

postgres=*# update child set parent_id=null; -- I can even update the child row that is referencing the parent row that is being updated; completes instantly and doesn't hang
UPDATE 1
postgres=*# insert into child (id, parent_id) values (2, 1); -- insert a new child row referencing the parent row being updated; also doesn't hang
INSERT 0 1
postgres=*# create index idx on child(new_col); -- maybe i'll index the new column while i'm at it
CREATE INDEX

So clearly, Postgres does not have this same, particular locking issue as MySQL does. You can go hog wild running DDL on any child table no matter how much the parent is being updated. And no data or table definition will be corrupted or otherwise "bad." 

SQL here != SQL there!


Wednesday, May 21, 2025

The Benefits of Using Row Constructors! [MySQL]

 

In the world of SQL, sometimes you need to conjure up rows of literal data out of thin air, instead of pulling data from an existing table. The context of this post is when you, the SQL coder, need to define each row of literals individually. Maybe you're populating a table for testing purposes - the precise use case isn't too important. Note I am not referring to the context of generating a series by calling a function (post about how to do that), or otherwise programmatically generating a number of random rows out of thin air. Rather, you want to list the rows out one by one.

With that foundation laid out, my claim is that it's generally better to use your DBMS of choice's row constructor syntax than what I've more commonly seen: a bunch of SELECTs containing only literals, UNION ALL'ed together. For example:

SELECT 1,2,3
UNION ALL
SELECT 4,5,6
UNION ALL
SELECT 7,8,9
...

Rather than writing the above, there are benefits to taking the row constructor approach, which looks like the following in MySQL:

VALUES
ROW(1,2,3),
ROW(4,5,6),
ROW(7,8,9),
...
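As a hedged aside (version details matter): on MySQL 8.0.19+, the VALUES statement can also be used as a derived table, which is handy when you want to filter or join against the constructed rows. The columns come back with auto-generated names column_0, column_1, and so on:

SELECT *
FROM (VALUES ROW(1,2,3), ROW(4,5,6), ROW(7,8,9)) AS t
WHERE column_0 >= 4;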


Let me clarify that last bit: this post is specifically about MySQL and the above syntax is MySQL specific. No specific tests or under-the-hood workings demonstrated below should be generalized beyond the scope of MySQL. That said, I've found the principle does tend to apply elsewhere, but I will not try to support my claim for any other DBMS than MySQL in this post. 

Without further ado...

Why use row constructors over a bunch of UNION ALL'ed SELECTs?


1) Performance

The UNION ALL approach takes greater wall clock time to execute. I quickly ran a not-super-scientific benchmark (MySQL 8.0.x, 2024 Macbook Pro, usual YMMV disclaimer applies) of the two approaches on 10,000 rows (the same exact data is constructed in each) and the difference was night and day:

Generating 10000 constant rows...
Generating UNION ALL SQL in union.sql...
Generating VALUES ROW(...) SQL in values.sql...

=== Running UNION ALL ===
real 3.95
user 0.04
sys 0.01

=== Running VALUES ROW ===
real 0.08
user 0.03
sys 0.00


Times were about consistent with repeated trials. The discrepancy would likely be less pronounced with smaller data sets. And of course, if you're cooking up a tiny number of rows, say 5, I doubt you'll notice any tangible difference.

Wednesday, May 07, 2025

Standard SQL, Shmandard SQL

I find the concept of "Standard SQL" - referring to these magical ANSI/ISO/IEC/whatever documents that spec out how SQL is supposed to work - to be a fascinating subject. Lately I've been thinking about my personal observation that hardly anybody who works with SQL knows what's in "the standard" and what isn't.

I can't blame them, because these documents are not freely available to the world. Look below to see how much they cost to read on iso.org:


So unless you're a corporation with a big budget for this sort of thing, like Oracle or Snowflake, or perhaps a university, you're probably not going to justify purchasing those docs. I, as an individual SQL geek, have thought about purchasing these docs to read over, with the hopes of taking my SQL knowledge to a higher level, or at least to a place I haven't been able to explore before. That is, until I learned that I couldn't simply shell out ~$300 once and get it all. There are ABOUT A DOZEN docs at that price! And then I'd potentially have to buy all the incremental updates as well. So clearly they aren't intended for the curious individual to purchase.

There are, however, some third-party sources of information that do a good job of parsing out which syntaxes and features are and are not in standard SQL, and which iterations. Markus Winand's fantastic web site modern-sql.com comes to mind. Wikipedia isn't a bad reference either, just to get a high-level overview of which features are in which version of the standard (for example: https://en.wikipedia.org/wiki/SQL:1999)

But what I've noticed over the years is that many folks who work with SQL will meet a particular SQL engine's special feature with suspicion because it's non-standard. Or at least, it's not standard...in their opinion! In other words, they might say something like, "I don't want to use some vendor-specific oddity, because if I need to switch engines down the line, that will make life difficult."

My thoughts on this kind of remark are as follows:

1. There actually is no full implementation of standard SQL!

Here is a brilliantly simple exchange on X (formerly Twitter) that I saw a few years ago: 

               

Lukas Eder, who made the one-word reply, is the creator of the SQL framework for JVM-based languages called jOOQ. jOOQ's list of supported DBMSs can be found here. I mention that list because if someone who has painstakingly dealt with the ins and outs of SQL syntax between ALL those different dialects can confidently state that NONE of the implementations is complete, we should probably take his word for it.

Additionally, loosely speaking, pretty much all the implementations add their own inventions. And consider that for any given standard SQL feature, it may or may not exist in both SQL Engine ABC and SQL Engine XYZ. The overlap with standard SQL tends to be much looser than many people realize. So if I were to draw a Venn diagram depicting standard SQL and various implementations thereof, where area on the diagram represents features and syntaxes, it might look something like the following (I'm constraining myself to only the standard, Postgres, and MySQL for simplicity's sake):

 



Disclaimer: the above diagram is not to scale in any meaningful way; nor did I attempt to take on the Herculean task of methodically evaluating feature sets in a technically accurate manner and sizing these circles accordingly. In other words this diagram isn't "real"; I just made it to illustrate how I personally conceptualize the fact that all implementations partially and loosely overlap with standard SQL as well as each other! 

So with the above diagram in mind, say you're using Postgres. If you limit yourself to the Postgres features that overlap with standard SQL only, you're omitting a large chunk of what Postgres has to offer!

2. In my experience, switching between SQL engines is a rare event.

If you do have a need to interact with multiple engines as a requirement, you would probably want a tool to handle the SQL conversions, or generation, for you anyway, perhaps jOOQ, just to name one. If you don't have that requirement and you prefer writing SQL by hand, and the rare need to pivot from one DBMS to another does occur, then you would need a concerted effort to convert queries anyway (involving lots of testing).

3. Much of the time, in my experience, these claims of "that thing is not standard SQL" are just plain wrong!

I recently saw a post on LinkedIn, in which a poster was excited to share his recently acquired knowledge about the handy IS DISTINCT FROM syntax in Snowflake. Another user chimed in to say he didn't like that syntax, because it's non-standard. And yet here it is, being all standardy: https://modern-sql.com/caniuse/is-distinct-from.
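(For anyone unfamiliar, here's a quick hedged illustration of why IS DISTINCT FROM is nice: it's a null-safe comparison. The table t and columns a and b are hypothetical; in MySQL, which lacks IS DISTINCT FROM, the null-safe equality operator <=> covers similar ground.)

-- Standard SQL null-safe inequality (Snowflake, Postgres, and others support it):
SELECT * FROM t WHERE a IS DISTINCT FROM b;  -- treats NULLs as comparable values

-- Plain <> yields NULL (so the row is filtered out) when either side is NULL:
SELECT * FROM t WHERE a <> b;

-- MySQL's rough equivalent, using its null-safe equality operator:
SELECT * FROM t WHERE NOT (a <=> b);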


I've seen this same kind of remark many, many times in SQL discussions over the years! It feels like many folks come up with a "pseudo-standard" model of what they think is standard in their head, based on which features and syntaxes they commonly see. They don't actually do their due diligence in determining what is and what isn't part of standard SQL, which again, I can't blame them too much for, since this is hard to do. (I've made this very error myself, even.) Instead, if they haven't seen some particular syntax too often, or it isn't widespread, they assume it is not standard SQL. Which brings me to my next point...

4. They are probably already using non-standard SQL features all the time anyway!

But before I elaborate on this point...

RANDOM POP QUIZ TIME:

Your query is returning a million rows, but you only want the first 10 rows to be returned. With which syntax can you modify your query to accomplish this, if you want to conform to standard SQL syntax only?

A) FETCH FIRST 10 ROWS ONLY
B) LIMIT 10
C) TOP 10
D) ROWNUM <= 10

I asked this question on my company's Slack server, and B) LIMIT 10, was the most popular response by far. But that's incorrect! LIMIT is common for SQL engines to support - MySQL supports it, as does Postgres and many others - but it is NOT mentioned in any of those crazy ANSI/ISO/etc. standard SQL docs! The correct answer is actually A) FETCH FIRST 10 ROWS ONLY

It just goes to show how obscure it is to know which syntax and features are part of standard SQL and which are not.
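To make the quiz answer concrete, here's a quick hedged sketch using a hypothetical users table (support varies: Postgres, Oracle 12c+, and others accept the standard form, while MySQL, as far as I know, still only accepts LIMIT):

-- Standard SQL:
SELECT * FROM users ORDER BY id FETCH FIRST 10 ROWS ONLY;

-- Common, but not in the standard:
SELECT * FROM users ORDER BY id LIMIT 10;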

A few more examples of non-standard but common syntax:

  • identifier quoting using backticks (e.g. `my_table`)
  • identifier quoting using brackets (e.g. [my_table])
  • string literal quoting with double quotes  (e.g. WHERE NAME = "Mark")
  • commonly used functions like DATEDIFF() and NOW()
  • AUTO_INCREMENT (e.g. creating a column defined as ID INT AUTO_INCREMENT PRIMARY KEY) and SERIAL (the older way to accomplish this in Postgres)
  • OFFSET (e.g. OFFSET 1000 LIMIT 10)
  • CREATE MATERIALIZED VIEW (materialized views are not part of standard SQL)

 5. "Standard" doesn't necessary mean "common" or "widespread."

Sometimes nonstandard syntaxes/features do have widespread adoption. Again, see LIMIT. So there's almost an "not truly standard but commonplace" set of syntaxes/features. Another example: I've been seeing the useful QUALIFY syntax more and more lately, particularly in analytic SQL engines, which has to do with filtering results of window functions more elegantly. Snowflake, BigQuery, Redshift and several others support QUALIFY. But is it standard? Nope!

6. Two implementations of one standard feature could have very different under-the-hood workings or options.

For some statements, such as CREATE INDEX, the basic syntax is practically universal (though, strictly speaking, indexes aren't even specified by the standard), but the options available to the statement may vary quite a bit per implementation. And those options drive what goes on under the hood of the engine, which can be very different. Consider that Postgres features block range (BRIN) indexes, but MySQL does not. So how do we even account for this fact in the above Venn diagram? Is it a three-way intersection (MySQL + Postgres + Standard SQL all include CREATE INDEX)? Is it a partial overlap? It's complicated.

In the case of CREATE MATERIALIZED VIEW, both Oracle and Postgres have this statement and the kind of database object, a materialized view, that can be created and used. But an Oracle Materialized View is much more robust and powerful than a Postgres materialized view (see this past post of mine for reference). MySQL does not have materialized views at all. Standard SQL does not even mention materialized views. So do the Postgres and Oracle circles in the Venn diagrams overlap for the area representing materialized views? It's complicated. (At least we know MySQL and Standard SQL would not!)

Another tricky case: two DBMSs have both completely implemented a standard SQL feature per its spec, yet they behave very differently! <TODO: expand on this>

7. Some Standard SQL features are considered "optional":

Fun fact:

Did you know that standard SQL:99 says you can access Java methods directly in the database server? It's called SQL/JRT and it is designated as "optional." So, if DBMS x supports the optional SQL/JRT and DBMS y does not, is x the more standard-compliant DBMS? I don't know the answer!

Oracle Database has a whole JVM built into the server that you need to allocate resources for, such as memory, just to support what it calls Java Stored Procedures. Are these Java Stored Procedures a standard-compliant version of SQL/JRT? I don't know, but it certainly covers similar ground at the very least.

What if this standard feature is generally considered a "bad thing"? I asked some coworkers for their opinions on the subject of SQL/JRT and one said: "It was a big hype at the beginning of century. It quickly faded because DB server is one of the most expensive types of app server especially when coupled with Oracle licenses."

So you could argue that in this case, this particular standard feature is optional, and arguably a bad thing (not sure I agree, but one can make a valid case, as my coworker did). So maybe we wouldn't count a DBMS lacking this feature as "less standard compliant." I mean, I guess they're orthogonal concepts? (standard vs. "good") And the latter is subjective. Just talking out loud.  :)

At the risk of belaboring the point, there's also PL/Java for Postgres. I'll just drop the link and move on, because at this point, my head is spinning. It's a complicated subject!

Closing

So to wrap up this post, in my view, usually you shouldn't even worry about what's in standard SQL! Use what works best for your DBMS/SQL Engine of choice. Don't use your DBMS in a lowest common denominator manner - use it to its fullest capacity! See the Venn diagram above. You should use the full circle for your DBMS, not just its intersection with "Standard SQL."

Avoid walking on eggshells worrying about it, continuously looking up what actually is and isn't in standard SQL in documents you probably don't have access to. It's not worth it in my opinion.

I personally make some exceptions on this point. I care when standard SQL dictates that a particular syntax or feature should exist, it's a good and sensible thing, and the implementation, while supporting the standard syntax, also offers an alternative that deviates from it for little to no good reason (or perhaps for legacy reasons). In that case, I usually push for use of the standard version.

For example, MySQL lets you quote string literals in single or double quotes by default. In other words I can run:

SELECT * FROM EMPLOYEE WHERE NAME = 'Mark' 

or

SELECT * FROM EMPLOYEE WHERE NAME = "Mark" 

Both seem to run just fine. But the double quotes buy me nothing in this situation. Plus, whether the double-quoted version even works in MySQL is contingent on whether SQL_MODE includes ANSI_QUOTES. If it does, double quotes delimit identifiers rather than string literals, so the query would throw an error. So in my view, stick with the single quotes, as standard SQL dictates. Using single quotes also provides an added bonus: the aforementioned "what if I want to switch engines?" compatibility. Even though I'm against worrying about that too much, it comes free in this case.
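A hedged sketch of that behavior (session-level change shown; the exact error text may differ by version):

-- Default sql_mode: double quotes can delimit string literals
SELECT * FROM EMPLOYEE WHERE NAME = "Mark";   -- runs fine

-- Turn on ANSI_QUOTES for this session (appending to whatever modes are already set)
SET SESSION sql_mode = CONCAT(@@sql_mode, ',ANSI_QUOTES');

-- Now double quotes delimit identifiers, so "Mark" is treated as a column name
SELECT * FROM EMPLOYEE WHERE NAME = "Mark";   -- error: unknown column "Mark"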

OK, that's about it. Let me leave you with a quote from my buddy Kevin, a fellow engineer with close to three decades of experience now, who I spoke to about this subject: "Ultimately, it makes no difference [whether or not a feature is standard], outside trying to prove a point in an argument."



Saturday, May 03, 2025

I'm writing a book!

Part of my job has involved reviewing database changes, both schema (DDL) and the data itself (DML) for dozens of developers, and occasionally other folks like data scientists, for about 8 years. I've primarily done this with MySQL, and I feel like at this point I have a great deal of experience in knowing which "gotchas" to expect when working with this particular DBMS.

So I'm writing a book! It's tentatively titled "MySQL Booby Traps: the Hazards and Footguns to Look Out For, and How to Avoid Them." Thanks to some of my friends and coworkers who brought up the idea. I wasn't thinking about publishing my random SQL rants (mostly in our company's Slack, but also here and on Reddit) into book form, but they've encouraged me to do so, and so I've been writing away!

Stay tuned!