Postgres column type inference madness
Posted on October 20, 2025 (Last modified on October 21, 2025) • 2 min read • 413 words

Long time no see.
Anyways.
This just cost me hours. And it’s “part I” because I just might add all the difficulties I had, which are … plenty.
DISCLAIMER: In no way do I want to bash either Postgres or SQLAlchemy here; both are incredible projects. This is just my personal experience in, well, “experiencing” the learning curve as someone who has basically no clue about all those things.
Situation: saving a database table, I get this error:
Original exception was: (psycopg2.errors.InvalidTextRepresentation) invalid input syntax for type double precision: "offset_key.251"
E LINE 1: ...:JSONB, ARRAY['scheduled-item']::VARCHAR[], NULL, 'offset_key.251',...
E [SQL: INSERT INTO invoice_items (<<tons truncated>> offset_key, <<tons truncated>> ... offset_by_ref, cancelled_by_ref) SELEC ... 1158 characters truncated ... 7, p18, p19,
E [parameters: {'offset_key__0': 'offset_key.251', <<again tons truncated>> ..., 'offset_key__1': nan, <<blablablablabla>>

After a lot of debugging around, I was absolutely sure that the value offset_key.251 was supposed to be written to the column offset_key. Which was perfectly fine: the column is both defined as Mapped[String] and actually created as such during the test run. So the actual table column type was absolutely correct.
After a lot of haggling with ChatGPT the solution was this (copy-pasted, because that stupid thing explains really well sometimes):
Ah — now it’s crystal clear. Let’s unpack this carefully.
(well, thanks and fuck you too)
Sometimes, if the first row’s parameter is NULL, Postgres tries to infer a type. If SQLAlchemy passes ‘offset_key.251’ as a parameter in a position that Postgres already inferred as double precision (from another row, e.g., nan), you get this error.
Databases are … weird. Shouldn’t it just use the actually defined column type?!? This seems really, really wrong (not the first time with the ChatGPTool …).
But I digress.
Notice: the second row is nan.
Yes.
In Python, float('nan') is a FLOAT, so SQLAlchemy passes it as double precision.
Postgres sees nan in row 1 → infers double precision for that column in the multi-row insert.
Row 0 has 'offset_key.251' → cannot cast to float → error.
✅ This is a classic multi-row INSERT typing problem with Postgres.
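The mechanism can be sketched in plain Python, no database required (this is just an illustration of the failing cast, not the actual SQLAlchemy code path): NaN really is a float, so a driver reasonably sends it as double precision, and a string in the same column position then blows up exactly like Postgres's "invalid input syntax for type double precision".

```python
import math

# NaN is a genuine Python float ...
nan = float("nan")
print(type(nan).__name__)  # float
print(math.isnan(nan))     # True

# ... but the string that was supposed to go into the same column
# cannot be cast to one, mirroring the Postgres error:
try:
    float("offset_key.251")
except ValueError as e:
    print(e)  # could not convert string to float: 'offset_key.251'
```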
WTF.
I was absolutely sure that could not be it. But I was completely out of options, so I went after the nan value – and it turns out, I read this data from a Pandas DataFrame. Okay, that makes absolute sense: Pandas quite dislikes None and tends to turn it into NaN, even in string columns.
So I re-mapped this to None, ran the test, …
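For the record, the remap itself is trivial. Here is a minimal sketch (nan_to_none is a hypothetical helper name, not from my actual code; in pandas itself something like df.where(df.notna(), None) should do the same job):

```python
import math

def nan_to_none(values):
    # Replace float NaN with None so the driver sends SQL NULL
    # instead of a double precision parameter.
    return [None if isinstance(v, float) and math.isnan(v) else v
            for v in values]

column = ["offset_key.251", float("nan"), "offset_key.252"]
print(nan_to_none(column))  # ['offset_key.251', None, 'offset_key.252']
```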
… and it worked. Unbelievable.
Again: HOURS.