> Django has the advantage of being a "complete & proven recipe"
I work on a large Django codebase at work, and this is true right up until you stray from the "Django happy path". As soon as you hit something Django doesn't support, you're back to lego-ing a solution together except you now have to do it in a framework with a lot of magic and assumptions to work around.
It's the normal problem with large and all-encompassing frameworks. They abstract around a large surface area, usually in a complex way, to allow things like a uniform API to caches even though the caches themselves support different features. That's great until it doesn't do something you need, and then you end up unwinding that complicated abstraction and it's worse than if you'd just used the native client for the cache.
I'm having a hard time imagining a case where you'd be worse off with Django (compared to whatever alternative you may have chosen) in the case where the happy path for the thing you're trying to do doesn't exist natively in Django. Either way you're still farming out that capability to custom code or a 3rd party library.
I guess if you write a lot of custom code into specific hooks that Django offers or use inheritance heavily it can start to hurt. But at the end of the day, it's just python code and you don't have to use abstractions that hurt you.
I don't agree with this cache take. Adding operations to the cache is easy. Taking the django-redis project as an example, there are only two levels until you reach redis-py: the cache abstraction and the client abstraction.
You can just run part of Django. So the negative of it being “massive” is really just the size of the library sitting there on disk, which is really not a big deal in most situations.
As far as going with what you know vs choosing the best tool for the job, that can be a bit of a balancing act. I generally believe that you should go with what the team knows if it is good enough, but you need to be willing to change your mind when it is no longer good enough.
I worked at a mid-size startup that was still running on Python 2.7 and Django for their REST APIs, as late as 2022. It was pretty meh and felt like traveling back in time 10 years.
2.7 was end-of-life in 2020! And Python 3 predates 2.7 by a couple of years.
A company using 2.7 in 2022 is an indicator that the company as a whole doesn't really prioritize IT, or at least the project the OP worked on. By 2017 or so, it should have been clear that whatever dependencies they were waiting on originally were not going to receive updates to support python3 and alternative arrangements should be made.
You captured the fundamental issues. There were mountains of technical debt. I recall encountering a dependency that had not been updated in over 10 years.
We have VB deployments that haven't been changed at all in about that long. Finally got approval to do a rewrite last year, which is python 3.6 due to other dependencies we can't upgrade yet.
It got this bad because the whole thing "just worked" in the background without issues. "Don't fix what isn't broken" was the business viewpoint.
I probably would have pushed for Hono as the underlying framework... That said, I've been a fan of Koa/Oak over Express for a very long time. For API usage, the swagger+zod integration is pretty decent, though it changes the typical patterns a bit.
All-in, there's no single silver bullet to solving a given issue. Python has a lot of ecosystem around it in terms of integrations that you may or may not need that might be harder with JS. It really just depends.
Glad your migration/switch went relatively smoothly all the same.
It depends on your use case. Exactly. If you’re building big data intensive pipelines with lots of array manipulation or matrix multiplications you know what will shine. Building user facing APIs, use something with types and solid async.
Matching your latter definition, I'd be inclined to go with Rust or C#... that said, you can go a long way with TS in Node/Deno/Bun/Cloudflare/Vercel, etc.
So basically you just rewrote boilerplate code with the complexity of "hello world", plus deploy scripts. Without any dependencies, data migrations, real user data, or a downtime SLA. And after that you had time to write quite a long article.
I'm actually building an app on the side and went the other way around on this. Migrating from Typescript back to Python. Granted, my gripes were more with NextJS rather than Node or Typescript.
Using Django was so intuitive, although the nomenclature could be a bit better. But what took me days trying to battle it out in NextJS was literally done in an hour with Django (admin management on the backend). While Django still looks and feels a bit antiquated, at least it worked! Meanwhile I lost quite a bit of the past weekend trying to fight my code and clean things up in NextJS, because somehow the recommended approach for most things is mixing up your frontend and backend with a complete disregard for separation of concerns.
My new stack from now on will likely be NextJS for the frontend alone, Django for the CRUD and authentication backend, Supabase Edge Functions for serverless, and FastAPI if needed for intensive AI APIs. Open to suggestions, ideas and opinions though.
I do a lot of glueware and semi-embedded stuff with Python... but my go-to these days for anything networky is Elixir (LiveView if there's a UX). If I need an event loop, or async that is more than a patched-on keyword, it just rocks. It is amazing to me how much Elixir does not have, and yet how capably it solves so many problems that other languages have had to add explicit support for.
This didn't make sense to me either. If it only took three days for a complete rewrite to another language, what's the problem? Did I read that they were getting interrupted for user requests? Felt weird.
>I'll preface this by saying that neither of us has a lot of experience writing Python async code
> I'm actually really interested in spending proper time in becoming more knowledgeable with Python async, but in our context you a) lose precious time that you need to use to ship as an early-stage startup and b) can shoot yourself in the foot very easily in the process.
The best advice for a start-up is to use the tools that you know best. And sometimes that's not the best tool for the job. Let's say you need to build a CLI. It's very likely that Go is the best tool for the job, but if you're a great Python programmer, then just do it in Python.
Here's a clearer case where the author was not very good with Python. Clearly, since they actually used Django instead of FastAPI, which should have been the right tool for the job. And then wrote a blog post about Python being bad, but actually it's about Django. So yeah, they should have started with Node from day one.
I would have picked Hono and Drizzle, in part because of the great TS support, but also because Hono is much faster than Express and supports Zod validation out of the box. This stack would also allow using any other runtime (Deno, Bun, or Cloudflare Workers).
Given they used TS and performance was a concern I would also question the decision to use Node. Deno or Bun have great TS support and better performance.
I checked it out and it looks good on paper but it only runs on Bun.
Don't get me wrong, I use Bun and I'm happy with it, but it's still young. With Hono/Drizzle/Zod I can always switch back to Node or Deno if necessary.
I wouldn't call it seamless, having also done this recently (the handler function signature is different), but it is relatively straightforward without major changes to the code needed.
Made pretty much the same comment: Hono + Zod + Swagger is pretty nice all around, not to mention the portability across different runtime environments. I also enjoy Deno a lot; it's become my main shell scripting tool.
I think it makes sense to start with node.js... it's the standard and widely supported. Eventually it should not be too difficult to switch to bun or deno if the need arises.
I'm more a fan of just a sql template string handler... in C#/.Net I rely on Dapper... for node, I like being able to do things like...
    const results = await query`
        SELECT...
        FROM...
        WHERE x = ${varname}
    `;
Note: this is not SQL injection; the query is a string template handler that creates a parameterized query and returns the results asynchronously. There are adapters for most DBs, or it's easy enough to write one in a couple dozen lines of code or less.
ORMs not only help with the result of the query but also when writing queries. When I wrote SQL I was constantly checking table names, columns, and enums. With a good ORM like EF Core, not only do you get autocomplete, type checking, etc., but dealing with relationships is much less tedious than with raw SQL. You can read or insert deeply nested entities very easily.
Obviously ORMs and query builders won't solve 100% of your queries but they will solve probably +90% with much better DX.
For years I used to be in the SQL-only camp but my productivity has increased substantially since I tried EF for C# and Drizzle for TS.
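For the Python side of this thread, the same "nested relationships without hand-written joins" point looks roughly like this in SQLAlchemy 2.0 (mentioned elsewhere in the thread); a minimal sketch, with the Order/Item models assumed:

    from sqlalchemy import select
    from sqlalchemy.orm import Session, selectinload

    def orders_with_items(session: Session, customer_id: int):
        # eager-load the child collection; no hand-written JOIN, and the result
        # is typed Order objects with .items already populated
        stmt = (
            select(Order)
            .where(Order.customer_id == customer_id)
            .options(selectinload(Order.items))
        )
        return session.scalars(stmt).all()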
VS Code plugs into my DB just fine for writing SQL queries...
With an ORM, you can also over-query deeply nested related entities very easily... worse, you can then shove a 100MB+ JSON payload to the web client, which uses a fraction of it.
No, but it does put you closer to the actual database and makes you think about what you're actually writing. You also aren't adding unnecessary latency and overhead to every query.
Also the overhead of good ORMs is pretty minimal and won't make a difference in the vast majority of cases. If you find a bottleneck you can always use SQL.
Bit of a plug but I just started working on a drizzle-esque ORM[1] for Python a few days ago and it seems somewhat appropriate for this thread. Curious whether anyone thinks this is a worthwhile effort and/or a good starting point syntax-wise.
If we ignore the ML/AI/array libs, where Python shines, the core development has really done nothing much for it since 3.0.
Despite MS, Guido and co throwing their weight behind it, there's still none of the somewhat-promised 5x speedup across the board (more like 1.5x at best), the async story is still a mess (see TFA), the multiple-interpreters/GIL-less work is too little, too late, the ecosystem still hasn't settled on a single dependency and venv manager (just make uv standard and be done with it), types are a ham-fisted experience, and so on, and so forth...
I don’t know a ton about either, but now I am curious whether the takeaway is that async with Python is problematic, or only async with Django.
Async with Python is problematic, but this article doesn't really explain that. Django async being bad is one of many symptoms of Python async having been even worse in the past than it is today. Sometimes people claim it's fixed, which ignores a decade-plus of momentum behind messy workarounds - and it's still bad.
I'm using it for a hobby project, and pretty pleased.
My personal maybe somewhat "stubborn old man" opinion is that no node.js orm is truly production quality, but if I were to consider one I think I would start with it. Be aware it has only one (very talented) maintainer as far as I recall.
Everyone's definition of "production quality" is different :-), but Joist is a "mikro-ish" (more so ActiveRecord-ish) ORM that has a few killer features:
I always find this line of thought strange. It's as if the entire team hinges their technical decision on a single framework, when in reality it's relatively easy to overcome this level of difficulties. This reminds me of the Uber blunder - the same engineer/team switched Uber's database from MySQL to Postgres and then from Postgres to MySQL a few years later, both times claiming that the replaced DB "does not scale" or "sucks". In reality, though, both systems can work very well, and truth be told, Uber's scale was not large enough for either db to show the difference.
Python async may make certain types of IO-blocked tasks simpler, but it is not going to scale a web app. Now maybe this isn't a web app, I can't really tell. But this is not going to scale to a cluster of machines.
You need to use a distributed task queue like celery.
We did the same for our app as well. I wrote a little library to make it as simple as FastAPI to generate swagger specs - you can try it out - https://github.com/sleeksky-dev/alt-swagger .
Doing zero upfront research or planning and then bragging about it in public like this is pretty suspect, but I guess more to the point, glorifying "the pivot" like this is out of style anyway. You're now supposed to insist that whatever happened was the plan all along.
After having used it two weeks ago for the first time: it feels as though async support in Python is basically a completely parallel standard library that uses the same Python syntax with extra keywords all over the place. It's like if building code compliance required your 50-year-old house to be updated to have a wider staircase with deeper steps, but you wanted to do so without affecting the existing stairs, so now you just have two staircases which are a little bit different and it feels like it takes up space unnecessarily.
I had to look for async versions of most of what I did (e.g. executing external binaries) and use those instead of existing functions or functionality, meaning it was a lot of googling "python subprocess async" or "python http request async".
If there were going to be some kind of Python 4.x in the future, I'd want some sort of inherent, goroutine-esque way of throwing tasks into the ether and then waiting on them if you wanted to. Let people writing code mark functions as "async'able", have Python validate that async'able code isn't calling non-async'able code, and then if you're not in an async runloop then just block on everything instead (as normal).
If I could take code like:
    def get_image(image):
        return_code = subprocess.check_call(["docker", "pull", image])
        if return_code:
            raise RuntimeError("idk man it broke")

    result = get_image(imagename)
    print(result)
And replace it with:
    def get_image(image):
        return_code = subprocess.check_call(["docker", "pull", image])
        if return_code:
            raise RuntimeError("idk man it broke")

    result = async get_image(imagename)
    print(result)
And just have the runtime automatically await the result when I try to access it if it's not complete yet - that would save me thousands of lines of code over the rest of my career spent trying to parallelize things in cumbersome, explicit ways. Perhaps provide separate "async" runners that could handle things - if, for example, you explicitly want things running in separate processes, threads, interpreters, etc. - so you can set a default async runner, use a context manager, or explicitly call threadpool.task(async get_image(imagename)).
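For what it's worth, you can get part of the way there today: wrap the blocking call in asyncio.to_thread, fire it off with create_task, and only await when you need the result. A rough sketch using a trimmed version of the docker pull example above (the image name is just an example):

    import asyncio
    import subprocess

    def get_image(image):
        # plain blocking code, unchanged
        subprocess.check_call(["docker", "pull", image])

    async def main():
        # "throw it into the ether" now...
        task = asyncio.create_task(asyncio.to_thread(get_image, "alpine:latest"))
        # ...do other work here...
        await task  # ...and only block here if it isn't finished yet
        print("image ready")

    asyncio.run(main())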
When they start moving away from API calls to third parties toward their own embeddings or AI, they're in for a bad time.
What’s going to end up happening is they’ll then create another backend for AI stuff that uses python and then have to deal with multiple backend languages.
They should have just bitten the bullet and learned proper async in FastAPI, like they mentioned.
There's caolan's async if you need series and parallel control flow.
There's RxJS if you need observables.
On web frameworks, Hono seems nice too. If you need performance, there's uWebSockets.js, which beats all other web frameworks in HTTP and websocket benchmarks.
For type safety, aside from TypeScript itself, there's ArkType, Zod, Valibot, etc.
I was about to migrate a legacy system written in Python/ Flask to FastAPI and React (frontend). But the sentiments here seem to suggest that FastAPI is not the best solution if I need async? So go with Next.js?
We're on the same wavelength; I have decades of ORM experience. It was the first thing I would reach for in any project. Now it can just be vanilla JDBC with tons of duplicated boilerplate, at least in the early stages.
Who is the audience for a post like this? Presumably HN, since the author invoked PG.
But who is "we rewrote our stack on week 1 due to hypothetical scaling issues" supposed to impress? Not software professionals. Not savvy investors. Potential junior hires?
Good decision; judging by their general level of impatience with things, they would have hated my ORM :).
Also I think the node approach is probably still more performant than FastAPI but that's just a hunch.
Hopefully they won't have security issues because someone hijacked the node package that sets the font color to blue or passes the butter or something.
I had a Python script I was writing that basically just needed to run the same shell command 40 times (to clone images from X to Y), and a lot of the time was spent making the request and waiting for the data to be generated, so I figured I'd parallelize it.
Normally I do this either through multiprocessing or concurrent.futures, but I figured this was a pretty simple use case for async - a few simple functions, nothing complex, just an inner loop that I wanted to async and then wait for.
Turns out Python has a built in solution for this called a TaskGroup. You create a TaskGroup object, use it as a context manager, and pass it a bunch of async tasks. The TaskGroup context manager exits when all the tasks are complete, so it becomes a great way to spawn a bunch of arbitrary work and then wait for it all to complete.
It was a huge time saver right up until I realized that - surprise! - it wasn't waiting for them to complete in any way, shape, or form. It was starting the tasks and then immediately exiting the context manager. Despite (as far as I could tell) copying the example code exactly, and the context manager being exactly what I wanted to have happen, I then had to take the list of tasks I'd created and manually await them one by one anyway, then validate that their results existed. Otherwise Python was spawning 40 external processes, processing the "results" (which were about three incomplete image downloads), and calling it a day.
I hate writing code in golang and I have to google every single thing I ever do in it, but with golang, goroutines, and a single WaitGroup, I could have had the same thing written in twenty minutes instead of the three hours it took me to write and debug the Python version.
So yeah, technically I got it working eventually but realistically it made concurrency ten times worse and more complicated than any other possible approach in Python or golang could have been. I cannot imagine recommending async Python to anyone after this just on the basis of this one gotcha that I still haven't figured out.
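For reference, the documented TaskGroup pattern looks roughly like this; whether it matches what went wrong above is hard to say, and the "clone-tool" command and image pairs are made up for illustration:

    import asyncio

    async def clone_image(src, dst):
        # run the external command without blocking the event loop
        proc = await asyncio.create_subprocess_exec("clone-tool", src, dst)
        rc = await proc.wait()  # without this await, the group has nothing to wait on
        if rc:
            raise RuntimeError(f"clone failed for {src}")

    async def main(pairs):
        async with asyncio.TaskGroup() as tg:  # Python 3.11+
            for src, dst in pairs:
                tg.create_task(clone_image(src, dst))
        # the async with block only exits once every task has finished

    asyncio.run(main([("imgA", "copyA"), ("imgB", "copyB")]))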
Do yourself a favor and use Elixir. Elixir has almost the same top libraries from Python you need to work with AI. As a matter of fact, the Elixir versions are far less fragile and more reliable in production use cases. I documented my journey of writing an AI app using Elixir and listed out the top libraries you can use, especially if you're coming from Python:
I'm really torn -- you and your engineers should be excited to work on your codebase. You should enter it and be like "yes, I've made good choices and this is a codebase I appreciate, and it has promise." If you have a set of storylines that make this migration appropriate, and it's still early enough in the company's life that you can even do this in 3 days, then by all means, do it! And good luck. It'll never be cheaper to do it, and you are going to be "wearing" it for your company's lifetime.
But a part of me is reading this and thinking "friend... if PostHog was able to do what they're doing on the stack you're abandoning, do you think that stack is actually going to limit your scalability in any way that matters?" Like, you have the counterexample right there! Other companies are making the "technically worse" choice but making it work.
I love coding and I recognize that human beings are made of narratives, but this feels like 3 days you could have spent on customer needs or feature dev or marketing, and instead you rolled around in the code mud for a bit. It's fine to do that every now and then, and if this were a more radical jump (e.g. a BEAM language like Elixir or Gleam, or hell, even Golang, which has that preemptive scheduler + fast compiles/binary deploys + designed around a type system...) then I'd buy it more. And I'm not in your shoes so it's easy to armchair quarterback. But it smells a bit like getting in your head about technical narratives that are more fun to apply your creativity to, instead of the ones your company really needs.
The author addresses that in the article. Python can scale but then developers would have to work with unintuitive async code. You can think of it as a form of tech debt - every single decision they make will take longer because they have to learn something new and double check if they're doing it the right way.
https://status.posthog.com/history
I was just thinking... "BugHog? The platform famously broken more often than not?"
We have a whole posthog interface layer to mask over their constant outages and slowness. (Why don't we ditch them entirely? I, too, often ask this, but the marketing people love it)
>if PostHog was able to do what they're doing on the stack you're abandoning, do you think that stack is actually going to limit your scalability in any way that matters?
Also, considering the project is an AI framework, do you think the language ChatGPT is built on is a worse choice than the language we use because it's in the browser?
I have to spend 3 days working on someone else's "narratives that are more fun to apply their creativity to" all the time, even when my intuition and experience tells me it isn't a good idea. Sometimes my intuition is wrong. I've yet to meet a product manager that isn't doing this even when they claim to have all the data in the world to support their narrative.
Personally I don't think there's anything wrong with scratching that itch, especially if it's going to make you/your team more comfortable long term. 3 days is probably not make-or-break.
Async and Django don't mix well, and I honestly see the whole Django async effort as wasted resources - all those "a"-prefixed functions, etc.
To be honest, I never liked the way async is done in python at all.
However, I love Django and Python in general. When I need "async" in an HTTP request/response cycle, I use Celery and run the work in the background.
If the client side needs to be updated about the state of the background task, the best approach is to send the data to a websocket channel known to the client side, whether it's a chat response from an LLM or importing a huge CSV file.
Simple rule for me is, "don't waste HTTP time, process quick and return quick".
> If the client side needs to be updated about the state of the background task, the best approach is to send the data to a websocket channel known to the client side.
SSE is nice.
There’s also Django Channels, which is pretty sweet for certain tasks, especially websockets.
I use a combination of Channels and Celery for a few projects and it works great.
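A minimal sketch of that combination, assuming the websocket consumer is already subscribed to the task's group; the group naming, the "task.progress" message type, and do_the_import are made up for illustration:

    # tasks.py
    from asgiref.sync import async_to_sync
    from celery import shared_task
    from channels.layers import get_channel_layer

    @shared_task
    def import_csv(task_id, path):
        layer = get_channel_layer()
        notify = async_to_sync(layer.group_send)
        for pct in do_the_import(path):  # hypothetical generator yielding 0-100
            notify(f"task-{task_id}", {"type": "task.progress", "pct": pct})
        notify(f"task-{task_id}", {"type": "task.done"})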
The problem with channels is that if you need to touch the ORM you will have to use a sync_to_async call which will block the event loop.
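For reference, the usual shape of that workaround - Channels ships database_sync_to_async for exactly this, which runs the ORM call on a worker thread; the Message model here is assumed:

    from channels.db import database_sync_to_async
    from channels.generic.websocket import AsyncJsonWebsocketConsumer

    class ChatConsumer(AsyncJsonWebsocketConsumer):
        async def receive_json(self, content):
            # ORM work happens off the event loop, on a worker thread
            msg = await database_sync_to_async(self._save_message)(content["text"])
            await self.send_json({"id": msg.id})

        def _save_message(self, text):
            return Message.objects.create(text=text)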
+1
But I still hope at some point they will manage to fix the DevX with Django/Python and async.
Folks, if you have problems doing async work, and most of your intense logic/algorithms is a network hop away (LLMs, etc.), do yourself a favor and write a spike in Elixir. Just give it a shot.
The whole environment is built for async from the ground up. Thousands and thousands of hours put into creating a runtime and language specifically to make async programming feasible. The runtime handles async IO for you with preemptive scheduling. Ability to look at any runtime state on a production instance. Lovely community. More libraries than you might expect. Excellent language in Elixir.
Give it a shot.
I did an interview for the job I'm currently at, and we were discussing an architecture for live-updating chat. I said I wouldn't reinvent the wheel and would just use the approach Phoenix LiveView uses: a basic framework loaded client-side that applies diffs coming from a websocket to the UI, so the chat updates from those diffs. Turns out this is exactly the architecture they use in production.
People are reimplementing things that are first-class citizens in Elixir: live content updates, job runners, queues... Everything is built into the language. Sure, you can do it all in TypeScript, but by then you'll be importing lots of libraries, reimplementing stuff with less reliability, and offloading things like queues to third-party solutions like Pulsar or Kafka.
People really should try Elixir. I think the initial investment to train your workforce pays for itself really quickly when you don't have to debug your own schedulers and integrations with third-party solutions. Plus it makes it really easy to scale once you have a working solution in Elixir.
I agree in principle, but I think that your average Python developer who thinks that Node.js is an improvement over Python is going to have seizures if they need to switch to Elixir. It's a completely different way of working.
Persistent job queues?
Why wouldn't you need Kafka or Pulsar if you use Elixir?
I said you'd need those if you were coding jobs natively in TypeScript, without the aid of cloud primitives like AWS SQS and Lambda - not with Elixir.
This is an absolutely horrible idea. I’m not questioning the technology choice. But as someone interested in their career, it makes no sense to focus on a language or technology that is not popular. It’s bad both from the recruiting side - trying to get developers who are smart enough to care about their n+1 job - and from the developer side.
There are probably fewer code samples, and let’s be honest, this is 2025: how well do LLMs generate code for obscure languages where the training data is sparser?
Reminds me of the time I was on a team doing stuff in Erlang for no reason
Maybe for you! That's your call. I'm also interested in my career.
I've had 3 Elixir jobs and 2 Rust jobs in the last 10 years. All were on real products, not vaporware. I learned a ton, worked with great people, and made real friends doing it.
Luck? Skill? Who knows. It's not impossible to work with the technology of your choice on problems you find interesting if you're a little intentional.
Nothing ever gets better if everybody just does what's already popular.
>how well do LLMs generate code for obscure languages where the training data is more sparse
LOL. Speaking about absolutely horrible ideas ...
You might not like LLM code generation or corporations encouraging it. Just like I might not like gravity. But I am not going to jump out of a 25 story building. I accept reality for what it is.
Many people code out of curiosity and/or to learn new things and dgaf about whether recruiters will have trouble finding them, mega-lmao.
As an acceptor of reality, you can begin to accept that as well.
I have a strange feeling that most people haven’t found a method to get over their addictions to food and shelter. If they want to exchange labor for money to support those addictions, they have to care about what recruiters want, whether external recruiters or internal ones.
Using a lot of Typescript and Python in my current role and I find myself missing that part of Elixir. Ecosystems are night and day though. For what we're doing we'd have to write far too many libraries ourselves in Elixir and don't have the time right now.
A lovely language with an incredible web framework (Phoenix, LiveView). However, not easy to pick up for people with only imperative programming experience.
I had to switch my project to .NET in the end because it was too hard to find/form a strong Elixir team. Still love Elixir. Indestructible, simple, and everything is easy once you wrap your head around the functional programming.
It. Just. Works.
As someone who has spent my whole career in somewhat niche things (ROS, OpenWRT, microcontrollers, Nix), I think the answer for how to hire for these is not to look for someone who already has that specific experience, but rather to look for someone curious: the kind of person who reads Wikipedia for fun, an engineer who has good overall taste and is excited to connect the dots between other things they've learned about and experimented with.
Obviously that's not going to give you the benefit of a person who has specifically worked in the ecosystem and knows where the missing stairs are, which does definitely have its own kind of value. But overall, I think a big benefit of working in something like Elixir, Clojure, Rust, etc is that it attracts the kind of senior level people who will jump at the opportunity to work with something different.
And what happens when I’m looking for that next job? I haven’t interviewed for a pure developer job since 2018. But the last time I did, I could throw my resume up in the air and find a job as someone experienced with C# who knew all of the footguns, best practices, and the ecosystem. I’m sure the same is true for Java, Typescript, Python, etc.
This is excellent advice.
Thanks.
One nice side effect of having done this is having a small rolodex of other people who are like that.
So, like, if I had a good use case for Elixir and wanted a pal to hack on that thing with, I know a handful of people who I'd call, none of whom have ever used Elixir before but I know would be excited to learn.
Yes, same here. And that has come in very handy more than once. But my merry band of friends isn't getting any younger, I think the youngest in our group is now mid 30s or so, the bulk between 50 and 60.
Yes but NodeJS is also built for async. I get why Discord or FB Messenger use Elixir or Erlang, but they're huge scale.
Yeah moving from python to node for concurrency is insane.
JS haters could stand to understand the language has strengths even if it has well known warts.
Moving from Python to Node for per-process, fully-fleshed-out async and share-nothing concurrency, however, is perfectly sane.
That's the main reason to move. Same reason people moved from Java to Kotlin, except that might change now with vthreads.
Same experience working on FastAPI projects. I don’t know how big production apps are maintained (and supported operationally) with the mess that is python+async+types.
Conversely all the node+typescript projects, big and small, have been pretty great the last 10+ years or so. (And the C# .NET ones).
I use python for real data projects, for APIs there are about half a dozen other tech stacks I’d reach for first. I’ll die on this hill these days.
100% same experience. If it were up to me, I'd have started with TypeScript, but the client insisted on using a Python stack (we landed on FastMCP, FastAPI, PydanticAI).
While `PydanticAI` does the best it can with a limited type system, it just can't match the productivity of TypeScript.
And I still can't believe what a mess async Python is. The worst thing we've encountered was a bug from mixing anyio with asyncio, which resulted in our ECS container getting its CPU pinned to 100% [1]. And we constantly run into issues with libraries not handling task cancellation properly.
I get that python has captured the ML ecosystem, but these agent systems are just API calls and parsing json...
[1](https://github.com/agronholm/anyio/issues/884)
I don't really see how you're comparing Pydantic AI here to Typescript. I'm assuming you meant simply Pydantic.
Just comparing an agent framework written in Python (with a focus on being "typesafe") to one (any) written in TypeScript.
That's a very poor comparison then and not very useful?
Async Python has problems, but "anyio exists" is not one that can be blamed on Python; simply don't use weird third-party libraries trying to second-guess the asyncio architecture.
edit: ironically I'm the author of a weird third party library trying to second guess the asyncio architecture but mine is good https://awaitlet.sqlalchemy.org/en/latest/ (but I'll likely be retiring it in the coming year due to lack of interest)
> Same experience working on FastAPI projects. I don’t know how big production apps are maintained (and supported operationally) with the mess that is python+async+types.
Very painfully.
I avoid the async libs where possible. I'm not interested in coloring my entire code-base just for convenience.
Yeah I did my first project with FastAPI earlier this year with experience in other languages and I couldn't believe how bad it was.
The funny thing is all the python people will tell you how great FastAPI is and how much of an improvement it is over what came before.
FastAPI does have a few benefits over Express: auto-enforcing JSON schemas on endpoints is huge, vs. the stupidity of having to define TS types plus a second schema that then gets turned into a JSON schema that is then attached to an endpoint. That, IMHO, is the weakest link in the TS backend ecosystem; compiler plugins to convert TS types to runtime types are really needed.
The auto-generated docs in FastAPI are also cool, along with the pages that let you test your endpoints. It is funny: Node shops set up a Postman subscription for the team and share a bunch of queries, while Python gets all that for free.
But man, TS is such a nice language, and Node literally exists to do one thing and one thing only really well: async programming.
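To make the FastAPI point concrete: the Pydantic model is both the runtime validator and the OpenAPI schema. A minimal sketch (the endpoint and fields are made up):

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class CreateUser(BaseModel):
        email: str
        age: int

    @app.post("/users")
    def create_user(body: CreateUser):
        # malformed or mistyped JSON never reaches this point; FastAPI returns a 422
        return {"ok": True, "email": body.email}

    # interactive docs come for free at /docs (Swagger UI) and /redoc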
Maybe you have non TS clients, but I moved to tRPC backends and now my consumers are perfectly typed at dev time, combined with pnpm monorepos I’m having a lovely time.
> That IMHO is the weakest link in the TS backend ecosystem, compiler plugins to convert TS types to runtime types are really needed.
Just define all your types as TypeBox schemas and infer the schema from that validator. This way you write it once, it's synced and there's no need for a compiler plugin.
https://github.com/sinclairzx81/typebox?tab=readme-ov-file#u...
>Same experience working on FastAPI projects. I don’t know how big production apps are maintained (and supported operationally) with the mess that is python+async+types.
In my experience async is something that Node.js engineers try to develop/use when they come from Node.js, and it's not something that Python developers use at all (with the exception of Python engineers who add ASGI support to make the language enticing to Node developers).
We should really be at a point where the application is a YAML file plus something like Hugo for backends, and you can force it to use --java or --js or --rust or --python, etc...
I don't see it mentioned enough in the comments here, but not considering Celery as an alternative to Django + async really is the missing puzzle piece here. Aside from application-level options that weren't explored, I'm wondering whether handling some of the file IO stuff with, for instance, nginx, might be a better fit for their use case.
Once you're in the situation of supporting a production system with some of the limitations mentioned, you also owe it to yourself to truly evaluate all available options. A rewrite is rarely the right solution. From an engineering standpoint, assuming you knew the requirements pretty early on, painting yourself into a bad enough corner to scrap the whole thing and pick a new language gives me significant pause for thought.
In all honesty I consider a lot of this blog post to be a real cause for concern -- the tone, the conflating arguments (if your tests were bad before, just revisit them), the premature concern around scaling. It really feels like they may have jumped to an expensive conclusion without adequate research.
In an interview, I would not advance a candidate like this. If I had a report who exhibited this kind of reasoning, I'd be drilling them on fundamentals and double-checking their work through the entire engineering process.
I tried to use Celery for something extremely trivial (granted, 5+ years ago). It was so badly documented and failed at so many basic things I would expect from a task queue (like progress reporting) that I have no idea why it was, and still is, popular.
Just because you couldn't figure it out doesn't mean the capability wasn't there. More than ten years ago at this point I was running a massively scaled Celery + RabbitMQ + Redis deployment with excellent off-the-shelf reporting using Flower.
As a long-time Django user, I would not use Django for this. Django async is probably never the right choice for a green-field project. I would still pick FastAPI/SQLAlchemy over Express and PostHog. There is no way 15 different Node ORMs are going to survive in the long run, plus Drizzle and Prisma seem to be the leaders for now.
Agree
Django is great but sometimes it seems it just tries to overdo things and make them harder
Trying to async Django is like trying to do skateboard tricks with a shopping cart. Just don't
From the proverbial frying pan into the fire. If you're going to go through all of the effort and cost to switch platforms and to retrain your developers, why on earth would you pick Node.js?
Node.js is such an incredible mess. The ideas are usually OK, but the implementation details, the insane dependencies (the first time I tried to run a Node.js based project I thought there was something seriously wrong with my machine and that I'd been hacked), the lack of stability, the endless supply chain attacks, the maintainer headaches and so on - there is very little to like about Node.js.
C# before Node.js, and I can't stand C#. Java before C#. Yes, it's a language rant, but in the case of Node I am really sorry.
So you'd recommend they rewrote their Python project in Java (assuming the rewrite itself was a good idea)? I don't have any experience on a production web server written in Java or C#, but they both seem like a more difficult transition than JavaScript for rewriting a Python codebase.
I've written code in all of these and I think that Python to Java or Go is easier than Python to Node, especially if you don't want to spend the next 24 months auditing all of the code you just imported.
It's an opinion at the end of the day.
I've been working with Node.js since it came out and it's my go to language for anything backend-related.
The complaints about npm are issues that could happen with any other package manager as well. JavaScript being so popular is what draws the attention of attackers worldwide, and that's why it's newsworthy. I.e. your obscure Rust crate with 3 downloads per year is not "safe", it's just that no one gives af about it.
I would argue that all of these problems that came up, and the fixes that followed, have only made the ecosystem more robust with time :).
>first time I tried to run a Node.js based project I thought there was something seriously wrong with my machine and that I'd been hacked
Hehe, that made me chuckle. As you get more familiar with computers you will understand more and more what's going on. I used to teach older adults who had never touched a computer before, and many were startled when the cursor moved as they touched the mouse. Your comment kind of reminded me of it. First steps are usually like that. It's nice to see all kinds of people getting involved! ^^
> I've been working with Node.js since it came out and it's my go to language for anything backend-related.
> As you get more familiar with computers you will understand more and more what's going on.
Pot, meet kettle.
And yes, Rust's package management was inspired by Node, and it is one of the major drawbacks of Rust.
>Pot, meet kettle.
That makes zero sense, though.
But yeah, I'd encourage you to keep learning. I could send a few courses that I've found good if you're interested :).
Edit: can't comment there but @dang, this one's pretty bad :'(
https://news.ycombinator.com/item?id=45804814
"Python async sucks", then rants about django
"Python doesn't have native async file I/O." - like almost everybody, as "sane" file async IO on Linux is somehow new (io_uring)
Anyway ..
libuv has provided an async interface for IO using a worker thread pool for a decade; no dependency on io_uring required. I guess the thread pool they mention that aiofiles uses is written in Python, so it gives concurrency but retains the GIL, hence no parallelism. Node's libuv async machinery moves all the work off the main thread into C/C++ land until results are ready; only when dealing with the completed read event does it re-enter the JavaScript thread (Node's equivalent of a "GIL").
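To make that concrete, the Python-side workaround is roughly the following; a sketch using asyncio.to_thread (aiofiles wraps the same thread-pool idea), with file names invented:

```python
import asyncio
from pathlib import Path

async def read_file(path: str) -> bytes:
    # no native async file IO in the stdlib, so the blocking read is
    # handed to a worker thread; the event loop stays free meanwhile
    return await asyncio.to_thread(Path(path).read_bytes)

async def main() -> None:
    chunks = await asyncio.gather(read_file("a.bin"), read_file("b.bin"))
    print([len(c) for c in chunks])

# asyncio.run(main())
```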
Libuv has had io_uring integration for almost 18 months if you’re not on an old kernel or old hardware.
They claim about an 8x improvement in speed.
To be clear: libuv has had the ability to offload (some?) I/O operations to io_uring since v1.45.0, from 2023, and that's the 8x speed improvement. 2024 is when Node.js seems to have enabled (or rather, stopped disabling) io_uring by default in its own usage of libuv.
Yeah if you look at the libuv release history there’s been a lot of adding and subtracting since then. It’s clearly not all settled but there are chunks.
That's great! Just saying io_uring has never been a required dependency for writing application logic that avoids blocking on reads.
Python async does suck though
Django is massive and a ton of baggage to carry if you are only doing REST APIs.
This sounds like a standard case of going with what the developers know instead of evaluating the right tool for the job.
I think the baggage goes both ways - Django has the advantage of being a "complete & proven recipe" vs. Node where you try to lego-together an app out of dependencies that have deprecation warnings even in their latest versions.
> Django has the advantage of being a "complete & proven recipe"
I work on a large Django codebase at work, and this is true right up until you stray from the "Django happy path". As soon as you hit something Django doesn't support, you're back to lego-ing a solution together except you now have to do it in a framework with a lot of magic and assumptions to work around.
It's the normal problem with large and all-encompassing frameworks. They abstract around a large surface area, usually in a complex way, to allow things like a uniform API to caches even though the caches themselves support different features. That's great until it doesn't do something you need, and then you end up unwinding that complicated abstraction and it's worse than if you'd just used the native client for the cache.
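A contrived sketch of that unwinding, assuming Redis sits behind Django's cache framework (keys and structures made up): the uniform API covers get/set, but anything Redis-specific means reaching past the abstraction for the native client anyway.

```python
# inside a configured Django project: the portable cache API
from django.core.cache import cache

cache.set("greeting", "hello", timeout=60)
value = cache.get("greeting")

# the escape hatch: a sorted-set leaderboard isn't part of the cache API,
# so you end up talking to redis-py directly
import redis

r = redis.Redis(host="localhost", port=6379)
r.zadd("leaderboard", {"alice": 42, "bob": 17})
top = r.zrevrange("leaderboard", 0, 9, withscores=True)
```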
I'm having a hard time imagining a case where you'd be worse off with Django (compared to whatever alternative you may have chosen) in the case where the happy path for the thing you're trying to do doesn't exist natively in Django. Either way you're still farming out that capability to custom code or a 3rd party library.
I guess if you write a lot of custom code into specific hooks that Django offers or use inheritance heavily it can start to hurt. But at the end of the day, it's just python code and you don't have to use abstractions that hurt you.
I don't agree with this cache take. Adding operations to the cache is easy. Taking the django-redis project as an example, there are only two levels until you reach redis-py: the cache abstraction and the client abstraction.
"dependencies that have deprecation warnings even in their latest versions"
Could you be more specific? Don't get me wrong, I'm well aware that npm dependency graph mgmt is a PITA, but I'm curious where you ran into a wall with Node.
You can just run part of Django, so the downside of it being "massive" is really just the size of the library sitting there on disk, which is really not a big deal in most situations.
As far as going with what you know vs choosing the best tool for the job, that can be a bit of a balancing act. I generally believe that you should go with what the team knows if it is good enough, but you need to be willing to change your mind when it is no longer good enough.
How would you compare this with Spring?
Spring is massive as well, but since Java is compiled, its baggage is less noticeable.
I worked at a mid-size startup that was still running on Python 2.7 and Django for their REST APIs, as late as 2022. It was pretty meh and felt like traveling back in time 10 years.
Python 2.7 was released in 2010. Of course using it in 2022 felt like travelling back in time ten years‽
2.7 was end-of-life in 2020! And Python 3 actually predates 2.7 by a couple of years.
A company using 2.7 in 2022 is an indicator that the company as a whole doesn't really prioritize IT, or at least the project the OP worked on. By 2017 or so, it should have been clear that whatever dependencies they were waiting on originally were not going to receive updates to support python3 and alternative arrangements should be made.
You captured the fundamental issues. There were mountains of technical debt. I recall encountering a dependency that had not been updated in over 10 years.
We have VB deployments that haven't been changed at all in about that long. Finally got approval to do a rewrite last year, which is python 3.6 due to other dependencies we can't upgrade yet.
It got this bad because the whole thing "just worked" in the background without issues. "Don't fix what isn't broken" was the business viewpoint.
python2 will have that effect tbh
I probably would have pushed for Hono as the underlying framework... That said, I've been a fan of Koa/Oak over Express for a very long time. For API usage, the swagger+zod integration is pretty decent, though it changes the typical patterns a bit.
All-in, there's no single silver bullet to solving a given issue. Python has a lot of ecosystem around it in terms of integrations that you may or may not need that might be harder with JS. It really just depends.
Glad your migration/switch went relatively smoothly all the same.
It depends on your use case. Exactly. If you’re building big data intensive pipelines with lots of array manipulation or matrix multiplications you know what will shine. Building user facing APIs, use something with types and solid async.
Matching your latter definition, I'd be inclined to go with Rust or C#... that said, you can go a long way with TS in Node/Deno/Bun/Cloudflare/Vercel, etc.
So basically you just rewrote boilerplate code with the complexity of a "hello world", plus deploy scripts. Without any dependencies, data migrations, real user data, or a downtime SLA. And after that you had time to write quite a long article.
What honest reaction do you expect from readers?
I have no idea how you reached this conclusion from the article that I read.
I'm actually building an app on the side and went the other way around on this. Migrating from Typescript back to Python. Granted, my gripes were more with NextJS rather than Node or Typescript.
Using Django was so intuitive, although the nomenclature could be a bit better. What took me days of battling with NextJS was literally done in an hour with Django (admin management on the backend). While Django still looks and feels a bit antiquated, at least it worked! Meanwhile I lost the entirety of the past weekend (or rather quite a bit of it) trying to fight my code and clean things up in NextJS, because somehow the recommended approach for most things is mixing up your frontend and backend, with a complete disregard for separation of concerns.
My new stack from now on will likely be NextJS for the frontend alone, Django for the CRUD and authentication backend, Supabase Edge Functions for serverless, and FastAPI if needed for intensive AI APIs. Open to suggestions, ideas and opinions though.
I do a lot of glueware and semi-embedded stuff with Python... but my go-to these days for anything networky is Elixir (LiveView if there's a UX). If I need an event loop, or async that is more than a patched-on keyword, it just rocks. It is amazing to me how much Elixir does not have, and yet how capably it solves so many problems that other languages have had to add support for.
"we almost quit multiple times"
It was a three day small task?
This didn't make sense to me either. If it only took three days for a complete rewrite to another language, what's the problem? Did I read that they were getting interrupted by user requests? Felt weird.
TL;DR
>I'll preface this by saying that neither of us has a lot of experience writing Python async code
> I'm actually really interested in spending proper time in becoming more knowledgeable with Python async, but in our context you a) lose precious time that you need to use to ship as an early-stage startup and b) can shoot yourself in the foot very easily in the process.
The best advice for a start-up is to use the tools that you know best. And sometimes that's not the best tool for the job. Let's say you need to build a CLI. It's very likely that Go is the best tool for the job, but if you're a great Python programmer, then just do it in Python.
This is clearly a case where the author was not very good with Python, since they used Django instead of FastAPI, which would have been the right tool for the job. And then they wrote a blog post about Python being bad, when it's actually about Django. So yeah, they should have started with Node from day one.
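If FastAPI really was the right tool, the async version the author skipped is fairly small; a minimal sketch with an invented endpoint, model, and upstream URL:

```python
from fastapi import FastAPI
from pydantic import BaseModel
import httpx

app = FastAPI()

class Answer(BaseModel):
    text: str

@app.post("/ask", response_model=Answer)
async def ask(prompt: str) -> Answer:
    # awaiting the upstream call keeps the worker free for other requests
    async with httpx.AsyncClient(timeout=30) as client:
        r = await client.post("https://api.example.com/generate", json={"prompt": prompt})
    return Answer(text=r.json()["text"])
```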
Actually after having written a complex CLI in Node, I think I would've been better off learning Go and writing in that.
Sometimes tools are worth learning!
They were good but no experience with python async
I would have picked Hono and Drizzle. In part because of the great TS support but also Hono is much faster than Express and supports Zod for validation out of the box. This stack would also allow to use any other runtime (Deno, Bun, or Cloudflare Workers).
Given they used TS and performance was a concern I would also question the decision to use Node. Deno or Bun have great TS support and better performance.
Have you tried Elysia (https://elysiajs.com/)? Admittedly I'm not using it at scale, but it's quite pleasant.
I checked it out and it looks good on paper but it only runs on Bun.
Don't get me wrong, I use Bun and I'm happy with it, but it's still young. With Hono/Drizzle/Zod I can always switch back to Node or Deno if necessary.
I've written tens of thousands of lines of Elysia+Kysely and it's a match made in heaven.
I'm sure you know what you're taking about -- yet, your response reminds me of https://youtu.be/aWfYxg-Ypm4
"drizzle works on the edge"
I recently moved a classic HTTP API server from Express to Hono (through the Hono node-server package), absolutely seamless migration.
I wouldn't call it seamless, having also done this recently (the handler function signature is different), but it is relatively straightforward without major changes to the code.
Made pretty much the same comment: Hono + Zod + Swagger is pretty nice all around. Not to mention the portability across different runtime environments. I also enjoy Deno a lot; it's become my main shell scripting tool.
I think it makes sense to start with node.js... it's the standard and widely supported. Eventually it should not be too difficult to switch to bun or deno if the need arises.
I've had a really pleasant experience with Drizzle as an ORM. It feels straightforward compared to some of the incredibly bloated alternatives.
I'm more a fan of just a sql template string handler... in C#/.Net I rely on Dapper... for node, I like being able to do things like...
Note: This is not SQL injection; the query is a string template handler that creates a parameterized query and returns the results asynchronously. There are adapters for most DBs, or it's easy enough to write one in a couple dozen lines of code or less.
Sure, looks good. I often do templating.
However, Drizzle makes it very straightforward to handle DB migration/versioning, so I like it a lot for that.
With something like EF Core in .NET or Drizzle in TS you get a lot of help from your editor that you wouldn't get when writing SQL.
In TS, I can still create a type for my result... const results : Promise<Foo[]> = ...
I'm not sure what additional help you're getting. I'm just not a fan of ORMs as they tend to have hard edges in practice.
ORMs not only help with the result of the query but also when writing queries. When I wrote SQL I was constantly checking table names, columns, and enums. With a good ORM like EF Core, not only do you get autocomplete, type checking, etc., but dealing with relationships is much less tedious than with SQL. You can read or insert deeply nested entities very easily.
Obviously ORMs and query builders won't solve 100% of your queries, but they will solve probably 90%+ with much better DX.
For years I used to be in the SQL-only camp but my productivity has increased substantially since I tried EF for C# and Drizzle for TS.
VS Code plugs into my DB just fine for writing SQL queries...
With an ORM, you can also over-query deeply nested related entities very easily... worse, you can then shove a 100 MB+ JSON payload to the web client to use a fraction of it.
That's just nonsense. It's trivial to make efficient projected queries with ORMs like EF. Nothing stops you doing stupid things with plain SQL either.
No, but it does put you closer to the actual database and makes you think about what you're actually writing. You also aren't adding unnecessary latency and overhead to every query.
Better DX is not unnecessary.
Also the overhead of good ORMs is pretty minimal and won't make a difference in the vast majority of cases. If you find a bottleneck you can always use SQL.
Bit of a plug but I just started working on a drizzle-esque ORM[1] for Python a few days ago and it seems somewhat appropriate for this thread. Curious whether anyone thinks this is a worthwhile effort and/or a good starting point syntax-wise.
https://github.com/carderne/embar
If we ignore the ML/AI/array libs, where Python shines, the core development has really done nothing much for it since 3.0.
Despite MS, Guido and co throwing their weight behind it, there's still none of the somewhat-promised 5x speedup across the board (more like 1.5x at best), the async story is still a mess (see TFA), the multiple-interpreters/GIL-less work is too little, too late, the ecosystem still hasn't settled on a single dependency and venv manager (just make uv standard and be done with it), types are a ham-fisted experience, and so on, and so forth...
This is a lot more about Django than it is about Python.
I don't know a ton about either, but now I am curious whether I should take away the idea that async with Python is problematic, or whether only async with Django is the issue.
Async with Python is problematic, but this article doesn't really explain that. Django async being bad is one of many symptoms of Python async being even worse in the past than it is today. Sometimes people claim it's fixed, which ignores a decade-plus of momentum behind messy workarounds; also, it's still bad.
I'm curious about who else is using MikroORM. I see a lot of hype around Prisma, Drizzle, and Kysely but MikroORM has always looked interesting.
I'm using it for a hobby project, and pretty pleased.
My personal, maybe somewhat "stubborn old man" opinion is that no Node.js ORM is truly production quality, but if I were to consider one I think I would start with it. Be aware it has only one (very talented) maintainer, as far as I recall.
Everyone's definition of "production quality" is different :-), but Joist is a "mikro-ish" (more so ActiveRecord-ish) ORM that has a few killer features:
https://joist-orm.io/
Always happy to hear feedback/issues if anyone here would like to try it out. Thanks!
> Python async sucks
I always find this line of thought strange. It's as if the entire team hinges their technical decision on a single framework, when in reality it's relatively easy to overcome this level of difficulties. This reminds me of the Uber blunder - the same engineer/team switched Uber's database from MySQL to Postgres and then from Postgres to MySQL a few years later, both times claiming that the replaced DB "does not scale" or "sucks". In reality, though, both systems can work very well, and truth be told, Uber's scale was not large enough for either db to show the difference.
The grass is always greener, because of greenshift, a phenomenon where the light dilates due to the universe expansion dilation of spacetime.
>We did this so we can scale.
>Python async sucks
Python async may make certain types of IO-blocked tasks simpler, but it is not going to scale a web app. Now maybe this isn't a web app, I can't really tell. But this is not going to scale to a cluster of machines.
You need to use a distributed task queue like celery.
Is this 2015?
Plenty of people are still using Python, Java, or even C++ for this sort of thing, even in new codebases, for no reason other than familiarity
We did the same for our app as well. I wrote a little library to make it as simple as FastAPI to generate swagger specs - you can try it out - https://github.com/sleeksky-dev/alt-swagger .
We use FastAPI for our newer stuff. It's nice, but unless you really need async I think you can get further quicker with something like Flask.
I really wish the dev would extract the dependency injection portion of the project and flesh it out a bit. There are a lot of rough edges in there.
Async django is not for the faint of heart, but it's definitely possible in 2025.
I recently wrote about issues debugging this stack[1], but now I feel very comfortable operating async-first.
[1] https://blendingbits.io/p/i-used-claude-code-to-debug-a-nigh...
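For anyone wondering what "async-first" Django looks like, a rough sketch (the model and upstream URL are invented); async views have been supported since Django 3.1 and async ORM methods like aget since 4.1:

```python
# views.py - a native async view, served under ASGI
import httpx
from django.http import JsonResponse

from myapp.models import User  # hypothetical model

async def enrich(request, user_id: int):
    user = await User.objects.aget(pk=user_id)     # async ORM call (Django 4.1+)
    async with httpx.AsyncClient() as client:      # async outbound IO
        r = await client.get(f"https://api.example.com/profiles/{user_id}")
    return JsonResponse({"name": user.name, "profile": r.json()})
```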
Doing zero upfront research or planning and then bragging about it in public like this is pretty suspect, but I guess more to the point, glorifying "the pivot" like this is out of style anyway. You're now supposed to insist that whatever happened was the plan all along.
Looks like last week was coding week, current one is marketing week.
It is true that Python's typing and async support feels like someone adding an extension to a house that was built 50 years ago.
After having used it two weeks ago for the first time: it feels as though async support in Python is basically a completely parallel standard library that uses the same Python syntax with extra keywords all over the place. It's as if building-code compliance required your 50-year-old house to be updated to have a wider staircase with deeper steps, but you wanted to do so without affecting the existing stairs, so now you just have two staircases which are a little bit different, and it feels like it takes up space unnecessarily.
I had to look for async versions of most of what I did (e.g. executing external binaries) and use those instead of existing functions or functionality, meaning it was a lot of googling "python subprocess async" or "python http request async".
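For instance, running an external binary ends up looking roughly like this (a generic sketch, not the commenter's actual script):

```python
import asyncio

async def run(cmd: list[str]) -> bytes:
    # asyncio's own subprocess API stands in for subprocess.run in async code
    proc = await asyncio.create_subprocess_exec(
        *cmd, stdout=asyncio.subprocess.PIPE, stderr=asyncio.subprocess.PIPE
    )
    stdout, stderr = await proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(stderr.decode())
    return stdout

# asyncio.run(run(["uname", "-a"]))
```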
If there were going to be some kind of Python 4.x in the future, I'd want some sort of inherent, goroutine-esque way of throwing tasks into the ether and then waiting on them if you wanted to. Let people writing code mark functions as "async'able", have Python validate that async'able code isn't calling non-async'able code, and then if you're not in an async runloop then just block on everything instead (as normal).
If I could take today's explicitly awaited code and replace it with code where the runtime automatically awaits the result when I try to access it (if it's not complete yet), it would save me thousands of lines of code over the rest of my career trying to parallelize things in cumbersome explicit ways. Perhaps provide separate "async" runners that could handle things, if for example you do explicitly want things running in separate processes, threads, interpreters, etc., so you can set a default async runner, use a context manager, or explicitly call threadpool.task(async get_image(imagename)). Man, what a world that would be.
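For contrast, the cumbersome explicit version being wished away looks something like this today (get_image is the commenter's hypothetical, so treat this as a sketch of the "before"):

```python
import asyncio

async def get_image(name: str) -> bytes:
    await asyncio.sleep(0)  # stand-in for an actual async download
    return b""

async def main(names: list[str]) -> list[bytes]:
    # today: explicitly create tasks, then explicitly await them all
    tasks = [asyncio.create_task(get_image(n)) for n in names]
    return await asyncio.gather(*tasks)

# the wish: images = [get_image(n) for n in names], with the runtime
# awaiting each result transparently the first time it's touched
```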
Not sure if this is a tongue in cheek post referring to how many modern languages already work like this? TS, Kotlin, C#, Java 29+, ...
Might as well just implement virtual threads: https://discuss.python.org/t/add-virtual-threads-to-python/9...
If scale really mattered they should have rewriten it in Go, Java, C#, Rust....
Or if feeling fancy, Erlang, Elixir.
Do not rewrite in one shot; build small microservices in Node and migrate away from Django step by step.
That's good. Should've ditched the ORM too.
Does this problem exist with fastapi as well?
When they start moving away from API calls to third parties and toward their own embeddings or AI, they're in for a bad time.
What's going to end up happening is they'll then create another backend for the AI stuff that uses Python, and then have to deal with multiple backend languages.
They should have just bitten the bullet and learned proper async in FastAPI like they mentioned.
I won’t even get started on their love of ORMs.
Node.js excels for me at flow control:
there's Effect-TS if you need app-level control
there's caolan's async if you need series and parallel control
there's RxJS if you need observables
On web frameworks, Hono seems nice too. If you need performance, there's uWebSockets.js, which beats all other web frameworks in HTTP and WebSocket benchmarks.
For type safety, aside from TypeScript there's ArkType, Zod, Valibot, etc.
IMHO it would make more sense to rewrite in Go, with its stronger, production-ready standard library.
I was about to migrate a legacy system written in Python/Flask to FastAPI and React (frontend). But the sentiment here seems to suggest that FastAPI is not the best solution if I need async? So go with Next.js?
Do we still need ORMs in the age of AI-assisted coding?
I started ripping them out of a java system even before that.
We're on the same wavelength; I have decades of ORM experience. It was the first thing I would do in any project. Now it can just be vanilla JDBC with tons of duplicated boilerplate, at least in the early stages.
Who is the audience for a post like this? Presumably HN, since the author invoked PG.
But who is "we rewrote our stack on week 1 due to hypothetical scaling issues" supposed to impress? Not software professionals. Not savvy investors. Potential junior hires?
Probably people who write too little code but read too many blog posts
This feels like an article from 2013.
Good decision; judging by their general level of impatience with things, they would have hated my ORM :).
Also I think the node approach is probably still more performant than FastAPI but that's just a hunch.
Hopefully they won't have security issues because someone hijacked the node package that sets the font color to blue or passes the butter or something.
Thank you for that comment, Mike :-) I was looking for that type of response here :-). As expressed many times: thank you for your work.
Passes the butter is a euphemism for churn?
It's from Rick and Morty; it was referenced here last week in a post about robots that pass butter.
It sounds like they are just saying they should have used the tool they were most familiar with on day 1.
They did. That's why they used Django.
Python async is just a phase you grow out of before going back to MPSC, where you don't see (function) color and the stack starts to make sense again.
djanko (sic) out fastapi in
I had a Python script I was writing that basically just needed to run the same shell command 40 times (to clone images from X to Y), and a lot of the time was spent making the request and waiting for the data to be generated, so I figured I'd parallelize it.
Normally I do this either through multiprocessing or concurrent.futures, but I figured this was a pretty simple use case for async - a few simple functions, nothing complex, just an inner loop that I wanted to async and then wait for.
Turns out Python has a built in solution for this called a TaskGroup. You create a TaskGroup object, use it as a context manager, and pass it a bunch of async tasks. The TaskGroup context manager exits when all the tasks are complete, so it becomes a great way to spawn a bunch of arbitrary work and then wait for it all to complete.
It was a huge time saver right up until I realized that - surprise! - it wasn't waiting for them to complete in any way, shape, or form. It was starting the tasks and then immediately exiting the context manager. Despite (as far as I could tell) copying the example code exactly, and the context manager supposedly doing exactly what I wanted, I then had to take the list of tasks I'd created and manually await them one by one anyway, then validate that their results existed. Otherwise Python was spawning 40 external processes, processing the "results" (which amounted to about three incomplete image downloads), and calling it a day.
I hate writing code in golang and I have to google every single thing I ever do in it, but with golang, goroutines, and a single WaitGroup, I could have had the same thing written in twenty minutes instead of the three hours it took me to write and debug the Python version.
So yeah, technically I got it working eventually but realistically it made concurrency ten times worse and more complicated than any other possible approach in Python or golang could have been. I cannot imagine recommending async Python to anyone after this just on the basis of this one gotcha that I still haven't figured out.
Why don't you post the original broken Python code?
I mean I know it sounds snarky but it just sounds like you weren't awaiting the tasks properly
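For reference, the documented TaskGroup pattern (Python 3.11+) really does hold the `async with` block open until every task created through the group has finished; a minimal sketch with invented names. The usual footguns are creating bare coroutines without tg.create_task, or the surrounding coroutine never being awaited at all.

```python
import asyncio

async def clone_image(name: str) -> str:
    proc = await asyncio.create_subprocess_exec("cp", f"/src/{name}", f"/dst/{name}")
    await proc.wait()
    return name

async def main(names: list[str]) -> list[str]:
    async with asyncio.TaskGroup() as tg:   # exits only once all tasks are done
        tasks = [tg.create_task(clone_image(n)) for n in names]
    # here every task has completed (or the group raised), so results are safe
    return [t.result() for t in tasks]

# asyncio.run(main(["a.img", "b.img"]))
```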
Do yourself a favor and use Elixir. Elixir has almost the same top libraries from Python you need to work with AI. As a matter of fact, the Elixir versions are far less fragile and more reliable in production use cases. I documented my journey of writing an AI app using Elixir and listed out the top libraries you can use, especially if you're coming from Python:
https://medium.com/creativefoundry/i-tried-to-build-an-ai-pr...