Show HN: SnapQL – Desktop app to query Postgres with AI
github.com

SnapQL is an open-source desktop app (built with Electron) that lets you query your Postgres database using natural language. It's schema-aware, so you don't need to copy-paste your schema or write complex SQL by hand.
Everything runs locally — your OpenAI API key, your data, and your queries — so it's secure and private. Just connect your DB, describe what you want, and SnapQL writes and runs the SQL for you.
This is nice -- we're heavy users of postgresql and haven't found the right tool here yet.
I could see this being incredible if it had a set of performance-related queries, or ran EXPLAIN ANALYZE and offered some interpreted results.
Can this be run fully locally with a local LLM?
Just opened a PR for local LLM support: https://github.com/NickTikhonov/snap-ql/pull/11
Merged! Thanks Stephan
Thank you for the feedback. Please feel free to raise some issues on the repo and we can jam this out there
Wow, this looks really cool.
What is the current support for OpenAI proxy or non-GPT models?
For example, using locally deployed Qwen models or LLaMA models.
I might test this out, but I worry that it suffers from the same problems that I ran into the last time I played with LLMs writing queries. Specifically, not understanding your schema. It might understand relations, but most production tables have oddly named columns, potentially columns that changed function over time, potentially deprecated columns, internal-lingo columns, and the list goes on.
Granted, I was using 3.5 at the time, but even with heavy prompting and trying to explain what certain tables/columns are used for, feeding it the schema, and feeding it sample rows, more often than not it produced garbage. Maybe 4o/o3/Claude4/etc can do better now, but I’m still skeptical.
I think this is the Achilles heel of LLM-based AI: the attention mechanisms are far, far inferior to a human's, and I haven't seen much progress here. I regularly test models by feeding in a 20-30 minute transcript of a podcast and asking them to state the key points.
This is not a lot of text, maybe 5 pages. I then skim it myself in about 2-3 minutes and I write down what I would consider the key points. I compare the results and I find the AI usually (over 50% of the time) misses 1 or more points that I would consider key.
I encourage everyone to reproduce this test just to see how well current AI works for this use case.
For me, AI can't adequately do one of the first things that people claim it does really well (summarization). I'll keep testing; maybe someday it will be satisfactory at this, but I think this is a basic flaw in the attention mechanism that will not be solved by throwing more data and more GPUs at the problem.
> I encourage everyone to reproduce this test just to see how well current AI works for this use case.
I do this regularly and find it very enlightening. After I’ve read a news article or done my own research on a topic I’ll ask ChatGPT to do the same.
You have to be careful when reading its response not to grade on a curve; read it as if you didn't do the research and don't know the background. I find myself saying "I can see why it might be confused into thinking X, but it doesn't change the fact that it was wrong/misleading".
I do like it when LLMs cite their sources, mostly because I find out they're wrong. Many times I've read a summary, then followed it to the source, read the entire source, and realized it says nothing of the sort. But almost always, I can see where it glued together pieces of the source, incorrectly.
A great micro example of this are the Apple Siri summaries for notifications. Every time they mess up hilariously I can see exactly how they got there. But it’s also a mistake that no human would ever make.
[dead]
This is not a difficult problem to solve. We can add the schema, columns, and column descriptions to the system prompt. That can significantly improve performance.
All it will take is a form where the user supplies details about each column and relation. For some reason, most LLM-based apps don't add this simple feature.
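For example (table and column names invented for illustration), the descriptions could even live in the database itself, where a schema-aware tool can read them out of the catalog and paste them into the system prompt:

    -- Hypothetical table: comments like these are readable via the catalog
    -- (obj_description / col_description) and can be fed to the model.
    COMMENT ON TABLE orders IS 'One row per customer order, including cancelled orders';
    COMMENT ON COLUMN orders.stat IS 'Order status: 0 = draft, 1 = placed, 2 = shipped, 9 = cancelled';
    COMMENT ON COLUMN orders.amt_c IS 'Order total in cents, not dollars';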
It's not a difficult problem to solve; I did it last year, with 3.5, and it didn't help. That's not to say that newer models wouldn't do better, but I have tried this approach. It is a difficult problem to actually get working.
So, I have not tried it on a very complex database myself, so I can't comment on how well it will work in production systems. I have tried this approach with a single BigQuery table and it worked pretty well for my toy example.
If by 3.5 you mean ChatGPT 3.5 you should absolutely try it with newer models, there is a huge difference in capabilities.
Yes, ChatGPT 3.5, this testing was a while back. I’m sure it has improved but I doubt it’s solid enough for me to trust.
Example/clean/demo datasets it does very well on. Incredibly impressive even. But on real world schema/data for an app developed over many years, it struggled. Even when I could finally prompt my way into getting it to work for 1 type of query, my others would randomly break.
It would have been easier to just provide tools for hard-coded queries if I wanted to expose a chat interface to the data.
[dead]
I got better results with Claude Code + a PostgreSQL MCP. I let Claude understand my Drizzle schema first, and I can instruct it to also look at the usage of some entities in the code. Then it is smarter about understanding what the data represents.
Might be possible to solve this with prompt configuration, e.g. you'd be able to explain to the LLM all the weird naming conventions and unintuitive mappings.
I did that the last time (again, only with 3.5, things have hopefully improved in this area).
And I could potentially see LLMs being useful to generate the “bones” of a query for me but I’d never expose it to end-users (which was what I was playing with). So instead of letting my users do something like “What were my sales for last month?” I could use LLMs to help build queries that were hardcoded for various reports.
The problem is that I know SQL, I'm pretty good at it, and I have a perfect understanding of my company's schema. I might ask an LLM a generic SQL question, but trying to feed it my schema just leads to (or rather "led to" in my trials before) prompt hell. I spent hours tweaking the prompts, feeding it more context, begging it to ignore the "cash" column that has been deprecated for 4+ years, etc. After all of that it still would make simple mistakes that I had specifically warned against.
Looks useful! And the system prompt didn't require too much finessing. I wonder how it would work with later models than GPT-4o, as in my own dabbling GPT-4o wasn't quite there yet, and the latest models are getting really good.
For analytical purposes, this text-to-SQL is the future; it's already huge with Snowflake (https://www.snowflake.com/en/engineering-blog/cortex-analyst...).
Appreciate the input! I'd love to be able to support more models. That's one of the issues in the repo right now. And I'd be more than happy to welcome contributions to add this and other features
Would love to contribute. I have made a fork, will try and raise a PR if contributions are welcome.
Question: how are you testing this? Doing it on dummy data is a bit too easy. These models, even 4o, falter when it comes to something really specific to a domain (like I work with supply chain data and other column names specific to the work that I do, that only make sense to me and my team but wouldn't make any sense to an LLM unless it somehow knows what those columns are).
I'm using my own production databases at the moment. But it might be quite nice to be able to generate complex databases with dummy data in order to test the prompts at the higher levels of complexity!
And thank you for offering to contribute. I'll be very active on GitHub!
Can you please add support for adding descriptions of each column and enumerated types?
For example, if a column contains 0 or 1 encoding the absence or presence of something, LLMs need to know what 0 and 1 stand for. The same goes for column names, because they can be cryptic in production databases.
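Something along these lines would give the model what it needs (hypothetical names, just a sketch):

    -- 0/1 flag: the meaning of the values has to be spelled out somewhere the model can see it
    COMMENT ON COLUMN subscriptions.auto_renew IS '1 = renews automatically, 0 = expires at period end';

    -- Or use a real enumerated type so the values are self-describing
    CREATE TYPE subscription_state AS ENUM ('trial', 'active', 'past_due', 'cancelled');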
Genuinely do not understand the point of these tools. There is already a practically natural language to query RDBMS; it’s called SQL. I guarantee you, anyone who knows any other language could learn enough SQL to do 99% of what they wanted in a couple of hours. Give it a day of intensive study, and you’d know the rest. It’s just not that complicated.
SQL is simple for simple needs: basic joins and some basic aggregates. Even that you won't learn in 2 hours, and it's just scratching the surface of what can be done in SQL and what you need to query. With LLMs and tools like this you simply say what you need in English; you don't need to understand the normalizations, m:n relation tables, CTEs, functions, JSON access operators, etc.
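Even a fairly ordinary request grows past the basics quickly; as a sketch with invented table names:

    -- Products tagged via an m:n join table, filtered on a JSONB attribute
    SELECT p.name, count(*) AS order_count
    FROM products p
    JOIN product_tags pt ON pt.product_id = p.id
    JOIN tags t ON t.id = pt.tag_id
    JOIN order_items oi ON oi.product_id = p.id
    WHERE t.slug = 'clearance'
      AND p.attributes ->> 'color' = 'red'
    GROUP BY p.name
    ORDER BY order_count DESC;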
For reference, I’m a DBRE. IMO, yes, most people can learn basic joins and aggregates in a couple of hours, but that is subjective.
> you don’t need to understand the normalizations
You definitely should. Normalizing isn’t that difficult of a concept, Wikipedia has terrific descriptions of each level.
As to the rest, maybe read docs? This is my primary frustration with LLMs in general: people seem to believe that they’re just as good of developers as someone who has read the source documentation, because a robot told them the answer. If you don’t understand what you’re doing, you cannot possibly understand the implications and trade-offs.
Thank goodness 99% don’t want to understand everything. Otherwise, you wouldn’t be paid very well at your job, right?
Without having looked at it, I would assume the value comes from not having to know the data model in great detail, such that you can phrase your query using natural language, like
"Give me all the back office account postings for payment transfers of CCP cleared IRD trades which settled yesterday with a payment amount over 1M having a value date in two days"
That's what I'd like to be able to say and get an accurate response.
In a business, a management decision maker has to rely on a DB analyst if a query they have cannot be answered by any front-end tool they have been given, and that introduces latency to the process.
A 100% accurate AI-powered solution would have many customers.
But can this generation of LLMs produce 100% accuracy?
And yet this was on the front page of Hacker News for an entire day :D
It's all about friction. Why spend minutes writing a query when you can spend 5 seconds speaking the result you want and get 90-100% of the way there?
Mostly because you don’t know if it’s correct unless you know SQL. It’s entirely too easy to get results that look correct but aren’t, especially when using windowing functions and the like.
But honestly, most queries I’ve ever seen are just simple joins, which shouldn’t take you 5 minutes to write.
> Mostly because you don’t know if it’s correct unless you know SQL. It’s entirely too easy to get results that look correct but aren’t ...
This is the fundamental problem when attempting to use "GenAI" to make program code, SQL or otherwise. All one would have to do is substitute SQL with language/library of choice above and it would be just as applicable.
Fully agree, I just harp on SQL because a. It’s my niche b. It always seems to be a “you can know this, but it doesn’t really matter” thing even for people who regularly interact with RDBMS, and it drives me bonkers.
> most queries I’ve ever seen are just simple joins
Good for you. Some of us deal with more complex queries, even if it may not seem so from the outside. For example, getting hierarchical data based on parent_id while having non-trivial conditions for the parents and the children, or product search queries which need to use trigram functions with some ranking, depending on product availability across stores and user preferences.
I agree knowing SQL is still useful, but more for double checking the queries from LLMs than for trying to build queries yourself.
> getting hierarchical data based on parent_id
So, an adjacency list (probably, though there are many alternatives, which are usually better). That’s not complex, that’s a self-join.
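With made-up names, the shape is roughly:

    -- Top-level rows with their direct children via a single self-join
    SELECT parent.id, parent.name, child.id AS child_id, child.name AS child_name
    FROM categories parent
    LEFT JOIN categories child ON child.parent_id = parent.id
    WHERE parent.parent_id IS NULL;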
> trigram functions
That’s an indexing decision, not a query. It’s also usually a waste: if you’re doing something like looking up a user by email or name, and you don’t want case sensitivity to wreck your plan, then use a case-insensitive collation for that column.
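For instance with pg_trgm (table and column names invented), the fuzzy matching lives in the index and the query itself stays plain:

    CREATE EXTENSION IF NOT EXISTS pg_trgm;
    CREATE INDEX products_name_trgm_idx ON products USING gin (name gin_trgm_ops);

    SELECT id, name
    FROM products
    WHERE name % 'blu tooth speker'                      -- pg_trgm similarity operator
    ORDER BY similarity(name, 'blu tooth speker') DESC
    LIMIT 10;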
> I agree knowing SQL is still useful, but more for double checking the queries from LLMs
“I agree knowing Python / TypeScript / Golang is still useful, but more for double checking the queries from LLMs.” This sounds utterly absurd, because it is. Why SQL is seen as a nice-to-have instead of its reality - the beating heart of every company - is beyond me.
Your Python / TypeScript etc. argument is a strawman; that's why it sounds absurd. Your arguments would hold better if the average person were good and very quick at learning and memorizing complex new things. I don't know if you work with people like that, but that's definitely not the norm. Even developers know little SQL unless it's their specific focus.
In the original comment you said:
> I guarantee you, anyone who knows any other language could learn enough SQL to do 99% of what they wanted in a couple of hours. Give it a day of intensive study, and you’d know the rest. It’s just not that complicated.
Well, your "guarantee" does not hold up. Where I live, every college-level developer went through multiple semesters of database courses, and yet I don't see these people proficient in SQL. In a couple of hours? 99% of what they need? Absurd.
It's not a strawman, it's reductio ad absurdum. SQL and Python are both languages that are commonly used. It would be (currently; who knows in a few years) laughable if someone said they didn't need to deeply understand Python to be able to correctly write Python at an employable level, modulo experience levels - I don't expect a Junior to know the vagaries of the language, e.g. that bools are aliased to integers.
> Even developers know little SQL unless it's their specific focus.
Yes, and I believe this to be deeply problematic. We don't generally allow people to use a language they don't understand in production, except for SQL.
> Where I live, every college level developer went through multiple semesters of database courses and yet I don't see these people proficient in SQL.
That's horrifying.
Look, while I would love it if everyone writing SQL knew relational algebra, basic set theory, and the ins and outs of their specific RDBMS implementation, I think the below suffices for the majority of work in web dev:
You're telling me that given a simple educational schema like Northwind Traders, and the documentation for their RDBMS, someone who already knows a programming language couldn't use the above to figure it out in fairly short order?

You made an important assumption there:
> someone who already knows a programming language
I'm sure someone who can already code can write a simple query. But my argument is:
1. With AI assistance, a programmer would be quicker, with less friction, more productive, and enabled to make queries beyond their current abilities.
2. With AI assistance, a non-programmer would be enabled to use SQL at all.
3. Real-world queries are often not trivial (today's developers have simple queries covered by ORM / query-building tools).
Regarding real-life queries: I look at the last query I crafted with difficulty and AI help - it starts with `WITH RECURSIVE` and uses `UNION ALL`, `GROUP_CONCAT`, `COALESCE` (even with a SELECT statement inside), and multiple CTEs. It would take me hours to get to that. I can have it in minutes with AI help. I don't even mention different dialects, feature support, arrays and JSONs, extensions, etc.
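For a rough idea of the shape (made-up names, Postgres dialect, with string_agg standing in for GROUP_CONCAT):

    WITH RECURSIVE tree AS (
        SELECT id, parent_id, name, name::text AS path
        FROM categories
        WHERE parent_id IS NULL
      UNION ALL
        SELECT c.id, c.parent_id, c.name, tree.path || ' > ' || c.name
        FROM categories c
        JOIN tree ON c.parent_id = tree.id
    )
    SELECT COALESCE(parent_id::text, '(root)') AS parent,
           string_agg(name, ', ' ORDER BY name) AS children
    FROM tree
    GROUP BY parent_id;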
I like this a lot. I am looking forward to having something similar built into Metabase.
Great tool!
Pardon my technical ignorance, but what exactly is OpenAI's API being used for in this?
OpenAI LLM is used to generate SQL based on a combination of a user prompt and the database schema.
[dead]
Looks like a good idea. Any reason you didn't use React Native?
Not really - I had some previous experience with Electron and wanted to finish the core feature set in a few hours, so I just went with what I already know.
Are there plans to support other LLM sources, in particular ollama?
Yes! https://github.com/NickTikhonov/snap-ql/issues/1
Awesome, looking forward to trying it with a self-hosted model.
I was looking for something like this that supports graphs.
Graph generation is next on the list.
Neo4j?
What's the underlying model enabling this?
Currently OpenAI's GPT-4o.
So did you train on the whole knowledge base, or fine-tune? Would love to know how you evaluate correctness.
They don't - it's simply a zero-shot text-to-SQL interface. The app's development started 2 days ago.
https://github.com/NickTikhonov/snap-ql/blob/main/src/main/l...
Which MCP is the recommended or "official" one for SQLite and PostgreSQL for use with Cursor?
congrats on the launch! This looks very interesting
Data engineering is about to be eaten by LLMs.
Am I misunderstanding something? How is this "Everything runs locally" if it's talking to OpenAI's APIs?
This app is using OpenAI via the ai package[0][1], so "Everything runs locally" is definitely misleading.
[0]: https://github.com/NickTikhonov/snap-ql/blob/409e937fa330deb...
[1]: https://github.com/vercel/ai
I guess he means there is no proxy between you and OpenAI, the API key won't leak, etc.
What I meant was that it isn't a web app and I don't store your connection strings or query results. I'll make this more clear
It is a web app, though. You just aren't running the server, OpenAI is. And you're packaging the front end in electron instead of chrome to make it feel as if it all runs locally, even though it doesn't.
Side note: I don't see a license anywhere, so technically it isn't open source.
You might not but openai does.
API gateways could accept public keys instead of generating bearer tokens. Then the private key could reside in an HSM, and apps like this could give HSMs requests to sign. IMO even though this could be done in an afternoon, everyone - Apple and Google, the CDN / WAF provider, the service provider - is too addicted to the telemetry.
That makes no sense. OpenAI doesn't know the secret database connection string or any query results. Perhaps you should have read the code before making baseless claims.
But it knows what you're querying, which, depending on what you're doing, may also give away a good bit about what's in the DB.
[dead]
If you can do this, can't you create a read-only user and use it with a database MCP like https://github.com/executeautomation/mcp-database-server ? Am I missing something?
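Something along these lines should do it (role, database, and schema names are placeholders):

    -- Read-only role so the AI-driven client can never modify data
    CREATE ROLE llm_readonly LOGIN PASSWORD 'change-me';
    GRANT CONNECT ON DATABASE mydb TO llm_readonly;
    GRANT USAGE ON SCHEMA public TO llm_readonly;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO llm_readonly;
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO llm_readonly;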
You can set up an MCP and use it in your existing AI app, but AFAIU this is the first open-source standalone app that gives you an interface familiar from other SQL workspace tools. I built it to be a familiar but much more powerful experience for both technical and nontechnical people.
There are competitors with a GUI too, such as https://www.sqlchat.ai/ and https://www.jetbrains.com/datagrip/features/ai/
I wish you luck in refining your differentiation.
Selfless plug, our own tool => https://www.myriade.ai
> I wish you luck in refining your differentiation.

Can't agree more with you. It's about distribution (which Snowflake/Databricks/... have) or differentiation.
Still, chatting with your data is already working and useful for lots.
The first doesn't have good UX and the second isn't open source. SnapQL is both :) But I'll find new ways to differentiate for sure, it's part of the fun of building.
Your project is source-available, not open-source. Consider adding a license.
https://dbeaver.com/docs/dbeaver/AI-Smart-Assistance/
[flagged]
[flagged]
Interesting lead. What else would they be looking for in a tool like this? My bad re the video, I'll make sure not to toggle dark mode in the next one.
awesome work nick, literally been asking for a vibe coding SQL interface for months
thanks Jaimin. happy you finally found what you were looking for :D