throwaway2016a 3 days ago

I find it striking that on the same day I saw a video about how someone "Made an API in 20 minutes with one prompt" and this. The two approaches seem very divergent: one almost cavalier about things like security and standards, the other (almost) over-engineered.

One observation is that there are two schools of thought: using OADs (OpenAPI Descriptions) as the source of truth and generating code from them, or treating the OAD as an artifact produced by other tooling.

I personally see OpenAPI as a kind of glue that allows different tooling to speak the same language.

Overall I found the linked Moonwalk[1] document more interesting, but there is some worthwhile analysis in this article as well.

[1] https://www.openapis.org/blog/2023/12/06/openapi-moonwalk-20...

  • re-thc 2 days ago

    > I find it striking that on the same day I saw a video about how someone "Made an API in 20 minutes with one prompt" and this

    You can also record a blank video on your phone for 20 minutes and call that a movie. Would anyone watch it?

    You can also build a house in days. Would it crack? Is it maintainable? What happens later? Who knows.

    • flessner 2 days ago

      The ethos I have seen around these is usually "It doesn't have to be proper if it isn't making money"

      I think it's a fair attitude if your only goal is to make money, but it completely misses "why" you should build something... if you truly care about a problem you wouldn't approach it haphazardly anyway.

      • madeofpalk 2 days ago

        Maybe I don't truly care about my problem? But I just care a little bit, and I've done the risk analysis.

        I used a whole lot of "ChatGPT just wrote it all for me" for a Rust program that watches for and renames video game clips for me. Maybe it's insecure or has subtle bugs; I don't really care all that much, because it does the job for me.

        • re-thc a day ago

          > Maybe I don't truly care about my problem?

          You pretend not to care until you do. When it accidentally deletes your files or even your whole hard drive, you'll suddenly find someone or something to blame.

      • throwaway2016a 2 days ago

        > I think it's a fair attitude if your only goal is to make money

        Short term, yes. But it's a bit short-sighted, as most of the AI code I have seen has security and scalability issues that, long term, have the potential to blow up in your face, costing even more money.

        Granted, that can usually be fixed by better prompts. But writing those prompts requires the person doing the "prompt engineering" (rolls eyes) to actually have a working knowledge of a lot of areas, such as architecture, security, and software engineering best practices. And a lot of the influencers out there pushing AI openly admit to "not knowing how to code", let alone knowing the right way to build a technology product so that it scales and is safe.

      • re-thc 2 days ago

        > "It doesn't have to be proper if it isn't making money" > I think it's a fair attitude if your only goal is to make money

        Is that why we so often get posts about, e.g., someone racking up a huge bill on AWS or GCP? Or about such-and-such company shutting them down, or whatever else?

        I've seen far too many "temporary" solutions and "quick fixes" that outlive their intended scope and lifetime. Never have such a mindset.

    • throwaway2016a 2 days ago

      To be fair, I wasn't agreeing with the "API in 20 minutes" approach; I was only pointing out the contrast between it and something like this.

      As I tried to allude to, AI-written APIs often have security, performance, maintainability, and a whole slew of other issues.

      But at the same time, I think "blank video on your phone for 20 minutes" is a bit of a stretch. These AI-generated APIs certainly have problems, but they are working software and in many cases better working software than a non-coder or junior engineer could have written in a much longer time.

      And while I don't like the idea of tons of insecure, poorly architected APIs being out there, the reality is, people are using AI-generated APIs in the real world right now; it's not hypothetical.

      • re-thc 2 days ago

        > but they are working software

        What is "working" software?

        Have we lost the meaning of that now too? The Samsung Galaxy Note 7 is a "working phone" too; it just might explode.

        > but they are working software and in many cases better working software than a non-coder or junior engineer could have written in a much longer time.

        Imagine the nurse telling you that you've got an AI doctor operating on you that's better than the junior surgeon. I'm sure you'd be happy. We've been cheapening the industry for a long time. Not everyone needs to produce code.

        > the reality is, people are using AI-generated APIs in the real world right now; it's not hypothetical.

        The reality is that there is contaminated cooking oil [1], there are noodles with opium [2], and an infinite number of other issues. Let's not make the world worse?

        [1]: https://www.abc.net.au/news/2024-07-13/cooking-oil-contamina...

        [2]: https://www.washingtonpost.com/news/morning-mix/wp/2014/09/2...

        • throwaway2016a 2 days ago

          Working means you give it input and it produces the expected output for all your defined use cases. Don't confuse working with good.

          Let's keep your analogy: AI isn't producing software that is the equivalent of an AAA movie by any stretch, but it is producing far better work than a bunch of kids in a garage with their cell phones can. That is orders of magnitude better than 20 minutes of blank video, which means people will use it whether you like it or not.

          Reality doesn't care if you think it is a bad idea... in fact, I think you and I are on the same page; I do think it is a bad idea... but reality will continue to exist whether you and I like it or not.

          You're not helping anyone by arguing how crappy and harmful it is to someone who already knows how crappy and harmful it is.

    • lionkor 2 days ago

      Those make great YouTube video titles.

  • handrews 3 days ago

    Yeah this article is more about how we (the OpenAPI Initiative) are designing the next versions of the OpenAPI Specification than it is about how to use it. The diagram does include both an OAD generator and editor, intended to encompass both code-first and description-first (which doesn't make too much difference for this blog post). The Moonwalk article is definitely more general purpose! This is "OK Moonwalk has a great vision, but how do we actually make it a real spec?" I've been using variations of this diagram in the weekly Moonwalk calls for the past month or two.

    • throwaway2016a 3 days ago

      > OK Moonwalk has a great vision, but how do we actually make it a real spec?

      I'm not sure the article really succeeds, if that was the goal. I suspect some aspects of the ongoing discussion are missing from the article, making it a little difficult for someone who wasn't in those discussions to connect the dots.

      Don't get me wrong, I think the article had some useful pieces in it. I just think that if that was the goal, it could use some additional framing for people who don't have the full context.

      With that said, I really appreciate transparency into the thought process!

      • handrews 3 days ago

        > I just think that if that was the goal, it could use some additional framing for people who don't have the full context.

        It's always a struggle to figure out how much explanation to put in before people see something like "20 minute read" and just refuse to read it. (BTW, I don't mind the critical feedback at all; I'm just glad you found something useful in it.)

        But keep in mind that _we_ haven't answered "how do we actually make it a real spec?" either! This is a snapshot of our efforts at this particular moment. Also, there's a reason that this is "part one in a series" :-)

Sytten 2 days ago

In my experience, the code generators for OpenAPI are rarely as good as what you find in GraphQL. Swagger Codegen / OpenAPI Generator (why the fork...), which is the "standard", is kind of a mess.

In Rust, Oxide had to write their own [1], which is actually decent, but you really need to know it exists.

[1] https://github.com/oxidecomputer/progenitor

  • handrews a day ago

    Part of the problem is that OpenAPI depends on JSON Schema, and JSON Schema is a runtime constraint system, not a data definition system [1]. That shows up in the diagram as the three colors crammed into the "Interface Modeling and Validation" area, because the boundary between those three ends up being very weird. And there's no standard for using JSON Schema as a data definition system... people just started kinda doing stuff.

    I've been advocating for finding a better data definition system for Moonwalk (and yes, that's my name on the post-2016 JSON Schema drafts; it's a great runtime constraint system, and it could still be useful for that purpose alongside a system more tailored to data definition).
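
    As a concrete illustration, here's a schema that's perfectly fine as a set of runtime constraints but maps to no single generated type (a minimal sketch using the Python jsonschema library, not an example from the post):

        import jsonschema  # pip install jsonschema

        # Valid JSON Schema: it constrains instances at runtime, but it
        # doesn't define any one shape of data a code generator could emit.
        schema = {
            "anyOf": [
                {"type": "string", "maxLength": 5},
                {"type": "integer", "not": {"multipleOf": 3}},
            ]
        }

        jsonschema.validate("hello", schema)  # passes (short string)
        jsonschema.validate(4, schema)        # passes (not a multiple of 3)
        jsonschema.validate(9, schema)        # raises ValidationError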

    [1] https://modern-json-schema.com/json-schema-is-a-constraint-s...

  • salomonk_mur 2 days ago

    How is Swagger a mess? It produces perfectly readable documentation and testing mechanisms.

    Barebones? Sure. But far from "a mess".

nilslice 2 days ago

The OpenAPI ecosystem is really impressive. We went through a major analysis of it recently when deciding whether or not to create a new, custom IDL from which to generate binding/glue code for our WebAssembly plugin system.

We knew that OpenAPI is already great at describing the interaction points between a client and server, and this ended up being a perfect fit for the plugin definition too.

Since there is already so much OpenAPI spec out there, I think more people should build tooling based on it. Being able to take types that a server application already knows well and reuse them to interact with client code locally, in-process, rather than over HTTP is pretty remarkable!

zurfer 2 days ago

OpenAPI is great; we use it in combination with redoc (https://github.com/Redocly/redoc) to get almost-zero-effort API docs for our product.

But man, my tokenizer is in trouble with OpenAPI and OpenAI.

  • ljm 2 days ago

    I keep mixing them up now too.

    Realised that writing a consistent API by hand was more effort than it was worth compared to the one-off setup cost of creating a set of basic components and reusing them to generate interfaces. Most of the API layer is just boilerplate, so we can focus more on biz logic instead.

  • handrews a day ago

    For a while, the "P" key on my laptop wouldn't work reliably... which is really annoying when your work is all focused around OpenAPI but everyone else keeps assuming AI :P

nicholasjarnold 2 days ago

Since I don't see it mentioned here and it's complementary information, allow me to mention the OpenAPI Tools website[0]. It lists and categorizes a ton of different tooling options, from SDK generators to automatic testing frameworks. From some personal experience, the tools vary widely in spec support and quality, but the listing itself is a good starting point if you're in a position to evaluate tooling for a team you're on or a company you're working with.

[0] https://openapi.tools/

  • handrews a day ago

    I actually looked at that one, too, and know the person (or one of the people) who started it. Their classification is better than tools.openapis.org's (slightly embarrassing, but we don't have anyone who has volunteered to curate ours beyond minimal PR approvals, so...). Although you still fundamentally see the same problems. And it was more convenient for me to point to the messier list for the purposes of this blog post, anyway!

infocollector 3 days ago

Thanks for writing this! It nicely breaks things down into the boxes that OpenAPI deals with.

I still think OpenAPI usage is a bit confusing in general. For example, I am still waiting for a better explanation of this diagram in relation to a choice of backend (Python WSGI) + frontend (JS) combination. Perhaps someone here has a pointer for me to read?

  • dgellow 3 days ago

    If you haven’t yet, I highly recommend checking out FastAPI for your Python backend: https://fastapi.tiangolo.com/. OpenAPI is a core part of it, making it simple to integrate with other tools such as docs and client generators.
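
    A minimal sketch of what that code-first flow looks like (the route and model are made-up illustrations):

        # pip install fastapi uvicorn
        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI()

        class Item(BaseModel):
            name: str
            price: float

        @app.get("/items/{item_id}", response_model=Item)
        def read_item(item_id: int) -> Item:
            # Stub response; a real app would look the item up.
            return Item(name="example", price=9.99)

    FastAPI serves the generated OpenAPI document at /openapi.json, with interactive docs at /docs (Swagger UI) and /redoc.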

    • BerislavLopac 2 days ago

      Or you can go straight to Starlette [0], which is much more streamlined to develop with. FastAPI is built on it, and while it provides many additional bells and whistles, I find it messes things up significantly.

      [0] https://www.starlette.io/

_fat_santa 2 days ago

General question. I currently have a Node API that I am the solo dev on, and I'm considering bringing it up to OpenAPI spec to get nice docs and such. Wanted to ask others here: for a solo dev, would it be worth it? Or is that sort of thing really geared toward teams where multiple people are using and integrating the API?

  • ljm 2 days ago

    The output will only be as good as what you describe, and if you’re only using it for yourself it might feel like overkill, particularly if you want the spec to be the source of truth and have to rewrite your API to fit a generated interface.

    It may be more useful to you if your API is quite large or reuses a lot of components (e.g. if you follow something like the JSON:API spec), because you can then generate boilerplate from it.

    It can also be useful if there’s another team consuming the API, so you can design new endpoints and figure out the requirements before getting to work. The spec can then be used to generate mock servers or for e2e testing.

  • Swizec 2 days ago

    OpenAPI is only useful if you intend for your project to become bigger than what fits in your head, or if you intend to keep it alive long enough to start forgetting things you did a while back.

    • jlengrand 2 days ago

      Or if you deliver an API and any of your users or customers want to use anything other than a raw API. IMHO the top value of OpenAPI isn't for you; it's for the people who will use what you make.

      • salomonk_mur 2 days ago

        Agreed. Most of the value comes when people other than you need (or will need) to use it.

candiddevmike 2 days ago

What does everyone use for conformance testing? I'm currently generating my specs from code, but the idea of defining a spec, building the code to spec, and testing that the code conforms to the spec would be very interesting.

  • salomonk_mur 2 days ago

    I personally just rely on FastAPI + Pydantic.
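
    For the spec-first flavor, a property-based tool like Schemathesis can fuzz a running implementation against its published document. A rough sketch, assuming the pre-4.0 pytest integration and a local server exposing /openapi.json:

        # pip install schemathesis; run with pytest
        import schemathesis

        schema = schemathesis.from_uri("http://localhost:8000/openapi.json")

        @schema.parametrize()
        def test_conforms_to_spec(case):
            # Generates requests from the spec, then checks that responses
            # use documented status codes and match the response schemas.
            case.call_and_validate()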

gunsch 2 days ago

I spent a few years building OpenAPI-related tools at Google, collaborating with Apigee.

One of the "open secrets" about OpenAPI's history is how SmartBear spun the OpenAPI spec out to be a community-managed spec, but with the requirement that there wouldn't be official tooling offered with it, arguably to protect SmartBear's investment in Swagger (it's been a minute, so the specifics are hazy, but IIRC it was something like this). The tooling ecosystem feels pretty disjointed as a result.

Compare to gRPC/protobuf, where the specification and tooling have been developed together. Parsers, static analyzers, code generators, and documentation tooling all evolve in lockstep with spec development, and the ecosystem feels much more cohesive.

  • handrews a day ago

    My involvement started in the lead-up to OAS 3.1 so I don't have any insight into the Smartbear/OpenAPI Initiative hand-off. Yes, the decision to be vendor-neutral is part of the reason the ecosystem is so fractured. But most technical standards are vendor-neutral and work just fine.

    OpenAPI also has the problem of not providing any guidance on what the tooling ecosystem ought to look like, regardless of who implements the tools. You want enough leeway to encourage innovation, but if you don't put out _anything_, you get... well... this. Lots of tools that don't all fully implement things, and not much in the way of interoperable points of hand-off between tools beyond the format. And when many tools leave off some feature or another (most egregiously, external referencing; down the line in this blog series I'll actually show a concrete solution for that), just sharing the format isn't enough.

    I'm hoping that the approach I'm sharing through these blog posts will help us start to shape the ecosystem more. For Moonwalk if nothing else (although I'd love to get 3.x into a more interoperable state; let's make life better for people working with things now, not just years down the line).

  • The_Colonel 2 days ago

    > but with the requirement that there wouldn't be official tooling offered with it, arguably to protect SmartBear's investment in Swagger

    That seems kind of backwards. If there were official tooling, it would be SmartBear's (who else would provide it?). This clause actually seems designed to limit SmartBear's ability to use OpenAPI to push their products (and, at the same time, to position OpenAPI as a credibly vendor-independent spec).

    > Compare to gRPC/protobuf, where the specification and tooling have been developed together.

    Yes, because it's a protocol pushed by one company, not pretending to be independent.

dgellow 3 days ago

Excellent article, I really like the diagrams

zelcon 2 days ago

Please, people, just use gRPC or Thrift. This stuff makes me want to vomit.

  • sunshowers 2 days ago

    gRPC and Thrift can't easily express ADTs (enums with data), but OpenAPI can. That's worth a lot in my book.

    Another advantage of OpenAPI is that you can write your specifications using Rust types (as we do at Oxide with Dropshot: https://docs.rs/dropshot)

    edit: Apparently protobuf 3 does have oneof: https://protobuf.dev/programming-guides/proto3/#oneof. It looks like it solves the problem, but I can't vouch for it, and it appears to have some edge cases ("only the last member seen is used in the parsed message"). Thrift still doesn't appear to have an equivalent.

    And I do think being able to write the spec using Rust types is really nice -- you still get an OpenAPI document as an artifact, and (for external users) you get wide client compatibility.
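
    To make the ADT point concrete: in schema terms it's a oneOf with a discriminator. A hedged sketch of the same shape in Python/Pydantic (not Oxide's Rust setup, just the schema idea):

        from typing import Literal, Union
        from pydantic import BaseModel, Field

        class Circle(BaseModel):
            kind: Literal["circle"]
            radius: float

        class Rect(BaseModel):
            kind: Literal["rect"]
            width: float
            height: float

        class Shape(BaseModel):
            # Tagged union: Pydantic emits this as oneOf plus a
            # discriminator on "kind", which OpenAPI carries natively.
            value: Union[Circle, Rect] = Field(discriminator="kind")

        print(Shape.model_json_schema())  # shows the oneOf + discriminator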

    • throwawaymaths 2 days ago

      If you're building maintainable servers, you should write the doc first and codegen from there. Otherwise you're gonna be in a world of hurt when some junior changes your datatype, or when you need to version and maintain both versions simultaneously from the same or similar endpoints.

      • sunshowers 2 days ago

        You're right that this is a very difficult problem, but writing the document first doesn't give you much over generating it from types.

        We actually have a plan for supporting multiple versions, and conversions between the corresponding types, using Dropshot as the place where that's coordinated.

        • throwawaymaths a day ago

          > writing the document first doesn't give you much over generating it from types.

          There's a required top-level field called version. As you are writing the document, it's in your face.

          • sunshowers 19 hours ago

            With Dropshot, the version is passed in as a function argument to the OpenAPI generator.

            • throwawaymaths 17 hours ago

              You will find out the problems with your approach.

  • rswail 2 days ago

    This "stuff" allows for easy exchange of API definitions and Arazzo goes to the next level to define the semantics of the process of combining API calls.

    gRPC requires the same brittle compilation of protobuf definitions that has plagued every marshalling/serialization protocol for remote procedure calls since XDR.

    Whether you like it or not, HTTP/JSON is the lingua franca of the internet (at least on the API side of things). Protobuf is good if you are in control of both sides of the API, less so if you are just the server. It is also much less self-documenting than JSON Schema/OpenAPI.

  • chrisweekly 2 days ago

    Apples and oranges? gRPC might sometimes fit the bill for server-to-server use cases, but it's completely unsuitable for integration with SPAs; grpc-web was never well-supported, and is now dead. I don't understand the "want to vomit" perspective. That implies that adding protoc to your build toolchain and using an opaque binary format/protocol is somehow much more palatable than well-documented JSON / REST over HTTP. What am I missing?

    • cdelsolar 16 hours ago

      ConnectRPC works over the web, and I've built several web apps (SPAs) with it. It works with JSON by just setting the Content-Type header to `application/json`. You can add compression easily on the backend with a single option. Generating typed stub code is one line (`buf generate`). Connect-ES is awesome; it's a TypeScript-first protobuf library for the web.

      Interacting with the API is very simple, for example:

              // Typed call into the client generated from the .proto file:
              const resp = await gameEventClient.getRecentAnnotatedGames({
                limit: annotatedPageSize,
                offset: recentAnnotatedGamesOffset,
              });
              setRecentAnnotatedGames(resp.games);
      
      These function calls are typed, and a lot of the code is auto-generated from the .proto spec (i.e. above, gameEventClient and getRecentAnnotatedGames are auto-generated code). On the server side the code is also auto-generated. It all works seamlessly. Even the documentation for the API can be auto-generated. See for example:

      https://buf.build/domino14/liwords

      The above documentation was auto-generated from my protobuf files. This project uses ConnectRPC on the backend and in the front-end SPA. To me it seems so much simpler and better than the way I've seen people use OpenAPI, where many seem to create the code _before_ creating the spec. I actually haven't found a good Go generator of stub code from an OpenAPI spec. With ConnectRPC it just works; it's simple, easy, fast, etc. It's easy to add interceptors to do things like parse the HTTP request for an Auth or Cookie header and insert the user ID back into the context, for the different service functions to handle whatever needs to be done with the authenticated user.

      I could ask the same question - what am I missing?

      • chrisweekly 13 hours ago

        Thanks for the thoughtful reply. Glad you found something you like that's solving your problems! The docs you linked do look nice. Ironically, I noticed this quote there:

        > "It's much easier to use the API with JSON + a web browser, but the protobuf option is still available..."

        ... which sort of underscores my perspective (which is admittedly strongly biased towards / empathetic with the API consumer side).

  • cdelsolar 2 days ago

    ConnectRPC here, but yeah, same idea.