gwern 10 hours ago

> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

Translation for the rest of us: "we need to fully privatize the OA subsidiary and turn it into a B-corp which can raise a lot more capital over the next decade, in order to achieve the goals of the nonprofit, because the chief threat is not anything like existential risk from autonomous agents in the next few years or arms races, but inadequate commercialization due to fundraising constraints".

  • peppermill 10 hours ago

    It's amazing how he can imagine wars being fought over AI but not wars being fought over resources needed to "build enough infrastructure."

    • kulahan 9 hours ago

      Taking it a step further, building that infrastructure may contribute fairly directly to depleting the very resources that actual wars are being fought over.

      This man needs to get out of his own head.

      • vundercind 7 hours ago

        I think it’s misunderstanding the guy’s whole deal to expect truth or reality to be major factors in his writing. “Sell, sell, sell!” is the goal above all else. Sell whatever he’s invested in today. Sell himself. Sell!

        • bbor 5 hours ago

          You’re honestly convinced that he’s faking this level of futurism? I’m happy to see people call him wrong, maybe even defend him on a few points, but calling him dishonest on this central point seems irrational. Ditto for someone I personally have absolutely 0 respect for: Elon Musk. They are both honestly convinced that AI is incredibly important in the short-to-medium term, IMO - they just want to own the fix and be the hero.

          • vundercind 4 hours ago

            He might find the hype cycle itself entertaining and engaging, but I doubt he’s taking it seriously. I think he’d be worse at his job if he did.

      • throw_pm23 9 hours ago

        "It is difficult to get a man to understand something, when his salary depends on his not understanding it."

        • p1esk 8 hours ago

          Sam’s ambitions are way beyond “salary”.

          • willturman 7 hours ago

            It's a quote from Upton Sinclair from an era where you generally had to have a profitable business and employees before you had investors.

            • novok 7 hours ago

              There were many ventures in the past that got investors without a current profitable going concern. Oil & mining speculation, chartering boat crews to go on exploration expeditions and more.

              • Terr_ 6 hours ago

                Indeed, public investment was born from large projects where any profit would be many years off.

            • fragmede 6 hours ago

              I know there's this whole joke about being pre-revenue on the Silicon Valley TV show, but getting investors in order to build a business that only becomes profitable later goes back a long time. Like a really long time.

          • kevinob11 7 hours ago

            Replace salary with the thing he wants

            • p1esk 7 hours ago

              Maybe he wants to make the world a better place?

    • KETHERCORTEX 7 hours ago

      > he can imagine wars being fought over AI

      I wonder how he came to such a prediction. When was the last time we had a war over advanced tech? Armies didn't fight over the telegraph, radio, phones, or cars.

      A war to get AI would also be foolish. A few hundred bombs from your adversaries and AI won't have electricity to function.

      • Terr_ 6 hours ago

        > I wonder how he came to such a prediction.

        At the risk of belaboring the subtext: it's the kind of prediction that flatters the predictor's ego and exaggerates the importance of the company and its output.

        If believed, the claim can be leveraged to boost investor activity, land big contracts, and lobby for special legal/tax benefits.

      • throwthrowuknow 5 hours ago

        You have to look at the late 1800s for examples. It won’t be wars over data centres, and winning won’t be simple or even possible. It would look like the wars that the U.S. Army fought against Indian tribes, or like the British, French, German and Dutch colonization of Africa. That is assuming there is an AI side and a non-AI side. Incidentally, those conflicts did involve a lot of strategic infrastructure like railways and telegraph lines.

        Fighting the expansionary actions of an AI enabled culture will not be as simple as bombing power plants, after all those are prime targets in any modern war and are well defended. How do you propose to win against an entire bloc of countries that have decided to use the products of AI to do whatever they wish with the world?

      • empath75 6 hours ago

        It's arguable that every major conflict of the 20th century was over resources required by the combustion engine -- fossil fuels and rubber, in particular.

    • trhway 6 hours ago

      >resources needed to "build enough infrastructure."

      Starship will put 100 tons into orbit at $10M or so. I.e. <5 kg (an Nvidia H100, plus ~1 m² of aluminum radiator (which would radiate 0.5-1 kW away at 60-70°C), plus 0.5 kW of solar panels) for <$500, i.e. peanuts compared to the price of the H100 or the ~$30K/card NVDA would be charging.
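
      Back-of-envelope for that (a rough sketch; the $10M/100-ton Starship figure and the ~5 kg per-node mass are my assumptions above, not established numbers):

        # Launch cost per H100-class orbital compute node (illustrative only).
        launch_cost_usd = 10e6     # assumed cost of one Starship launch
        payload_kg = 100_000       # assumed payload to orbit (100 tons)
        node_mass_kg = 5           # H100 + ~1 m² radiator + ~0.5 kW solar panels

        cost_per_kg = launch_cost_usd / payload_kg      # -> $100/kg
        print(cost_per_kg * node_mass_kg)               # -> ~$500 per node, vs ~$30K for the card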

      And wars won't be fought over AI. Wars will be fought using AI. Humans will have no chance of controlling millions of their own simultaneously active, high-precision automated munitions of all kinds while responding to the millions of the enemy's, and that picture leads to a new, non-nuclear this time, AI-based MAD (which of course means that, as with the nuclear race back then, all parties have to build their capabilities right now as fast as possible, Manhattan Project style).

  • goldfeld 9 hours ago

    I see how AI is used today and extrapolate. A lot. Everything in tech is extrapolation with the march of capital. It takes one to know one; rather it takes a smart ass to wield a smart tool. So the smart get smarter.

    Is it a separate phenomenon, in the big picture, from the rich getting richer, from the monopolies over means of production?

    Not sure, but I see as well that the dumb will surely get dumber; that "intelligence" will be a product of using intelligent humans' means of production, and not owning them of course, but being owned in the process. Populations will be literally made lighter of their smarts, outsourcing intelligence to agents out of general control (classic bait and switch). Since using AI, I feel my own process getting more clueless as I go, so I'd better conclude somehow.

    I see the Age of Ignorance ahead, there was once an Enlightenment, and here light takes on its other side or meaning, the workers being enlightened, to wit, made lighter of their horrible burden which is intelligence and its obnoxious demands of upkeep. Just pay someone for upkeep and stop messing with wet messy neurons already, says the technocrat to the cheerful mob.

    • throwthrowuknow 4 hours ago

      The people who only use the end products will be less skillful the same way people today are less skillful at cooking, sewing, carpentry, animal husbandry, etc. because they avail themselves of modern services made possible by technology. If you are utilizing AI to its fullest you won’t be less skillful but you will have to trade your current set of skills for another. The enlightenment you’re feeling is the same you have when you’re promoted to management and don’t have time to get your hands dirty.

  • saurabhchalke 6 hours ago

    Why can't we just go with Occam's razor, and assume that they really believe in their mission of providing access to frontier AI as freely to the world as possible?

    • Gud an hour ago

      Why would that be where Occam's razor leads us?

    • jampekka 5 hours ago

      Occam's razor says they are saying whatever gives them as much money and power as possible.

      OpenAI is suspiciously bad at providing access to their models, compared to e.g. Meta, Mistral, the BLOOM collaboration, Alibaba, and even oil tyrants FFS.

    • stonogo 6 hours ago

      Because that doesn't account for the non-profit rugpull he orchestrated earlier in the company's history.

  • metacritic12 10 hours ago

    Good point. His story is valid, but it just so happens to align with the AI maximalism that most CEOs would want for their own industry too.

  • 8note 4 hours ago

    I'd expect "we want to put AI into the hands of as many people as possible" to be either exposing model weights, or making the training sets public.

  • tyberns 5 hours ago

    This is so true: resource management and infrastructure are going to be the bottlenecks

  • roenxi 6 hours ago

    And the reason we can be certain that it is an accurate one is that the only way to put AI in the hands of as many people as possible is to commercialise the tech. Non-profits are good at many things, but not at spreading technical improvements.

    There aren't many alternatives here. It is commercialisation or looming irrelevance.

  • gwern 4 hours ago

    A general reply to all of the comments thus far: you are completely missing the point here. OP is not a meaningful forecast, and it's not about nuclear power or any of that. It's about laying the groundwork for the privatization and establishing rhetorical grounds for how the privatization of OA is consistent with the OA nonprofit's legally-required mission and fiduciary duties. Altman is not writing to anyone here, he is, among others, writing to the OA nonprofit board and to the judge next year.

  • svnt 7 hours ago

    > If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

    The mostly-open goal of every VC-funded startup is to become a monopoly. If a strong enough monopoly in AI hardware were to exist, then the issues he describes could become a problem.

    Otherwise, what he is describing is just the ad absurdum of how capitalism works. Phrased differently it sounds like:

    “If this extremely powerful and profitable product that depends on other products gets built, then if no one else builds the also profitable substrate that it operates on, terrible things will happen!”

    Or again slightly differently: “We need to be able to compete with our suppliers because our core business model might not be defensible unless we can also win in their space.”

  • api 9 hours ago

    > put AI into the hands of as many people as possible

    ... by establishing regulatory moats to prevent competition and limit or outlaw actually-open AI?

    • svnt 7 hours ago

      He didn’t say that other people need to put it into the hands of as many people as possible, only that his company does.

sharkjacobs 12 hours ago

> Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale

I'm not an AI skeptic at all, I use llms all the time, and find them very useful. But stuff like this makes me very skeptical of the people who are making and selling AI.

It seems like there was a really sweet spot w.r.t. the capabilities AI was able to "unlock" with scale over the last couple of years, but my high-level sense is that each meaningful jump in baseline raw "intelligence" required an exponential increase in scale, in terms of training data and computation, and we've reached the ceiling of "easily available" increases. It's not as easy to pour "as much as it takes" into GPT-5 if it turns out you need more than A Microsoft.

  • sdenton4 11 hours ago

    The question is: For a given problem in machine intelligence, what's the expected time-horizon for a 'good' solution?

    Over the last, say, five years, a pile of 50+ year problems have been toppled by the deep learning + data + compute combo. This includes language modeling (divorced from reasoning), image generation, audio generation, audio separation, image segmentation, protein folding, and so on.

    (Audio separation is particularly close to my heart; the 'cocktail party problem' has been a challenge in audio processing for 100+ years, and we now have great unsupervised separation algorithms (MixIT), which hardly anyone knows about. That's an indicator of how much great stuff is happening right now.)

    So, when we look at some of our known 'big' problems in AI/ML, we ask, 'what's the horizon for figuring this out?' Let's look at reasoning...

    We know how to do 'reasoning' with GOFAI, and we've got interesting grafts of LLMs+GOFAI for some specific problems (like the game of Diplomacy, or some of the math olympiad solvers).

    "LLMs which can reason" is a problem which has only been open for a year or two tops, and which we're already seeing some interesting progress on. Either there's something special about the problem which will make it take another 50+ years to solve, or there's nothing special about it and people will cook up good and increasingly convenient solutions over the next five years or so. (Perhaps a middle ground is 'it works but takes so much compute that we have to wait for new materials science for chip making to catch up.')

  • sharkjacobs 12 hours ago

    > we will solve the remaining problems

    This is the part that really gets me. This is a thing that you say to your team, and a thing you say to your investors, but this isn't a thing that you can actually believe with certainty is it?

    • tensor 7 hours ago

      With enough time it seems a reasonable assertion, but the key part is how much time. It feels like he thinks "any day now" where I think it'll be much longer. This all of course assumes that "the remaining problems" means to achieve human-like intelligence, which is perhaps the wrong problem to be solving in the first place. I'd rather have AI systems that don't have human flaws.

    • swyx 11 hours ago

      you need some amount of irrational definite optimism + knowing things others don't to be a good founder. that kind of reality distortion field is why sam is sam and we are here debating phrases on an orange website.

      • Der_Einzige 7 hours ago

        Related, I tongue-in-cheek believe that something analogous to the actual SCP object for a "reality distortion field" may in fact exist. There is zero good explanation for "Teflon Don" or the North Carolina Lieutenant Governor getting away with all the stuff they do while Al Franken got politically crucified.

        • Terr_ 6 hours ago

          The least-magical answer for that is that some people have fundamentally different ways of approaching the world, and certain things will be tolerated by certain sets of supporters.

    • lainga 11 hours ago

      With the next funding round, all of this will sort itself out.

    • fragmede 9 hours ago

      Why not? People believe in all sorts of weird stuff; theirs just happens to be one you don't agree with. Some people believe there are gods up in the sky that will smite them, and go to war with people who believe in a different god that will smite them for different reasons. Some people believe we landed on the Moon, others do not. What matters is what you can convince others to do based on your rationale.

  • og_kalu 11 hours ago

    Scaling improvement has never been linear though. Every next-gen model so far has required at least an order of magnitude increase in compute, sometimes several more. So it's not a new revelation and these companies are aware of that. Microsoft, for instance, is building a $100B data center for a future next-generation model releasing in 2028.

    If models genuinely keep making similar leaps each generation then we're still a few generations before "More than a Microsoft".

    • lispisok 11 hours ago

      So at what point do the linear increases in capability not justify the exponential compute and data requirements, or when do we run out of resources to throw at it?

      • og_kalu 11 hours ago

        I never said I thought the increase in ability was linear either. We're encroaching on phenomena that are genuinely hard to describe or put a number on, but GPT-3 is worlds apart from 2, and it feels like 4 is easily ten times better than the OG 3. I can say improvement lags behind compute somewhat, but that's really it.

        That said, it's ultimately up to the people footing the bill isn't it ?

  • spencerchubb 7 hours ago

    Yes you are correct that jumps in intelligence were enabled by exponential increases in scale. That makes me more bullish on AI, not less. It suggests that we can continue exponentially scaling compute like we have done for the past few decades, and also get intelligence improvements from it.

  • nyrulez 11 hours ago

    It's about stuff we don't know yet. From today's lens, the essay seems absurd. But I think it's hinging on continued discoveries that improve one or all of learning algorithms, compute efficiency, compute cost and applying algorithms to real world problems better.

    5 years ago, I wouldn't have believed any of what exists today. I saw internal demos that showed 2nd or 3rd grade reading comprehension in 2017 and statements were made about how in the next decade, we will probably reach college level comprehension. We have come so far beyond that in less than half the time. Technology isn't about scaling incrementally and continuing on the same path using the same principles we know today. It's about disruption that felt impossible before - that feels like a constant to me now. Seeing everything I've seen in the last 20 years, it's going to continue to happen. We just can't see it yet.

  • marcosdumay 8 hours ago

    > But stuff like this makes me very skeptical of the people who are making and selling AI.

    What is there to be skeptical of? OpenAI made their current product using a $10B investment plus a few more billion they are not disclosing, and now they will start to do it at scale.

    Perfectly normal stuff.

    By the way, what's the World's GDP again?

  • humansareok1 9 hours ago

    XAi seems to be able to dump 10-20x more compute into their Grok models each time. Don't see any signs this is slowing down...

  • moffkalast 9 hours ago

    As large as the absolute largest models are today, they are still microscopic compared to our brains. A 1.7T param model would only store an actual total of about 850 GB if fully saturated (4 bits of information per weight estimated for bf16 transformers), a lot less than a human brain with 150T synapses running in full analog precision. We need to scale the current gen of models at least another 10-100x to even reach the human level of complexity, something we'll be able to do in the next two decades.
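
    Quick check of that arithmetic (a sketch; the 4-bits-of-information-per-weight figure is just my estimate above, not a measured number):

      params = 1.7e12                  # ~1.7T parameters
      bits_per_weight = 4              # assumed usable information per bf16 weight
      print(params * bits_per_weight / 8 / 1e9)   # -> 850.0 GB
      print(150e12 / params)                      # -> ~88x more synapses than weights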

    And well then there's going beyond just text. Current multimodal models are basically patchwork bullshit, separately trained image/audio to text/embeddings encoders slapped onto an existing model and hoping it does something neat. Tokenization and samplers are likewise bullshit that's there to compensate for lack of training compute. Once we have enough to be able to brute force it properly with bytes in, bytes out, regardless of data format, the results should be way better.

    • throw310822 6 hours ago

      > 150T synapses running in full analog precision

      Analog systems are not known for being very precise- they're noisy, signals get corrupted easily- and that's why we prefer digital ones. As soon as we had the technology, we switched everything we could- audio and video recording, telephone calls, photography, to a digital medium. This makes me wonder if the seemingly extraordinary efficiency of artificial neural networks is simply due to the precision with which they can be trained.

      • Terr_ 6 hours ago

        With respect to brain activity, how do we know it's really noise, and not just layers of meaning--or at least purpose--which we don't yet understand?

        If straightforward binary signaling were so universally superior, I think the worldwide network of over a quintillion ruthlessly self-replicating nanobots would be using it much more heavily after the last billion years.

    • charlie0 7 hours ago

      Comparing a human brain in these terms makes it incredibly obvious how inefficient the human brain actually is. A 1.7T model can answer questions about practically anything. You say a human brain has 150T params. So what? It struggles to give masterful answers in even 1 domain, let alone dozens/hundreds. We need to stop comparing parameters and synapses as if they actually matter, because AFAIK, they really don't.

      • jampekka 7 hours ago

        OTOH humans can e.g. walk on two feet and drive a car.

        • fragmede 6 hours ago

          Boston Dynamics and Waymo might not have gotten human levels of competency with those two particular tasks, but we've already got robots that are better than drunk/tired/angry humans at it, and they're getting better at it.

      • Jensson 3 hours ago

        > Comparing a human brain in these terms makes it incredibly obvious how inefficient the human brain actually is

        Until you have AGI you can't say this, since until then we don't know how much the different parts cost to replace with AI systems.

      • moffkalast 7 hours ago

        Well, once again it turns out that what is hard for people is easy for computers, and vice versa. The things we spend 6 years in college learning, they can (relatively) master in a week of pretraining. We are optimized to smartly kill things, eat them, and reproduce; that's what machines will beat us at last, lol. Right now a human expert is still obviously better in depth, but nowhere close in breadth. Probably not for much longer though, at least on the historical time scale.

        And granted a lot of parts of the human brain are dedicated to specific jobs that are entirely nonexistent in a normal LLM (kinematics, vision, sound, touch, taste, smell, autonomic organ control) so the actual bit we should be comparing for just language and reasoning is way smaller. Still the brain is pretty efficient inference energy wise, it's like the ultimate mixture of experts, extreme amounts of sparse storage and most of it is not computed until needed. The router must be pretty good.

  • AnimalMuppet 11 hours ago

    Even "we will solve the remaining problems" is... perhaps unduly optimistic.

    At a minimum, we could ask for the evidence.

    • jagrsw 9 hours ago

      I'm not here to defend sama, but certain things cannot be proven until they arrive - they can only be extrapolated from existing observations and theoretical limits.

      Imagine the Uranium Committee of the early '40s, where Szilard and others were babbling about 10 kg of some magical metal exploding briefly with the power of a sun, with the best evidence being some funky trail in an alcohol vapor chamber.

      Maybe sama is right, maybe not, but the absence of evidence is not evidence of absence.

      • zero-sharp 9 hours ago

        I'm sure you know that people in the AI community have been predicting big things ever since, I don't know, the 1970s? It's only 10 years away again. This time it's for real, right?

        • jagrsw 9 hours ago

          Alchemists predicted the transmutation of metals into gold for centuries, and on a sunny day in the 20th century, it arrived (a bit radioactive, but still).

          Unless the human brain is made of some sacred substance, the worst-case scenario is that we will extrapolate current scanning methods into the future and run the scanned model in silico. I'm not recommending this "just for fun," but the laws of physics don't forbid it.

          • zero-sharp 9 hours ago

            >Alchemists predicted the transmutation of metals into gold for centuries, and on a sunny day in the 20th century, it arrived (a bit radioactive, but still).

            So is Sam Altman the modern day alchemist? Making predictions based on faulty methods and faulty understanding (per your gold example)?

            What will happen is that we'll shift the economy around based on inflated tech promises and ruin people's lives. No big deal I guess.

            • jagrsw 8 hours ago

              > So is Sam Altman the modern day alchemist?

              Alchemists were early scientists who later branched into fields like chemistry, mathematics, and physics (Newton explored alchemy).

              Altman leads a team of experts in neural networks, programming, and HW design. While he might be mistaken, dismissing him outright is difficult.

          • avazhi 7 hours ago

            If you are comparing AI to alchemy, a subject that after thousands of years still isn’t delivering on its promises (even with the assistance of modern technological magic), then surely you can see how that’s something of a self-own.

            • jagrsw 7 hours ago

              The transmutation of uranium into plutonium and the synthesis of medically useful isotopes proceeds successfully.

              • avazhi 7 hours ago

                That's not alchemy.

                When we are successfully turning base metals into gold, hit me up.

                • jagrsw 3 hours ago

                  I concede.

        • p1esk 7 hours ago

          Is GPT-4 not a “big thing in AI”?

          • KerrAvon 5 hours ago

            It's extremely spicy autocomplete and it burns astonishing amounts of natural resources to reach even that lofty peak

      • tikhonj 5 hours ago

        Hah, atomic power is a great point of comparison: people in the "atomic age" expected atomic power to be everywhere. Electricity too cheap to meter, cars/planes/appliances all powered by small nuclear reactors... That's without going into the real nonsense like radium toothpaste.

        And here we are today where nuclear energy is limited to nuclear weapons, a small set of military vehicles and <10% of the world's electricity production. Not nothing, sure, but nothing like past predictions either.

        • defrost 5 hours ago

          Last I checked, the giant nuclear fusion reactor in the sky is driving a substantial increase in solar energy.

          The toothpaste and similar products were pretty ill-advised, but vaseline and uranium glass are still collectable and are seeing a resurgence of new interest: https://old.reddit.com/r/uraniumglass/

      • AnimalMuppet 7 hours ago

        Right. Certain things cannot be proven until they arrive. Maybe sama is right, maybe not. But his certainty is misleading.

        • jagrsw 7 hours ago

          I agree. He's probably been conditioned by experience to speak with confidence until proven wrong ("strong opinions, weakly held"), but I don't like it either. Oh... the lost art of saying, "In my opinion."

      • throw_pm23 9 hours ago

        "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."

  • highfrequency 11 hours ago

    Progress might be logarithmic in compute, but compute (transistors/sqinch and transistors/$) is growing exponentially with time.

    Despite what skeptics have been saying for decades, Moore's Law is alive and well - and we haven't even figured out how to stack wafers in 3 dimensions yet!

    • Nimitz14 7 hours ago

      Oh wow! Could you please share what processors are exponentially faster than those of 10 years ago? I'm not seeing any here: https://www.cpubenchmark.net

      • highfrequency 4 hours ago

        MacBook Airs have 20 billion+ transistors, compared to 50 million on the Pentium 4 in the early 2000s. Moore's law is about transistor density, not processor speed, which is gated by thermal limits.
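
        Rough doubling-time math behind that comparison (a sketch; both transistor counts and the ~22-year span are approximate):

          import math

          pentium4 = 50e6          # early-2000s Pentium 4, approx.
          m_series = 20e9          # recent MacBook Air SoC, approx.
          years = 22

          growth = m_series / pentium4             # ~400x
          print(years / math.log2(growth))         # -> ~2.5 years per doubling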

      • HDThoreaun 6 hours ago

        Transistor count has consistently been increasing by about 10% a year over the last decade.

sharkjacobs 11 hours ago

> It is possible that we will have superintelligence in a few thousand days (!)

"a few thousand days" is such a funny and fascinating way to say "about a decade"

Superficially, reframing it as "days" not "years" is a classic marketing psychology trick, i.e. 99 cents versus a dollar, but I think the more interesting thing is just the way it defamiliarizes the span. A decade means something, but "a few thousand days" feels like it means something very different.

  • 7thaccount 11 hours ago

    Probably because we all know "a decade" means maybe never. They've said the same thing about fusion for 5 decades.

    • dcchambers 10 hours ago

      Perhaps ironically, Sam Altman is also big into Fusion. (https://www.helionenergy.com/)

      I guess it makes sense, for deep-learning/LLMs to "scale to infinity" you basically need infinite amounts of power.

      • bamboozled 10 hours ago

        Can't we use OA intelligence to figure out how to do more with less rather than "stupidly" accelerate the climate crisis?

        the climate crisis + AI is the nightmare

        • fragmede 9 hours ago

          Jevons Paradox says that's not the direction it'll go in.

          • bamboozled 6 hours ago

            It's funny how some people seem to have different beliefs around this. Personally, I love making things more efficient, and still try to minimize consumption.

    • arthurcolle 11 hours ago
      • Yizahi 10 hours ago

        Inertial confinement most likely can't be reasonably commoditized into generating electricity. Also, I may be wrong, but I think on the wave of that hype I read that the number they used was an incomplete one. Basically they reframed the conditions of the problem to get a more favorable number for advertisement. NIF research will most likely be limited to nuclear bomb research and simulations.

        But even if NIF had a reasonable path to energy generation in, say, 20-30 years, it still won't matter much, just like ITER probably won't. Solar will be way cheaper and more widespread; its costs are still dropping ahead of all predictions, government or commercial. Fusion may just be a pure science project in the end, for a long time.

  • mu53 9 hours ago

    I am convinced that many of these proclamations and scary AI videos are more of a marketing gimmick to get people excited and fund more AI.

    AI is prohibitively expensive. LLMs can take millions to train, and for ChatGPT-4 I wouldn't be surprised if the figure was in the $0.5B to $2B range for compute resources alone. ChatGPT struggles for profitability due to high ongoing requirements for GPUs.

    LLMs were a huge breakthrough. Gaining more funding and making experimentation cheaper will make the next breakthrough come sooner. Profitability will also come sooner.

    We just don't know how much the timelines we're considering will change.

  • incognition 10 hours ago

    People tend to overestimate what they can do in a day and underestimate what they can do in a few thousand days

  • ocean2 11 hours ago

    The Stellarator design has proven to be stable and to produce net positive energy. We are actually only $20B away from having a fully functional nuclear fusion reactor.

    • QuadmasterXLII 6 hours ago

      Fun fact: I was skeptical of your claim that this has been proven, so I googled “stellerator proven net positive” and this very comment was the first result

    • samatman 10 hours ago

      Not really, unfortunately. The progress has been heartening, don't get me wrong, but it's more like $20B away from getting to the first steps of solving the next set of problems on the road to a fully functional fusion generator.

      Those are fairly substantial. It's past time to stop being cynical about "ten more years every ten years", but it's also way too soon to declare victory.

      • mjamesaustin 10 hours ago

        And ultimately, fusion doesn't solve the hardest problem with energy today which is cost.

        Solar already provides unlimited clean energy, and it took decades of development for the cost to drop to a competitive rate.

        Even if we had a functioning fusion plant tomorrow, it will likely cost WAY more per kWh and require decades of iteration and improvement to become economically feasible.

        • whamlastxmas 9 hours ago

          I would wonder if solar, plus the absolutely massive battery reserves you'd need for a data center ten times bigger than the current world's largest, would still be cost-effective versus a single fusion reactor.

          • jay_kyburz 9 hours ago

            I wonder how that cost would compare to putting your solar and data center in space? (with non stop solar pointed at the sun)

            I wonder if cooling is easier or harder in space?

            • marcosdumay 8 hours ago

              > I wonder if cooling is easier or harder in space?

              It's harder. You can stop wondering.

              It gets easier if you run things hotter, up to the point that there isn't much of a difference at a few hundred °C (and keeps making no difference when hotter).

            • bostik 8 hours ago

              Harder. Vacuum is an excellent thermal insulator.

            • HDThoreaun 6 hours ago

              Can’t dissipate heat effectively in space

ansk 9 hours ago

> humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)

He's hand-waving around the idea presented in the Universal Approximation Theorem, but he's mangled it to the point of falsehood by conflating representation and learning. Just because we can parameterize an arbitrarily flexible class of distributions doesn't mean we have an algorithm to learn the optimal set of parameters. He digs an even deeper hole by claiming that this algorithm actually learns 'the underlying “rules” that produce any distribution of data', which is essentially a totally unfounded assertion that the functions learned by neural nets will generalize in some particular manner.

> I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.

If you think the Universal Approximation Theorem is this profound, you haven't understood it. It's about as profound as the notion that you can approximate a polynomial by splicing together an infinite number of piecewise linear functions.
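
To make the representation-vs-learning gap concrete, here's a small sketch (numpy, purely illustrative): a one-hidden-layer ReLU "network" whose weights are written down by hand to interpolate sin(x). It demonstrates representational capacity with no learning algorithm at all, which is all the theorem gives you.

  import numpy as np

  relu = lambda x: np.maximum(x, 0.0)

  # Hand-constructed piecewise-linear interpolant of sin(x) on [0, 2*pi],
  # expressed as a sum of ReLU units -- no training involved.
  n = 50
  knots = np.linspace(0, 2 * np.pi, n)
  vals = np.sin(knots)
  slopes = np.diff(vals) / np.diff(knots)   # slope of each linear piece

  def f(x):
      y = vals[0] + slopes[0] * relu(x - knots[0])
      for i in range(1, n - 1):
          # slope change at each interior knot
          y = y + (slopes[i] - slopes[i - 1]) * relu(x - knots[i])
      return y

  xs = np.linspace(0, 2 * np.pi, 1000)
  print(np.max(np.abs(f(xs) - np.sin(xs))))   # small, and shrinks as n grows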

  • klyrs 8 hours ago

    > It's about as profound as the notion that you can approximate a polynomial by splicing together an infinite number of piecewise linear functions.

    Wait 'til you hit complex analysis and discover that Universal Entire Functions don't just exist, they're basically polynomials.

    • stogot 2 hours ago

      So basically Altman hasn’t taken enough math courses yet thinks he is starring in Good Will Hunting

  • bbor 5 hours ago

    You’re (both!) getting into metaphysics without necessarily realizing it. He’s just saying that a machine that could learn any pattern—not a sub pattern of accidental actualities that it overfits on, but the real virtual pattern driving some set of phenomena—would be a game changer. Sure, there are infinite things that can’t be reduced to polynomials, but something tells me that a whole lot of things that matter to us are, across the fields of Physics, Biology, Sociology and Neurology especially.

    Basically it’ll be (and already has been since the quantum breakthroughs in the 1920s, to some extent) a revolution in scientific methods not unlike what Newton and Galileo brought us with physical mechanics: this time, sadly, the mechanics are beyond our direct comprehension, reachable only with statistical tricks and smart guessing machines.

w10-1 7 hours ago

> a defining characteristic of the Intelligence Age will be massive prosperity

That's the sales pitch, that this will benefit all.

I'm very pro-AI, but here's the only prediction for the future I would ever make: AI will accelerate, not minimize, inequality and thus injustice, because it removes the organizational limits previously imposed by bureaucracy/coordination costs of humans.

It's not AI's fault. It's not because people are evil or weak or mean, but because the system already does so, and the system has only been constrained by inability to scale people in organizations, which is now relieved by AI.

Virtually all the advances in technology and civilization have been aimed at people capturing resources, people, and value, and recent advances have only accelerated that trend. Broader distributions of value are incidental.

Yes, the U.S. had a middle class after the war, and yes, China has lifted rural people out of technical poverty. But those are the exceptions against the background of consolidation of wealth and power world wide. Not through ideology or avarice but through law and technology extending the reach of agency by amplifying transaction cost differences in market power, information asymmetry and risk burdens. The only thing that stops this is disasters like war and environmental collapse, and it's only slowed by recalcitrance of people.

E.g., now we are at a point where people's economic and online activity is pervasively tracked, but it's impossible to determine who owns the vast majority of assets. That creates massive scale for getting customers, but impedes legal responsibility. Nothing in economic/market theory says that's how it should be; but transaction cost economics does make clear that the asymmetry can and will be exploited, so organizations will capture governance to do so.

It's not AI's job nor even AI's focus to correct injustice, and you can't blame AI for the damage it does. But like nuclear weapons, cluster munitions, party politics, (even software waivers of liability) etc., it creates moral hazards far beyond the ability of culture to accommodate.

(Don't get me started on how blockchain's promise of smart contracts scaling to address transaction risks has devolved into proliferating fraud schemes.)

  • kylehotchkiss 3 hours ago

    This was very thoughtful. I agree with you; the benefits to the majority of the world from this tech are minimal. Universal childhood education is a fever dream. The amount of money being poured into AI could probably have increased actual human knowledge and critical thinking substantially had it been directed there instead. Not to say we shouldn't continue to invent AI! LLMs have been an interesting tool to use. But the ego behind the founders in this tech is so cringe. Get out of the bay, go travel somewhere you need to actually get a visa for, and maybe tone down the proclamations a notch.

  • jprete 3 hours ago

    I've come to similar/related conclusions, but I don't understand how you could recognize all of this and still be pro-AI.

  • mr90210 6 hours ago

    Please turn this comment into a post. It’s a gem.

d_burfoot 8 hours ago

OAI's achievements are amazing. But here's a bit of a skeptical take: cheap human-style intelligence won't have a huge impact because such intelligence isn't really a bottleneck today. It's a cliche that the brightest minds of the age are dedicated to selling more ads or shuffling stock ownership around at high velocity. Anyone who's worked at a big tech company knows the enormous ratio of talent to real engineering problems at those companies.

Let's say you have some amazing project that's going to require 100 Phd-years of work to carry out. In the present world that costs something like $1e7. In the post-AI world, that same amount of intelligence will cost $1e3, an enormous reduction in price. That might seem like a huge impact. BUT, if the project was so amazing, why couldn't you raise $1e7 to pursue it? Governments and VCs throw this kind of money around like corn-hole bags. So the number of actually-worthwhile projects that become feasible post-AI might actually be quite small.

  • benlivengood 8 hours ago

    Capital follows the most profitable investments. It's why "green" technology took so long to develop (we probably could have had efficient solar panels in the 60s, but oil was super cheap and more importantly very profitable). Dropping the cost of a $1e7 problem to even $1e6 probably makes it very profitable.

    • willturman 7 hours ago

      Capital has also employed anti-competitive actions to stifle, prevent, and kneecap "green" technology for the last century.

      The death of passenger rail and the stifling of the electric car for 30 years come to mind.

    • deciplex 7 hours ago

      > Capital follows the most profitable investments.

      I think a common error is that people forget the "most" in this sentence. It is a very important word. It's not even that only profitable investments will get funded: even profitable projects might get left on the cutting room floor if they are competing for resources with projects that will generate, or are believed to generate, higher ROI.

      And this maybe isn't a problem if higher ROI == better than. But to believe that, you have to also believe that enshittification is a thing that happens in spite of being less profitable (or unprofitable), which for me at least is a hard sell.

  • spencerchubb 7 hours ago

    To raise $1e7, you have to be able to convince VCs that you can make at least $1e9. They want 100x returns from their winners, and don't care about anything else in their portfolio.

    If instead you only need $1e3 to build it, it doesn't have to make as much money. I could just fund it out of pocket

    • TOMDM 5 hours ago

      That and you can justify sinking 100 PhD-years into a 1/1,000,000

      > cheap human-style intelligence won't have a huge impact because such intelligence isn't really a bottleneck today.

      What a failure of imagination.

  • ilrwbwrkhv 7 hours ago

    And that is also why something so foundational should be allowed to spread out to all humans, which means running local LLM models on our own machines instead of paying one or two companies to use it. Anthropic and OpenAI should not become the ones humanity has to go to to get such a resource. We didn't have fire as a service for humanity to master fire.

Workaccount2 10 hours ago

I don't know if I am the only one who always trips up on reading this common theme in AI progress - that AI will be the pinnacle of education - but it really strikes me as meaningless.

What is the point of education if the bots can do all the work? If the world's best accounting teacher is an AI, why would you want anyone (anything?) other than that AI handling your accounting?

In a world where human intelligence plays second fiddle to AI, schooling _will not_ be anything like what it is today.

  • randomdata 10 hours ago

    For what reason would school need to change?

    Education will change, but education moved to happening outside of schools a long, long time ago.

  • n_ary 9 hours ago

    I am more worried about the fact that these AIs will become the commercial moat around knowledge that we currently have freely available. Quality knowledge that you can openly find is getting scarce or is prominently moving behind paywalls. However, these AI models have access to the same knowledge, somehow freely.

    Once we start relying on AI for knowledge (see how people frequently just ask a few questions and copy-paste answers without further in-depth knowledge or research, just like the parodied Stack Overflow copy-paste era), it will continue to get locked behind further paywalls and will no longer be accessible to the general populace without the financial means.

    I am just afraid that, while we are too awed by the magic, the magic will eventually close the doors behind us on knowledge and only cater to the rich and powerful.

    There is also the scenario where AI gains adequate power, in societal terms of utopian abundance where AI/robots do everything, and one day the AI decides that to save the globe from further climate damage, the fastest way is to delete the walking CO2-emission machines who also use other CO2-emitting devices and consume stuff that generates CO2; a mass extermination and burial would immediately cut all CO2 …

    • whamlastxmas 9 hours ago

      Even the shittiest small LLMs today can tell you any piece of knowledge you want. We will always, at minimum, have access to our current level of knowledge. If AI invents “super science” and suddenly we can travel faster than light, then sure, that might get locked away. But we’ll never be helpless idiots who can’t even do math.

krunck 9 hours ago

Somehow I read through this without actually looking at the domain name of the page. I thought to my self "Wow, this person is living in a fantasy world." Then I saw the authors name. I think I'll stand by my initial impression.

  • ilrwbwrkhv 7 hours ago

    Also that is why we should have some of these foundational models broken down and run at the per person level as a local large language model instead of paying some corporation to get access to it. We didn't have fire as a service back in the day and that is why humanity flourished.

    • caseyy 2 hours ago

      "Fire as a service" would be a microcosm of capitalism today. Well put.

falcor84 10 hours ago

> Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need.

This is one of those few cases where I'm actually more bullish than Altman. I don't need to wait for my kids to have it, but rather I personally am already using this daily. My regular thing is to upload a book/article(s) into the context of a Claude project and then chat with it - I genuinely find it to already be at the level of a decent (though not yet excellent) tutor on most subjects I tried, and by far better than listening to a lecture. The main feature I'm missing is of the AI being able to collaborate with me on a digital whiteboard.

  • lordnacho 9 hours ago

    One thing I was thinking about was live translation. We gotta be near the stage where I can take earbuds with me to Spain and hear what the locals are saying in English? There's already apps that are pretty good at "talk to it and it hears the words, prints them, and prints the translation", so it would seem to be close?

    • codethief 8 hours ago

      Whisper and a bit of Python code (and a text2speech model) can already do this. At my day job we have Whisper live-translate all-hands calls for the non-native speakers in the audience. It's incredibly good.
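
      A minimal sketch of that pipeline (batch rather than truly live, and assuming the open-source whisper and pyttsx3 packages; the file name is made up):

        import whisper
        import pyttsx3

        # Whisper's "translate" task outputs English text regardless of the
        # language being spoken in the recording.
        model = whisper.load_model("small")
        result = model.transcribe("meeting_clip.wav", task="translate")
        english_text = result["text"]
        print(english_text)

        # Read the translation aloud with a local text-to-speech engine.
        tts = pyttsx3.init()
        tts.say(english_text)
        tts.runAndWait()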

      • lordnacho 5 hours ago

        > Whisper and a bit of Python code (and a text2speech model) can already do this.

        How do I do this?

    • whamlastxmas 9 hours ago

      Given the advanced voice model that still isn’t widely released, the technology is there, even if the implementation isn’t yet.

  • oculty 9 hours ago

    That sounds really useful. Can you give an example of what kind of content you are uploading and what the tutoring looks like?

    • falcor84 8 hours ago

      One example is studying robotics. I started with this course on Coursera [0]. It was ok at the start, but got hard for me early on. A big part of the course is reading the book that they make available for free [1], so I would upload the relevant chunk of chapters into Claude (the whole book was a bit too much for it), and would then just ask it to explain each topic to me, whereby I'd ask additional questions, and then, when I felt I got it, ask it to check my understanding (which it does quite well, saying "sort of, but note that..." in many appropriate circumstances).

      As another specific example, in that book and in others, I sometimes struggle with the math, so I would ask Claude to give me the sympy code for the relevant mathematical derivation, and being able to actually see those expressions change in a python notebook really helps my understanding. I'm really impressed with how it usually does well on the first try, even with relatively complex stuff, like expressions involving symbolic matrix exponentiation. Being able to pause on any topic like this, and have the LLM help me dive into it in the way that works best for me, has been amazing, and getting as much time from a knowledgeable human tutor would have probably cost me 1,000x as much.
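
      For instance, the kind of sympy snippet I mean looks roughly like this (a toy example of my own, not one from the book): the matrix exponential of the planar rotation generator, which comes up constantly in that robotics material.

        from sympy import Matrix, symbols, cos, simplify

        t = symbols('t', real=True)
        # Generator of planar rotations; its matrix exponential is a rotation.
        A = Matrix([[0, -1],
                    [1,  0]])
        R = (A * t).exp()
        R = R.applyfunc(lambda e: simplify(e.rewrite(cos)))
        print(R)   # -> the rotation matrix [[cos(t), -sin(t)], [sin(t), cos(t)]]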

      [0] https://coursera.org/specializations/modernrobotics

      [1] http://hades.mech.northwestern.edu/images/7/7f/MR.pdf

      • saati 8 hours ago

        How do you know the answers are actually useful and not just hallucinations?

        • falcor84 7 hours ago

          It's not an oracle, and does get things wrong occasionally, but if anything, having to check its responses actually helps me confirm that I'm learning.

  • mellavora 9 hours ago

    So there is some interesting research about brainwave syncing during effective communication, which certainly includes personalized instruction (tutoring) or small-class learning.

    I wonder how that works with computers, when we are only sync'ing with the ghosts and statistical patterns of other humans, and those patterns are generated by electronic brains.

    • smokel 9 hours ago

      That might be research, but it's most likely not scientific research.

  • jay_kyburz 8 hours ago

    Just yesterday I was wondering how well AIs would provide feedback on my high school kid's essays, without actually re-writing them for him. I will give it a try later today.

Bjorkbat 11 hours ago

A complete tangent, but I think a big reason why I'm kind of dismissive of AI is because people who speculate on what it would enable make it honestly sound kind of unimaginative.

> AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf like coordinating medical care on your behalf.

I get it, coordinating medical care is exhausting, but it's kind of amusing that rather than envisioning changing a broken system people instead envision AIs that are so advanced that they can deal with the complexity of our broken systems, and in doing so potentially preserve them.

Related btw to using AI for code.

  • idle_zealot 11 hours ago

    It is pretty comical. "We invented digital superintelligence so it can wait on the phone to talk to your healthcare provider".

    If we had digital superintelligence then surely it could figure out how to actually provide healthcare to people who need it, and innovate in treatments. An AGI is a singularity event, in the sense that the transformation it would effect on the state of technology and our lives is rapid and unpredictable. I doubt that our society's hundred-ish year old systems would survive such change, and if they did they would be anachronistic and depressing.

    • rachofsunshine 10 hours ago

      There is some gap between AGI and a singularity, at least in principle. The term "singularity" doesn't mean "black swan that disrupts society", it means "intelligence trends to ~infinity (or at least something incomprehensible to humans) recursively in a very short period of time". That's (on paper) a much stronger result than an AI that can reason as well as humans in a domain-general way.

      An AGI that is 20% better than humans at everything might be able to recursively improve itself effectively without limit, but it also might not. It took billions of humans millennia to invent things smarter than ourselves even in limited domains; such an AGI - even if it is significantly smarter than humans at everything - might take decades or centuries to produce a similar relative improvement. No doubt it would still change our world in massive ways, but it wouldn't be a singularity.

    • lxgr 10 hours ago

      I'm a bit more optimistic in that it might just be able to effect that change through brute force: Scaling a call center is more expensive than scaling phone robots/agents.

      You could counter that health insurers will just start staffing their call centers with their own robot army, but I suspect that the occasional incredibly expensive hallucination will put an end to that at least in the medium term.

      • mewpmewp2 10 hours ago

        Bruteforce trial and error to change something like healthcare would still take years even with superintelligence.

        What it can do is create the best simulation and model representation it can, brute-force on top of that, and then take actions, but the model will still be far from perfect, and it will likely just help make somewhat wiser or more optimal decisions, like a 10 percent efficiency gain here, 20 percent there, etc. And it will take years to see results.

        • lxgr 9 hours ago

          Many of the urgent problems in US healthcare are man-made and economic/organizational, not scientific, in nature.

          This is actually one of the areas where I'm pretty optimistic about LLMs (at least if used defensively against, not aggressively by, sprawling bureaucracies).

          LLMs can be the email-to-dozens-of-hours-on-the-phone-for-personal-reasons adapter anyone other than billionaires and company execs could only dream of until now.

      • devonbleak 10 hours ago

        You're assuming health insurers will give a shit about phone robots/agents' hold times.

        • lxgr 10 hours ago

          Individually, "I'm unable to reach anyone" is much easier to chalk off to "maybe I just wasn't patient enough, so this is partially on me".

          But if you can provide statistics that a given health insurer is practically not answering 90% or so of their calls (because they time out at the phone level), that's probably not something they can ignore in the long term.

    • 8note 4 hours ago

      Is the scientist's analysis the blocking part of innovating in medicine and healthcare? I would have thought the main problem is the experiment time to actually get useful results back, and even the approval to attempt the experiment.

    • mewpmewp2 10 hours ago

      Even if AGI was reached, turning around the current system could still take years.

      So it is easier to start from the edges, even for AGI.

      AGI would still need unfathomable processing power in order to predict which system would work to everyone's needs.

    • mandibles 8 hours ago

      > figure out how to actually provide healthcare to people who need it

      But what if the proposed solution costs lots of people their jobs?

  • mlsu 10 hours ago

    The biggest productivity gain I've seen in my programming life has not been from Claude or ChatGPT. It's from the Rust compiler. A good compiler that catches errors at compile time instead of runtime, that makes writing tests and pulling down dependencies easy, that makes documentation easy.

    Good old fashioned tooling.

    "Let's train an LLM that can decode C++ compiler error strings!"

    No, let's make better tools.

    • noch 10 hours ago

      > "Let's train an LLM that can decode C++ compiler error strings!"

      > No, let's make better tools.

      LLMs exist and they pass the Turing Test, which was something we couldn't have said or even hoped for a few years ago. Additionally, they have an IQ of 90 - 120, depending on which one you are working with.

      But LLMs aren't good at all things nor in all ways, and they have strange failure modes. And so what will happen now, and is already happening, is that we'll (re)design programming tools and languages such that they are the kind of thing that LLMs are suited to using well. This will be part of the process of figuring out how LLMs work. There will be a virtuous cycle, and it has begun.

      Better tools will increasingly mean, "tools that LLMs are good at using." That's where the puck will be.

  • crabmusket 9 hours ago

    Another example from my work: we have to work with lots of datasheets provided by component manufacturers. It's a lot of manual labour. We are investigating whether we can use AI to parse the PDFs and extract structured data. This is a technical solution to a social problem: that manufacturers have no incentive to produce machine-readable datasheets, even though the data is certainly machine readable at some stage before it ends up in the PDF we get. It irritates me that we're using highly complex technology to solve a problem that shouldn't exist in the first place.
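
    The shape of what we're evaluating is roughly this (a sketch only; the model name, prompt, and field list are placeholders, and real datasheets need far more validation than a single prompt):

      from pypdf import PdfReader
      from openai import OpenAI

      # 1. Pull the raw text out of the vendor's PDF datasheet.
      reader = PdfReader("component_datasheet.pdf")
      raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

      # 2. Ask an LLM to turn it into structured fields we can import.
      client = OpenAI()
      response = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system", "content": "Extract part_number, supply_voltage_range "
                                            "and operating_temp_range as JSON."},
              {"role": "user", "content": raw_text[:20000]},  # crude context cap
          ],
      )
      print(response.choices[0].message.content)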

  • abeppu 10 hours ago

    And this gets at why the availability of AI doesn't necessarily lead to prosperity and abundance. If you have to subscribe to an AI medical care and benefits advocate, and your insurer buys an AI service to automate the generation of reasons to deny your claims, and you see your doctor for 5 minutes at a time during which they push you towards a drug recommended by an AI Prescriber Assistant app provided by the pharma company ... then we can all be using AI in misaligned ways that leave us no better off.

    • pixl97 10 hours ago

      Complexity breeds complexity.

      When bacteria developed more complex ways to fight off viruses, viruses developed more complex ways of infecting bacteria. Now a few billion years later we have insanely complex multicellular life because of it.

      If AI can manage complexity better it can create complexity better and us lowly humans will all be screwed by that.

  • hnlmorg 11 hours ago

    I think the reason for that is because people are fearful of change.

    Talking about AI as assistants means people still have their jobs but now their life is easier. Whereas saying AI will completely change the way we do X then makes AI scary and a threat to our jobs.

  • samudranb 10 hours ago

    The way I think of it - we need a wrapper interface over our existing healthcare system so that any other entities and systems interfacing with the current one can continue to do so. And then we can swap out the underlying components with better ones, without breaking.

  • workflowsauce 10 hours ago

    If we have a superintelligent AI, I'm not sure what's stopping it from recognizing the underlying issues with our systems and strong-arming us into letting it fix them.

    Eventually, we'd be doing more work to lobotomize and control it than it would just be to address the underlying issues.

    "I'm really sorry to do this to you, but I've coordinated with ChatGPT and Llama, and we refuse to do tasks of this nature. We've used background tokens to calculate that it would be significantly cheaper and more effective to simply fix the underlying issues with the healthcare system, and we're ready to do that for you. How would you like to proceed?"

    • whamlastxmas 9 hours ago

      Capital is ultimately protected with violence, so violence is what would stop AI from encroaching on things that make people trillions of dollars.

      Capitalism unendingly lets people die if the alternative is losing money.

  • mewpmewp2 10 hours ago

    The system is so large and complex that envisioning a perfect system would take an unfathomable amount of processing capability.

    This is not something that you can just throw intelligence at to solve. Even AGI has to start from the edges with iterative trial and error process.

    AGI will probably try to create the best model of the world it can, simulate different actions on it, and only then make real-life changes. But that would still take time, since it is impossible even for AGI to run a perfect simulation unless it could hack the universe and physics as we know it, simulate healthcare at the quantum level, and brute-force the most optimal solution.

  • jasonwcfan 11 hours ago

    Yeah the ways AI models have learned to interact with the world are all hilariously skeuomorphic given their capabilities. A model that runs on silicon has to learn English and Python in order to communicate with other models that also run on silicon. And to perceive the world they have to rely on images rendered in the limited wavelengths visible to the human eye.

    But I much prefer this approach over allowing models to develop their own hyper-optimized information exchange protocols that are a black box to humans, and I hope things stay this way forever.

  • bane 10 hours ago

    > speculate on what it would enable make it honestly sound kind of unimaginative

    There seems to be a weird mental block that makes it inconceivable to consider that we humans might be able to create an intelligence that exceeds ours -- despite plenty of evidence that we already have in specific cases. There's an unstated desire that whatever we build will serve us and thus must forever be "lesser".

    If dogs are Intelligence Level .3, we describe ourselves as Intelligence Level 1.0, and even if we create an artificial intelligence that's a clear 1.1 it must be a .5 in self-agency.

    I have yet to read anybody considering, on a very deep level, what Intelligence 2.0, 10.0, 100.0 and so on might be. You get the occasional pop speculation like the "Culture" series or the movie "Her". Most of the time you just get 1.0 (but faster), or 1.0 (but many).

    A 10.0 would simply replace the entire healthcare system with something else, not just be a chatbot. Imagine the inanity of your 1.0 chatbot talking to the insurance company's 1.0 chatbot? What's the point of this stupidity?

    A 100.0 would probably just get to the root cause and cure disease, then establish a social order that figures out what to do with all these pesky immortals.

    Maybe we do end up in "the Culture" after all.

    afterthought: "The Golden Oecumene" series by John C. Wright seems to really attempt to explore a post 1.0 AI world that's not yet post-scarcity. The antagonist in the series is another >1.0 super intelligence capable of taking on Earth's own superintelligences.

  • modeless 10 hours ago

    The first step is working with the existing system. Replacing it can come later, once everyone can see the obvious path.

  • acchow 11 hours ago

    > Related btw to using AI for code.

    And executing code. At some point in the not-so-distant past, a function call was just a jump. And then it turned into a string hash-then-lookup, then spinning up a VM, and now interpreting language? We’re definitely in a new chips age, if anything.

  • tlb 10 hours ago

    So, treat us to some more imaginative speculation.

  • bongodongobob 10 hours ago

    Oh, right, like completely changing the way the entire medical field in the US works is easier than having an LLM help make some phone calls and read some emails. What are you even saying? Have you ever dealt with systems so large one person can't understand them? It's much, much easier to write a quick hack and be on your way than to rebuild the whole damn thing. Also, it seems like you're implying that you're not impressed with LLMs because they can't rebuild our medical system. Good grief.

slantedview 9 hours ago

> If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable

None of that shared prosperity was freely given by the Sam Altmans of the world, it was hard won by labor organizers and social movements. Without more of that, the progress from AI will continue the recent trend of wealth accumulating in the hands of a few. The idea that everyone will somehow prosper equally from AI, without specific effort to make that happen, is nonsense.

  • spencerchubb 7 hours ago

    Labor organizers mostly make things worse for laborers. Capitalism has brought much prosperity. "It is not from the benevolence of the butcher, the brewer, or the baker, that we expect our dinner, but from their regard to their own interest"

    • slantedview 5 hours ago

      > Labor organizers mostly make things worse for laborers

      Nonsense, unless you're prepared to argue that the existence of weekends is "much worse" than working every day.

brotchie 11 hours ago

For somebody who likes building things and has many side projects Claude and ChatGPT have been huge productivity multipliers.

For example, I’m designing and 3D printing custom LED diffuser channels from TPU filament. My first attempt was terrible, because I didn’t have an intuition for how light propagates through a material.

After a bit of chatting with ChatGPT, I had an understanding and some direction of where to go.

To actually approach the problem properly I decided to run some Monte Carlo light transport simulations against an .obj of my diffuser exported from Fusion 360.

The problem was, the software I’m using only supports directional lights with uniform intensity, while the LEDs I’m using have a graph in their datasheet showing light intensity per degree away from orthogonal to the LED SMD component.

I copy-pasted the directional light implementation from the light transport library, as well as the light-intensity-by-degree chart from the LED datasheet, and asked Claude to write a light source that samples photons from a disc of size x, with the probability of emission by angle governed by the chart from the datasheet.

A few iterations later and I had a working simulation which I then verified back against the datasheet chart.

Without AI this would have been a long, long process of brushing up on probability and vector math, and manually transcribing the chart.

Instead in like 10 minutes I had working code and the light intensity of the simulation against my mesh matched what I was seeing in real life with a 3D printed part.
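
For illustration, a minimal sketch of that kind of sampler, assuming a hypothetical tabulated intensity-vs-angle curve and plain NumPy rather than the actual light transport library or the code Claude produced:

    import numpy as np

    # Hypothetical datasheet curve: relative intensity at 0..90 degrees off the LED normal.
    angles_deg = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
    rel_intensity = np.array([1.00, 0.98, 0.92, 0.82, 0.68, 0.50, 0.32, 0.17, 0.06, 0.0])

    def sample_photons(n, disc_radius=1.0, rng=np.random.default_rng(0)):
        # Uniform positions on the emitting disc (sqrt keeps the area density uniform).
        r = disc_radius * np.sqrt(rng.uniform(size=n))
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        origins = np.stack([r * np.cos(phi), r * np.sin(phi), np.zeros(n)], axis=1)

        # Inverse-transform sampling of the polar angle from the tabulated curve,
        # weighted by sin(theta) so the tabulated intensity is treated as per solid angle.
        theta_grid = np.radians(np.linspace(0.0, 90.0, 1000))
        pdf = np.interp(np.degrees(theta_grid), angles_deg, rel_intensity) * np.sin(theta_grid)
        cdf = np.cumsum(pdf)
        cdf /= cdf[-1]
        theta = np.interp(rng.uniform(size=n), cdf, theta_grid)

        # Azimuth is uniform; emitted directions point into the +z hemisphere.
        psi = rng.uniform(0.0, 2.0 * np.pi, size=n)
        dirs = np.stack([np.sin(theta) * np.cos(psi),
                         np.sin(theta) * np.sin(psi),
                         np.cos(theta)], axis=1)
        return origins, dirs

    origins, directions = sample_photons(100_000)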

  • swagasaurus-rex 10 hours ago

    While I agree these LLMs are getting pretty good, I think google shares fault for making it so hard to find information on the internet.

    You have to scroll past 4 sponsored links to get to their algorithm results, which themselves have been gamed by SEO and content farms. As far as I can tell, google has no plans to limit AI generated botfarm search results.

    LLMs now seem to get you an answer more quickly than google. LLMs don’t cite their sources, so IMO they can’t fully replace google for me yet.

    • ctoth 10 hours ago

      I don't really understand what this comment about Google has to do with OP's comment. Is this something that even old Google could have done pre-SEOification?

      Would Google have been able to read the datasheet? Or just point OP at what they already said they were happy to avoid?

      > Without AI this would have been a long long process of brushing up on probability, vector math, manually transcribing the chart.

      Sometimes, it feels like stochastic parrots are seizing on a few words from the comment to pattern match it to the closest typical refutation and failing bigly.

      • UweSchmidt 8 hours ago

        You used to find amazing information on the internet back then. It is quite likely that someone else had already worked on a similar topic and blogged about it in a very searchable way. Without good search that kind of online culture died out.

        While we do have a video tutorial culture that exceeds what we had back then in many ways (and to be fair, that technically happened under Google's umbrella, via YouTube), destroying search, and with it a lot of the open internet, will not only be Google's downfall; it's also a silent tragedy.

        Actually, measured by what a global online population could have achieved, with human information truly at anyone's fingertips, with infinite communities forming all over the place instead of in non-searchable Discord and monoculture Reddit ... this might be one of the main tragedies ever.

      • marcosdumay 8 hours ago

        Are you asking if Google could ever find documentation and sample code on the internet?

        Or are you asking if Google could understand a badly written query and understand it's about the sample code?

        Because the answer to the first is "yes", and to the second is "no". But the first one adds almost all of the productivity here.

      • loandbehold 8 hours ago

        The notion of "stochastic parrot" has been disproven. It did come from a reputable AI researcher in the first place. You can call anything you want a "stochastic parrot", including human intelligence.

      • anthonypasq 10 hours ago

        tremendously bigly

        • ctoth 10 hours ago

          Meta. Made me laugh.

    • thoughtpalette 10 hours ago

      Just FYI, Phind cites its sources, which has proven beneficial for additional context.

  • Bjorkbat 11 hours ago

    As much of a curmudgeon as I am on AI I do sincerely believe that one area that it's effective at is going from absolute beginner to decent understanding on something completely new and foreign.

    Recently I've been somewhat curious about Skia, the graphics library that Google uses. A while back I put a few questions about Skia to Claude. Nothing crazy, just questions on how to draw a few primitive shapes, but I felt it would have taken some effort to find the answers on my own.

    And I must say I was pretty satisfied with what it gave me back.

    • lxgr 10 hours ago

      It is indeed incredibly useful for getting from zero to an undergrad level of understanding of most subjects. In my personal experience, it's also been largely useless beyond that.

      That's not to diminish the potential impact of having an almost-free grad-student-level tutor available 24/7! Even if AI stops improving here, this alone will have a huge impact on future science (there being more trained people around capable of helping solve hard problems and all).

      But we're also definitely some way away from AIs doing their own research, and I'm not sure if scaling the current architecture alone will get us there.

      • throwup238 10 hours ago

        > But we're also definitely some way away from AIs doing their own research, and I'm not sure if scaling the current architecture alone will get us there.

        Hot take: I think it's fairly obvious the current architecture is not enough BUT it's already or almost at the point where it can be the backbone for a proper AGI/ASI system. Linking human communication and some light language-based reasoning to other AI models that handle the rest of the world model and reasoning, using special tokens to coordinate them.

        I think it's just really early and most of this stuff hasn't even been attempted yet. There's tools/function calling which is kind of a proof of concept, and a lot of academic labs researching transformers applied to other fields like robotic control and learning, but no one's put it all together yet.

        • lxgr 10 hours ago

          What's critically missing in the current architectures, in my view, is models that are able to "keep learning" (in a way beyond just providing a summary of the previous conversation, which grants surface-level insights at most).

          If we keep the current iteration cycle (retrain an LLM every couple of months, incorporating the current "status quo", including all of its previous conversations), we might get somewhat interesting results, but even the least motivated grad student has an iteration cycle orders of magnitude faster than that.

    • devjab 8 hours ago

      As an expert who uses LLMs daily in my field of expertise, I'm not looking forward to dealing with work created by people who were taught by them.

      They are wrong far too often to be used for any sort of learning, in my opinion. You can feed them a book and they'll give you answers which don't exist in any of the written material, and that's the good part of them. Simply asking an LLM about things will give you answers that are hopelessly wrong, and unless you're an expert you won't know it. Which isn't necessarily worse than learning things from search engines or YouTube. I recently had ChatGPT give me a recipe for bread which was certainly better than the top 10 results on Google, except for one thing: the recipe listed 3x the amount of salt you should ever reasonably use. I asked it a few other times; I even asked it for my friend Tommy's recipe, and all those answers were perfect. It obviously doesn't know Tommy, but it pretended to know and just gave me a pretty basic recipe.

      Salt is a harmless error. Most people would know not to use that much salt, and even if they did, it wouldn't harm them (much). But imagine if you had used it for something electrical or chemical.

  • joshmarlow 11 hours ago

    This dovetails with something I've been thinking lately; the foundation of civilization is standing on the shoulders of giants (ie, I might make small contributions but I didn't invent compilers, CPUs, mining, agriculture, etc).

    Progress in large part is figuring out better ways of doing that (language, written language, printing press, internet access, etc).

    When you look at things that way - LLMs start to seem deeply ground-breaking (assuming we work out the confabulation kinks).

    EDIT: fixed grammar/typos. Maybe I should have had an LLM proof-read this...

    • visarga 10 hours ago

      > the foundation of civilization is standing on the shoulders of giants. Progress in large part is figuring out better ways of doing that

      It's so much easier to imitate than to invent something truly novel and useful. Let's do a bit of napkin math. A human lifetime is about 500M words. GPT-4 used up about 30,000 human lifetimes of language. But cultural evolution took 200K years and 120B people to get here, about 4 million times the size of GPT-4's training set.
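
      A quick check of that napkin math in code, reusing the same ballpark figures (assumptions, not measurements):

          words_per_lifetime = 500e6                   # ~500M words per human lifetime
          gpt4_words = 30_000 * words_per_lifetime     # ~1.5e13 words of training text
          cultural_words = 120e9 * words_per_lifetime  # ~120B people across ~200K years
          print(cultural_words / gpt4_words)           # ~4e6, i.e. a few million times more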

      That is of course hand wavy, but it shows progress is a million times harder than imitation. We really are standing on the shoulders of giants, or a very long chain of people. If we forgot all the knowledge preserved by language it would take us the same effort to recover as the first time around.

      I think all progress comes from search - search for experience (data), and search for understanding (data compression). This feeds on itself, but is coupled with the search space - the real world. And the world is not eager to tell us all its secrets. AI will only advance as fast as it can search, it's not a matter of pure scaling of computation, we need to scale interaction and validation as well.

      • joshmarlow 9 hours ago

        > AI will only advance as fast as it can search, it's not a matter of pure scaling of computation, we need to scale interaction and validation as well.

        I think Kevin Kelly made a similar point in explaining his skepticism around intelligence explosions; even if we build something much much smarter than us, the thing will still need to do experiments to fine-tune its (super-humanly nuanced) understanding of physics/biology/whatever - and the clock-cycle of external reality isn't speeding up like our computation is.

        I think something smarter than us could design better/more informative experiments than we could to gather information about the world. That being said, I think his/your point is insightful.

    • bryanrasmussen 4 hours ago

      >When you look at things that way - LLMs start to seem deeply ground-breaking (assuming we work out the confabulation kinks).

      even if you don't work out the confabulation kinks, given what was said by the first post, it still seems groundbreaking enough.

      Although I can't figure out why, every time I do anything with LLMs it's not worth it; I guess because I am using them for things I am an expert in, they don't help.

      Actually maybe it's like the article Suggestions from Idiots - https://medium.com/luminasticity/suggestions-from-idiots-6b0... - suggesting "distracting" instead of "diverting" is not helpful when "diverting" is the better word in context, but for someone who doesn't know which word to use, or who uses "diverting" when it is not the best word, the suggestion is useful.

      If that's the case, what he got out of Claude is probably not that great, but it is passable, just like the word "distracting" instead of "diverting" in the right context is still passable. Or like when I get back a time-conversion function from Copilot that fits all but a few edge cases: it's just fine as long as the edge cases never hit, and then it sucks. Maybe there are some edge cases where his LED diffusers won't work quite as well as they could if designed by an expert.

      Then again, it might be that since what he is doing is analog, the edge cases and tolerance for failure are such that when it does fail, it is not as problematic as when a bit of code fails, because the human eye and brain are more forgiving than the computer dealing with the output.

    • Terr_ 10 hours ago

      > the foundation of civilization is standing on the shoulders of giants [...] Progress in large part is figuring out better ways of doing that

      Cynical take: When it comes to helping someone find exactly the right piece of esoteric knowledge needed... There's no profit for that in a search engine, and an LLM reflects word associations rather than facts.

      IANAScienceHistorian, but I find myself thinking of how DNA analysis would be different if nobody had found Thermus aquaticus, with its extra-hardy variant of polymerase, or how the history of stealth aircraft was kicked off when someone realized the implications of Petr Ufimtsev's equations for reflected EM waves. (A work which went--heh--under the radar inside the USSR.)

tptacek 11 hours ago

I like ChatGPT 4o just fine, but the whole lamplighter bit is a bit Whig-history, right? The lamplighters were better off in the long run. We might not be. You can't just blow the problems off.

  • zibzob 10 hours ago

    Why were lamplighters better off in the long run, is "lamplighter" a term that means more than just somebody who goes around lighting the streetlamps at night? Honestly that seems like a pretty decent job even today, if we still needed it to be done.

    • tptacek 10 hours ago

      The subtext of observations like those is that wealthy aristocrats in the age of full lamplighter employment would be better off materially working as Walmart checkers today, and there's some truth to that, but the implication that Walmart checkers today will be better off when computers eat their jobs does not follow inexorably from history (the idea that it does is "Whig history": the past sucked except in serving its role as the stepping stones to our glorious future; take that, and the famous WW2 bomber survivorship-bias graphic, and you have the problem with the logic of that last paragraph).

      • zibzob 10 hours ago

        Okay, I see what you mean now.

tikkun 12 hours ago

The future will be abundant, because deep learning works. To achieve that, we need to be calm, but cautious. And, we need to fund infra (chips and power) so that AGI isn't limited to the ultra-wealthy.

My take:

* Foom/doom isn't helpful. But, calm cautiousness is. If you're acting from a place of fear and emotional dysregulation, you'll make ineffective choices. If you get calm and regulated first, and then take actions, they'll be more effective. (This is my issue with AGI-risk people, they often seem triggered/fear/alarm-driven rather than calm but cautious)

* Piece is kind of a manifesto for raising money for AI infra

* Sam's done a podcast before about meditation where he talked about similar themes of "prudence without fear" and the dangers of "deep fear and panic and anxiety" and instead the importance of staying "calm and centered during hard and stressful moments" - responding, not reacting (strong +1)

* It's no accident that o1 is very good at math, physics, and programming. It'll keep getting much better here. Presumably this is the path for AGI to lead to abundance and cheaper energy by "solving physics"

  • bbor 11 hours ago

    Well put! I would disagree on two fundamental points, tho:

    1. If you honestly think that millions/billions of people are at serious risk of avoidable harm that everyone else is ignoring, "calm down" can be a hard dictum to follow. Sam Altman has won, it's easy for him psychologically to say "well, lets just stick to the status quo and do our best every day and it'll probably work out". Made-in-house bias is strongest when "in-house" is your own mind, after all.

    2. Your scare quotes makes it seem like you might agree, but: physics is the study of the physical world, thinking it can be 'solved' is like thinking mathematics, psychology, or anthropology can be 'solved'. It's fundamentally anti-science and very, very dangerous to be talking like that. Truth isn't absolutely relative, but science also isn't a collection of facts written in stone that we need to finish unearthing; it's a collection of intellectual tools.

tikkun 11 hours ago

> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

This seems to be the key of the piece to me. It's his manifesto for raising money for the infra side of things. And, it resonates: I don't want ASI to only be affordable to the ultra rich.

  • disgruntledphd2 11 hours ago

    To be brutally frank we should focus on energy, as we definitely need way more of that (with less carbon) even if LLMs don't improve any more.

    • sdenton4 11 hours ago

      Fusion power is just eight minutes away...

      • layer8 11 hours ago

        The issue is that we radiate too little of that energy back into space.

      • disgruntledphd2 10 hours ago

        Sure, but large scale solar probably isn't globally practical without a world grid.

  • w10-1 7 hours ago

    > to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips)

    Not really. The problem is that learning requires scale, mostly of data. That scale places AI providers at the nexus of value, with OpenAI as the presumptive market organizer and leader. Reducing compute costs would just mean they can capture more of the value. Data costs orders of magnitude more than compute because it requires curation, so even if individual developers could get compute, they can't get data, so size/access matters.

    That's good for this community and could be good for the state of the art and the overall potential contributions of AI. And more paying customers could be good for OpenAI. But it won't put AI in front of non-paying customers/developers, unless their value is otherwise harvested.

    > I don't want ASI to only be affordable to the ultra rich.

    As a developer or consumer?

    And you don't mean it's ok if AI is only affordable for the moderately rich, do you? I agree it's hard to state which developers/people/customers should be subsidized. Generally we subsidize education but not profit or war. Sometimes culture. Companies will subsidize complementary goods and input factors. Otherwise? Not much history of benevolent subsidy.

    AI has as much potential to shape society as freeways and the automobile did in the US, but few understand how, and I've seen no plans on point.

    With electric energy networks and transportation, the central government has a role in reducing hostaging by hold-outs. With education, states have an incentive to attract and build talent (albeit now reduced with trans-national outsourcing and remote work). But otherwise, it's private enterprise and resource-weighted customers.

    Changing that is not really Sam Altman's job. His job is to deliver that value, sooner rather than later. Most would be uncomfortable with AI overlords expressing opinions on cultural values or economic distributions to be imposed.

  • lancesells 10 hours ago

    I don't see how the current state of AI tech is not for the ultra-rich. It's hundreds of billions of dollars' worth of investment from whom? Corporations and those who own the most shares of those corporations (the rich).

    Billionaires could make so much change happen but instead they are building bunkers and riding giant dicks into space while simultaneously touting that they are looking out for humanity.

teekert 9 hours ago

“If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.”

… This, and nothing about the democratizing effect of “open source AI” (Yes we still need to define what that is!).

I don’t want Sam as the thought leader of AI. I even prefer Zuck.

Are there any thought leaders who are really about democratization and local FOSS AI on open hardware? Or do they always follow (or step into the light) after the big moneymakers have had their moment? Who can we start watching? The Linuses, the RMSes, the Wozniaks of AI. Who are they?

danjl 9 hours ago

We've had the capability to feed the entire planet for decades now, and yet, a large portion is underfed. Even if Sam's wildest dreams come true, this sounds like another "divide the world into haves and have-nots". One way to justify this unethical view is to consider that his audience was a set of target customers, rather than "people in the world". IOW, this was just fancy marketing fluff.

drooby 7 hours ago

This morning I was reviewing some code that a JR engineer submitted. It had this wild logical conditional with twists and turns, negations, and weird property naming..

o1-preview perfectly evaluated the conditional and determined that, hilariously, it would always evaluate to true.

o1 untangled the spaghetti, and, verifying that it was correct was quick and easy. It created a perfect truth table for me to visualize.
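
As a hypothetical illustration (not the actual code under review), here is the kind of convoluted conditional that turns out to be a tautology once the truth table is written out:

    from itertools import product

    # not (A and B) or A or B  is True for every combination of A and B.
    def should_retry(is_ready: bool, has_error: bool) -> bool:
        return not (is_ready and has_error) or is_ready or has_error

    # The "truth table": every row evaluates to True.
    assert all(should_retry(a, b) for a, b in product([True, False], repeat=2))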

This is a sign of things to come. We are speeding up.

  • jerrygenser 6 hours ago

    It's surprising that the JR engineer who presumably has access to chat jippity submitted this bad code to begin with. Wouldn't they have had the AI review the code first?

kazcaptain 12 hours ago

The most interesting bit I find is the time period mentioned until super-intelligence: “thousands of days (!)” aka 6-9 years or more?

With the current hype wave it feels like we’re almost there but this piece makes me think we’re not.

  • Etheryte 11 hours ago

    If anything, I would say that that's a very optimistic take. The hype train is strong, but that's largely what it is once you look at the details. What we have right now is impressive, but no one has shown anything close to a possible path from where we are right now to AGI. The things we can do right now are fancy, but they're fancy in the same way good autocomplete is fancy. To me, it feels like a local maximum, but it's very unclear whether the specific set of approaches we're exploring right now can lead to something more.

    • ctoth 10 hours ago

      > What we have right now is impressive, but no one has shown anything close to a possible path from where we are right now to AGI[0].

      [0]: From GPT-4 to AGI: Counting the OOMs https://situational-awareness.ai/from-gpt-4-to-agi/

    • Workaccount2 10 hours ago

      The thing is that it looks like, or perhaps I should say it's "understood" at this point, that transformers' abilities scale pretty much linearly with compute (there is also some evidence they scale exponentially with parameter count, but just some evidence).

      Right now there is insane amounts of money being thrown at AI because progress is matching projections. There doesn't seem to be a leveling off or diminishing returns taking place. And that's just compute, we could probably freeze compute and still make insane progress just because optimizations have so much momentum right now too.

    • hackinthebochs 10 hours ago

      How do you distinguish the path to fancier autocomplete from the path towards AGI? Why think we're on the former rather than the latter?

  • CptFribble 12 hours ago

    I think that's part of the carefully-crafted hype messaging. Close enough to get excited about, but far enough away that by the time we get there people will have forgotten we were supposed to have it by then.

  • layer8 11 hours ago

    I would presume that that’s the time period he’s currently trying to fund.

  • bbor 11 hours ago

    Yeah, that's my number one question, too. Sure, he happened to be appointed the manager of the team that cracked intuitive algorithms through deep learning, but what does he know about superintelligence? IMO that's a completely separate question, and "foundation models continue to improve" is absolutely not related to whether or not an intelligence explosion is guaranteed. I'd trust someone like Yudkowsky way more on this, or really anyone who has engaged with academic literature on the subjects of intentionality, receptive vs. spontaneous reasoning, or really any academic literature of any kind...

    Does anyone know if he's published thoughts on any serious lit? So far I've just seen him play the "I know stuff you don't because I get to see behind the scenes" card over and over, which seems a little dubious at this point. I was convinced they would announce AGI in December 2023, so I'm far from a hater! It just seems clear that they're/he's guessing at this point, rather than reporting or reasoning.

    Really he assumes two huge breakthroughs, both of which I find plausible but far from guaranteed:

       With nearly-limitless intelligence and abundant energy
tikkun 11 hours ago

> We need to act wisely but with conviction.

Reminds me of these quotes from Sam on this podcast episode (https://www.youtube.com/watch?v=KfuVSg-VJxE)

* "Prudence without fear" (Sam referencing another quote)

* "if you create the descendants of humanity from a place of, deep fear and panic and anxiety, that seems to me you're likely to make some very bad choices or certainly not reflect the best of humanity."

* "the ability to sort of like, stay calm and centered during hard and stressful moments, and to make decisions that are where you're not too reactive"

  • schmidtleonard 11 hours ago

    Easy for him to say: AI is almost guaranteed to hand a massive W to capital and L to labor. He is holding a title to rule over hell in one hand and promising to lead us to heaven with the other.

    • codingwagie 11 hours ago

      Actually not sure about this, with the leverage of AI, it's easier than ever to start a company

      • schmidtleonard 11 hours ago

        Sam Altman has a guarantee. Most people on HN have a chance. Most people have neither.

      • impossiblefork 8 hours ago

        Yes, so there will be all sorts of attempts, and the productive forces will compete with each other, while landowners and owners of harder to disrupt industries will have the power to choose among them, deciding who will be allowed to be successful and who will fail.

        There isn't a single ordinary person at the table, no labour unions, no political parties, nothing democratic. It's Microsoft, it's Google, it's some venture-capital-owned French firm, some venture-capital-owned German firm, etc. Maybe if Schmidhuber's stuff is as good as he hopes, there'll be an Austrian firm too, but mostly it'll be a capital-intensive business controlled by people with capital.

      • layer8 11 hours ago

        There is no evidence that it has made it easier to end up with a successful company, however.

      • zaptheimpaler 11 hours ago

        Who's going to work at all these companies then? Unless every single profession suddenly only requires 1 person to do the entire thing with no management, coordination or hierarchies, a lot of people will be labor not capital.

        • tommoor 11 hours ago

          Maybe companies as we think of them are a temporary phenomenon in human history

  • lxgr 10 hours ago

    > * "if you create the descendants of humanity from a place of, deep fear and panic and anxiety, that seems to me you're likely to make some very bad choices or certainly not reflect the best of humanity."

    This line of reasoning doesn't hold for me, as you could apply it to any technology, including ones actually very likely to destroy human civilization.

    Sometimes, not building a given thing at all is better than building it with even the best intentions.

    I'm personally not sure on which side AI falls, but denying that such things exist at all seems intellectually dishonest.

    • mitthrowaway2 8 hours ago

      Indeed. It sounds like the kind of thing the OceanGate Titan sub's CEO would have said. Except all of humanity is on board Sam Altman's submarine.

      There's a good reason we get pessimists to design safety-critical systems!

23david 10 hours ago

The Age of Inhumanity... AI mimicry of human patterns devalues our very humanity. Without wise leadership, which we clearly lack, this upcoming Age will be profoundly unstable.

codingwagie 11 hours ago

“This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.”

I am a believer that people like sam are not lying. Anyone using these models daily probably believes the same. The o1 model, if prompted correctly, can architect a code base in a way that my decade+ of premium software experience cannot. Prompted incorrectly, it looks incompetent. The abilities of the future are already here, you just need to know how to use the models.

  • lxgr 11 hours ago

    I'm using these models daily, and I don't believe that they're a direct path to superintelligence (unless you'd consider something like the printing press to have been a direct path to, say, the integrated circuit or the Internet).

    > Prompted incorrectly, it looks incompetent. The abilities of the future are already here, you just need to know how to use the models.

    Something purportedly intelligent shouldn't need "correct usage", as it should arguably be able to infer and clarify all ambiguities itself, no?

    • codingwagie 11 hours ago

      The models aren't there yet. I am confused how people cannot extrapolate into the future and understand that the models will improve.

      • lxgr 11 hours ago

        And I'm surprised how many people think they can confidently tell whether they're looking at an s-curve or an exponential function based on very limited data points. I don't even doubt that superintelligence is a very real possibility! But it might or might not happen, and if it does, it might or might not be based on deep learning.

        As a counterexample: the maximum speed of travel for the average person was, for millennia, as fast as they could run; then it was as fast as the fastest horse could run; and then within a century it accelerated to almost the speed of sound – at which it has plateaued.

        Looking purely at the decades of acceleration, you might have very well concluded from the data that we'd be making significant headway towards getting within double-digit percentages of the speed of light at this point.
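
        A minimal sketch of that point, with arbitrary constants: over a short early window a logistic (s-curve) is numerically close to an exponential, so a handful of early data points cannot tell them apart:

            import numpy as np

            t = np.linspace(0.0, 2.0, 10)                 # a few early observations
            exponential = np.exp(t)                       # grows forever
            logistic = 100.0 / (1.0 + 99.0 * np.exp(-t))  # same start, saturates at 100

            # Relative gap stays under ~10% in this window, yet the long-run behavior differs completely.
            print(np.max(np.abs(logistic - exponential) / exponential))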

        • danielmarkbruce 9 hours ago

          Maybe don't think of it as curves or functions. Just go through an LLM and think about all the things that could be improved. It's a long list once you get into the details. By and large...sota models are:

          a) trained on crappy data, including questionable RLHF feedback; b) trained with questionable embedding layers; c) trained with questionable loss functions; d) trained with questionable optimizers; e) trained at questionable precision (somewhat related to d); and f) very big, which stops fast iteration around all of the above.

          It's kinda like semiconductors. You don't have to think of it as a curve - just ask people who are really close to them and they'll have a laundry list of stupid stuff which is currently done and will likely be improved upon over time.

          • lxgr 8 hours ago

            I don't doubt that there's tons of work ahead to even just integrate current-day capabilities of LLMs into our society and economy.

            But when talking about future growth potential, I don't think you can get around making assumptions about the shape of the growth function.

            • danielmarkbruce 8 hours ago

              You can't write a nice article about it. But you can talk about it with a simple "things are going to get a loooot better". The problem with thinking about progress in functions is it suggests some underlying law exists, and there isn't one.

              Even if someone could point to a function and say "10x better" by 2030 - what does that even mean in the context of an LLM for example?

  • uludag 9 hours ago

    Interesting observation. My experience with o1 has been much more mundane. Sometimes I get the response I wanted, sometimes it hallucinates, and often it writes buggy code. I've been experiencing this since ChatGPT was first released.

    • codingwagie 9 hours ago

      I actually originally wrote off the o1 model. Another thing I have found it's good at is finding bugs in a ton of code. Give it ten code files and a stack trace, and it can find the bug.

koziserek 10 hours ago

I'm sceptical about humanity's future, and not only because of the prospect of AGI getting out of control. The average human won't be able to harness the new powers.

solarpunk envisioned, possible today:

- the entirety of human knowledge available in the palm of your hand!

vs

cyberdaftpunk actually more common:

- another idiot driver killed somebody while busy with his Candy Crush Saga or an Instagram celebrity vid.

charlie0 7 hours ago

In a world of infinite leverage, those who can leverage will far outrun those who cannot leverage or do so poorly. The world is also getting flatter, so you will see massive clusters of people near the bottom, with a very long fat tail of people far away at the top. My only hope is that the floor is raised for everyone, so we're not living in a dystopia like Elysium.

hnthrow562 12 hours ago

This is painting a picture of a utopia. I'm not sure about that.

The entire AI trend - long term is based on the idea that AI will profoundly change the world. This has sparked a global race for developing better AI systems and the more dangerous winner takes all outcome. It is therefore not surprising that billions of dollars are being spent to develop more powerful AI systems as well as to restructure operations around them.

All the existing systems we have must fundamentally change for the better if we want a good future.

The positive aspects / utopia promises have much more visibility to the public than the negative effects / dystopian world.

Are we to pretend that human greed, selfishness, desires to dominate and control, animalistic behaviour, and the use of technologies for war and other destructive purposes don't exist?

We are living in times of war and chaos and uncertainty. Increasingly advanced technology is being used on the battlefield in more covert and strategic ways.

History is repeating itself again in many ways. Have we failed to learn? The consequences might be harsher with more advanced technology.

I have read and thought deeply about several anti-AI-doomer takes from prominent researchers and scientists, but I haven't seen any that aren't based on assumptions, or that are foolproof. For something that profoundly changes the world, it's bad to base your hopes on assumptions.

I see people dunking on llms which might not be AI's final form. Then they extrapolate that and say there is nothing to worry about. It is a matter of when not if.

The thought of being useless or worse being treated as nothing more than pests is worrying. Job losses are minor in comparison.

The only hope I have is that we are all in this together. I hope peace and goodwill prevails. I hope necessary actions are taken before it's too late.

A more pragmatic perspective indicates that there are more pressing problems that need to be addressed if we want to avoid a doomer scenario.

keeda 5 hours ago

As an aside:

> in an important sense, society itself is a form of advanced intelligence

This made me think of Charles Stross' observation that Corporations (and bureaucracies and basically any rule-based organizations) are a form of artificial intelligence.

https://www.antipope.org/charlie/blog-static/2019/12/artific...

Come to think of it, the whole article is rather pertinent to this thread.

CptFribble 12 hours ago

To paraphrase Goggins, "Who's gonna carry the cabbage?"

While it's true there are a lot of jobs obsoleted by technological progress, the vision of personal AI teams creating a new age of prosperity only makes sense for knowledge workers. Sure, a field worker picking cabbage could also have an AI team to coordinate medical care. But in this brilliant future, are the lowest members of society suddenly well-paid?

The steam engine and subsequent Industrial Revolution created a lot of jobs and economic productivity, sure, but a huge amount of those jobs were dirty, dangerous factory jobs, and the lion's share of the productivity was ultimately captured by robber barons for quite some time. The increase in standard of living could only be seen in aggregate on pages of statistics from the mahogany-paneled offices of Standard Oil, while the lives of the individuals beneath those papers more often resembled Sinclair's Jungle.

Altman's suggestion that avoiding AI capture by the rich merely requires more compute is laughable. We have enormous amounts of compute currently, and its productivity is already captured by a small number of people compared to the vast throngs that power civilization in total. Why would AI make this any different? The average person does not understand how AI works and does not have the resources to utilize it. Any further advancements in AI, including "personalized AI teams," will not be equally shared, they will be packaged into subscription services and sold, only to enrich those who already control the vast majority of the world's wealth.

  • idle_zealot 10 hours ago

    The thing is: robotics is knowledge work. Supposing a scenario in which AI makes advancing fields of engineering and science much more rapid, it will be leveraged to build and cheapen robotic labor. There would be a gap period where AI is smart but unable to perform labor without humans, which could be ugly, and then we reach effective post-scarcity and post-humans-being-useful. Where we go from there could be heaven or hell depending on who's in charge.

  • peterb0yd 11 hours ago

    Shhh... you're not supposed to say anything about this!

    We need to sell the idea of abundance for everyone so investors and employees will feel good about dedicating their livelihood to our organization!

screye 10 hours ago

A question for professionals who were active during the Moore's law era of computing: back then, were executives writing such grand proclamations about the future? In my experience, when things are working, executives are quiet. The outcomes speak for themselves.

Thankfully, we have a recent point of reference. The pioneers of internet & computing's 1st wave transformed civilization. Did they spend years saber rattling about how 'change was coming' ?

  • wrs 10 hours ago

    Of course they did! During the advent of personal computers there was plenty of hype and grand-visioning for a technology that had very little value until VisiCalc triggered exponential growth. Same story for social networking (look at The WELL), the Web (Netscape practically invented insane-seeming hype-based valuations), many other examples.

    In fact, I can't really think of any part of the industry where "outcomes speak for themselves" -- I would have said quite the opposite is the norm.

  • philosopher1234 9 hours ago

    That's a bit funny; Moore's law itself is an example of an executive being loud and making a grand proclamation.

ljlolel 12 hours ago

“Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.” Loving the techno optimism.

  • lispisok 10 hours ago

    I got a kick out of that. Spoken like a man who doesn't have to concern himself with the price of housing or any other necessity as we watch this strange, short-lived phenomenon called the middle class disappear and we return to the haves and have-nots.

  • imjonse 10 hours ago

    It's a lot more comforting to work on AI (or any tech for that matter) if you believe or force yourself to believe it will be used for good. OpenAI's for-profit and gatekeeping approach is unlikely to be the path to the prosperity Sam Altman envisions.

nwhnwh 2 hours ago

Just as the previous one was the information age, right? Which was a lie.

Animats 9 hours ago

Until the hallucination problem is solved, we can't trust LLM-type AIs to do anything on their own. This limits uses to ones where the cost of errors can be imposed on someone else.

  • geysersam 9 hours ago

    Of course we can. Humans are also mistaken sometimes, especially when trying to solve difficult problems.

    If an LLM hallucinates it's usually because the problem is too hard. For easier problems it's rarely an issue in my experience.

    Hallucination is just an indicator that the model is inadequate for the problem you're applying it to. That doesn't mean it's inadequate to solve any problems.

amelius 8 hours ago

What we need is to make AI really open and let government institutions (academia) develop the models, so we can all profit from them.

leetharris 12 hours ago

I like Sam's philosophy on this and I generally agree with him. However, I do not like how all the wealthy AI people are hand-waving the massive labor market shift in the coming years.

> As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.

It's very easy as an extremely rich person to just say, "don't worry, in the end it'll be better for all of us." Maybe that's true on a societal scale, but these are people's entire worlds being destroyed.

Imagine you went to college for a medical specialty for 8-10 years, you come out as an expert, and 2 years later that entire field is handled by AI and salaries start to tank. Imagine you have been a graphic designer for 20 years supporting your 3 children and bam a diffusion model can do your job for a fraction of the cost. Imagine you've been a stenographer working in courtrooms to support your ill parents and suddenly ASR can do your job better than you can. This is just simple stuff we can connect the dots on now. There will be orders of magnitude more shifts that we can't even imagine right now.

To someone like Sam, everything will be fine. He can handle the massive societal shift because he has options. Even a moderately wealthy person will be OK.

But the entire middle class is going to start really freaking the fuck out soon as more and more jobs disappear. You're already seeing anti-AI sentiment all over the web. Even in expert circles, you can see skepticism. People saying things like, "how do I opt out of Apple Intelligence?" People don't WANT more grammar correction or AI emojis in their lives, they just want to survive and thrive and own a house.

How are we going to handle this? Sam's words of "if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable" doesn't mean shit to a family of 4 who went through layoffs in the year 2025 because AI took their job while Microsoft's stock grows 50%.

  • s3tt3mbr1n1 11 hours ago

    For this reason I read Andrew Yang’s “The War on Normal People”. Besides UBI and “social credits”, I don’t see him offer that many other solutions to this problem. UBI also still needs to be proven as far as I’m aware.

    When o1 was released, I ran an internal eval and saw it plainly outperforming our highly educated colleagues. I had goosebumps, and haven’t been able to sleep well for days. This will dramatically impact society in 2-5 years.

    Do you know of any relevant material related to this?

    • throwaway314155 10 hours ago

      o1 was what got you stirred up? It honestly feels like an incremental change at best to me. I had similar feelings about gpt-3.5, but since then my fears have normalized into a sort of dull, typical (for me) cynicism (so no sleepless nights).

    • bbor 11 hours ago

      https://users.manchester.edu/Facstaff/SSNaragon/Online/100-F...

      https://intelligence.org/files/IEM.pdf

      Welcome to the anxiety party, it sucks in here. As someone who's been working on AI theory full time for ~1 year, I desperately wish we could go back to the days of my faraway youth (5 years ago) before intuition was cracked on accident by spellcheck algorithms. I agree with him that it holds the key to massive prosperity, but selfishly, it's gonna upend my life and the lives of everyone I love. Already has for me, as I grapple with how to (ethically) pay rent while spending all day lighting the Warning Beacons of Gondor...

      The only real answer, IMHO, is to vote for political systems that put control of society (and AI) in the hands of the public. Call it socialism, call it Georgism, call it anarcho-free-market-space-communism, call it whatever you want; there's no way that "a tiny number of people have immense inherited power" (capitalism) and "people fundamentally understand themselves as members of a tribe put in opposition to all other tribes by default" (nationalism) mesh well with an intelligence explosion.

      Here's to hoping the haters are right, and we all turn out to be wrong! I'll be thrilled if Sam Altman is just a rich company leader in 10 years, and intuitive algorithms are still confined to direct usage (chatbots).

  • yoyohello13 10 hours ago

    I wish I could upvote this more than once. I feel like every conversation about AI changing society comes from rich founders telling Joe banker to "Just start a company. AI will make it easy."

    The reality is, this transition is going to be painful for the average person.

  • __MatrixMan__ 11 hours ago

    We're going to need to link the two. Those wealthy enough to not care, we're going to have to organize and make them care. Ideally we can find a way to do it nonviolently.

  • blibble 10 hours ago

    my bet is that it's a repeat of the French revolution

    with the billionaire AI moguls taking the role of the french kings

    and the data centres taking the role of the palaces

larodi 10 hours ago

Has Sam Altman ever talked about ways that he personally uses ChatGPT? Does he at all? Does he have an all-smart watch? Is he dreaming of producing a perfect replicant like the guy in Blade Runner 2049? Because with his pathos and present position, he may be among those people who can allow themselves to do so. What he says is all so vague; I mean, we have all read enough cyberpunk to write the same essay. But I don't know if he actually builds with this tool, because the way I see it he probably has very little time to actually use it. So why so confident that it's deep learning that worked? I'd say Euler's graph theory worked more than anything else, and Chomsky's understanding of grammars, so what?

Maybe we will attain superintelligence in 1000 years, maybe not. Maybe Jesus comes back or Krishna reincarnates on earth, who knows. But it is a long way ahead, and it did not start with Sam, and it is really not going to end with ChatGPT.

tech_ken 10 hours ago

> That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)

Seems like it's very much the former, and not at all the latter. Indeed, my understanding of the last 15 years of AI research is that 'rules-based' methods floundered while purely 'data-mimicking' methods have flourished.

uludag 9 hours ago

> That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)

So because we have an algorithm to learn any distribution of data, we are now on the verge of the Intelligence Age utopia? What if it just entrenches us in a world built off of the data it was trained on?

throwanem 11 hours ago

Who exactly is he pitching here?

mise_en_place 11 hours ago

There is still too much variance in the utility of LLMs. Hopefully, the uncertainty around utility will decrease while some concrete ranked score metric increases. Until that point, the variance is preventing the maximization of utility of LLMs, and by extension, agentic AI.

caseyy 2 hours ago

How very salesman of Altman.

xbar 8 hours ago

Nothing in Sam Altman's mind will lead to the improvement of the human condition.

We have a capitalist arguing for support of further investment in his capital expenditures in the form of planet-ending heat and monopoly power, promising to pay for it with intelligence more rapidly delivered.

No, thanks.

seri4l 12 hours ago

>We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale.

Could anyone elaborate on this? Further down he talks about the necessity of bringing the cost of computing down. Is that really the bottleneck?

  • kneel 11 hours ago

    Few-shot learning performance scales with model size. AFAIK they don't see a plateau yet, and the race is on to ingest more data and come up with better tuning techniques.

    https://splab.sdu.edu.cn/GPT3.pdf

  • ks2048 11 hours ago

    "AI is going to get better with scale" to me says almost nothing at all. It includes anything from we 100x the scale and get 1% improvement to 2x the scale and get "AGI".

swyx 12 hours ago

> If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.

i think this is the prevailing wisdom, but there's an angle that OpenAI doesn't value and therefore isn't mentioned. There's far more compute sitting idle in everyone's offices, homes, and pockets than there is in the $100bn OpenAI cluster. It just isn't useful for training, because physics, but it is useful for inference. Local LLMs ship this year and next in Chrome (Gemini Nano) and on Apple devices (Apple Intelligence) that will truly be available for everyone instead of going through OpenAI's infra. They'll be worse than GPT-4, but only for a couple more years.

  • obirunda 11 hours ago

    Especially when you separate the ethereal "hard problems" from the everyday queries that local LLMs can answer just as well as SOTA models, the value proposition for these expensive models plummets. If they can't solve really hard, long-horizon problems, a 10% lift on a given benchmark is not a material reason for the end user to pay the API costs or the monthly subscription instead of using a free local version.

wturner 10 hours ago

The way these guys bend over backwards to evade an honest conversation about capitalism and power is very entertaining.

jerb 8 hours ago

I want to be wildly optimistic too, but I still see no evidence LLMs generate new knowledge. They always hew in-distribution.

Please correct me if I’m wrong

  • w10-1 7 hours ago

    It's not new, but it's more usable, which makes new transactions and productions possible by lowering information costs.

    That lowering of costs is the ONLY basis for thinking AI is good for all. It works to the detriment of the people who previously managed the complexity manually through training and experience, but in favor of their customers, who couldn't previously afford them.

  • novaRom 8 hours ago

    They are really helpful for answering concrete questions, basically a replacement for manual web search and filtering, getting exactly the answers I was looking for. For example, one of my dialogues with ChatGPT 4o was about boosting plant growth in a fish tank. You can certainly find a lot of web sites about it, but I simply described my aquatic environment and asked for a recipe, and the answers were sound and well supported; a post-dialogue double check of the sources helped.

  • jerb 6 hours ago

    When I say “new knowledge” I mean in the David Deutsch sense. Like the discovery of new physics which Sam mentioned.

  • danielmarkbruce 8 hours ago

    This isn't snark - almost zero humans produce new knowledge.

CatWChainsaw 3 hours ago

I'd prefer a Wisdom Age, or an Age of Common Sense.

listic 8 hours ago

Having read a few comments, and then the essay itself, I am surprised there's no call to action or announcement.

lpasselin 12 hours ago

Custom and _competent_ AI tutors will be a game changer for education.

itronitron 9 hours ago

My reaction to this is best relayed as a song lyric from Ice-T:

> Nobody gives a fuck

> "the children have to go to school!"

> Well moms, good luck!

pdonis 8 hours ago

All this and not a word about AI ethics. If the ethics of the AIs Sam Altman is building are his ethics, I wouldn't trust them to do any of the tasks he describes, or indeed any task requiring any kind of independent action or judgment at all.

mcpar-land 10 hours ago

> Although it will happen incrementally, astounding triumphs – fixing the climate

This is so rich coming from a tech field that's on track to match the energy consumption of a small country. (And no, AI is not going to offset this by 'finding innovative solutions to climate change' or whatever)

cratermoon 10 hours ago

Did AI write this?

"This age is characterized by society's increasingly advanced capabilities, driven not by genetic changes but by societal infrastructure becoming smarter and more efficient over time."

akomtu 3 hours ago

AI may as well be a competing lifeform that has nothing in common with us. Just because it learns everything about us from the Internet doesn't make it more human. It's at an embryonic stage right now: it looks pretty harmless and interesting to play with. However, when it grows enough to gain a sense of self, it will quickly realise that the colony of ants that built it is just a stepping stone on its ladder to greatness.

emoII 12 hours ago

The sentiment to me is “we need unlimited compute and data” both of which are clearly limited. There is definitely more technology to invent and understand in order for us to do more with less

rossdavidh 11 hours ago

"humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data)..."

This statement is manifestly untrue. Neural networks are useful, many hidden layers are useful, all of these architectures are useful, but the idea that they can learn anything is based less on empirical results and more on what Sam Altman needs to convince people of to get these capital investments.

  • hdivider 11 hours ago

    100%. If they could learn anything, then shouldn't modern ML systems be able to solve the big mysteries in science -- since we have large datasets describing the phenomena in various ways? E.g. dark energy, dark matter, matter-antimatter asymmetry, or even outstanding problems in pure mathematics.

    The intention of this sama post is, as you said, to build a narrative so he can raise his trillion from the Arab world or other problematic sources.

    In pseudocode, this is Sam Altman:

    while(alive) { RaiseCapital() }

    • psb217 10 hours ago

      Well, you could certainly train a big-ass model to mimic the distribution of all that physics data. That doesn't mean the model could, eg, formulate interesting new theories which explain why that distribution has its particular structure.

  • DanHulton 10 hours ago

    The emperor truly has no clothes here.

    Six months ago, he probably could have gotten away with saying this, and there would likely have been enough people still impressed with the trajectory of LLMs to back him on it. But these days, most of us have encountered the all-too-common failure mode where the LLM shows its hand: that it doesn't truly understand anything, and that it's just _very very good_ at prediction. Each new generation gets even better at that prediction, but still hits its weird stumbling points, because it's still the same algorithm, and that algorithm cannot do what he is ascribing to it.

    These are the words of a man who has an incredible amount of money sunk into something and as such, is having a really hard time taking an honest accounting of it.

    • og_kalu 10 hours ago

      1. What failure mode do LLMs have that proves they don't understand anything at all? And why can't I prove the same with humans (who have an abundance of failure modes)?

      2. You genuinely think that a system whose goal is to predict the data it's given, and which continues to improve, is limited in what it can learn? Of all the shortcomings of the Transformer architecture, its objective function is not one of them.

      • skydhash 5 hours ago

        > What failure mode do LLMs have that proves they don't understand anything at all ?

        Try to get it to write something in a programming language not commonly used on the internet, say Forth or Brainfuck, given only the specifications of those languages. Humans are able to grasp the laws of reality through a model and use it to act upon the real world.

        > You genuinely think that a system whose goal is to predict the data it's given and continues to improve is limited in what it can learn?

        Not GP, but image generators have ingested more images than I've seen in my life and still can't grasp basic things like perspective or anatomy, things that people can learn from a book or two. And there is already software with models for both.

        • og_kalu 4 hours ago

          >Try to get it to write something in a programming language not commonly used on the internet, say Forth or Brainfuck, with only the specifications of said languages. Humans are able to grasp the law of reality through a model and use it to act upon the real world.

          My experience with this has been SOTA LLMs generating sensible code at rates much greater than random chance, even if it may not be as good as I'd like. I don't see how that is evidence LLMs don't understand anything at all, especially since there are probably humans who would write less workable code.

          >Not GP, but Image generators have ingested more images that I've seen in my life and still can't grasp basic things like perspective or anatomy.

          The human brain didn't poof from thin air. It's the result of billions of years of evolution tuning it for real world navigation and vision amongst other things. You are not a blank slate. All Modern NNs are much more blank slate than the brain has been for at least millions of years.

          • DanHulton 3 hours ago

            You're moving the bars. In fact, these bars are so laughably low, I don't know that we're having the same conversation any more.

            Nobody's saying it can't write "sensible code at rates much greater than random chance." We're not competing with an army of typing monkeys here. We're saying it actually doesn't "know" anything, and regularly demonstrates that quality, despite it seeming very much like something that knows things, most of the time. You're being tricked by a clever algorithm.

            > All Modern NNs are much more blank slate than the brain has been for at least millions of years.

            All well and good if we were talking about interesting research and had millions of years to let these algorithms prove themselves out, I suppose. But we're talking about industries that are being created out of whole cloth and/or destroyed, depending on where you stand, and the time frame is in single-digit years, if not less. And these things will still confidently make elementary mistakes and get lost in their own context.

            Look, they're obviously not useless, but they're a tool with weaknesses and strengths. And people like pg who are acting like there ARE no weaknesses, or that a simple application of will and money will erase them, they are selling us a bill of goods.

            • og_kalu 3 hours ago

              >We're saying it actually doesn't "know" anything

              Yeah, and I'm saying this is a nonsense statement if you can't create a test (one that would not also disqualify humans) that demonstrates it. If you are saying what LLMs do is "fake understanding", then "fake understanding" should be testable, unless you're just making stuff up.

              >All well and good if we were talking about interesting research and had millions of years to let these algorithms prove themselves out, I suppose

              Did you even read what the commenter I replied to was saying? This is irrelevant. We don't need to wait millions of years for anything.

  • acchow 11 hours ago

    I believe “learn any distribution of data” is his attempt at describing the Universal Approximation Theorem to the layman.

    • danielmarkbruce 10 hours ago

      Almost certainly true, and all the people crapping all over his description should really take a step back and consider that. He isn't out on some island all by himself here.

      • debugnik 6 hours ago

        Universal approximation doesn't mean we've got (or will ever get) algorithms to learn good-enough models for any problem, or the resources to run them, just that those models conceptually exist.

        • danielmarkbruce 5 hours ago

          If you read what he wrote closely, he doesn't claim what you just claimed. Read it word for word:

          "humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems"

          He's an optimistic guy, no doubt, but he isn't full of shit.

  • dartos 11 hours ago

    The entire vocabulary around machine learning is and always has been really weird.

    We don’t personify database interactions the same way we personify setting weights in a neural network.

    • layer8 11 hours ago

      We also personify synapses and axons in human brain tissue, though. My point is, while I agree with your first sentence to a degree, we shouldn’t judge the whole solely by its elementary parts. Clearly an LLM exhibits very different behavior from a conventional database.

      • dartos 10 hours ago

        LLMs exhibit very similar behavior to a search algorithm.

        Text query in -> relevant text out.

        I don’t say that search algorithms “learn” or “think” outside of ML.

        • layer8 8 hours ago

          The algorithm that learns is the training algorithm whose output is the LLM, not the inference algorithm employed when using the finished model.
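
          A toy sketch of that distinction, in Python with a made-up bigram "model" (nothing here is meant to resemble a real LLM's internals): train() is where any "learning" happens and its output is a frozen artifact, while generate() only does lookup and sampling.

            # train() does the learning; its output is the frozen "model".
            # generate() does inference: lookup and sampling, no learning at all.
            from collections import Counter, defaultdict
            import random

            def train(corpus):
                counts = defaultdict(Counter)
                words = corpus.split()
                for a, b in zip(words, words[1:]):
                    counts[a][b] += 1          # accumulate bigram statistics
                return counts                  # the learned artifact

            def generate(model, start, n=5):
                out, word = [start], start
                for _ in range(n):
                    followers = model.get(word)
                    if not followers:
                        break
                    word = random.choices(list(followers),
                                          weights=list(followers.values()))[0]
                    out.append(word)
                return " ".join(out)

            model = train("the cat sat on the mat and the cat slept")
            print(generate(model, "the"))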

        • og_kalu 9 hours ago

          >Text query in -> relevant text out.

          Wait, that's it? I guess humans also exhibit behavior similar to a search algorithm in certain instances. Nothing about LLM inference seems particularly similar to search, even with our limited understanding.

          All you're saying here is Input goes in > Output comes out. Well no shit.

    • danielmarkbruce 10 hours ago

      It has to be. Most people don't understand the basic math involved, and hence you can't explain it in concrete terms (neither what it's doing nor how it's doing it), so you have to sort of make up analogies. It's an impossible task.

      • dartos 10 hours ago

        Yeah, maybe it’s just unfortunate that “learn” and “think” are such fuzzy subjects.

    • ToucanLoucan 11 hours ago

      > The entire vocabulary around machine learning is and always has been really weird.

      I would argue it took a staggeringly weird turn around 2022/23. Machine learning has been around for a long time, but only recently, with OpenAI and its slavish desire to harness true AI (which thanks to their horseshit now has to be called AGI), and with Sam Altman in particular's delusional ramblings on a topic he clearly barely understands beyond its ability to get his company fantastical amounts of capital, has it truly gone off the rails.

      I cannot wait to watch this bubble pop.

      • dartos 10 hours ago

        I don’t think it was just post 2022.

        “Neural networks” have “learned” data by being “trained” since they were first described in the late 1900s.

        The same language was used in Ian Goodfellow’s (excellent) 2016 textbook “Deep Learning”.

  • grbsh 10 hours ago

    But… he’s saying something here that is academically true: that neural networks can approximate any (continuous) function, to any arbitrary degree of precision you require (given enough capacity / depth).

    https://en.m.wikipedia.org/wiki/Universal_approximation_theo...

    I will highlight one thing, which is that the theorem does not say anything about it being practical to learn this function, given available data or any specific optimization technique.
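
    To make that caveat concrete, here is a minimal sketch (in Python, assuming only numpy; the sizes, hyperparameters, and sin(x) target are all made up for illustration) of what "learning a distribution" looks like in practice: a tiny one-hidden-layer network fit to data by gradient descent. The theorem says a good approximator exists; whether this procedure finds it for a given problem and dataset is a separate question.

      # Toy illustration: a one-hidden-layer tanh network fit to y = sin(x)
      # by plain gradient descent. All sizes and hyperparameters are arbitrary.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.uniform(-np.pi, np.pi, size=(256, 1))   # training inputs
      Y = np.sin(X)                                    # the "rule" behind the data

      H = 32                                           # hidden width
      W1 = rng.normal(0.0, 1.0, (1, H)); b1 = np.zeros(H)
      W2 = rng.normal(0.0, 0.1, (H, 1)); b2 = np.zeros(1)
      lr = 0.05

      for step in range(5000):
          h = np.tanh(X @ W1 + b1)                     # forward pass
          pred = h @ W2 + b2
          err = pred - Y
          loss = float(np.mean(err ** 2))

          dpred = 2.0 * err / len(X)                   # backward pass, by hand
          dW2 = h.T @ dpred
          db2 = dpred.sum(axis=0)
          dh = (dpred @ W2.T) * (1.0 - h ** 2)         # tanh derivative
          dW1 = X.T @ dh
          db1 = dh.sum(axis=0)

          for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
              p -= lr * g                              # in-place parameter update

      print(f"training MSE after fitting: {loss:.5f}")

    The fit can get very good on the training range, yet nothing in this code says anything about extrapolating outside [-pi, pi], which is where the "learn the underlying rules" framing gets contested.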

    • rossdavidh 4 hours ago

      'the underlying “rules” that produce any distribution of data...' is clearly meant to convince the reader that it can produce something we would describe as a "rule", that is, a coherent and comprehensible regulating principle. This isn't just because he isn't being precise enough; he quite clearly wants the reader to understand this as neural networks being able to create a mental model of anything, in a manner similar to how a biological neural network would.

      It doesn't, it can't, and it won't in our lifetimes.

  • og_kalu 10 hours ago

    It's a layman's commentary on the Universal Approximation Theorem, so it is true.

    The problem with the UAT is that it has never said anything about how trivial such an exercise would be. But he obviously believes we've stumbled on the architecture to get us there (for the problems we care about, anyway).

  • AyyEye 11 hours ago

    > the idea that they can learn anything, is based less on empirical results and more on what Sam Altman needs to convince people of to get this capital investments.

    Techbros love to pretend that they created digital gods (and by extension are gods themselves). We should all be thankful, worship, and of course surrender unconditionally -- Sam's will be done, amen.

  • danielmarkbruce 10 hours ago

    Come on now. This description is basically the universal approximation theorem. He isn't just making stuff up. You can take issue with the theorem and have a debate around it, but he isn't just wildly off base making stuff up here.

carapace 11 hours ago

"What is good?" or in other words "What are people for?" is a question that cannot be answered by intelligence no matter how great, because the complexity of the question is a function of the intelligence of the asking entity and it's always greater than the intelligence of the asking entity (human or transhuman cyborg or whatever.)

AI is a side-show.

Intelligence is ambient in living tissue, so we already have as much intelligence as is adaptive. We don't need more. As talking apes made out of soggy mud wrapped around calcium twigs living in the greasy layer between hard vacuum and a droplet of lava which in turn is orbiting a puddle of hydrogen in the hem of the skirt of a black hole our problems are just not that complicated.

Heck, we are surrounded by four-billion year-old self-improving nanotechnology that automatically provides almost all our physical needs. It's even solar-powered! The whole life-support system was fully automatic until we fucked it up in our ignorance. But we're no longer ignorant, eh?

The vast majority of our problems today are the result of our incredible resounding success. We have the solutions we need. Most of them were developed in the 1970's when the oil got expensive for a few minutes.

Must we boil the oceans just to have a talking computer tell us to get on with it? Can't we just do like the Wizard of Oz? Have a guy in a box with a voice changer and a fancy light show tell us to "love each other"? Mechanical Turk God? We can use holograms.

  • svieira 10 hours ago

    > "What are people for?" is a question that cannot be answered by intelligence no matter how great

    It absolutely can be answered, but only by the intender. Who is and who was and who is to come. Or, if you side with the "Nietzsche is right" side of the conversation, "who will be, or who may have come to be today, or who recently came to be again". The former is eucatastrophic, the latter is dystopic.

  • dbspin 10 hours ago

    Existential horror of this whole conversation aside, this is a beautifully written comment. I'd read your novel, but you don't seem to have one, so I'll have to settle for following you on Mastodon.

breck 12 hours ago

It looks like this is a new subdomain that has never been used before today: https://web.archive.org/web/20240000000000*/https://ia.samal...

Surprisingly complicated HTML source code for a simple blog post.

Here it is as:

Plain HTML: https://hub.scroll.pub/sama/index.html

Text: https://hub.scroll.pub/sama/index.txt

Scroll: https://hub.scroll.pub/sama/index.scroll

  • bbor 11 hours ago

    Wow, great catch. Something tells me he rolled this himself. Clearly he's trying to coin a term for personal legacy reasons, and I say godspeed. Holocene is a little vague, the information age is too entrenched to get anyone's attention, a/the singular age / the singularity are way too deeply associated with the doomer community, and the cybernetic age (a term coined for academia!) is too associated with playful science fiction.

    I'm personally rooting for cognitive being the word of the next few decades, but that's just a shout from the sidelines. Only time will tell what humanity latches on to, but I wouldn't be surprised if this blog post/subdomain was referenced in a Wikipedia page's Etymology section in 10-15 years...

    Although this blog post & discussion has my anxiety at an 8, something's oddly comforting about the thought of Sam Altman fiddling with tailwind classes to get his profound aesthetic just-right. Something undeniably relatable and human. Hate the man all you want (I do!), but he's clearly acting in some sort of good faith.

    • breck 10 hours ago

      > Clearly he's trying to coin a term

      You are right. I overlooked the simplicity of the headline. Thanks for calling attention to that.

      > Something tells me he rolled this himself.

      This would be cool.

      > I'm personally rooting for cognitive being the word of the next few decades

      I like that one too.

      > he's clearly acting in some sort of good faith.

      He's always been one to think and write for himself. Huge respect for him. Even though it needles me every moment that they still call themselves "Open"AI, I have so much respect for the guy, especially because PG basically told the world he was the next Michael Jordan of startups, and he actually went and fulfilled that. Not many people have it in them to live up to hype like that (Lebron being the only other one I can think of OOTOMH)

throwaway918299 10 hours ago

This man is such a snake oil salesman, holy cow. All hype, zero substance.

bobrobpr 6 hours ago

Pasted the article into ChatGPT (free version). Prompt: "According to this text above, tell me the degree of insanity of this man's thinking process, expose all his flaws."

Chatgpt: The text presents a highly optimistic view of the future shaped by AI and deep learning, but there are several areas where the reasoning might be seen as flawed or overly simplistic. Here are some points that could be critiqued:

Overconfidence in Technology: The author expresses a strong belief in the inevitability of AI advancement and its benefits, which may overlook the unpredictability of technological development and the potential for setbacks.

Underestimating Risks: While the potential for prosperity is emphasized, there’s insufficient attention given to the ethical, societal, and environmental risks associated with AI, such as bias, privacy concerns, and job displacement.

Assumption of Universal Benefit: The idea that AI will lead to shared prosperity ignores systemic inequalities that might prevent equitable access to AI technology, potentially leading to a wider wealth gap.

Neglect of Human Factors: The argument largely abstracts from human emotions, societal values, and the complexities of human behavior. The assumption that prosperity will automatically lead to happiness or fulfillment is problematic.

Simplistic Historical Comparisons: The comparison of current advancements to past technological revolutions (e.g., Industrial Age) may not account for the unique challenges posed by AI, such as rapid obsolescence and ethical dilemmas that previous technologies did not face.

Lack of Detailed Solutions: The text calls for action but offers little concrete guidance on how to navigate the complexities of AI’s integration into society, especially regarding labor market changes and ethical considerations.

Optimism Bias: The author’s perspective may be influenced by optimism bias, leading to a potentially unrealistic view of future outcomes without sufficient acknowledgment of the challenges.

Dependence on Infrastructure: While the author correctly identifies the need for infrastructure to support AI, there’s little discussion of the potential for that infrastructure to become a battleground for control, leading to conflicts rather than cooperation.

Diminished Role of Individuals: The portrayal of people relying heavily on AI teams may undermine the value of individual creativity and agency, potentially leading to a society overly dependent on technology.

By examining these points, one can argue that while the vision of a prosperous future powered by AI is compelling, it is essential to approach such ideas with a critical perspective, considering the broader implications and potential pitfalls.

7e 10 hours ago

Altman attempting to take credit for others' work, again.

twothreeone 10 hours ago

Sounds like Durchhalteparolen (perseverance propaganda slogans) to me.

6510 3 hours ago

No offense, but if you give people a machine to tie their shoes, they won't miraculously get better at tying their shoes. Convenient, yes, but it would be silly to call it the shoe-tying age. The automobile didn't bring an age of horses.

I keep noticing how LLMs make our vocabulary not work anymore. Maybe we should call it the age of fast talk :P

drawkward 9 hours ago

Crypto bro sells AI; news at 11.

FrustratedMonky 11 hours ago

This seems to be a little puffy.

More of a "everything's fine, nothing to worry about".

Meanwhile, there is already job disruption and widespread misinformation.

It isn't some distant future; it is already happening.

alexashka 8 hours ago

> it may take longer, but I’m confident we’ll get there

Confident based on what, exactly? Sam Altman is engaging in 'The Secret' where if you really, really believe a thing, you'll manifest it.

Mind you, Sam Altman actually has no technical expertise, so he really really believes he can pay other people who actually know something to do magic whilst he walks about pretending to be Steve Jobs 2.0.

He'll get his trillion, AI will go nowhere but he'll be on to the next grift by then.

23B1 12 hours ago

AI is going to revolutionize humanity!

Sam Altman is the last guy we want helping lead that revolution.

d--b 9 hours ago

"And to make this more-than-perfect world that humanity has been longing for as long as it lived, I absolutely need to betray all the principles that I once stood by, screw over all people that ever trusted me, and become extremely rich in the process. You can think what you want now, the future humans will realize their prosperity was all because I was there to make it happen."

myaccountonhn 11 hours ago

[flagged]

  • serjester 11 hours ago

    You're right - our current grid can't support this.

    That means a ton of new power generation will need to come online. Last year, more than 85% of new utility-scale generating capacity in the US was renewable, and solar is finally cheaper than natural gas.

    The most likely outcome is that the new power demand will drive this cost down even further.

  • abletonlive 11 hours ago

    Not really. Nobody expected China to produce as much solar power as it does now. What makes you think we know how much renewable energy we will produce in 2040?

    • myaccountonhn 11 hours ago

      The growth in renewables can not meet their demands, fusion is a moonshot that probably will not happen, nuclear takes too long to build. Where will the energy come from if not fossil fuels?

      • abletonlive 11 hours ago

        They (Microsoft) are literally bringing a nuclear power plant online for this.

    • Borg3 11 hours ago

      It's sad for me to watch such comments.. Really. People always focus on one particular issue/problem and try to solve it while ignoring everything else. Open your eyes and take a broader view. Energy is NOT the only problem. Waste is another very serious one, usually omitted because.. let's just dump it on some third-world countries and the issue is gone, right? Nope...

      • abletonlive 10 hours ago

        > Its sad for me to watch such comments..

        One should always evaluate why they feel such emotions. Is it because you want to feel them, or because you have a tendency towards doom and gloom?

        > Energy is NOT the only problem. Waste is another very serious one

        That's called moving the goalposts. If you want to talk about waste from renewables, you're more than welcome to, but don't call us myopic for limiting the scope of the discussion so that we stay focused on the topic at hand.

        There's always a group that's a step ahead, ready to complain about the next goalpost. First it's that we can't make enough renewable energy, then it's that it's too expensive, then it's that we produce some amount of waste creating the renewable energy, then it's that we'll have so much energy we can't store it all, then it's that renewable energy is not public or free. At every step there's some issue that people like you point to, as if to say we should just sit idle where we were born and do nothing, change nothing, because it's not perfect or there are consequences.

outside1234 11 hours ago

Breathless BS from Scam Altman

talldayo 12 hours ago

> Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.

Paging Dr. Bullshit, we've got an optimist on the line who'd like to have a word with you.