jgord 3 hours ago

The text has some great explanatory diagrams and looks to be a very high-quality overview of ML through the lens of probability, with lots of math.

I was also recently impressed by Zhao's "Mathematical Foundations of Reinforcement Learning", a free textbook with video lectures on YT: https://github.com/MathFoundationRL/Book-Mathematical-Founda...

If you don't have a lot of time, at least glance at Zhao's overview contents diagram; it's a good conceptual map of the whole field, imo. Here:

https://github.com/MathFoundationRL/Book-Mathematical-Founda...

and maybe watch the intro video.

  • vimgrinder 28 minutes ago

    The first lecture is so good, not only from the perspective of content, but in how Zhao explains how to think about learning as a student. ty for the recommendation.

abhgh 4 hours ago

I came across this a few days ago, and my excuse to give it a serious look is that Andreas Krause has done some deep and interesting research on Gaussian Processes and Bandits [1].

[1] https://scholar.google.com/scholar?start=10&q=andreas+krause...

  • trostaft 3 hours ago

    It's Krause, he's one of the biggest researchers in the field. At least based on the other work of his I've read, he's a good writer too. This ought to be a worthwhile read.

antonkar 3 hours ago

I think we’ll need a GUI for the models to democratize interpretability and let even gamers explore them. Basically, train another model that takes the LLM, converts it into 3D shapes, and puts them in some 3D world that is understandable to humans.

Simpler example: represent an LLM as a green field with objects, where humans are the only agents:

You stand near a monkey and see a chewing mouth nearby, so you go there (your prompt is now “monkey chews”). Close by you see an arrow pointing at a banana, farther away an arrow points at an apple, and very far away at the horizon an arrow points at a tire (monkeys rarely chew tires).

So things close by are more likely tokens, and things far away are less likely; you see all of them at once (maybe you’re on top of a hill to see farther). This way we can make a form of static place AI, where humans are the only agents.

  • soulofmischief 3 hours ago

    I had a mind-bending Salvia trip at eighteen that went sort of like that.

    My mind turned into an infinitely large department store where each aisle was a concurrent branch of thought, and the common ingredient lists above each aisle were populated with words, feelings and concepts related to each branch.

    The PA system replaced my internal monologue, which I no longer had, but instead I was hearing my thoughts externally as if they were another person's.

    I was able to walk through these aisles and marvel at the immense, fractal, interdependent web of concurrent thought my brain was producing in realtime.

    • _rpxpx 2 hours ago

      “When I began to navigate psychospace with LSD, I realized that before we were conscious, seemingly self-propelled human beings, many tapes and corridors had been created in our minds and reflexes which were not of our own making. These patterns and tapes laid down in our consciousness are walled off from each other. I see it as a vast labyrinth with high walls sealing off the many directives created by our personal history.

      Many of these directives are contradictory. The coexistence of these contradictory programs is what we call inner conflict. This conflict causes us to constantly check ourselves while we are caught in the opposition of polarity. Another metaphor would be like a computer with many programs running simultaneously. The more programs that are running, the slower the computer functions. This is a problem then. With all the programs running that are demanded of our consciousness in this modern world, we have problems finding deep integration.

      To complicate matters, the programs are reinforced by fear. Fear separates, love integrates. We find ourselves drawn to love and unity, but afraid to make the leap.

      What I found to be the genius of LSD is that it really gets you high, higher than the programs, higher than the walls that mask and blind one to the energy destroying presence of many contradictory but hidden programs. When LSD is used intentionally it enables you to see all the tracks laid down, to explore each one intensely. It also allows you to see the many parallel and redundant programs as well as the contradictory ones.

      It allows you to see the underlying unity of all opposites in the magic play of existence. This allows you to edit these programs and recreate superior programs that give you the insight to shake loose the restrictions and conflicts programmed into each one of us by our parents, our religion, our early education, and by society as a whole.”

      ~ Nick Sand, 2001, Mind States conference, quoted in Casey Hardison's obituary

      • wincy 2 hours ago

        I feel like if all the things people believe and espouse about hallucinogens were true and not just the effect of permanently damaging your mind, with the illusion of wisdom, we’d be able to point at all the revolutionary scientific breakthroughs and discoveries made under the influence of hallucinogenic substances.

        However, everyone I’ve met who admits to having taken hallucinogens seems reduced in some way, rather than enhanced. Like the lights are on but someone else is home.

        • antonkar 2 hours ago

          There is the Qualia Research Institute; one of the things they do is take drugs and make simulations of the experience. They basically found 2 main types of drugs:

          1. Most “create more separate personalities” in you

          2. One (the “toad poison”) actually makes you feel like a giant place; the feeling is usually pleasant.

          So there is either “agentification” of you into more “agents” or “space-ification” into one giant place without any agency in it. I think we can make this static place AI and it’ll be safe by definition because we’re the only agents in it.

          P.S. I don’t promote drugs

        • LVL96 an hour ago

          This was sort of my experience with LSD. It just broke me. I fell into a deep depression afterward, but the reason was only partly due to damaging my mind. The other part of it was that the LSD made me realize where my life was going, and how completely unfulfilled I'd end up being in 10-20 years. In that way, it helped me course-correct. I'm healthier, more honest with myself, and got back into college because of the experience.

          But it did damage my mind. I have mild to moderate anhedonia now. Weed hits me completely differently now (feels more like strong caffeine + brain fog instead of any pleasure). I lost my desire to write creatively.

          • bongodongobob 10 minutes ago

            Unless you did a thumbprint, you're perfectly fine, no damage. Just get your shit together, that seems to be your takeaway. Sounds like it worked. Now you have to keep working on yourself rather than blaming a harmless drug for your problems.

        • gruntbuggly an hour ago

          A fair observation, but it rests on real assumptions about progress, what it means, and what is valuable.

        • bongodongobob an hour ago

          It happens all the time. DNA double helix is a good example. You really think people are going to mention their drug use in white papers? I think not. Nothing to gain and everything to lose.

          • devmor an hour ago

            The record of a scientific discovery that is heavily criticized for plagiarism and falsehoods is probably not a good example, actually.

            • bongodongobob an hour ago

              Well here's one: tons of people do drugs. It's not even a question whether or not drugs have inspired discoveries. They obviously have.

    • neom 28 minutes ago

      If you feel like being a hippie, you can find the "rendering engine for reality" in here: Mandelbrot (1980), The Mandelbrot Set and fractal geometry; Julia (1918), Mémoire sur l'itération des fonctions rationnelles (Julia sets); Meyer (1996), Quantum Cellular Automata (procedural complexity); Wolfram (1984), Cellular automata as models of complexity; Bak et al. (1987), Self-organized criticality; Wolfram, Gorard & Crowley (2020), "A Class of Models with the Potential to Represent Fundamental Physics"; Kari & Culik (2009), "Universal Pattern Generation by Cellular Automata". Just combine the papers; ofc that is crazy, but it's fun to be a bit crazy sometimes. It's one of my fav thought experiments, just for fun. :)

    • antonkar 2 hours ago

      We have our first Neo candidate)

      The guy who’ll make the GUI for LLMs is the next Jobs/Gates/Musk and Nobel Prize Winner (I think it’ll solve alignment by having millions of eyes on the internals of LLMs), because computers became popular only after the OS with a GUI appeared. I recently shared how one of its “apps” possibly can look: https://news.ycombinator.com/item?id=43319726

  • jgord 3 hours ago

    I don't think anyone has found a good way to map higher-dimensional space onto 4D visualizations, yet.

    Maybe this is why tokens and language are so useful for humans? They might be the closest analog we have.

    • antonkar 2 hours ago

      Good point, I think at least some lossy “compression” into a GUI is possible. The guy who’ll make the GUI for LLMs is the next Jobs/Gates/Musk and Nobel Prize Winner (I think it’ll solve alignment by having millions of eyes on the internals of LLMs), because computers became popular only after the OS with a GUI appeared. I recently shared how one of its “apps” possibly can look: https://news.ycombinator.com/item?id=43319726

  • meindnoch 2 hours ago

    Sir, this is a Wendy's.

sunami-ai an hour ago

I found Gaussian Processes with the right kernel to be very powerful, even with just a few data points and a very small set of parameters. I don't know if I was using it correctly tbh, but it worked out great in predicting values that I could not otherwise predict so accurately. I used it as a predictable yet non-linear process to tweak the input in a computer vision task. The proof was literally in the pudding.
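For anyone curious what "a few data points and a very small set of parameters" can look like, here is a minimal GP-regression sketch in pure NumPy with an RBF kernel. The toy data, length-scale, and jitter term are my own illustrative choices, not the parent commenter's setup:

```python
import numpy as np

def rbf(a, b, length=1.0):
    # Squared-exponential (RBF) kernel matrix between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# A handful of observations of a smooth function.
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(X)

# GP posterior mean at a new point: K(x*, X) [K(X, X) + jitter I]^{-1} y
K = rbf(X, X) + 1e-6 * np.eye(len(X))  # small jitter for numerical stability
x_new = np.array([1.5])
mean = rbf(x_new, X) @ np.linalg.solve(K, y)
print(mean)  # posterior mean close to sin(1.5)
```

With only five points and essentially one parameter (the length-scale), the posterior mean already interpolates the underlying function well, which matches the "few data points, few parameters" experience described above.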

nbeleski 3 hours ago

Seems similar to, or at least partially overlapping with, what I would say is the best reference on the subject, An Introduction to Statistical Learning by Gareth James et al. [1].

I wonder if this one might be a bit more accessible, although I guess the R/Python examples are helpful in the latter.

[1] https://www.statlearning.com/

  • whimsicalism 3 hours ago

    not really, islr is a pretty basic book - this is about more advanced techniques that propagate full probability estimates rather than point estimates

    and frankly i would not recommend islr anymore today, too dated

    • keviniam 3 hours ago

      What would you (or other informed parties) recommend?

      • whimsicalism 2 hours ago

        it’s been a while since I’ve been a beginner so I might not have the best resources, but I would recommend Harvard’s Stat 110 with Joe Blitzstein (lectures online) and then Machine Learning by Kevin Murphy. might be a scarier book to someone not confident in their math, but overall a better one imo

        for something more directly comparable to the niche ISLR filled, Bishop’s books are generally better - although I can’t recall their title

chasely an hour ago

Kevin Murphy racing to rename his Probabilistic Machine Learning series.

thisisauserid 4 hours ago

Gemini 2.0 Experimental 02-05 sees this as "only" 107K tokens.

Handy if you want help breaking this down.

https://aistudio.google.com

'Laplace Approximation is a "quick and dirty" way to turn a complex probability distribution into a simple Gaussian (bell curve).

It works by finding the highest point (mode) and matching the curvature at that point.

It's fast and easy, but it can be very inaccurate and overconfident if the true distribution doesn't look like a bell curve.'
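The quoted summary maps almost line-for-line onto code. A sketch of the Laplace approximation on a toy unnormalized log-density (the target function here is my own illustrative choice):

```python
import numpy as np
from scipy.optimize import minimize

# Toy unnormalized log-density (Gamma-like, skewed); its mode is at x = 3.
def log_p(x):
    return 3.0 * np.log(x) - x

# 1. Find the highest point (the mode) by minimizing the negative log-density.
res = minimize(lambda v: -log_p(v[0]), x0=[1.0], bounds=[(1e-6, None)])
mode = res.x[0]

# 2. Match the curvature: second derivative of log p at the mode
#    (central finite difference); the Gaussian variance is -1/d2.
h = 1e-4
d2 = (log_p(mode + h) - 2 * log_p(mode) + log_p(mode - h)) / h ** 2
sigma = np.sqrt(-1.0 / d2)

print(mode, sigma)  # mode ≈ 3.0, sigma ≈ sqrt(3)
```

The resulting N(mode, sigma²) is the approximation; the "overconfident" failure mode in the quote shows up exactly when the curvature at the mode is much sharper than the distribution's tails.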

jacob019 5 hours ago

This is great. Is it available as a printed book?

  • falcor84 5 hours ago

    From a brief search I see that it isn't (or at least not yet), but seeing how well-formatted the pdf is, and the fact that it's CC-licensed, you could print it yourself, or perhaps talk with them about organizing a batch.

    Though I personally prefer to read these sorts of books directly from pdf, and am grateful to them for sharing it on arxiv.

    • mnky9800n 4 hours ago

      I wonder if one could organize an arXiv print service that binds and prints and ships with a unique cover and such.

      Also it should use LLMs and the blockchain.

      But this would be nice; there are a number of papers and such where, if I could submit an arXiv link to a print service, I would probably buy a copy. I wonder why no one does it.

      • woolion 3 hours ago

        Aren't you describing Lulu but for the very niche case of arxiv publications that are small books but not published as books? I think you could do it in a weekend with their API.

        • ivan_ah 39 minutes ago

          If anyone is interested in trying this, here is some Python starter code you might find useful: https://github.com/minireference/lulu-api-client?tab=readme-...

          This worked four years ago when the API had just launched, but there might have been changes since, so no guarantees.

          Most ArXiv PDFs are probably lulu-printable out of the box, but to make a general solution, one would probably need to do some pre-processing with ghostscript (gs), e.g. embed all fonts and flatten images (no transparency).

        • mnky9800n an hour ago

          yes that's what i thought after the post. haha.

    • madcaptenor 3 hours ago

      I wonder if they're aiming for it to be a book. Hübotter describes it on his web page as "notes on Probabilistic AI".

  • esafak 4 hours ago

    I do not think so. I am asking the author for confirmation.

    • adrc 4 hours ago

      There's no printed version. Btw, I took this course at ETHZ last year (a course with this title, whose script is this document). Pretty nice course and pretty nice course notes; happy to see that the authors decided to share them outside of the course website now!

brador 4 hours ago

Interesting separation and distinction between noisy inputs, noisy processing and noisy chains.

dcreater 3 hours ago

As a layman in this field, I have no idea of the context or significance of this work. Can someone better informed fill us in?

cubefox 3 hours ago

Apparently they don't discuss language models at all.

  • cubefox 2 hours ago

    Which is a major omission, as transformer-based language models are the most powerful available form of "probabilistic artificial intelligence". They predict a probability distribution over the next token given a sequence of previous tokens.
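To make that last sentence concrete, here is a toy sketch of a language-model output head. A real transformer computes the per-token scores (logits) with attention layers; the four-word vocabulary and hard-coded numbers below are purely illustrative:

```python
import numpy as np

# Toy "language model" head: logits over a 4-token vocabulary given some
# context. A real transformer would compute these from attention layers;
# here they are hard-coded for illustration only.
vocab = ["chews", "banana", "apple", "tire"]
logits = np.array([0.5, 2.0, 1.0, -3.0])

# Softmax turns the scores into a probability distribution over the next token.
probs = np.exp(logits) / np.exp(logits).sum()
print(dict(zip(vocab, probs.round(3))))  # probabilities sum to 1
```

Sampling or taking the argmax of this distribution is all "next-token prediction" means at the output layer.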

    My guess is that most of the content in the book is several years old (it's apparently based on an ETH Zurich class), despite the PDF being compiled this year, which would explain why it doesn't cover the state of the art.

fud101 3 hours ago

Books suck (imho). We need a new format to teach and learn this deep technical stuff. Not youtube, something interactive with exercises and engagement.

  • thomasahle 2 hours ago

    > something interactive with exercises and engagement

    Books have exercises. It's your job to engage.

    This book, in particular, has 3 pages of Problems per chapter. The only way to learn the math is to do all of them.

  • vessenes 18 minutes ago

    I urge you to rethink this perspective. Research consistently shows that paper increases comprehension significantly over screens, and even over e-ink. Additionally, taking notes by hand has a positive impact.

  • nh23423fefe an hour ago

    thanks. i was worried about job security for a nanosecond.

    • jcgrillo 26 minutes ago

      It's a wild world where "reading the documentation" or "researching a topic" has become a career superpower. I'm glad my education largely predated social media and cell phones, and that I learned to read and work problems independently. OTOH it often makes work a very lonely, taxing experience. Being a human index into documentation is a hell of a lot less fulfilling than working with people who also can read.

  • jgord 2 hours ago

    yeah, I mean 3Blue1Brown has done a great job... and maybe those videos would be even better if you could app-ify them into something you can interact with.

    The current gen of LLM programming AIs might make it less legwork to build these.

    • whimsicalism 2 hours ago

      3b1b is great but if you want to do deep technical work, you’re eventually going to have to get comfortable with text as a medium