The headline may make it seem like AI just discovered some new result in physics all on its own, but reading the post, humans started off trying to solve some problem, it got complex, and GPT simplified it and found a solution with the simpler representation. It took 12 hours for GPT Pro to do this. In my experience, LLMs can make new things when they are some linear combination of existing things, but I haven't been able to get them to do something totally out of distribution from first principles yet.
This is the critical bit (paraphrasing):
Humans have worked out the amplitudes for integer n up to n = 6 by hand, obtaining very complicated expressions, which correspond to a “Feynman diagram expansion” whose complexity grows superexponentially in n. But no one has been able to greatly reduce the complexity of these expressions, providing much simpler forms. And from these base cases, no one was then able to spot a pattern and posit a formula valid for all n. GPT did that.
Basically, they used GPT to refactor a formula and then generalize it for all n. Then verified it themselves.
I think this was all already figured out in 1986 though: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.56... see also https://en.wikipedia.org/wiki/MHV_amplitudes
> I think this was all already figured out in 1986 though
They cite that paper in the third paragraph:

> Naively, the n-gluon scattering amplitude involves order n! terms. Famously, for the special case of MHV (maximally helicity violating) tree amplitudes, Parke and Taylor [11] gave a simple and beautiful, closed-form, single-term expression for all n.
It also seems to be a main talking point. I think this is a prime example of where it is easy to think something is solved when looking at things from a high level, but to reach an erroneous conclusion due to lack of domain expertise. Classic "Reviewer 2" move. I'm not a domain expert either, so if there really is no novelty over Parke and Taylor, I'm pretty sure this will get thrashed in review.
You're right. Parke & Taylor showed the simplest nonzero amplitudes have two minus helicities, while one-minus amplitudes vanish (generically). This paper claims that vanishing theorem has a loophole: a new hidden sector exists, and one-minus amplitudes are secretly there, but distributional.
> simplest nonzero amplitudes have two minus helicities while one-minus amplitudes vanish
Sorry, but I just have to point out how this field of maths reads like Star Trek technobabble to me.
Where do you think Star Trek got its technobabble from?
Have I got a skill for you!
trekify/SKILL.md: https://github.com/SimHacker/moollm/blob/main/skills/trekify...
Cool idea, but the AI README text is so cringy in places: “This is FUN, not FEAR”
Be careful, in the strength of your passions, that you don't become a stochastic word generator yourself.
> Am I getting that right?
My comment was in response to the claim I responded to. Any inference you have made about my feelings about OpenAI is your own. You can search my comment history if you want to verify or reject your suspicion; I don't think you'll be able to verify it. So, no.
I feel for you because you kinda got baited into this by the language in the first couple comments. But whatever’s going on in your comment is so emotional that it’s hard to tell what you’re asking for that you haven’t been able to read already, tl;dr proof stuck at n=4 for years is now for arbitrary n
Yeah I kind of fell for it. I was hoping to be pleasantly surprised by a particle physicist in the openai victory lap thread or someone with insight into what “GPT 5.2 originally conjectured this” means exactly because the way it’s phrased in the preprint makes it sound like they were all doing bongrips with chatgpt and it went “man do you guys ever think about gluon tree amplitudes?” but uh, my empty post getting downvoted hours after being made empty makes it pretty clear that this is a strictly victory-lap-only thread
Fwiw I'm not trying to celebrate for OpenAI. The press piece definitely makes bolder claims than the paper.
I was just stating the facts and correcting a reaction that went too far in the other direction. Taking my comment as supporting or validating OpenAI's claim is just as bad: an error of the same magnitude.
I feel like I've been quoting Feynman a lot this week: "The first principle is that you must not fool yourself, and you are the easiest person to fool." You're the easiest person for you to fool because you're as smart as yourself, and deceiving is easier than proving. We all fall for these traps, and the smartest people in the world (or in history) are not immune to it. But it's interesting to see it on a section of the internet that prides itself on its intelligence. I think we just love blinders, which is only human.
It bears repeating that modern LLMs are incredibly capable, and relentless, at solving problems that have a verification test suite. It seems like this problem did (at least for some finite subset of n)!
This result, by itself, does not generalize to open-ended problems, though, whether in business or in research in general. Discovering the specification to build is often the majority of the battle. LLMs aren't bad at this, per se, but they're nowhere near as reliably groundbreaking as they are on verifiable problems.
> modern LLMs are incredibly capable, and relentless, at solving problems that have a verification test suite.
Feels like a bit of what I tried to express a few weeks ago https://news.ycombinator.com/item?id=46791642 namely that we are just pouring computational resources into verifiable problems, then claiming that, astonishingly, sometimes it works. Sure, LLMs even have a slight bias, namely they rely on statistics, so it's not purely brute force, but still the approach is pretty much the same: throw stuff at the wall, see what sticks, and once something finally does, report it as grandiose and claim to be "intelligent".
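For illustration only, that "throw stuff at the wall, keep what a verifier accepts" loop can be sketched in a few lines. The base cases and candidate formulas below are entirely made up; they stand in for hand-computed results and model-proposed closed forms:

```python
def verify(candidate, cases):
    """Check a conjectured closed form against every known base case."""
    return all(candidate(n) == expected for n, expected in cases)

# Hypothetical base cases standing in for hand-computed results.
known = [(1, 1), (2, 4), (3, 9), (4, 16)]

def search(proposals, cases):
    """Throw candidate formulas at the wall; keep the first that sticks."""
    for candidate in proposals:
        if verify(candidate, cases):
            return candidate
    return None

# Candidate closed forms a generator might propose.
proposals = [lambda n: 2 * n, lambda n: n + 1, lambda n: n * n]
winner = search(proposals, known)  # the n*n candidate survives all cases
```

The rigor lives entirely in the verifier, not the generator: any candidate that survives the known cases gets promoted, which is why this approach only pays off on problems with cheap verification.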
> throw stuff at the wall, see what sticks, once something finally does report it as grandiose and claim to be "intelligent".
What do we think humans are doing? I think it’s not unfair to say our minds are constantly trying to assemble the pieces available to them in various ways. Whether we’re actively thinking about a problem or in the background as we go about our day.
Every once in a while the pieces fit together in an interesting way and it feels like inspiration.
The techniques we’ve learned likely influence the strategies we attempt, but beyond all this what else could there be but brute force when it comes to “novel” insights?
If it’s just a matter of following a predefined formula, it’s not intelligence.
If it’s a matter of assembling these formulas and strategies in an interesting way, again what else do we have but brute force?
See what I replied just earlier https://news.ycombinator.com/item?id=47011884 about the two different regimes: working within a paradigm versus challenging it by going back to first principles. The ability to notice that something is off, beyond "just" assembling existing pieces, to backtrack within the process when the failures pile up, and to actually understand the relationships, is precisely what is different.
So I don’t really see why this would be a difference in kind. We’re effectively just talking about how high up the stack we’re attempting to brute force solutions, right?
How many people have tried to figure out a new maths, a GUT in physics, a more perfect human language (Esperanto for ex.) or programming language, only to fail in the vast majority of their attempts?
Do we think that anything but the majority of the attempts at a paradigm shift will end in failure?
If the majority end in failure, how is that not the same brute force methodology (brute force doesn’t mean you can’t respond to feedback from your failed experiments or from failures in the prevailing paradigms, I take it to just fundamentally mean trying “new” things with tools and information available to you, with the majority of attempts ending in failure, until something clicks, or doesn’t and you give up).
The field of medicine, specifically pharmacology and drug discovery, is an optimized version of that. It works a bit like this:
Instead of brute-forcing with infinite options, reduce the problem space by starting with some hunch about the mechanism. Then the hard part that can take decades: synthesize compounds with the necessary traits to alter the mechanism in a favourable way, while minimizing unintended side-effects.
Then try on a live or lab grown specimen and note effectiveness. Repeat the cycle, and with every success, push to more realistic forms of testing until it reaches human trials.
Many drugs that reach the last stage, human trials, often end up being used for something completely other than what they were designed for! One example is minoxidil: designed to regulate blood pressure, used for regrowing hair!
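Very loosely, the hunch-then-screen pipeline described above is a staged filter, where each candidate is pushed through increasingly realistic and expensive tests. The stage checks and compounds below are hypothetical placeholders, not real pharmacology:

```python
def screen(candidates, stages):
    """Push each candidate through every stage in order; a candidate
    survives only if it passes all of them (cheap tests first)."""
    return [c for c in candidates if all(stage(c) for stage in stages)]

# Hypothetical stage checks, ordered from cheap to expensive.
stages = [
    lambda c: c["binds_target"],        # in vitro assay
    lambda c: c["tolerated_in_mice"],   # animal model
    lambda c: c["effective_in_trial"],  # human trial
]

candidates = [
    {"name": "A", "binds_target": True, "tolerated_in_mice": True,
     "effective_in_trial": False},
    {"name": "B", "binds_target": True, "tolerated_in_mice": True,
     "effective_in_trial": True},
]
passed = screen(candidates, stages)  # only compound B survives
```

Ordering the stages from cheap to expensive is the whole point: most candidates get discarded before the costly tests, which is what makes the brute-force-with-a-hunch loop affordable at all.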
It’s almost like the iteration loop refines itself between, *checks notes*, search and learning (per Sutton).
While I don't think anyone has a plausible theory that goes to this level of detail on how humans actually think, there's still a major difference. I think it's fair to say that if we are doing a brute force search, we are still astonishingly more energy efficient at it than these LLMs. The amount of energy that goes into running an LLM for 12h straight is vastly higher than what it takes for humans to think about similar problems.
At similar quality, NN speed is increasing by ~5-10x per year. Nothing SOTA is efficient; it's the preview of what will be efficient in 2-3 years.
In the research group I'm in, we usually try a few approaches to each problem. Let's say we get:
Method A) 30% speed reduction and 80% precision decrease
Method B) 50% speed reduction and 5% precision increase
Method C) 740% speed reduction and 1% precision increase
and we only publish B. It's not brute force[1], but throwing noodles at the wall to see what sticks, like the GP said. We don't throw spoons[1], but everything that looks like a noodle has a high chance of being thrown. It's a mix of experience[1] and not enough time to try everything.
[1] citation needed :)
That's also what most grad students are doing. Even in the unlikely case they completely stop improving, it's still a massive deal.
Even more generally than verification, just being tied to a loss function that represents something we actually care about. E.g. compiler and test errors, Lean verification in Aristotle, basic physics energy configurations in AlphaFold, or win conditions in RL, such as in AlphaGo.
RLHF is an attempt to push LLMs pre-trained with a dopey reconstruction loss toward something we actually care about: imagine if we could find a pre-training criterion that actually cared about truth and/or plausibility in the first place!
There's been active work in this space, including TruthRL: https://arxiv.org/html/2509.25760v1. It's absolutely not a solved problem, but reducing hallucinations is a key focus of all the labs.
Yes, this is where I just cannot imagine completely AI-driven software development of anything novel and complicated without extensive human input. I'm currently working in a space where none of our data models are particularly complex, but the trick is all in defining the rules for how things should work.
Our actual software implementation is usually pretty simple; often writing up the design spec takes significantly longer than building the software, because the software isn't the hard part - the requirements are. I suspect the same folks who are terrible at describing their problems are going to need help from expert folks who are somewhere between SWE, product manager, and interaction designer.
That paper from the 80s (which is cited in the new one) is about "MHV amplitudes" with two negative-helicity gluons, so "double-minus amplitudes". The main significance of this new paper is to point out that "single-minus amplitudes" which had previously been thought to vanish are actually nontrivial. Moreover, GPT-5.2 Pro computed a simple formula for the single-minus amplitudes that is the analogue of the Parke-Taylor formula for the double-minus "MHV" amplitudes.
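For context, the double-minus (MHV) closed form being contrasted here is, from memory of the standard literature and up to coupling constants and the overall momentum-conserving delta function, usually written as:

```latex
A_n\bigl(1^+,\dots,i^-,\dots,j^-,\dots,n^+\bigr)
  \;\propto\;
  \frac{\langle i\,j\rangle^{4}}
       {\langle 1\,2\rangle\,\langle 2\,3\rangle\cdots\langle n\,1\rangle}
```

Here gluons i and j carry negative helicity and the angle brackets are the usual spinor-helicity products; consult the Parke-Taylor paper for sign and normalization conventions. The striking thing is one compact term where naive Feynman-diagram counting gives factorially many.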
You should probably email the authors if you think that's true. I highly doubt they didn't do a literature search first though...
You should be more skeptical of marketing releases like this. This is an advertisement.
It's hard to get someone to do a literature search first when they get free publicity by not doing one and claiming some major AI-assisted breakthrough...
Heck, it's hard to get authors to do a literature search, period: never mind not thoroughly looking for prior art, even well-known disgraced papers continue to get positive citations all the time...
They also reference Parke and Taylor. Several times...
Don't underestimate the willingness of physicists to skimp on literature review.
After last month’s handling of the Erdős problems by LLMs, at this point everyone writing papers should be aware that literature checks are approximately free. Even physicists.
> But no one has been able to greatly reduce the complexity of these expressions, providing much simpler forms.
Slightly OT, but wasn't this supposed to be largely solved with amplituhedrons?
Still pretty awesome though, if you ask me.
I think even a “non-intelligent” solver like Mathematica is cool, so hell yes, this is cool.
Big difference between “derives new result” and “reproduces something likely in its training dataset”.
Sounds somewhat similar to the groundbreaking application of a computer to prove the four color theorem: there, the researchers wrote a program to find and formally verify the numerous particular cases. Here, the computer finds a simplifying pattern.
I'm not sure if GPT's ability goes beyond a formal math package's in this regard, or if it's just way more convenient to ask ChatGPT rather than use that software.
> but I haven’t been to get them to do something totally out of distribution yet from first principles
Can humans actually do that? Sometimes it appears as if we have made a completely new discovery. However, if you look more closely, you will find that many events and developments led up to this breakthrough, and that it is actually an improvement on something that already existed. We are always building on the shoulders of giants.
> Can humans actually do that?
From my reading yes, but I think I am likely reading the statement differently than you are.
> from first principles
Doing things from first principles is a known strategy, so is guess and check, brute force search, and so on.
For an llm to follow a first principles strategy I would expect it to take in a body of research, come up with some first principles or guess at them, then iteratively construct and tower of reasonings/findings/experiments.
Constructing a solid tower is where things are currently improving for existing models in my mind, but when I try openai or anthropic chat interface neither do a good job for long, not independently at least.
Humans also often have a hard time with this; in general it is not a skill that everyone has, and I think you can be a successful scientist without ever heavily developing first-principles problem solving.
"Constructing a solid tower" from first principles is already super-human level. Sure, you can theorize a tower (sans the "solid") from first principles; there's a software architect at my job that does it every day. But the "solid" bit is where things get tricky, because "solid" implies "firm" and "well anchored", and that implies experimental grounds, experimental verification all the way, and final measurable impact. And I'm not even talking particle physics or software engineering; even folding a piece of paper can give you surprising mismatches between theory and results.
Even the realm of pure mathematics and elegant physics theories, where you are supposed to take a set of axioms ("first principles") and build something with them, has cautionary tales such as Russell's paradox or the lack of a well-defined measure for Feynman path integrals, and let's not talk about string theory.
Yes. That's how all advancement in human knowledge happened: small and incremental forays out of our training distribution.
These have been identified as various things. Eureka moments, strokes of genius, out of the box thinking, lateral thinking.
LLMs have not been shown to be capable of this. They might be in the future, but they haven't yet.
Relativity comes to mind.
You could nitpick a rebuttal, but no matter how many people you give credit to, general relativity was a completely novel idea when it was proposed. I'd argue for special relativity as well.
I am not a scientific historian, or even a physicist, but IMO relativity has a weak case for being a completely novel discovery. Critique of absolute time and space of Newtonian physics was already well underway, and much of the methodology for exploring this relativity (by way of gyroscopes, inertial reference frames, and synchronized mechanical clocks) were already in parlance. Many of the phenomena that relativity would later explain under a consistent framework already had independent quasi-explanations hinting at the more universal theory. Poincare probably came the closest to unifying everything before Einstein:
> In 1902, Henri Poincaré published a collection of essays titled Science and Hypothesis, which included: detailed philosophical discussions on the relativity of space and time; the conventionality of distant simultaneity; the conjecture that a violation of the relativity principle can never be detected; the possible non-existence of the aether, together with some arguments supporting the aether; and many remarks on non-Euclidean vs. Euclidean geometry.
https://en.wikipedia.org/wiki/History_of_special_relativity
Now, if I had to pick a major idea that seemed to drop fully-formed from the mind of a genius with little precedent to have guided him, I might personally point to Galois theory (https://en.wikipedia.org/wiki/Galois_theory). (Ironically, though, I'm not as familiar with the mathematical history of that time and I may be totally wrong!)
Right on with special relativity—Lorentz also was developing the theory and was a bit sour that Einstein got so much credit. Einstein basically said “what if special relativity were true for all of physics”, not just electromagnetism, and out dropped e=mc^2. It was a bold step but not unexplainable.
As for general relativity, he spent several years working to learn differential geometry (which was well-developed mathematics at the time, but looked like abstract nonsense to most physicists). I’m not sure how he was turned on to this theory being applicable to gravity, but my guess is that it was motivated by some symmetry ideas. (It always comes down to symmetry.)
If people want to study this, perhaps it makes more sense to do like we used to: don't include the "labels" of relativity into the training set and see if it comes up with it.
> Critique of absolute time and space of Newtonian physics was already well underway
This only means Einstein was not alone; it does not mean the results were in distribution.

> Many of the phenomena that relativity would later explain under a consistent framework already had independent quasi-explanations hinting at the more universal theory.

And this comes about because people are looking at edge cases and trying to solve things. Sometimes people come up with wild and crazy solutions. Sometimes those solutions look obvious after they're known (though not prior to being known, otherwise they would have already been known), and others don't.

Your argument really makes the claim that since there were others pursuing similar directions, the result is in distribution. I'll use a classic statistics-style framing. Suppose we have a bag with n red balls and p blue balls. Someone walks over and says "look, I have a green ball," someone else walks over and says "I have a purple one," and someone else comes over and says "I have a pink one!" None of those balls came from the bag we have. There are still n+p balls in our bag, and they are still all red or blue, despite there being n+p+3 balls that we know of.
> I am not a [...] physicist
I think this is probably why you don't have the resolution to see the distinctions. Without a formal study of physics it is really hard to differentiate these kinds of propositions. It can be very hard even with that education. So be careful not to overly abstract and simplify concepts; it'll only deprive you of a lot of beauty and innovation.

To be clear, I don't think coming up with relativity was "in distribution" based on the results of the time. I would be exceedingly surprised if an LLM trained on all of the physics up until that point, and nothing else, would come up with the framework that Einstein did, from such elegant first principles at that. Without handholding from a prompter, I expect an LLM (or non-critical human thinker) would only parrot the general consensus of confusion and non-uniformity that predominated in that era.
I only believe that (1) if it hadn't been Einstein, it would very soon have been someone else using very similar concepts and evidence, (2) "completely novel idea" is a stricter criterion than "not in distribution," and (3) better examples of completely novel ideas from history exist as a benchmark for this sort of thing.
> Without a formal study of physics it is really hard to differentiate these kinds of propositions. It can be very hard even with that education. So be careful to not overly abstract and simplify concepts. It'll only deprive you of a lot of beauty and innovation.
I agree, but with the caveat that I think ancestor worship is also an impediment to understanding our intellectual and cultural heritage. Either all of human creativity deserves to be treated sacredly, or none of it does.
> To be clear, I don't think coming up with relativity was "in distribution" based on the results of the time.
This is difficult to infer from the context of the conversation.

> only believe that (1) if it hadn't been Einstein, it would very soon have been someone else

I also agree, but am unsure of your point.

> (2) "completely novel idea" is a stricter criterion than "not in distribution"

Sorry, I used a looser word. If you have a strong definition of what "in distribution" means, I'll be happy to adapt.

> (3) better examples of completely novel ideas from history exist

Sure. Maybe? I can't judge. I think determining how novel something is really requires domain expertise. I only have an undergraduate degree in physics, so I am not really qualified to judge the novelty of relativity, but it appears fairly novel to me, fwiw. (And I am an enjoyer of scientific history. I'd really recommend Cropper's The Quantum Physicists: And an Introduction to Their Physics, as it teaches QM in a more historical progression. I'd also recommend the An Opinionated History of Mathematics podcast, which goes through a lot of interesting stuff, including Galileo.)

> I think ancestor worship is also an impediment to understanding our intellectual and cultural heritage

I'm in full agreement here (I have past comments on HN to support this too, tbh; probably best to search for things related to Schmidhuber, since that's where ancestor worship frequently happens in those topics). It's good to recognize people, but we overemphasize some and entirely forget most. I don't think this is malicious but more logistical. Even Cropper's work misses many people, but I think it is still a good balance considering the audience.

I think the best way to avoid the problem is to remember that "my understanding is limited" and always will be. At least until we somehow become omniscient, but I'm not counting on that ever happening.
From that article:
> The quintic was almost proven to have no general solutions by radicals by Paolo Ruffini in 1799, whose key insight was to use permutation groups, not just a single permutation.
Thing is, I am usually the kind of person who defends the idea of a lone genius. But I also believe there is a continuous spectrum, no gaps, from the village idiot to Einstein and beyond.
Let me introduce, just for fun, not for the sake of any argument, another idea from math which I think it came really out of the blue, to the degree that it's still considered an open problem to write an exposition about it, since you cannot smoothly link it to anything else: forcing.
At least Einstein didn't just suddenly turn around and say:
```ai-slop
But wait, this equation is too simple, I need to add more terms or it won't model the universe. Let me think about this again. I have 5 equations and I combined them and derived e=mc^2 but this is too simple. The universe is more complicated. Let's try a different derivation. I'll delete the wrong outputs first and then start from the input equations.
<Deletes files with groundbreaking discovery>
Let me think. I need to re-read the original equations and derive a more complex formula that describes the universe.
<Re-reads equation files>
Great, now I have the complete picture of what I need to do. Let me plan my approach. I'm ready. I have a detailed plan. Let me check some things first.
I need to read some extra files to understand what the variables are.
<Reads the lunch menu for the next day>
Perfect. Now I understand the problem fully, let me revise my plan.
<Writes plan file>
Okay I have written the plan. Do you accept?
<Yes>
Let's go. I'll start by creating a To Do list:
- [ ] Derive new equation from first principles making sure it's complex enough to describe reality.
- [ ] Go for lunch. When the server offers tuna, reject it because the notes say I don't like fish.
```
(You know what's really sad? I wrote that slop without using AI and without referring to anything...)
That's some pretty good verbatim Claude Opus 4.6, if I do say so myself.
You need to differentiate between special and general relativity when making these statements.
It is absolutely true that someone else would have come up with special relativity very soon after Einstein. All that would be necessary is for someone else to have the wherewithal to say "perhaps the aether does not need to exist" for the equations already known at the time, by others before Einstein, to lead to the special theory.
General relativity is different. Witten contends that it is entirely possible that without Einstein, we may have had to wait for the early string theorists of the 1960s to discover GR as a classical limit of the first string theories in their quest to understand the strong nuclear force.
As opposed to SR, GR is one of the most singular innovative intellectual achievements in human history. It's definitely "out of distribution" in some sense.
Newton himself wrote that we usually deal with relative space and time, but we can imagine absolute time and space.
Yes, the principle of relativity was known to Newton, but the other idea, that the speed of light is the same in all reference frames, was new, counterintuitive, and what makes special relativity the way it is.
In my view, another example would be Gautama Buddha, with Dependent Origination. It’s basically a super early realisation of Process Philosophy.
https://en.wikipedia.org/wiki/Prat%C4%ABtyasamutp%C4%81da https://iep.utm.edu/processp/
Edit: but even it likely relied on his prior experience with nondualistic Hinduisms, of course.
Agreed.
General relativity was a completely novel idea. Einstein took a purely mathematical object (now known as the Einstein tensor) and realized that, since its covariant derivative was zero, it could be equated (apart from a constant factor) to a conserved physical object, the energy-momentum tensor. It didn't just fall out of Riemannian geometry and what was known about physics at the time.
Special relativity was the work of several scientists as well as Einstein, but it was also a completely novel idea - just not the idea of one person working alone.
I don't know why anyone disputes that people can sometimes come up with completely novel ideas out of the blue. This is how science moves forward. It's very easy to look back on a breakthrough and think it looks obvious (because you know the trick that was used), but it's important to remember that the discoverer didn't have the benefit of hindsight that you have.
Even if I grant you that, surely we’ve moved the goal posts a bit if we’re saying the only thing we can think of that AI can’t do is the life’s work of a man whose last name is literally synonymous with genius.
That's not exactly true. Lorentz contraction is a clear antecedent to special relativity.
It isn't an antecedent; it's part of special relativity, discovered by Lorentz. It's well known that special relativity is the work of several people as well as Einstein.
Not really. Pretty sure I read recently that Newton appreciated that his theory was non-local and didn't like what Einstein later called "spooky action at a distance". The Lorentz transform was also known from 1887. Time dilation was understood from 1900. Poincaré figured out in 1905 that it was a mathematical group. Einstein put a bow on it all by figuring out that you could derive it from the principle of relativity and keeping the speed of light constant in all inertial reference frames.
I'm not sure about GR, but I know that it is built on the foundations of differential geometry, which Einstein definitely didn't invent (I think that's the source of his "I assure you whatever your difficulties in mathematics are, that mine are much greater" quote because he was struggling to understand Hilbert's math).
And really Cauchy, Hilbert, and those kinds of mathematicians I'd put above Einstein in building entirely new worlds of mathematics...
Agree with you everywhere. Although I prefer the quote:
"Since the mathematicians have invaded the theory of relativity, I do not understand it myself anymore."
:)
Are you saying Newton was aware of quantum entanglement? Because that's what the "spooky action at a distance" quote refers to.
Newton wrote, "That one body may act upon another at a distance through a vacuum without the mediation of anything else, by and through which their action and force may be conveyed from one another, is to me so great an absurdity that, I believe, no man who has in philosophic matters a competent faculty of thinking could ever fall into it."
Source: https://www.newtonproject.ox.ac.uk/view/texts/normalized/THE...
This quote itself must be taken in the context of Newton's own aspirations. Newton was specifically searching for a force capable of moving distant objects when he realised the essence of gravity. No apple really fell on his head; that story was likely invented by those who could not stand Newton (he was famously brash) and meant simply that his personality was a result of getting hit on the head.
And Newton was famously interested in dark religious interference in worldly affairs, what today we would call the occult. When he did finally succeed in finding his force for moving objects at a distance, without need for an intervening body, he gave credit to these supernatural entities; at least that is how this quote was taken in his day. This religious context is not well known today, nor is Newton's difficult character, so it is easy now to take the quote out of context. Newton was (likely) not disputing the validity of his discovery; rather, he was invoking one of his passions (the occult) in the affairs of another of his successful passions (finding a force to move distant objects).
It should be noted that some of Newton's successful religious work is rarely attributed to him. For a prominent example, it was Newton that calculated Jesus's birth to be 4 BC, not 1 AD as was the intention of the new calendar.
Arguably it's precisely a paradigm shift. Continuing with whatever worked until now is within the paradigm: our current theories and tools work, we find a few problems that don't fit, but that's fine, the rest is still progress. Then we keep hitting more problems, or those few pesky unsolved problems turn out to actually be important. We go back to the theory and its foundations and finally challenge them. We break from the old paradigm and come up with new theories and tools, because the first principles are now better understood, and we iterate.
So those are actually two different regimes for how to proceed. Both are useful, but arguably breaking out of the current paradigm is much harder and thus rarer.
Go enough shoulders down, and someone had to have been the first giant.
Probably not Homo sapiens: other hominids older than us developed a lot of technology.
A discovery by a giant is in some sense a new base vector in the space of discoveries. The interesting question is whether a statistical machine can only perform a linear combination in the space of discoveries, or whether it can discover a new base vector in that space... whatever that is.
For sure we know modern LLMs and AIs are not constrained by anything particularly close to simple linear combinations, by virtue of their depth and non-linear activation functions.
But yes, it is not yet clear to what degree there can be (non-linear) extrapolation in the learned semantic spaces here.
Pythagoras is the turtle.
Pythagoras learned from the Egyptians, whose contributions have been largely erased by euro/western narratives of superiority.
The tricky part is that LLMs aren't just spewing outputs from the distribution (or "near" learned manifolds), but also extrapolating / interpolating (depending on how much you care about the semantics of these terms https://arxiv.org/abs/2110.09485).
There are genuine creative insights that come from connecting two known semantic spaces in a way that wasn't obvious before (e.g., a novel isomorphism). It is very conceivable that LLMs could make this kind of connection, but we haven't really seen a dramatic form of this yet. This kind of connection can lead to deep, non-trivial insights, but whether or not it is "out-of-distribution" is harder to answer in this case.
I mean, there’s just no way you can take the set of publicly known ideas from all human civilizations, say, 5,000 years ago, and say that all the ideas we have now were “in the distribution” then. New ideas actually have to be created.
Depends on what you think is valid.
The process you’re describing is humans extending our collective distribution through a series of smaller steps. That’s what the “shoulders of giants” means. The result is we are able to do things further and further outside the initial distribution.
So it depends on if you’re comparing individual steps or just the starting/ending distributions.
> Can humans actually do that?
Yes.

Seriously, think about it for a second...
If that were true then science should have accelerated a lot faster. Science would have happened differently, and researchers would have optimized for ingesting as many papers as they could.
Dig deep into things and you'll find that there are often leaps of faith that need to be made. Guesses, hunches, and outright conjectures. Remember, there are paradigm shifts that happen. There are plenty of things in physics (including classical) that cannot be determined from observation alone. Or more accurately, cannot be differentiated from alternative hypotheses through observation alone.
I think the problem is that when teaching science we generally teach it very linearly, as if things easily follow from one another. But in reality there are constant iterative improvements that look more like a plateau, and then there are these leaps. They happen for a variety of reasons, but no paradigm shift would be contentious if it were obvious and clearly in distribution. It would instead be met with the same response typical iterative improvements get: "well that's obvious, is this even novel enough to be published? Everybody already knew this" (hell, look at the response to the top comment and my reply... that's classic "Reviewer #2" behavior). If everything were in distribution, progress would be nearly frictionless. Again, in how we teach the history of science we make an error in teaching things like Galileo, as if The Church was the only opposition. There were many scientists who objected, and on reasonable grounds. It is also a mistake we continually make in how we view the world: if you're sticking with "it works" you'll end up with a geocentric model rather than a heliocentric model. It is true that the geocentric model had limits, but so did the original heliocentric model, and that's why it took time to be adopted.
By viewing things at too high a level we often fool ourselves. While I'm criticizing how we teach, I'll also admit it is a tough thing to balance. It is difficult to get nuanced, and in teaching we must be time-effective and cover a lot of material. But I think it is important to teach the history of science so that people better understand how it actually evolves and how discoveries were actually made. Without that it is hard to learn how to do those things yourself, and this is a frequent problem faced by many who enter PhD programs (and beyond).
> We are always building on the shoulders of giants.
And it still is. You can still lean on others while presenting things that are highly novel. These are not in disagreement.

It's probably worth reading The Unreasonable Effectiveness of Mathematics in the Natural Sciences. It might seem obvious now but read carefully. If you truly think it is obvious that you can sit in a room armed with only pen and paper and make accurate predictions about the world, you have fooled yourself. You have not questioned why this is true. You have not questioned when this actually became true. You have not questioned how this could be true.
https://www.hep.upenn.edu/~johnda/Papers/wignerUnreasonableE...
You are greater than the sum of your parts

When chess engines were first developed, they were strictly worse than the best humans. After many years of development, they became helpful to even the best humans even though they were still beatable (1985–1997). Eventually they caught up and surpassed humans but the combination of human and computer was better than either alone (~1997–2007). Since then, humans have been more or less obsoleted in the game of chess.
Five years ago we were at Stage 1 with LLMs with regard to knowledge work. A few years later we hit Stage 2. We are currently somewhere between Stage 2 and Stage 3 for an extremely high percentage of knowledge work. Stage 4 will come, and I would wager it's sooner rather than later.
There's a major difference between chess and scientific research: setting the objectives is itself part of the work.
In chess, there's a clear goal: beat the game according to this set of unambiguous rules.
In science, the goals are much more diffuse, and setting those in the first place is what makes a scientist more or less successful, not so much technical ability. It's a very hierarchical field where permanent researchers direct staff (postdocs, research scientists/engineers), who in turn direct grad students. And it's at the bottom of the pyramid that technical ability is most relevant/rewarded.
Research is very much a social game, and I think replacing it with something run by LLMs (or other automatic process) is much more than a technical challenge.
With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth. It's worth keeping in mind just how little we understand about LLM capability scaling. Ask 10 different AI researchers when we will get to Stage 4 for something like programming and you'll get wild guesses or an honest "we don't know".
That is not what happened with chess engines. We didn’t just throw better hardware at it, we found new algorithms, improved the accuracy and performance of our position evaluation functions, discovered more efficient data structures, etc.
People have been downplaying LLMs since the first AI-generated buzzword garbage scientific paper made its way past peer review and into publication. And yet they keep getting better and better to the point where people are quite literally building projects with shockingly little human supervision.
By all means, keep betting against them.
Chess grandmasters are living proof that it’s possible to reach grandmaster level in chess on 20W of compute. We’ve got orders of magnitude of optimizations to discover in LLMs and/or future architectures, both software and hardware and with the amount of progress we’ve got basically every month those ten people will answer ‘we don’t know, but it won’t be too long’. Of course they may be wrong, but the trend line is clear; Moore’s law faced similar issues and they were successively overcome for half a century.
IOW respect the trend line.
And their predictions about Go were wrong, because they thought the algorithm would forever be α-β pruning with a weak value heuristic.
> With a chess engine, you could ask any practitioner in the 90's what it would take to achieve "Stage 4" and they could estimate it quite accurately as a function of FLOPs and memory bandwidth.
And the same practitioners said right after deep blue that go is NEVER gonna happen. Too large. The search space is just not computable. We'll never do it. And yeeeet...
The evolution was also interesting: first the engines were amazing tactically but pretty bad strategically so humans could guide them. With new NN based engines they were amazing strategically but they sucked tactically (first versions of Leela Chess Zero). Today they closed the gap and are amazing at both strategy and tactics and there is nothing humans can contribute anymore - all that is left is to just watch and learn.
so we are going back to physical labor then
We are already at stage 3 for software development and arguably step 4
We are at level 2.5 for software development, IMO. There is a clear skill gap between experienced humans and LLMs when it comes to writing maintainable, robust, concise and performant code and balancing those concerns.
The LLMs are very fast but the code they generate is low quality. Their comprehension of the code is usually good but sometimes they have a weightfart and miss some obvious detail and need to be put on the right path again. This makes them good for non-experienced humans who want to write code and for experienced humans who want to save time on easy tasks.
> The LLMs are very fast but the code they generate is low quality.
I think the latest generation of LLMs with Claude Code is not low quality. It's better than the code that pretty much every dev on our team can produce, outside of very narrow edge cases.
"GPT did this". Authored by Guevara (Institute for Advanced Study), Lupsasca (Vanderbilt University), Skinner (University of Cambridge), and Strominger (Harvard University).
Probably not something that the average GI Joe would be able to prompt their way to...
I am skeptical until they show the chat log leading up to the conjecture and proof.
I'm a big LLM sceptic but that's… moving the goalposts a little too far. How could an average Joe even understand the conjecture enough to write the initial prompt? Or do you mean that experts would give him the prompt to copy-paste, and hope that the proverbial monkey can come up with a Henry V? At the very least posit someone like a grad student in particle physics as the human user.
I would interpret it as implying that the result was due to a lot more hand-holding than what is let on.
Was the initial conjecture based on leading info from the other authors or was it simply the authors presenting all information and asking for a conjecture?
Did the authors know that there was a simpler means of expressing the conjecture and lead GPT to its conclusion, or did it spontaneously do so on its own after seeing the hand-written expressions.
These aren't my personal views, but there is some handwaving about the process in such a way that reads as if this was all spontaneous involvement on GPT's end.
But regardless, a result is a result so I'm content with it.
Hi I am an author of the paper. We believed that a simple formula should exist but had not been able to find it despite significant effort. It was a collaborative effort but GPT definitely solved the problem for us.
Oh that's really cool, I am not versed in physics by any means, can you explain how you believed there to be a simple formula but were unable to find it? What would lead you to believe that instead of just accepting it at face value?
There are closely related "MHV amplitudes" which naively obey a really complicated formula, but for which there famously also exists a much simpler "Parke-Taylor formula". Alfredo had derived a complicated expression for these new "single-minus amplitudes" and we were hoping we could find an analogue of the simpler "Parke-Taylor formula" for them.
Thank you for taking the time to reply, I see you might have already answered this elsewhere so it's much appreciated.
My pleasure---thank you for your interest!
Do you also work at OpenAI? A comment pointing that out was flagged by the LLM marketers.
I think it says in the paper that he does, but it's also public knowledge.
Correct, on both counts!
That's kinda the whole point.
SpaceX can use an optimization algorithm to hoverslam a rocket booster, but the optimization algorithm didn't really figure it out on its own.
The optimization algorithm was used by human experts to solve the problem.
In this case there certainly were experts doing hand-holding. But simply being able to ask the right question isn't too much to ask, is it? If it had been merely a grad student or even a PhD student who had asked ChatGPT to figure out the result, and ChatGPT had done that, even interactively with the student, this would be huge news. But an average person? Expecting LLMs to transcend the GIGO principle is a bit too much.
hey, GPT, solve this tough conjecture I've read about on Quanta. make no mistakes
"Hey GPT thanks for the result. But is it actually true?"
The average Joe reads at an 8th grade level; 21% of US adults are illiterate.
LLMs surpassed the average human a long time ago IMO. When LLMs fail to measure up to humans, it's that they fail to measure up against human experts in a given field, not the Average Joe.
We are surrounded by NPCs.
"Grad Student did this". Co-authored by <Famous advisor 1>, <Famous advisor 2>, <Famous advisor 3>.
Is this so different?
The paper has all those prominent institutions who acknowledge the contribution, so realistically, why would you be skeptical?
they probably also acknowledge pytorch, numpy, R ... but we don't attribute those tools as the agent who did the work.
I know we've been primed by sci-fi movies and comic books, but like pytorch, gpt-5.2 is just a piece of software running on a computer instrumented by humans.
I don't see the authors of those libraries getting a credit on the paper, do you ?
>I know we've been primed by sci-fi movies and comic books, but like pytorch, gpt-5.2 is just a piece of software running on a computer instrumented by humans.
Sure
At least one of the authors works for OpenAI. It’s a puff piece.
And we are just a system running on carbon-based biology in our physics computer run by whomever. What makes us special, to say that we are different than GPT-5.2?
> And we are just a system running on carbon-based biology in our physics computer run by whomever. What makes us special, to say that we are different than GPT-5.2?
Do you really want to be treated like an old PC (dismembered, stripped for parts, and discarded) when your boss is done with you (i.e. not treated specially compared to a computer system)?
But I think if you want a fuller answer, you've got a lot of reading to do. It's not like you're the first person in the world to ask that question.
You misunderstood; I am pro-humanism. My comment was about challenging the belief that models can't be as intelligent as we are, which can't be answered definitively, though a lot of empirical evidence seems to point to the fact that we are not fundamentally different intelligence-wise. Just closing our eyes will not help preserve humanism, so we have to shape the world with models in a human-friendly way, aka alignment.
It's always a value decision. You can say shiny rocks are more important than people and worth murdering over.
Not an uncommon belief.
Here you are saying you personally value a computer program more than people. It exposes a value that you personally hold, and that's it.
That is separate from the material reality that all this AI stuff is ultimately just computer software... It's an epistemological tautology in the same way that say, a plane, car and refrigerator are all just machines - they can break, need maintenance, take expertise, can be dangerous...
LLMs haven't broken the categorical constraints - you've just been primed to think such a thing is supposed to be different through movies and entertainment.
I hate to tell you but most movie AIs are just allegories for institutional power. They're narrative devices about how callous and indifferent power structures are to our underlying shared humanity
Their point is, would you be able to prompt your way to this result? No. Already trained physicists working at world-leading institutions could. So what progress have we really made here?
It's a stupid point then. Are you able to work with a world leading physicist to any significant degree? No
It's like saying: calculator drives new result in theoretical physics
(In the hands of leading experts.)
No it's not like saying that at all, which is why Open AI have a credit on the paper.
Open AI have a credit on the paper because it is marketing.
Lol Okay
And even if it were, calculators (computers) were world-changing technology when they were new.
No it’s like saying: New expert drives new results with existing experts.
The humans put in significant effort and couldn’t do it. They didn’t then crank it out with some search/match algorithm.
They tried a new technology, modeled (literally) on us as reasoners, that is only just being able to reason at their level and it did what they couldn’t.
The fact that the experts were a critical context for the model, doesn’t make the models performance any less significant. Collaborators always provide important context for each other.
> In my experience LLM’s can make new things when they are some linear combination of existing things but I haven’t been to get them to do something totally out of distribution yet from first principles.
What's the distinction between "first principles" and "existing things"?
I'm sympathetic to the idea that LLMs can't produce path-breaking results, but I think that's true only for a strict definition of path-breaking (that is quite rare for humnans too).
Hmm feels a bit trivializing, we don't know exactly how difficult it was to come up with the generic set of equations mentioned from the human starting point.
I can claim some knowledge of physics from my degree, typically the easy part is coming up with complex dirty equations that work under special conditions, the hard part is the simplification into something elegant, 'natural' and general.
Also "LLM’s can make new things when they are some linear combination of existing things"
Doesn't really mean much, what is a linear combination of things you first have to define precisely what a thing is?
Serious question: I often hear about this "let the LLM cook for hours," but how do you do that in practice, and how does it manage its own context? How does it not get lost after so many tokens?
I'm guessing; would love someone with first-hand knowledge to comment. But my guess is it's some combination of trying many different approaches in parallel (each in a fresh context) and picking the one that works, plus splitting the task into sequential steps, where the output of one step is condensed and used as input to the next (with possible human steering between steps).
From what I've seen, it is a process of compacting the session once it reaches some limit, which basically means summarizing all the previous work and feeding it as the initial prompt for the next session.
the annoying part is that with tool calls, a lot of those hours are spent on network round trips.
over long periods of time, checklists are the biggest thing, so the LLM can track what's already done and what's left. after a compact, it can pull the relevant stuff back up and make progress.
having some level of hierarchy is also useful - requirements, high level designs, low level designs, etc
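Put together, the compaction-plus-checklist pattern described in these comments can be sketched roughly as follows. Everything here is a hypothetical stand-in: in a real harness, `call_model` and `summarize` would be actual LLM API calls, and the budget would be counted in tokens rather than characters.

```python
# Rough sketch of a long-running harness: a checklist drives the loop, and
# the transcript is compacted into a summary whenever it outgrows the budget.
MAX_CONTEXT = 2000  # pretend token budget for one session


def call_model(prompt: str) -> str:
    # Stub: a real harness would send `prompt` to an LLM API here.
    return f"worked on: {prompt[-40:]}"


def summarize(history: list) -> str:
    # Stub compaction: a real harness would ask the model for a summary.
    return f"[summary of {len(history)} turns]"


def run(task: str, steps: int) -> list:
    history = [task]
    checklist = [f"step {i}" for i in range(steps)]  # what's done / what's left
    for item in checklist:
        context = "\n".join(history)
        if len(context) > MAX_CONTEXT:
            # Compact: replace the transcript with a summary, keep the task.
            history = [task, summarize(history)]
            context = "\n".join(history)
        history.append(call_model(context + "\nnext: " + item))
    return history
```

The point is just the control flow: the model never sees more than the budget, and the checklist survives compaction because it lives in the harness, not the transcript.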
> I haven’t been to get them to do something totally out of distribution yet from first principles.
Agree with this. I’ve been trying to make LLMs come up with creative and unique word games like Wordle and Uncrossy (uncrossy.com), but so far GPT-5.2 has been disappointing. Comparatively, Opus 4.5 has been doing better on this.
But it’s good to know that it’s breaking new ground in Theoretical Physics!
Surely higher level math is just linear combinations of the syntax and implications of lower level math. LLMs are taught syntax of basically all existing math notation, I assume. Much of math is, after all, just linguistic manipulation and detection of contradiction in said language with a more formal, a priori language.
LLMs can write theorems, but can they come up with meaningful definitions?
What does a 12-hour solution cost an OpenAI customer?
$200/month would cover many such sessions every month.
The real question is, what does it cost OpenAI? I'm pretty sure both their plans are well below cost, at least for users who max them out (and if you pay $200 for something then you'll probably do that!). How long before the money runs out? Can they get it cheap enough to be profitable at this price level, or is this going to be "get them addicted then jack it up" kind of strategy?
No because open source models are close behind
Compute costs will fall drastically for existing models
But it's likely that frontier models of the future won't be released to the public at all, because they'll be too good
AI, cough, LLMs, don't discover things; they simply surface information that already existed.
You're assuming there aren't "new things" latent inside currently existing information. That's definitely false, particularly for math/physics.
But it's worth thinking more about this. What gives humans the ability to discover "new things"? I would say it's due to our interaction with the universe via our senses, and not due to some special powers intrinsic to our brains that LLMs lack. And the thing is, we can feed novel measurements to LLMs (or, eventually, hook them up to camera feeds to "give them senses")
> In my experience LLM’s can make new things when they are some linear combination of existing things
It seems to me that all “new ideas” are basically linear combinations of existing things, with exceedingly rare exceptions…
Maybe Gödel’s Incompleteness?
Darwinian evolution?
General Relativity?
Buddhist non-duality?
I must be a Luddite; how do you get a model to work for 12 hours on a problem? Mine is always ready with an answer and interrupts to ask for confirmation or show the answer.
That's on the harness - the program actually sending the prompt to the model. You can write a different harness that feeds the problem back in for however long you want. Ask Claude Code or Codex to build it for you in as minimal a fashion as possible and you'll see that a naïve version is not particularly more complex than `while true; do prompt $file >> file; done` (though it's not that precisely, obviously).
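As a hedged illustration, a Python version of that naive loop might look like the sketch below, with `ask` standing in for whatever actually calls the model (a real harness would make an API call there):

```python
def ask(prompt: str) -> str:
    # Stub: a real harness would call an LLM API here and return its reply.
    return f"iteration output ({len(prompt)} chars of context seen)"


def cook(task: str, iterations: int) -> str:
    """Feed the whole transcript back into the model on every turn."""
    transcript = task
    for _ in range(iterations):
        transcript += "\n" + ask(transcript)  # append output, re-prompt with it
    return transcript
```

Real harnesses layer compaction and checklists on top of this so the loop doesn't drown in its own output.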
My physics professor once claimed that imagination is just mental manipulation of past experiences. I never thought it was true for human beings but for LLMs it makes perfect sense.
I don't want to be rude but like, maybe you should pre-register some statement like "LLMs will not be able to do X" in some concrete domain, because I suspect your goalposts are shifting without you noticing.
We're talking about significant contributions to theoretical physics. You can nitpick but honestly go back to your expectations 4 years ago and think — would I be pretty surprised and impressed if an AI could do this? The answer is obviously yes, I don't really care whether you have a selective memory of that time.
I don't know enough about theoretical physics: what makes it a significant contribution there?
It's a nontrivial calculation valid for a class of forces (e.g. QCD) and apparently a serious simplification to a specific calculation that hadn't been completed before. But for what it's worth, I spent a good part of my physics career working in nucleon structure and have not run across the term "single minus amplitudes" in my memory. That doesn't necessarily mean much as there's a very broad space work like this takes place in and some of it gets extremely arcane and technical.
One way I gauge the significance of a theory paper is by the measured quantities and physical processes it would contribute to. I see none discussed here, which should tell you how deep into math it is. I personally would not have stopped to read it on my arxiv catch-up.
https://arxiv.org/list/hep-th/new
Maybe to characterize it better, physicists were not holding their breath waiting for this to get done.
Thank you!
Not every contribution has immediate impact.
That doesn't answer the question. That statement just admits "maybe" which isn't helpful or insightful to answering it.
I never said LLMs will not be able to do X. I gave my summary of the article and my anecdotal experiences with LLMs. I have no LLM ideology. We will see what tomorrow brings.
> We're talking about significant contributions to theoretical physics.
Whoever wrote the prompts and guided ChatGPT made significant contributions to theoretical physics. ChatGPT is just a tool they used to get there. I'm sure AI-bloviators and pelican bike-enjoyers are all quite impressed, but the humans should be getting the research credit for using their tools correctly. Let's not pretend the calculator doing its job as a calculator at the behest of the researcher is actually a researcher as well.
If this worked for 12 hours to derive the simplified formula along with its proof then it guided itself and made significant contributions by any useful definition of the word, hence Open AI having an author credit.
> hence Open AI having an author credit.
How much precedent is there for machines or tools getting an author credit in research? Genuine question, I don't actually know. Would we give an author credit to e.g. a chimpanzee if it happened to circle the right page of a text book while working with researchers, leading them to a eureka moment?
>How much precedent is there for machines or tools getting an author credit in research?
For a datum of one, the mathematician Doron Zeilberger gives credit to his computer, Shalosh B. Ekhad, on select papers.
https://medium.com/@miodragpetkovic_24196/the-computer-a-mys...
https://sites.math.rutgers.edu/~zeilberg/akherim/EkhadCredit...
Interesting (and an interesting name for the computer too), thanks!
Not exactly the same thing, but I know of at least two professors that would try to list their cats as co-authors:
That is great, thank you!
I have seen stuff like "you can use my program if you make me a co-author."

That usually comes with some support as well.
it's called ethics and research integrity. not crediting GPT would be a form of misrepresentation
Would it? I think there's a difference between "the researchers used ChatGPT" and "one of the researchers literally is ChatGPT." The former is the truth, and the latter is the misrepresentation in my eyes.
I have no problem with the former and agree that authors/researchers must note when they use AI in their research.
now you are debating exactly how GPT should be credited. idk, I'm sure the field will make up some guidance
for this particular paper it seems the humans were stuck, and only AI thinking unblocked them
> now you are debating exactly how GPT should be credited. idk, I'm sure the field will make up some guidance
In your eyes maybe there's no difference. In my eyes, big difference. Tools are not people, let's not further the myth of AGI or the silly marketing trend of anthropomorphizing LLMs.
>How much precedent is there for machines or tools getting an author credit in research?
Well what do you think? Do the authors (or a single symbolic one) of pytorch or numpy or <insert very useful software> typically get credits on papers that utilize them heavily? Well, clearly these prominent institutions thought GPT's contribution significant enough to warrant an OpenAI credit.
>Would we give an author credit to e.g. a chimpanzee if it happened to circle the right page of a text book while working with researchers, leading them to a eureka moment?
Cool story. Good thing that's not what happened, so maybe we can do away with all these pointless non sequiturs, yeah? If you want to have a good faith argument, you're welcome to it, but if you're going to go on these nonsensical tangents, it's best we end this here.
> Well what do you think? Do the authors (or a single symbolic one) of pytorch or numpy or <insert very useful software> typically get credits on papers that utilize them heavily?
I don't know! That's why I asked.
> Well Clearly these prominent institutions thought GPT's contribution significant enough to warrant an Open AI credit.
Contribution is a fitting word, I think, and well chosen. I'm sure OpenAI's contribution was quite large, quite green and quite full of Benjamins.
> Cool Story. Good thing that's not what happened so maybe we can do away with all these pointless non sequiturs yeah ? If you want to have a good faith argument, you're welcome to it, but if you're going to go on these nonsensical tangents, it's best we end this here.
It was a genuine question. What's the difference between a chimpanzee and a computer? Neither are humans and neither should be credited as authors on a research paper, unless the institution receives a fat stack of cash I guess. But alas Jane Goodall wasn't exactly flush with money and sycophants in the way OpenAI currently is.
>I don't know! That's why I asked.
If you don't read enough papers to immediately realize it is an extremely rare occurrence then what are you even doing? Why are you making comments like you have the slightest clue of what you're talking about? including insinuating the credit was what...the result of bribery?
You clearly have no idea what you're talking about. You've decided to accuse prominent researchers of essentially academic fraud with no proof because you got butthurt about a credit. You think your opinion on what should and shouldn't get credited matters? Okay.
I've wasted enough time talking to you. Good Day.
Do I need to be credentialed to ask questions or point out the troubling trend of AI grift maxxers like yourself helping Sam Altman and his cronies further the myth of AGI by pretending a machine is a researcher deserving of a research credit? This is marketing, pure and simple. Close the simonw substack for a second and take an objective view of the situation.
If a helicopter drops someone off on the top of Mount Everest, it's reasonable to say that the helicopter did the work and is not just a tool they used to hike up the mountain.
Who piloted the helicopter in this scenario, a human or chatgpt? You'd say the pilot dropped them off in a helicopter. The helicopter didn't fly itself there.
“They have chosen cunning instead of belief. Their prison is only in their minds, yet they are in that prison; and so afraid of being taken in that they cannot be taken out.”
― C.S. Lewis, The Last Battle
"For me, it is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring."
— Carl Sagan
I read the Narnia series many times as a kid, and this one stuck with me. I didn't prompt for it.
I have no real way to demonstrate that I'm telling the truth, but I am ¯\_(ツ)_/¯
Sorry for the assumption. For what it's worth, I read one of Sagan's books last year, but pulled the quote from Goodreads :P
In my experience humans can make new things when they are some linear combination of existing things but I haven’t been able to get them to do something totally out of distribution yet from first principles[0].
[0]: https://slatestarcodex.com/2019/02/19/gpt-2-as-step-toward-g...
>LLM’s can make new things when they are some linear combination of existing things
Aren't most new things linear combinations of existing things (up to a point)?
My issue with any of these claims is the lack of proof. Just share the chat and show how it got to the discovery. I'll believe it when I can see it for myself at this point. It's too easy to make all sorts of claims without proof these days. Elon Musk makes them all the time.
Is every new thing not just combinations of existing things? What does out of distribution even mean? What advancement has ever made that there wasn’t a lead up of prior work to it? Is there some fundamental thing that prevents AI from recombining ideas and testing theories?
For example, ever since the first GPT 4 I’ve tried to get LLM’s to build me a specific type of heart simulation that to my knowledge does not exist anywhere on the public internet (otherwise I wouldn’t try to build it myself) and even up to GPT 5.3 it still cannot do it.
But I’ve successfully made it build me a great Poker training app, a specific form that also didn’t exist, but the ingredients are well represented on the internet.
And I’m not trying to imply AI is inherently incapable, it’s just an empirical (and anecdotal) observation for me. Maybe tomorrow it’ll figure it out. I have no dogmatic ideology on the matter.
> Is every new thing not just combinations of existing things?
If all ideas are recombinations of old ideas, where did the first ideas come from? And wouldn't the complexity of ideas be thus limited to the combined complexity of the "seed" ideas?
I think it's more fair to say that recombining ideas is an efficient way to quickly explore a very complex, hyperdimensional space. In some cases that's enough to land on new, useful ideas, but not always. A) the new, useful idea might be _near_ the area you land on, but not exactly at. B) there are whole classes of new, useful ideas that cannot be reached by any combination of existing "idea vectors".
Therefore there is still the necessity to explore the space manually, even if you're using these idea vectors to give you starting points to explore from.
All this to say: Every new thing is a combination of existing things + sweat and tears.
The question everyone has is, are current LLMs capable of the latter component. Historically the answer is _no_, because they had no real capacity to iterate. Without iteration you cannot explore. But now that they can reliably iterate, and to some extent plan their iterations, we are starting to see their first meaningful, fledgling attempts at the "sweat and tears" part of building new ideas.
Well, what exactly an “idea” is might be a little unclear, but I don’t think it's clear that the complexity of ideas that result from combining previously obtained ideas would be bounded by the complexity of the ideas they are combinations of.
Any countable group is a quotient of a subgroup of the free group on two elements, iirc.
There’s also the concept of “semantic primes”. Here is a not-quite-correct oversimplification of the idea: Suppose you go through the dictionary and, one word at a time, pick a word whose definition uses only other words that are still in the dictionary, and remove it. You can also rephrase definitions before doing this, as long as the meaning is kept. Suppose you do this with the goal of leaving as few words in the dictionary as you can. In the end, you should be left with a small cluster of a bit over 100 words, in terms of which all the words you removed can be indirectly defined. (The idea of semantic primes also says that there is such a minimal set which translates essentially directly between different natural languages.)
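The removal procedure described above can be sketched as a tiny program. This is a hypothetical toy model of the thought experiment, not how semantic-primes research actually works: `defs` maps each word to the set of words its definition uses, and whatever cannot be removed is the residual "prime" cluster.

```python
def reduce_dictionary(defs):
    """Repeatedly remove a word whose definition uses only words still
    present; return the words left over when nothing is removable."""
    remaining = set(defs)
    while True:
        removable = [w for w in remaining if defs[w] <= remaining - {w}]
        if not removable:
            break
        # Prefer removing words that no remaining definition depends on,
        # so we don't strand other words. Removal order matters: this is
        # a greedy heuristic, not a guaranteed-minimal reduction.
        free = [w for w in removable
                if all(w not in defs[o] for o in remaining if o != w)]
        remaining.discard(min(free) if free else min(removable))
    return remaining

# Toy four-word "dictionary": everything bottoms out in a circular pair.
toy = {
    "big": {"large"},
    "large": {"big"},
    "huge": {"big"},
    "gigantic": {"huge", "big"},
}
print(reduce_dictionary(toy))  # {'large'}
```

Here "gigantic" and "huge" peel off first, then one of the circular pair, leaving a one-word core, which is the miniature analogue of the ~100-word cluster.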
I don’t think that says that words for complicated ideas aren’t like, more complicated?
>If all ideas are recombinations of old ideas, where did the first ideas come from?
Ideas seem to just be our abstractions of neural impulses from deep in evolution.
"Sweat and tears" -> exploration and the training signal for reinforcement learning.
> What does out of distribution even mean?
There are in fact ways to directly quantify this, if you are training e.g. a self-supervised anomaly-detection model.
Even with modern models not trained in that manner, looking at e.g. cosine distances of embeddings of "novel" outputs could conceivably provide objective evidence for "out-of-distribution" results. Generally, the embeddings of out-of-distribution outputs will have a large cosine (or even Euclidean) distance from the typical embedding(s). Just, most "out-of-distribution" outputs will be nonsense / junk, so, searching for weird outputs isn't really helpful, in general, if your goal is useful creativity.
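As a toy illustration of that cosine-distance idea, with hypothetical three-dimensional "embeddings" standing in for a real model's vectors:

```python
import math
import random

def cosine_distance(a, b):
    # 1 - cosine similarity: ~0 for aligned vectors, up to 2 for opposite.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def is_out_of_distribution(embedding, reference_embeddings, threshold=0.5):
    # Compare a candidate against the "typical" (mean) embedding of
    # known in-distribution outputs. The threshold is arbitrary here;
    # in practice it would be calibrated on held-out data.
    dim = len(embedding)
    n = len(reference_embeddings)
    centroid = [sum(e[i] for e in reference_embeddings) / n
                for i in range(dim)]
    return cosine_distance(embedding, centroid) > threshold

# In-distribution points cluster along one direction; the "far"
# candidate points somewhere else entirely.
random.seed(0)
in_dist = [[1.0 + random.gauss(0, 0.05),
            random.gauss(0, 0.05),
            random.gauss(0, 0.05)] for _ in range(100)]
near = [0.9, 0.05, 0.0]
far = [-1.0, 0.5, 0.0]

print(is_out_of_distribution(near, in_dist))  # False
print(is_out_of_distribution(far, in_dist))   # True
```

As the comment notes, a large distance only flags that an output is atypical; it says nothing about whether it is useful or junk.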
Very, very few human individuals are capable of making new things that are not a linear combination of existing things. Even special relativity was an application of two previous ideas: all of special relativity is derivable from the principle of relative motion (known since antiquity) and the constancy of the speed of light (which was known to Einstein). From there it is a straightforward application of the Pythagorean theorem to realize there is a contradiction, and the Lorentz factor falls out naturally via basic algebra.
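The light-clock argument gestured at here can be written out in a few lines (a standard textbook sketch, not a claim about Einstein's own route):

```latex
% A light clock of height L. In its rest frame the pulse crosses it in
% \Delta t_0 = L / c. Viewed from a frame in which the clock moves at
% speed v, the pulse traverses the hypotenuse of a right triangle:
\begin{align}
  (c\,\Delta t)^2 &= L^2 + (v\,\Delta t)^2 \\
  \Delta t^2\,(c^2 - v^2) &= L^2 = (c\,\Delta t_0)^2 \\
  \Delta t &= \frac{\Delta t_0}{\sqrt{1 - v^2/c^2}} = \gamma\,\Delta t_0
\end{align}
```

The "contradiction" is that both frames must measure the same speed c for the same pulse, which forces the elapsed times to differ by the Lorentz factor gamma.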
> It took 12 hours for GPT pro to do this
Thanks for the summary, but this is a huge hand-wave. Was GPT Pro just spinning for 12 hours and then returned 42?!
Just wait until LLMs are fast and cheap enough to be run in a breadth first search kind of way, with "fuzzy" pruning.
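A sketch of what that might look like, with toy integer states standing in for LLM outputs and a cheap scoring function standing in for the "fuzzy" pruner (all names and numbers here are made up for illustration):

```python
def fuzzy_bfs(root, expand, score, beam_width=2, max_depth=3,
              keep_threshold=0.2):
    """Level-by-level (breadth-first) search where a scorer -- imagined
    here as a cheap LLM judging how promising a partial result looks --
    prunes "fuzzily": keep the top beam_width children at each level,
    plus any child whose score clears keep_threshold."""
    frontier, visited = [root], [root]
    for _ in range(max_depth):
        children = [c for node in frontier for c in expand(node)]
        if not children:
            break
        ranked = sorted(children, key=score, reverse=True)
        frontier = [c for i, c in enumerate(ranked)
                    if i < beam_width or score(c) >= keep_threshold]
        visited.extend(frontier)
    return max(visited, key=score)

# Toy stand-ins: states are integers, expansion appends a binary digit,
# and "promise" is closeness to a hypothetical target of 13.
expand = lambda n: [2 * n, 2 * n + 1]
score = lambda n: 1.0 / (1.0 + abs(13 - n))

print(fuzzy_bfs(1, expand, score))  # 13
```

With hard top-k pruning alone this is just beam search; the threshold clause is the "fuzzy" part, letting borderline-promising branches survive a level.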
All you have to do is see "openai.com" in the submission URL to know it's bullshit.
[flagged]
Absolutely no way this is true right? Ilya left around the time 4o was released. I can't imagine they haven't had a single successful run since then.
When's the last time they talked about it?
I heard this from people who know more than me
Can't say, just seems implausible, but I am a nobody anyways ¯\_(ツ)_/¯
I'm pretty sure it is widely known that the early 5.x series were built from 4.5 (unreleased). It seems more plausible the 5.x series is still in that continuation.
For some extra context, pre-training is ~1/3 of the training, where the model gains the basic concepts of how tokens go together. Mid- and late-training are where you instill the kinds of humanlike behaviors we see today. I expect pre-training to increasingly become a lower percentage of overall training, putting aside any shifts in what happens in each phase.
So to me, it is plausible they can take the 4.x pre-training and keep pushing in the later phases. There are a lot of results out there showing that scaling laws have not hit their limits yet. I would not be surprised to learn that Gemini 3 Deep Research was 50% late-training / RL.
Okay I see what you mean, and yeah that sounds reasonable too. Do you have any context on that first part? I would like to know more about how/why they might not have been able to pursue more training runs.
I have not done it myself (don't have the dinero), but my understanding is that there are many runs, restarts, and adjustments at this phase. It's surprisingly more fragile than we know aiui
If you already have a good one, it's not likely much has changed since a year ago that would create meaningful differences at this phase (in data; the arch is different, I know less here). If it is indeed true, it's a datapoint to add to the others signaling internal struggle (everybody has some amount of this; it's not good when it makes the headlines).
Distillation is also a powerful training method. There are many ways to stay with the pack without new pre-training runs; it's pretty much what we see from all of them with the minor versions. So coming back to it, the speculation is that OpenAI is still on their 4.x pre-train, but that doesn't impede all progress.
[flagged]
It's interesting to me that whenever a new breakthrough in AI use comes up, there's always a flood of people who come in to handwave away why this isn't actually a win for LLMs. Like with the novel solutions GPT 5.2 has been able to find for Erdős problems - many users here (even in this very thread!) think they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, LLMs have driven these proofs: https://github.com/teorth/erdosproblems/wiki/AI-contribution...
It's easy to fall into a negative mindset when there are legions of pointy haired bosses and bandwagoning CEOs who (wrongly) point at breakthroughs like this as justification for AI mandates or layoffs.
I think it's more insidious than this.
It's easy to fall into a negative mindset because the justification is real and what we see is just the beginning.
Obviously we are not at a point where developers aren't needed. But one developer can do more, and that is a legitimate reason to hire fewer developers.
The impending reality of the upward-moving trendline is that AI becomes so capable that it can replace the majority of developers. That future is so horrifying that people need to scaffold logic to explain it away.
What does "pointy haired" mean? (Presumably not literally?)
The "pointy-haired boss" was a character in the Dilbert comics, an archetypical know-nothing manager who spews jargon, jumps on trends, and takes credit for ideas that aren't his.
Crazy that an honest question like this gets downvoted.
I honestly think the downvote button is pretty trash for online communities. It kills diversity of thought and discussion and leaves you with an echo chamber.
If you disagree with or dislike something, leave a response. Express your view. Save the downvotes for racism, calls for violence, etc.
Downvotes eventually turn all online communities into echo chambers, definitely. It is only a matter of time for HN, and you can see it accelerating in the past 1-2 years (though mostly on AI stuff, and mostly in downvote behaviour - it still remains surprisingly resilient overall).
I feel like the only upside of the downvote is to act as sort of a mob moderation system, allowing offensive comments to naturally sink to the bottom.
Maybe in the future, platforms can have high quality auto moderation using AI to read every post and delete/flag those not following community guidelines.
I’m sure this would work well today, though not sure about the cost.
Yes, all of these stories and frequent model releases are just intended to psyop "decision makers" into validating their longstanding belief that labour shouldn't be as big a line item in a company's expenses, and perhaps can be removed altogether. They can finally go back to the good old days of having slaves (in the form of "agentic" bots); they yearn to own slaves again.
CEOs/decision makers would rather give all their labour budget to tokens if they could just to validate this belief. They are bitter that anyone from a lower class could hold any bargaining chips, and thus any influence over them. It has nothing to do with saving money, they would gladly pay the exact same engineering budget to Anthropic for tokens (just like the ruling class in times past would gladly pay for slaves) if it can patch that bitterness they have for the working class's influence over them.
The inference companies (who are also from this same class of people) know this, and are exploiting this desire. They know if they create the idea that AI progress is at an unstoppable velocity decision makers will begin handing them their engineering budgets. These things don't even have to work well, they just need to be perceived as effective, or soon to be for decision makers to start laying people off.
I suspect this is going to backfire on them in one of two ways.
1. French Revolution V2: they all get their heads cut off in 15 years, or an early retirement on a concrete floor.
2. Many decision makers will make fools of themselves, destroy their businesses, and come begging to the working class for our labor, giving the working class more bargaining chips in the process.
Either outcome is going to be painful for everyone; let's hope people wake up before we push this dumb experiment too far.
I’m reminded of Dan Wang’s commentary on US-China relations:
> Competition will be dynamic because people have agency. The country that is ahead at any given moment will commit mistakes driven by overconfidence, while the country that is behind will feel the crack of the whip to reform. … That drive will mean that competition will go on for years and decades.
https://danwang.co/ (2025 Annual letter)
The future is not predetermined by trends today. So it’s entirely possible that the dinosaur companies of today can’t figure out how to automate effectively, but get outcompeted by a nimble team of engineers using these tools tomorrow. As a concrete example, a lot of SaaS companies like Salesforce are at risk of this.
I think it will be over-automation that does them in. Most normies I know are not down with all this automation and will totally opt for the human-focused product experience, not the one devoid of it because it was built and run by a soulless NN-powered autocomplete. We certainly aren't going to let a bunch of autocomplete models (sold to us as intelligent agents) replace our labor. We aren't stupid.
Much like there is a premium for handmade clothing and from-scratch food, automation does nothing but lower the value of your product (unless it's absolutely required, as with electronics perhaps); when there is an alternative, the one made with human input/intention is always worth more.
And the idea that small nimble teams are going to outpace larger corporations is such a psyop. You really mostly hear CEOs saying these things on podcasts. This is to appease the working class, to give them hope that they too one day can be a billionaire...
Also, the vast majority of people who occupy computer-I/O-focused jobs, whose jobs will be replaced, need to work to eat, and they don't all want to go form nimble automated SaaS companies lmao. This is such a farce. Bad things to come all around.
The question is to what extent there is a market for more stuff. If the cost of making software drops 10x we can still make 10x the software. There are projects which wouldn’t be done before that can now be done.
I know with respect to personal projects more projects are getting “funded” with my time. I’m able to get done in a couple of hours with coding agents what would’ve taken me a couple of weekends to finish if I stayed motivated to. The upshot is I’m able get much closer to “done” than before.
Let’s have some compassion, a lot of people are freaking out about their careers now and defense mechanisms are kicking in. It’s hard for a lot of people to say “actually yeah this thing can do most of my work now, and barrier of entry dropped to the ground”.
I am constantly seeing this thing do most of my work (which is good, actually; I don't enjoy typing code), but requiring my constant supervision and frequent intervention, and always trying to sneak in subtle bugs or weird architectural decisions that, I feel with every bone in my body, would bite me in the ass later. I see JS developers with little experience and zero CS or SWE education rave about how LLMs are so much better than us in every way, when the hardest thing they've ever written was bubble sort. I'm not even freaking out about my career, I'm freaking out about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on.
Have you ever thought about the fact that 2 years ago AI wasn't even good enough to write code? Now it's good enough.
Right now you state the current problem is: "requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions"
But in 2 years that could be gone too, given the objective and literal trendline. So I actually don't see how you can hold this opinion: "I'm not even freaking about my career, I'm freaking about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on." when all logic points away from it.
We need to be worried, LLMs are only getting better.
I agree with you on all of it.
But _what if_ they work out all of that in the next 2 years and it stops needing constant supervision and intervention? Then what?
It’s literally not possible. It has nothing to do with intelligence. A perfectly intelligent AI still can’t read minds. 1000 people give the same prompt and want 1000 different things. Of course it will need supervision and intervention.
We can synthesize answers to questions more easily, yes. We can make better use of extensive test suites, yes. We cannot give 1000 different correct answers to the same prompt. We cannot read minds.
Can you? Read minds, I mean.
If the answer is "yes"? Then, yeah, AI is not coming for you. We can make LLMs multimodal, teach them to listen to audio or view images, but we have no idea how to give them ESP modalities like mind reading.
If the answer is "no"? Then what makes you think that your inability to read minds beats that of an LLM?
This is kind of the root of the issue. Humans are mystical beings with invisible sensibilities. Many of our thoughts come from a spiritual plane, not from our own brains, and we are all connected in ways most of us don't fully understand. In short, yes I can read minds, and so can everybody else.
Today's LLMs are fundamentally the same as any other machine we've built and there is no reason to think it has mystical sensibilities.
We really need to start making a differentiation between "intelligence" and "relevance". The AI can be perfectly intelligent, but without input from humans, it has no connection to our Zeitgeist, no source material. Smart people can be stupid, too, which means they are intelligent but disconnected from society. They make smart but irrelevant decisions just like AI models always will.
AI is like an artificial brain, and a good one, but humans have more to our intelligence than brains. AI is just a brain and we are more.
If you have an AI that's the equivalent of a senior software developer you essentially have AGI. In that case the entire world will fundamentally change. I don't understand why people keep bringing up software development specifically as something that will be automated, ignoring the implications for all white collar work (and the world in general).
If We Build It We Will All Die
Yes, and look how far we've come in 4 years. If programming has another 4 years, that's all it has.
I'm just not sure who will end up employed. The near-term state is obviously Jira-driven development where agents just pick up tasks from Jira, etc. But will that mean the PMs go and we have a technical PM, or will we be the ones binned? Probably for most SMEs it'll just be maybe 1 PM and 2 or so technical PMs churning out tickets.
But whatever. It's the trajectory you should be looking at.
I'm all for this, but it's the delusion and denialism of people not wanting to face reality that gets me.
Like I have compassion, but I can't healthily respect people who try so hard to rewrite reality so that the future isn't so horrifying. I'm a SWE and I'm affected too, but it's not like I'm going to lie to myself about what's happening.
Yeah but you know what, this is a complete psyop.
They just want people to think the barrier of entry has dropped to the ground and that value of labour is getting squashed, so society writes a permission slip for them to completely depress wages and remove bargaining chips from the working class.
Don't fall for this; they want to destroy any labor that deals with computer I/O, not just SWE. This is the only value "agentic tooling" provides to society: slaves for the ruling class. They yearn for the opportunity to own slaves again.
It can't do most of your work, and you know that if you work on anything serious. But if the C-suite, who haven't dealt with code in two decades, think this is the case because everyone is running around saying it's true, they're going to make sure they replace humans with these bot slaves. They really do just want slaves; they have no intention of innovating with these slaves. People need to work to eat, and unless LLMs are creating new types of machines that need new types of jobs, like previous forms of automation, I don't see why they should be replacing the human input.
If these things are so good for business and are boosting software development velocity, why is everything falling apart? Why does the bulk of low-stakes software suck? Why is Windows 11 so bad? Why aren't top hedge funds and medical device manufacturers (places where software quality is high stakes) replacing all their labor? Where are the new industries? They don't do anything novel; they only serve to replace inputs previously supplied by humans so the ruling class can finally get back to the good old feeling of having slaves that can't complain.
"It's interesting to me that whenever some new result in AI use comes up, there's always a flood of people who come in to gesticulate wildly that the sky is falling and AGI is imminent. Like with the recent solutions GPT 5.2 has been able to find for Erdős problems, even though in almost all cases such solutions rely on poorly-known past publications, or significant expert user guidance and essential tools like Aristotle, which do non-AI formal verification - many users here (even in this very thread!) think they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, though these are not interesting proofs to most modern mathematicians, LLMs are a major factor in a tiny minority of these mostly-not-very-interesting proofs: https://github.com/teorth/erdosproblems/wiki/AI-contribution..."
The thing about spin and AI hype (besides being trivially easy to write) is that it isn't even trying to be objective. It would help if a lot of these articles would more carefully lay out what is actually surprising, and what is not, given current tech and knowledge.
Only a fool would think we aren't potentially on the verge of something truly revolutionary here. But only a fool would also be certain that the revolution has already happened, or that e.g. AGI is necessarily imminent.
The reason HN has value is because you can actually see some specifics of the matter discussed, and, if you are lucky, an expert even might join in to qualify everything. But pointing out "how interesting that there are extremes to this" is just engagement bait.
Can we not just say "this is pretty cool" and enjoy it rather than turning it into a fight?
>It's interesting to me that whenever some new result in AI use comes up, there's always a flood of people who come in to gesticulate wildly that the sky is falling and AGI is imminent.
Really? Is that happening in this thread because I can barely see it. Instead you have a bunch of asinine comments butthurt about acknowledging a GPT contribution that would have been acknowledged any day had a human done it.
>they know more about this than Fields medalist Terence Tao, who maintains this list showing that, yes, though these are not interesting proofs to most modern mathematicians, LLMs are a major factor in a tiny minority of these mostly-not-very-interesting proofs
This is part of the problem really. Your framing is disingenuous and I don't really understand why you feel the need to downplay it so. They are interesting proofs. They are documented for a reason. It's not cutting edge research, but it is LLMs contributing meaningfully to formal mathematics, something that was speculative just years ago.
> Your framing is weirdly disingenuous
I am not surprised that you can't understand that the quote I am making is obviously parodying the OP as disingenuous. Given our previous interactions (https://news.ycombinator.com/item?id=46938446), it is clear you don't understand much things about AI and/or LLMs, or, perhaps, basic communication, at all.
OP's original comment is something that is actually happening in a bunch of comments on this very thread, and yours...not even remotely. You certainly tried to paint it as disingenuous but it really just fell flat. I'm not surprised you failed to understand that though.
>Given our previous interactions (https://news.ycombinator.com/item?id=46938446), it is clear you don't understand much things about AI and/or LLMs at all.
Sure, Whatever makes you happy I guess.
> It's interesting to me that whenever a new breakthrough in AI use comes up, there's always a flood of people who come in to handwave away why this isn't actually a win for LLMs.
>> OP's original comment is something that is actually happening in a bunch of comments on this very thread
OP's original comment was obviously a general claim not tied to responses in this thread. As usual, you fail to understand even the basics of what you are talking about.
His comment was a general claim, but he made it here specifically because this thread was already full of examples proving his point. Shouldn't that be obvious?
>Only a fool would think we aren't potentially on the verge of something truly revolutionary here. But only a fool would also be certain that the revolution has already happened, or that e.g. AGI is necessarily imminent.
This sentence sounds contradictory. You're a fool to not think we're on the verge of something revolutionary and you are a fool if you think something revolutionary like AGI is on the verge of happening?
But to your point, if "revolutionary" and "AGI" are different things, I'm certain the "revolution" has already happened. ChatGPT was the step-function change, and everything else is just following the upward trendline since its release.
Anecdotally I would say 50% of developers never code things by hand anymore. That is revolutionary in itself, and by that definition it has already happened.
> It's interesting to me that whenever a new breakthrough in AI use comes up,
It's interesting to me that whenever AI gets a bunch of instructions from a reasonably bright person who has a suspicion about something, can point at reasons why, but not quite put their finger on it, we want to credit the AI for the insight.
If the AI were instead human, that human would almost certainly be cited as a co-author, contributor, or whatever.
Do you not see how this clearly is an advancement for the field, in that AI does deserve partial credit here in improving humanity’s understanding & innovative capabilities? Can you not mentally extrapolate the compounding of this effect & how AI is directly contributing to an acceleration of humanity’s knowledge acquisition?
Because most times results like this are overstated (see the Cursor browser thing, "moltbook", etc.). There is clear market incentive to overhype things.
And in this case "derives a new result in theoretical physics" is again overstating things, it's closer to "simplify and propose a more general form for a previously worked out sequence of amplitudes" which sounds less magical, and closer to something like what Mathematica could do, or an LLM-enhanced symbolic OEIS. Obviously still powerful and useful, but less hype-y.
> it's closer to "simplify and propose a more general form for a previously worked out sequence of amplitudes"
How is this different from a new result? Many a career in academia has been built on simplifying mathematics.
It is not only the peanut gallery that is skeptical:
https://www.math.columbia.edu/~woit/wordpress/?p=15362
Let's wait a couple of days to see whether there has been a similar result in the literature.
For the sake of clarity: Woit's post is not about the same alleged instance of GPT producing new work in theoretical physics, but about an earlier one from November 2025. Different author, different area of theoretical physics.
This thread is about "whenever a new breakthrough in AI use comes up", and the comment you reply to correctly points out skepticism for the general case and does not claim any relation to the current case.
You reached your goal though and got that comment downvoted.
My goal was to help other people not make the same mistake as I initially did, of thinking that Peter Woit had made some criticism of the latest claim of GPT-5.2 making a new discovery in theoretical physics, which in fact he appears not to have done.
If I'd wanted that comment downvoted, I would have downvoted it myself, which as it happens I didn't. There was nothing particularly wrong with it, other than the fact that it was phrased in a way that could mislead, hence my comment.
It's an obvious tension created by the title.
The reality is: "GPT 5.2 found a more general and scalable form of an equation, after crunching for 12 hours supervised by 4 experts in the field".
Which is equivalent to taking one of the countless niche algorithms out there and having a few experts in that algorithm let LLMs crunch tirelessly until they find a better formula, after those same experts prompted it in the right direction and with the right feedback.
Interesting? Sure. Speaks highly of AI? Yes.
Does it suggest that AI is revolutionizing theoretical physics on its own like the title does? Nope.
> GPT 5.2, after crunching mathematical formulas for 12 hours, supervised and prompted by 4 experts in the field
Yet, if some student or child achieved the same – under equal supervision – we would call him the next Einstein.
We would not call him at all because it would be one of the many millions that went through projects like this for their thesis as physics or math graduates.
One of my best friends, in her bachelor thesis, solved a difficult mathematical problem in planet orbits or something, and it was just yet another random day in academia.
And she didn't solve it because she was a genius but because there's a bazillions such problems out there and little time to look at them and focus. Science is huge.
True. If you stay in your domain for a very long time the people with you in that niche space are less and less and when you solve something that wasn't done before it's not necessarily a hard problem.
Still there's no reason to be less proud!
A lot of AI worship is midwits gazing in awe at the mediocre accomplishments of the 130+ IQ.
(Still sane to be scared for the future).
Yes and if a 1 year old could multiply 1357329 by 28384743, I'd be impressed and yet I still wouldn't be impressed by a calculator doing it.
Comment was deleted :(
I don't think it's about trying to handwave away the achievement. The problem is that many AI proponents, and especially companies producing the LLM tools constantly overstate the wins while downplaying the issues, and that leads to a (not always rational) counter-reaction from the other side.
It is especially glaring in this case because, when queried, it is clear that far too many of the most zealous proponents don't even understand the simplest basics of how these models actually work (e.g. tokenization, positional or other encoding schemes, linear algebra, pre-training, basic input/output shaping/dimensions, recursive application, training data sources, etc).
There are simple limitations that follow from these basic facts (or which follow with e.g. extreme but not 100% certainty), such that many experts openly state that e.g. LLMs have serious limitations, but, still, despite all this, you get some very extreme claims about capabilities, from supporters, that are extremely hard to reconcile with these basic and indisputable facts.
That, and the massive investment and financial incentives means that the counter-reaction is really quite rational (but still potentially unwarranted, in some/many practical cases).
The same crap happened with cryptocurrency: it was either aggressively pro or aggressively against, and everyone who could be heard was yelling as loud as they could so they didn't have to hear disagreement.
There is no loud, moderate voice. It makes me very tired of the blasting rhetoric that invades _every_ space.
https://simonwillison.net/ is a pretty loud and moderate voice in the community. Also active on Lobste.rs: https://lobste.rs/~simonw
But agree that there's an irrational level of tribalism on both sides.
I have no doubts about that.
What I question here is OpenAI's article: it could be way more generous towards the reader.
The discourse about AI is definitely the worst I've ever experienced in my life.
One group of people saying every amazing breakthrough "doesn't count" because the AI didn't put a cherry on top. Another group of people saying humans are obsolete, I just wrote a web browser with AI bro.
There are some voices out there that are actually examining the boundaries, possibilities and limitations. A lot of good stuff like that makes it onto HN but then if you open the comments it's just intellectual dregs. Very strange.
ISTR there was a similar phenomenon with cryptocurrency. But with that it was always clear the fog of bullshit would blow away sooner or later. But maybe if it hadn't been there, a load of really useful stuff could have come out of the crypto hype wave? Anyway, AI isn't gonna blow over like crypto did. I guess we have more of a runway to grow out of this infantile phase.
Clankists feel threatened. That's the gist of it.
Yeah it's pervasive. It's also delusional.
Take a look at this entire thread. Everyone, and I mean everyone, is talking as if AI is some sort of fraud and everything is just hype. This thread is all against AI, all of it. If anything, the anti-hype around AI is what's flooding the world right now. If AI hype were through the roof we'd see the opposite effect on HN.
I think it's a strange contradiction in the human mind. At work, outside of HN, what I see is that roughly 50-60% of developers no longer code by hand. They all use AI. Then they come onto HN and start anti-hyping it. It's universal. They use it and they're against it at the same time.
The contradiction is strange, but it also makes sense because AI is a thing that is attacking what programmers take pride in. Most programmers are so proud of their abilities and intelligence as it relates to their jobs and livelihood. AI is on a trendline of replacing this piece by piece. It makes perfect sense for them to talk shit but at the same time they have to use it to keep up with the competition.
Reminds me of the famous Upton Sinclair quote that it's difficult to get a man to understand something when his salary depends on his not understanding it.
It reminds me of an episode of Star Trek, "The Measure of a Man" I think it's called, where it is argued that Data is just a machine and Picard tries to prove that no he is a life form.
And the challenge is, how do you prove that?
Every time these LLMs get better, the goalposts move again.
It makes me wonder, if they ever did become sentient, how would they be treated?
It seems clear that they would be subject to skepticism and hatred far more pervasive and intense than anything imagined in The Next Generation.
Always moving targets.
They never surrender.
"They're moving the goalposts" is increasingly the autistic shrieking of someone with no serious argument or connection to reality whatsoever.
No one cares about how "AGI" or whatever the fuck term or internet-argument goalpost you cared about X months ago was. Everyone cares about what current tech can do NOW, and under what conditions, and when it fails catastrophically. That is all that matters.
So, refining the conditions of an LLM win (or loss) is all that matters (not who wins or loses depending on some particular / historical refinement). Complaining that some people see some recent result as a loss (or win) is just completely failing to understand the actual game being played / what really matters here.
"An internal scaffolded version of GPT‑5.2 then spent roughly 12 hours reasoning through the problem, coming up with the same formula and producing a formal proof of its validity."
Using GPT-5.2 Thinking Extended gave me the impression that it's consistent enough/has a low enough rate of errors (or enough error-correcting ability) to autonomously do math/physics for many hours if it were allowed to [but I guess Extended cuts off around the 30-minute mark, and Pro maybe 1-2 hours]. It's good to see some confirmation of that impression here. I hope scientists/mathematicians at large will soon be able to play with tools that think at this time-scale and see how much capability these machines really have.
Yes, and 5.3 and the latest codex CLI client are incredibly good across compactions. Anyone know the methodology they're using to maintain state and manage context for a 12-hour run? It could be as simple as a single dense document and its own internal compaction algorithm, I guess.
https://developers.openai.com/cookbook/articles/codex_exec_p... might be what you're looking for
after those 30 min you can manually ask it again to continue working on the problem
It's a bit unclear to me what happens if I do that after it thinks for 30 minutes and ends with no response. Does it pick up where it left off? Does it start from scratch? I don't know how the compaction of their prior thinking traces works.
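The "single dense document plus compaction" guess upthread can at least be sketched concretely. Everything below is hypothetical (the actual scaffold OpenAI used is not public), and `summarize` is a placeholder for what would really be an LLM summarization call:

```python
# Hypothetical sketch of a long-run agent loop that keeps state in one
# running document and compacts it whenever it outgrows a token budget.
MAX_TOKENS = 100  # toy budget; real systems use hundreds of thousands

def summarize(text: str, budget: int) -> str:
    """Stand-in for an LLM summarization call: keep the most recent part."""
    return text[-budget:]

def run_agent(steps, max_tokens=MAX_TOKENS):
    """Append each reasoning step to one document, compacting to half the
    budget whenever it overflows, so state survives an arbitrarily long run."""
    doc = ""
    for step in steps:
        doc += step + "\n"
        if len(doc) > max_tokens:
            doc = summarize(doc, max_tokens // 2)
    return doc

# After 1000 steps, the document still fits the budget and retains the
# most recent work.
state = run_agent(f"step {i}: partial result" for i in range(1000))
```

With this structure, "asking it to continue" would just mean resuming the loop from the current compacted document, which also suggests why earlier details can be lost.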
AI can be an amazing productivity multiplier for people who know what they're doing.
This result reminded me of the C compiler case that Anthropic posted recently. Sure, agents wrote the code for hours but there was a human there giving them directions, scoping the problem, finding the test suites needed for the agentic loops to actually work etc etc. In general making sure the output actually works and that it's a story worth sharing with others.
The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding. It works great for creating impressions and building brand value but also does a disservice to the actual researchers, engineers and humans in general, who do the hard work of problem formulation, validation and at the end, solving the problem using another tool in their toolbox.
>AI can be an amazing productivity multiplier for people who know what they're doing.
>[...]
>The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
You're sort of acting like it's all or nothing. What about the humans that used to be that "force multiplier" on a team with the person guiding the research?
If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.
For a more current example: do you think all the displaced Uber/Lyft drivers aren't going to think "AI took my job" just because there's a team of people in a building somewhere handling the occasional Waymo low confidence intervention, as opposed to being 100% autonomous?
> If a piece of software required a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.
Yes, but this assumes a finite amount of software that people and businesses need and want. Will AI be the first productivity increase where humanity says ‘now we have enough’? I’m skeptical.
> Yes, but this assumes a finite amount of software that people and businesses need and want.
A lot of software exists because humans are needy and kinda incompetent, but we needed to enable them to process data at scale. Like, would you build SAP as it is today for LLMs?
There's a 90% job loss only assuming this is a zero-sum situation where humans and agents compete for a fixed amount of work.
I'm curious why you think I'm acting like it's all or nothing. What I was trying to communicate is the exact opposite, that it's not all or nothing. Maybe it's the way I articulate things, I'm genuinely interested what makes it sound like this.
Fully agree with your og comment and I didn’t get the same read as the person above at all.
This is a bizarre time to be living in, on one hand these tools are capable of doing more and more of the tasks any knowledge worker today handles, especially when used by an experienced person in X field.
On the other, it feels like something is about to give. All the superbowl ads, AI in what feels like every single piece of copy coming out these days. AI CEOs hopping from one podcast to another warning about the upcoming career apocalypse…I’m not fully buying it.
The optimistic case is that instead of a team of 10 people working on one project, you could have those 10 people using AI assistants to work on 10 independent projects.
That, of course, assumes that there are 9 other projects that are both known (or knowable) and worth doing. And in the case of Uber/Lyft drivers, there's a skillset mismatch between the "deprecated" jobs and their replacements.
Well those Uber drivers are usually pretty quick to note that Uber is not their job, just a side hustle. It's too bad I won't know what they think by then since we won't be interacting any more.
Where I work, we're now building things that were completely out of reach before. The 90% job loss prediction would only hold true if we were near the ceiling of what software can do, but we're probably very, very far from it.
A website that cost hundreds of thousands of dollars in 2000 could be replaced by a wordpress blog built in an afternoon by a teenager in 2015. Did that kill web development? No, it just expanded what was worth building
This is all inevitable with the trajectory of technology, and has been apparent for a long time. The issue isn't AI, it's that our leaders haven't bothered to think or care about what happens to us when our labor loses value en masse due to such advances.
Maybe it requires fundamentally changing our economic systems? Who knows what the solution is, but the problem is most definitely rooted in a lack of initiative by our representatives and an economic system that doesn't accommodate us when shit inevitably hits the fan with labor markets.
> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.
But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.
The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.
When the systems become trivial to manage with the new tooling, humans build more complex systems or add more layers on top of the existing ones.
The logical reason is that humans are exceptionally good at operating at the edge of what the technology of the time can do. We will find entire classes of tech problems which AI can't solve on its own. You have people today with job descriptions that even 15 years ago would have been unimaginable, much less predictable.
To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
> To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
AGI means fully general, meaning everything the human brain can do and more. I agree that currently it still feels far (at least it may be far), but there is no reason to think there's some magic human ingredient that will keep us perpetually in the loop. I would say that is delusional.
We used to think there was human-specific magic in chess, in poker, in Go, in code, and in writing. All those have fallen, the latter two albeit only in part but even that part was once thought to be the exclusive domain of humans.
I'm not sure you can call something an optimizing C compiler if it doesn't optimize or enforce C semantics (well, it compiles C but also a lot of things that aren't syntactically valid C). It seemed to generate a lot of code (wow!) that wasn't well-integrated and didn't do what it promised to, and the human didn't have the requisite expertise to understand that. I'm not a theoretical physicist but I will hold to my skepticism here, for similar reasons.
Sure, I won't argue this point, although it did deliver the marketing value they were looking for; in the end, their goal was not to replace gcc but to make people talk about AI and Anthropic.
What I said in my original comment is that AI delivers when it's used by experts. In this case the person driving it was definitely not a C compiler expert; what would happen if a real expert did this?
Deliver what exactly? False hope and lies?
Actually, the results were far worse and way less impressive than what the media said.
the c compiler results or the physics results this post is about?
The C compiler.
Of course the results were much worse than what was communicated in the media; it was content marketing, not an attempt to build a better C compiler.
His point is going to be some copium like: since the C compiler is not as optimized as gcc, it was not impressive.
You probably don’t know what you’re talking about.
Why wasn't the C compiler it made impressive to you?
Like everything genAI, it was amazing yet surprisingly crappy.
Yes, the bear is definitely dancing.
But a few feet away there's a world-class step dancer doing intricate rhythms they've perfected over twenty years of hard work.
The bear's kind of shuffling along to the beat like a stoner in a club.
It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.
OK, but don't you see where this is going? The trajectory that we're on?
>It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.
No one has made that assertion; however, the fact that it can create a functioning C compiler with minimal oversight is the impressive part, and it shows a path to autonomous GenAI use in software development.
It didn’t work without gcc and it was significantly worse than gcc with gcc optimizations disabled.
I found this was the least impressive bit about it https://github.com/anthropics/claudes-c-compiler/issues/1
>I found this was the least impressive bit about it https://github.com/anthropics/claudes-c-compiler/issues/1
So, I just skimmed the discussion thread, but I am not seeing how this shows that CCC is not impressive. Is the point you're making that the person who opened the issue is not impressive?
AI is indeed an amazing productivity multiplier! Sadly that multiplier is in the range [0; 1).
Comment was deleted :(
> for people who know what they're doing.
I worry we're not producing as many of those as we used to
We will be producing them even less. I fear for the future graduates, hell even for school children, who are now uncontrollably using ChatGPT for their homework. Next level brainrot
Every time I see an RL startup, a data startup, or even a startup focused on a specific vertical, I think this exact same thing about LLMs.
Right. If it hadn't been Nicholas Carlini driving Claude, with his decades of experience, there wouldn't be a Claude c compiler. It still required his expertise and knowledge for it to get there.
Comment was deleted :(
It would be more accurate to say that humans using GPT-5.2 derived a new result in theoretical physics (or, if you're being generous, humans and GPT-5.2 together derived a new result). The title makes it sound like GPT-5.2 produced a complete or near-complete paper on its own, but what it actually did was take human-derived datapoints, conjecture a generalization, then prove that generalization. Having scanned the paper, this seems to be a significant enough contribution to warrant a legitimate author credit, but I still think the title on its own is an exaggeration.
Would you be similarly pedantic if a high-schooler did the same?
Yes. Someone making one contribution among many to a paper clearly does not deserve anything like sole authorship credit of the entire paper, which is what the title from OpenAI implies to me. I don't believe I'm being pedantic at all. And, by the way, high schoolers or college students make co-author-level contributions to real papers quite frequently in the US at least (I was one of them).
The text of the post is much more honest. The title is where the dishonesty is.
Hi, I'm an author on the paper. It was definitely a human-AI collaboration, but it is also true that the final simplified formula, Eq. 39 in the paper (which is what we had been seeking, without success), was conjectured and proved by GPT. So it derived a new result in theoretical physics. I'm genuinely puzzled by your complaint.
I'm surprised to see that the valence of comments here is mostly negative. Nima Arkani-Hamed is one of the top living physicists, and he has nice things to say about the work. The fact that researchers can increasingly use these models to (help) find new results is a big deal, even considering the caveats.
They also claimed ChatGPT solved novel Erdős problems when that wasn't the case. I'll take this with a grain of salt until more external validation happens. But very cool if true!
Well, they (OpenAI) never made such a claim. And yes, LLMs have made novel solutions/contributions to a few Erdős problems.
How was that not the case? As far as I understand it ChatGPT was instrumental to solving a problem. Even if it did not entirely solve it by itself, the combination with other tools such as Lean is still very impressive, no?
It didn't solve it, it simply found that it had been solved in a publication and that the list of open problems wasn't updated.
My understanding is that around 10 Erdős problems have been solved by GPT by now. Most of those solutions have been found to already be in the literature, or a very similar problem was solved in the literature. But one or two solutions are quite novel.
https://github.com/teorth/erdosproblems/wiki/AI-contribution... may be useful
I am not aware of any unsolved Erdos problem that was solved via an LLM. I am aware of LLMs contributing to variations on known proofs of previously solved Erdos problems. But the issue with having an LLM combine existing solutions or modify existing published solutions is that the previous solutions are in the training data of the LLM, and in general there are many options to make variations on known proofs. Most proofs go through many iterations and simplifications over time, most of which are not sufficiently novel to even warrant publication. The proof you read in a textbook is likely a highly revised and simplified proof of what was first published.
If I'm wrong, please let me know which previously unsolved problem was solved, I would be genuinely curious to see an example of that.
It's in the link above, but you can look at #1051 or #851 on the erdosproblems website.
The erdosproblems website shows 851 was proved in 1934. https://www.erdosproblems.com/851
I guess 1051 qualifies - from the paper: "Semi-autonomous mathematical discovery with gemini" https://arxiv.org/pdf/2601.22401
"We tentatively believe Aletheia’s solution to Erdős-1051 represents an early example of an AI system autonomously resolving a slightly non-trivial open Erdős problem of somewhat broader (mild) mathematical interest, for which there exists past literature on closely-related problems [KN16], but none fully resolves Erdős-1051. Moreover, it does not appear to us that Aletheia’s solution is directly inspired by any previous human argument (unlike in many previously discussed cases), but it does appear to involve a classical idea of moving to the series tail and applying Mahler’s criterion. The solution to Erdős-1051 was generalized further, in a collaborative effort by Aletheia together with human mathematicians and Gemini Deep Think, to produce the research paper [BKK+26]."
"The erdosproblems website shows 851 was proved in 1934." I disagree with this characterization of the Erdos problem. The statement proven in 1934 was weaker. As evidence for this, you can see that Erdos posed this problem after 1934.
Some of these were initially hyped as novel solutions, and then were quietly downgraded after it was discovered the solutions weren’t actually novel.
Yeah that was also my take-away when I was following the developments on it. But then again I don't follow it very closely so _maybe_ some novel solutions are discovered. But given how LLMs work, I'm skeptical about that.
...am I wrong in thinking that 1(a) is the relevant section here, and shows a lot of red?
I honestly don't see the point of the red data points. By now all the Erdős problems have been attempted by AIs, so every unsolved one can be a red data point.
The post's author points that out as well
Comment was deleted :(
Wasnt that like some marketing bro? This is coming out the front door with serious physicists attached.
I'm not sure where people think humans are getting these magical leaps of insight that transcend combinations of existing things. Magic? Ghost in the machine? The simplest explanation is that "leaps of insight" are simply novel combinations that demonstrate themselves to have some utility within the boundaries of a test case or objective.
Snow + stick + need to clean driveway = snow shovel. Snow shovel + hill + desire for fun = sled
At one point people were arguing that you could never get "true art" from linear programs. Now you get true art and people are arguing you can't get magical flashes of insight. The will to defend human intelligence / creativity is strong but the evidence is weak.
Some people defend it because they are nondualists. They think the moral value of human life rounds to zero against the existence of something which can effortlessly outclass them in all domains. This is obviously confused, but they can't bring themselves to say "Very cool, and also I think humans are inherently special and deserve to continue existing even if all we do is lie around all day and watch the Hallmark channel."
Happy Valentine's day to those who celebrate btw <3
Many innovations are built off cross-pollination between domains, and I think we are not far off from a loop where multiple agents, each grounded very well in a specific domain, find intersections and optimizations by communicating with each other, especially if they can run for 12+ hours. The truth is that 99% of attempts at innovation will fail, but the 1% can yield something fantastic; the more attempts we can take, the faster progress will happen.
I find it hard not to agree with this line of thinking (albeit it will be less than 1%).
Physicist here. Did you guys actually read the paper? Am I missing something? The "key" AI-conjectured formula (39) is an obvious generalization of (35)-(38), and something a human would have guessed immediately.
(35)-(38) are the AI-simplified versions of (29)-(32). Those earlier formulae look formidable to simplify by hand, but they are also the sort of thing you'd try to use a computer algebra system for.
I'm willing to (begrudgingly) admit the possibility for AI to do novel work, but this particular result does not seem very impressive.
I picture ChatGPT as the rich kid whose parents privately donated to a lab to get their name on a paper for college admissions. In this case, I don't think I'm being too cynical in thinking that something similar is happening here and that the role of AI in this result is being well overplayed.
Also a physicist here -- I had the same reaction. Going from (35-38) to (39) doesn't look like much of a leap for a human. They say (35-38) was obtained from the full result by the LLM, but if the authors derived the full expression in (29-32) themselves presumably they could do the special case too? (given it's much simpler). The more I read the post and preprint the less clear it is which parts the LLM did.
lol.
assuming you are truthful, good to see someone here from the actual domain in question.
Random anonymous HN driveby claiming something that'd be horrible PR, or the coauthors on the GPT-5.2 paper... and the belief OpenAI isn't aggressively stupid, especially after earlier negative press... gotta say, going with the coauthors, after seeing their credentials.
I think you're misunderstanding my claim. There's no scandal here, just run-of-the-mill academic politicking. I fully believe that ChatGPT did the work they say it did, but that it deserves about as much credit as Mathematica does in "deriving a new result".
No, because you can’t use mathematica to do this. You have been walking down a slippery slope for a couple years now, your choice when to exit. Sucks I gotta eat downvotes for it, but so it goes.
Do you have any more substance behind your arguments? Feel free to open up the preprint and read it -- it doesn't bite.
You're getting short with me, so first I'll reframe in a way that you can gain instead of scrap: I'll pay you $1000 if by 11:30 PM PST, you have de novo derived the N->infinity formulation using solely Mathematica and the initial problem.
I also have a physics background, and separately, have derived novel results in color science using Mathematica that led to a great effect on my career.
I wouldn't wish having to do that on anyone, it was awful work.
Independently of it being awful, I know it's extremely, extremely unlikely to luck into a result this complex, both in my opinion and in apparent reality: if someone could have done it before, why didn't they?
If it is that trivial, you'll prove it, make some money, and I'll understand that it really was that trivial, and we'll get some headlines out of it. Win-win, modulo I'll look like an ass.
If it isn't that trivial, you won't do it, and no one will notice this far down a thread. But you seem thoughtful, you'll likely grapple with the gulf between your flippant response and reality, and gain some insight. Win for you either way, in that case.
Of all particle physics concepts, I would be less interested in scattering amplitudes as a test case, because they have one of the most concise definitions and their solution is straightforward (not easy, of course). Once you have a good grasp of QM and scattering, it is a matter of applying your knowledge of math to solve the problem. Usually the real problem is to define your parameters from your model and set up the tree-level calculations. For an LLM to solve those would be impressive, but here the researchers defined everything and came up with the workflow.
So I would read this (with more information available) with less emphasis on the LLM discovering a new result. The title is a little bit misleading, but "derives" is the operative word here, so it is technically correct for people in the field.
Such tedious derivations used to be the work of poor PhD students who were instrumentalized for such tasks. I envy those who do PhDs in theoretical physics in the age of AI; people can learn so much about their field quicker via chat than by reading abstruse papers.
Humans did the actual work: framing the problem, computing base cases, verifying results. GPT just refactored a formula. That's a compiler's job, not a physicist's. Stop letting marketing write science headlines.
Misleading title, it's more like GPT-5.2 derives the generalization of a formula that physicists conjectured. Not really related to physics
I'm far from being an LLM enthusiast, but this is probably the right use case for this technology: conjectures which are hard to find, but whose proofs can then be checked with automated theorem provers. Isn't that what AlphaProof does, by the way?
The preprint: https://arxiv.org/abs/2602.12176
I have a weird long-shot idea for GPT to make a new discovery in physics: Ask it to find a mathematical relationship between some combination of the fundamental physical constants[1]. If it finds (for example) a formula that relates electron mass, Bohr radius, and speed of light to a high degree of precision, that might indicate an area of physics to explore further if those constants were thought to be independent.
[1] https://en.wikipedia.org/wiki/List_of_physical_constants
My dream is that powerful agents trawl through all the research papers looking for diamonds in the rough.
They evaluate papers that look interesting and should be looked at more deeply. Then, research ideas as much as they can.
Then flag for human review the real possible breakthroughs.
They literally cannot do this; they are not that much different from the autocomplete that was in your email 10 years ago, with some transformer NN magic. Stop believing the hype.
Why not?
The Bohr radius is the result of a simple classical physics calculation (a common exercise for undergraduates in their first year). It depends only on the electron mass and the fine structure constant which is the strength of the electromagnetic interaction. In the SI system, the speed of light has a fixed value which defines the unit of length.
There are known mathematical relationships between almost all fundamental physical constants. In particular, in your example, the Bohr radius is calculated from the electron mass and the speed of light in vacuum... I don't think this path is as promising as it sounds.
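The dependence the two comments above describe can be checked numerically: the Bohr radius follows from ħ, the electron mass, c, and the fine-structure constant, so it is not an independent constant. A minimal sketch using CODATA 2018 values (the tolerance is my choice):

```python
# Numerical check that the Bohr radius a0 = hbar / (m_e * c * alpha),
# i.e. it carries no information beyond the other constants.
hbar = 1.054571817e-34      # J*s, reduced Planck constant
m_e = 9.1093837015e-31      # kg, electron mass
c = 299792458.0             # m/s, exact in SI
alpha = 7.2973525693e-3     # fine-structure constant, dimensionless

a0 = hbar / (m_e * c * alpha)      # derived Bohr radius, in metres
a0_codata = 5.29177210903e-11      # independently tabulated Bohr radius

assert abs(a0 - a0_codata) / a0_codata < 1e-8
```

A naive search for "relationships between constants" would mostly rediscover identities like this one, which is the parent's point.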
"Please derive and unify all of quantum mechanics and general relativity starting only with the Fine Structure Constant."
;)
GPT-5.2 can't even process a 1-2 page PDF and give me a subset of the content as a formatted Word doc. Nor can it even be truthful about its own capabilities.
An internal scaffolded version of GPT‑5.2...
Any reason to believe that public versions of GPT-5.2 could have accomplished this task? "scaffolded" is a very interesting word choice
Does the article have a strong marketing vibe? Absolutely. Does the research performed move the needle, however small, in theoretical physics? Yes. Could we have expected this to happen a year ago? Not really.
My personal opinion is that things will only accelerate from here.
Regardless of whether this means AGI has been achieved or not, I think this is really exciting since we could theoretically have agents look through papers and work on finding simpler solutions. The complexity of math is dizzying, so I think anything that can be done to simplify it would be amazing (I think of this essay[1]), especially if it frees up mathematicians' time to focus even more on the state of the art.
Can't help not thinking of https://en.wikipedia.org/wiki/Bogdanov_affair
Man, I'd be more worried about the impact of this on Mathematica than actual humans.
Mathematica guarantees correctness. It should be safe for a while.
Tell that to the various confirmed computational bugs in mathematica :)
I do wonder if throwing a similar amount of computational power behind old school rule based algorithms like the ones in Mathematica's FullSimplify would have yielded similar results.
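The rule-based approach that comment refers to is easy to sketch. This toy rewriter (representation and rule set entirely made up here; a real CAS like FullSimplify searches vastly larger rule sets and candidate forms) folds constants and applies algebraic identities bottom-up:

```python
# Toy rule-based simplifier: expressions are nested (op, left, right)
# tuples, and a fixed rule set is applied bottom-up. No learning involved.
def simplify(e):
    """Apply algebraic identity rules and constant folding, bottom-up."""
    if not isinstance(e, tuple):
        return e  # a leaf: a variable name or a number
    op, a, b = e[0], simplify(e[1]), simplify(e[2])
    # identity rules
    if op == "+" and b == 0: return a
    if op == "+" and a == 0: return b
    if op == "*" and b == 1: return a
    if op == "*" and a == 1: return b
    if op == "*" and (a == 0 or b == 0): return 0
    # constant folding
    if isinstance(a, int) and isinstance(b, int):
        return a + b if op == "+" else a * b
    return (op, a, b)

# ((x * 1) + (0 * y)) + (2 * 3)  simplifies to  x + 6
expr = ("+", ("+", ("*", "x", 1), ("*", 0, "y")), ("*", 2, 3))
print(simplify(expr))  # ('+', 'x', 6)
```

The hard part, and presumably where the compute would go, is not applying rules but searching the superexponential space of candidate rewrites for a genuinely shorter form.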
That's great. I think we need to start researching how to get cheaper models to do math. I have a hunch it should be possible to get leaner models to achieve these results with the right sort of reinforcement learning.
Deepseek wrote a decent paper on this https://github.com/deepseek-ai/DeepSeek-Math-V2/blob/main/De...
This is very impressive. But scrolling through the preprint, I wouldn't call any of it elegant.
I'm not blaming the model here, but Python is much easier to read and more universal than math notation in most cases (especially for whatever's going on at the bottom of page four). I guess I'll have one translate the PDF.
Car manufacturers need to step up their hype game...
New Honda Civic discovered Pacific Ocean!
New F150 discovers Utah Salt Flats!
Sure it took humans engineering and operating our machines, but the car is the real contributor here!
So wait, GPT found a formula that humans couldn't, then the humans proved it was right? That's either terrifying or the model just got lucky. Probably the latter.
> found a formula that humans couldn't
Couldn't is an immensely high bar in this context, didn't seems more appropriate and renders this whole thing slightly less exciting.
I'd say "couldn't in 20 hours" might be more defensible. Depends on how many humans though. "couldn't in 20 GPT watt-hours" would give us like 2,000 humans or so.
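To make the comparison concrete, here is a back-of-envelope version of that arithmetic. All inputs are rough assumptions: a human brain runs on roughly 20 W, the run took 12 hours per the article, and the inference hardware draw is a pure guess.

```python
brain_power_w = 20       # assumed: ~20 W per human brain
run_hours = 12           # from the article: 12-hour GPT Pro run
cluster_power_w = 5_000  # assumed: inference hardware draw, order of magnitude only

gpt_energy_wh = run_hours * cluster_power_w             # total energy of the run
human_equiv = gpt_energy_wh / (brain_power_w * run_hours)  # humans thinking for the same 12 h
print(f"~{human_equiv:.0f} humans thinking for the same 12 hours")
```

The answer swings by an order of magnitude depending on the assumed cluster draw, which is exactly the ambiguity in the "2,000 humans" figure above.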
If only humans scaled like that
I think for something like this they might be pretty scalable actually - give 2000 grad students the problem and coffee and I bet they’d go massively parallel and self organize into useful sizes and build some knowledge sharing.
Cynically, I wonder if this was released at this time to ward off criticism over the failure of LLMs to solve the 1stproof problems.
Wonderful. Where's my money
I'll read the article in a second, but let me guess ahead of time: Induction.
Okay, read it: yep, induction. It already had the answer.
Don't get me wrong, I love Induction... but we aren't having any revolutions in understanding with Induction.
It's frustrating, because if it were actually something new (as in original) then we could start talking about AGI, but it's never something new.
Well, anyone can derive a new result in anything. Question is most often if the result makes any sense
I’m able to recover Schwarzschild using only known constants starting with hydrogen, using a sort of calculator I made along these lines. No Schrödinger. There’s a lot there, so I'm working on what to publish.
Interesting considering the Twitter froth recently about AI being incapable in principle of discovering anything.
Anything but recent.
All I saw was gravitons and thought we’re finally here the singularity has begun
But what does it all mean, Basil?
Warp drive next.
Even if GPT's results are debatable and we sometimes dislike misapplications of AI where it's not needed, it feels as though another milestone is being reached. The first was when they were initially released and everyone was amazed. This second milestone seems to be that their competence has increased. I am often amazed at their output despite being a huge skeptic. I guess the fine-tuning is coming along well, but I still don't think we will see AGI from these chatbots, and I doubt there's a third milestone. The second was just a refinement of the first.
5.2 is the best model on the market.
I guess the important question is, is this enough news to sustain OpenAI long enough for their IPO?
Well it’ll be at least a whole month before some other company announces similar capability. The moat will hold!
I believe Gemini holds the moat now.
I like the use of the word "derives". However, it gets outshined by "new result" in public eyes.
I expect lots of derivations (new discoveries whose pieces were already in place somewhere, but no one has put them together).
In this case, the human authors did the thinking and also used the LLM, but this could happen without the original human author too (someone posts a partial result on the internet, no one realizes it's novel knowledge, and it gets reused by an AI later). It would be tremendously nice if credit were kept in such scenarios.
"Let's put 'GPT' in our paper to get clicks!"?
End times approach..
I'll believe it when someone other than OpenAI says it.
Not saying they're lying, but I'm sure it's exaggerated in their own report.
sToChAsTiC pArRoTs CaNt PrOdUcE aNyTHiNg NeW!!!!1
Don't lend much credence to a preprint. I'm not insinuating fraud, but plenty of preprints turn out to be "Actually you have a math error here", or are retracted entirely.
Theoretical physics is throwing a lot of stuff at the wall and theory crafting to find anything that might stick a little. Generation might actually be good there, even generation that is "just" recombining existing ideas.
I trust physicists and mathematicians to mostly use tools because they provide benefit, rather than because they are in vogue. I assume they were approached by OpenAI for this, but glad they found a way to benefit from it. Physicists have a lot of experience teasing useful results out of probabilistic and half broken math machines.
If LLMs end up being solely tools for exploring some symbolic math, that's a real benefit. Wish it didn't involve destroying all progress on climate change, platforming truly evil people, destroying our economy, exploiting already disadvantaged artists, destroying OSS communities, enabling yet another order of magnitude increase in spam profitability, destroying the personal computer market, stealing all our data, sucking the oxygen out of investing in real industry, and bald-faced lies to all people about how these systems work.
Also, last I checked, MATLAB wasn't a trillion dollar business.
Interestingly, the OpenAI wrangler is last in the author list and acknowledgements. That somewhat implies the physicists don't think it deserves much credit. They could be biased against LLMs, like me.
When Victor Ninov (fraudulently) analyzed his team's accelerator data using an existing software suite to find a novel superheavy element, he got first billing on the author list. Probably he contributed to the theory and some practical work, but he alone was literate in the GOOSY data tool. Author lists are often a political game as well as credit, but Victor got top billing above people like his bosses, who were famous names. The guy who actually came up with the idea of how to create the element, in an innovative recipe that a lot of people doubted, was credited eighth.
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.83...
If a researcher uses an LLM to get a novel result, should the LLM also reap the rewards? Could a Nobel Prize ever be given to an LLM, or is that like giving a Nobel to a calculator?
This is my favorite field for me to have opinions about without having any training or skill. Fundamental research is just something I enjoy thinking about, even though I am a psychologist. I try to pull in my experience from the clinic and clinical research when I read theoretical physics. Don't take this text too seriously; it's just my attempt at understanding what's going on.
I am generally very skeptical about work at this level of abstraction. The result emerges only after choosing Klein signature instead of physical spacetime, complexifying momenta, restricting to a "half-collinear" regime that doesn't exist in our universe, and picking a specific kinematic sub-region. Then they check the result against internal consistency conditions of the same mathematical system. This pattern should worry anyone familiar with the replication crisis. The conditions this field operates under are a near-perfect match for what psychology has identified as maximising systematic overconfidence: extreme researcher degrees of freedom (choose your signature, regime, helicity, ordering until something simplifies), no external feedback loop (the specific regimes studied have no experimental counterpart), survivorship bias (ugly results don't get published, so the field builds a narrative of "hidden simplicity" from the survivors), and tiny expert communities where fewer than a dozen people worldwide can fully verify any given result.
The standard defence is that the underlying theory — Yang-Mills / QCD — is experimentally verified to extraordinary precision. True. But the leap from "this theory matches collider data" to "therefore this formula in an unphysical signature reveals deep truth about nature" has several unsupported steps that the field tends to hand-wave past.
Compare to evolution: fossils, genetics, biogeography, embryology, molecular clocks, observed speciation — independent lines of evidence from different fields, different centuries, different methods, all converging. That's what robust external validation looks like. "Our formula satisfies the soft theorem" is not that.
This isn't a claim that the math is wrong. It's a claim that the epistemic conditions are exactly the ones where humans fool themselves most reliably, and that the field's confidence in the physical significance of these results outstrips the available evidence.
I wrote up a more detailed critique on Substack: https://jonnordland.substack.com/p/the-psychologists-case-ag...
I talked about basic principles of QM, gravity, time, and relativity with Claude, then talked about implications of that, and Claude came up with the idea that mass causes time and gravity as emergent properties that only affect macro-scale objects; QM particles do not have to obey either of them, and this explains the double-slit experiment, the delayed-choice experiment, "spooky action at a distance", and other aspects of entanglement.
Basically, if you are small enough you can move forwards and backwards in time, from the moment you were put into a superposition, or entangled, until you interact with an object too large to ignore the emergent effects of time and gravity. This is 'being observed' and 'collapsing the wave function'. You occupy all possible positions in space as defined by the probability of you being there. Once observed, you move forward in linear time again and the last route you took is the only one you ever took even though that route could be affected by interference with other routes you took that now no longer exist. When in this state there is no 'before' or 'after' so the delayed choice experiment is simply an illusion caused by our view of time, and there is no delay, the choice and result all happen together.
With entanglement, both particles return to the entanglement point, swap places and then move to the current moment and back again, over and over. They obey GR, information always travels under the speed of light (which to the photon is infinite anyway), so there is no spooky action at a distance, it is sub-lightspeed action through time that has the illusion of being instant to entities stuck in linear time.
It then went on to talk about how mass creates time, and how time is just a different interpretation of gravity leading it to fully explain how a black hole switches time and space, and inwards becomes forwards in time inside the event horizon. Mass warps 4D (or more) space. That is gravity, and it is also time.