The pilot analogy hits different when you consider that pilots still train on simulators for exactly this reason — they're legally required to maintain proficiency even when autopilot handles 99% of flights.
There's no equivalent mandate for software engineers. Nothing stops you from spending years as a pure "prompt pilot" and losing the ability to read a stack trace or reason about algorithmic complexity. The atrophy is silent and gradual.
The author's suggestion to write code by hand as an educational exercise is right but will be ignored by most, because the feedback loop for skill atrophy is so delayed. You won't notice you've lost the skill until you're debugging something the agent made a mess of, under pressure, with no fallback.
The term "Children of the Magenta Line" has long been used in aviation to describe the over-reliance on automation. So even though they train to avoid losing manual skills, it's definitely still a concern.
We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM, how will you ever learn anything?
My biggest lessons were from hours of pain and toil, scouring the internet. When I finally found the solution, the dopamine hit ensured that lesson was burned into my neurons. There is no such dopamine hit with LLMs. You vaguely try to understand what it’s been doing for the last five minutes and try to steer it back on course. There is no strife.
I’m only 24 and I think my career would be on a very different path if the LLMs of today were available just five years ago.
Ok imagine you went back 30 years and you had a swarm of experts around you who you could ask anything you wanted and they would even do the work for you if you wanted.
Does this mean you'd be incapable of learning anything? Or could you possibly learn way more, because you had the innate desire to learn and understand along with the best tool possible to do it?
It's the same thing here. How you use LLMs is all up to your mindset. Thoroughly review and ask questions about what it did, or why; ask if it could have been done some other way instead. Hell, ask it just the questions you need and do the rest yourself, or don't use it at all. I was working on C++, for example, with heavy use of mutexes, shared and weak pointers, which I hadn't done before. The LLM fixed a race condition, and I got to ask it precisely what the issue was, and to draw a diagram showing what was happening in that exact scenario before and after.
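A minimal sketch of the kind of race being described here (illustrative only, not the commenter's actual code): two threads increment a shared counter, and because `++counter` is a read-modify-write, updates get lost unless a `std::mutex` serializes them.

```cpp
#include <mutex>
#include <thread>

// Two threads each increment a shared counter 100k times. Without the
// lock, ++counter is a non-atomic read-modify-write and increments can
// be lost; the mutex makes the final count deterministic.
long raceFreeCount() {
    long counter = 0;
    std::mutex m;

    auto work = [&] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m); // drop this line to reintroduce the race
            ++counter;
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter;
}
```

The fix is exactly the sort of thing worth interrogating the LLM about: why the unlocked version only fails intermittently, and what the interleaving looks like when it does.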
I feel like I'm learning more because I am doing way more high-level things now, and spending way less time on the stuff I already know or don't care to know (non-fundamentals, like syntax and even libraries/frameworks). For example, I don't really give a fuck about being an expert in Spring Security. I care about how authentication works as a principle, what methods would be best for what, etc., but do I want to spend 3 hours trying to debug the nuances of configuring the Spring Security library for a small project I don't care about?
> Does this mean you'd be incapable of learning anything?
Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default; it has to be learned just like anything else. This sort of environment would make you an idiot if it's all you've ever known.
You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
I agree with your premise, but this example I strongly disagree with:
> You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
YES! Explain to them, and trust them. They might not do exactly as you wish for them, but I'll bet you don't do exactly as you wish for yourself either. The children need your trust and they must learn how to navigate this world by themselves, with parents providing guidance and only taking the hard stance (but still explaining and discussing!) when safety is concerned. Also, lead by example. If you eat vegetables then children are likely to eat them too. The children are not stupid, they just don't have enough experience yet. Which you gain by trying (and failing), not by listening.
You're right, it was a bad example. I also don't eat my vegetables. I was more trying to make the point that most of us are not rational actors either, was just using children as a convenient proxy, unfairly.
I see it as being more about personality/interest than impulse control. A curious, interested person would try to get involved and be a part of it; someone uninterested will just say "what's the point" and get by having the work done for them.
It may very well have stunted my learning. What’s the point of absorbing information when you have a consortium of experts available 24/7?
Saying that it comes down to how you use the LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skill sets today starting from zero?
Supposedly because AI has limits and you still have to know what you're doing so you can guide it and do it better.
If that's not true, then what's the problem with not learning the material? Go do something more productive with your time if personal curiosity isn't a good enough reason. We're in a whole new world.
>Saying that it comes down to how you use the LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skill sets today starting from zero?
This is true, and I can't answer that 100% confidently. I imagine I would just be doing more, and more complicated, things and learning higher-level concepts. For example, if right off the bat I could produce a web app, I'd want to deploy it somewhere. So I'd come across things like ssh, nginx, port forwarding, jars, bundles, DNS, authentication, etc. Do this 1000 times, just the way I wrote 1000 different little functions or programs by hand, and you'll no shit absorb a little here and there as issues come up. Or maybe, if what was hard a year ago is easy today, I'd want to do something far more complex than anything anyone's been able to imagine before, and learn in that struggle.
Programmers in the 90s were far more adept at understanding CPU registers, memory, and all sorts of low-level stuff. Then the abstraction moved up the stack, and then again and again. I think the same thing will happen here.
Also, you can't say I'm in a privileged position for already knowing how to code and at the same time ask what the point of learning it yourself would be.
The problem is that the abstraction level moved up so far that we're now programming in the English language, and we're more like managers than programmers. This will only get worse. The next step will be that AIs run entire companies. And BigAI will not allow us to profit from that because they will just run the AI themselves, the current situation was just a stepping stone.
Managers still need technical skills though.
If AIs really get there, we're all out of jobs to do.
> We should be very concerned for the next generation. When you have the constant temptation of digging yourself out of a problem just by asking an LLM, how will you ever learn anything?
This is just the same concern whenever a new technology appears.
* Socrates argued that writing would weaken memory, that it would create only superficial knowledge without real understanding. But writing didn't destroy memory. It allowed us to store information and share it with many others far away.
* The internet and web indexers made information instantly accessible, allowing you to search for just the information you need. The fear was that people would simply copy from the internet, yet researching information became far faster, and anyone with internet access could use it to learn on their own; just look at the number of educational websites offering courses.
Each time a new technology arrived and people feared it would degrade knowledge, the tools only helped us increase our knowledge.
Just like with books and the internet, people could simply copy and not learn anything; it's not exclusive to LLMs. The issue isn't the tool itself, but how we use it. Instead of learning how to search, the new generation will probably need to learn how to prompt, ask, and evaluate whether the LLM is hallucinating.
Socrates was proven dead wrong by neurobiology.
LLMs making you dumber is far from being "disproven" by science. Quite the opposite: https://arxiv.org/abs/2506.08872
I'm not sure what you mean by "Socrates was proven dead wrong."
The study you linked doesn't show that people are becoming dumber because of LLMs; it just shows that when you offload tasks to these tools, your brain engages less in that specific task. It's just like using a calculator instead of doing complex calculations on paper, or writing with a spell-checker, or using a search engine instead of opening a book and searching. The question is whether long-term cognitive capacity is reduced, and as I said before, this argument predates LLMs (all the way back to Socrates).
Also, take the study with a grain of salt, as it has a small sample of only 54 participants performing a single task in a short-term study.
Personally, I believe LLMs just allow us to work at a higher level of abstraction.
As an older person, I'm not worried. The world changes all the time. People are put in difficult situations, and they have to adapt. "Oh no, how will people learn things?" is not that big of a struggle in the grand scheme. We're not burning books or giving people lobotomies. People can still learn if they want to, easier than ever before. Businesses will adapt, people will adapt, by necessity. Things will be very different, sure. But then we get used to the difference, and it becomes normal.
Kids today couldn't imagine how people used to live just 100 years ago, like it was the dark ages. People from that age would probably look at kids 10 years ago and think, these poor children! They don't know how to work hard! They don't know anything about life! They're glued to these bizarre light machines! Every age is different.
Yea, IMO people shouldn't make jobs / professions too big a part of their identity. At some point human programming may be largely gone, but probably there will be increased demand for something else.
It should be the government's job to make it as easy as possible for people to retrain, switch jobs, and start new careers. Obviously taxation should be reworked too, if AI and robots replace lots of jobs in some sectors. Profits produced by efficiency gains shouldn't be concentrated among just a few billionaires.
My concern is also, how will programming and software design ever improve?
In my eyes, it will be the same as the introduction of garbage collectors. It will help to a degree, make people lazier along the way, and cause some additional, brand-new issues. But overall very little will change, because for serious implementations human intellect is still going to be the primary actor and AI will be disallowed.
At the beginning of the internet, I used to save all the webpages where I'd find info, just in case I was stuck without a connection or the website removed it. I had parts of MDN.
The internet never fell. I bet it’ll be the same with AI. You will never not have AI.
The big difference is the internet was a liberation movement: Everything became open. And free. AI is the opposite: By design, everything is closed.
Not only that. AI will have increasingly diminishing returns, since it relies on good-quality human-written code. As that becomes less and less available, the quality of generated code will also suffer, because at some point AI will be training on AI-generated content.
I've seen a lot of posts like this one, but this is the first to encapsulate how I feel so well.
Honestly, I don't really know what to do. I spent my whole life (so far; I'm still very young) falling in love with programming, and now I just don't find this agent thing fun at all. But I just don't know how to find my niche if using LLMs truly does end up being the only way for me to build valuable things with my only skills.
It's pretty depressing and very scary. But I appreciate this article for at least conveying that so effectively...
Could I ask what you loved about programming such that you now don't find this agent thing fun at all?
I'm genuinely curious, I feel very differently and excited about this agent thing.
Asking because, unlike a lot of other commentary, this struck me as being more about the act itself than about being depressed/anxious for financial reasons, etc.
I love the act of writing code, it clicks well with me. I love the feeling of my brain solving problems, figuring out how something works, and then finally understanding it. I love debugging. I love having built something that people love, solely wrought by my own fingers.
I got into programming because the act of spinning a web of code just feels like what I'm designed to do.
Vibe coding definitely has some of that, but it feels so detached from any understanding of the computer itself. I feel like I'm bossing someone around — and I would never want to be a non-coding manager. I'm curious, how/why do you feel so different?
(Obviously the financial side is stressful too, but I feel like I'm in a good spot to figure that out either way.)
I am not sure how many other people on here are old enough to remember, but I first learned to program before I had the internet. I had to read books, and then if I was trying to figure out how to do something, I would have to figure out which book to look it up in, and then figure out where in the book to find it and how to apply it to my situation. It made me learn a ton, because I would have to read a lot of books to even know where to look; I had to do my own ‘scraping and indexing’.
I remember as the internet took off and you could just search for things, I thought it made programming too easy. You never had to actually learn how things worked; you could just search for the specific answer, and someone else would have done the hard work of figuring out how to use the tools available for your particular type of problem.
Over the years, my feelings shifted, and I loved how the internet allowed me to accomplish so much more than I could have trying to figure it all out from books.
I wonder if AI will feel similar.
I use AI for very little but I do like using it for stuff I'm just not very interested in but have to get done.
For programming, I don't like it. It's like a master carpenter building furniture from IKEA. Sure it's faster and he doesn't have to think very hard and the end result is acceptable but he feels lazy and after a while he feels like he is losing his skills.
The best days of computing for me were what you remember. A computer was just a blank slate. You turned it on, and had a ">" blinking on the screen. If you wanted it to do anything you had to write a program. And learning how to do that meant practice and study and reading... there were no shortcuts. It was challenging and frustrating and fun.
All fair, but I think a different interpretation could be that AI allows you to vastly expand the scope of the possible, so as to create a situation where things are once again challenging and frustrating and fun.
This is the part that interests me most. The IKEA analogy from the parent comment assumes the carpenter's only option is to build the same furniture faster. But what if the carpenter uses the prefab stuff for the boring parts and spends their real energy on the joints and details that actually matter?
I've noticed this pattern in music too - the people who understand theory deeply use generative tools in ways that beginners literally can't, because they know which output to keep and which to throw away. The tool doesn't replace the taste, it just gives you more raw material to apply taste to.
But here's what I keep wondering: does expanding the scope of the possible eventually erode the deep understanding that makes the expansion valuable in the first place? Like, if you never have to debug a memory leak because the agent handles it, do you lose the intuition that would let you architect systems that don't leak in the first place?
Most programmers don't like the fuzziness of AI, so things may be challenging and frustrating, but certainly not fun.
I've always felt a little odd saying, "Back in my day we had to understand the cpu, registers, etc." It's a true statement, but doesn't help in any way. Is that stuff still worth knowing, IMHO? Yes. Can you create incredibly useful code without that knowledge today? Absolutely.
There are some people who still know these things, and are able to use LLMs far more effectively than those who do not.
I've seen the following prediction from a few people and am starting to agree with it: software development (and possibly most knowledge work) will become like farming. A relatively small number of people will do with large machines what previously took armies of people. There will always be some people exploring the cutting edge of thought and feeding their insights into the machine, just as I imagine there are biochemists and soil-biology experts who produce knowledge to inform the decisions made by the people running large farming operations.
I imagine this will lead to profound shifts in the world that we can hardly predict. If we don't blow ourselves up, perhaps space exploration and colonization will become possible.
I think it's more likely at this point that we turn the depleting quantities of exploitable resources on this planet into more and more data centers, and squander any remaining opportunity for space exploration/colonization at scale.
If this happens to software development, this will happen to most mental jobs.
> Can you create incredibly useful code without that knowledge today?
You could do that without that knowledge back in the day too, we had languages that were higher level than assembler for forever.
It's just that the range of knowledge needed to get the most out of the machine is far smaller now. Before, you had to know how to write a ton of optimizations by hand; nowadays you have to know how to write your code so that the compiler has an easy job optimizing it.
Before, you had to manage memory accesses yourself; nowadays making sure you're not jumping across memory too much and being aware of how the cache works is enough.
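The cache point can be made concrete with a sketch (an illustrative example, not from the thread): summing the same row-major matrix in two traversal orders does identical arithmetic, but the column-order version strides `n` elements per step and loses cache locality, which typically makes it several times slower for large `n`.

```cpp
#include <cstddef>
#include <vector>

// Both functions sum an n x n matrix stored row-major in a flat vector.
// rowMajorSum walks memory sequentially, so consecutive accesses hit
// the same cache line; colMajorSum jumps n elements per step, touching
// a different cache line on almost every access for large n.
long rowMajorSum(const std::vector<int>& m, std::size_t n) {
    long sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            sum += m[i * n + j]; // adjacent elements: cache-friendly
    return sum;
}

long colMajorSum(const std::vector<int>& m, std::size_t n) {
    long sum = 0;
    for (std::size_t j = 0; j < n; ++j)
        for (std::size_t i = 0; i < n; ++i)
            sum += m[i * n + j]; // strides by n: cache-hostile
    return sum;
}
```

The results are identical; only the access pattern differs, which is exactly the kind of awareness that remains useful even after the compiler handles the rest.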
Or more so: machines have gotten so fast, with so much disk and memory, that people can ship slopware filled with bloatware and the UX is almost as responsive as Windows 3.1 was.
I don't think it's odd. Sacrificing deep understanding, and delegating that responsibility to others is risky. In more concrete terms, if your livelihood depends on application development, you have concrete dependencies on platforms, frameworks, compilers, operating systems, and other abstractions that without which you might not be able to perform your job.
Fewer abstractions, deeper understanding, fewer dependencies on others. These concepts show up over and over and not just in software. It's about safety.
So much AI moaning and groaning these days seems based on the idea that people have to be forced to do anything of value, even for themselves.
It seems to imply a great deal of pessimism about human self-determination. Like, I can't be anything good unless there is an external mold pressing me into the good shape. And it can't be my choice because I would never choose anything good. I'll only do good things for myself if forced.
Since AI is supposedly taking everybody's jobs and making it so we can choose never to better ourselves, maybe future governments will need to institute taskmasters to force us into regimens of physical and mental health and vigor. A whole new adult school system will have to be instituted.
Or we can just do art or live short, simple lives now that we won capitalism and our basic needs can be automated in a global socialist utopia.
I don’t really like making art though. I like programming.
That is art, when done properly.
I don't use it a lot, but when I do it's pretty much two patterns:
* "search on steroids" - get me to the thing I need, or tell me whether the thing I need exists; give me a few examples and I can get it running.
* getting the trivial and uninteresting parts out of the way, like writing some helper function for the stuff I'm doing now. I'll just call the AI, let it do its thing, and continue writing code in the meantime, then look back, check if it makes sense, and use it.
So I'm not really cheating myself out of the learning process, just outsourcing the parts I know well enough to check for correctness, while saving time writing.
I learned to program on a Commodore 64 using books I could get from libraries and some magazines like Compute!'s Gazette. I got online very early via BBSes (originally on a 300 baud modem for my C64) and was on the internet by the mid to late 1980s.
I never had the feeling that being able to search for things on the internet made things too easy. For me it felt like a natural extension to books for self-learning, it was just faster.
LLMs feel entirely different to me, and that's where I do get the sense that they make things "too easy" in that (like the author of the OP blog post) I no longer feel like I am building any sort of skill when using them other than code review (which is not a new skill as it is something I have previously done with code produced by other humans for a long time).
As with the OP author I also think that "prompting" as a skill is hugely overblown. "Prompting" was maybe a bit more of a skill a year ago, but I find that you don't really have to get too detailed with current LLMs, you just have to be a bit careful not to bias them in negative ways. Whatever value I have now as a software developer has more to do with having veto power in the instances where the LLM agent goes off the rails than it does in constructing prompts.
So for now I'm stuck in a situation where I feel like, for the work I am being paid to do, I basically have to use LLMs, because not doing so is effectively malpractice at this point (there are real efficiency gains). But for selfish reasons, if I could push a button and erase the existence of LLMs, I'd probably do it.
> I never had the feeling that being able to search for things on the internet made things too easy. For me it felt like a natural extension to books for self-learning, it was just faster.
I think this depends on how you are using the internet. Looking up an API or official documentation is one thing, but asking for direct help on a specific problem via Stackoverflow seems different.
Fair point, Stackoverflow didn't exist for quite a while after I started using the internet for information, and while I made as much use of it as anyone for googling answers to questions such as "What does this specific 32-bit HRESULT error code from the Win32 API mean in this context", I'm not sure if I ever posted a single question on the site.
> I had to read books
Same here. Except that as a native French speaker, there simply weren't that many quality books about programming/computers that I could easily find in French.
So at 11 years old I also taught myself English, by using computers (which were in English back then) and by reading computer books.
And we'd exchange tips with other kids in the neighborhood who also had computers and were also learning to code (like my neighbors who eventually, 20 years later, created a software startup in SoCal).
Even people with the Internet growing up still learned largely through books, probably until StackOverflow really took off. One "hack" prior to SO that sometimes goes under-acknowledged was Google Groups. Around 2000, they bought and made free to the public the entire Deja News USENET archive, and suddenly you could search comp.lang.whatever and usually find someone who'd asked (and someone who answered) whatever question you had. And the signal-to-noise ratio was extremely high, given the barriers to entry (technical and financial) to being active on USENET's technical groups in the 90s.
Of course, asking a question was another matter, likely to result in a rebuke for violating the group's arcane decorum. But given how pervasive "RTFM" culture was back then, most "n00bs" were content to do just that (RTFM) until they came up against something that genuinely wasn't covered in some FAQ or manpage.
I have also felt something similar.
A few days back, I tried to implement a PDF reader by pure vibe coding. I used all my free Antigravity, Cursor, and Copilot tokens to create a half-baked but working Next.js PDF reader that (to be honest) I couldn't have glued together without 2 weeks of work. As an MLE, I have done negligible web development using JavaScript and have mostly worked with Python and C.
But the struggle actually started after the free tokens were exhausted. I felt anxious even looking at those Next.js files. I can't quite describe it, but it was probably some kind of fear: fear of either not being able to debug or implement a new feature, or of not being willing to put in precious hours (precious because of the FOMO that I could be doing something cool with AI-paired vibe coding) to understand and build the feature myself.
I have abandoned the project since that day. Haven't opened it yet, partly because I am waiting for the renewal of my free tokens.
Reviewing code is absolutely different from writing it, and in my opinion much harder if the goal is more than surface level understanding.
This is what I am still grappling with. Agents make me more productive, but probably also worse at my job.
The biggest problem in my head with AI-generated code is that its mistakes are subtle but can still be critical. There will be a point where people don't understand the generated code and just leave it unmodified, allowing other code to pile up and depend on it. At that point you no longer have a bug, but a new feature. Also, AI doesn't grasp things at a big scale; it just shits out the output with the highest score. That doesn't mean the output is a great fit for your project or for upcoming plans.
The 747 analogy cuts deeper than skill atrophy. The pilot's problem isn't just that he stopped improving — it's that the feedback loop between action and consequence was severed. When the plane does the flying, you stop building the intuition that tells you when something is subtly wrong before it becomes catastrophically wrong.
That's the real risk with coding agents, and it's not about prompting skill or code review habits. It's about the degradation of the anomaly-detection faculty — the part of an experienced engineer's brain that notices "this doesn't feel right" before understanding why.
The pilot analogy also suggests the failure mode: not gradual incompetence, but sudden catastrophic incompetence. Pilots who over-rely on automation perform fine until the automation fails — then they're disoriented in a situation their skills haven't been trained to handle. Air France 447 is the canonical example.
The coding equivalent isn't a developer who writes bad code. It's a developer who can't diagnose what went wrong when the agent produces something plausible but subtly broken in a domain they no longer understand deeply enough to interrogate.
The "write code by hand as an educational task" suggestion is right but probably underestimates the discipline required. It's hard to choose the slower path when the faster one is immediately available.
> For example, to add pagination to this website, I would read the Jekyll docs, find the right plugin to install, read the sample config, and make the change. Possibly this wouldn’t work, in which case I would Google it, read more, try more stuff, retest, etc. In this process it was hard not to learn things.
How is this any different than building Ikea furniture? If I build my "Minska" cupboard using the step-by-step manual, did I learn something profound?
Firstly, if you're doing those steps, you're building your own tutorial, not just following the exact steps in a manual provided with the software. The sample config won't be exact or perfect for your setup, so you'll need to at least figure out how to adjust it to your needs.
That said, I think you're still learning things building IKEA-style software. The first time I learned how to program, I learned from a book, and I tried things out by copying listings from the book by hand into files on my computer and executing them. Essentially, it was programming-by-IKEA-manual, but it was valuable because I was trying things out with my own hands, even if I didn't fully understand every time why I needed the code I'd been told to write.
From there I graduated to fiddling with those examples and making changes to make it do what I wanted, not what the book said. And over time I figured out how to write entirely new things, and so on and so forth. But the first step required following very simple instructions.
The analogy isn't perfect, because my goal with IKEA furniture is usually not to learn how to build furniture, but to get a finished product. So I learn a little bit about using tools, but not a huge amount. Whereas when typing in that code as a kid, my goal was learning, and the finished product was basically useless outside of that.
The author's example there feels like a bit of both worlds. The task requires more independent thought than an IKEA manual, so they need to learn and understand more. But the end goal is still practical.
If you've never put a cupboard together, you would have learned what the different parts are, what size of screws to use (in a rough sense), and so on. You may have forgotten it right after, but when someone asks you to help them, you will be a bit more proficient than someone with no experience.
But the nice thing about a cupboard and its components is that they are real objects, so the remembrance is done with the whole body (like the feeling of a screw not correctly inserted). Software development is 90% a mental activity.
I find the opposite is true for me. In my wheelhouse I can use an agent to do a thing, and I can be very critical of the implementation. Outside of my wheelhouse I actually learn quite a lot by watching the agent solve a problem. Since I do have a strong background, I am still able to judge the overall approach and identify the obviously stupid things the agent tries to do. I would say the code quality is probably a bit worse in those situations than what I would have ended up with, but it takes about 1/3 of the time. The most difficult part is opening a PR and worrying there might be a couple of stupid blips left that I missed, which didn't affect the implementation, but which my coworkers are going to look at and ask me wtf I was thinking.
> I believe in coding primarily as a means to an end
Yes. Absolutely. To what end, though? Is your end deterministic like a cryptographic protocol or loose like pagination of a web page? Is your end feature delivery or 30 years of rock solid service delivery at minimal cost?
AI is a dangerous tool. It exposes fundamental questions by automating away the mundane. We have had the luxury of not thinking deep and hard about intent and value creation/capture and system architecture. AI is putting us face to face with our ineptitude: maybe it wasn’t the tech stack or the programmers or the whatnots? Maybe the idea was shait, maybe I had no understanding of the value added of my product? Maybe …?
You get the best gear - musical instrument, bicycle, camera, etc - the pros have and still the results are not great. Gotta ask why. We are experiencing this at literally industrial scale.
Re: "reviewing code is very different from producing it, and surely teaches you less" - I feel this so much when reviewing the code one of my coworkers writes. My coworker makes plenty of mistakes and I learned the hard way that reviewing his PRs in a web page is not enough. These days when I have to review his code I download his branch locally and load the entire solution in the IDE. I then track his changes and usually find a few things wrong.
BTW - my coworker is not AI. It is a flesh-and-bones SWE.
> Coding agents are here to stay, and you’re a fool if you don’t use them.
Why would they be here to stay? The crux of the author's argument is that using them is detrimental in the long term. The correct response to that is not a lukewarm "maybe do some coding now and again", it is "don't use tools that make you worse".
shich's point about simulator mandates is the sharpest thing in this thread. Aviation treats skill atrophy as a systemic risk with institutional solutions. Software engineering treats it as an individual discipline problem. "Just practice writing code by hand" is the equivalent of telling pilots to go fly a Cessna on weekends.
The analogy also breaks in a useful way though. Autopilot doesn't modify the aircraft. Coding agents modify the codebase, and each session changes the terrain the next session operates on. The skill you actually need isn't "can I still write code" — it's "can I reason about a codebase substantially authored by a process I didn't control." That's closer to forensic engineering than piloting.
> I do read the code, but reviewing code is very different from producing it, and surely teaches you less. If you don’t believe this, I doubt you work in software.
I work in software and for single line I write I read hundredths of them. If I am fixing bugs in my own (mostly self-education) programs, I read my program several times, over and over again. If writing programs taught me something, it is how to read programs most effectively. And also how to write programs to be most effectively read.
> I work in software and for single line I write I read hundredths of them.
I'm not sure whether this should humble or confuse me. I am definitely WAY heavier on the write side of this equation. I love programming. And writing. I love them both so much that I wrote a book about programming. But I don't like reading other people's code. Nor reading generally. I can't read faster than I can talk. I envy those who can. So, reading code has always been a pain. That said, I love clever little golf-y code, nuggets of perl or bitwise magic. But whole reams of code? Hundreds upon hundreds of lines? Gosh no. But I respect anyone who has that patience. FWIW I find that one can still gain incredibly rich understanding without having to read too heavily by finding the implied contracts/interfaces and then writing up a bunch of assertions to see if you're right, TDD style.
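The contract-probing approach above can be sketched in a few lines. This is a hypothetical example (the `slugify` function here stands in for any unfamiliar code you didn't write): instead of reading every line, you guess the implied contract and pin it down with assertions.

```python
# Stand-in for an unfamiliar function found in a codebase we didn't write:
def slugify(title):
    return "-".join(title.lower().split())

# Probe the implied contract with assertions instead of reading line by line.
# Each assert encodes a guess about the behavior; a failure corrects the guess.
assert slugify("Hello World") == "hello-world"     # guess: lowercases, hyphenates
assert slugify("  spaced  out  ") == "spaced-out"  # guess: collapses whitespace
assert slugify("") == ""                           # guess: empty input is safe
print("guessed contract holds")
```

If any guess is wrong, the failing assertion tells you exactly which part of your mental model to fix, which is the TDD-style feedback loop the comment describes.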
> If I am fixing bugs in my own (mostly self-education) programs, I read my program several times
I think here lies the difference OP is talking about. You are reading your own code, which means you had to first put in the effort to write it. If you use LLMs, you are reading code you didn't write.
I read other people’s code all the time. I work as a platform engineer with sre functions.
Gemini 3 by itself is insufficient. I often find myself tracing through things or testing during runtime to understand how things behave. Claude Opus is not much better for this.
On the other hand, pairing with Gemini 3 feels like pairing with other people. No one is going to get everything right all the time. I might ask Gemini to construct gcloud commands or look things up for me, but we’re trying to figure things out together.
If I need to change someone's code, I also read it. Several times.
>hundredths of them
Man, it would rule so much if programmers were literate and knew how to actually communicate what they intend to say.
It's obvious from the context here what the intended meaning was. Everyone makes typos sometimes.
It is literally not clear. OP could mean that they read hundredths of a line for each line they write, i.e. 100 lines of code written and 1-3 lines read. That is in fact literally what they wrote.
Man it would rule so much if programmers could manage not to be assholes by default so much of the time.
It's ironic that the more ignorant one is the one calling the other ignorant.
Alright, I've had my fun with the name-calling. I will now explain the stunningly obvious. Not a thing anyone should have to do for someone as sharp as yourself, but there we are...
For someone to produce that text after growing up in an English-speaking environment, they would indeed be a comically inept communicator. Which is why the more reasonable assumption is that English is not in fact their native language.
Not merely the more generous assumption. Being generous by default would be a better character trait than not, but still arguably a luxury. But also simply the more reasonable assumption by plain numbers and reasoning. So, not only were you a douche, you had to go out of your way to select a less likely possibility to make the douche you wanted to be fit the situation.
Literate programmers indeed.
Not everyone has English as a first language.
> reviewing code is very different from producing it, and surely teaches you less
Maybe he meant "reviewing code from coding agents"? Reviewing code from other humans is often a great way to learn.
I interpreted it as meaning it's not as good a way to learn.
I learn the most from struggling through a problem, and reading someone’s code doesn’t teach me all the wrong ways they attempted before it looked like the way it now does.
Exactly. And vice versa: one of the biggest benefits of code review is calling out pitfalls that you, the reviewer, have run into that the reviewee isn't aware of. LLM addicts won't have any experience with what works and what doesn't, so their reviewing will be pretty useless.
I was thinking in situations where a coworker might send me something to review, and I might have thought "hmm, I wouldn't have done it like that, but this is a great way to do it too". Also, a good source of teachable code is to participate in a programming contest, and then review the repositories of the teams who scored better than me after the contest.
I agree that if I don't already know how to implement something, seeing a solution before trying it myself is not great, that's like skipping the homework exercises and copying straight from the answer books.
Yeah I learn from reading other work too, but it doesn’t stick as well as when I work through it.
The problem now is that the pressure to use LLMs means creating more code while understanding so much less.
This is why programming tutorials don't really teach much: you get the finished version, not the wrong steps that were taken, why they failed, or what else was tried.
These steps are what help you solve other issues in the future.
Whether we want to accept it or not, we’re now QA. That’s not derogatory, at all.
But I don’t think the answer here is to double down on reading the code and understanding that deeply. We’re rapidly moving past this.
I think the answer is to review the code for very obvious bad choices. But then it’s about proper validation. Check out the app, run the flows, use it for real. Does it _actually_ function?
Or that’s what is working for me. I cannot review all the LOC and I’m starting to feel like I don’t want to.
This is why I still haven't embraced agents in my work but stick with halfway manual workflow using aider. It's the only way I can keep ownership of the codebase. Maybe this will change because code ownership will no longer have any value, but I don't feel like we're there yet.
[...] since I work at an AI lab and stand to gain a great deal if AI follows through on its economic promise.
And there it is. What in the LinkedIn-sudden-B2B-marketing-insight was that?
I think this author could consider thinking of the AI as more than just a task rabbit that allows us to not code, not think, not understand.
If the LLM is indeed such a master at complex coding tasks that we don't understand, why not ask it some questions about how the code works?
You can even ask directly about the concern. "I am worried that by letting you do everything I am not learning how the system works. Could you tell me more about what you did and how I might think through it if I needed to do it myself?"
What the fuck are people working on where it's possible for the LLM to just add entire features. Refactors and class/method level code can be impressive, anything highly structured with good guard rails. As soon as things start to reach beyond that it falls to absolute garbage.
I built this entire app on ios + website without opening an IDE.
https://www.gophergolfer.com/iphone
NextJS, Rails, GraphQL, React Native
Certainly wasn't one-shot for all of it but case in point it has dozens and dozens of "features" all LLM implemented
I want to hear someone say "I work at Google on a 10-year-old 100KLOC service and AI is doing it all, we are just vibe coding," as that would be really interesting. Greenfield? Yeah, AI slaughters greenfield before breakfast.
Is the source code available?
There are companies building entire applications, indeed replicating the functionality of existing SaaS applications to test their original applications, with no humans in the development loop.
We're looking at the twilight of programming as a human skill. The LLMs are just that good.
The end result is, and will always be, garbage if there is no human in the loop to test whether the result meets the requirements and to tell the LLM what to do when it doesn't.
Like somebody else said, there is still a need for QA (and usually for requirements gathering too), that's a part of the development cycle. Developing software that is meant to be used by humans with zero humans involved isn't realistic.
I mean... It takes 10 minutes of testing to know this is bullshit. At least in the near term. I've sat with an agent and played the part of a vibe coder. Not looking at the code, frankly providing more guidance and feedback than a vibe coder would, and even in a thousand-line app it falls to absolute shit fast. It does get something that "technically" works, but it will collapse in on itself in no time.
The act of designing software might be changing, less writing the actual code, but someone who knows what the fuck they're doing still has to guide the ship.
"Vibe coding" with a single agent is really only a thing for small-scale projects. Really you want to be orchestrating many agents: some generating code, some reviewing, and some testing, feeding back into the generators. Cloudflare developed a clone of Next.js this way and are putting it into production. No humans in the main development loop.