The hardest part of building software is not coding, it's requirements
by BerislavLopac
If you know the address, it is a matter of figuring out directions. But if you do not know the address, no directions may be discovered. How do you figure out the address?
If you cannot determine what needs doing, you cannot determine how to do it. How do you determine the what?
The answer is in the problem statement, but first you need the problem statement. How do you get to the problem statement?
It is easier to follow orders than it is to determine what the orders ought to be. How does one determine the orders?
How shall we live?
A solution is a response to a question asked within a context of presuppositions and facts. But this is not necessarily a one-directional process in practice. In practice, we do not know the context entirely and in the needed resolution. Thus, we often learn from our attempts at a solution that something in the presuppositions is amiss. And indeed, this is how science functions and how Socratic dialogues proceed. A critical experiment can reveal that something is wrong, but it cannot necessarily tell us what. The needle in the stack of theories, and common or philosophical or merely technical presuppositions, is not necessarily in the theory being tested, though that is where we begin for economical reasons.
You need prudence, the heart of practical reason, to know what to do.
And then you need to actually do it.
> It is easier to follow orders than it is to determine what the orders ought to be. How does one determine the orders?
This is true when you are given orders of exactly the right specificity, and at no other time. In my experience, most orders are either so vague as to barely be orders while also obscuring all context necessary to determine what the requirements might be ("implement tagging") or so specific that they are a death march and probably won't work anyway ("Here is a UML class diagram. All you have to do is turn it into working C++ code.").
In my opinion, any worldview or theory resting on the assumption that the people giving orders have it harder than the people following them is wrong until proven otherwise. It's very plain to anyone with real experience that the reason everybody wants to be a General is /because it's easier than being a Private/.
That is why I have always loathed the old-school essay https://en.wikipedia.org/wiki/A_Message_to_Garcia .
It describes a situation where a lazy manager simply wants someone to follow directions rather than gather requirements.
It was lauded as a work of genius business prose at a company I worked at and fortunately departed.
This x100
I got to a repeatable, functional process to do exactly what you described when I started applying the state machine design pattern to organizations - and the accompanying broader Markov Decision Process that allows it to iterate.
You need to define two states:
1. Current state
2. Desired next state
Once you have your states, then you build a “transition matrix” that is unique to this state transition which will guide the “how.”
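A minimal sketch of that structure, with all names and values invented for illustration (this is the metaphor made executable, not a real framework): two labelled states, and a "transition matrix" of steps that each carry an observable check.

```python
# Hypothetical sketch of "organizational state management":
# two explicit states and a transition matrix of steps, each
# paired with something observable that proves the step happened.

current_state = {"feature_x": "unbuilt", "revenue": 0}
desired_state = {"feature_x": "shipped", "revenue": 10_000}

# The "transition matrix": ordered steps that move one state
# variable at a time, each with a way to observe completion.
transition_matrix = [
    ("feature_x", "unbuilt", "prototyped", "demo runs end to end"),
    ("feature_x", "prototyped", "shipped", "deployed to production"),
    ("revenue", 0, 10_000, "billing dashboard shows MRR"),
]

def apply_step(state, step):
    variable, before, after, observable = step
    # Demanding observability: refuse to proceed from the wrong state.
    assert state[variable] == before, f"{variable} is not in state {before!r}"
    print(f"{variable}: {before!r} -> {after!r} (verify: {observable})")
    return {**state, variable: after}

state = current_state
for step in transition_matrix:
    state = apply_step(state, step)

assert state == desired_state
```

The point of the sketch is only the shape: each row of the matrix is a "how" step, and the assertion is the demand for observability at every transition.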
To be clear, “organizational state management” is not some turn-key, set-it-and-forget-it mathematical solution.
It’s a simple metaphorical method, but the structure is one that reveals the simplest path to a defined target condition by demanding observability for each step, and it raises the question of how to draw the map.
How do you navigate picking the right level of specs for the "Desired next state"? To me that's hard in personal projects, for instance: you can choose from a variety of topics, then have to decide how deep you get into each.
I guess I'm asking if you have a heuristic for what's a good enough desired next state.
Generally it's based on whatever the next big economic decision point is
So if the desired state is "New feature X generates Y new revenue per month" then we need to define the state map of the system where that would be true.
That begs many questions then, which reveal our state variables:
"What feature metrics do we bill for, in what increments, and on what frequency?" - Business Model
"What scale is required for that transaction frequency?" - Infrastructure planning
"What amount of downtime can we afford and still meet y goal" - Observability Requirements
"What tools do we add to help us build?" - Development Environments/Practice
etc... until everyone is satisfied that you can map the current state of things, through this transition matrix of questions and into the goal state.
The other piece is having an accurate map of the current state of all your dependencies - including people, time, money, focus, etc...
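As a sketch of what those question-derived state variables might look like once collected (everything here is hypothetical and for illustration only):

```python
# Hypothetical state variables derived from the questions above, for the
# goal "feature X generates Y revenue per month". None of these names or
# values come from a real framework.
goal = "feature X generates 10k/month"

state_variables = {
    "business_model": "per-transaction billing, monthly",  # what do we bill for?
    "infrastructure": "handles 1k transactions/minute",    # what scale is required?
    "observability": "uptime >= 99.9% to still hit goal",  # affordable downtime?
    "dev_environment": "staging env + load-test harness",  # tools to help us build?
    # The "accurate map of current dependencies" mentioned below:
    "dependencies": {"people": 3, "budget": 50_000, "months": 4},
}

# Everyone is satisfied when each variable has a concrete value: the
# transition matrix then maps current values to goal values.
assert all(state_variables.values())
```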
Interesting. Any resources to learn more about this?
My favorite template for defining the two states & transition path is the A3 Plan popularized by Toyota. I have returned to it over and over throughout my career as a simple and effective problem solving tool.
I'm not working off of an existing framework, just my own experience and battle scars.
I'd certainly take the time to write my state management process out if there's interest though! Would you read it?
I did an impromptu example writeup in another response if that looks interesting to you:
Sounds like you'd get a lot out of Event Storming (if you haven't looked in to it already)
Worked well for us for restructuring 200 people. Basically the tech-savvy people and the ones doing most of the difficult work sketched the interactions and how the project unofficially worked, and we restructured along the "real architecture".
I really like your comment. I plan on using it next time product asks me "how long will it take" when we haven't gotten anywhere close to deciding what "it" is.
We don't know where we're going, we don't know how we're getting there, I can't possibly tell you how long it'll take to get there.
What you're describing is a bidirectional iterative process that prioritizes building and testing and feedback.
Which is the entire point of agile, even though most people here seem to hate it.
I like to call it small "a" agile vs. big "A" Agile. The difference being big "A" Agile has companies selling prescriptive solutions, ticketing systems, books, consultants, etc. and then management that hears about a silver bullet and thinks it solves all their woes.
Small "a" agile is an iterative process with a tight feedback loop, small testable deliverables and a backlog where prioritized items bubble up resulting in other items getting stale. Stale items are good in the sense that you didn't build, test and deliver something that is of little to no value.
Agreed but you don't need agile and a lot of the hate I think comes from it not actually being a bidirectional iterative process for most people in 'agile' companies.
Whenever you talk about "agile" you need to be specific: are you talking about actual methods to achieve real functional agility in your team? or are you talking about consultants selling the idea of scrum which management just turns into a daily hour long meeting?
Both are called agile but they're very different. Most people will agree that the first is good and the second is bad.
> It is easier to follow orders than it is to determine what the orders ought to be.
Depends on the problem space. For example, any dimwit could have ordered any mathematician to prove Fermat’s last theorem. That doesn’t mean it was easy to actually do it.
The solution is almost always “fuck around and find out” (FAFO).
This comment should be a post itself!
Maybe I'm a bit cynical but most of the "software engineering is so much more important than 'coding'" talk I have encountered came from people who could not write code that compiled/ran, let alone find creative solutions or do interesting things. Maybe it's important for them to see the actual programmer as a replaceable commodity, I dunno. Always bothered me. I'd also say that good programmers tend to be good software engineers even if they never draw a UML diagram (because...why). I sometimes get the feeling that certain "architects" assume programmers are just code monkeys that do not think. And then there's anecdotes of issues that came about because the programmer didn't understand the domain and thankfully the godlike architect saved the day etc...conveniently ignoring the myriads of wrongly collected requirements, wrong diagrams, architectures/diagrams that were never updated/are out of sync with the actual code etc.
Once again only from my experience...good programmers are capable of...thinking. Big shocker there but programmers tend to think about why they do and do not do stuff in certain ways (and even if it makes sense to do them).
I'm well aware there are excellent architects/managers, but from my experience the good ones were strong programmers before and then "moved up", and they always understood the code level. And that's why I'm not afraid of "AI will take all programming jobs" scenarios...because even if the (en)coding were all done by AI, to become really good at directing said AI you also have to be good at programming (at least in my opinion).
But apart from that, I also see programming as an artistic endeavor at times. There's a lot of creativity and self expression that goes into certain kinds of programs. And writing good code that solves interesting problems is actually hard. I'm pretty sure the initial Quake engine was harder to write than getting the requirements right for said engine for example.
> I'm pretty sure the initial Quake engine was harder to write than getting the requirements right for said engine for example.
Gamedev is historically an outlier because the technical requirements are relatively obvious: keep FPS above a minimum, and latency low. Actual game content is highly flexible, but performance cannot be compromised.
By contrast, enterprise and industrial systems have far more specific requirements that must be identified and satisfied, including domain logic, security and safety.
Games have two other elements that make them an oddball:
1. The data integrity mostly doesn't matter. Crashes, corruption, whatever. Just throw playtesting at it until it kind of works. Higher discipline than that is only needed for multiplayer.
2. Most of the engineering exists to support a pipeline to create, load and render specific assets at a specific level of dynamism, ranging from "menu text" to "skeletal animation" to "character creation". It's the definition of what an asset is or can be, and how dynamic it has to be, that can make the difference between "days" and "years" of development. A team that can argue its technical case eloquently can chop out the most expensive kinds of assets, make extremely shoddy tools to help deliver the ones that remain, and still end up with an engaging experience.
When we discuss old engines like Quake, the genius in them is primarily in finding exactly the right specification of assets to hit the target machine of the time while achieving previously unprecedented levels of detail. It's easy to make a game on modern hardware run at 500hz if you stick to rendering Atari 2600-grade scenes - it's the additional detail that turns it into a major project to hit 30hz.
Today the burden in games has shifted away from the runtime performance costs of a scene - since there are a lot of standard methods of getting an acceptable result, including entire engines with level-of-detail systems to help automatically downgrade graphics - and towards the production cost of achieving detailed assets, which is more in the realm of what technical artists now do as a dayjob.
in an ideal world the code is the best, first, and foremost specification of the requirements.
you should be able to parse the abstract syntax tree and just look at top level definitions to get a good overview.
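That idea can be done literally with Python's standard-library `ast` module: parse a module and keep only the names of its top-level definitions as a rough overview.

```python
# Parse a module and list only its top-level definitions as a
# rough "specification" of what it provides. The sample source
# is invented for illustration.
import ast

source = '''
"""Order processing."""
TAX_RATE = 0.2

def price_with_tax(net):
    return net * (1 + TAX_RATE)

class Order:
    def total(self):
        ...
'''

tree = ast.parse(source)
overview = [
    node.name
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    else node.targets[0].id
    for node in tree.body
    if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef, ast.Assign))
]
print(overview)  # ['TAX_RATE', 'price_with_tax', 'Order']
```

Docstrings and statement bodies are skipped; what remains is the module's surface area, which is about as close as code gets to being its own top-level spec.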
> "software engineering is so much more important than 'coding'" talk I have encountered came from people who could not write code that compiled/ran
Not only that: a person who is highly proficient in coding can think it's the easiest part of the development process, forgetting that it took many years to arrive at the point where it became easy. It's like a rich person who cannot understand that getting the money can be the hard part of buying something. People tend to value less what they have in abundance.
Yes, as a not-great programmer, I have trouble meeting the requirements even when I know exactly what they are. Even if it's like aligning footnotes in a word document or something, it's hard to be exact.
My experience as well. I have never worked with a non-coding business analyst or architect who provided any positive value to projects I worked on. Quite the opposite.
The secret to completing projects, when people like that are involved, is to ignore whatever input/demands they have and instead solve the actual customer problem. If that means deploying unnecessary tech X, demanded by some “authority”, but not actually use it in production that’s fine.
> Once again only from my experience...good programmers are capable of...thinking.
Agreed. Unfortunately most programmers are not good programmers.
Code itself is a mere encoding of that (mis)understanding in a highly specialized format.
What makes me “hopeful” - for me as a “programmer” - about the future is that our problems in nearly all domains are very damn near certain not caused by technical limitations.
We could have settled on some GUI/UX, communication, database, knowledge handling, etc. standards decades ago and we could have decided we are not going to re-implement all the basics of computing all over again all the time because we need our scrollviews to “snap” just right or we need to reimplement IRC again.
We could have fixed all sorts of issues proactively decades ago.
We didn’t and technology was a very minor part of that equation if it played a role of any significance at all. Nearly any domain I can think of is severely limited by various legal, social, financial and/or political issues that vastly overshadow any technical considerations.
AI won’t budge any of that.
At least in my circle (Ruby on Rails), big companies who continue to use the classic Rails+MySQL combo have to do ungodly things to keep it from collapsing. Just read any of their blog posts or look at some of their open source stuff. It's very easy to see how someone who's been through that might go out to build something new from the ground up to try and sidestep problems.
Even things like TCP and HTTP are seeing revamps because the stuff from last decade is just not that good. GPUs exist because x86 just didn't cut it on a fundamental level.
So I think on some level we kind of did try to standardize stuff. We always try to standardize stuff. Then someone comes along with a new non-standard thing that just blows your standard out of the water, because turns out it's really hard to build an unbeatable standard.
But yes, maybe there's a bit too much effort going into this reinvention process. Most new things really don't hold a candle to the tried and true stuff. Problem is sometimes they do.
> Even things like TCP and HTTP are seeing revamps because the stuff from last decade is just not that good
It’s IMO extremely debatable if they were not “that good”.
The mindset that serves simple websites with 60 requests to 12 different domains is the problem and it’s hurting my brain to think about how mind-numbingly stupid it all is. I think “revamping” TCP and HTTP is misguided at best. I think I may have an unpopular opinion here or not. I am not sure anymore.
It’s a human issue like someone said here. We don’t settle down on something unless forced. We are not going to fix it, which is good, because we will have lots to do.
TCP needs revamping. It is a mess of backwards compat on top of backwards compat, with many performance regressions and security vulnerabilities which shouldn't be there in the first place.
Personally, I adore the development of QUIC and I think it is going to bring a lot of good to networking, performance and security.
Let’s agree to disagree.
I am also not sure what “security” is being touched by TCP (or should be handled at that layer)? Performance could be better, but I’d say pick your battles. I prefer an “eternal” standard to turning everything upside down on a whim every generation for their incessant need to track everything you do and deliver as many ads to you in as little time as possible.
Encrypting everything by default on the lowest possible layer is batshit crazy. A sign of our times, not some fundamental technological leap forwards. In 100-200 years we will smack our collective foreheads.
I have to agree with this; encryption is an overhead that you just don't need sometimes. I'm imagining an engineer in an HPC organisation in 30 years feeling like a genius for figuring out they can get more performance by removing it.
We should've agreed on that. The current state is very much bad for all involved. Implementing SaaS CRM #15324 from scratch 'on different and current tech', spending millions without any reason other than thinking you are better, is not doing the world any good.
Human nature is the problem which means it’s probably unsolvable. No matter how good your generic CRUD framework is, every person who wants to build a CRUD app inevitably wants to make it unique in some way that the framework doesn’t support.
That is, everyone thinks they can do better. To be fair, there’s no way to know if you can do better without trying. Most of the time you can “try” in your mind though if you are knowledgeable and skilled enough.
Part of the problem I run into these days is too many people designing systems/products without even knowing what’s out there.
Well, that is indeed so; there are easily 15k+ CRMs out there too. So when they check the top 10 on Google and don’t find a fit, people often just think ‘wow, there is a hole in the market’ or ‘we need a custom solution as nothing fits’ and build one. In London alone, I know over 100 CRM implementations; most are internal, some SaaS, but all doing the same thing.
Really? Remove caching from the world and then see what is still working. In fact, you probably wouldn't even be able to read my response anymore.
So yes, technical limitations are a major concern when designing software.
Caching is also as old as computer science itself, which is my point. It’s not rocket science. We got that. We don’t need caching solution #6362.
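To make the point concrete: the basic pattern has been a standard-library one-liner for years. This is just `functools.lru_cache`, not any particular product:

```python
# Caching as a solved, decades-old building block: the Python standard
# library already ships a memoizing cache decorator.
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)
def expensive(n):
    global calls
    calls += 1  # count how often the real computation runs
    return n * n

expensive(12)
expensive(12)
expensive(12)
print(calls)  # 1 -- the two repeat calls were served from the cache
```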
Then I suggest you reword
> nearly all domains are very damn near certain not caused by technical limitations
to
> nearly all domains are very damn near certain not caused by bleeding edge technical limitations
Which is then true, but also doesn't really say much anymore.
I will admit that there are technical limitations in the world. I am not quite sure I said there weren’t. What I said was that they just don’t matter in the grand scheme of things. Caching is not holding anyone back, it’s just a technical detail.
Maybe I just read your post wrongly, but my impression is just very different. Even though I mostly don't do bleeding-edge work, I still spend about a third of my coding time dealing with performance-related issues. And I do use Postgres, the JVM and generally mostly mature technology. But much is related to the specific domain I'm working in, so I don't think more standardization would have been of any help in my case.
I guess we just do different things.
30+ years of experience here, working at all levels from coding to VP of development to CEO of a successful software company:
I completely disagree.
Writing maintainable, bug free, performant code is HARD. And very few developers know how to do it well. A single GOOD developer will consistently outperform a 5+ team of “average” developers.
> We could have settled on some GUI/UX, communication, database, knowledge handling, etc. standards decades ago and we could have decided we are not going to re-implement all the basics of computing all over again all the time because we need our scrollviews to “snap” just right or we need to reimplement IRC again.
We could (sometimes we even did) and still we'd have multi-million industries not aware of/ignoring those conventions for mere economical reasons.
> mere economical reasons
Why mere?
Can one afford to be uneconomical?
Do constraints not matter?
Personally one of my definitions of good engineering is making "good compromises" between different needs.
>Can one afford to be uneconomical?
Sometimes. You can always afford to be cheap, too: that may lead the team to spend the right amount or less, but someone will later spend a lot more, which in SWE parlance is called "re-implement all the basics of computing all over again all the time". I don't know the name of the game where you have to make sure that bigger expenditure doesn't land on your own team's back. (Substitute whatever you want in place of "team": yourself, your firm, your family or clan.)
I honestly chafe a little bit at the "requirements" language. I have very rarely built software that has firm requirements. A better fit for my mental model (which is the language I find myself using constantly at work) is whether something is "useful" or not, constrained by priorities and trade-offs. "If we spend the time to make it do this useful thing, we won't get to that other useful thing until it is no longer as useful as it would be if we did it earlier" or "that change is not actually useful for xyz reason". This is where the agile / iterative method of designing and building software meshes best with my thinking; instead of enumerating requirements, we're discovering useful tools and functionality and iterating toward the most useful set of tools that we can build within all the rest of our constraints.
I don't think this is like The One Way or anything, but it's the only thing that makes sense to me. When I hear about people sitting around writing down big lists of "requirements", it just sounds like guesswork and making stuff up to me.
But a bit more on topic: Yeah, I'm optimistic that it will still be a primarily human job to figure out what is useful to build, with AI tools playing a very useful supporting role (just like all sorts of other software tools that came before them).
> "If we spend the time to make it do this useful thing, we won't get to that other useful thing until it is no longer as useful as it would be if we did it earlier"
In a discussion regarding process and requirements this type of analysis doesn't always make sense. The team is obligated to follow the process, because someone has determined it's really important. So that's that then.
If you work under process with controls, like in safety critical/security systems, which use "safe" software processes, there may be reasons you are completing this work. Those reasons are often legal, or to reduce blame and liability. They might be anticipating that they will need to supply certain information in the case of a lawsuit, or that they will get in trouble for not having completed all the homework, which includes requirements.
Even taking this out of the picture and looking purely at merits, there are reasons we gather requirements. Without deciding what something does, how will we prove that it accomplishes those goals? How do we know what a product actually does? It can also be a really valuable tool from an ethics standpoint. Having requirements makes it harder to sneak content in; it should be impossible to add content without consideration. This is actually a massive issue, with scandals and fingerpointing over things that arguably shouldn't be possible, because we all agreed on how this would work.
There's also some amount of bias built into time management. People are really bad at judging the "usefulness" of something, especially when it requires them to do something they don't like. Why let the developer of an important system decide what makes sense for them, likely based on their own agenda, instead of demanding they decide how everything works, and then prove it?
So we decided you need to do this important thing, you decided you thought it wasn't very "useful", and didn't do it; now someone's dead. How do you explain this? Those decisions you made based on "usefulness" look pretty absurd when you're not following the ISO process, as expected.
> The hardest part of building software is not coding, it’s requirements
Yes, but... "coding" would be much harder, were we to change two things about current practice:
1. Code must not have security vulnerabilities. (Currently, security vulnerabilities are ordinary and expected, as if security isn't really a consideration -- only passing limited functional tests, and appears to work. Make security a consideration, and I'd say most developers would have to discard their current ways of working.)
2. Code must be able to continue to be maintained and evolved without an explosion of human resources required. (Our intuition for how to keep velocity high sustainably hasn't been helped by the convention of frequent job-hopping before one sees full-lifecycle effects, nor by startups that plan on large "growth" hiring.)
Yeah, this reads like cope. So, all software engineering? Then you must have a pretty narrow view of the problem space. Quite a lot of problems have a pretty well defined set of requirements that can fit on one page. It's the technical excellence that delivers on those requirements.
> Then you must have a pretty narrow view of the problem space. Quite a lot of problems have a pretty well defined set of requirements that can fit on one page.
If this is true, then these problems are already solved. And, given the nature of software, they do not need to be solved again. The reality is that every single piece of software ends up being unique at the edges. It's what makes software great, but also so challenging. It's also these edges that take your 1 page of requirements and turn them into an ambiguous, conflicting set of 50 pages.
> If this is true, then these problems are already solved
There's a lot of induced demand in software development. We have spent the last 70 years on incredible productivity improvements through better tooling, language design, frameworks, etc. But all that has done is create more demand, both for functionality and usability. Now that animations are cheap to add to UIs, people want to have them. Automating tasks that can be done quite cheaply by humans is becoming cost effective in ever more fields. Software of complexity that would be unimaginable in the 50s is used for purposes as mundane as many-to-many short messages.
Lots of software is unique, but at the same time there's also lots of software or software features that just recently became cheap enough to implement to be worth doing. And those can sometimes be quite simple in their requirements.
Can you give an example of a non-trivial problem with a well defined set of requirements that you can fit on one page?
To support our next-gen machine learning system, we need a 10 exabyte storage array. It should host a system accessible over TCP/IP or Infiniband that can stream random-access 1MB blocks of data to 65,536 different computers, at continuous loads of 64 GB/second each computer, using a protocol of your choice or design. Correct for all data corruption and do not lose a single bit during the next 1,000 years of operation.
Yes, we can add more specs but these alone should be pretty daunting.
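A quick back-of-envelope check shows why even that one paragraph is daunting (using decimal GB/EB as the spec states them):

```python
# Aggregate numbers implied by the one-page requirement above.
clients = 65_536
per_client = 64 * 10**9             # 64 GB/s per computer, in bytes/s
aggregate = clients * per_client    # total sustained read bandwidth

# Time to stream the entire 10 EB array once at that aggregate rate:
capacity = 10 * 10**18              # 10 exabytes, in bytes
seconds_to_read_all = capacity / aggregate

print(f"{aggregate / 1e15:.1f} PB/s aggregate")              # 4.2 PB/s
print(f"{seconds_to_read_all / 60:.0f} minutes to stream the full array")  # 40 minutes
```

Roughly 4 petabytes per second of sustained random-access bandwidth, and the whole array can be drained in under an hour, so "don't lose a bit for 1,000 years" has to survive continuous full-intensity wear. One page of requirements, years of work.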
What does this have to do with building software? This is something submitted on a capex form for hardware.
If you think you’re gonna organize that many bits and not lose them for centuries at a time, you’re seriously underestimating the need for data replication and error correction algorithms, first of all.
Again, that's a hardware storage solution's problem. Adjust the parity level in the filesystem. Unless you're writing the requirements for building the software for a storage platform.
… yes, that is in fact the point
There are money, operations, and delivery-time requirements, just off the top of my head, not expressed in that requirement, that vastly change the solution.
And the client is all over you/the company.
As a usual bore, I'd like to point out that trivial/non-trivial is highly subjective.
Is making an app to connect in Bluetooth to an infrared camera trivial?
Is making a 55 GiB/s FizzBuzz non-trivial?
The specs for speedy FizzBuzz are relatively clear, even if the implementation is challenging.
For the camera, there are tons of unspecified behaviors and baked-in assumptions. Is the user configuring the camera, or are we discovering any/all cameras of a certain type? Authentication? What should happen when the connection fails? What if the bandwidth isn’t sufficient (e.g., due to distance or congestion) to deliver the full take from the camera in realtime? How will camera malfunctions be detected, handled, and reported?
A hobby project or prototype can ignore most of this, and insist that you turn it off-and-on if anything looks odd; a fancy turnkey security system should carefully consider all of this and more!
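One of those unspecified behaviors made concrete: what happens when the connection fails? A hobby-project answer might be a bounded retry with backoff; a turnkey security system needs far more. `connect_camera` below is a hypothetical stand-in, not a real camera API:

```python
# Sketch of a "hobby project" answer to connection failure: bounded
# retries with exponential backoff. A production system would also need
# alerting, health reporting, and a policy for permanent failure.
import time

def connect_with_retry(connect_camera, attempts=5, base_delay=0.5):
    """Try to connect; back off exponentially between failed attempts."""
    for attempt in range(attempts):
        try:
            return connect_camera()
        except ConnectionError:
            time.sleep(base_delay * 2 ** attempt)
    raise ConnectionError(f"camera unreachable after {attempts} attempts")
```

Even this tiny sketch forces decisions the one-line task "connect to the camera" never mentions: how many attempts, how long to wait, and what to do when we give up.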
No plan survives contact with reality.
Ambiguous requirements are more symptomatic of a problem space that is poorly defined e.g. useless SaaS
If you're building to solve real problems(tm) you don't need a PM to pad out tickets with filler.
It is due to a problem space that’s poorly defined, but that’s not always indicative of ticket padding or a useless product.
I’ve dealt many times with real people, who present me with a real problem, but no single concrete solution.
If the PM doesn’t take the proper time to understand the problem (and try to find the real, underlying problem that often exists), or rushes into a solution without proper evaluation, or just generally wants to quickly churn user asks into tickets, then the result is the same.
I generally agree that, in my experience, a good senior software engineer is well suited to teasing out “the real problem” and getting to a properly detailed solution even without a PM.
However, that doesn’t mean I don’t think there’s potential value in a PM that can set a broader vision of the product and help prioritize tasks. But the good ones do so by working with and deferring often to his or her software dev team, and NOT by working “top down”.
Can you share one problems defined set of requirements that fit on a single page?
Cool, so can the author build a browser? All the requirements are in the W3C specs, or just look at Firefox/Chrome, so it should be no problem. Oh, and build it for <100k, that's another requirement. In 6 months, that's another requirement too. Since I did the hardest part, I expect a 90/10 split. And no one else better copy my requirements or I'll sue.
I'm sure the author can build it fast and cheap. Just don't expect it to be good. Now, what have you proven?
Whenever I read/hear about project/software requirements that need to be followed exactly, I think about the funny “exact instructions” video :-)
right! very good point, then it's all about context.
Maybe it would be easier to put developers in the business job for a few weeks as interns, not writing a line of code until they know how the business works today; then they could understand perfectly what needs to be built. :)
Edit: Which in the startup world would equate to either being a domain expert on what you are building or talking a lot to users to gain that context, which is the YC advice if I'm not mistaken. Makes sense.
Hmm, why is it never the other way around? Let's have managers write code for a week? Let's have the UX/UI people write code for a week? That sounds ridiculous, right? Even though I feel this would be the best-case scenario, not the one you proposed.
Reminds me of a software architect confidently providing the wrong spec and later accusing our team of poor performance, in an unnecessarily impassioned way, in a meeting. It turned out the cause was exactly what he had specifically asked to be set, wasting a week of the team's effort. BTW, if you have career advice on how to deal with a situation like that, I'd like to hear it.
Any software project where instructions must be followed to the letter, instead of having tech leaders communicate the _intent_ of an activity or component (with the details merely seen as suggestions), tends to either fail outright or be extremely wasteful.
If the dev team is getting such instructions from someone in a different time zone, and communicates with them only once or twice per week, it gets even worse.
Architects are usually not stakeholders or your direct line of management. Chide them and explain that someone who can't code shouldn't be calling the shots.
I’ve repeatedly noticed that even the most skilled programmers can yield ‘mediocre results’ when there’s an absence of a requirements engineer or product manager. These roles are crucial for translating managerial requests into very clear requirements and/or user stories that define the desired functionality.
Edit: Restructured the comment.
—-
I penned the following thoughts yesterday in a discussion on using AI as a programmer. Despite subsequently deleting my comment, I believe it’s pertinent to repost here due to its relevance to the topic at hand:
My success with AI integration as a programmer heavily depends on the precision and clarity of my prompts. These prompts often mirror detailed requirement documents, specifying what I need the AI to generate.
However, crafting such specific prompts can pose a substantial challenge. For individuals who find it difficult to express their needs succinctly and clearly, the benefits of generative AI might be limited. It’s a tool that necessitates a particular skill set and comprehension to leverage effectively.
I've been in requirements meetings similar to this video: https://youtu.be/BKorP55Aqvg
You beat me to this. It should be watched daily by software organizations around the world.
Requirements are always the problem: capturing and realizing user-desired system behavior in a form programmers can build from. Unfortunately, big design up front tries to nail these down at the beginning, a Sisyphean task because almost no user knows exactly what they need. Agile development should instead not try to capture every requirement, but get a few big things right to be usable and organically grow incrementally with continuous feedback from users throughout the development process. (It's an anti-pattern to stay incommunicado from users for too long or only involve them at the end for UAT.)
Formal requirements in the form of rigid specifications and documentation in waterfall development is useful in high risk, well-specifiable projects such as a nuclear reactor SCADA control system.
> It's an anti-pattern to stay incommunicado from users for too long or only involve them at the end for UAT.
In my experience it is key to involve users at a very early stage, as even just one level up key processes are lost.
Sure, managers know the happy path, but often the weird exceptions that need to be handled aren't documented and get forgotten by higher-ups.
There's a lot you can still do before involving the pricey engineers. Having people sit down and consider requirements, talk to users, and think about what the app should essentially do, while also incorporating a UX designer as needed to create mocks before building an application, is still way more cost-effective than just telling a developer to "build me an app that ingests all the data at our company"... which is a lovely experience I've had. (I'm exaggerating, but I had a one-page doc with a useless description of data ingestion and no description of the users.)
Heck I've even built hobby projects where I considered many of the details of requirements beforehand and saved myself a lot of time in developing features I wasn't sure about.
> I don’t know what rate of accidents and fatalities will be acceptable by governments, but you have to think it needs to be at least as good as human beings.
To match or beat the safety of drivers aged 16-17, self-driving cars need to beat 1,000 crashes per 100,000,000 miles [1]. That is 99.999% of miles driven without crashing.
I'd like to call that "five nines" but crashes are sort of instantaneous. Unlike a website where you measure recovery time. If I change the unit to meters, I get a different number.
[1] https://aaafoundation.org/rates-motor-vehicle-crashes-injuri...
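The arithmetic behind that figure, and the unit-dependence mentioned above, is easy to sketch (the crash count is the AAA figure cited; the rest is just arithmetic):

```python
# ~1,000 crashes per 100 million miles for drivers aged 16-17 (AAA figure).
crashes = 1_000
miles = 100_000_000

# Fraction of miles driven without a crash: the "five nines" framing.
crash_free_fraction = (miles - crashes) / miles
print(f"{crash_free_fraction:.5%} of miles driven crash-free")

# Same data in meters: the crash count is fixed but the exposure unit
# shrinks, so the number of nines changes -- which is why "nines" is a
# shaky metric for instantaneous events like crashes.
meters = miles * 1609.34
crash_free_meters = (meters - crashes) / meters
print(f"{crash_free_meters:.9f} of meters driven crash-free")
```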
This matches the top five reasons software projects fail, which are, in order:
1. Vague or incomplete requirements (55% !)
2. Inadequate project management
3. Uncontrolled complexity
4. Over dependence on testing
5. Unreadable code
People usually focus on addressing #3 and #5 but proper requirements are so much more important.
Do you have a source for these numbers?
Yes, it's from a 2003 report by Gary E. Mogyorodi titled "What is Requirements-Based Testing?", and a 2012 paper by Jones called “Software Quality Metrics: Three Harmful Metrics and Two Helpful Metrics”.
> Is the idea behind using AI to create software to just let those same stakeholders talk directly to a computer to create a SMS based survey? Is AI going to ask probing questions about how to handle all the possible issues of collecting survey data via SMS? Is it going to account for all the things that we as human beings might do incorrectly along the way and how to handle those missteps?
Have you tried? I mean I had a very quick chat and it brought up issues around SMS quickly. I gave it significantly less information than a client would have done, and I have done no iteration on this it's a first attempt at this.
https://chat.openai.com/share/9b88e1e4-aec1-48d3-9540-553ea3...
> In order to produce a functional piece of software from AI, you need to know what you want and be able to clearly and precisely define it.
Why? Why can't AI be used in an agile process? You can definitely ask for changes to existing code and have a LLM spit out diffs.
> It’s everything before that. Artificial intelligence can do some extraordinary things, but it can’t read your mind or tell you what you should want.
Humans can't read your mind, and LLMs can tell you what you might want.
I think there's a broader point though:
> Once you get the hang of the syntax, logic, and techniques, it’s a pretty straightforward process—most of the time.
No. It is for you and me. It is not for many people. There is a level of technical precision that requires frankly slightly odd people to do well enough, and even halfway decent engineers need to make this a regular thing they do not just a few times a year when they need code written.
There are lots of people who could do the other parts but not the actual coding.
I think a lot of positions are reasonably safe, because the demand for software outstrips the supply of programmers so dramatically that making software easier or cheaper will, much of the time, just expand the amount of software produced. But remember that your job can disappear even if your profession does not: programming can be a job, but you may struggle to get a programming job if demand is lower than supply.
>> In order to produce a functional piece of software from AI, you need to know what you want and be able to clearly and precisely define it.
> Why? Why can't AI be used in an agile process? You can definitely ask for changes to existing code and have a LLM spit out diffs.
I think you're just describing an iterative process by which someone might eventually precisely define what they want.
> I think you're just describing an iterative process by which someone might eventually precisely define what they want.
Not necessarily, you'd have a series of change requests and resulting code - but no requirement that all features delivered were specified precisely.
How does this differ from an agile process?
So, the 3 hard problems in computer science are now:
1. Cache invalidation
2. Naming things
3. Requirements
4. Off-by-one errors
Just start numbering your list with 0 instead of 1 and you'll have solved the last one.
Only if the algorithm is somewhat trivial.
Counterexample: Solving the traveling salesman problem to optimality.
We know exactly the requirements, yet we have been struggling for decades to find an algorithm that satisfies the requirements. We have not, and we probably never will.
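To make the counterexample concrete: the full requirement fits in one sentence (visit every city exactly once, return to the start, minimize total distance), yet the only approach guaranteed to satisfy it that fits in a comment is brute force. A minimal sketch, with made-up city coordinates:

```python
from itertools import permutations
from math import dist

# Made-up coordinates; the requirement is one sentence, the efficient
# algorithm is an open problem. Brute force below is O(n!).
cities = [(0, 0), (1, 5), (4, 1), (6, 4), (3, 3)]

def tour_length(order):
    # Total length of the closed tour visiting cities in the given order.
    return sum(dist(cities[a], cities[b])
               for a, b in zip(order, order[1:] + order[:1]))

# Check every possible order -- guaranteed optimal, hopelessly slow
# beyond a dozen or so cities.
best = min(permutations(range(len(cities))), key=tour_length)
print(best, round(tour_length(best), 2))
```

Five cities means 120 permutations; twenty cities means ~2.4 * 10^18, which is the whole point.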
Not my experience at all. My experience is usually more like “Make X faster” or “Integrate Foobar with Gloofuz” or “Make this Desktop application run in a Browser”. Easy to require, years of hard work to actually do.
A non-coding example would be “Safely fly a man to the moon and back again”. Easy to require, hard to do.
I have never in my 30+ years career worked with a Business Analyst or non-coding Architect who provided any positive value to any of the projects I have worked on. That of course doesn’t mean that they don’t exist. I just haven’t experienced it personally.
It's not just software; it's engineering design in general. The hardest part of engineering design is knowing what needs to be built.
This is akin to knowing the right problem to solve, but engineers are trained poorly for that. For four years we're given problems to solve, so we get very good at that, but that's very different from knowing what problems need to be solved for the project, which is essentially what writing requirements is.
The hardest thing is to consistently do everything right. Building and deploying software correctly isn't easy, or anyone could do it with minimal training.
Depends a lot on what sort of software you're building.
A lot of code has complex requirements and simple implementations, but the opposite definitely happens too. Rarely in risk-averse business environments, but in R&D-adjacent work it's common.
You can have straightforward requirements (e.g. "return true if the provided HTML code is a blog post"). The difficult part is discovering a means of satisfying the requirement.
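A toy sketch makes the gap visible: the requirement below is one line, while any first attempt at an implementation is obviously miles short of actually satisfying it (the heuristics and threshold here are invented for illustration, not a real classifier):

```python
import re

# Requirement: "return true if the provided HTML code is a blog post."
# One sentence to state; a research problem to satisfy. This heuristic
# only shows how far a naive attempt falls short of the spec.
def looks_like_blog_post(html: str) -> bool:
    signals = [
        bool(re.search(r"<article\b", html, re.I)),            # semantic tag
        bool(re.search(r'class="[^"]*post[^"]*"', html, re.I)),  # "post" class
        bool(re.search(r"<time\b", html, re.I)),               # publish date
    ]
    return sum(signals) >= 2  # arbitrary threshold

print(looks_like_blog_post('<article class="post"><time>2023</time></article>'))
```

Every counterexample a user finds (a forum page with an `<article>` tag, a blog that uses none of these markers) reveals that the hard part was never writing down the requirement.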
> In order to produce a functional piece of software from AI, you need to know what you want and be able to clearly and precisely define it.
Requirements are coding, for the human computer. A really flaky computer.
Good news: prompts are just another form of requirements. Since we're educating so many people to be "prompt engineers" the requirements might actually improve as a result!
The startup and business worlds are far apart. One is all about execution of a vision, at first; the other is all about inside politics, with people's personal goals in the middle. And as a startup grows, it will become engulfed in politics.
I believe AI will be a boon for solo founders at an early stage, but beyond that, requirements, change management and politics are among the keys to success, and AI won't help there.
I actually think there might be an application for the current level of LLM to assist with requirements, changes, reprioritization, etc. Imagine that all requirements and requests for change or reprioritization are routed through an LLM. Suddenly it's clear who requested what, and more importantly, it can notify stakeholders who might care about it based on previous input.
I am a fan of the idea of late architecture.
If the architecture supports your intuition of the problem then coding can be straightforward.
I am unsatisfied by many representations of problems in OOP languages, functional languages and LISP.
The requirements of "must be endlessly available, scalable, multithreaded, multi availability zone, secure, robust, extendable, cheap, hirable for" are quite expensive
... and defining precise specifications.
I look at requirements as the human-level needs and wants from a system. If we have an intersection of two busy roads shared by cars and pedestrians there's an immediate requirement that cars don't run down pedestrians that are trying to cross. We also want to let some cars get through and some pedestrians get through. And everyone should get a chance to get through the intersection.
The specification of such a system to meet those requirements is a bit different. We would need to define systems for indicating who can cross, when, the timing of various events in the system, etc. It's much more about the technology involved and how it behaves.
Now when we get to software, we have tools to write precise specifications, but they are so often ignored it seems pointless to advocate for them. Developers think they can design a system involving multiple communicating processes and guarantee that no bad state can ever be entered and that good things eventually happen... without specifying any of those properties. Usually they write a hand-waving informal document in a wiki somewhere that everyone forgets about, add some unit tests (if you're lucky) and some integration tests, and think reviewing it closely in code review will catch the rest of the "bugs."
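For the intersection example, the kind of property a precise specification pins down can be shown with a toy state machine and an explicitly checked invariant (the states and transitions here are invented for illustration; a real spec would use a tool like TLA+):

```python
# Toy model of the intersection: the safety requirement "cars never have
# a green light while pedestrians have a walk signal" becomes a checkable
# property instead of a sentence in a forgotten wiki page.
def is_safe(car, ped):
    # The safety invariant.
    return not (car == "green" and ped == "walk")

# A deterministic signal cycle (invented for illustration).
TRANSITIONS = {
    ("red", "walk"): ("red", "dont_walk"),
    ("red", "dont_walk"): ("green", "dont_walk"),
    ("green", "dont_walk"): ("yellow", "dont_walk"),
    ("yellow", "dont_walk"): ("red", "walk"),
}

# Exhaustively visit every reachable state and check the invariant --
# a miniature version of what a model checker does for a real spec.
state = ("red", "walk")
seen = set()
while state not in seen:
    assert is_safe(*state), f"unsafe state reached: {state}"
    seen.add(state)
    state = TRANSITIONS[state]
print(f"all {len(seen)} reachable states satisfy the safety invariant")
```

The informal wiki document describes the cycle; only the specification lets you *check* that no reachable state violates the requirement.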
It's amazing that this practice has been sufficient in practice for such a large body of software that it almost seems like evidence that formal specifications are not required to write software... and may even be a waste of time!
But I think that what we're talking about here is resiliency: we add more hardware when the software doesn't scale well in performance, we add operational tooling when it isn't as reliable as we'd like: but the costs never go down. We add more in order to do more.
I don't think we can write efficient software without being precise with our specifications. As Conal Elliott theorizes, we need correctness in order to scale efficiency: proof that we can satisfy certain properties and guarantees with less code; that code we write composes with other systems, etc.
Writing less code is the hardest part of building software, in my experience.
Update: And think of it this way, if the specification is imprecise then there is no way to determine that a program is incorrect. It could erase all of your data or leak your personal information and that would be perfectly acceptable.
Tech companies nowadays love to hire product managers. How are product managers different from software engineers when it comes to creating requirements, especially on highly technical projects?
And yet you continue to say you can do the product manager's job. You can't.
That's why I always preferred the continental term informatics over CS.
Absolutely agree.
This is also why there will be a new vocation of "Prompt Engineering" (I think it's already begun).
It’s a cute idea, but it seems unlikely that an implementation generated from this workflow will remain stable and maintainable at scale, especially as the complexity of the requirements increases.
this feels like it sums up pretty well why I'm not worried about the rise of AI generated content 'taking my job.'
Disclaimer: I did not read the article. But:
It has nothing to do with SW! Anything you want to develop: SW, HW, Mechanics, procedures… you need to know first what you want to do.
One thing AI could do is generate the "requirements" based on the code. Since really at the end of the day it's the code and not the "requirements" that dictate how an application behaves.
We might find some interesting bugs this way.
234 - Section B4
- The application will crash each leap year that falls on a Monday when running in GMT+10 timezone.
It can take weeks for project managers to realize that a programmer misunderstood the requirements in a deliverable. It will take seconds for managers to realize that AI misunderstood the requirements. Managers will be able to rapidly iterate on requirements in ways they've never been able to in the past. And it just so happens that distilling requirements is actually something that AI is really good at.
How would AI in its current state support gathering and communicating requirements?
In my experience, it requires careful observation, an investigative approach and deep reflection to understand one's needs, context, goals, etc, in order to translate into software requirements.
I'm a big fan and user of GPT for coding and everything, but I don't see it helping a lot with requirements in the foreseeable future. Not with its current capabilities...
You’re not thinking with AI magic. “It’s early” so we can just hand-wave those details away until next funding round.
I'd guess the end user (product owner) of the software would interact with some sort of ChatGPT and describe what they want/need and we'd iterate from there. At the very least it would be a good support tool for initial scoping.
ChatGPT would also be useful for the developers to learn more about the domain (assuming no hallucinations) before the kickoff meeting. Let's say you're building some sort of accounting software for the banking sector. You could have a chat session that explains how business processes work in that area, what regulations apply etc. You'd also pick up the correct words/phrases that are used in that domain etc.