
This strategy tends to fail economically. The tech startups that succeed are usually the ones that let their customers do things they could not otherwise do. Usually doing something that nobody has done before is hard enough without considering the corner cases; if it follows a typical 90/10 rule, then doing 100% of the job will take 10x as long as the competitor who's only doing the easiest 90%, and your market will have been snapped up by them long before you can release a product. Customers would rather use a product that works 90% of the time than do without a product entirely, at least if it delivers functionality they really want but can't get elsewhere (and if it doesn't, your company is dead anyway).

Once you've got a commanding lead in the marketplace, you can go back and hire a bunch of engineers to finish the remaining 10% and make it actually work reliably. That's why solutions like testing & exceptions (in GCed languages) succeed in the market: they can be bolted on retroactively and incrementally make the product more reliable. It's also why solutions like proof-carrying code and ultra-strong (Haskellish) typing fail outside of markets like medical devices & avionics, where the product really needs to work 100% at launch. They force you to think through all cases before the program works at all, when customers would be very happy giving you (or a competitor) money for something 80-90% done.

Someday the software market will be completely mature: we'll know everything that software is good for and exactly what the product should look like, and nobody will dream of founding new software startups. At that point, there'll be an incentive to go back and rewrite everything with 100% solid and secure methodologies, so that our software has the same reliability that airline travel has now. That point is probably several decades in the future, though, and once it happens programming will no longer be the potentially extremely lucrative profession it is now.



I'd agree that it's totally reasonable to 'hack together' a quick prototype with 'duct-tape and cardboard' solutions -- not just for startups, but even in full-scale engineering projects as the first pass, assuming you intend to throw it all away and rewrite once your proof-of-concept does its job.

The problem is that these hacky, unstable, unreliable solutions sometimes never get thrown out, and sometimes even end up more reliable (via the testing and incremental improvement methods you mention) than a complete rewrite would be -- not only because writing reliable software is hard and takes time (beware the sunk-cost fallacy here!), but because sometimes even the bugs become relied upon by other libraries/applications (in which case you have painted yourself into a REALLY bad corner).

It's a balance, of course. You can't always have engineering perfection top-to-bottom (though I would argue that for platform code, it has to be pretty close, depending on how many people depend on your platform); if you shoot too high, you may never get anything done. But if you shoot too low, you may find yourself drowning in bugs, crashes, instability, and general customer unhappiness, no matter how many problem-solver contractors you try to hire to fix your dumpster fire of code.

So again: Yes, it's a balance. But I tend to think our industry needs movement in the "more reliability" direction, not vice versa.


This is simply not my experience with exceptions. Exceptions are frequently thrown and almost never need to be caught, and the result is easy to reason about.

My main use case for exceptions is in server code with transactional semantics. Exceptions are a signal to roll everything back. That means only things that need rolling back need to pay much attention to exceptions, which is usually the top level in jobs, and whatever the transaction idiom is in the common library. There is very little call to handle exceptions in any other case.
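To make that concrete, here's a minimal sketch of the idiom in Python; `db` and its `begin`/`commit`/`rollback` methods are hypothetical stand-ins for whatever the real transaction API provides:

    def run_job(db, work):
        db.begin()
        try:
            work()           # may raise anywhere, for any reason
        except Exception:
            db.rollback()    # the one place that must react to the error
            raise            # re-raise so the top level can log/report it
        else:
            db.commit()

Everything underneath `work()` stays oblivious to exceptions; only the transaction boundary pays attention.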

GC languages make safe rollback from exceptions much easier. C++ in particular with exceptions enabled has horrible composition effects with other features, like copy constructors and assignment operators, because exceptions can start cropping up unavoidably in operations where it's very inconvenient to safely maintain invariants during rollback.

Mutable state is your enemy. If you don't have a transaction abstraction for your state mutation, then your life will be much more interesting. The answer isn't to give up on exceptions, though, because the irreducible complexity isn't due to exceptions; it's due to maintaining invariants after an error state has been detected. That remains the case whether you're using exceptions, error codes, Result or Either monadic types, or whatever.
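A toy sketch of that point (account objects and their `deposit`/`deposit_checked` methods are made up for illustration): the cleanup that restores the invariant is the same whether the failure arrives as an exception or as an error code.

    def transfer_with_exceptions(src, dst, amount):
        src.balance -= amount
        try:
            dst.deposit(amount)            # may raise
        except Exception:
            src.balance += amount          # restore the invariant by hand
            raise

    def transfer_with_error_codes(src, dst, amount):
        src.balance -= amount
        err = dst.deposit_checked(amount)  # returns an error code instead
        if err is not None:
            src.balance += amount          # ...and the cleanup is identical
        return err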


Sounds very specific to server code


Not sure which type of "server" you meant when you said that. Is that in the narrow sense of a database server?

Behaviors similar to the above are not that infrequent, and are expected from many other servers in the wide sense: a media decoder would drop all decoding in progress and try to resync to the next access unit; a communication front-end device would reset parts of itself and start re-acquiring channels (such exception-like reactions are even specified in some comm standards); a network processor would drop the packet and "fast-forward" to the next one. Etc.

You could argue that this still looks like server behavior loosely defined (and I agree), but a) that makes the field of application for exceptions large enough IMO, and especially b) how differently would one implement all that with other mechanisms (like return codes), and for what benefit?
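For illustration, a hypothetical decoder loop in that spirit (Python; `decode_access_unit`, `render`, and the `stream` methods are made up for the sketch):

    class CorruptStream(Exception):
        """Raised by any decoding stage when the bitstream is damaged."""

    def decode_loop(stream):
        while not stream.at_end():
            try:
                frame = decode_access_unit(stream)  # may raise deep inside
                render(frame)
            except CorruptStream:
                stream.seek_next_sync_marker()      # drop work in progress, resync

With return codes, every call on the path from the loop down to the bit reader would have to check and propagate the error by hand.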


> This is simply not my experience with exceptions. Exceptions are frequently thrown and almost never need to be caught, and the result is easy to reason about.

I write GUI apps and that is also how I use exceptions - and it works just fine. If you have an exception, the only rational thing to do most of the time is to let it bubble up to the top of the event loop, show a warning to the end user, or cleanly quit the program while making a backup of the work somewhere else.
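Roughly this shape, with hypothetical `dispatch`/`save_backup`/`show_warning` helpers standing in for the real toolkit:

    def event_loop(events):
        for event in events:
            try:
                dispatch(event)        # handlers may raise anywhere
            except Exception as exc:
                save_backup()          # preserve the user's work first
                show_warning(f"Internal error: {exc}")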


>assuming you intend to throw it all away and rewrite once your proof-of-concept does its job

Once a throwaway works, it becomes production.


And this is part of why I never ever ever "write one to throw away". It's very rare that it actually gets thrown away and redone "properly".

Also I just don't want to waste my time writing something that's for sure going to be discarded. There's a middle ground between "write something held together with duct tape" and "write the most perfect-est software anyone has ever written". My goal is always that the first thing I write should be structured well enough that it can evolve and improve over time as issues are found and fixed.

Sometimes that middle ground is hard to find and I screw up, of course, but I just think writing something to throw away is a waste of time and ignores the realities of how software development actually happens in the real world.


This. Once the spaghetti code glued together to somehow work is deployed and people start using it, it's a production system, and the next sprint will be full of new feature stories; nobody will green-light a complete rewrite or redesign.


And that’s how you get a culture where severe private data breaches and crashy code are the status quo :/

We can do better. Why don’t we? I guess the economic argument explains most of it. I think if more governments started fining SEVERELY for data breaches (with no excuses tolerated), we’d see a lot more people suddenly start caring about code quality :)


>We can do better. Why don’t we? I guess the economic argument explains most of it. I think if more governments started fining SEVERELY for data breaches (with no excuses tolerated), we’d see a lot more people suddenly start caring about code quality :)

Governments care about the "economic argument" even more. They don't want to scare away tech companies.

Besides, today’s governments don’t protect privacy; rather the opposite.


Sure. And if governments started shooting people for drunk driving, we'd have less drunk driving.

Some of us don't like the negatives even if we would enjoy the positives.


We got a green light for a complete rewrite, but only because of licensing issues with the original code. I’m just hoping we don’t fall for the second-system effect.


There are exceptions, of course. I have also been involved in some complete rewrites and greenfield projects to replace existing solutions, but it's very rare. It happens much more often in the government sphere than in the private sector.


Which is the mistake: the throwaway should test one subsystem, or the boundary between two subsystems, and nothing more. To get tautological again: once you have a working system, you have a system.


Yep, that was pretty much my point :). I think it’s a dangerous precedent to follow (bad practice), but I’ve certainly been guilty of it on occasion.


With that "works 90% of the time" idea, please don't ever involve yourself in software for anything serious: air traffic control, self-driving cars, autopilots, nuclear reactor control, insulin pumps, defibrillators, pacemakers, spacecraft attitude control, automated train control, the network stack of a popular OS, a mainstream web browser, a Bitcoin client, the trading software of a major exchange, ICANN's database, certificate signing, ICBM early warning system, cancer irradiation equipment, power steering, anti-lock brakes, oil/gas pipeline pressure control, online tax software...


I actually do have some experience in that area - one of my early internships was in a consultancy that specialized in avionics, health care, finance, and other areas that required ultra-high-assurance software.

It is a very different beast. Their avionics division was a heavy user of Ada, which you will basically never find a webapp written in. There are traceability matrices for everything - for each line of code, you need to be able to show precisely which requirement requires that code, and every requirement it impacts. Instead of testing taking maybe 5% of your time (as with startup prototype code) or 50% of your time (as with production web services), it took maybe 80% of the total schedule.


I’m not working in those fields either, but I don’t understand how people can be comfortable writing life-or-death code in C. Anything that doesn’t involve a heavy dose of formal proof or automatic validation of properties of your code seems irresponsible as well.


C is very safe if you are experienced and don't do anything fancy.

What else would you use apart from Ada? I wouldn't trust any language with a large runtime: Python, Java, and yes, also Haskell.

C is very amenable to proofs in Knuth's proof style. Also, of course, Frama-C exists.

EDIT: If Rust were more mature, it might be an option, but I'd wait at least 5 more years until (if?) it is widely used.


OCaml with the Coq prover?


But those are only a fraction of the software ever written.


A market subject to regulation would just move what’s considered the easiest 90%. Maybe in a small startup, one would write a fancy nonlinear or deep ML model in TensorFlow, while for a regulated/compliance-oriented codebase, you’d stick to linear algebra for the ML model to guarantee convergence.
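As a toy illustration of the contrast (Python/NumPy, made-up data): an ordinary least-squares model has a closed-form solution, so "convergence" is guaranteed by construction, with no iterative training loop that could diverge.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 3))                    # toy design matrix
    y = X @ np.array([1.0, -2.0, 0.5])          # targets from known weights
    w, *_ = np.linalg.lstsq(X, y, rcond=None)   # one deterministic solve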


And this is likely why security folks will always have a gig.


Agree with this except for the last bit about the “mature” software market. Software is just the execution of logical processes. There’s no reason to think we’ll run out of need for new logical processes to be implemented.



