Are we wasting our time with these LLMs?
In the early days of the "world wide web" internet, we made geocities websites, used apache everywhere, had websites with crappy javascript that never worked because everyone turned it off, etc etc. Was that a waste of time? Hell no, because we learned a lot. The folks who got to do that had huge benefits in understanding how a lot of things worked under the hood simply because they played with it, not because they studied it. A lot of people learned to program because of the old MUDs. We're learning how this crap works. Those of us tinkering with this stuff are learning about data science, neural networks, understanding prompts and weights, getting a look under the hood at how generative AI is generating, etc. And some of us (not me lol) are enterprising enough to find ways to make money doing so. I have no doubt some folks will be getting mega-rich who are playing with this now. Not one minute of this is being wasted. Keep having fun. Keep fine-tuning. Keep learning. The knowledge will only help you down the road.
I finally got to demonstrate "fearless refactoring" in action!
And here's the big thing: mypy doesn't actually statically type check Python. You cannot statically type check Python, because even if your entire codebase and all your dependencies make ubiquitous use of type hints, your entire program is still 100% dynamically typed. Type hints and static types are completely orthogonal concepts. What mypy statically typechecks is really its own language: a statically typed language whose type system is incompatible with Python's. That type system therefore has to be insanely permissive, because otherwise it would mark so much correctly typed Python code as invalid as to render it effectively unusable. In addition, the Python and mypy type systems have extremely low expressive power compared to Rust's, so most of what a Rust API encodes about the domain is impossible to encode in a Python API, or at least impossible to encode nearly as well. All of this means that mypy gives only the barest taste of the advantages of the Rust type system.
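A minimal sketch of the "still 100% dynamically typed" point: Python annotations are just metadata, and nothing at runtime enforces them. (The function name here is made up for illustration.)

```python
# Type hints are not checked at runtime: this function claims to take
# and return an int, but passing a string works fine at runtime.

def double(x: int) -> int:
    return x * 2

result = double("ha")  # a static checker like mypy would flag this call...
print(result)          # ...but at runtime it happily prints "haha"
```

This is exactly the gap the comment is describing: a checker has to reason about a language whose runtime never promises to uphold the annotations.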
It's okay to Make Something Nobody Wants
Products seem to be made for users, but I think this might be an illusion; they are more like a medium for self-expression. Different expressions, conceived by various minds, undergo a form of natural selection, with the surviving expression being the one that resonates most with users. I mean, the process unfolds like this: you create something not because “I think they might need this,” but because “I find this so fucking interesting.” Then, when others use your product, they feel the same emotions you did, and they say, “I find this so fucking interesting.” From this perspective, a product is like a container for emotions; the creator puts them in, duplicates the container a thousand times, and users extract them from the product. You can’t be devoid of emotion and expect users to experience emotion after using it.
Mitchell Hashimoto (@mitchellh)
My favorite part about installing Windows is seeing the 47 different progress bar animation styles and font styles before first boot. It really sets the tone for the entire Windows experience.
I never want to return to Python
Python doesn't have static typing. It has type hints, which are... more like guidelines than actual rules. And last I checked, there were still many places in the standard library that didn't have type hints. Things like Python type hints and TypeScript will forever baffle me. They seem to me like putting an airbag on a motorcycle. If you want safer transportation, _just don't use a motorcycle._ (TypeScript at least has the rationale that JavaScript has long been the only option in a browser environment, but WASM is starting to open the door to viable alternatives.)
Ray (@raysan5)
For the last 12 years MANY gamedev educational institutions have focused their courses on specific game engines (mostly Unity), many of those students, now devs, will need to learn other low-level technologies in a hurry, it could be tough. Schools should learn a lesson from this
𝕤𝕠𝕗𝕚 (@sincerelysofi)
feeling of shame when you are unable to adjust to having a top panel on your desktop (as in Mac OS) or no panel at all (like in twm) because you subconsciously view bottom panels as pedestrian / Windows-like
Ask HN: Why did Visual Basic die?
Visual Basic is one of the best arguments for open source and community ownership in the history of computing, IMO. Microsoft's decision to tank it was hugely painful for companies that had made major investments in it -- no company should make that kind of investment in a proprietary platform that can be killed off by a single company and not forked and maintained by others.
2023-09-11T19:34:31.322081Z (@mycoliza)
kinda feels like the main difference between Zig and C is that people who write Zig are actually *choosing* to have a bad time, while a lot of people who write C are kinda forced to
Paul Butler (@paulgb)
When I get frustrated with Rust and work through it, I come out feeling like I learned a new fundamental truth about the universe. When I get frustrated with JS and work through it, I feel like I spent so long at a carnival game that the operator gave me a toy to get rid of me.
Teaching with AI
Those who really desire to understand how things work will be undeterred by the temptation of AI. There are two types of people: those who care to know and really understand, and those who don’t. Should we really force people, past a certain point, to care when it’s clear they don’t and are only doing something because they are forced to? I would argue that people should spend more time on the things they truly care about. That’s the critical difference; when you care about something and get enjoyment and satisfaction out of it, you want to understand all the fine details and have a thirst for knowledge and true insight. When you don’t care, you take the absolute shortest path so you can make time for whatever it is that brings you true satisfaction. That’s perfectly okay with me, because I do it all the time for things I couldn’t care less about. If someone who wants to be a software engineer can’t be bothered to learn and understand the fundamentals, I’d argue that software engineering isn’t the discipline for them. The more you understand, the larger the surface area of the problem you have left to explore.
I am afraid to inform you that you have built a compiler (2022)
To some extent my entire career has been searching for and destroying said half-baked implementations. This saying can be adapted to infra: “half-baked, bug-ridden kubernetes”, “half-baked, bug-ridden proxySQL”, “half-baked, bug-ridden redis”, the list goes on and on. In some ways I feel like my impact has been quite boring, in other ways quite vital. But it’s never made me friends with the kind of developers who look sideways at the idea that other people’s life’s work might be better than their 5-year-old weekend project.
Andreas Kling (@awesomekling)
~2 years ago I became convinced that meticulously checking every heap allocation for failure would lead to robust GUI applications that don't fall apart under resource pressure. Fast-forward to today, we have made the SerenityOS codebase significantly uglier and less pleasant to work on as a direct result of pursuing this goal. At the same time, the sought-after robustness remains a hypothetical mirage. It's time to admit I was wrong about this. Not because it's impossible, but because it's costing us way more than it's giving us. On reflection, I believe the main mistake here was adopting the meticulous checks wholesale across the entire operating system. It should have instead been limited to specific, critical services and libraries. Adopting new patterns is easy. Admitting that you adopted the wrong pattern and reversing course is harder. However, I now believe we need to walk backwards a bit to make GUI programming on SerenityOS fun again. 🤓🐞
How a startup loses its spark
I really like the approach of the Netflix of 10 years ago, when it was still small. They hired mature people so they could get rid of processes. Indeed, they actually tried to de-process everything. As a result, things just happened. “Non-events” were often mentioned and expected at Netflix at that time. Case in point: active-active regions just happened in a few months. A really easy-to-use deployment tool, Asgard, just happened. The VP of CDN at that time said Netflix would build its own CDN and partner with ISPs. Well, it just happened in merely 6 months with 12 people or so. Netflix said it was going to support streaming and move away from its monolithic Tomcat app, and it just happened. And the engineers there? I can't speak for others, but I myself had just one meeting a week -- our team meeting, where we just casually chatted with each other, to the point that the team members stayed close and still regularly meet nowadays. I also learned that the managers and directors had tons of meetings to set the right context for the team so engineers could just go wild and be productive. At the time, I thought it was natural, but it turned out it was a really high bar.
teej dv 🔭 (@teej_dv)
"I use Linux as my operating system," I state proudly to the unkempt, bearded man. He swivels around in his desk chair with a devilish gleam in his eyes, ready to mansplain with extreme precision. "Actually," he says with a grin, "Linux is just the kernel. You use GNU+Linux." I don't miss a beat and reply with a smirk, "I use Alpine, a distro that doesn't include the GNU coreutils, or any other GNU code. It's Linux, but it's not GNU+Linux." The smile quickly drops from the man's face. His body begins convulsing and he foams at the mouth as he drops to the floor with a sickly thud. As he writhes around he screams "I-IT WAS COMPILED WITH GCC! THAT MEANS IT'S STILL GNU!" I interrupt his response with "and work is being done on the kernel to make it more compiler-agnostic. Even if you were correct, you won't be for long."
A world where people pay for software
Software has no marginal cost. You can make something that's used by untold millions of people. Even if many people pirate it, enough people won't that you can recoup your development cost and then some. Software is easier to produce, sell, and distribute than any physical product. You don't have to worry about warehouses filled with unsold inventory. You don't have to worry about quality control and returns. It still blows my mind how much easier it is to run a business that deals with bytes instead of atoms. The OP talks about software having no copy protection, but Amazon sells DVD players and cordless drills for $30. Imagine for a second how hard it is to compete with that. Competing with Google or Microsoft or some startup is a walk in the park in comparison. In software the hard part is making an excellent product. And let's face it, that's where most people fail. It has nothing to do with monetization.
What Is Nix?
Dockerfiles which just pull packages from distribution repositories are not reproducible in the same way that Nix expressions are. Rebuilding the Dockerfile will give you different results if the packages in the distribution repositories change. A Nix expression specifies the entire tree of dependencies, and can be built from scratch anywhere at any time and get the same result.
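As a rough illustration of the idea above (this is not Nix's actual hashing scheme, just a toy model in Python), a build's identity can be derived from a hash over all of its inputs, so identical inputs always produce the same store path, and any change to any input produces a new one:

```python
import hashlib

# Toy model of content-addressed builds: the "store path" is a function
# of every input (sources, dependencies, build flags), so rebuilding
# with the same inputs is guaranteed to land in the same place.

def store_path(name: str, inputs: dict[str, str]) -> str:
    h = hashlib.sha256()
    for key in sorted(inputs):  # sort keys so hashing is order-independent
        h.update(f"{key}={inputs[key]}".encode())
    return f"/nix/store/{h.hexdigest()[:32]}-{name}"

a = store_path("hello-2.12", {"src": "sha256:abc", "gcc": "13.2"})
b = store_path("hello-2.12", {"src": "sha256:abc", "gcc": "13.2"})
c = store_path("hello-2.12", {"src": "sha256:abc", "gcc": "13.3"})

print(a == b)  # True: same inputs, same path -- reproducible
print(a == c)  # False: bump one dependency and you get a new path
```

Contrast this with `apt-get install foo` in a Dockerfile, where "foo" names whatever the repository happens to serve on the day you build.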
Hailey (@[email protected])
I remain unconvinced that docker layers are a good abstraction. Building an image that uses another image as a base? Sure, that makes sense, but keeping all those layers around and exposing them to the user as a domain concept does not. There's an ongoing runtime perf cost to supporting them, and they're just not that effective when it comes to deduplicating image contents. Consider your `bundle install` layer: yeah, you can reuse it between app versions that don't bump any gems, but the moment you bump even one gem, you're paying hundreds of MB if not close to a GB for that bump. I keep thinking about a paper out of the AWS Lambda team where they mention flattening layers into a single ext4 filesystem with a deterministic extent layout. Deduplicating 512 KiB chunks of this image turns out to be a lot more effective for them than layer-based deduplication ever was, plus it enables image lazy loading in a way that layers simply can't achieve.
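The layer-vs-chunk argument can be sketched in a few lines of Python (the data and the 4-byte chunk size are invented for illustration; the paper's scheme uses 512 KiB chunks over a real ext4 image):

```python
import hashlib

CHUNK = 4  # bytes, tiny for demonstration; the real scheme uses 512 KiB

def chunks(data: bytes) -> set[str]:
    """Hash fixed-size chunks so identical chunks can be shared."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}

v1 = b"gem-a-1.0gem-b-1.0gem-c-1.0"
v2 = b"gem-a-1.1gem-b-1.0gem-c-1.0"  # one gem bumped

# Layer-level dedup: the whole layer's hash changed, so nothing is shared.
layer_shared = hashlib.sha256(v1).digest() == hashlib.sha256(v2).digest()
print(layer_shared)  # False: the entire layer is re-downloaded

# Chunk-level dedup: only the chunk containing the change differs.
shared = chunks(v1) & chunks(v2)
print(len(shared), "of", len(chunks(v2)), "chunks reused")  # 6 of 7
```

The contrast is the point: bump one gem and layer-granularity dedup shares nothing, while chunk-granularity dedup re-fetches only the chunks that actually changed.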
Looking for people's experiences moving from front end to backend
I didn’t make the transition (I’ve always been infra backend) but I’ve worked with folks that have. IMO the sooner you make the jump the better. Infra especially is something you can spend 30 years honing your craft on and there’s still more to learn. Most of the technically challenging problems are in infra. However I found product engineers especially frontend really struggle with the switch because strong CS and OS fundamentals actually matter especially at larger companies. You need to understand performance, latency, networking, in many cases lower level concepts too like storage. However you’ll be rewarded with way more mentally stimulating work, less grinding, and better job security (of the junior engineers the VAST majority did not go into infra/backend so the shortage will persist for a fairly long time). So try and make the switch where you are or be proactive about it. Don’t wait
Glyph (@[email protected])
One of my litmus tests for a software product these days is that, if it has search, I should be able to search for a nonsense phrase and get an answer that says “no results”. Every website and app is so damn thirsty for clicks now that it will just show an infinite scroll of useless garbage no matter what I’m looking for, which means I can’t get “no results” and then refine my search quickly, I have to page through the “results” to see if they’re plausibly related to my query. Please stop it.
Sourcegraph is no longer Open Source
Never found a startup on the premise that someone else's product will be inadequate forever. The recent rewrite of github search has probably made sourcegraph irrelevant. As you may recall, the original github search used almost the most horrible algorithm possible: it dropped all punctuation and spacing and just searched for identifiers. No patterns allowed, no quoting allowed. One of the only meta-arguments was filename:xyz. Now that github has improved its basic search functionality, sourcegraph might be doomed. I used sourcegraph at Lyft, which (at the time) had unlimited money to waste on software tools, and installed the open-source version at Databricks, but nobody cared.
:pdx_elk: (@[email protected])
The reason I hate "opsec" as a term is it feels like military larping, and I think it creates a culture and mindset around that. Digital safety is a better term, imo. We care about each other, and we want to keep each other, and ourselves, safe while also living our lives and taking measured risks.
Mitchell Hashimoto (@mitchellh)
I'm convinced everyone who actually likes JS/TS and the whole ecosystem is just suffering from Stockholm syndrome paired with being forced to use it. We're all just stuck with this reality. 😵‍💫 Layers and layers of madness, pure madness.
Why did Nix adopt Flakes?
We use it for devshells, and it’s awesome. New devs install nix and direnv and they instantly have all the right versions of all of our tooling. A first day setup process is now done in minutes instead of a day. Flakes made it possible for us to package up internal and external tools and ensure consistency across a team. I have no experience running it in production, but I imagine if you don’t want to use containers it’d be a pretty good option.
Apollo will close down on June 30th
This makes me indescribably sad. Apart from mourning the loss of a fantastic app by an awesome developer, to me it signals the end of a golden era of small indie client only apps. Since the APIs for the likes of reddit, twitter (RIP tweetbot) and others were available for free or a reasonable fee it spawned a whole cottage industry of developers who made a living selling alternate front ends for these services. These apps invented many of the conventions and designs that eventually percolated to the official clients. Sometimes these innovations even became platform wide conventions (pull to refresh anyone?). The writing was on the wall for a while, but now the door is firmly closed on that era - and we will all be poorer for it.
Diesel 2.1
It's not true that diesel is "incompatible" with async; it just does not expose an async interface. An async database interface is usually not required, for several reasons: * Your service likely does not get the amount of traffic required to care about that (more traffic than most services that use diesel will ever see). * Even if you do get that amount of traffic, your main bottleneck is not the communication with the database itself, but getting a database connection, because there are usually only a few tens of those connections. For that fixed number of connections you can easily use a thread pool with the corresponding number of threads. Additionally, as already mentioned by others: there is `diesel-async` for a complete async connection implementation.
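The "thread pool sized to the connection pool" point can be sketched in Python terms (this is an analogy, not Diesel's API): wrap blocking queries in a bounded thread pool, and async callers get concurrency capped at the number of connections anyway.

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 4  # pretend the database pool holds 4 connections
executor = ThreadPoolExecutor(max_workers=POOL_SIZE)

def blocking_query(n: int) -> int:
    # stand-in for a synchronous query running on one pooled connection
    return n * n

async def query(n: int) -> int:
    # async facade over the blocking call; at most POOL_SIZE run at once
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(executor, blocking_query, n)

async def main() -> list[int]:
    # many concurrent callers, but concurrency is bounded by the pool
    return await asyncio.gather(*(query(i) for i in range(8)))

print(asyncio.run(main()))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Since the connection pool is the real ceiling, a fully async driver buys little over this pattern until you outgrow the pool itself.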
Read Every Single Error | Pulumi Blog
Error budgets and the SRE model are haute couture. Some preach that we should never look at errors at this level of granularity and instead use expensive tools that aggregate, categorize, and collect statistics on errors flowing through your system. But all of this automation can actually make things worse when you reach for it prematurely. Aggregating errors is a great way to gloss over important details early on. Collecting fancy metrics does not matter if your users are not happy. Cutting your teeth with the tools and processes that make sense for your level of scale is the only way to build a high-performance culture. Skipping straight to step 100 does not always help.
The Maddest My Code Made Anyone | Blog
Programmers sometimes have that experience, as do musicians, hardware designers, film directors, novelists, painters, game designers and all other professions that create things regular people interact with casually. Consumers (specifically the sub-genre of critics) often have no real idea of what making a thing means and under which constraints it happens. They often see publishing an imperfect work to the public as an affront to their sophisticated intellect and taste, even (or especially?) if it is free. In German there is the saying "Wer macht, hat recht", which translates to "who makes is right". Complaining is simple; just shut up and do it better. Of course complaining is totally okay if we e.g. talk about social or political conditions, or some mandatory process you have to subject yourself to by law. But even there I hate people who just complain and leave it at that without even trying to change a thing.
Mitchell Hashimoto (@mitchellh)
I'm actively trying to work through my Nix God complex. It's been so long that when I see non-Nix users complain about issues getting software to run, I'm truly confused. It's like someone looking at a river lamenting about having to ford it while I'm riding a bicycle on a bridge
Rome v12.1: a Rust-based linter and formatter for TypeScript, JSX and JSON
I've got mixed feelings about Rome. There's so much room to cover with ridiculously slow tools today. But I'm sick and tired of these people in the industry dropping their toys because they're tired of working on stuff people actually use, instead of just improving what they currently have. Would it have been impossible to nudge Node.js in the direction of where Deno is today? Would it have been impossible to replace Babel with a Go implementation? I also don't want tools that want to be literally everything. Imagine if Daniel Stenberg was like, "You know what, I'm tired of cURL, let me rebuild literally the same thing in another language and give it a new name and an entirely different set of options."
The Legend of Zelda: Tears of the Kingdom Release
Mechanical sympathy. Rather than designing a game on a PC to take arbitrary advantage of modern tech and then trying to cram it down onto a more limited console platform, Nintendo ask, at design time, what the most interesting things they can do are that would work perfectly within the constraints of the platform — and then do that. (And Nintendo engineers can have perfect knowledge of "the constraints of the platform", because 1. they built the platform; 2. it's the only platform they ever code for, never porting to anything else; and 3. for late-in-generation titles, they have been developing for it for years already, while also doing platform-SDK support for every third-party development studio.) Oh, and besides that, because they design each platform initially specifically to work well for the types of games they want to make. (This goes all the way back to the Famicom, which has hardware PPU registers that were clearly implemented specifically to make the launch-title port of Donkey Kong extremely easy to code.)
The JavaScript Ecosystem Is Delightfully Weird
Javascript is this generation's C++. It's a massive language and the only way to stay sane on a project is to agree to use a well demarcated subset of it. Nothing wrong with being C++. The reason JS is so massive and weird is because it's the language that everybody uses, or has to use at some point. Upsides and downsides.
TS to JSDoc Conversion
Lordy, I did not expect an internal refactoring PR to end up #1 on Hacker News. Let me provide some context, since a lot of people make a lot of assumptions whenever this stuff comes up! If you're rabidly anti-TypeScript and think that us doing this vindicates your position, I'm about to disappoint you. If you're rabidly pro-TypeScript and think we're a bunch of luddite numpties, I'm about to disappoint you as well. Firstly: we are not abandoning type safety or anything daft like that — we're just moving type declarations from .ts files to .js files with JSDoc annotations. As a user of Svelte, this won't affect your ability to use TypeScript with Svelte at all — functions exported from Svelte will still have all the same benefits of TypeScript that you're used to (typechecking, intellisense, inline documentation etc). Our commitment to TypeScript is stronger than ever. I _would_ say that this will result in no changes that are observable to users of the framework, but that's not quite true — it will result in smaller packages (no need to ship giant sourcemaps etc), and you'll be able to e.g. debug the framework by cmd-clicking on functions you import from `svelte` and its subpackages (instead of taking you to an unhelpful type declaration, it will take you to the actual source, which you'll be able to edit right inside `node_modules` to see changes happen). I expect this to lower the bar to contributing to the framework quite substantially, since you'll no longer need to a) figure out how to link the repo, b) run our build process in watch mode, and c) understand the mapping between source and dist code in order to see changes. So this will ultimately benefit our users and contributors. But it will also benefit _us_, since we're often testing changes to the source code against sandbox projects, and this workflow is drastically nicer than dealing with build steps.
We also eliminate an entire class of annoying papercuts that will be familiar to anyone who has worked with the uneven landscape of TypeScript tooling. The downside is that writing types in JSDoc isn't quite as nice as writing in TypeScript. It's a relatively small price to pay (though opinions on this do differ among the team - this is a regular source of lively debate). We're doing this for practical reasons, not ideological ones — we've been building SvelteKit (as opposed to Svelte) this way for a long time and it's been miraculous for productivity.
How to recover from microservices
Making a large, resilient, performant system is hard. Trying to design one for a novel problem space on day one is impossible. Heed the timeless advice of John Gall: A complex system that works is invariably found to have evolved from a simple system that works. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work. Simplicity demands that you do not start by inviting the beast of complexity – distributed systems – to the first dance. It's possible you'll one day end up with a complex, distributed system that uses microservices with justification, but that will only happen in good conscience if you started with a simple, monolithic design.
Give It the Craigslist Test
Similar to this: I raised a seed round with a deck that was (deliberately) just black Times New Roman text on a white background, plus a few screenshots. The product was also deliberately simple and rough around the edges. I stole an idea from Joel Spolsky and made beta features in the app have graphics that were literally drawn in crayon, to make it clear they were unfinished and to make it easy to test changes. Investors liked the deck. It made it clear that what mattered was the content, not the presentation.
So this guy is now S3. All of S3
Here's how I think about it: ActivityPub -> AT Protocol, and Mastodon -> Bluesky. Right now, federation is not turned on for the Bluesky instance. There are differences in both, however. I'm not going to speak about my impressions of the Mastodon vs Bluesky teams because frankly, Mastodon never really caught on with me, so they're probably biased. ('they' being my impressions, that is, I just realized that may be ambiguous.) At the protocol level, I haven't implemented ActivityPub in a decade, so I'm a bit behind developments there personally, but the mental model for AT Protocol is best analogized as git, honestly. Users have a PDS, a personal data server, that is identified by a domain, and signed. The location of the PDS does not have to match the domain, enabling you to do what you see here: a user with a domain as their handle, yet all the PDS data is stored on bluesky's servers. You can make a backup of your data at any time and move your PDS somewhere else with ease (again, once federation is actually implemented; the path there is straightforward though). This is analogous to how you have a git repository locally and on GitHub, and you point people at the GitHub, but say you decide you hate GitHub and move to GitLab: you just upload your git repo there, and you're good. Same thing, except since identity is on your own domain, you don't even need to do a redirect, everything Just Works. This analogy is also fruitful for understanding current limitations: "delete a post" is currently kind of like "git revert": that is, it's a logical deletion, not an actual deletion. Enabling true deletion ("git rebase") is currently underway. Private messaging does not yet exist. Anyway, if you want to know more, the high-level aspects of the docs are very good. Like shockingly so. They fall down a bit once you get into the details, but stuff is still changing and the team has 10,000 things to do, so it's understandable.
Mitchell Hashimoto (@mitchellh)
The idea of using verified domains as a username is so obvious in hindsight it's shocking no mainstream app I know of did this before. Proving domain ownership has been used for so many other things of course, just shocked domain-as-identity is effectively nowhere until now…
Searches for VPN Soar in Utah Amidst Pornhub Blockage
I have a favorite Utah story that I think is appropriate here. Many years ago, as a young and green consultant, I was sent to Salt Lake to help with some ASP.NET/C# app for the Utah Department of Liquor. I was told to look for the tallest building in SLC, and the warehouse did not disappoint, it was huge (well, SLC is really flat and squat too). They showed me the warehouse full of really fancy robotic stuff (all made in Utah, and they were correct to be proud of it). We got to work looking over the code of the app, and along the way they learned that I am originally from USSR/Russia. "Oh," the devs say, "do you want to see our Russia module?" I am of course intrigued, and discover that during the organization of the 2002 SLC Winter Olympics (Mitt Romney's baby/rise to prominence), there was a huge diplomatic incident. The rules of the State of UT at the time limited the number of bottles sold to any one person in a given transaction, and the Russian delegation was refusing to come to Utah because they would not be allowed to buy as much liquor (likely vodka) as they wanted to. This got escalated to the highest levels of the State Department, and the intrepid UT legislature found a way! They [very quickly] passed a law that any person with Russian citizenship could buy whatever the heck they wanted in any amount. Now it was up to the poor saps in the UT Dept. of Liquor to implement it. But you couldn't just rely on people showing a passport! No, the software team feverishly coded up the "Russian Module" that implemented passport number validation, making sure that if you did show a red passport with a double-headed eagle, its number was valid. There was serious collaboration on the numbering schemes and maybe even some proto API validation against Russian Federation servers. Yeah, legit module. Used for 2 weeks, and then decommissioned as the law sunset very rapidly. So, where there is a will, there is a way. And a VPN.
“Why I develop on Windows”
> I know a lot of developers who will opt to do all of their scripting in python these days, even putting #!/bin/python3 at the head of a script so that it runs through the shell. ...which is exactly what you're meant to do. This is not an example of how bad Bash is; it shows that you didn't understand what Bash is. It's expected to use various languages to write code on Linux; nobody wants you to do things in a language that wasn't made for the task. Imagine you had to use Python on the shell and, any time you opened a terminal, needed to import glob and do something like print(glob.glob("*")) instead of just opening a terminal and typing "ls" to get your directory listing. Different tools for different jobs. Then there's the point they try to make about bash looking like a foreign language and having weird syntax. Yes, that's the thing: it's a very specific thing called a shell, not just any old programming language that you're meant to use for things that are not shell scripts. If Python feels more natural to you, that's probably what you should be using. Don't feel like you need to use Bash for tasks bigger than a few lines of code for no reason other than that you're on a system that has it.
Horrible Code, Clean Performance
It is absolutely true that some hot-path code needs to be mangled into an ugly mess to meet performance requirements. The problem is that I have encountered people who somehow take this as a blanket justification for writing unreadable code, and who operate on a false dichotomy between readable code and performant code. It is important to keep in mind that: 1) Most code, i.e. at least 80% of the code in a codebase, will never be a performance hotspot and does not need specific optimizations (as long as the code does not do stupidly inefficient things, it's probably good enough). 2) Even in a hotspot codepath, you should not write hard-to-read code beyond what is strictly necessary to achieve the required performance. In both cases, the key point is to benchmark and profile to find the specific places where the ugly hacks need to be introduced, and not to introduce more of them than is strictly necessary to get the job done.
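The "profile first" workflow can be sketched with Python's standard profiler (the functions here are made up for illustration): run the program under `cProfile`, and let the stats tell you which function deserves the ugly optimization, rather than guessing.

```python
import cProfile
import io
import pstats

def slow_part(n):
    # deliberately heavy: this is the actual hotspot
    return sum(i * i for i in range(n))

def fast_enough_part(n):
    return n + 1

def program():
    for _ in range(100):
        fast_enough_part(10)   # called often, but cheap
    slow_part(200_000)         # called once, but dominates runtime

profiler = cProfile.Profile()
profiler.enable()
program()
profiler.disable()

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(10)
print(out.getvalue())  # the cumulative column points at slow_part
```

Only the function the profile actually names is a candidate for ugly hacks; everything else stays readable.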
Marcin Krzyzanowski (@krzyzanowskim)
TIL companies like Facebook and Amazon rely on my OpenSSL distribution for Apple systems. I didn't even have to sign an NDA nor solve leetcode to influence millions of developers for free
AI-enhanced development makes me more ambitious with my projects
In Jiro Dreams of Sushi, the new staff start by cooking rice perfectly and perfecting roasted seaweed before moving on to preparing egg sushi and then graduating to fish. It's not grunt work. It's how new engineers learn the ropes and gain experience doing low-risk work; it's part of the learning process that only feels like grunt work to a senior dev.
Incompetent but Nice
I used to think the same way as you, and then I started a company and had to pay out of pocket for employees, and the sad truth that I almost hate myself for admitting is that if you have to pick between incompetent but nice, and competent but a jerk, you take the jerk. And yes, multiple people will even quit because you picked the jerk over the nice guy, and I still found it's worth it to take the jerk because of how competency scales. A good/competent software engineer can genuinely do the work of many, many mediocre developers and you're almost always better off with a small number of really solid developers over a large number of nice but mediocre ones. Now of course we can always exaggerate things to an extreme and compare a racist, sexist, jerk who swears nonstop, to someone who is mildly incompetent, and there are certain principles and boundaries that are worth upholding with respect to how people treat each other regardless of their productivity, but in actuality that's not really the difficult choice you end up facing. The really difficult choice you end up facing is someone who is nice and gets along with people but is ultimately too dependent on others to do their job versus someone who works independently and does an excellent job but is very blunt and can be an asshole in regards to the expectations they hold others to. Good software developers often expect their peers to also be at a high standard and will speak in very plain, rude, and blunt language if they feel others are not pulling their weight. And finally, I have observed that in the long run, competent people tend to prefer to work with others whose skill they respect and they feel they can learn from because they're really good at their job, compared to working with someone who is pleasant but is always dependent on others. Being nice is a good short term skill to have, but people get used to those who are nice but they never get used to someone who is incompetent.
Hetzner launches three new dedicated servers
I've been using Hetzner servers for ~15 years with multiple clients and employers, and have always been disappointed with other providers compared to what Hetzner delivers. OVH with their frequent network-level outages, the 2021 fire and so on. DigitalOcean with their way too frequent and long-lasting maintenance windows. And AWS/GCP/Azure with their obscene pricing, ridiculous SLAs and occasional hours-long outages. One application platform I managed was migrated from DO to Hetzner with huge cost savings, much better uptime and far higher performance, running on bare metal servers rather than cheapo VMs. If you need more than two vCPUs and a few gigs of RAM, I see absolutely no reason to use overpriced AWS/GCP/Azure VMs.
Is Setting Up a VPS Worth It?
We used to manage 500+ servers with Ansible for almost 10 years. It was a nightmare. With so many servers, the Ansible script would occasionally fail on some of them (weird bugs, network issues, ...). Since the operations weren't always atomic, we couldn't just re-run the script; it required fixing things manually. Thanks to this and to emergency patches/fixes on individual servers, we ended up with slightly different setups across the servers. This made debugging and upgrading a nightmare. Can this bug happen on all the servers, or just this one because it has a different minor version of package 'x'? We switched to NixOS. It had a steep learning curve for us, with lots of doubts about whether this was the right decision. Converting all the servers to NixOS was a huge 2-year task. Having all the servers running the same configuration, committed to GitHub, fully reproducible and tested in CI, on top of automatic updates done with a GitHub Action, was worth all the trouble we had learning NixOS. This entire blog post could be a NixOS config.
Leveraging Rust and the GPU to render user interfaces at 120 FPS
That's not sufficient, though. You want the 0.1% case, where you press undo a couple of times and big edit operations get reversed, to be smooth. You have to hit your frame target when a lot is happening; the individual key press case is easy. It's just like a video game. A consistent 60fps is much better than an average frame rate of 120fps that drops to 15fps when the shooting starts and things get blown up. You spend all the time optimizing for the worst case, where all your caches get invalidated at the same time.
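The average-vs-worst-case argument can be put in numbers. The frame times below are hypothetical, chosen only to illustrate why a spiky "average 120fps" trace is worse than a steady 60fps one:

```python
# A frame has a fixed time budget; one slow frame is a visible hitch
# no matter how good the average looks.
def frame_budget_ms(fps):
    return 1000.0 / fps  # e.g. ~16.7 ms per frame at 60 fps

# Hypothetical traces: mostly-fast with one big-undo spike vs. steady.
spiky = [8.0] * 99 + [66.0]   # averages near 120 fps, one ~15 fps frame
steady = [16.0] * 100         # consistently just inside the 60 fps budget

def worst_frame_ms(frames):
    return max(frames)

def average_fps(frames):
    return 1000.0 * len(frames) / sum(frames)
```

The spiky trace wins on average fps but blows the 60fps budget on its worst frame; the steady trace never does, which is the case users actually notice.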
Nix journey part 0: Learning and reference materials
The “What’s missing” section is on point. There are a lot of tutorials for helping someone learn the basics of Nix and build Baby’s First Package. There are not many tutorials about how to package real software you are actually developing. I think this is because it is (relatively) easy to explain what attrsets are or how to type `nix run some-flake` and press enter, and it is hard to explain what `LD_LIBRARY_PATH` is, or how Python environments are built, or why you should have known to apply this `.patch` file at that step, etc. It is, in the words of the authors of Zero to Nix, “fun and exciting to make a splash” by writing a completely new Nix 101 tutorial. That’s why we have half a dozen Nix 101s, very little Nix 201, and Nix 301 is searching GitHub for code snippets that solve a problem adjacent to yours.
The Lone Developer Problem
Yeah, exactly the same in my experience too. In fact, the biggest software atrocities I ever saw were team-based, with people having different opinions and wanting to modify the architecture every six months. And getting away with it because there was no vision. This is where a good team lead or technical lead, or even Fred Brooks' "Surgical team", or your example of "single developer and contributors": have one person with the vision making the difficult architectural decisions and you'll get some conceptual integrity. What I see a lot is people with little experience who learned things one way and become unable to understand or respect working code and want to change everything purely for personal preference. Maybe this is where the bias against lone developer code comes from.
The Lone Developer Problem
In my experience it's more often the other way around. Most projects I've seen with actually readable code and a consistent overall structure have been written (mostly) by a single coder, of course usually with contributions from others, but not real 'team work'. Of course there are also messy projects by single authors, and readable code bases by teams. But in the latter case: the more the responsibilities are spread, the messier the outcome (IME at least). I think in the end it comes down to the experience of the people involved. And then of course there's personal taste, one person's readable code is a complete mess to another. In any case, the post reads like the author stumbled over one messy project written by a single author and extrapolates from there to all other projects.
CoffeeScript for TypeScript
Way back in the early 2010s I was very "excited" about CoffeeScript and similar projects. They sounded like they should be great for productivity. When I actually tried to write a project in CoffeeScript, the results were the opposite of what I expected. The code was harder to read, harder to modify, harder to understand, harder to reason about. There's something about removing stuff from syntax that makes programming harder. My hypothesis is this: your brain has to spend extra effort to "decompress" the terse syntax in order to understand it, and this makes reading code unnecessarily difficult. So I fundamentally disagree with the underlying premise of these projects, which seems to be based on PG's concept of "terse is power". My experience suggests the opposite: there's power in being explicit. Type declarations are an example of such a feature: they make explicit something about the code that was implicit. Type declarations add more to the parse tree, and require you to type more, but they actually give you more power. The same can be said about being explicit in the language constructs. There of course has to be a balance. If everything is way too explicit (more so than needed), then your brain has to do the opposite of what it does with terse code: spend more effort to remove the extra fluff to get to the essence of what the code is doing. Being terse is good, up to a point. Same with being explicit. Languages that try to bias too strongly towards one extreme or the other tend to miss the mark. Instead of aiming for balance, they start to aim for fulfilling some higher telos.
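The point about explicit type declarations can be sketched with Python type hints; `find_user` and its dict shape are made-up examples, not from any real codebase:

```python
# Two behaviorally identical functions: the annotated one carries more of
# the author's intent in the parse tree, at the cost of a few keystrokes.
from typing import Optional

def find_user_terse(users, uid):
    # Terse: the reader must "decompress" what users and uid are.
    return next((u for u in users if u["id"] == uid), None)

def find_user_explicit(users: list[dict], uid: int) -> Optional[dict]:
    """Same behavior, but the signature states what goes in and out."""
    return next((u for u in users if u["id"] == uid), None)
```

Both run identically, but only the explicit version lets a reader (or a type checker) see that the function can return None without tracing the body.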
The Power of “Yes, If”: Iterating on Our RFC Process – Squarespace / Engineering
Yes we should rewrite it in language Y if everyone on the team is comfortable with the language, it provides nonfunctional benefits, and has potential to drive business value. It’s just about acknowledging the conditions that would make an idea a good one. All ideas are good in a specific context. Instead of assuming everyone’s aware of the current context, state the ideal context for an idea.
Disqualified from a National Web Design Competition for Using GitHub
This is one of those important events in life where you realise that sometimes those who hold seniority over you aren't necessarily as smart as you are. This experience will help you to cultivate a healthy disrespect for authority. We all go through something like this at some point. The best thing to do is to find some sort of constructive way to channel your experience. One path I would suggest is to consider launching your own rival competition, where the judges are volunteers from industry, and the prize is an internship at a company or something like that. This would not only provide your peers with a great opportunity to get quality feedback, but also serve as a really useful experience that would help you in your future career. What have you got to lose? Perhaps you could even get GitHub to sponsor it :)
I love building a startup in Rust. I wouldn't pick it again
If you're thinking about building something in Rust, a good question to ask is, "what would I use if Rust didn't exist?" If your answer is something like Go or Node.js, then Rust is probably not the right choice. If your answer is C or C++ or something similar, then Rust is very likely the right choice. Obv, there are always exceptions here, but this helps you work through things a bit more objectively. Rust can be a fantastic language for many purposes, but it has a very high development cost.
Maybe people do care about performance and reliability • Buttondown
Anyway, my point is that it’s complicated, you can’t just blame it on apathetic devs or apathetic consumers. Dysfunction is omnipresent, but it’s also complex.
Maybe people do care about performance and reliability • Buttondown
More and more I feel like software is dysfunctional because everything is dysfunctional, because complex and interlocking societal factors make excellence a pipe dream. I’ve got this idea that the administrative overhead required to do something scales with the square of the size of the task, and doing something efficiently scales even faster than that. The more you scale, the more of those complex factors come into play, interacting with each other to produce emergent phenomena that makes everything more difficult. I’d say you could only change the factors that lead to slow software by changing society itself, but I’m not sure that any society would have globally fast software.
Browsers are essential and how operating systems are holding them back (2022) [pdf] (2022)
> Nothing any OS vendor or browser vendor has done in the last decade has been a user-focused positive experience. They have become delivery tools for revenue only rather than information access. Succinctly put. I've felt this shift everywhere; it killed the fun and curiosity I felt when I first encountered computers and the web. I can't recommend anything in good faith. When I open a new website or program I dread to think what it is collecting from me... who is looking at it, where it is stored... forever. It just seems so powerless to resist, especially when so much of wider society expects you to use $CHATAPP or even $DATING_APP. I can't imagine a first date where I scold the lady on her use of proprietary software: "Please install this XAMPP-Mastodon-Matrix chat app from the F-Droid store or I won't speak to you again"
Ubuntu 19.10: It’s fast
I tried both Wayland and X11. I feel like I'm going crazy because every time I mention the words Linux and HiDPI I have this same conversation, and it's been happening for years. My takeaway is always that Linux users have ridiculously low standards for what works when it comes to UI. The conversation usually goes something like: "I don't know, Wayland works for me with X setup" "What about the blurriness with fractional scaling?" "Oh I'm used to it/It only happens with some programs <usually all programs using some incredibly ubiquitous UI toolkit>" Or: "What about when you move a window from one screen to another?" "Oh I don't do that/Oh it gets a little blurry/Oh just use X11 and <insert Xrandr hack to mess with the frame buffer>" Or: "What about the tearing?" "I got used to it/What tearing, I'm not gaming?" Or: "What resolution are your screens?" "2k small screen and 4k big screen; I can just run the same scaling on both" I remember one time I had this conversation in person, and we failed at the "move that window to the other monitor" step when it blew up the window to 200% size on the smaller screen. "Why do you expect the window to automatically resize itself and change the font?" "Because the application is unusable when every UI element is twice as big as it should be?" "But I want my application to be unusable [paraphrase], you just think it should resize because that's what OSX does, stop bringing your OSX mentality to it and it's fine" I think that's when I should have stopped ever hoping for anything better and stopped saying Linux and HiDPI in one sentence... but here we are...
Ubuntu 19.10: It’s fast
Well, there is fractional scaling; it just looked like garbage and had tearing. There's also handling a mix of low and high DPI displays... and any solution that includes the command `xrandr` is wrong, either because of clarity issues, or tearing/performance issues, or graphical bugs in the DE, or a mix of all of the above. I don't get it; why can't we all just copy what OSX did? They got HiDPI so right, with such a flexible solution, that I literally forgot it was still a thing until my latest endeavor with Linux.
Will Nix Overtake Docker?
No, it definitely (but unfortunately) will not. Nix does everything docker does better than docker does, except, most crucially, integrate with non-nix tooling. Nix vs Docker is like Rust vs JavaScript: you can point out every reason js is terrible and rust is better, but for the common developer looking to get things done, they’ll often gravitate to the tool that gets them the biggest impact with the least upfront investment, even if that tool ends up causing major predictable headaches in the future.
Ask HN: What Next After Ubuntu?
I’ve run NixOS on my last two machines. I like it more than the alternatives, but it isn’t without flaws. At this time, you must be sold on the idea of declarative configuration and willing to learn at least the basics of how Nix the language works. It’s cool that you can Git pull and build an OS, but management of the project can be very slow. Using a ‘pull requests’ model majorly slows down progress; if you need to revise changes for a new package or package update, you will make the correction and get approval, even as a maintainer, but no one will come back around to merge it. With a patch-based model, maintainers can waste less time by just making those few modifications to the patch, getting updates upstreamed faster without the back-and-forth. That said, it’s still something I’d recommend for someone with the experience and interest. There’s never been a system I was as confident with, running patches and just updating the system myself when stuff wasn’t working. But also, Guix is out there doing similar stuff, and you must admire the free software goals even if they can sometimes be impractical (I just do not like the Scheme-based syntax).
A Linux evening...
This post resonates strongly with me. I love the term "a linux evening." This was precisely my experience when I used Linux full time: mostly it worked great, but then occasionally something wouldn't work (some personal examples: touchpad doesn't work after OS update, wifi card stops working etc.) and then I have to spend a few frustrating hours debugging the issue. All I can think in these moments is "you don't get this time back. Is this really how I want to spend three precious hours of my life, when, if I used a different platform, I could avoid this hassle completely?" I know it's a tradeoff and I sacrifice a lot to live in my current Macintosh rut, but I just don't have the motivation to be my own DIY tech support wiz after a full day on computers for work.
What if you delete the “Program Files” folder in Windows? [video]
I worked on Windows during the Windows 10 timeframe. Although I left before Windows 11 was conceived, it's painfully obvious that it is just a UI reskin on top of 10. This was preordained by certain organizational choices made during my time there; namely, that the "Shell" team responsible for the start menu, desktop, and other UI tidbits[0] was completely divorced from the rest of Windows development, with its own business priorities and so on. This was the team responsible for Windows 8/8.1, so as you can imagine they were somewhat sidelined during Windows 10 development. It appears they have had their revenge, first and foremost with the promised-never-to-happen rebranding (whereby they jettisoned the Windows 10 brand, which was an embarrassment for that team and that team only). That the result is only a reskinned 10 is the natural outcome, because that is the only part of the product they have the authority or ability to change. The Shell team was trying to push this same new UI during my whole time at Msft, with at least three cancelled attempts that I was aware of even from an IC perspective. By the end the embarrassment was contagious. [0] Plus Edge, as part of the same vestigial business unit. This explains the central position of advertising in the reskin: Edge in all of its forms was always meant to drive ad revenue. That is the distinct business priority I mentioned earlier, which sets this organization apart from Windows (NT, win32, etc.) development proper, which was shifted to Azure.
Goodbye, data science
Unfortunately it seemed pretty clear from the start that this is what data science would turn into. Data science effectively rebranded statistics but removed the requirement of deep statistical knowledge, allowing people to get by with a cursory understanding of how to get some Python library to spit out a result. For research and analysis, data scientists must have a strong understanding of the underlying statistical theory and at least a decent ability to write passable code. With regard to engineering ability, certainly people exist with both skill sets, but it's an awfully high bar. It is similar in my field (quant finance): the number of people who understand financial theory, valuation, etc. and have the ability to design and implement robust production systems is small, and you need to pay them. I don't see data science openings paying anywhere near what you would need to pay a "unicorn", so you can't really expect the folks who fill those roles to perform at that level.
Toot!.app ↙︎↙︎↙︎ (@[email protected])
Notice: I've disabled the issue tracker for now. Having a huge, never-ending list of unsolved small bugs that people keep adding to is unfortunately *massively* demotivating to work with, and it's better for me and the development process to get rid of it for now. I have lots of things to work on at the moment anyway, and I could not respond to 99% of the requests. If you do have critical bugs, please message me instead. And if at all possible, please go easy on requesting features.
Thorsten Ball - How can you not be romantic about programming?
I think there is a lot of romanticism in computing because there is a lot of irrationality. We don't like to admit that. We pretend to be "scientists". Irrationality is as much the engine of progress as reason. Both can be directed toward good or evil ends. Ada Lovelace saw one romantic side of computing as the possibility of machines writing poetry, music and song. Today as much fear, horror and loathing as joy surrounds that idea - but that is also romantic in Mary Shelley's sense. Big-R Romantic features are in both; possibility, drama, tragedy, and rejection of reason according to a counter-enlightenment embrace of emotivism. Ours is the age of impossibility - the hopeless inevitability of the status-quo, the lack of vision for alternative systems, amidst a grinding project to render all human affairs predictable, legible, identifiable, and controlled. Today "computer love" (the romance in computing) derives from the struggle to overcome the ignorant, cowering bureaucracy to which lesser men put machines in pursuit of mediocrity and dull power.
Thorsten Ball - How can you not be romantic about programming?
If you haven’t been here long enough and try to guess how much there is and how many generations are layered on top of each other — you won’t even come close. But stay around. After a while, more and more, you’ll find yourself in moments of awe, stunned by the size and fragility of it all; the mountains of work and talent and creativity and foresight and intelligence and luck that went into it. And you’ll reach for the word “magic” because you won’t know how else to describe it and then you lean back and smile, wondering how someone could not.
Thorsten Ball - How can you not be romantic about programming?
This world of programming is held together by code. Millions and millions of lines of code. Nobody knows how much there is. Some of it is more than 30 years old, some less than a week, and chances are you used parts of both yesterday. There are lines of code floating around on our computers that haven’t been executed by a machine in years and probably won’t be for another lifetime. Others are the golden threads of this world, holding it together at the seams with no more than a dozen people knowing about it. Remove one of these and it all comes crashing down.
Thorsten Ball - How can you not be romantic about programming?
Fantastic amounts of code have been written, from beginning to end, by a single person, typing away night after night after night, for years, until one day the code is fed to a machine and, abracadabra, a brightly coloured amusement park appears on screen. Other code has been written, re-written, torn apart and stitched back together across time zones, country borders and decades, not by a single person, but by hundreds or even thousands of different people.
Is Our Definition Of Burnout All Wrong?
One of the things I've spent time helping other engineering managers understand is that burnout doesn't relate only to exhaustion. Instead, as the Maslach Burnout Inventory points out, it tends to be a three-factored issue. The MBI is a tool widely used in research studies to assess burnout, and it measures three scales: 1) *Exhaustion* measures feelings of being overextended and exhausted by one's work. 2) *Cynicism* measures an indifference or a distant attitude towards your work. 3) *Professional Efficacy* measures satisfaction with past and present accomplishments, and it explicitly assesses an individual's expectations of continued effectiveness at work. So you can absolutely be experiencing burnout even if you're not experiencing exhaustion, if the other two scales are tipped hard enough. Among the questions that help measure Cynicism and Professional Efficacy: * I really don't care what happens to some of my colleagues/clients. * I have the impression that some of my colleagues/clients make me responsible for their problems. * I have achieved many rewarding objectives in my work For more details about the MBI, check out
Moxie Marlinspike (@moxie)
One unique thing about software as an engineering discipline is that it offers abstractions which allow ppl to start contributing in the field w/o having to understand the whole field. To be great, though, imo understanding what’s under the abstractions is really important: 1/ These abstractions are the “black boxes” in your work. Maybe you make HTTP requests all the time, or submit queries to a DB, or read and write to files, or make a syscall, or even type useState—but have never interrogated what’s happening under the abstraction when you do. 2/ These abstractions are great for most things, but are still “leaky” at some point — and understanding their underlying complexity is incredibly valuable for being a great software dev. Here are some books I found valuable for learning about these abstractions early on: 3/ 1. TCP/IP Illustrated, Volumes 1, 2, and 3: A lot has changed since this was written (in all volumes, but particularly 2&3), but I think it’s still a valuable resource for understanding the basis of what’s happening every time you make an HTTP request. This really pays off. 4/ 2. Computer Organization and Design: The Hardware Software Interface. The hardware/software interface is the ultimate abstraction layer. You’d be surprised how often knowing how cache lines work will help you. 5/ 3. Transaction Processing: Concepts and Techniques. A lot has also changed since this was written, but it’s still a great exploration of an area that is perhaps the leakiest abstraction of all and where understanding the underlying system is of enormous value. 6/ 4. Understanding the Linux Kernel / The Design and Implementation of the 4.4 BSD Operating System. Great for understanding the complexities and limitations of the filesystem, memory, network interface, etc. Abstractions that affect almost every aspect of your software. 7/ Maybe there are better references now, but studying these early on has helped and continues to help me immensely.
Abstractions are great for getting people contributing in the field quickly, but imo looking through the abstractions is hugely rewarding and will make you super effective.
Sean Hood (@[email protected])
If MySpace taught a generation HTML; is the Mastodon era going to create a generation of sysadmins?
How to build a Semantic Search Engine in Rust | by Sacha Arbonel | Nov, 2022 | Medium
We need more project-oriented tutorials like this if we want to promote Rust. That's one reason why Python and JS are so successful. I like the last part, where you left links to the libraries so readers can explore further. Thanks and good job!
Building a semantic search engine in Rust
I remember when "semantic search" was the Next Big Thing (back when all we had were simple keyword searches). I don't know enough about the internals of Google's search engine to know if it could be called a "semantic search engine", but now it gets close enough to fool me. But I feel like I'm still stuck on keyword searches for a lot of other things, like email (outlook and mutt), grepping IRC logs, searching for products in small online stores, and sometimes even things like searching for text in a long webpage. I'm sure people have thought about these things: what technical challenges exist in improving search in these areas? Is it just a matter of integrating engines like the one that's linked here? Or maybe keyword searches are often Good Enough, so no one is really clamoring for something better.
Dave Temkin (@dtemkin)
We built Netflix streaming from scratch without ever spending a night in the office. Any employer that tells you that you need to do otherwise is toxic and you deserve better.
Being Ridiculed for My Open Source Project (2013)
The other day I wrote a fan letter to a developer who has been maintaining a popular and useful library for several years. In his reply, he said that this was the first fan letter he had ever received. I think we need to show Open Source developers a lot more love and a lot less snark...
In Defense of Linked Lists
>When people asking my opinion for Rust, I loved to share them the Linkedin List implementation link: This LinkedList obsession is a bit bizarre to me, and tends to come from older programmers from a time when coding interviews involved writing linked lists and balancing b-trees. To me, though, it also represents the stubbornness of C programmers who refuse to consider things like growable vectors a solved problem. My reaction to the LinkedList coders is not "well, Rust needs to maintain ownership"; it's "why does your benchmark for how easy a language is involve how easy it is to fuck around with raw pointers?". LinkedLists are a tool, but to C programmers they are an invaluable fundamental building block that shows up early in any C programmer's education because of how simple they are to implement and the wide range of use cases they cover. But they are technically an unsafe data structure, and if you're willing to let some of that stubbornness go and finally accept some guard rails, you have to be able to see that a data structure like a LinkedList will be harder to implement. It has nothing to do with the language; implementing LinkedLists with any sort of guardrails adds a ton of complexity, either up front (e.g. a borrow checker) or behind the scenes (e.g. a garbage collector). Once you accept this fact, it becomes ludicrous to imply that a LinkedList implementation is a good benchmark for the ergonomics of a language like Rust.
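The asymmetry can be illustrated from the garbage-collected side. A minimal singly linked list in Python is trivial precisely because the guardrail (memory safety) is paid for behind the scenes by the GC rather than up front by a borrow checker; this is a sketch for illustration, not a structure you'd normally prefer over a list:

```python
# A singly linked list under a garbage collector: no ownership to track,
# because the runtime keeps every reachable node alive for us.
from typing import Optional

class Node:
    def __init__(self, value, next_node: Optional["Node"] = None):
        self.value = value
        self.next = next_node

def push(head: Optional[Node], value) -> Node:
    # Prepend by making a new head; the old list is shared safely.
    return Node(value, head)

def to_list(head: Optional[Node]) -> list:
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out
```

The same structure in safe Rust forces you to state who owns each node (`Box`, `Rc`, or arena indices), which is the complexity being moved up front rather than created by the language.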
Functional programming should be the future of software
I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all. The reason why multiple paradigms exist is because here in the real world, the competing issues and constraints are never equal, and never the same. A big part of engineering is navigating all of the offerings, examining their trade-offs, and figuring out which ones fit best to the system being built in terms of constraints, requirements, interfaces, maintenance, expansion, manpower, etc. You won't get a very optimal solution by sticking to one paradigm at the expense of others. One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)
Functional programming should be the future of software
Functional programming won't succeed until the tooling problem is fixed. 'Tsoding' said it best: "developers are great at making tooling, but suck at making programming languages. Mathematicians are great at making programming languages, but suck at making tooling." This is why Rust is such a success story in my opinion: it is heavily influenced by FP, but developers are responsible for the tooling. Anecdotally, the tooling is why I gave up on OCaml (given Rust's ML roots, I was seriously interested) and Haskell. I seriously couldn't figure out the idiomatic OCaml workflow/developer inner loop after more than a day of struggling. As for Haskell, I gave up maybe 20 minutes into waiting for deps to come down for a Dhall contribution I wanted to make. Institutionally, it's a hard sell if you need to train the whole team just to compile a project, vs. `make` or `cargo build` or `npm install && npm build`.
Show HN: A tool to help you remember shit you are interested in
This seems really well built. It's fast and responsive. It looks nice. But I just don't understand what I would use it for. It seems like the idea is to build a database of people, movies, Wikipedia articles and such, and then be able to find them via search/links. But I'm not at all sold on why I need this in my life. Is there a way to make the value clearer? Am I just not in the target audience? Who is going to see this and say "TAKE MY MONEY", and why? I'm thinking of products that were instant sign-ups for me... Spotify: for one price, listen to all the music on Earth whenever you want. TAKE MY MONEY! Gmail: fast email with 2 GB of storage. This was such an instant sign-up that they had to make an invite system to slow people getting access. Maybe I'd add Lichess: chess training and games, with modern UX, offered open source as a public good. I mean, if you're at all interested in chess, that's an instant sign-up, right? What I'm trying to say is that presenting a clear value isn't limited to big players like Spotify and Gmail; smaller companies can do it too, if the value presented is really clear. What should someone see that makes them instantly recognize they need this in their life? Because that's what I'm totally missing here.
Jony Ive on Life After Apple
“Language is so powerful,” says Ive, who often begins a new project with conversation or writing, not sketches. “If [I say] I’m going to design a chair, think how dangerous that is. Because you’ve just said ‘chair,’ you’ve said no to a thousand ideas.” The older I get the more I believe this to be the most difficult aspect of making decisions. Saying 'no' to thousands of potentialities seems scary because it's a memento mori of the finiteness of individual lives.
Do you use Nix or equivalent to manage projects and/or systems?
We use nix very conservatively. We only use it for managing local developer environments, i.e. build toolchains and other CLI tools (ansible, terraform, etc.). That has worked out amazingly for us. I'm in general a lot more skeptical about nix for production. You clearly don't get the kind of support you would from, for example, Ubuntu's packages. There's no "LTS" as far as I know for nix, merely the stable NixOS release. That being said, nixpkgs tends to be way ahead of other package managers' versions of software. We've started messing around with using nixpkgs' dockerTools for some web projects. That would be the first time we'd be using nix in our production environment. In general, it's really easy to go overboard with nix and start using it really inappropriately. But if you apply some discipline, it can be an *amazing* tool. It's completely solved our Python problems related to installing ansible. That's invaluable.
Got promoted to Director after boss quit. Any advice?
Learn how to back off and trust others. You're not an IC anymore. Focus on enabling work to get done. You're the cat herder now. Make sure you have good cats, that they get fed enough, and that they're in the right barn or field. Don't try to catch mice, or tell your cats how to catch mice. Focus on overall velocity, removing roadblocks, and setting direction. And try not to get stressed out by the fact that you aren't directly contributing. Lots of people going through this transition have a hard time. In the past, if something wasn't going well, you could take direct action by working harder, learning a new approach, or rethinking the problem; your hard work and thinking are what led you to success. But in this role your work doesn't directly lead to success anymore. When things go bad, or people in this position run into problems, they try to DO something about it by making changes themselves, micromanaging, demanding overtime, etc. They feel they can't control the situation, so they do SOMETHING to feel in control. But it's counterproductive and puts you in a spiral of ever-escalating issues. You need to focus on helping the tree grow, and helping it grow in the right direction. So, yeah. Good luck. :)
Laws barring noncompete clauses spreading
Over the years, so many different jurisdictions around the US and the world have stated their desire to be the "next Silicon Valley" and have poured an immense amount of money and effort into making it so, whether in the form of incentives for businesses, tax breaks, education, job training, or even just straight-up paying smart people to move there. These schemes have generally failed because they refused to emulate the one key piece of California law that is necessary for a startup ecosystem to exist: banning noncompetes. "But I'll spend money to train my employees and they'll just take those skills to go work for a competitor or start their own business!" Yes, that's a feature of the system, not a bug.
Ask HN: What was being a software developer like about 30 years ago?
It was great. Full stop. A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-) Starting in 1986 I worked on bespoke firmware (burned into EPROMs) that ran on bespoke embedded hardware. Some systems were written entirely in assembly language (8085, 6805) and other systems were written mostly in C (68HC11, 68000). Self taught and written entirely by one person (me). In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software. Bugs in production were exceedingly rare. The relative simplicity of the systems was a huge factor, to be sure, but knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done". Schedules were no less stringent than today; there was constant pressure to finish a product that would make or break the company's revenue for the next quarter, or so the company president/CEO repeatedly told me. :-) Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).
Ask HN: What was being a software developer like about 30 years ago?
Fun! Precarious. Very slow. Like a game of Jenga, things made you nervous. Waiting for tapes to rewind, or slowly feeding in a stack of floppies, knowing that one bad sector would ruin the whole enterprise. But that was also excitement. Running a C program that had taken all night to compile was a heart-in-your-mouth moment. Hands on. They say beware a computer scientist with a screwdriver. Yes, we had screwdrivers back then. Or rather, developing software also meant a lot of changing cables and moving heavy boxes. Interpersonal. Contrary to the stereotype of the "isolated geek" rampant at the time, developing software required extraordinary communication habits, seeking other experts, careful reading, formulating concise questions, and patiently awaiting mailing list replies. Caring. Maybe this is what I miss the most. 30 years ago we really, truly believed in what we were doing... making the world a better place.
Incidents caused by unappreciated OSS maintainers or underfunded OSS projects
Our model of society is not compatible with open source. There needs to be a massive shift toward appreciating the work of volunteers, contributors, and benevolent maintainers; until then, these problems will only amplify. And I'm not talking about GitHub Sponsors, since it's opt-in and more of a popularity contest than anything else. I'm talking about the dude who randomly appears to send a PR that fixes something important, the dude who decides overnight to open source his work but is agoraphobic, the dude who helps write documentation, the dude who helps triage issues: countless hidden people who are never rewarded.
Don't Be A Free User (Pinboard Blog)
I love free software and could not have built my site without it. But free web services are not like free software. If your free software project suddenly gets popular, you gain resources: testers, developers and people willing to pitch in. If your free website takes off, you lose resources. Your time is spent firefighting and your money all goes to the nice people at Linode.
Don't Be A Free User (Pinboard Blog)
If every additional user is putting money in the developers' pockets, then you're less likely to see the site disappear overnight. If every new user is costing the developers money, and the site is really taking off, then get ready to read about those synergies.
What “work” looks like
Software development is creative work. Creative insight can come anywhere, any time. Better ideas can make difficult things easy, and make the impossible possible. So the most important thing on a software team (or really any team creating high-technology products or services) is an environment where team members feel safe to be themselves: psychologically safe, where they can try out new things, make mistakes, fail, and not be punished or belittled; where they can share their ideas and have them improved by others, not criticized. It's an environment where team members take care of themselves so they can be creative: sleep enough, exercise enough, be with friends and family enough, play enough. You have to be at your keyboard or lab bench or whatever enough to make things, but if you are there too much, your creativity plummets. This is what I try to get across to my teams.
Why we're leaving the cloud
Of course it's expensive to rent your computers from someone else. But it's never presented in those terms. The cloud is sold as computing on demand, which sounds futuristic and cool, and very much not like something as mundane as "renting computers", even though that's mostly what it is. But this isn't just about cost. It's also about what kind of internet we want to operate in the future. It strikes me as downright tragic that this decentralized wonder of the world is now largely operating on computers owned by a handful of mega-corporations. If one of the primary AWS regions goes down, seemingly half the internet is offline along with it.
Write Better Error Messages
Watched the new Quantum Leap yesterday (it's not great) and there was this really cringeworthy moment when something goes wrong with their awesome supercomputer and the screen flashes a giant "INTERNAL SYNTAX ERROR". Apparently, somebody didn't run their linter before sending people through time. Too bad.
Write Better Error Messages
Probably just me, but I am less concerned with how good my error messages are, and more concerned with trying very very hard to make the errors happen closer to the cause of the problem, rather than further away. "Fail early, fail hard" i.e. if I can make the error message happen near the beginning of a process, I can get away with making it a hard error. Hard errors in the middle of a multi-hour operation tend to annoy people.
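A sketch of that "fail early, fail hard" idea, with hypothetical checks of my own invention: validate every precondition the long job will need before doing any work, so a bad input is a hard error at minute zero instead of a surprise at hour three.

```rust
use std::path::Path;

// Validate everything up front, then run. The specific checks here
// (input files exist, output directory exists) are illustrative.
pub fn run_batch(inputs: &[&str], out_dir: &str) -> Result<(), String> {
    // Validation pass: touch every precondition before the expensive work.
    for p in inputs {
        if !Path::new(p).is_file() {
            return Err(format!("input missing: {}", p));
        }
    }
    if !Path::new(out_dir).is_dir() {
        return Err(format!("output directory missing: {}", out_dir));
    }
    // ...only now start the multi-hour processing...
    Ok(())
}
```

The payoff is exactly what the comment describes: the hard error lands near the cause (a missing file), not in the middle of hour three of processing.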
Product vs. Engineering
> What I have noticed even in top engineering companies is an interesting dichotomy. Product determines the "innovation", engineering determines how to build it. I wonder if that's because, if you let engineers do both, you end up with a mess and accomplish nothing. The top companies have product and engineering working closely together. This lets product people go deep on their product skills and engineers go deep on their development skills, both of which are most effective when exercised in conjunction as part of a strong team. There are great product-minded engineers and great engineering-minded product managers out there, but it's much easier to find people who are simply good at their own domain and know how to work closely with people in other domains to get things done. Some companies try to cargo-cult this by drawing a dividing line: product defines the "what" and engineers define the "how". Product works in isolation, hands things off to engineers, then engineers churn through tickets in isolation. This is not good at all.
Syntax Design
I find the section on "syntactic salt" interesting: > The opposite of syntactic sugar, a feature designed to make it harder to write bad code. Specifically, syntactic salt is a hoop the programmer must jump through just to prove that he knows what’s going on, rather than to express a program action. This is perhaps an uncharitable way to describe it, but the concept does ring a bell. Rust's unsafe {}, C++'s reinterpret_cast<>(), etc - all slightly verbose. More important than jumping through hoops, the verbosity helps when reading code to know that something out of the ordinary is going on.
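A small illustration of that salt in Rust (my example, not the article's): the raw-pointer dereference below is a single expression, but the language makes you fence it off in an `unsafe` block, so anyone scanning the file sees exactly where the compiler's guarantees stop.

```rust
// Returns the first byte of a slice by going through a raw pointer.
// The pointer read is one expression; `unsafe { ... }` is the syntactic
// salt that loudly marks it for reviewers (and for grep).
pub fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        return None;
    }
    let ptr: *const u8 = bytes.as_ptr();
    // Safety: `ptr` points to the first element of a non-empty slice.
    Some(unsafe { *ptr })
}
```

The function's callers need no ceremony at all; the hoop is confined to the one spot where the programmer, not the compiler, is vouching for correctness.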
Protein interface – how to change approach to building software?
Interviewing is outside my skill set, so take this with a grain of salt, as it's just the sort of question I'd like to answer: "We have an application that needs to run inside a vehicle, which means the power will be killed at regular, but unpredictable, intervals. How would you design this to ensure data integrity?" It's weird enough that few people will have solved it before, but it can be solved at every layer between circuit and application, so you can actively brainstorm with the candidate to draw out some of their solutions into more detail. And if they start with, "well, I'd build a react app," you can go straight into the trash can with their resume, because you can have that whole discussion without deciding on so much as a language, much less a framework, so you can see who jumps too hastily to wrong assumptions.
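One common application-layer answer to that power-loss question, sketched here as an assumption rather than as "the" answer the interviewer wants: never update files in place. Write to a temporary file, fsync, then atomically rename over the old one, so a power cut at any point leaves either the complete old contents or the complete new ones, never a torn mix.

```rust
use std::fs;
use std::io::Write;
use std::path::Path;

// Crash-safe file replacement via write-temp + fsync + atomic rename.
// (A fully robust version would also fsync the parent directory so the
// rename itself survives power loss; omitted here for brevity.)
pub fn atomic_write(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp = path.with_extension("tmp"); // illustrative temp-file naming
    let mut file = fs::File::create(&tmp)?;
    file.write_all(data)?;
    file.sync_all()?; // force data and metadata onto stable storage
    fs::rename(&tmp, path)?; // POSIX rename() is atomic within a filesystem
    Ok(())
}
```

This is only one layer of the onion, of course; the same question can be answered with supercapacitors at the circuit level or a write-ahead log at the database level, which is what makes it a good brainstorming prompt.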
Digital Gardening
We run our company with a forest in mind. Client projects are gardens within the forest. We have a greenhouse for seedlings (innovation projects), and we have a fire in the center, where we regularly meet and hang out. We have an outlook point, where we look out to sense what’s on the horizon… and obviously, we don’t want our gardens full of weeds or trash lying around.
Moving from React to Htmx
I think this take does you a disservice: htmx is an extension of HTML and, in general, of the hypermedia model, and it is this model that should be contrasted with JSON data APIs. I think you should learn JavaScript, and I certainly think you should learn HTML(!!!) and CSS. But I also think you should consider how much more can be achieved in a pure hypermedia model with the (relatively small) extensions to HTML that htmx gives you. I have a collection of essays on this topic here: Including an essay on how I feel scripting should be done in a Hypermedia-Driven Application: There is more to all this than simply avoiding JavaScript.
A Real World React – Htmx Port
> as you reach a more “app like” experience with multiple layers of state control on the front end you need to reach for a front end JS framework I think that if you fully embrace HTMX's model, you can go far further than anticipated without a JS framework. Do you really need to be managing state on the client? Is it really faster to communicate via JSON, protobuf, or whatever, rather than returning just small bits of replacement HTML, inserted so seamlessly that it makes for a better UI than many client-side components? Why have HTML elements react to changes in data or state, rather than just insert new HTML elements already updated with the new state? I think you're describing a "let's do React in HTMX" mindset, rather than a "let's go all in on the HTMX model" one. And I might be giving HTMX too much credit, but it has totally changed how I go about building web applications.
Using a Framework will harm the maintenance of your software
From my own experience, writing something without a framework often seems very elegant to you, but the moment you try to onboard other people to your framework-less code, it becomes a nightmare. Turns out most folks don't want to get familiar with, e.g., the intricacies of browser technologies, HTTP request processing, or other complex things that you've reimplemented in your code; they just want to deliver working software using frameworks and conventions they know. You can think of frameworks as conventions: if enough people know them, it makes life so much easier for everyone, even though the convention might not always be the best fit. As an analogy, imagine each municipality invented its own traffic signs from first principles, because it makes maintenance easier for them, and you were tasked with driving through such a city at speed, learning the conventions as you go. An absolute nightmare. I think that's how most programmers feel about code that brings its own framework-less abstractions and technologies. So while I would've been able to write my own frameworks, I've become humble and reasonable enough to just default to something popular and well-known, because it will make life easier for my colleagues or employees.
Using a Framework will harm the maintenance of your software
Rails will harm the maintenance of your software* *Is really the accurate summation of the article. And yes, this is well known. Every article about a company upgrading Rails is "it took us several years and only three people died." And we know better than to use MVC nowadays. No offense to Rubyists, but in the Ruby ecosystem, I have seen a disturbing lack of absorbing information from other programming ecosystems. This article smells like that to me. If you've only used Ruby and Rails, you might not realize some of the dangers and inherent limitations of the design unless you've worked in other ecosystems.
The 4th Year of SerenityOS
You are already powerful enough! Some of our most active developers today didn't even know C++ when they started. If you're interested, look for something small that annoys you, and then see if you can't figure out enough of the code to fix it. :^)
The 4th Year of SerenityOS
I follow Andreas on Twitter and he is a big inspiration for me now when I go looking for more challenging problems. I have an addictive personality; so far cigarettes are the only thing that got me, and only for 4 years, but I largely stay away from anything else now, because I've seen how it affected members of my family and how easily someone like myself could go the same way. Because of that, I very much appreciate channeling yourself into something as ambitious as an operating system instead. It's actually the same way I've built any of my best work and how I've gotten even this far in my career. The line I say is: programming keeps me sane.
The 4th Year of SerenityOS
I think it’s that most people are doomers and/or are defeated by doomerism. Most people think it’s impossible to build an OS or a web browser (and are told this when they ask for help building one). In reality, engineering is straightforward: you just need someone to show you how to properly write data structures and algorithms and how to break problems down. Andreas showed these kids this, reinvigorating the web-based hacker culture I grew up in. Anything is possible, and even if a problem ends up being more than you can handle, at least you learned a ton along the way. Nowadays, searching for how to code leads you to a ton of tutorials about gluing modules together. I feel sorry for young people with that thirst who won’t be satisfied, thanks to the commoditization of learning to code.
Take a Break You Idiot
It's funny, isn't it. Recently, in a job with "unlimited" vacation, I was too scared to take a real vacation because of a dubious message from one of my two bosses, who was a bit of a dick. Until Christmas. Then I decided I was going to take some. It had been a rough year: isolating from Covid, not enough money, and living in shitty circumstances. It was the first PTO I'd had in over a decade, as working as a freelancer/consultant often means no PTO, so I decided to savor it, come what may. I took just under 3 weeks, like almost everyone else: there was a shared vacation calendar where I could see everyone else's Christmas break. My reward when I got back? Low performance metrics "in December" were cited when laying me off. It wasn't just about December, but December was the month they decided to measure and "give me a chance". They didn't take the break into account, and the only way their "assessment" could have been satisfied would have been to work through Christmas. I then worked my ass off during my notice month to ship a technically difficult, world-record-beating feature, which, they told me, would surely be impressive and turn things around if I delivered it. I did ship it, but not until the very end of the notice period, which was too late. If they had cared, they would have seen it was on track. If they had kept me on, let me relax, and worked with me rather than sticking to their choice of how to assess work, they would now have a world-beating product. It's their choice of course, and I now don't think they were serious about trying to build a real product. I think it's a bit of a smoke-and-mirrors scheme to keep grant money flowing in. After all, in about 4 years nobody has ever run the product on the real data it is designed for, except me, and I had to pay for servers out of my own pocket to run those tests. Even now, I believe I'm the only person ever to run it, or even be able to run it.
It's been interesting to watch how the product has stayed in the doldrums since I left, and how the folks working on it are now starting to implement things for which I have had working, high-performance functionality for months in my private fork since leaving. (It's open source.) It will be particularly interesting to see if their version is ever able to run on the real world data it was created for, or if their perpetual optimism will be forever misplaced. Ironically, I'd say the company had the nicest, most helpful HR, legal and accounting teams I've ever seen at any company. There was a lot to like, and I'm sad to have had to leave. But I don't miss feeling constantly afraid there. And, as a person who really enjoys creating things, I don't miss watching another team member shipping garbage commits that usually didn't work, and doing fine, while I was the only person on the project providing real functionality but not scoring well on the right metrics, because I spent too much time solving the product's blocker problems. To score well I'd have to ship garbage too. Oh well.
Take a Break You Idiot
There was a time a dozen years ago when I was working alone on my (over-elaborate, uncontrollably sprawling) graphics software product. One time I wrote a multi-thousand-line refactoring of existing code into a new class and felt very happy about getting it done. The next day I discovered that I had already done the exact same work the previous week, just as a slightly differently named class. That wasn’t an isolated memory loss experience in those days. I ordered lunch, sat down, then five minutes later just stood up and left, assuming I’d already eaten. An hour later I realized what happened. Long-term productivity is impossible without proper rest, including regular vacations where you’re truly out of work mode preferably for a week at the minimum.
Helix: A Neovim inspired editor, written in Rust
What's preventing it is their existing codebases, mostly. IIRC, one of the first things Neovim did was throw out literally tens of thousands of lines of legacy code from Vim. Meanwhile, Helix can just add the `lsp-types` crate as a dependency and they're already a quarter of the way to making LSP work. The difference between adding something to a 30-year-old C project and a new Rust project is so massive that it can actually be the difference between "fairly straightforward" and "essentially impossible".
André Staltz - Software below the poverty line
Marx wrote a famous piece called "Fragment on Machines". It actually predates Capital, Volume 1. He talks about the mix of knowledge and labour needed to produce machines that are capable of transforming nature (doing labour). From here, Marx explores a world where labour can be performed entirely (or almost entirely) by machines; for him, machines are capable of undoing capitalism. The so-called post-scarcity society. I think the key part here is that software is actually capable of replacing large portions of labour; think about how much bookkeeping work is saved through Excel. But what happens when capital owners own all the machines? What happens to people? This is a fundamental problem that Marx explores throughout his whole work. I think OSS is actually what machines should look like for Marx: available to everyone at the cost of production and upkeep of the machines, which in our case is the cost of copying and storing the bits that compose the software. But Marx, throughout his work, also deeply explores the relationship between labour and capital, and obviously producing machines requires labour! I know you're probably joking, but I think we can learn a lot about OSS from Marx. I mean, a big part of Stallman's philosophy behind the free software movement is inspired by Marxist ideas.
André Staltz - Software below the poverty line
This is why I think that open source / free software is the greatest trick that late-stage capitalism ever pulled. It exploits the generosity and naivety of devs who have committed to a particular ideology that, while well motivated at the start, has nevertheless turned out to be extremely easily exploited by corporations, who now essentially get an enormous amount of labour for free. What's more, there is intense social pressure from large segments of the dev community to both contribute to open source and to publicly endorse and promote "open source values". Even the author refuses to acknowledge that the problem with open source is open source licensing. Dropping the non-discrimination clause from open source licenses, and demanding payment for labour from large companies, would be enough to solve all these issues. But that is anathema to the ideologues who dominate the conversation.
André Staltz - Software below the poverty line
There are two alternatives possible. One is that we collectively decide to stop shaming software developers for having the audacity to want some level of ownership over the product of their work. We don't shame authors for wanting copyright on their books; we don't shame musicians, artists, designers, or aerospace engineers for asking for some copyright protection for their creative babies. Yet when a software developer does it: fuck that guy! He's trying to take control of what's running on your computer (or the internet server that you're sending requests to ...). Nobody throws a hissy fit when J.K. Rowling has (gasp!) copyright over the Harry Potter books that are sitting on your Kindle. It's your Kindle! Shouldn't you have the right to copy off the words in the books and re-sell it to other people for much less money, undercutting Rowling? How dare she try to get some legal protection that says you can't do that! It's fucking ridiculous when we talk about authors that way, but somehow it's OK to talk about software developers that way. Do you think "open source authors" would make a living from their books? It's already difficult enough for new authors to get any notice; how much worse would it be if prominent authors (who were already rich) came out and founded the "Free Books Foundation" that comes out and says every young author who's trying to sell her books for money is being a greedy asshole and we should fight against them and every author needs to spend a significant portion of their free time contributing to "open books" or they're assholes? Of-fucking-course it's not sustainable. That's because it's always been OK to want copyright on your creative work. I'll be the first to say patents are a huge problem right now and we might be better off without any patent law, but copyright is not the same. 
Yes, the terms are way too long, and the family of Marvin Gaye proves that "copyright trolls" are possible, but the fundamental concept of copyright is actually critical if we want creative people to ever get a paycheck. The other alternative is Universal Basic Income, so that making "below the minimum wage" doesn't mean "fuck you, you get to die sick and homeless in a tent on the side of the highway". Then people could actually just contribute to OSS because they want to.
André Staltz - Software below the poverty line
The struggle of open source sustainability is the millennium-old struggle of humanity to free itself from slavery, colonization, and exploitation. This is not the first time hard-working honest people are giving their all, for unfair compensation. This is therefore not a new problem, and it does not require complicated new solutions. It is simply a version of injustice. To fix it is not a matter of receiving compassion and moral behavior from companies, for companies are fundamentally built to do something else than that. Companies simply follow some basic financial rules of society while trying to optimize for profit and/or domination. Open source infrastructure is a commons, much like our ecological systems. Because our societies did not have rules to prevent the ecological systems from being exploited, companies have engaged in industrialized resource extraction. Over many decades this is depleting the environment, and now we are facing a climate crisis, proven through scientific consensus to be a substantial threat to humanity and all life on the planet. Open source misappropriation is simply a small version of that, with less dramatic consequences.
SQLite: QEMU All over Again?
SQLite only works as a concept because it is not networked. Nobody truly understands the vast and unsolvable problem that is random shit going wrong in the communication of an application over vast distances. SQLite works great because it rejects the dogma that having one piece of software deal with all of that shit is in any way a good idea. Back your dinky microservice with SQLite, run multiple copies, have them talk to each other and fumble about trying to get consensus over the data they contain in a very loose way. That will be much, much less difficult than managing a distributed, decentralized database (I speak from experience). It's good enough for 90% of cases. Remember P2P applications? That was basically the same thing: a single process running on thousands of computers with their own independent storage, shuffling around information about other nodes and advertising searches until two nodes "found each other" and shared their data (aw, love at first byte!). It's not great, but it works, and it's a lot less trouble than a real distributed database.
SQLite: QEMU All over Again?
I feel like a lot of fantastic software is made by a small number of people whose explicit culture is a mix of abnormally strong opinionatedness plus the dedication to execute on that by developing the tools and flow that feel just right. Much like a lot of other "eccentric" artists in other realms, that eccentricity is, at least in part, a bravery of knowing what one wants and making that a reality, usually with compromises that others might not be comfortable making (efficiency, time, social interaction from a larger group, etc).
The 'attention economy' corrupts science
And yet, in my career, I've noticed the rewards are increasing for being the person who is willing to focus on one thing for a long time (for several weeks, or months). For instance, I've never been the kind of software developer who could write obviously clever code. But I have written code that was admired and praised, and sometimes seen as the salvation of the company I was working for -- not because I'm especially skilled as a software developer, but because I was willing to think about specific problems, deeply, for longer than anyone else at the company. In 2012/2013, to the extent that I helped re-invent the tech stack at, it was because I was willing to spend weeks thinking about exactly why we'd reached the limits of what we could do with various cache strategies, and then what would come next. I then introduced the idea of "an architecture of small apps", which was the phrase I used because the word "microservices" didn't really become widespread until Martin Fowler wrote his essay about it at the very end of 2013. Likewise, I now work as the principal software architect at, and my main contribution has been my willingness to spend weeks thinking about the flaws in the old database schema, and what we needed to do to streamline our data model and overcome the tech debt that had built up over the 7 years before I was hired. We live in a world where there are large economic rewards for the kinds of people who are willing to think about one thing, deeply, for weeks and weeks or even months and months, until they finally understand a problem better than anyone else. I have to hope some young people eventually escape the attention-sucking technologies that try to sabotage their concentration, and eventually discover the satisfactions of thinking about complex problems, continuously, for months and months.
Ask HN: In what ways is programming more difficult today than it was years ago?
> Spending months to get the basics up and running in their React frontends just to be able to think independently of hand-holding tutorials for the most basic operations. Frontend devs who were present before the advent of the major web frameworks, and worked with the simplicity of a js script + the DOM (or perhaps jQuery as a somewhat transparent wrapper), benefited from seeing the evolution of these frameworks, understanding the motivations behind the problems they solve, and knowing what DOM operations must be going on behind the curtain of these libraries. Approaching it today not from the ground up but from the high level down is, imo, responsible for a lot of junior web devs having a surprising lack of knowledge of basic website features. Some, probably a minority, of student web devs may get conditioned to reach for libraries for every problem they encounter, until the kludge of libraries starts to cause bugs in and of itself, or they reach a problem that no library solves for them. I feel this is a particularly bad outcome for web devs, because the web is uniquely accessible for aspiring developers. You can achieve a ton just piggybacking off the browser, the DOM and its APIs, the developer tools in the browser, etc. But not if you are convinced or otherwise forced to only approach it from the other side -- running before you crawl, or trying to set up a webpack config before you even understand script loading.
Ask HN: In what ways is programming more difficult today than it was years ago?
Programming today is easier in many ways: information is readily available for free (I recall saving up a lot of money, for a kid, to buy specific programming books at the book store after exhausting my library’s offerings). Compilers and tooling are free. Salaries are much higher, and software development is a respected career that isn’t just “IT”. Online programming communities are more abundant and welcoming than the impenetrable IRC cliques of years past. We have a lot that makes programming today more comfortable and accessible than it was in the past. However, everything feels vastly more complicated. My friends and I would put together little toy websites with PHP or Rails in a span of weeks and everyone thought they were awesome. Now I see young people spending months to get the basics up and running in their React front ends just to be able to think independently of hand-holding tutorials for the most basic operations. Even business software felt simpler. The scope was smaller and you didn’t have to set up complicated cloud services architectures to accomplish everything. I won’t say the old ways were better, because the modern tools do have their place. However, it’s easy to look back with rose-tinted glasses on the vastly simpler business requirements and lower expectations that allowed us to get away with really simple things. I enjoy working with teams on complex projects using modern tools and frameworks, but I admit I do have a lot of nostalgia for the days past when a single programmer could understand and handle entire systems by themselves, because the scope and requirements were just so much simpler.
DALL·E Now Available Without Waitlist
It's really amazing how DALL-E missed the boat. When it was launched, it was a truly amazing service that had no equal. In the months since then, both Midjourney and Stable Diffusion emerged and got to the point where they produce images of equal or better quality than DALL-E. And you didn't have to sit on a long waitlist in order to gain access! They effectively gave these tools free exposure by not allowing people to use DALL-E. Furthermore, the pricing model is much worse for DALL-E than any of its competitors. DALL-E makes you think about how much money you're losing continuously - a truly awful choice for a creative tool! Imagine if you had to pay Photoshop a cent every time you made a brushstroke. Midjourney has a much better scheme (and unlimited at only $30/month!), and, of course, Stable Diffusion is free. This is a step in the right direction, but I feel that it is too little, too late. Just compare the rate of development. Midjourney has cranked out a number of different models, including an extremely exciting new model ("--testp"), new upscaling features, improved facial features, and a bunch more. They're also super responsive to their community. In the meantime, OpenAI did... what? Outpainting? (And for months, DALL-E had an issue where clicking on any image on the homepage would instantly consume a token. How could it take so long to fix such a serious error?) You have this incredible tool everyone is so excited to use that they're producing hundred-page documents on how to get better results out of it, and somehow none of that actually makes it into the product?
fasterthanlime 🌌 (@fasterthanlime)
rustaceans really will implement TryFrom<(Lol, Lmao, GoodLuck)> instead of adding a single associated func, smh
Get in Zoomer, We're Saving React
What's really frustrating about all this is how passive and helpless the current generation of web developers seem to be in all this. It's as if they've all been lulled into complacency by convenience. They seem afraid to carve out their own ambitious paths, and lack serious gusto for engineering. If there isn't a "friendly" bot spewing encouraging messages with plenty of 👏 emoji at every turn, they won't engage.
Get in Zoomer, We're Saving React
If there's one solid criticism I've heard of React, it's this: that no two React codebases ever look alike. This is generally true, but it's somewhat similar to another old adage: that happy families all look alike, but every broken family is broken in its own particular way. The reason bad React codebases are bad is that the people writing them have no idea what they're supposed to be doing. Without a model of how to reason about their code in a structured way, they just keep piling hack upon hack, until it's better to throw the entire thing away and start from scratch. This is no different from any other codebase made up as you go along, React or not.
Cinder is Meta's internal performance-oriented production version of CPython
To me it looks like lock-in. They chose a language good for prototyping and quick iteration, and then their codebase gets stuck with a permanent performance problem. You see the same problem in Python with regards to correctness: it's hard to refactor Python or change a large codebase and have it keep working correctly, so huge Python projects tend to ossify. It may be a rational solution in the short term, but it's still an objectively bad solution overall.
Cinder is Meta's internal performance-oriented production version of CPython
It's bizarre. I don't think it's an exaggeration to say this is the 10th project I've heard about that aims to speed up Python. Seriously, use a faster language. If you need a performant fork of Python, you're using the wrong tool for the job.
Show HN: I made 7k images with DALL-E 2 to create a reference/inspiration table
At the end of the day, unless it's opened up, DALL-E 2 will be seen as an evolutionary dead end of this tech and a misstep. OpenAI has gone from potentially one of the most innovative companies on the horizon to the maker of a dead product, now that I can spin up equivalent tech on my own machine and hook it into my workflow and tools in an afternoon, all because Stable Diffusion released their model into the wild.
1Password delisting forum posts critical of their new Electron based 1Password 8
One of the very best things I ever did while working on an Android app was to buy a dirt cheap phone. Every performance problem was obvious. Every fix was a clear improvement. And when things were acceptable there, the app absolutely screamed on modern phones. We had startup times faster than Android's launch animation with a little bit of care. Our users loved it.
Neubrutalism is taking over the web?
> "People simply get bored with how their apps and websites look after six to seven years. They need a change" Real-world objects rarely change design because of the costs involved. When they do, the change needs to justify that cost. For example, I'm not going to change the buttons on my microwave because I'm "bored" with them. The cost of changing software design is far lower, and therefore isn't driven by the same high level of justification. I strongly suspect, then, that there are two reasons for these design changes we see every couple of years in software. The first is easy, and most of us probably already agree: designers gotta design. They have to justify their salary _somehow_. The second is more philosophical. The West — and especially the U.S.A. — looks to alleviate existential crises with distractions. Shiny new toys keep us from having to face uncomfortable truths about the nature of reality (if you're not religious).
Hydration is pure overhead
If none of this makes sense to you, don't try to make sense of it or you'll be disappointed. Do you need 2,000+ dependencies to essentially show an HTML page in a web browser? Why should you have to wait 5 minutes to generate a static website? Netlify and Vercel are well aware of these inefficiencies and offer you a "cloud" solution that promises to solve the problems you shouldn't even have had in the first place. If you think you need things like Gatsby or Next.js, you've been brainwashed by capitalists.
Automation is the serialization of understanding
To paraphrase the maxim, working automated systems evolve from working manual systems. But only some manual systems work. I start CI/CD by doing the whole process manually. For example, type the commands to build a Docker image, or spawn a VM, or obtain a secret. I encode all this in pseudo code, then put it in Bash (or Python). When a conditional branch appears (a different environment with different credentials), I treat it like any other code. Separate the bits that stay the same, and inject the bits that change. The problem with most CI/CD systems is that people tightly couple themselves to the tool without really understanding it - the point the article is making. They over-complicate the solution because the documentation encourages you to do that. When they want to customise, debug, or even migrate away from it, it’s very difficult.
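The "separate the bits that stay the same, inject the bits that change" step might look like this in Python, the comment's own suggested tool (the registry names, environment settings, and image naming here are made-up placeholders): the build-and-push sequence is one invariant function, and only the per-environment configuration is injected.

```python
import subprocess

# The bits that change: per-environment configuration, kept in one place.
# (These registry hosts and tag suffixes are illustrative, not real.)
ENVIRONMENTS = {
    "staging":    {"registry": "registry.staging.example.com", "tag_suffix": "-rc"},
    "production": {"registry": "registry.example.com",         "tag_suffix": ""},
}

def build_and_push(env_name, version, runner=subprocess.run):
    """The bits that stay the same: build, tag, push -- identical for
    every environment. `runner` is injected so the pipeline can be
    exercised (and tested) without actually invoking Docker."""
    env = ENVIRONMENTS[env_name]
    image = f"{env['registry']}/myapp:{version}{env['tag_suffix']}"
    commands = [
        ["docker", "build", "-t", image, "."],
        ["docker", "push", image],
    ]
    for cmd in commands:
        runner(cmd, check=True)  # fail fast, like `set -e` in Bash
    return image
```

Treating the conditional branch as data (the `ENVIRONMENTS` table) rather than as an `if` forest is exactly the "treat it like any other code" point: adding a new environment is a one-line change, and the invariant pipeline never forks.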
Windows 95 – How Does It Look Today?
I lol'd at your comment. Poor UX designers. In an age of gentleness, I wish I could barge into their houses and rearrange all their furniture, toss the contents of their refrigerators into the bathtub, and spraypaint their bedrooms a cheap pink color. Because that's what they do to my computer interfaces at random intervals, and I have no power over it anymore.
Tell HN: AWS appears to be down again
We are barbarians occupying a city built by an advanced civilization, marveling at the hot baths but knowing nothing about how their builders keep them running. One day, the baths will drain, and anyone who remembers how to fill them up will have died.
Had my first "Rust Moment" today.
Wait until you go back to Python after some time in Rust. Returning whatever you feel like from a function, having potentially uninitialized variables of whatever type, and all the other things that make Python fun feel like drunk driving a 747 when you come back.
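A contrived sketch of the two pitfalls named above (the function and values are invented for illustration): a function whose return type depends on its input, and a variable that is only bound on one branch. Python accepts both without complaint; Rust's compiler rejects both before the program ever runs.

```python
def lookup(user_id):
    # Returns a dict on success, None on a miss, and a string on error.
    # Python is happy; every caller has to remember all three cases.
    if user_id < 0:
        return "invalid id"
    if user_id == 0:
        return None
    return {"id": user_id, "name": "alice"}

def greet(user_id):
    if user_id > 0:
        name = lookup(user_id)["name"]
    # If user_id <= 0, `name` was never bound, so this line raises
    # UnboundLocalError at runtime -- a "potentially uninitialized
    # variable" that a Rust program could not even compile with.
    return f"hello, {name}"
```

In Rust the first function would be forced into a single return type (e.g. an enum of the three cases), and the second would fail to compile with "possibly-uninitialized variable".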
How I made Google’s data grid scroll faster with a line of CSS
I work in UX, and I am constantly given designs that don't work well with native/semantic elements - a great example is tables. As soon as the table needs some kind of animation, drag-drop behavior, anything like that, I can't use a "table" anymore; or it becomes some Frankensteinian, Kafkaesque amalgamation that is impossible to maintain. Does the table really need an animation? (Probably not.) Drag and drop? (Probably not.) But management and the people in charge of OK'ing these designs have a 'make-it-happen' attitude, and nobody really cares about a semantic, native feel when they've invested so much into a "design system" that is largely antithetical to that. "Select" elements are the bane of my existence. Impossible to style. I am constantly re-implementing a "select" because it has to look a certain way. Just terrible.
My ideal Rust workflow
> How do people develop in Rust? I'm trying to learn it, but it's hard to jump into code-bases and understand the code as I cannot run snippets. I might be able to help answer this! I've spent over 10 years of my career writing production code in Lisp or Scheme, and about 5 years now writing occasional production code in Rust. So maybe I can explain how the two workflows differ. In Lisp, it's super-easy to redefine a function or test some code. You can constantly test small things as you work. And you can easily open a listener on errors and inspect the current state. It's genuinely great. In Rust, you rely much more heavily on types and tests. You begin by nailing down a nice, clean set of data structures that represent your problem. This often involves heavy use of "enum" to represent various possible cases. Once you know what your data structures look like, you start writing code. If you're using "rust-analyzer", you'll see errors marked as you type (and things will autocomplete). If you want to verify that something works, you create a function marked "#[test]", and fill it with exactly the same code you'd type into a listener. Maybe you run "cargo watch -x test" in the background to re-run unit tests on save. Then, maybe 2 hours later, you'll actually run your program. Between rust-analyzer and the unit tests, everything will probably work on the first or second try. If not, you write more "#[test]" functions to narrow down the problem. If that still fails, you can start throwing in "trace!", or fire up a C debugger. This workflow is really common in functional languages. GHC Haskell has a lovely listener, for example, but I rarely use it to actually run code. Mostly I use it to look up types. The difference is that in strongly-typed languages, especially functional ones, types get you very close to finished code. And some unit tests or QuickCheck declarations take you almost the rest of the way. 
You don't need to run much code, because the default assumption is that once code compiles, it generally works. And tests are basically just a written record of what you'd type in a listener. For understanding code, the trick is to look at the data structures and the type signatures on the functions. That will tell you most of what you want to know in Rust, and even more in Haskell. So that's why I don't particularly miss a listener when working in Rust. Does this answer your question?
GitHub stale bot considered harmful
In my experience, these auto-closing bots are the natural result of software development workflows that treat issues as tasks to be closed, rather than as data points that a user is experiencing a problem of some kind (maybe they are doing things wrong, expecting something the project doesn't provide, or triggering a real problem – the exact cause is immaterial). This treatment of issue-as-task is made worse by micro-management frameworks like Agile, which encourage metrics on how many of these issues-as-tasks are closed, which leads to ill-advised features like this that close them automatically because "Duh, no one said anything in 30 days". If I were to design this myself, I would argue that the correct way to treat an issue is not to have it have a closed or open state at all. If the issue spawns a task or related tasks, you can close those. Or you can provide feedback on the issue stating that it is invalid. The user has already experienced a problem or wants a feature; there is no value in putting a red label on it that says "I'm done with this, please go away". It unnecessarily invalidates the experience of users who have provided their valuable time to report something to your software project. I think this is similar to the approach used by forums like Discourse, where a thread about a problem will usually not be closed or locked, but will just age out of current discussion if nobody brings it up.
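The data model being argued for could be sketched like this (purely illustrative, not any real tracker's schema): the issue itself carries no open/closed flag at all; only the tasks it spawns do, and the issue simply stops being actionable when they are done.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Tasks spawned from an issue are the only things that open and close."""
    description: str
    done: bool = False

@dataclass
class Issue:
    """An issue is a permanent data point: a user experienced something.
    It accumulates feedback and spawned tasks, but has no open/closed
    state of its own -- there is nothing here for a stale bot to close."""
    report: str
    feedback: list = field(default_factory=list)
    tasks: list = field(default_factory=list)

    def spawn_task(self, description):
        task = Task(description)
        self.tasks.append(task)
        return task

    def is_actionable(self):
        # The issue "ages out" when all spawned work is done, but the
        # report itself is never closed, deleted, or invalidated.
        return any(not t.done for t in self.tasks)
```

The point of the design is what's absent: with no `closed` field, "close the issue after 30 days of silence" isn't even expressible.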
Tech workers warned they were going to quit. Now, the problem is spiralling out of control. Tech workers complain of toxic work environments, unrealistic demands from employers, and a lack of career progression. Research suggests that they may have reached their limit.
It’s not just a lack of career progression for the technically inclined. It’s also the fact that extroverted project managers with no technical skills tend to shoot up into higher ranks despite holding a fraction of the experience of the technical staff. We’re literally being led by loud-mouthed idiots whose defining traits are that they don’t think deeply, they talk over people, and they thrive off meetings. If I have one more manager state, “I don’t understand technology, hahaha,” I’m going to scream. We’re a technology company. You work managing developers. You should understand technology! No manager working with developers in a tech company should feel comfortable admitting they don’t understand technology, let alone mention it to the whole team repeatedly. In fact, they shouldn’t have been hired in the first place. They damn sure shouldn’t be promoted!
Facts every web dev should know before they burn out and turn to painting
The thing that burns out web developers is web development. The constant shift of technologies (just for the sake of it) and the nerfed-to-oblivion environment is hell. As a web developer, after you learn the basics (~5 years) there is no feeling of where the "up" direction is. Everything feels like a sidestep. There is no feeling of permanence to your knowledge, unlike for a lawyer or a doctor. The knowledge feels like water pouring into a full container: as much of it flows out as flows in. I switched to embedded systems 7 years ago to avoid the burnout. It is better, in my opinion. Funny enough, there is a natural barrier to entry that keeps most programmers away: you have to understand how computers work and how operating systems work, and you have to be able to code in C/assembly. I have a lot of fun and actually picked up electronics as a hobby, which will help me better myself professionally. I think there is enough here to keep me entertained until I retire.
Facebook going down meant more than just a social network being unavailable
Was talking about this with a friend today, and I think this incident highlights why I sometimes get really depressed about my career and technology. I'm a Gen X-er, and I started my career in the late 90s. Before that I was a ham radio operator in junior high and HS (back when they had Morse code tests!). I remember the heady euphoria around the Internet then, and the vision of "tech utopia" was certainly the dominant one: the Internet would bring a "democratization of information" where anyone with a computer could connect to the Internet, publish a website, and communicate with people across the world. Really cool new services came online frequently. I still remember the first time I used Google, and at the time I was blown away by how good it was ("like magic!" I said) because the results were so much better than other search engines of the time. But these days, the older I get the more and more I feel like tech is having a negative impact on both society at large and me personally. In the 90s we all thought the Internet would lead to a decentralization of power, but literally the exact opposite happened. Sure, telcos sucked, but there were tons of them spread across all corners of the globe. Now there is a single megacorp that a sizable portion of humanity depends on for phone/text communication. It just makes me sad. Sure, there are pluses to tech I'm ignoring here, but I just think that how reality turned out to be such a 180 from the expectations of the late 90s is what really hurts.
Do programmers dream of electronic poems?
I have never been called a massive wanker, but I do often get confused stares when I try to explain this. For me, both literature (or creative writing, to be less presumptuous) and programming are ways of expressing the ideas, stories and models that float around in my mind when I am thinking about the world. Some stories are better told with fiction, others by software. Many can be told by both, in the same way that a painter can paint the same picture with different techniques and get results that highlight different aspects of the picture. As with all forms of expression, it is never possible to completely transfer the inner world of my brain to that of someone else. So we use approximations. Programming and creative writing are different techniques for making those approximations, and both use text as a storage format. And thus they are naturally closely related to each other.
Enterprise Software Projects Killed the Software Developer
Elegant and clever code won't live through a maintenance cycle. I'll take a software developer who writes and structures code so that change requests are straightforward and the code reads like the same DSL across the organization. That makes changes easy. Clever people should be writing libraries or doing research. Don't kid yourself: you are either the guy who builds the building, and it's easy because it's greenfield, or you are doing remodeling, and the hard part is making the upgrade fit in the building and not look like shit.
Parser generators vs. handwritten parsers: surveying major languages in 2021
I took the compilers class at Stanford and never really understood the algorithms of bottom up parsing, or even really how grammars worked. I just made the tool work. I then went to work at a private company, and an older guy who had gone to a state school that taught recursive descent (his was the last class to teach it) taught me how to do it. In a month or so I had learned more about how grammars actually work, what ambiguity is, and so forth, than in my whole class at Stanford. I now teach compilers at a university, and I teach recursive descent.
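For anyone who, like the commenter, got through a compilers class without seeing it: a recursive descent parser really is just one function per grammar rule, with operator precedence falling out of the call structure. A minimal sketch for the classic expression grammar (expr → term (('+'|'-') term)*, term → factor (('*'|'/') factor)*, factor → NUMBER | '(' expr ')'):

```python
import re

def tokenize(src):
    """Split an arithmetic expression into number and operator tokens."""
    return re.findall(r"\d+|[+\-*/()]", src)

class Parser:
    """One method per grammar rule; precedence comes from which rule
    calls which: expr calls term, term calls factor."""
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self, expected=None):
        tok = self.peek()
        if expected is not None and tok != expected:
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.pos += 1
        return tok

    def expr(self):          # expr -> term (('+'|'-') term)*
        value = self.term()
        while self.peek() in ("+", "-"):
            op = self.eat()
            rhs = self.term()
            value = value + rhs if op == "+" else value - rhs
        return value

    def term(self):          # term -> factor (('*'|'/') factor)*
        value = self.factor()
        while self.peek() in ("*", "/"):
            op = self.eat()
            rhs = self.factor()
            value = value * rhs if op == "*" else value / rhs
        return value

    def factor(self):        # factor -> NUMBER | '(' expr ')'
        if self.peek() == "(":
            self.eat("(")
            value = self.expr()
            self.eat(")")
            return value
        return int(self.eat())

def evaluate(src):
    return Parser(tokenize(src)).expr()
```

No parser generator, no shift/reduce tables: the grammar, the ambiguity questions, and the precedence rules are all visible right there in the code, which is exactly why it teaches so well.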
Psst: Fast Spotify client with native GUI, without Electron, built in Rust
What's funny about having to rely on unauthorized clones to provide a fast native UX is that Spotify's original client back in 2008 started out as a beautifully light, custom-rendered native client. Few apps ever had that wow factor. The first time I used it, it was so much lighter and more responsive than anything else of the day. I remember being perplexed at how I could search and skip to any part of a song quicker than iTunes could while looking at a local library. Everything was latency-free and instantaneous. We were building a music startup at the time, so we investigated how it worked. We were very surprised we couldn't find any evidence of an established UI toolkit. It looked as though they had built their own custom UI renderer and an optimized TCP protocol that sent back its metadata in XML. Their traffic looked like it was initially seeded from their own (or CDN) servers (for best latency), and then over time we would see some P2P traffic on the wire. Our Qt/C++ client had decent performance but was noticeably heavier than Spotify's. I was disappointed to see their native client eventually be abandoned and succumb to becoming yet another Chromium wrapper. I expect it fell to the pressures of a growing startup adding 100s of developers (without the skill of their original CTO/devs), where a native UI couldn't be updated and re-iterated as fast as a web app. I wish they had maintained two desktop clients, leaving their native client alone to just be an audio player and pushing all their new social features to their new flagship CEF app. It's unfortunate that the skill and desire to build fast native UIs are being lost to Electron and CEF wrappers. It seems the larger the organization, the more likely they are to build new web-rendered desktop apps, and we have to rely on unauthorized indie efforts like this for fast, responsive native UIs.
Compiling rust is NP-hard
I have worked for 3 years on a project where it took a whole week to get the code compiled, signed by an external company, and deployed to the device so that I could see the results. I just learned to work without compiling for a long time. Over time my productivity increased and the number of bugs fell dramatically. Working this way requires you to really think about what you are doing, which is always a good idea. This was over a decade ago; now I work mostly on Java backends, and I am happy that I typically spend days or even weeks without ever compiling the code, and that it usually works the first time I run it. I can't imagine going back. It looks really strange to me to observe other developers constantly compiling and running their code just to see if it works. It kinda looks as if they don't exactly understand what they are doing, because if they did, they would be confident the implementation works. The only time I actually run a lot of compile/execute iterations is when I genuinely don't know how something works. I typically do this to learn, and I typically use a separate toy project for it.
‘Positive deviants’: Why rebellious workers spark great ideas
The fact that offering an idea that's better than what's already being done is seen as rebellious at all, as opposed to being the entire job of an engineer, or the definition of what engineers do, is not a good sign for any organization. Next they'll be talking about rebellious accountants who have recorded more numbers by the end of the day than were in the spreadsheet at the beginning, or subversive lawyers who review contracts that had not already been reviewed. Before long it will take a fifth-column delivery driver to move a pizza to a location it's never been before.
Untapped potential in Rust's type system
Interesting article, but I think the key to writing idiomatic Rust is not to stretch what the type system can do, but rather to be happy with what it can express and to avoid unnecessary abstraction. The compile-time guarantees that we have to prove in Rust also serve to give a hint for when not to abstract.
Rethinking the computer ‘desktop’ as a concept
The desktop is broken not because of the file/folder paradigm but because we stopped using files to represent information. Figma, Slack, and Notion should save their information to disk. You should be able to open a Notion document, or a Figma design, from your desktop, instead of through their Web interface. You should be able to save a Facebook post or Tweet and their replies to disk. Why can't you? Well, for one, social media companies don't want you to save stuff locally, because they can't serve ads with local content. Furthermore, browser APIs have never embraced the file system because there is still a large group of techies who think the browser should be for browsing documents and not virtualizing apps (spoiler: this argument is dead and nobody will ever go back to native apps again). Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk because then how can your friends or coworkers update it? It's much easier for Google to store the data on their server so that everyone can access it instead of you setting up some god-awful FTP-or-whatever solution so that your wife can pull up the grocery list at the store. I'm hoping the new Chrome file system API will bring a new era of Web apps that respect the file system and allow you to e.g. load and save documents off your disk. However, this still won't be good enough for multiplayer apps, where many devices need to access the same content at the same time. I don't know if there is any real way we can go back to the P2P paradigm without destroying NAT - WebRTC tries but WebRTC itself resorts to server-based communication (TURN) when STUN fails.
Ask HN: Does anyone else find the AWS Lambda developer experience frustrating?
You've discovered what many other people have: the cloud is the new time-share mainframe. Programming in the 1960s to 80s was like this too. You'd develop some program in isolation, unable to properly run it. You'd "submit" it to the system, and it would be scheduled to run along with other workloads. You'd get a printout of the results back hours later, or even the next day. Rinse and repeat. This work loop is incredibly inefficient, and was replaced by development that happened entirely locally on a workstation. This dramatically tightened the edit-compile-debug loop, down to seconds or at most minutes. Productivity skyrocketed, and most enterprises shifted the majority of their workload away from mainframes. Now, in the 2020s, mainframes are back! They're just called "the cloud" now, but not much of their essential nature has changed other than the vendor name. The cloud, just like the mainframe:

- Does not provide all-local workstations. The only full-fidelity platform is the shared server.

- Is closed source. Only Amazon provides AWS. Only Microsoft provides Azure. Only Google provides GCP. You can't peer into their source code; it is all proprietary and even secret.

- Has a poor debugging experience. Shared platforms can't generally allow "invasive" debugging for security reasons. Their sheer size and complexity mean that your visibility will always be limited. You'll never be able to get a stack trace that crosses into the internal calls of platform services like S3 or Lambda. Contrast this with typical debugging, where you can even trace into the OS kernel if you so choose.

- Is generally based on the "print the logs out" feedback mechanism, with all the usual issues of mainframes, such as hours-long delays.
I can only think that modern front end development has failed
What upsets and concerns me the most is when I see a poorly developed SPA on a really important site. For example, government service application websites. If reddit or nytimes has a bloated, intermittently failing SPA site, that's an annoyance. When it's a form to apply for unemployment, ACA health care, the DMV, or other critical services, it's a critical failure. Especially since these services are most often used by exactly the population most impacted by bloated SPAs (they tend to have slow or unreliable internet and slow computers; maybe a cheap Android phone is all they have). Such sites should be using minimal or no JS. These aren't meant to be pretty interactive sites; they need to be solid, bulletproof sites so people can get critical services. And I haven't even mentioned how SPA sites often lack any accessibility features (which are so much easier to implement if you stick to standard HTML+CSS and no/minimal JS).
The Space of Developer Productivity
The problem starts with the name. Developers are creating, not producing. They don't make the same widget every day. When you measure productivity instead of creativity, you hinder creativity and therefore output.
The tree-based approach to organizing documentation sucks
Documentation sucks because nothing is used very often anymore. In the good old days (TM), software was used for much longer in pretty much the same shape. Think of GNU coreutils. In contrast, your API or your frontend code or your Amazon Lambda or your Microservice is quite likely not feature-complete, does some things that should be handled by a different component, and was developed with exactly one use case in mind until it was "good enough". Thanks to scrum, no one cares about completeness, orthogonal design, or composition of smaller parts anymore. Hence documentation has only token value. Except, maybe, for end user documentation, but I have yet to encounter a "user story" that begins with "As a user, I want to do X, read how to do X in the documentation, follow the instructions and get the desired results."
Modules, Monoliths, and Microservices
My observation is that much of industry does not care about any of these technical or security issues. In theory microservices are technical artifacts, but what they tend to be in practice are cultural artifacts. Microservice adoption is often driven by cargo culting, or (better) a considered decision to work around a lack of organisational cohesion. What microservices let you do is ship your org chart directly, and also map back from some functionality to an owning team. You know who to page when there's a problem, and it's easier to tell who is late delivering and whose code is slow. In cultures with "lax" technical leadership (aka no "everyone uses the same thing" mandate; I'm not judging) it lets teams use their favourite tools, for better or worse. Other org-related benefits are the ability to have independent (per team) release schedules. Separable billing. The ability to get metrics on performance, cost and failures that can be used as indicators of team performance and promotion material. Microservices can also act as "firewalls", limiting the impact a bad hire or team can have across your codebase. None of this is intended as negative judgement; microservices can (among other things) help teams feel a sense of agency and ownership that can be hard to maintain as org size scales up.
Why Databricks Is Winning
The one thing I see in my current company, and a growing trend with SaaS apps, is that companies are forgetting how to actually engineer. Like Boeing: the more you outsource, the less you're able to react to changing market forces and fix issues. We run Hadoop & Spark internally, but the team is underfunded and stuck in a constant cycle of fighting fires. And the result (part of a larger push at the company driven by the same cycle of under-funding and culture issues) is that we're moving our petabytes of data to cloud providers, into their systems. Not only does the cost of doing this dwarf what it would take to actually fix our issues, but we're going to lose the people who know how to design and manage petabyte-scale Hadoop clusters. We wind up in a situation where we've locked up data fundamental to our company and our position in the market with a 3rd party, and are losing the talent that would allow us to maintain full control over the data. If the service increases prices, changes its offering, or we get to a point where the offering doesn't meet our needs - we're fucked. It's nice that Databricks has a nice "offramp" that you can take to go somewhere else, but the general idea is the same.
The web didn't change; you did
The web really didn't change. It really didn't become complex. The web development process is not one single path. There is simply more choice and more options. We, you and I, the developers, consumers and businesses are responsible for demanding more complicated (and more thorough) tools. We are not, however, beholden to complexity.
Show HN: Straw.Page – Extremely simple website builder
i'm convinced this style is the next big thing in web UI - at least for startups/simple web tools/anything more desktop-oriented than mobile-oriented. it's such a great rejection of all the stale, boring, "clean" UI convention that we're drowning in today. it's not just nostalgia - it's fun, it's rebellious, it has real character. it shouts "I'm having fun, why shouldn't you?"
I don't want to do front-end anymore
I tell anyone asking me for career advice the same two things. The first: the deeper in the world’s dependency tree you are, the less frequently things will churn, and the longer your skills will last. TCP doesn’t change very often. [Theoretical skills][1] may be applicable for your entire career. Human skills are more durable than any technical skill. Kernels don’t change very often (but more than TCP). Databases don’t change very often (but more than kernels). There is a spectrum of skill durability, and you will burn out faster if you find that all of your skills become worthless after a very short time. Dependency forces things not to change their interface, which causes the work to shift toward performance and reliability among other things that some people find far more rewarding over time. The second: the more people who do what you do, the worse you will be treated, the more BS you will have to put up with, the worse your pay will be, the faster you will be fired, the harder it will be to find a job that values you, etc… etc… Supply and demand applies to our labor market, and if you want to be happier, you should exploit this dynamic as heavily as possible. Avoid competition like the plague. But don’t avoid funding. How do you avoid competition without going off into the wilderness where there is no money to be made? Hype drives funding, but it also drives a lot of competition. However, using rule #1 above, the hyped things depend on other things. Many of these dependencies are viewed as “too hard” for one reason or another. That’s the best place to be. Go where other people are afraid, but nevertheless have a lot of money depending on. All hyped things rely on things that for one reason or another are not commonly understood, and tend not to change quickly. That’s a good place to find work involving durable skills that tend to have lower competition. 
Go where the dependency is high but the competition is low, and you have a better chance of being happy than people who go where the competition is high or the dependency is low. Bonus points if it’s actually “hard” because then you won’t get bored as quickly. There are areas of front-end that are high-dependency, durable, slow-changing, and low-competition. That’s where engineers are likely to be happiest. But these two principles apply to every field or zooming out to any business generally. I’m pretty happy working on new distributed systems and database storage engines for the time being. But I’m always looking for the things that are viewed as hard while also receiving significant investment, as these are the things that will ultimately give me more opportunities to live life on my own terms. [1]
Respect Your Power Users
I would also add that there are a few different types of power users. Two off the top of my head are "very active users" and "very technical users". The former can often be maintainers of communities. Example: Reddit or Discord. These same communities might end up being the main part of your product. Other examples include social media like Youtube, or even Instagram. These users need a different set of power tools than the "technical power users" do. For the "very active users", you might want to provide things like UI customization, social media linking, statistics, and easy tools for moderation. Examples of tools for "technical power users" might be a large set of actions that can be custom-key-bound, a macro/scripting API, an alternate API to your service completely (REST-ful), or support for modding. You can guess what those tools will later be used for quite simply, I'm sure. :)
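The "custom-key-bound actions plus a macro/scripting API" idea can be sketched in a few lines. This is a toy illustration with all names invented here, not taken from any real product:

```python
class ActionRegistry:
    """Toy sketch of power-user tooling: every command is a named
    action, key chords are rebindable, and macros compose actions."""

    def __init__(self):
        self._actions = {}   # action name -> callable
        self._bindings = {}  # key chord -> action name

    def register(self, name, fn):
        self._actions[name] = fn

    def bind(self, chord, name):
        if name not in self._actions:
            raise KeyError(f"unknown action: {name}")
        self._bindings[chord] = name

    def press(self, chord):
        # Dispatch a key chord to whatever action it is bound to.
        return self._actions[self._bindings[chord]]()

    def macro(self, name, steps):
        # A macro is just a new action built from existing ones.
        self.register(name, lambda: [self._actions[s]() for s in steps])


registry = ActionRegistry()
registry.register("format", lambda: "formatted")
registry.register("save", lambda: "saved")
registry.bind("ctrl+s", "save")
registry.macro("format-and-save", ["format", "save"])
registry.bind("alt+shift+f", "format-and-save")
```

Once every feature is a named action, keybinding, macros, and even a scripting API all fall out of the same registry: `registry.press("ctrl+s")` dispatches to `save`, and the macro runs both steps in order.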
The unreasonable effectiveness of simple HTML – Terence Eden’s Blog
For some content, I think extremely simple HTML design is preferable over sexy styling and functionality. A lack of styling is a style itself and it sends signals to a user. The following link signals to me that there is no bullshit to be found (Warren Buffett’s website): [][1] Contrast the above website with this website that is trying to sell the user something and keep in mind that both websites are owned by the same organization: [][2] Same organization. Different goals embodied by different design choices. [1] [2]
New Intel CEO rehiring retired CPU architects
This is an encouraging move. My secondhand understanding was that Intel was losing top talent due to pressure to pay closer to median industry compensation. Top engineers recognized they were underpaid and left the company. I've been part of a similar downhill slide at a smaller company in the billion dollar revenue range. To be blunt, once the [mediocre] MBAs start realizing that the engineers are getting paid more than they are, the pressure to reduce engineering compensation is strong. Frankly, there are plenty of engineering candidates on the market who are happy with median compensation. Many of them are even great engineers and great employees. However, being a top company in a winner-take-all market requires the top engineers. The only way to attract and retain them at scale is to offer high compensation. I'm hoping that's part of what's happening here.
Pirate Bay founder thinks Parler’s inability to stay online is 'embarrassing'
I personally don't find The Pirate Bay's ability to remain online that surprising. The Pirate Bay and other torrent networks were built by people with a passion for building, maintaining and hacking things. People who, even without a solid CS background, would spend hours a day learning new things, developing distributed protocols, evading DNS blocks and hosting their content wherever they could to make it accessible - including the small server in their own garage if needed. And they are used by people who don't mind learning a new protocol or how to use a new client to get the content they want. I don't see the same amount of passion for technology and hacking among the Parler users, nor its maintainers. Those who believe in conspiracy content are characterized by a psychological tendency to take shortcuts whenever they can in order to minimize the effort of learning and understanding new things. So when the first blocker hits they usually can't see alternative solutions, because that's not the way their brains are wired. They always expect somebody else to come up with solutions for them, and they always blame somebody else when the solution doesn't come. And even if they decided to migrate their content to the dark web or onto Tor, not many people would follow them - both because they don't have the skills, and because they don't want to acquire those skills. Plus, they'd lose the "viral network effect" they get from posting click-bait content on public networks; the new censorship-proof network would only attract a small bunch of already radicalized people. And even if they wanted to hire some smart engineers to do the job for them, we all know that engineers tend to swing toward the opposite end of the ideological spectrum. Those who have built systems for escaping REAL authoritarian censorship would rightfully feel disgusted if asked to apply their knowledge to provide a safe harbour for rednecks to vomit their conspiracy-theory-fueled hate.
Moral Competence | Evan Conrad
What is most interesting to me is that the business model he rejected[1] is not just the one of his app, but essentially the one used by almost all therapists. [1] [][1]: "Unfortunately, in order for the business to work and for us to pay ourselves, we needed folks to be subscribed for a fair amount of time. But that wasn't the case and we honestly should have predicted it given my own experience: as people did better, they unsubscribed. Unfortunately, the opposite was true as well, if folks weren't doing better, but were giving it a good shot, they would stay subscribed longer. So in order to continue Quirk, a future Quirk would need to make people feel worse for longer, or otherwise not help the people we signed up to help. If the incentives of the business weren't aligned with the people, it would have been naive to assume that we could easily fix it as the organization grew. We didn't want to go down that path, so we pivoted the company." [1]
Load testing is hard, and the tools are... not great. But why? | nicholas@web
The best you can do here is probably at API and system design time, not at test time. If you design a simpler API, you're going to have far less surface area to test. If you design a system with genuinely independent pieces (distinct databases per service, for example), then it's easier to test them in isolation than in a monolith. Doing this also lets you use a simpler tool, so you get two wins!
Fostering a culture that values stability and reliability
Next time you see a git repo which is only getting a slow trickle of commits, don’t necessarily write it off as abandoned. A slow trickle of commits is the ultimate fate of software which aims to be stable and reliable. And, as a maintainer of your own projects, remember that turning a critical eye to new feature requests, and evaluating their cost in terms of complexity and stability, is another responsibility that your users are depending on you for.
Fostering a culture that values stability and reliability
There’s an idea which encounters a bizarre level of resistance from the broader software community: that software can be completed. This resistance manifests in several forms, perhaps the most common being the notion that a git repository which doesn’t receive many commits is abandoned or less worthwhile. For my part, I consider software that aims to be completed to be more worthwhile most of the time.
Beyond customization: build tools that grow with us |
When a tool is designed to be simply customizable with an abundance of settings and options, adding power means adding complexity and steepening the learning curve. If great tools are about multiplying our creativity, customization gets in the way of this mission, because it limits how flexible our tools can be, and how easily we can learn to use them. We need a better way to build tools that wrap around our workflows than simply adding levers and dials for every new option.
Coding as a tool of thought – Surfing Complexity
This article really gets to a fundamental misunderstanding I feel our whole industry has: Programming is not construction, it is design. Yeah, houses rarely collapse, but structural engineers don’t expect that their second draft will be taken out of their hands and built. Or that the fundamental requirements of their structure will be modified. I don’t mean to suggest that programming should behave more like construction. The value of programming is the design. Programming is the act of really thinking through how a process will work. And until those processes are really done and won’t change (which never happens) that design never stops.
Coding as a tool of thought – Surfing Complexity
Developers jump to coding not because they are sloppy, but because they have found it to be the most effective tool for sketching, for thinking about the problem and getting quick feedback as they construct their solution.
Coding as a tool of thought – Surfing Complexity
As software engineers, we don’t work in a visual medium in the way that mechanical engineers do. And yet, we also use tools to help us think through the problem. It just so happens that the tool we use is code. I can’t speak for other developers, but I certainly use the process of writing code to develop a deeper understanding of the problem I’m trying to solve. As I solve parts of the problem with code, my mental model of the problem space and solution space develops throughout the process.
Coding as a tool of thought – Surfing Complexity
By generating sketches and drawings, they develop a better understanding of the problem they're trying to solve. They use drawing as a tool to help them think, to work through the problem.
HTML Over The Wire | Hotwire
I'm not going to lie, when I hear "SPA", I don't think "fast"; I think "10s of megs of javascript, increasingly unresponsive browser tab". Maybe that's an unfair generalisation from a small percent of poorly written SPAs, but that small percent have really had me hankering for multiple page websites with judicious use of JS.
Toolchains as Code
Just like Go set a new standard that languages should come with their own auto-formatter, I think rustup planted a seed that programming platforms should also come with their own tool manager. My hope for JavaScript is that eventually Node will ship with a tool manager similar to or even based on Volta.
My Engineering Axioms
Every program has state, but how that state is managed can make a world of difference. Poor management of state is a huge contributing factor to overall system complexity, and often occurs because it hasn't been thought about early enough, before it grew into a much worse version of the problem.
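A minimal illustration of the point (an invented example, not from the axioms): the same logic, first with state hidden in a global, then with the state made explicit in the signature:

```python
# Hidden state: any code anywhere can mutate this, so a reader must
# consider the whole program to know what 'discount' holds right now.
discount = 0.0

def price_implicit(amount):
    return amount * (1 - discount)

# Explicit state: the dependency is visible in the signature, so the
# function can be understood, tested, and reused in isolation.
def price_explicit(amount, discount):
    return amount * (1 - discount)
```

The explicit version costs one extra argument; the implicit version costs the reader the whole program's mutation history, which is exactly the complexity the axiom warns about.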
My Engineering Axioms
Unless you're working completely alone, it's not just your ability to solve technical problems, to write good code, etc, that matters. To the contrary, they matter even less if you make the people around you unhappy and less productive. Just like learning to write good code, you have to learn "to people" good as well. Empathy is a big part of this, as is recognising that people are different – be caring, be understanding, help others and ask for help yourself, be nice. Be an engineer others want to work with.
My Engineering Axioms
Until you have a high degree of confidence that your abstraction is going to pay for itself because it solves a real, abstract problem you really do have, don't do it. Wait and learn more. Until then, repeating code can help avoid dependency, which itself makes the code easier to change independently or delete. A premature abstraction creates complexity through dependency and indirection, and can become a bottleneck to your ability to respond to change.
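To make the trade concrete, here is an invented toy example: two near-duplicates kept separate remain independently changeable or deletable, while the "unified" version couples every caller through one signature:

```python
# Duplicated, but each function can change or be deleted independently:
def format_invoice(total):
    return f"Invoice total: ${total:.2f}"

def format_receipt(total):
    return f"Receipt total: ${total:.2f}"

# The premature abstraction: one function, one flag. Every future
# variation means another parameter, and all callers now depend on
# (and must be retested against) the shared code path.
def format_document(total, kind="invoice"):
    label = "Invoice" if kind == "invoice" else "Receipt"
    return f"{label} total: ${total:.2f}"
```

If invoices later diverge from receipts, the first version changes one function; the second grows another branch or flag, which is the dependency and indirection the axiom describes.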
Write code. Not too much. Mostly functions. | Brandon's Website
Code, like food, has value. I think those of us who write it can (hopefully) agree on that. Some, though, are so afraid of writing/eating too much that they avoid writing/eating what they should. In the context of programming, I think this translates to an unhealthy fear (again, for some) of duplication. A little bit of duplication - writing something in a way that doesn't completely maximize conciseness - isn't the end of the world. Sometimes it's the best path forward. Sometimes it's okay to copy-and-modify here and there, especially when you're still figuring out what your application will end up being.
Back to the '70s with Serverless
One thing that surprised me as a latecomer to software development coming from a visual arts background is how much the choice of technology and working practices is purely fashion driven. The thing about fashion is that the way it develops is largely arbitrary. Changes in fashion resemble a drunken walk through possible design space, with the drunkard receiving regular shoves from "influencers" who are usually trying to sell you something. Occasionally you have a fashion "revival" where someone takes an idea from the past, gives it a new spin, and then sells it back to newcomers as the next big thing. This seems especially true in the types of startups and companies many HN readers work at or aspire to join / build - that is, ones which are low stakes / high reward. I think when you combine the low stakes nature of the VC driven startup world with its cult of youth and the in-group conformity of young people, this is what you get. [1] by low stakes I mean no one will die and you won't be prosecuted if your single page app startup goes tits up. Indeed you're supposed to "fail fast" precisely because the cost of failure is so low. Even if a VC or angel has invested a few million in you, to them that's still low stakes because they exist on an entirely different plane of wealth and you are just one of multiple bets. [2] We're going to rebel by all dressing the same but not the same as our dad!
Playmaker: The Reality of 10x Engineer | by Ofer Karp | Nov, 2020 | Medium
A 10x engineer is an underpaid senior working as a middle/junior, or an underpaid architect/principal working as a senior engineer.
Why software ends up complex · Alex Gaynor
Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer. Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term. You have to develop an opinion of the thing you're building/maintaining and what it should be able to do and not do. You can't count on project managers to do that. The trick to doing this effectively is to find out the problem the feature is actually trying to solve and providing a better solution. Usually the request is from end users of software and they have identified the problem (we need to do xyz) and prescribed a solution (put the ability to do xyz in a modal on this page.) But if you can look to what other software has done, do a UX review and find a way to add a feature in that solves their problem in a way that makes sense in the larger context of the software, they won't have a problem with it since it solves their problem and the codebase will take less of a hit. Unfortunately, it's a lot easier to just add the modal without complaint.
Why software ends up complex · Alex Gaynor
Every feature request has a constituency – some group who wants it implemented, because they benefit from it. Simplicity does not have a constituency in the same way; it’s what economists call a non-excludable good – everyone benefits from it. This means that supporters can always point to concrete benefits to their specific use cases, while detractors claim far more abstract drawbacks. The result is that objectors to any given feature addition tend to be smaller in number and more easily ignored. Leading to constant addition of features, and subtraction of simplicity.
Why software ends up complex · Alex Gaynor
The most natural implementation of any feature request is additive, attempting to leave all other elements of the design in place and simply inserting one new component: a new button in a UI or a new parameter to a function. As this process is repeated, the simplicity of a system is lost and complexity takes its place. This pattern is often particularly obvious in enterprise software, where it’s clear that each new feature was written for one particularly large customer, adding complexity for all the others.
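As a sketch of that accretion pattern (all names invented for illustration): the function below started as one line, and each keyword argument stands in for one customer's "just add it" request:

```python
def join_names(names, sep=", ", upper=False, reverse=False, limit=None):
    # v1 was simply ", ".join(names); everything below accreted later,
    # and every caller must now reason about the product of the flags.
    items = list(reversed(names)) if reverse else list(names)
    if limit is not None:
        items = items[:limit]
    if upper:
        items = [n.upper() for n in items]
    return sep.join(items)
```

Each flag looked like the cheapest possible change at the time; together they mean the function now has many behavioral combinations to document and test instead of one.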
Can developer productivity be measured? - Stack Overflow Blog
In every organization I've worked in, it was obvious who the high performers were and who the low performers were. It was obvious to everyone. The only blind spot was that people usually seriously misjudged their own performance. The problem, however, is that management is always being pushed to make objective measurements. For example, to fire someone, you have to first put him on an improvement plan with objective measurements. Otherwise, you're wide open to a lawsuit over discrimination, etc. You have to prove to a judge that someone isn't performing, or that you gave raises based on performance. Management also gets pushed into these attempts at objective measurement by the urge to optimize the numbers, the way you would for a manufacturing process.
Can developer productivity be measured? - Stack Overflow Blog
This assumes direct managers want productive developers - this is not my experience. The goal of managers is to increase the number of people they manage, and get more money. I have time and again done things fast only to have blocks put in place to slow things down - no one wants the job done quickly so everyone can go home; where's the money in that? The inability to measure productivity is a direct result of this, imho.
Can developer productivity be measured? - Stack Overflow Blog
Software engineering is a creative, not a manufacturing discipline. Every one of these attempts to measure or gauge developer productivity seems to miss that point.
Why is the Google Cloud UI so slow? | DebugBear
The real answer is that Google's promotion and hiring processes don't respect front end developers. Systems programming and distributed systems are considered "hard" and worthy of reward. This explains why Google's front ends are bad, and it also explains why there's a proliferation of non-composable distributed systems inside Google. As a second order effect, the design of those back ends also make it harder to make fast front ends. And front end devs are often using tools designed for back end devs, like the Bazel build system. (Compare that to FB having online / incremental compilers for Hack, as far as I understand.) So they either don't get the best people working on front ends, or the people they have aren't doing their best work because they're looking to move into a role that may be more respected or rewarded. Before 2005, Google built two of the most innovative AJAX apps ever: GMail and Maps. People may not remember how awesome they were. When GMail came out, it was faster than Microsoft Outlook on desktop, which I was using at the time. You would click and your message would appear instantly, which was not true of desktop apps. The app startup time was also better than desktop! (i.e. seeing all your messages from a cold start) When Maps came out, people didn't believe that the scrolling and zooming could be done without a Flash app. It also had incredibly low latency. But somewhere along the way the company lost its leadership and expertise in the web front end, which I find very sad. (I worked there for many years, but not on front ends.) The slow Google+ app circa 2011 was a great example of that, although I believe the structural problem had set in before that project. I don't think there's any question that FB and even MS are significantly more accomplished in these areas. They're the "thought leaders" (React, Reason, TypeScript, etc.) 
--- edit: Also, if you want to remember what Google UI looked like circa 2005, look at sourcehut: [][1] It was fast, simple, and had a minimalist style (though some people mistake that for no style). There is probably a generation of people who are scratching their heads at that claim, but yes that's pretty much what Google used to look like: the home page, which lacked JS; News; Groups; Webmaster Tools; Ads front end to some extent, etc. [1]
Winning back the Internet by building our own | ROAR Magazine
For Cubans, who were barred from connecting their own internets to the globally-networked Internet due to the US embargo, SNET provided everything you would expect to get through your computer, like news, games, blogs, social networking and more. It had all this even though it did not connect to the Internet we are most familiar with. Meanwhile, both Guifi and NYCMesh offer their users a combination of “intra-mesh services” and content for local residents similar to SNET along with more traditional Internet access, highlighting the fact that building our own internets is not an either-or proposition, nor a zero-sum game.
An ex-Googler's guide to dev tools
In short, the build system is often a big giant hairball, and one that you should be wary of trying to disentangle before you pick off the lower hanging developer productivity fruit. It may be tempting to tackle this earlier, because Blaze was worlds better than what you're using now and Google has even helpfully open-sourced a derivative of Blaze called Bazel. But Bazel is not Blaze—for one, it lacks a massive distributed build cluster that comes free alongside it—and the world outside of Google is not Google.
An ex-Googler's guide to dev tools
The most intractable part of the software development life cycle is often CI and the build system. This is because understanding the build often involves understanding every piece of the overall codebase in a fairly nuanced way. Speeding up the build is something that various people try to do over time, and so the build code accrues a growing set of hacks and optimizations until the point is reached where the number of people who actually understand enough about what is going on to make a change with zero negative consequences is very small.
An ex-Googler's guide to dev tools
As a new member of the team, you likely don't have the influence or authority to change all the tools your team uses. Moreover, you also lack knowledge—knowledge of how and why your new team behaves the way it does and why it uses its current set of tools. Simply copy-pasting whatever worked for Google is not necessarily going to work for your new team. So learn what is working for your new team along with what isn't.
Performance Matters • Hillel Wayne
Most of us aren’t writing critical software. But this isn’t “critical software”, either: nobody will suddenly die if it breaks. You just switch back to paper PCRs. But it could have saved lives. At scale, it could have saved people dying from PCR errors. It could have saved the person the EMTs couldn’t get to because they were losing an hour a week to extra PCR overhead. If it was fast enough to use.
AWS Cognito is having issues and health dashboards are still green
We hired an engineer out of Amazon AWS at a previous company. Whenever one of our cloud services went down, he would go to great lengths to not update our status dashboard. When we finally forced him to update the status page, he would only change it to yellow and write vague updates about how service might be degraded for some customers. He flat out refused to ever admit that the cloud services were down. After some digging, he told us that admitting your services were down was considered a death sentence for your job at his previous team at Amazon. He was so scarred from the experience that he refused to ever take responsibility for outages. Ultimately, we had to put someone else in charge of updating the status page because he just couldn't be trusted. FWIW, I have other friends who work on different teams at Amazon who have not had such bad experiences.
Essay: How do you describe TikTok? - Kyle Chayka Industries
Thanks for this experiment in critical writing, it's appreciated. Looking forward to more critiques of algorithms from an experiential viewpoint. Reviewing an algorithm seems to me like reviewing architecture, in that social media creates a sense of space within its platforms. You noted that TikTok feels like a canal, being close to one-dimensional (which is what makes it so pleasant). There's a careful control/management of the space which separates a well-curated feed from a lesser one. On TikTok, you can go forwards, or you can go backwards. Instagram used to be one-dimensional, but over time has sprawled into 4 or 5 dimensions, ruining it in my opinion. The algorithm has a difficult time dealing with the added complexity, and it's not very beginner-friendly anymore. Meanwhile, users tend to navigate along the dimensions they're already used to, and automated suggestions are treated as an intrusion. TikTok's success is its well-defined boundaries which give it better control over the experience. (I could comment about the American obsession with having "choice", but I'll shelve that one for now.)
How we designed Dropbox’s ATF - an async task framework - Dropbox
I can understand the need for a company to be constantly trying to add value to their product, but that tendency to change so much can easily cause you to lose sight of what made you popular in the first place. I use Dropbox personally to keep documents synced between my computer and my wife's and also to grab documents I need from the web if I'm on another computer. I occasionally share a folder if I need to give a large number of files to someone. I recently had a notification come up on the Dropbox taskbar icon and it popped up this huge window that looked like a massive Electron app. In the old days, there wasn't even a UI, just a context menu that also showed the state of the sync. For me, Dropbox provides the most benefit when it's not visible, running invisibly in the background doing its thing.
Geek Blight - Origins of the youtube-dl project
Last, but not least, tools like youtube-dl allow people to access online videos using only free software. I know there are not many free, libre and open source software purists out there. I don’t even consider myself one, by a long shot. Proprietary software is ever present in our modern lives and served to us every day in the form of vast amounts of Javascript code for our web browser to run, with many different and varied purposes and not always in the best interest of users. GDPR, with all its flaws and problems, is a testament to that. Accessing online videos using youtube-dl may give you a peace of mind that incognito mode, uBlock Origin or Privacy Badger can only barely grasp at.
No More Free Work from Marak: Pay Me or Fork This
Seriously. What's the point of open source if companies just steal it, build billion dollar industries on top, and then lock everything down? Apple is telling us we can't run our own software on their goddamned devices, yet they built their empire on open source. Look at Facebook, Google, Amazon. They've extracted all the blood they can and given us back scraps. AWS is repackaged software you pay more for. Yes, it's managed, but you're forever a renter. They've destroyed our open web, replaced RSS with DRM, left us with streaming and music options worse than cable and personal audio libraries. The web is bloated with ads and tracking, AMP is given preference, Facebook and Twitter are testing the limits of democracy and radicalizing everyone to cancel one another. Remember when the Internet was actually pleasant? When it was nice to build stuff for others to use? Stop giving your work away for free when the companies only take.
Technical debt as a lack of understanding
I've had to explain this to non-technical stakeholders many, many times over the years, and I always use the restaurant metaphor: If you run a commercial kitchen and you only ever cook food, because selling cooked food is your business -- if you never clean the dishes, never scrape the grill, never organize the freezer -- the health inspector will shut your shit down pretty quickly. Software, on the other hand, doesn't have health inspectors. It has kitchen staff who become more alarmed over time at the state of the kitchen they're working in every day, and if nothing is done about it, there will come a point where the kitchen starts failing to produce edible meals. Generally, you can either convince decision makers that cleaning the kitchen is more profitable in the long run or you can dust off your resume and get out before it burns down.
Technical debt as a lack of understanding
Software development looks a lot like evolution. The market and business requirements are the environment that weeds out the unfit software. Adapt or die. Codebases that are slow to adapt to outside changes are like species that are slow to adapt to selection pressures. Just as vestigial organs can burst from infection, companies can be killed because devs slowed down by messy code are unable to ship.
Technical debt as a lack of understanding
The ugly code can be dealt with. But we can't deal with the ugly environment. The most severe technical debt is the environment: the OS, toolchains, frameworks, and libraries that were fixed at the time development started. Updating the environment should be part of the cost of development, but we tend to ignore it for more immediate short-term gains, pushing the cost onto our future selves. Within a few years, the environment is too old to work with. We have to deal with bugs that were fixed years ago upstream, and reinvent features that are also present upstream. Five years pass and we seriously consider updating the environment, but since there were no updates along the way, existing code relies on old behaviours, so we would have to fix all of it; that brings no short-term gain, so the update is abandoned. Ten years pass and the software is dead.
Technical debt as a lack of understanding
The problem with analogies is that software is fundamentally new. It's not debt where you can just pay it off after the launch. It's not a mess where a cleaning crew can have it taken care of in a day or a week. It's not a structure that will collapse because you added one too many storeys. Software takes all the guardrails off of complexity. A swiss watch is a mechanical masterwork, but the complexity is limited because you have to fit the gears into a limited space. Everything else we deal with has some kind of pushback on complexity, with the possible exception of biological systems that take millions of years to change. Software can grow in complexity with no obvious bound. You can tackle any one particular bug with an extra branch to say "don't let this happen". But a gigabyte of branches is a hell of a lot of complexity. Software engineering is an attempt to wrangle that complexity through all kinds of strategies from "architecture" (another poor analogy) to type systems and OOP and FP and the actor model and everything else. Technical "debt" is really the mismanagement of complexity. It's hard to understand the costs because the costs are inherently unknown unknowns. If you mismanage complexity, then all estimates are meaningless because at any point you could hit a never-ending fractal of problems. It might be completely intractable to add any significant new feature. Developers want to ship features, call it a job well done and take some time off for Christmas. When working with technical debt, no matter how smart the developer is, it's really just luck of the draw who hits a fractal of problems and never finishes and who doesn't and converges on a solution (and when it's bad enough, the latter just never happens).
So you want to buy a farm?
Also, I suspect many of us here "spent" time learning programming as children/teenagers and honed it as early twenty-somethings. At those stages of life time is essentially free and unlimited. You can easily pull allnighters and 40 hour hacking weekends and 80 hour weeks - and you do it because it's exciting and fun, and it has only very minor opportunity costs - you might miss a school or college assignment deadline, or a few shifts at your minimum wage part time job. Your bedroom at your parents' house or your college dorm is paid for already (even if just by usurious student loans). Once you get to the "disillusioned with the damned tech industry" stage of your life though, you have responsibilities and rent/loans/bills to pay and probably family you need/want to spend time with and a circle of friends who're in the same stage of life who can't on zero notice order in pizza and Mountain Dew and hack from 6pm on Friday through to midnight Sunday catching only naps on the couch as needed. I reckon there's almost as much of a hill to climb for a "woodworker since junior high" looking at programming as a way out of a woodworking career they've become jaded with - as there is for a thirty-something software engineer dreaming of building timber boats for a living instead of being part of "The best minds of my generation are thinking about how to make people click ads." -- Jeff Hammerbacher (But yeah, you don't need to buy new timber when you accidentally "move fast and break things" as a programmer. On the other hand, at least the tools you buy as a woodworker will still work and be useful in a decade or century's time...)
Write code that is easy to delete, not easy to... — programming is terrible
If we see ‘lines of code’ as ‘lines spent’, then when we delete lines of code, we are lowering the cost of maintenance. Instead of building re-usable software, we should try to build disposable software.
Technical debt as a lack of understanding
You can’t expect people to be productive in something that was a culmination of rushed code, poorly understood requirements, and shortcuts made by people who no longer work there. At that point your technical debt balloon has popped, you are in possession of a toxic asset, and it’s time to pay the piper.
Technical debt as a lack of understanding
Knowledge management is so important in organizations, but they rarely undergo that critical step of reorganizing to reflect the current understanding. Need evidence? Take a look at your nearest corporate wiki. I can almost guarantee it’s a mess because most companies should never have wikis. Successful wikis, like Wikipedia, are powered by an army of editors and most organizations will never prioritize that much time or content strategy. Poorly managed knowledge leaves organizations with the memory of goldfish. I can’t tell you how many new product initiative meetings I’ve been in where no one remembers the meeting about the exact same thing from two quarters ago. It’s like Groundhog Day, but you’re having the same meetings over and over.
Technical debt as a lack of understanding
In a go-go-go product cycle, that loss of understanding begins to create problems that have literal and figurative costs. A general sense of confusion builds and builds. The developer economics are fairly simple to quantify; either you slow down and pay someone to refactor and document the code after every major iteration, or you pay every developer who works on the project until the end of time to stare at the code for a few hours and wonder what the hell is going on. That dumbfounded staring at the codebase compounds over time. Organizationally, you pay in velocity and turnover; talented people are going to leave after a few rounds of bullshit.
Microservices – architecture nihilism in minimalism's clothes
In my opinion, microservices are all the rage because they're an easily digestible way for doing rewrites. Everyone hates their legacy monolith written in Java, .NET, Ruby, Python, or PHP, and wants to rewrite it in whatever flavor of the month it is. They get buy in by saying it'll be an incremental rewrite using microservices. Fast forward to six months or a year later, the monolith is still around, features are piling up, 20 microservices have been released, and no one has a flipping clue what does what, what to work on or who to blame. The person who originally sold the microservice concept has left the company for greener pastures ("I architected and deployed microservices at my last job!"), and everyone else is floating their resumes under the crushing weight of staying the course. Proceed with caution.
Microservices – architecture nihilism in minimalism's clothes
Microservices are popular because managing large teams is a pain in the ass and creating a small team to spin off some new business case is really easy to manage. You get budget, you create the new team, if it sucks, reorganize or fire the team (and offload the services to other teams). I'm telling you, it's all Conway's Law. We literally just don't want to think about the design in a complex way, so we make tiny little apps and then hand-wave the complexity away. I've watched software architects get red in the face when you ask them how they're managing dependencies and testing for 100s of interdependent services changing all the time, because they literally don't want to stop and figure it out. Microservices are just a giant cop-out so somebody can push some shit into production without thinking of the 80% maintenance cost.
Keeping Netflix Reliable Using Prioritized Load Shedding
I think you’re talking about SPAs specifically. Many have race conditions in frontend code that are not revealed on fast connections, or when all resources load with the same speed and consistency. Open the developer console next time it happens; I bet you’ll find a “foo is not a function” or similar error caused by something not having initialized yet and the code not properly awaiting it. If an SPA's core loop errors out, loading will halt, or even a previously loaded or partially loaded page will become blank or partially so. Refreshing it will load already-retrieved resources from cache and often “fixes” the problem.
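The failure mode described above can be sketched in a few lines. This is a hypothetical, minimal example (the `loadConfig` function and the simulated delay are illustrative, not taken from any real SPA): code that fires off async initialization without awaiting it reads state that isn't there yet, while the awaited version is safe.

```typescript
// A hypothetical piece of SPA startup state, populated asynchronously.
type Config = { apiUrl: string };

let config: Config | undefined;

async function loadConfig(): Promise<void> {
  // Simulate a slow network fetch; on a fast connection this delay
  // shrinks and can accidentally hide the race.
  await new Promise((resolve) => setTimeout(resolve, 10));
  config = { apiUrl: "https://example.invalid/api" };
}

// Buggy: kicks off initialization but does not await it, so `config`
// is still undefined when we read it.
function buggyStart(): string {
  loadConfig(); // promise ignored -- the race condition
  return config ? config.apiUrl : "config is not ready";
}

// Fixed: await initialization before touching the state.
async function fixedStart(): Promise<string> {
  await loadConfig();
  return config!.apiUrl;
}
```

In a real SPA the unawaited read usually surfaces later as the "foo is not a function" error the comment mentions, because a method is called on the still-undefined object.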
If Not SPAs, What?
I think there is another layer to that conversation. Frameworks become bureaucratic and boring because they are developed by large teams for large teams. Most developers are working on small projects and need more fun and less maintaining of huge amounts of boilerplate code that recreates the browser. The framework that I feel makes development less ugly is Svelte. But still, I really don't like the idea of heavy client-side websites. It really makes everything more complicated and the user's device slower. I love the simplicity of Turbolinks, I love how clean Svelte code is, and I am trying to figure out the "glue".
Sharp tools for emergencies and the --clowntown flag
This seems like a good compromise to me. The tools that provide safety eventually fail, but you need social pressure to avoid devs saying ‘f*** it. We’ll do it live.’ every day.
Sharp tools for emergencies and the --clowntown flag
The last thing you want is to normalize the use of a safety override. Best practices in software aren't usually "written in blood" like they are with "real" engineering disciplines, but they still need to be considered. The number of outages, privacy leaks, data loss events and other terrible things could be greatly reduced if we could just learn from our own collective history.
Sharp tools for emergencies and the --clowntown flag
In particular, "clowntown" made it out of the spoken realm and back into the computers in the form of command-line arguments you could pass to certain tools. By using them, you were affirming that whatever you were asking it to do was in fact broken, crazy, goofy, wacky, or maybe just plain stupid, but you needed it to happen anyway. It was a reminder to stop and think about what you were doing, and why you had to resort to that flag in the first place. Then, when the fire was out, you should go back and figure out what can be done to avoid ever having to do that again.
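The pattern described above is easy to reproduce in any internal tool. Here is a minimal, hypothetical sketch (the flag name comes from the article; the function and messages are made up for illustration): the tool refuses to run a dangerous operation unless the operator explicitly affirms it with the override flag, and even then it says so loudly for the post-incident review.

```typescript
// Hedged sketch of an override-flag gate, in the spirit of "--clowntown".
// The task itself is a placeholder; only the gating pattern matters.
function runDangerousTask(args: string[]): string {
  if (!args.includes("--clowntown")) {
    // Default behavior: refuse, and tell the operator what the override is.
    return "refusing: this operation is unsafe; pass --clowntown to proceed";
  }
  // The flag is an affirmation, not a fix: record it loudly so someone
  // later asks why the override was needed at all.
  return "proceeding with unsafe operation (--clowntown acknowledged)";
}
```

The point of the design is the friction: typing the absurd flag forces the moment of reflection the comment describes, and grepping logs for it tells you which workflows still need a safe path.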
Software correctness is a lot like flossing • Hillel Wayne
One reason I don’t like the “developers don’t care” excuse is that it’s too nihilistic. If that’s the case, there is nothing that we can do to encourage people to use better correctness techniques. Changing “developers don’t care” would mean changing the fundamental culture of our society, which is way above our pay grades. On the other hand, if adoption is a “flossing problem”, then it’s within our power to change. We can improve our UI/UX, we can improve our teaching methods, and we can improve our auxiliary tooling.
Surviving disillusionment - spakhm's newsletter
If you work in technology, the monastery can be distant and vague, whereas Paul from marketing wants to circle back with you here and now. Then, as you circle back again and again, the monastery recedes further into the distance, and the drudgery appears closer and closer, until it occupies your entire field of vision and you can't see anything else.
Surviving disillusionment - spakhm's newsletter
But sitting at a mandated retrospective or mindlessly gluing APIs together doesn't put me over the moon. It makes me feel the opposite (whatever the opposite of being over the moon is). And so, engineers are faced with two realities. One reality is the atmosphere of new technology, its incredible power to transform the human condition, the joy of the art of doing science and engineering, the trials of the creative process, the romance of the frontier. The other reality is the frustration and drudgery of operating in a world of corporate politics, bureaucracy, envy and greed— a world so depressing, that many people quit in frustration, never to come back.
Surviving disillusionment - spakhm's newsletter
Once you observe the darker side of human nature in the technology industry, you cannot forget or unsee it. The subsequent cynicism can be so disheartening that the romance of the computer revolution is beaten out of people completely. I've met many engineers with extraordinary talent who decided to stop making software. They wanted to program computers all their lives. They were born for it. After spending six, eight, ten years in the industry, they quit for good. Now they're running breweries and hydroponic farms, with no desire to ever again touch a compiler, let alone get back into the fray.
Be prolific
It's the same with software I imagine, for several reasons.

1. Writing more code (and being conscious of it) makes you a better engineer. You'll run into more issues that you will fix and, hopefully, remember.
2. If you'd take the art example and say "Paint 20 cubist pieces", and then transfer that to "Write 20 authentication servers", each iteration you'll benefit from what you learned and be able to 'clean up' the code. It's essentially writing 20 PoCs where each PoC improves on the last one.

EDIT: Writing more versions also allows you to explore more ideas without fear. If you have to write "one good version" you'll be less prone to exploring 'exotic' ideas. So you'd benefit from that as well.
Forcing Functions in Software Development
At an agency, we used to run our web apps on some crappy '08 model laptops with a gig of memory and outdated browsers. If the webapp ran there without major hitches, it was considered good enough. It made everyone on the team think hard about optimizing even before a single line of code was written. It really did force excessive simplicity and not jumping on new libs/frameworks just because we can.
You Reap What You Code
When we first adopt a new piece of technology, the thing we try to do—or tend to do—is to start with the easy systems first. Then we say "oh that's great! That's going to replace everything we have." Eventually, we try to migrate everything, but it doesn't always work. So an approach that makes sense is to start with the easy stuff to prove that it's workable for the basic cases. But also try something really, really hard, because that would be the endpoint. The endgame is to migrate the hardest thing that you've got. If you're not able to replace everything, consider framing things as adding to your system rather than replacing. It's something you add to your stack. This framing is going to change the approach you have in terms of teaching, maintenance, and in terms of pretty much everything that you have to care about, so you avoid the common trap of deprecating a piece of critical technology with nothing to replace it. If you can replace a piece of technology then do it, but if you can't, don't fool yourself. Assume the cost of keeping things going.
You Reap What You Code
The curb cut effect was noticed as a result of the various American laws about accessibility that started in the 60s. The idea is that to make sidewalks and streets accessible to people in wheelchairs, you would cut the part of the curb so that it would create a ramp from sidewalk to street. The thing that people noticed is that even though you'd cut the curb for handicapped people, getting around was now easier for people carrying luggage, pushing strollers, on skateboards or bicycles, and so on. Some studies saw that people without handicaps would even deviate from their course to use the curb cuts. Similar effects are found when you think of something like subtitles, which were put in place for people with hearing problems. When you look at the raw number of users today, there are probably more students using them to learn a second or third language than people using them with actual hearing disabilities. Automatic doors that open when you step in front of them are also very useful for people carrying loads of any kind, and are a common example of doing accessibility without "dumbing things down." I'm mentioning all of this because I think that keeping accessibility in mind when building things is one of the ways we can turn nasty negative surprises into pleasant emerging behaviour. And generally, accessibility is easier to build in than to retrofit. In the case of the web, accessibility also lines up with better performance.
Knolling | Andri is…
An efficient team is, invariably, a team that keeps the code tidy and all external aspects of it up to date. Always be knolling. This does not directly contribute to the solution or success of the current task, but the current task is not your entire job responsibility. In the long run, your job is to complete tasks consistently and in accordance with specifications. If you’re held up by ancillary tasks such as upgrading dependencies or unwinding an abstraction that was meant to solve duplication that turned out to be incidental, then you have failed to keep a tidy system.
The open source paradox - <antirez>
Just as a writer will do her best when writing the novel that, maybe, nobody will pay a single cent for, and not when doing copywriting work for a well-known company, programmers are likely to spend more energy on their open source side projects than during office hours, writing another piece of a project they feel is stupid, boring, and pointless.
How To Be An Anti-Casteist
The strict social hierarchical dynamics of Indian culture is damaging to a lot of workplaces. 2nd generation Indians are great. The Indians that are from lower castes or from oppressed groups like Christians or Muslims are really great. But the higher castes are extremely insular, and treat anyone of any race poorly. This might be taboo, but every time I see a situation where there are multiple Indians in a reporting chain, I run. If you have an Indian above and below, you will be bypassed on work, undermined, and given absurd directions, almost designed to drive you out. Then there is the case where if an Indian gets into management, they will start filling everything with their friends. Other management positions, they will start fighting to bring in some contractors from some place like Infosys. It's the death knell of the IT division at the company. Being on a team where you are the only non-Indian means you will be an outcast. You'll not be invited to meetings, they'll talk in their native tongue to exclude you. I've been the only white guy working with Chinese, and they don't do that. I've been in similar situations with Africans and African Americans and they will welcome you right along. This is the truth, no matter how politically incorrect it is, and every time you walk into an IT office and there are 80% Indians, that's the reason.
Distance traveled |
There are so many forces pushing us to move as fast as possible, but little about doing good work is about getting places as fast as we can.
Is revenue model more important than culture?
This is why I always gravitate towards software projects that are centered around making money (within ethical bounds, of course). The closer to the bottom line my code is, the larger the sales and support team is around my code, and the more customers there are (real paying customers, not internal employees who like to be called customers) using my code, the better. It may sound overly hard-nosed and cynical to some people, but I find it's just the opposite. The drive to make more money is the only thing that trumps every other petty motivation people follow at work. It trumps favoritism, empire building, and intra-office rivalries. It trumps good ol' boys networks and tech bro networks. Money brings people into the same room who would never normally be in a room together, and they do it willingly. It forces people in power to listen to small fries. While money corrupts on an individual level, it purifies on an institutional level. Its universally accepted value allows a variety of individual motives to flourish. This seems to change once a company goes public and hits a certain size, as the flow of money becomes less and less tied to actual sales and consumer behavior and more and more based on financial engineering and stock price.
Update on Mozilla WebThings
Mozilla seems to be really underperforming in upper management - all of these initiatives that have failed have resulted in engineering layoffs. When will the business unit leaders responsible for repeated failure be let go and replaced?
We need physical audio kill switches
I've been thinking the same about power switches lately. If I turn a flashlight, or an old radio on or off, I flip a switch and get the result I want. With my 65 EUR gamepad, or 300 EUR headphones, I hold a button and wait several seconds for the result. Why has UX regressed so much in these areas?
“I no longer build software”
Add me to the woodworking ex-developers. I built a website that pays the bills, and now I have a lot of time on my hands. I am finishing my first piece of furniture today. It's pretty scary to work without an undo button. The physical world isn't just instructions, but movements. A little twitch can ruin a cut. A clumsy movement can dent a piece of wood you spent an hour sanding. You truly experience the meaning of "measure twice, cut once". Resources also feel tangibly limited. You can't just spin up another server, you must drive across town to buy more lumber. I still enjoy coding though. My passion for it returned once I could do it on my own time, without stakeholders, sprints, meetings, deadlines or even schedules. I sit down and work until the coffee wears off, then go do something else. It's a hobby again. I don't think programming is the problem. Anything you do 40 hours a week for other people will get to you just the same. Programming is a pretty sweet gig, all things considered.
React is becoming a black box
This is a symptom of a lot of developers thinking they have to write code exactly like everyone else (or at least, strictly adhere to "best practices"). It's a very subtle disease, but I've noticed it again and again over the years. Reading between the lines, this is a criticism of hooks if they're viewed as a wholesale replacement for classes; from experience I'd argue they're not—they're just a convenient tool for simplifying common patterns. I'd imagine the author knows that to be the case and instead of just using classes where appropriate (or where they wanted), they had to rationalize using hooks because of the aforementioned "but everybody else is using hooks" problem. I suffered from this behavior for years before I realized it was impeding my work. The term that came to mind for the phenomenon was "The Invisible Developer:" a non-existent developer sitting over your shoulder always judging you for your programming choices. That developer doesn't exist. If instead how "in fashion" your code is is the standard on your team: you're on the wrong team.
Why Johnny Won't Upgrade · Jacques Mattheij
More often than not automatic updates are not done with the interest of the user in mind. They are abused to the point where many users - me included - would rather forego all updates (let alone automatic ones) simply because we apparently can not trust the party on the other side of this transaction to have our, the users', interests at heart.
Don't marry your design after the first date - Tom Gamon
The more time you spend in the problem space, the more information you can gather and the better decision you can make when the time comes. For example, you can probably start working on your domain logic without knowing how the data is going to be served to the client, or what particular flavour of database you are going to use. Once you have chosen a database, by carefully encapsulating the access logic, if it turns out that this database isn’t the one, it is much easier to part ways amicably.
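The "carefully encapsulating the access logic" advice can be made concrete with a small sketch. This is a hypothetical example (the `UserRepository` interface and all names are invented for illustration): domain logic depends only on a narrow interface, so swapping databases later means writing one new adapter rather than touching the domain code.

```typescript
// The domain's view of storage: a narrow, database-agnostic interface.
interface UserRepository {
  findEmail(id: number): string | undefined;
}

// Domain logic knows nothing about which database sits behind the repo.
function greetingFor(repo: UserRepository, id: number): string {
  const email = repo.findEmail(id);
  return email ? `Hello, ${email}` : "Hello, guest";
}

// One adapter per backend. Parting ways with a database means replacing
// this class, and greetingFor never notices.
class InMemoryUserRepository implements UserRepository {
  private rows = new Map<number, string>([[1, "ada@example.invalid"]]);
  findEmail(id: number): string | undefined {
    return this.rows.get(id);
  }
}
```

Deferring the database choice then amounts to starting with something trivial like the in-memory adapter, and amicably "parting ways" later is a matter of adding, say, a Postgres-backed implementation of the same interface.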
Show HN: HyScale – An abstraction framework over Kubernetes
This a hundred times. Do yourself a favour and use Dhall/Cue/Jsonnet to develop some abstractions that fit your workload and environment. There is not much value proposition in a tool like this if you can use a slightly lower-level, more generic tool (like a configuration-centric programming language, which is actually a full-fledged programming language) to accomplish the same goal in a more flexible and more powerful fashion, one that leaves you space for evolution and unforeseen structure changes. The idea of tools mandating what 'environments' are is absurd, as it's pretty much always different for everyone (and that's good!).
The software industry is going through the “disposable plastic” crisis
In the micromanaged world of agile, ticket velocity is more important than any other metric. At least everywhere I've worked. Open source is the only place I regularly see high quality code. There the devs are allowed to love their code like pets not cattle.
The software industry is going through the “disposable plastic” crisis
The lie we tell ourselves is that the quality of code matters to non-engineers. It seems it doesn't. The most uncomfortable truth of our field is that there is no floor for how bad code can be, yet still make people billions of dollars. Because that's the outcome everyone else is seeking - making money. They don't care how good the code is. They care about whether it's making money or not.
The software industry is going through the “disposable plastic” crisis
People blame developers but it's all driven by a product mentality that favors rapid iterations and technical debt to run business experiments on customers. Slow-and-steady, carefully written software isn't tolerated within many product orgs these days.
Dear Google Cloud: Your Deprecation Policy Is Killing You
It is a total hassle to keep up with Googlers changing everything constantly. It's not just GCP it's every platform they control. Try keeping a website on the right side of Chrome policies, a G Suite app up, a Chrome extension running. Thousands of engineers chasing promotions by dropping support for live code. If it was their code they wouldn't do it. The org is broken. If you want to see what mature software support looks like, check out Microsoft. Win32 binaries I wrote in college still run on Win 10. Google looks unimpressive by comparison. But they all got promoted!
The day I accidentally built a nudity/porn platform
- anything that allows file upload -> porn / warez / movies / any form of copyright violation you care to come up with
- anything that allows anonymous file upload -> child porn + all of the above
- anything that allows communications -> spam, harassment, bots
- anything that measures something -> destruction of that something (for instance, Google, the links between pages)
- any platform where the creator did not think long and hard about how it might be abused -> all of the abuse that wasn't dealt with beforehand
- anything that isn't secured -> all of the above

Going through a risk analysis exercise and detecting the abuse potential of whatever you are trying to build, prior to launching it, can go a long way towards ensuring that doesn't happen. React very swiftly to any 'off label' uses for what you've built, shut down any form of abuse categorically, and you might even keep it alive. React too slowly and before you know it your real users are drowned out by the trash. It's sad, but that's the state of affairs on the web as we have it today.
Stefan Hajnoczi: Why QEMU should move from C to Rust
Rust has a reputation for being a scary language due to the borrow checker. Most programmers have not thought about object lifetimes and ownership as systematically and explicitly as required by Rust. This raises the bar to learning the language, but I look at it this way: learning Rust is humanly possible, writing bug-free C code is not.
In spite of an increase in Internet speed, webpage speeds have not improved
> The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive.

Sadly, this is probably where the core of the problem lies. "It makes code more readable and intuitive" is NOT the end goal. Making your job easier or more convenient is not the end goal. Making a good product for the user is! Software has got to be the only engineering discipline where people think it's acceptable to compromise the user experience for the sake of their own convenience. I don't want to think too closely about data structures, I'll just use a list for everything: the users will eat the slowdown, because it makes my program easier to maintain. I want to program a server in a scripting language, it's easier for me: the users will eat the slowdown and the company budget will eat the inefficiency. And so on.
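The "just use a list for everything" jab can be made concrete with a small sketch (written in Rust here, though the point is language-agnostic): a membership check against a Vec is a linear scan, while a HashSet lookup is constant time on average, and the user pays the difference on every request.

```rust
use std::collections::HashSet;

// Membership test done two ways. For a handful of items the difference
// is invisible; across millions of lookups the user feels it.
fn in_vec(haystack: &[u32], needle: u32) -> bool {
    haystack.contains(&needle) // walks the whole slice in the worst case: O(n)
}

fn in_set(haystack: &HashSet<u32>, needle: u32) -> bool {
    haystack.contains(&needle) // hashes once, probes a bucket: O(1) average
}

fn main() {
    let ids: Vec<u32> = (0..1_000).collect();
    let id_set: HashSet<u32> = ids.iter().copied().collect();

    // Same answers, very different cost profiles as n grows.
    assert_eq!(in_vec(&ids, 999), in_set(&id_set, 999));
    assert!(!in_set(&id_set, 1_000));
}
```

The maintainability argument and the performance argument are not actually in tension here; picking the right structure costs one line.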
Laws of UX
It feels like modern website design conflates "better UX" with "surface-level attractiveness". Craigslist is a great example; original reddit is another. My UI/UX designer friend considers original reddit to be, quote, "ugly and horrible", and while there definitely could be some improvements, the reddit redesign (which I know my friend would come up with something similar to) is quite literally orders of magnitude worse, but is aesthetically "nicer". Original reddit looks ugly, but everything you want from an interface is there once you get through a 3-minute learning curve: information density, enough white space (but not too much), consistent behaviour, speed, respect for scrolling, etc etc. Where did we go "wrong" with web design, that what we have now is seemingly worse? And what does a good balance of "actually functionally useful" and "aesthetically pleasing" look like?
The Fear Of Missing Out - Christine Dodrill
Infinite scrolling and live updating pages that make it feel like there's always something new to read. Uncountable hours of engineering and psychological testing spent making sure people click and scroll and click and consume all day until that little hit of dopamine becomes its own addiction. We have taken a system for displaying documents and accidentally turned it into a hulking abomination that consumes the souls of all who get trapped in it, crystallizing them in an endless cycle of checking notifications, looking for new posts on your newsfeed, scrolling down to find just that something you think you're looking for.
Why are CEOs failing software engineers?
CEOs can't really communicate with developers or designers if they have no practical experience with development or design. Lack of mutual respect can make it very hard to find a balance between giving enough creative freedom and setting deadlines. They may either give too much creative freedom to avoid problems (expensive in the short term), or not give enough, to play it safe (toxic and expensive in the long term).
The Trick
The problem is the relative social status between the client and Geon. If the client had heart palpitations and Geon was a cardiologist, this wouldn't happen. You would not have Mr Alpha explaining to the doctor how he needs to do the scan and the surgery, and being very cross when he didn't get his way, even though Mr Alpha probably cares more about his heart working than about a user interface. The same goes for pilots and other professionals: they get less crap than they would if they didn't have some sort of status that prevents most of the I-know-best crowd from sticking their heads in. For some reason, software doesn't have that feel to it. In many places, it's a sort of implementation detail, where the generals have already decided the strategy and the devs just have to follow orders. Some cultural change around what people think devs do, and what you can say to them, would be good.
Why Tacit Knowledge is More Important Than Deliberate Practice
And so if you are a programmer, or designer, or businessperson, an investor or a writer reading about deliberate practice, you may be asking: “Well, what about my field? What if there are no established pedagogical techniques for me?” And if you have started to ask this question, then you have begun travelling a more interesting path; this is really the right question to ask. The answer, of course, is that the field of NDM is a lot more useful if you find yourself in one of these fields. The process of learning tacit knowledge looks something like the following: you find a master, you work under them for a few years, and you learn the ropes through emulation, feedback, and osmosis — not through deliberate practice. (Think: Warren Buffett and the years he spent under Benjamin Graham, for instance). The field of NDM is focused on ways to make this practice more effective. And I think much of the world pays too much attention to deliberate practice and to cognitive bias research, and not enough to tacit knowledge acquisition.
Anxiety Driven Development
I think the serenity prayer, sans unnecessary theological content, is relevant here. Grant me the serenity to accept the things I cannot change, the courage to change the ones I can, and the wisdom to know the difference. For a lot of software products, there is no winning in the long run. You've got good product-market fit and customer loyalty, but your code base is a huge mess and the hard technical problems are solved by third-party libraries. Your tech is a liability, and eventually someone with better tech will be smart enough to study your customers, or the students who will eventually replace your inevitably-retiring customers on the front lines, and push adoption going forward. And this is okay. The advantage corporations have over government institutions is that they can be created and destroyed with much less friction. If you're lucky, your growth curve looks like a double-sigmoid table-top. Probably it looks like an asymmetric Gaussian. What it doesn't look like is an exponential. Understand where your product is in its life-cycle, and maximize ROI.
Do you feel JS developers are pushing a bit too hard to use NodeJS everywhere? | Lobsters
There’s a huge funnel problem for computer science at the moment. Go and Rust have some pretty serious evangelical marketing teams, but they are a drop in the ocean compared to the emergent ultramarketing behemoth that feeds JavaScript to the new developer. Part of this is that JS is constantly “new and modern” – with the implication that it’s a bandwagon that you’ll be safe on, unlike some of the old cobwebbed bandwagons. Constant change and “improvement” is itself a safety generator. Another part is that it’s so easy to get to hello, webpage. The sweet spot on the racket is enormous. Every computer including your phone comes with at least 1 and usually several JS interpreters. Frictionlessness drives adoption. The problem is that JS is, violently, garbage for most purposes. It’s a local maximum that has essentially everyone trapped, including the next generation. It’s not clear how we escape from this one.
Microsoft Defender SmartScreen is hurting independent developers
Application signing is a mafia protection racket, plain and simple. If you aren't signed by an "authority", every user is automatically told, by default, that your code is unsafe until you pay money. It is 100% analogous to thugs walking into your store saying "It would be a real shame if something were to happen to scare people away." The message is "We Protected You" and "Unsafe". WHY? Because "WE don't recognize" it. Application signing certificates cost money. Always. And if you're making something for free, either out of the goodness of your heart or because you like making things, that money has to come out of your pocket just so the thugs don't stand in front of your door with bats. Nobody should be ok with that. AND FUN FACT: malicious or incompetent actors can and do also pay money.
Rust: Dropping heavy things in another thread can make your code 10000 times faster | Lobsters
I would say that this is a kind of premature optimization in 99.9% of cases. Where are we going as software developers if we have to spawn a thread just to release memory? Maybe I’m just old fashioned, or maybe I’m just shocked.
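For context, the pattern being debated looks roughly like this minimal Rust sketch (the helper name `drop_in_background` is mine, not from the article): ownership of a heavy value is handed to a spawned thread, so the expensive deallocation happens off the caller's critical path.

```rust
use std::thread;

// Freeing a huge structure can take real time (walking millions of
// allocations), so the hot path hands ownership to a background thread
// and returns immediately; the drop happens off the critical path.
fn drop_in_background<T: Send + 'static>(heavy: T) {
    thread::spawn(move || drop(heavy));
}

fn main() {
    // A deliberately allocation-heavy value.
    let big: Vec<String> = (0..100_000).map(|i| i.to_string()).collect();
    drop_in_background(big); // returns without waiting for the free
}
```

Whether this is a sensible trick or a symptom of a deeper problem is exactly what the commenter is questioning; the thread-spawn itself also has a cost, so it only pays off when the drop is genuinely expensive.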
You can't tie your shoes while running | Fatih's Personal Blog
At this point you may be on board with everything I said, but you’re still reluctant to stop the world and ignore business needs until you’re done. Unfortunately, you can’t tie your shoes while running. I’m not saying that you should do your improvements secretly; communicate this need to the business. Technical debt is a reality of the world, and it can slow development to a halt. It’s your responsibility to take the time to pay it back and increase the speed; don’t expect it to come from the business. One reason I believe the boy scout rule sounds attractive is that in a large team, it’s hard to communicate a best practice to everyone involved. You know in your heart that doing things the current way is bad for everyone, but you don’t feel up to the task of getting everyone on board. Maybe because there are too many people, maybe you don’t feel senior enough. So you just sweep around your own front door and feel like you made a positive change. But remember: the next person who reads your code will see two different ways of doing the same thing, and they will be confused.
You can't tie your shoes while running | Fatih's Personal Blog
It’s easy to imagine that if you keep doing the improvement as you touch the code, at some point you’ll cover the whole codebase. This assumption is wrong. Some parts of the code are never touched until the project is rewritten or goes out of commission. We don’t care about what will happen eventually; we care about making the code better now. Even if it were possible to apply an improvement incrementally over the lifetime of a project, it still wouldn’t make sense. There won’t be a single improvement over that lifetime, there will be a bunch of them at the same time, and while it can be possible to keep one in mind, it’s not humanly possible to juggle many of them. Your codebase will be a graveyard of many ways of doing the same thing. One underrated quality of every codebase is consistency, the law of least surprise. At any point in time, you want everything to be consistent. Business requirements are hard enough already; you don’t want to take on more challenges by adding different paradigms into the mix.
The best tool for the automation job
Do you believe in the abilities of your tech team? Do you believe in your organization’s ability to train and develop talent? If yes, then finding developers shouldn’t be a problem – just hire smart junior developers and train them well. If not, it’s time for some organizational soul-searching. If you’re a startup and don’t have the time – you hopefully have a senior person already on your founding team. Take a couple of months to train some smart junior people, and you’ve tripled or quadrupled your dev group for the cost of one or two seniors. And you’ve developed a training culture and infrastructure, so you can hire and train more juniors much more easily. You’ve traded a small group of senior devs for a) the staff you needed, b) a learning, improvement-based culture, and c) a much easier path to more staff in the future.
Where Did Software Go Wrong? | Jesse Li
Software cannot be divorced from the human structures that create it, and for us, that structure is capitalism. To quote Godfrey Reggio, director of Koyaanisqatsi (1982), “it’s not the effect of, it’s that everything exists within. It’s not that we use technology, we live technology. Technology has become as ubiquitous as the air we breathe, so we are no longer conscious of its presence” (Essence of Life 2002).
Where Did Software Go Wrong? | Jesse Li
These examples give us a decent idea of what software is good for. On its own, it never enables anything truly new, but rather changes the constant factors of speed and marginal cost, and raises the barrier for participation arbitrarily high. Once the software train begins to leave the station, we have no choice but to jump and hang on, lest we get run over or left behind—and we are not sure which is worse.
Where Did Software Go Wrong? | Jesse Li
For many of us fortunate enough to stay home during the coronavirus outbreak, our only interface with the world outside our families and homes—the relays of connection between us, our families, communities and societies—has been filtered through our screens and earbuds. It is apparent now more than ever exactly what software does for us, and what kinds of inequalities it reinforces. Through Instacart, Amazon Fresh, and other grocery delivery services, we can use an app to purchase a delivery driver’s body for an hour to expose themself to the virus on our behalf. Unsatisfied with even this, some developers have written scripts to instantly reserve the scarce delivery slots on these services. One developer wrote to Vice’s Motherboard: “I designed the bot for those who find it extremely inconvenient in these times to step out, or find it not safe for themselves to be outside. It is my contribution to help flatten the curve, I really hope this’ll help reduce the number of people going out” (Cox 2020). Is that right? Does a bot really reduce the number of people going out, or does it merely change the demographics of who gets to stay home, favoring those with the resources and technical skills to run a Python script and Selenium WebDriver? With a constant and limited number of delivery slots, Joseph Cox points out that these bots create “a tech divide between those who can use a bot to order their food and those who just have to keep trying during the pandemic” (2020).
Where Did Software Go Wrong? | Jesse Li
And that is exactly it: in the modern world, our social interactions, our devices, governments, and markets are circulations and flows of the same realities under the same rules. Our software creates new problems—problems that we’ve never had before, like fake news, cyberbullying, and security vulnerabilities—and we patch them over with yet more layers of code. Software becomes the quasi-cause of software. These are echoes of the same voices in a positive feedback loop, growing louder and less coherent with each cycle—garbage in, garbage out, a thousand times over.
The State of Go
Not only that, but rarely am I supporting a library that I'm the sole developer on. Go takes away so much "individuality" of code. On most teams I've been on with Python and Java, I can open up a file and immediately tell who wrote the library based on style and other such tells. It's a lot harder with Go, and that's a very good thing.
Where Did Software Go Wrong? | Jesse Li
Every time we dive into a codebase, speak with a mentor, take a course, or watch a conference talk, we are deliberately adding new voices to the little bag of voices in our mind. This is not purely a process of consumption: in internalizing voices, we form counter-words, mentally argue with them, and ventriloquize them through our own work—in a word, we engage in a dialogue. Next time you settle down to read some code, listen carefully for the voices inside the code and the voices inside your mind, however faint they sound. I can hear the voice of a senior engineer from my last job every time I write a type definition.
Where Did Software Go Wrong? | Jesse Li
Software is at once a field of study, an industry, a career, a process of production, and a process of consumption—and only then a body of computer code. It is impossible to separate software from the human and historical context that it is situated in. Code is always addressed to someone. As Structure and Interpretation of Computer Programs puts it, “programs must be written for people to read, and only incidentally for machines to execute” (Abelson et al. 1996). We do not write code for our computers, but rather we write it for humans to read and use. And even the purest, most theoretical and impractical computer science research has as its aim to provoke new patterns of thought in human readers and scholars—and these are formulated using the human-constructed tools of mathematics, language, and code. As software engineers, we pride ourselves in writing “readable” or “clean” code, or code that “solves business problems”—synonyms for this property of addressivity that software seems to have. Perhaps the malware author knows this property best. Like any software, malware is addressed to people, and only incidentally for machines to execute. Whether a sample of malware steals money, hijacks social media accounts, or destabilizes governments, it operates in the human domain. The computer does not care about money, social media accounts, or governments; humans do. And when the malware author obfuscates their code, they do so with a human reader in mind. The computer does not care whether the code it executes is obfuscated; it only knows opcodes, clocks, and interrupts, and churns through them faithfully. Therefore, even malware—especially malware—whose code is deliberately made unreadable, is written with the intention of being read.
ROI in companies that decided to switch to Rust
I'm CTO of a legal tech firm, Clausehound, and we're almost fully migrated to Rust (from a Frankenstein's monster of WordPress PHP). We've built a web application that organizes legal language, bringing a lot of software-thinking (e.g. git-like versioning of contract drafts) to law, and the clarity demanded by Rust has been a huge benefit. Our PaaS offering is a GraphQL API that lets you explore knowledge about contracts, so there are a lot of very strict relationships defined that Rust has been perfect for, vs the willy-nilly, force-everything-into-a-string-or-hashmap approach that PHP forced on us.

The learning curve was pretty steep, and there's no way we can afford any devs who come with Rust experience already, so we've had to do lots of education in-house. Ownership has, unsurprisingly, been the big concept to teach. The flip side is that the more mature our product becomes, the more good examples devs can find throughout the codebase, because odds are someone has already used a similar approach to borrowing to what they need.

I'm fortunate in that I'm in a position to make the decision myself, but I can see a huge drawback at some organizations. Rust is, in many ways, a major shift for some companies, and often people organize themselves around valuing abstractions instead of the value an abstraction provides. E.g. I can't think of a single test we were running on our PHP that even applies to Rust, since every case we were checking (and many more) is enforced at compile time. A lot of organizations are weird about testing: the purpose of testing isn't really to find bugs, it's to maximize user satisfaction and minimize risk by finding where expected behaviour differs from actual. But often, managers will look at total bugs found, or worse, total tests written, as a success metric.
I can promise you that when you don't need to write PHPUnit, Jest, etc. tests just to make sure a variable actually is what you say it is, you'll find fewer bugs in testing and have a harder time writing lots of tests. Tests are just one easy example. Every org is going to have a bunch of metrics they care about a lot that won't make half as much sense on Rust. You're going to need to do a lot of work (attending exec meetings, reading sales materials, etc.) to find the places where you can match the Rust ROI to what they're measuring. You may need to question many of the metrics themselves, which is usually a big uphill battle. If you'd like to chat about it, I'm happy to talk.
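As a hedged illustration of "tests just to make sure a variable is what you say it is" becoming unnecessary (the names here, `Money` and `Currency`, are invented for the example, not Clausehound's code): in Rust the compiler rejects passing a bare integer or string where a `Money` is expected, so that whole class of PHPUnit assertions has nothing left to check.

```rust
// In a dynamic language you might write a test asserting that `amount`
// is an integer and `currency` is one of a few known strings. Here the
// type system enforces both: mixing in a raw number or an unknown
// currency simply does not compile.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Currency {
    Cad,
    Usd,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct Money {
    cents: i64,
    currency: Currency,
}

// The one remaining runtime rule (same-currency addition) is explicit.
fn add(a: Money, b: Money) -> Result<Money, String> {
    if a.currency != b.currency {
        return Err("cannot add different currencies".into());
    }
    Ok(Money { cents: a.cents + b.cents, currency: a.currency })
}

fn main() {
    let a = Money { cents: 500, currency: Currency::Cad };
    let b = Money { cents: 250, currency: Currency::Cad };
    assert_eq!(add(a, b).unwrap().cents, 750);
    // add(a, 250) would be a compile error: a bare integer is not Money.
}
```

The tests that remain are the interesting ones, about behaviour, not about whether a value has the shape you already declared.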
The Imperial High Modernist Cathedral vs The Bazaar – Ben Podgursky
At the risk of appropriating the suffering of Soviet peasants, there’s another domain where the impositions of high modernism parallel closely with the world of software: the mechanics of software development. First, a definition: metis is a critical but fuzzy concept in Seeing Like a State, so I’ll attempt to define it here. Metis is the on-the-ground, hard-to-codify, adaptive knowledge workers use to “get stuff done”. In the context of farming, it’s: “I have 30 variants of rice, but I’ll plant the ones suited to a particular amount of rainfall in a particular year in this particular soil, otherwise the rice will die and everyone will starve to death.” Or in the context of a factory, it’s: “Sure, that machine works, but when it’s raining and the humidity is high, turning it on will short-circuit, arc through your brain, and turn the operator into pulpy organic fertilizer.” And so forth. In the context of programming, metis is the tips and tricks that turn a mediocre new graduate into a great (dare I say, 10x) developer. Using zsh to get git color annotation. Knowing that, “yeah, Lambda is generally cool and a great best practice, but since the service is connected to a VPC and uses fat layers, the bursty traffic is going to lead to horrible cold-start times, customers abandoning you, the company going bankrupt, and sales execs forced to live on the streets catching rats and eating them raw.” Etc. Trusting developer metis means trusting developers to know which tools and technologies to use. It means not viewing developers as sources of execution independent of the expertise and tools which turned them into good developers.
The push for new and shiny solutions to old known problems - Jesper Reiche
In my book simplicity always wins. You win by subtracting complexity – not by adding it. Start with the simplest possible solution to the problem and see where it gets you. Simple makes it cheaper. A simple solution is easier to implement, easier to test, faster to ship, and hence faster to get feedback on. Once you have this feedback, whether from unit tests, a proof of concept, or user tests, you can decide to add complexity if your simple solution proves too slow or too rudimentary. Always start with the simple solution.
buggy culture · Muvaffak
The moment a piece of code is reviewed, approved, and merged, it’s not yours anymore. If a bug occurs, it belongs to everyone who contributed to the software. Still not convinced? Think of it this way: do you pay the developer who wrote the “Buy Now” button extra every time a sale happens, to congratulate them on how much revenue their code brings in? No, because you know every sale is enabled by all parts of the web site collectively. The same goes for bugs: they are caused by the whole codebase being the way it is, not by that one if statement.
Ask HN: Who Regrets Choosing Elixir?
I've used Elixir since 2015 and I find Elixir to be unusable for any kind of intelligent domain modelling, but that's primarily because it's dynamically typed and has no concept of real sum types, etc., not necessarily because it's any worse at this than Ruby. In my experience, any codebase beyond fairly small becomes unreasonably hard to work with, and any perceived "velocity" gained from the dynamic nature of the language is paid for doubly by the lack of safety you get beyond toy projects. I'm only slightly more in favor of Erlang as a choice, mostly because it's simpler than Elixir and doesn't have as much obfuscation of logic added to it, but in reality it's also a bad choice for any bigger system. The runtime is absolutely great, but the languages on it are generally not good enough.
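A small sketch of what "real sum types" buy you, using Rust since Elixir lacks them (the `OrderState` domain is invented for illustration): every `match` must handle every variant, so adding a new state turns each unhandled site into a compile error instead of a runtime surprise.

```rust
// The domain states are an enum; invalid states are unrepresentable.
enum OrderState {
    Draft,
    Submitted { by: String },
    Cancelled { reason: String },
}

fn describe(state: &OrderState) -> String {
    // Omitting any variant here is a compile-time error, not a
    // production crash discovered later.
    match state {
        OrderState::Draft => "draft".to_string(),
        OrderState::Submitted { by } => format!("submitted by {by}"),
        OrderState::Cancelled { reason } => format!("cancelled: {reason}"),
    }
}

fn main() {
    let s = OrderState::Submitted { by: "amara".into() };
    assert_eq!(describe(&s), "submitted by amara");
}
```

In a dynamically typed language the equivalent is a convention (atoms, tagged tuples) that nothing enforces; here the exhaustiveness check is the safety the commenter says they miss.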
Leave Scrum to Rugby, I Like Getting Stuff Done - Qvault
Sprints are useful like achievements in video games are useful; they make us feel warm and fuzzy inside. Motivation is a powerful tool, don’t misunderstand me. The problem is that those warm fuzzies are mostly for the sake of the management team. It makes them feel in control and informed. They know exactly what was done and when it was completed. I’m not against management being informed… but at what cost?
Fragile narrow laggy asynchronous mismatched pipes kill productivity
> Complexity. This is the enemy, the second enemy is bad attempts to reduce complexity which often end up adding more complexity than they take away, just harder to find.

This is true at every level of the systems design process: often, by trying to make a system "simpler", i.e. less complex for the end user, the complexity is shifted further down the stack into application code, or even into the underlying infrastructure. It's easy for those of us with technical backgrounds to see the beauty and simplicity in well-designed interfaces, but as the realm of computing and computer interaction shifts away from technical to non-technical people, we start to absorb some of that complexity into our systems design to make up for the knowledge shortcomings of end users. Your example of sed being better than the "fancy data tools" is, I feel, a good one: whilst sed is incredibly powerful for this use case, if the consumer of what needs to be run there only knows how to use Excel, it's often necessary to create these abstraction layers to allow end users to carry out their own primary function/role.
Second-Guessing the Modern Web
I’m tempted to step back and evaluate this on another level. Our industry is very big, and any industry that gets that big will be able to house a lot of people just for the sake of it. If you think we have a large number of fresh frontend people, understand they are hired almost in one-to-one correspondence with fresh product/business people. Modern product development is essentially a polishing job on every component that Twitter Bootstrap or jQuery UI ever invented. Over and over, we dress up a modal, with a slider, with a ‘user flow’, with some tooltips, and so on, and allow the process to masquerade around as real design/engineering. There’s so much money in this industry that we can hire entire teams to basically take a Bootstrap component and theme it. This gets passed on as product development, and from the developer side, it gets passed on as engineering. If this is the level of masquerading occurring, why would a frontend developer ever go ‘what’s the right solution here?’. Something similar is happening on the backend and infrastructure. It too will take on a mask behind devops and data science and start pumping out what are probably straight-up SQL queries and cron jobs. This will get passed off as design and engineering as well. We’re too big.
Ask HN: Name one idea that changed your life
"Premature optimization is the root of all evil" More and more, I'm realizing this applies more broadly than just for code. Abstraction is a form of optimization and shouldn't be done before the space has been properly explored to know what abstractions should be built. Standardization is a form of optimization and shouldn't be proposed until there's a body of evidence to support what's being standardized. Failure to validate a product before building it? Premature optimization. Build infrastructure without understanding the use case? Premature optimization. Build tools before using them for your end product/project? Premature optimization. This advice comes in different forms: "Progress over perfection", "Iteration quickly", "Move fast and break things", "Don't let perfection be the enemy of good enough", etc. but I find the umbrella statement of not prematurely optimizing to encompass them all.
Tools/practices to manage deeply nested thought-stacks? | Lobsters
Breadth-first, not depth-first. Defer relentlessly. Check in with your primary goal regularly. Time-box. The trick to making meaningful progress and not spinning out on these tangents is *pausing to recognize them as tangents*. Only execute on a sub-task if it is *necessary* to complete your immediate goal. If a sub-task can be deferred, do that; you can evaluate whether it is still useful later. Capturing tangents to get them out of your head should alleviate some of the pull they have on you – they won’t be forgotten, but don’t need to be done now. And always be asking the question “Is this helping me solve my immediate problem?”. Why did you want the interactive debugger? Probably for more context. For debugging specifically, always ask whether there is a dumber/simpler way to find concrete information. Just sitting and thinking through the specific context you think you need might have let you continue with print debugging and short-circuited the tangent. The other tactic that can help: when you start a sub-goal, estimate how much time it is worth to you, and set a timer. Say you had valued the interactive debugger at 20m, and the timer went off just as you realized you were about to re-install your interpreter; that is a good moment to re-evaluate. Having the concrete time box prevents you from losing an entire afternoon to a chain of those. As a reminder, maybe put a sticky note in front of you with your current goal, and keep checking that you are still really working towards it. As for tooling, OmniFocus and others have quick-capture features for things you can defer until later. Outliners like Workflowy literally let you nest these tangents, which can be a visual signal when you’ve gone too far. But I think the crux of your question is more about focus and process and less about the tools.
Complexity Has to Live Somewhere
The trap is insidious in software architecture. When we adopt something like microservices, we try to make it so that each service is individually simple. But unless this simplicity is so constraining that your actual application inherits it and is forced into simplicity, it still has to go somewhere. If it's not in the individual microservices, then where is it? Complexity has to live somewhere. If you are lucky, it lives in well-defined places. In code where you decided a bit of complexity should go, in documentation that supports the code, in training sessions for your engineers. You give it a place without trying to hide all of it. You create ways to manage it. You know where to go to meet it when you need it. If you're unlucky and you just tried to pretend complexity could be avoided altogether, it has no place to go in this world. But it still doesn't stop existing. With nowhere to go, it has to roam everywhere in your system, both in your code and in people's heads. And as people shift around and leave, our understanding of it erodes. Complexity has to live somewhere. If you embrace it, give it the place it deserves, design your system and organisation knowing it exists, and focus on adapting, it might just become a strength.
Server-Side Rendering is a Thiel Truth
Client-side rendering is (obviously) necessary to support complex interactions with extremely low latency: Figma or Google Docs could only be client-side apps. It is useful for write-heavy applications people use interactively for long periods: email, chat. It is harmful for read-only, or read-mostly, applications: harmful to the implementors as it imposes unnecessary cost, and harmful to users as it's likely slower, less likely to use the web platform correctly, and less accessible. Inappropriate use of client-side rendering is why, to find out my electricity bill, I have to wait ~10 seconds while a big React app loads and then hits up 5 REST endpoints. So is your app mostly forms or displaying content? User preference panels? Mortgage applications? Implement it with server-side rendering, with a sprinkling of JS to implement widgets the web lacks. If only part of your app requires low-latency interactions, use client-side rendering only there. P.S. Don't believe it can be fast? Have a quick wander around the D Forum - it's many, many times faster than most client-side rendered apps I use. Oh, and GitHub (source: I worked there) is overwhelmingly server-side rendered (with Rails, gasp), and so is StackOverflow. It's quite surprising that this is a Thiel truth.
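The "mostly forms" case the comment describes needs nothing beyond a standard library: here is a minimal sketch in Python of a server-rendered preferences page (the handler, route, and field names are hypothetical, invented for illustration; the post itself names no stack beyond Rails).

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

def render_prefs(email_opt_in: bool) -> str:
    """Render a user-preferences form as plain HTML on the server.
    No client-side framework: the browser just displays markup and
    submits the form the old-fashioned way."""
    checked = "checked" if email_opt_in else ""
    return f"""<!doctype html>
<title>Preferences</title>
<form method="post" action="/prefs">
  <label><input type="checkbox" name="email_opt_in" {checked}>
    Email me updates</label>
  <button>Save</button>
</form>"""

class PrefsHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Server-side rendering: the response body IS the UI.
        body = render_prefs(email_opt_in=True).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To serve: HTTPServer(("", 8000), PrefsHandler).serve_forever()
```

One round trip, no JS bundle to download and execute before the page is usable; a "sprinkling of JS" would be added only for widgets the platform lacks.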
Agile's early evangelists wouldn't mind watching Agile die
IMO Agile has become regulatory capture. It's a means by which non-engineers can extract value from a booming market which doesn't directly benefit from their skills. That being said, I think there is a lot of wisdom in the original Agile manifesto. The core principles are solid, but the methodology has clearly been co-opted by consultants and supported by management looking to increase the headcount under themselves. I've often struggled to understand why my team is made up of only 20% engineers, with the other 80% pretending to create value by holding meetings to tell engineers what to build next, when I feel like it's the clients who should be doing that. Ultimately it's engineering that becomes the constrained resource, which leads to technical debt in favor of pushing out product features. I would venture a guess that most engineers have used (critically) more software in their lives than any non-technical person driving the development of the product. Why then are engineers not the most consulted people on the efficacy and value of new features? I think there is a big myth out there that engineers are incapable of directly handling client feedback.
Stop apologizing for bugs – Dan Slimmon
Everyone knows that all code has bugs. Code is written under constraints. Deadlines. Goals other than quality. Imperfect knowledge of the future. Even your own skill as an engineer is a constraint. If we all tried to write perfect, bugless code, we’d never accomplish anything. So how does it make sense to apologize for bugs? This rule I’ve made for myself forces me to distinguish between problems caused by constraints and problems caused by my own faults. If I really think I caused a problem through some discrete action (or lack of action), then that’s something I’ll apologize for. But if I wrote code that got something done, and it just so happens that it didn’t work in a given situation, then I have nothing to apologize for. There was always bound to be something.