Crowdstrike Update: Windows Bluescreen and Boot Loops
Throwaway account... CrowdStrike in this context is an NT kernel loadable module (a .sys file) which does syscall-level interception and logs them to a separate process on the machine. It can also STOP syscalls from working if they are trying to connect out to other nodes or access files they shouldn't be (using some drunk-ass heuristics). What happened here was they pushed a new kernel driver out to every client without authorization, to fix an issue with slowness and latency in the previous Falcon sensor product. They have a staging system which is supposed to give clients control over this, but they pissed over everyone's staging and rules and just pushed this to production. This has taken us out and we have 30 people currently doing recovery and DR. Most of our nodes are boot looping with blue screens, which in the cloud is not something you can just hit F8 and remove the driver for. We have to literally take each node down, attach the disk to a working node, delete the .sys file, and bring it up. Either that or bring up a new node entirely from a snapshot. This is fine, but EC2 is rammed with people doing this now so it's taking forever. Storage latency is through the roof. I fought for months to keep this shit out of production for exactly this reason. I am now busy but vindicated. Edit: to all the people moaning about Windows, we've had no problems with Windows. This is not a Windows issue. This is a third-party security vendor shitting in the kernel.
Replica Velocity (@ReplicaVelocity)
These cybersecurity companies stifle a lot of the competition and tend to rely on fear campaigns (and user knowledge gaps) to maintain that dominance. Because of that market position, their lack of QA testing becomes more damaging whenever something goes wrong.
Story points are pointless, measure queues
Yeah, it seems fairly common for people/teams to follow the idea that any story of 8 or more points should be broken down into tasks of 5 or fewer. This simply doesn't make sense to me. If the simplest task is 1 point, is your most complex task really allowed to be only 5 times as complex? Story points usually follow an exponential increase for a reason; enforcing staying in the mostly linear portion is just pretending the complexity and uncertainty have been decreased.
Story points are pointless, measure queues
Story points aren't time (as OP states). They're relative complexity and uncertainty (hence the Fibonacci sequence building uncertainty into larger numbers). And stories should be able to be sized as big numbers. I've never been on a team comfortable with more than a 7, at least not since my first agile experience, where we all took agile/scrum training together for a few days. That team would frequently give things like 21 or 30 or 50 points, as appropriate. That's the only place I've ever seen a burndown chart that looked like it should. Everywhere else, it's flat until the last day and then drops to zero as all those "it's a 7, I promise" stories get carried over to the next sprint for the 3rd time.
GitHub Copilot is not infringing your copyright
This is missing the largest argument, in my opinion: the weights are a derivative work of the GPL-licensed code and should therefore be released under the GPL. I would say these companies should either release their weights or simply not train on copyleft code. It is truly amazing how many people will shill for these massive corporations that claim they love open source, or that their AI is open, while they profit off the violation of licenses and contribute very little back.
Judge dismisses DMCA copyright claim in GitHub Copilot suit
It seems to me that regardless of the outcome of this case, some developers do not want their code used to train LLMs. There may need to be a new license created to restrict this usage of software. Or maybe developers will simply stop contributing open source. In today's day and age, where open source code serves as a tool to pad Microsoft's pockets, I certainly will not publish any of my software as open source, despite how much I would like to (under the GPL) in order to help fellow developers. If I were Microsoft, I'd be genuinely concerned about killing my golden goose by causing a large-scale exodus from GitHub, or from open source development more generally. Another idea I've considered is publishing boatloads of useless or incorrect code to poison their training data. As I see it, people should be able to restrict how others use something they gave them. If some people prefer that their code not be used to train LLMs, there should be a way to enforce that.
Writing GUI apps for Windows is painful
Problem: It is extremely hard to stylize native Win32 controls. That's not a problem, it's a feature. I am absolutely bloody sick of apps that go out of their way to reinvent the standard UI controls in perplexing ways and behave unexpectedly. Following the system UI preferences is what you should do, and it irritates your users if you don't. There is a “hidden” dark mode for Win32 controls used by Windows File Explorer that you can activate, but it covers only some of the controls and still doesn’t look good. Don't do that. If you use the regular Win32 controls then they will automatically get the styling the user has set. I've been writing Win32 apps for close to 30 years now. It's sad to see the regression in UIs over the years.
Writing GUI apps for Windows is painful
I have a very low opinion of developers who decry having to pay for a commercial licence for otherwise LGPL-licensed software libraries. They expect to be paid for their own work, and ensure that by creating closed source software. That's fair enough. Yet the devs who solved the actually difficult parts of creating a UI library are supposed to be utter saints who freely bestow a gift of code upon the world.
The Open-Source Software bubble that is and the blogging bubble that was – Baldur Bjarnason
The biggest problem—and this isn’t limited to web development—is how it has baked exploitation into the core worldview of so many people. We use open-source software. We get paid to use open-source software. Our employers benefit, but the money never trickles down. Money never trickles down. This is fine when the project in question is directly funded by a tech multinational. Less so when the project is something specialised, a little bit niche, or inventive, and therefore not financed by a gigantic corporation.
The Open-Source Software bubble that is and the blogging bubble that was – Baldur Bjarnason
The extraction mentality is baked into the business. Which is sort of fine when you’re dealing with projects funded by mega-corporations but disastrous when applied to the unfunded or poorly funded rest. The money hose, combined with free or subsidised services, is a control mechanism that lets big tech companies control the OSS ecosystem. Projects they want to promote will get the money spigot. Other projects, like MongoDB or Redis, get turned into commodities and resold as cheap services.
The Open-Source Software bubble that is and the blogging bubble that was – Baldur Bjarnason
A surprising amount of OSS is made by former big tech developers. They can afford to subsist on meagre revenue—for a time—because their pay and stock options have left them free of debt and with well-stocked savings accounts. This is much more common than you’d think. Scratch away at the surface of pretty much any active OSS project that has no discernible revenue, and you either get a burnout waiting to happen, or you’ll find a formerly well-paid dev coasting on savings. Many of the rest have solid VC funding. Though, VC funding always runs out at some point. The business fundamentals just aren’t there for open source when you have Google, Amazon, and the rest gatekeeping all of the value in the market. This is why the ecosystem is already beginning to pull apart at the seams.
The Problem With Free Software…
Little tricks of intellectual property law like “copyleft” are ultimately impotent against the weight of the entire edifice of the system in which they exist. Even if we did suddenly change everything to be licensed as AGPL overnight, does anyone really think that the FSF would be able to stand up in a copyright court against Apple and Google?
The Problem With Free Software…
Expecting the freeloading corporations that benefit from open-source software to contribute fairly is, I think, unrealistic. The only reason they support the development now is because they know it’s ultimately less expensive for them; if they actually had to pay the real value of the development, they’d just take it in-house and prevent others from benefiting from it.
The Problem With Free Software…
One somewhat appealing option is to try to deny the ability of corporations to benefit from the work of the commons. Unfortunately, attempts at doing this end up being viciously attacked by existing FLOSS communities, crying that by limiting who can use their software, they’re no longer “free”, technically proprietary, and generally heretical. They know well that they exist at the pleasure of those large companies and will enthusiastically police their own ranks to prevent true anti-corporate sentiment from becoming the mainstream.
The Problem With Free Software…
It seems clear that open source isn’t sustainable as things are now. Many of the large projects survive on the contingent largesse of corporations, but how long will that last? Those that aren’t directly funded exist mainly because developer salaries are so inflated that the people maintaining them can afford to spend their time on something that’s a pure cost to them. Neither of those things is viable in the long term, though.
The Problem With Free Software…
I’ve come to really distrust the term “open-source”, as its original intention was – and remains – to make work exploitable by large companies, rather than to protect users or developers.
Capitalism and Open Source
Capitalism is naturally welcoming to "open source" (relabeled as: knowledge thievery) but extremely hostile towards Freeware. It's pretty obvious why.
Andreas Kling (@awesomekling)
As an OSS maintainer, I find that I trust someone who has fixed 10 bugs far more than I trust someone who has added 10 features. Who do you trust more?
Farm: Fast vite compatible build tool written in Rust
Another way to view it: slow builds are a poor man’s feature that punishes you for bringing in too many deps. The JS ecosystem (speaking as an omnivore using many languages) suffers two big problems:

1. Tooling is too fragmented, complex, and poorly designed. This includes even the standard tools and formats like npm and package.json.
2. There’s a weak culture of API stability. Leave a project for 3 months and there are new config files, plugin formats, etc.

It’s improving slowly through standardization with things like ES modules. But it’s still a Wild West.
Khalid ⚡️ (@[email protected])
@tastapod @LGUG2Z I always find it ironic that folks want to make a living from writing software but are immediately averse to paying someone else who wants to make a living writing software. Healthy ecosystems thrive on a foundation of mutual support. An ecosystem where everyone just “takes” is on a trajectory to heartache.
Ludovic Courtès (@[email protected])
@hipsterelectron Well, Guix (and Nix) takes a radically different approach from Spack (and Conda, dpkg, rpm, etc.): each commit in Guix defines a single, unambiguous dependency graph with zero degrees of liberty. Conversely, these other tools typically define instead a “family” of dependency graphs. Benefit IMO of the zero-degrees-of-liberty approach is predictability and reliability: it’s easier to test a single well-defined graph.
Mitchell Hashimoto (@mitchellh)
I think frontend interactivity is overrated for most websites. When I submit a form and don’t see my browser loading a new page the chance of bugs in general on that site is significantly higher.
Mitchell Hashimoto (@mitchellh)
Upgraded my Linux NixOS machine to 24.05 and my Mac nix-darwin to 24.05. Five line diff (guided by `nix` CLI telling me what to do) and everything just works. 3 years of Linux updates with zero issues whatsoever, if I told my college-aged self this I'd laugh in my own face.
Debian 12 KDE Plasma: The right GNU/Linux distribution for professional digital painting in 2024. Reasons and complete installation guide. - David Revoy
It’s very easy to explain the problem with Wayland in non-technical terms. [Thing] worked before, and now it doesn’t anymore. Times thousand. As long as technical purity is prioritized higher than the user’s needs, this won’t change. You can’t just rewrite everything, drop and break half the functionality and pretend that it’s a drop-in replacement. This will be more painful than the Python 2 to Python 3 version jump. But I guess things need to get worse before they can get better if you are in a local optimum.
Engineering for Slow Internet
I can tell our industry is fucked because I can’t imagine convincing a product manager or manager to prioritize a ticket called “making the product usable from slow satellite uplinks”. “What’s the expected ROI on that?” “I already promised features X, Y, and Z to customers for this quarter.” The only times I’ve been able to convince leadership to work on stuff that didn’t directly affect our bottom line were when external regulations demanded it.
iTerm2 and AI hype overload - Xe Iaso
There’s a perspective I haven’t seen yet that may explain the backlash: I would never integrate OpenAI into a terminal program I was building, because of hallucinations, a lack of trust in OpenAI, and other reasons. And I’m not an anti-AI Luddite; I hopped on the AI train in 2015. So when someone else integrates OpenAI where I don’t think it belongs, I feel like I can’t trust their judgment, and therefore can’t trust them to make good product-management decisions in the future. Adding questionable features is a red flag indicating the product team isn’t focused or has poor judgment. Put another way: you don’t put the person who says Taco Bell is quality Mexican food in charge of arranging catering for the company Christmas party. I can’t trust them not to screw it up.
Do I not like Ruby anymore?
I feel really similarly about Python (for web backends, specifically, but probably also any large project). I can’t wrap my head around how anyone can try to write and maintain a large, long-lived, project with Python. And if you do somehow get a bunch of people to agree on conventions and style, and write reasonably correct code, your reward is just like you described for Ruby: really bad performance (both execution speed *and* memory consumption).
Do I not like Ruby anymore?
Having worked with Ruby for close to two decades, I’m convinced there is no point to it anymore for large software systems. It’s a nice language for sure, but after a few tens of thousands of lines it becomes impossible to deal with. Add just a couple of inexperienced engineers and the whole thing unravels. You’re constantly fighting an uphill battle for good code and the language does not help you: it has no package-level segregation, no type system, no way to enforce interfaces, implicit method definitions all over… There are workarounds for all of these, but they’re all non-idiomatic and no one uses them. It’s just as easy to write “beautiful” code as it is bad code; you have to keep yourself to an opinionated subset of the language and be vigilant about it. And what you get in the end is not encouraging: if you manage to keep good separation of concerns and extensible code, you get the big prize, one of the worst performance profiles among major languages. Over the years I’ve seen speed of development fall off a cliff as systems get bigger, taking away one of Ruby’s main advantages. That advantage is also overrated: you’re in an “I want this to be stable and efficient” state for much more time than you’re in an “I need to prototype this fast” state, so it doesn’t make sense to optimize for the latter. If you take an extra week to get to market in a “slow to write, statically typed” language, nothing bad will happen to your company, and you won’t need to worry about any of the above.
The more I see of your work, the more I admire the synergy of execution and purpose in it. You're not just making something out of boredom or to fill a niche in some back-end service, but building a tool that you use on a daily basis, and it shows. For me it's one of the most rewarding and admirable paths in programming. I hope this project lives long, grows, and keeps bringing you as much joy as it brings to its users or, better yet, more. Cheers!
- No syntax highlighting
- vi with only basic motions
- single terminal fullscreen
- Comic Sans font

I trust this man with my life
Making EC2 boot time faster
Boot time is the number one factor in your success with auto-scaling. The smaller your boot time, the smaller your prediction window needs to be. For example, if your boot time is five minutes, you need to predict what your traffic will be in five minutes; but if you can boot in 20 seconds, you only need to predict 20 seconds ahead. By definition, your predictions will be more accurate the smaller the window is. But! Autoscaling serves two purposes. One is to absorb load spikes. The other is to reduce costs by scaling down. What this solution does is trade off some of the cost savings by prewarming the EBS volumes and then paying for them. This feels like a reasonable tradeoff if you can justify the cost with better auto-scaling. And if you're not autoscaling, it's still worth the cost if the alternative is having your engineers wait around for instance boots.
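The prediction-window argument above can be sketched numerically. This is a toy model, not anything from the article: the linear traffic ramp, the per-instance capacity, and all the numbers below are made-up assumptions, chosen only to show how the forecast horizon shrinks with boot time.

```python
def instances_needed(rps, rps_per_instance=100):
    """Capacity required to serve a given request rate (ceiling division)."""
    return -(-rps // rps_per_instance)

def scale_target(rps_now, growth_rps_per_s, boot_time_s, rps_per_instance=100):
    """You must provision for the load expected *after* boot completes,
    so the forecast horizon equals the boot time."""
    predicted_rps = rps_now + growth_rps_per_s * boot_time_s
    return instances_needed(predicted_rps, rps_per_instance)

# Same traffic ramp (+10 rps/s on top of 1000 rps), two boot times:
slow = scale_target(1000, 10, boot_time_s=300)  # 5-minute boots
fast = scale_target(1000, 10, boot_time_s=20)   # 20-second boots
print(slow, fast)  # -> 40 12
```

With five-minute boots you must guess the traffic 300 seconds out and over-provision for it (40 instances here); with 20-second boots the forecast only has to hold for 20 seconds (12 instances), so both the prediction error and the waste shrink.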
Ask HN: Why do you all think that Htmx is such a recent development?
I created the library that would become intercooler.js in 2012 and released it in 2013, based on a mashup of $.load(), pjax, and Angular attributes. The world at that time was not ready to consider an alternative to the hot new ideas coming out of the big tech companies (Angular from Google, React from Facebook). In 2020, during covid, I decided to rewrite intercooler.js without the jQuery dependency and rename it to htmx. The Django community started picking it up because they were being largely ignored by the next.js/etc. world and they didn't have a built-in alternative like Rails has with Turbo. In 2023 it got picked up by an OCaml Twitch streamer, teej_dv, who knew some other folks in the Twitch programming community. He told ThePrimeagen about it, who took a look at it in July 2023 on stream and became enthusiastic about it. At the same time, FireshipDev did an "htmx in 100 seconds" short on it. That lit the rocket. I was lucky that I had also just released my book at around the same time (it had been cancelled by a major publisher about a year beforehand). Another thing that happened is that Musk bought Twitter, and a large number of big tech Twitter accounts left. This opened up an opportunity for new tech Twitter accounts to grow, like a fire in a forest. I am pretty good at Twitter and was able to take advantage of that situation. So I spent about a decade screaming into the void about hypermedia and had largely given up on the idea making a dent, rewrote intercooler.js just to stay sane during covid, and then got very, very lucky.
What UI density means and how to design for it
This explains exactly why physical restaurant menus are so much better than mobile site menus. If I'm viewing the menu of a restaurant on my phone, I always look in Google Maps for someone who took a picture of the menu, because it's a dense UI. Every "mobile friendly" menu site can show maybe 5 items on the page at once, so it takes many pages of scrolling to see everything.
Nix is one of the reasons open source wins in the long run. I would never have imagined that package dependencies could be installed and managed across programming language boundaries, kernel boundaries, shell boundaries, dotfile boundaries, etc. Given that computer complexity grows exponentially all the time, at some point everyone will be forced to use something like Nix. I agree with the article about Emacs and Unix/Linux.
Noi: an AI-enhanced, customizable browser
I don't want an AI browser, per se. I want an AI agent that slips into every pane of glass the platform companies own (Chrome, iOS, Windows) and works for me against the advertisers and attention stealers. I want an advocate that detects and nukes advertisements. That filters clickbait and rage content. Something easy enough that everyone can install, so we can all be collectively free of this nonsense no matter what tech stack we use. Imagine if AI became the ultimate anti-advertising, attention-preserving, sanity-defending weapon. "No Google, you're not allowed to advertise to my person." Or, "these comments are toxic drama, so let's not expose our human to them." This would be a great new technological era.
Why is Nix location hardcoded to /nix ?
Most significantly: you need to hardcode some paths into most binaries. Most binaries assume that, for example, the location of `bash` is always /bin/bash, and while Nix can replace one hard-coded path with another hard-coded path in /nix, it can't really do that at runtime without huge amounts of binary patching. And in most binaries it is outright impossible without a wrapper: ELF, the binary executable format on Linux, expects a path to the linker hardcoded in the ELF header. So unless you invent a new binary executable format and make a kernel that can read it, or wrap *every* binary in a Nix wrapper, that isn't happening. It's just a convention on Linux that binaries have a static location that you can express as a string in a binary, rather than something loaded from an environment variable. Nix is a package manager, not a kernel, so it cannot do anything about that.
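You can see the hardcoded linker path the comment describes for yourself: it lives in the PT_INTERP program header of every dynamically linked ELF binary. A minimal sketch using only the Python standard library (assumes a 64-bit little-endian ELF, such as /bin/sh on a typical x86-64 Linux box; the example path in the comment is illustrative, not guaranteed):

```python
import struct

def elf_interpreter(path):
    """Return the hardcoded dynamic-linker path from an ELF binary, or None."""
    with open(path, "rb") as f:
        data = f.read()
    if data[:4] != b"\x7fELF" or data[4] != 2:     # magic + 64-bit class only
        raise ValueError("not a 64-bit ELF file")
    e_phoff, = struct.unpack_from("<Q", data, 0x20)      # program header table offset
    e_phentsize, = struct.unpack_from("<H", data, 0x36)  # size of one entry
    e_phnum, = struct.unpack_from("<H", data, 0x38)      # number of entries
    for i in range(e_phnum):
        off = e_phoff + i * e_phentsize
        p_type, = struct.unpack_from("<I", data, off)
        if p_type == 3:                                  # PT_INTERP
            p_offset, = struct.unpack_from("<Q", data, off + 8)
            p_filesz, = struct.unpack_from("<Q", data, off + 32)
            return data[p_offset:p_offset + p_filesz].rstrip(b"\0").decode()
    return None  # statically linked: no interpreter

try:
    print(elf_interpreter("/bin/sh"))  # e.g. /lib64/ld-linux-x86-64.so.2
except (ValueError, FileNotFoundError):
    pass  # not a Linux x86-64 system
```

This is exactly the string Nix rewrites at build time to point into /nix/store, and why a generic binary built for /lib64 can't simply be dropped into a Nix environment at runtime.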
Triptych 🌱 (@[email protected])
I think as web developers we have been collectively brainwashed into thinking that you must use React and Tailwind to create websites, when all you really need are vanilla web standards and a text editor. We should be making it easier to make websites than it is right now. You should not need a PhD in web design to create something you can share and express yourself with. We need more tools that make web development easy, not more tools that turn it into some 10-step build process #webdev
Hasen Judi 🇮🇶 🇯🇵 (@Hasen_Judi)
The proof that html/js/css is a failed model for GUI development is that even with over a decade of experience, you can't just write the code for a UI and know it will work correctly. It's always trial and error, and at the end of the day, when you finally get all the details of html/css/js correct, you look at, for example, the CSS you produced, and you still feel confused and bewildered. Like, why does this particular incantation of symbols produce the desired result? At the very least, you know that you would not have gotten it right on the first iteration, because it's not the kind of thing you get right on the first iteration.
Aram 🌈♾️ (@[email protected])
@ellie @mo8it At certain scales, it's not possible for one person to even read everything. I have a team of volunteers and a helpful community and things still get dropped. Writing any of these comments creates noise, frustration and distracts from things that might be critical (eg. security issues or spambots on the chat server). I totally empathize with people wanting their issue fixed or PR in, but as @ellie says - let us manage our priorities. If it drops off our radar - that's how it is.
Corporate Open Source is Dead | Jeff Geerling
None of these blog posts (including this one) have any realistic solution to the problem of making OSS software, being able to live from it, and preventing others from exploiting you in the process. Hyperscalers like Amazon exploit OSS projects by reselling them as a cloud service, and they earn a gigantic sum in the process. But this is not a neutral thing to do: the OSS project is still responsible for maintenance! (And in many places, the "no warranty" clause seems completely disregarded; users and corporations demand bugfixes since it's a "critical library".) The most telling sentence is "Open source culture relies on trust. Trust that companies you and I helped build (even without being on the payroll) wouldn't rugpull." Where is any trust in exploiting someone's work without giving anything back? The hyperscalers routinely break the OSS social contract, but because they abide by the letter of the licences, they get a free pass and many white knights, even from the OSS community and even OSI itself. A business model of "you can see the source, you can modify it, but you can't offer it as a service or resell my work" is much more honest and trustworthy than "develop a library, a cloud service picks it up, then pressures you with PRs and issues until you permanently burn out from the whole thing". This is partly addressed by the post: "But you know what? I'd just prefer honesty. If revenue is so dependent on selling software, just... make the software proprietary. Don't be so coy!" That is not honesty, though. Claiming that anything not party-approved... I mean OSI-approved... is not open source and is proprietary is a very myopic thing. For users and developers, it's much more beneficial if they can see or even modify the source, even if they don't have an unrestricted right to use and modify it however they want.
This absolutist, black-and-white approach could well lead to many pieces of software becoming fully proprietary, all-rights-reserved in the future, since the open source community harasses source-available projects quite frequently, and not many have the patience to put up with that. And that would be a sad outcome indeed for user freedom, repairability, portability, and the other values RMS and the FSF hold dear.
Mitchell Hashimoto (@mitchellh)
Cloud pricing applied to literally anything else, i.e. the coffee shop: coffee is $5, but if you stay it’s free and we charge $0.06/minute unless you’re using internet then it’s $0.09/minute. First timers get 30 minutes free. Talking to others in the cafe is free. Voice calls outside the cafe are $1/minute unless its to our other cafe locations then its free for the first 12 minutes then $0.50/minute thereafter. If you have any problems write your issue on a napkin and throw it straight into the trash. If you pay for a premium support plan (contact us) then please step into the personalized Lamborghini to talk to your account manager. Enjoy your time at Cloud Cafe, please complete the survey on your way out.
The Dark Side of Open Source
Certainly, no contributors get into projects with the sole purpose of making financial gain from them. Open source has never been about money, either. But for you as an author, the lack of funds to sustain your ideas and pay for even a small portion of the time you're spending on them is, I'm not going to lie, devastating. It may not be your concern at first, but it will inevitably become one when your ideas gain popularity, demanding significantly more time than there are hours in a day.
You won't find a technical co-founder
If you want me as a technical cofounder, you are going to tick most of these boxes:

1. You have at least 8-10 years of experience in that domain, with some of it in a leadership position (i.e. product manager).
2. You already have a huge following or audience. Especially important if this is a SaaS product or B2C; not as important if this is hard tech (e.g. biotech). But if you're doing an HR startup, show me your 10,000 engaged HR followers on LinkedIn. If you're doing a martech startup, show me your Substack with 10k marketing subscribers.
3. You have already talked to at least 25 people about this and gotten feedback about their pain points, initial reaction to your solution, etc. These conversations are documented and easily shared with me.
4. You preferably have at least 1 successful exit/company under your belt; a couple of failed ones is OK too (but I would need references to make sure they failed for legitimate reasons). The worst is 0 years of experience starting a company.
5. You are a cold email/call/outreach junkie, because you will OWN sales in the early days. I want to see your 100 sent emails or 100 daily calls or 100 sent DMs every day, asking potential customers for feedback, for validation conversations, or to sign up for your beta.
vx-underground (@vxunderground)
Microsoft engineer: 500ms lag in liblzma? Something's up. Also Microsoft engineer: 45 minute lag in Microsoft Teams? Perfect.
Initiated a feature freeze, clients now all love the product
I initiated a feature freeze at the start of 2024, as we didn't have the staff to support adding and maintaining any more features in the core of our product. We had turnover and layoffs and could barely support what we already had. Obviously this was unpopular outside the tech team, and executives in general wanted to keep pushing new features to keep up and "capture the market with new generative AI features". Anyway, I ignored all of this and simply refused to have the team develop these features until we had more staff. Since then, our clients' feedback on our system has improved tremendously. They say it is faster and more stable, and they love how consistently things are working. Also, they are discovering features they never used before and are now actually using them and liking them. I don't know if they realize there was a feature freeze. Previously they complained things were too unstable and often breaking in ways that were really bad, but the executive team wanted ABC feature done by QN so we had to deliver fast. Now they are giving great referrals and we are getting more and more sign-ups and sales through the pipeline from good word of mouth. Execs are now calling me a genius, saying they love how I turned the product around and saw their true vision through (by ignoring and iceboxing all feature requests), and that we've already exceeded our targets for Q2 and Q3 for the year and are set up for a great raise in 2025. So yeah, I don't know what to take away from this, but I found it really funny.
To those that use HTMX in production, how is it?
Been using it in production for over 2 years. Started as a solo dev, got acquired, and migrated the app to a bigger company. It's really amazing when you are in an A-players shop with everyone having good full-stack fundamentals. Productivity is insane. Caching is extremely easy and naturally convenient. However, it's extremely painful in an "I am here to build boilerplate React components and slap a bunch of libraries together" culture. Lots of places run like a factory floor, with devs expected to be hammers hitting nails. Unfortunately, htmx doesn't work well there, because it's hard to break big features into small little tickets with isolated components, where someone can build stuff without ever knowing or understanding how the backend works. There are 2 kinds of companies out there. One is three excellent dev teams with an engineering-heavy culture: product managers are engineers, and everyone knows shit code will get you fired. HTMX is great there. Then there's the one with 6 devs from Romania getting paid $18/hr punching tickets in JIRA, where the product manager is some journalism major from a car rental company. A JS framework is probably a better idea there.
Mitchell Hashimoto (@mitchellh)
There are apps in Electron that aren’t dogshit, but they default to dogshit and require a lot of work not to be. And in every case native is faster and more intuitive; I’ll die on that hill.
Andreas Kling (@awesomekling)
Agreed: `// TODO` comments hide problems from us by not manifesting at runtime. However, always crashing can be too crude, especially when you're in the middle of building huge software with lots of unfinished areas. Oftentimes there are nice-to-have TODOs where the program can still make progress. I propose a two-macro approach (names not perfect):

- TODO(): abort execution right now.
- TODO_LOG(): print or log somewhere, continue executing.
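The proposal is for C-style macros (where TODO could also capture `__FILE__` and `__LINE__`); here is a sketch of the same two-primitive idea in Python. Only the two names come from the toot; the example functions around them are invented for illustration:

```python
import sys

def TODO(msg):
    """Hard stop: this unfinished code path must not run yet."""
    raise NotImplementedError(f"TODO: {msg}")

def TODO_LOG(msg):
    """Nice-to-have gap: note it loudly, but keep making progress."""
    print(f"TODO (non-fatal): {msg}", file=sys.stderr)

def save_document(text):
    TODO_LOG("compress before writing")   # program still works without this
    return f"saved {len(text)} bytes"

def load_legacy_format(path):
    TODO("legacy loader")                 # crashes now instead of silently mis-parsing

print(save_document("hello"))  # -> saved 5 bytes
```

The point of the split is that `load_legacy_format` can never limp along half-implemented, while `save_document` keeps the whole program usable and still leaves a trail in the logs.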
USB hubs, printers, Java, and more seemingly broken by macOS 14.4 update
For me OSX updates have broken, over the past few years, my USB C ethernet dongle and firmware upload connectivity to specific hardware chipsets. This caused me to buy a non-Apple laptop. Now my Apple laptop refuses to update at all (top of line 4 year old laptop with max spec) so the upgrade driven failures problem category has effectively gone away, but OSX is becoming increasingly unusable. As a result I won't be buying more Apple hardware and have moved back to Linux as the primary machine - super fast and stable, with a slight setup tax and occasional annoyances.
USB hubs, printers, Java, and more seemingly broken by macOS 14.4 update
My upgrade to Monterey (MacOS 12.x) broke my Canon D530 printer driver. Re-installing the driver didn't help. Now I have to print to a PDF, copy that to an old Snow Leopard 10.6 machine, and print from there. FYI, Snow Leopard is 11 major OS revisions behind Monterey. Printing worked fine in Mojave, 3 major revisions ago. I also can't write any files to /, even with SIP disabled, and during the Monterey upgrade, Apple deleted all files and directories in / that they didn't recognize, including my system backup. I had to recover that from Backblaze. Can't say I'm a fan of recent MacOS. If you think you are in control of your Apple machine, think again.
Should organizations related to the defense sector be able to sponsor NixOS?
Nobody is suggesting that anyone should be banned from anything or be prevented from contributing. The question is whether arms dealers should be able to run ads on the nix foundation. I personally do not think that they should. However, if a majority truly is in favour of an “apolitical” arms-dealer-friendly stance, I would repeat my suggestion from last time around: Actually follow through and own that, making bank in the process. Just taking on one US dealer is not only short sighted and way too political - it is leaving *so* much shooty-shooty-boom-boom cash on the table. To name a few, both the Chinese & Russian sectors would be *very* interested in having visibility, a presence, and undoubtedly pay significantly for the privilege. Failing to explicitly invite them would be a disgrace. I have more ideas for other opportunities & sectors if this is the path chosen.
Losing Faith on Testing
>I get paid for code that works, not for tests A blog post could be written about just this statement and how it contributes to a low trust workplace where those who cut corners are favored by stakeholders and everyone else is left scrambling to clean up the messes left in their wake. If you're writing code for yourself, sure, be targeted and conservative with your tests. But when you're working with others, for goodness sake, put the safety nets in place for the next poor soul that has to work on your code.
Vision Pro: What we got wrong at Oculus that Apple got right
Coming from a senior Oculus lead, the most interesting thing about this write-up for me is what it lacks: it says almost nothing about the software stack / operating system. It's still 100% talking about hardware at one end and end-user applications at the other. But there is no discussion of the platform, which to me is actually the highest-value proposition Apple is bringing here. In short: Apple has made a fully realized spatial operating system, while Meta has made an app launcher for immersive Unity/Unreal apps on vanilla Android. You can get away with an app launcher when all you want to support is fully immersive apps that don't talk to each other. But that fails completely if you are trying to build a true operating system. Think about what has to exist to, say, intelligently copy and paste parts of a 3D object made by one application into a 3D object made by another, the same way you would copy a flat image from Photoshop into a Word document. The operating system has to truly understand 3D concepts internally. Meta is building these features, but it is stuck in a really weird space, trying to wedge them in between Android underneath and Unity/Unreal at the application layer. Apple has had the advantage of green-field engineering it exactly how they want it to be, from the ground up.
Bazel Release 1.0
Google just tests the living Jesus out of everything, and only versions a few core packages such as protobufs, grpc, and other packages used by pretty much everybody (this is called the "crust"). Everything else is tip of tree, and things are automatically re-tested using checked-in tests if they are affected by your changelist. You basically can't submit if you break anything. So in a way, Google doesn't need "versioning". Whatever is currently checked in is good to go. Tests are required, of course, and a Google reviewer won't let you submit anything if your tests suck. This, obviously, precludes the use of such a set-up in, shall we say, "more agile" orgs which don't have good test coverage. Blaze (at Google) is also not just a build system, but also an interface to a much larger distributed build and test backend, which lets you rebuild everything from the kernel upwards in seconds (by caching petabytes of build products at thousands of possible revisions), serves up source code views for developer workstations (code is not stored there either), and sustains the scale of distributed testing needed for this setup to work. As a result, nobody builds or tests on their own workstation, and there's close to zero (or maybe even zero, period) binaries checked into Google3 monorepo. If you need a Haskell compiler and nobody used it in a while, it'll be rebuilt from source and cached for future use. :-) Fundamentally, I think Google got things very, very right with Blaze. Bazel is but a pale shadow of what Blaze is, but even in its present state it is better than most (all?) other build systems.
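The "automatically re-test whatever your changelist affects" idea can be sketched as a reverse-dependency walk over the build graph. This is a toy model under my own assumptions (the targets and graph representation are invented, not Blaze's actual internals):

```python
from collections import deque

# Toy dependency graph: target -> direct dependencies (all names hypothetical).
DEPS = {
    "//app:server": ["//lib:rpc", "//lib:auth"],
    "//lib:rpc": ["//proto:messages"],
    "//lib:auth": ["//proto:messages"],
    "//proto:messages": [],
    "//tools:lint": [],
}

def affected_targets(changed: set) -> set:
    """Everything that transitively depends on a changed target must be retested."""
    # Invert the edges: dependency -> dependents.
    rdeps = {t: [] for t in DEPS}
    for target, deps in DEPS.items():
        for d in deps:
            rdeps[d].append(target)
    # Breadth-first walk from the changed targets through their dependents.
    seen = set(changed)
    queue = deque(changed)
    while queue:
        for dependent in rdeps[queue.popleft()]:
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen
```

A change to `//proto:messages` would pull in both libraries and the server, while a change to `//tools:lint` retests nothing else; that pruning is what makes tip-of-tree development with mandatory tests tractable.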
Avdi Grimm (@[email protected])
A big chunk of the Helm docs deal with how to avoid accidentally introducing the wrong *whitespace* into generated YAML. The level of not just fragility, but *obscured* fragility here in the code that rolls out production systems is beyond absurd and gets into the realm of normalized negligence.
Building a scheduler with resource requirements
This is a cool series of posts, thanks for writing it! We've released a bit about how the AWS Lambda scheduler works (a distributed, but stateful, sticky load balancer). There are a couple of reasons why Lambda doesn't use this broadcast approach to solve a similar problem to the one these posts are solving. One is that this 'broadcast' approach introduces a tricky tradeoff decision about how long to wait for somebody to take the work before you create more capacity for that resource. The longer you wait, the higher your latency variance is. The shorter you wait, the more likely you are to 'strand' good capacity that just hasn't had a chance to respond yet. That's a tunable tradeoff, but the truly tough problem is that it creates a kind of metastable behavior under load: excess load delays responses, which makes 'stranding' more frequent, which reduces resource usage efficiency, which makes load problems worse. Again, that's a solvable problem, but solving it adds significant complexity to what was a rather simple protocol. Another issue is dealing with failures of capacity (say a few racks lose power). The central system doesn't know what resources it lost (because that knowledge is only distributed in the workers), and so needs to discover that information from the flow of user requests. That can be OK, but again means modal latency behavior in the face of failures. Third, the broadcast behavior requires O(N^2) messages for N requests processed (on the assumption that the fleet size is O(N) too). This truly isn't a big deal at smaller scales (packets are cheap) but can become expensive at larger scales (N^2 gets steep). The related problem is that the protocol also introduces another round-trip for discovery, increasing latency. That could be as low as a few hundred microseconds, but it's not nothing (and, again, the need to optimize for happy-case latency against bad-case efficiency makes tuning awkward). 
Fourth, the dynamic behavior under load is tricky to reason about because of the race between "I can do this" and getting the work. You can be optimistic (not reserving capacity), at the cost of having to re-run the protocol (potentially an unbounded number of times!) if you lose the race to another source of work. Or, you can be pessimistic (reserving capacity and explicitly releasing what you don't need), at the cost of making the failure cases tricky (see the classic problem with 2PC coordinator failure), and reducing efficiency for popular resources (in proportion to the latency and popularity of the resource you're looking for). Slow coordinators can also cause significant resource wastage, so you're back to tuning timeouts and inventing heuristics. It's a game you can win, but a tough one. This needle-in-a-haystack placement problem really is an interesting one, and it's super cool to see people writing about it and approaching the trade-offs in designs in a different way.
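The wait-vs-provision tradeoff described above can be modeled as a toy function. This is a sketch under my own assumptions (names and structure are mine, not Lambda's or the posts'):

```python
def place_request(worker_response_delays: dict, claim_timeout: float):
    """Toy model of the broadcast step: work is offered to every worker, and
    whoever answers within claim_timeout seconds gets it. If nobody answers in
    time, we stop waiting and provision fresh capacity, possibly 'stranding'
    a perfectly good worker that was merely slow to respond.

    worker_response_delays maps worker name -> seconds until it would answer.
    """
    responders = {w: d for w, d in worker_response_delays.items()
                  if d <= claim_timeout}
    if responders:
        # The fastest responder wins the race for the work.
        return ("claimed", min(responders, key=responders.get))
    return ("provision_new", None)
```

Shortening `claim_timeout` lowers latency variance but strands more capacity; lengthening it does the reverse. Under load, response delays grow, so the same timeout strands more, which is the metastable feedback loop the comment describes.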
Nintendo is suing the creators of Switch emulator Yuzu
> As a result, Nintendo ... is demanding that the Yuzu emulator is shut down. When corporations like Uber violate multiple laws, do they get shut down? When Amazon treats its employees poorly, does it get shut down? When Google forbids manufacturers to pre-install competitor apps, does it get shut down? Well, it seems that as long as copyright is not infringed, everything is ok. Also it seems to me that Nintendo might themselves violate antitrust laws by using their monopoly power on market of Nintendo-compatible games, and not allowing enough competition there.
I turned my open-source project into a full-time business
> In any case, it changed years later when a startup using Nodemailer was acquired for half a billion dollars. I was financially not in a good place back then, and when I saw the news, I started to wonder – what did I get out of this? This is the root of most things like the BSL. You create an open source project or product, and companies with billions in quarterly revenue build the core of their business on your software, and meanwhile won't contribute to your ongoing viability (nevermind actual success) even in amounts that are entirely trivial for them. Toss the cloud providers into it now and it's even uglier.
Reducing our AWS bill by $100k
Considered bare metal at all? Also have a startup - we estimated our cloud bill would have been over $4k/mo+ but on bare metal we are running at about $200/mo
Disillusioned with Deno
I got thoroughly frustrated with Deno on a side project for similar reasons (leaky Node compat abstractions, kludgey interop between code that does IO the Deno way and Node modules, an immature Deno stdlib, undermaintained Deno libraries, spooky bugs). I lost a lot of time figuring out how to make things work the Deno way. Then I spent a day switching my codebase back to Node. I was struck by how much worse my code got. The Deno code made use of niceties like top-level await and import maps. I needed to resort to a bundler to dedup instances of Y.js between my backend and Lexical. The Deno libraries tended to have cleaner APIs (e.g. Oak vs. Koa). After I took a step back, I scrapped the Node rework and accepted my misgivings with Deno, for now. Going back to Node was kind of like playing an older game: the graphics were great at the time, but now that I've seen what a 2023 backend TypeScript codebase looks like, I don't want to go back.
Institutions try to preserve the problem to which they are the solution
I had the same experience at a large company. A guy had a very simple project. He came to me and asked for "help." I found an external vendor who specialized in solving that problem (building a basic product extension) and got it done in two weeks. When I gave him the solution, he immediately stopped talking to me and wanted nothing to do with me. It turned out he had gone to a VP and cleared a 50-person team to work on this problem. He had a weekly call with like 10 people (a "tiger team", he called it) to do nothing but this, and nine months later they released the solution and had a giant party. Everyone got credit, high fives all around. At that point I realized that work is a huge scam at large corporations. He was optimizing for a "promotable event" that "spreads the credit far and wide." Nothing to do with solving the problem efficiently.
Meta's new LLM-based test generator is a sneak peek to the future of development
> Maybe I've done too much TDD, but to me the tests describe how the system is supposed to behave. This is very much what I want the human to define and the code should fit within the guardrails set by the tests. People who work on legacy code bases often build what are called “characterisation tests” - tests which define how the current code base actually behaves, as opposed to how some human believes it ought to behave. They enable you to rewrite/refactor/rearchitect code while minimising the risk of introducing regressions. The problem with many legacy code bases is nobody understands how they are supposed to work, sometimes even the users believe it is supposed to work a certain way which is different from how it actually does - but the most important thing is to avoid changing behaviour except when changes are explicitly desired.
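A minimal sketch of a characterisation test in Python (the function and its quirk are invented for illustration):

```python
def legacy_discount(total_cents: int, code: str) -> int:
    """Stand-in for legacy code whose intended behavior nobody remembers."""
    if code == "VIP":
        return total_cents * 80 // 100
    if code == "vip":  # looks like a bug, but some caller may depend on it
        return total_cents * 85 // 100
    return total_cents

# Characterisation tests: they pin down what the code DOES today,
# not what anyone believes it SHOULD do.
def test_characterise_legacy_discount():
    assert legacy_discount(10_000, "VIP") == 8_000
    assert legacy_discount(10_000, "vip") == 8_500  # captured as-is, bug or not
    assert legacy_discount(10_000, "NONE") == 10_000
```

Once the current behavior is pinned, a refactor that breaks any of these assertions is a regression by definition, and an intentional behavior change shows up as a deliberate edit to the test.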
Meta's new LLM-based test generator is a sneak peek to the future of development
I find it interesting that generally the first instinct seems to be to use LLMs for writing test code rather than the implementation. Maybe I've done too much TDD, but to me the tests describe how the system is supposed to behave. This is very much what I want the human to define, and the code should fit within the guardrails set by the tests. I could see it as very helpful, though, for an LLM to point out underspecified areas. Maybe having it propose unit tests for underspecified areas is a way to look at what's happening here? Edit: Even before LLMs were a thing, I sometimes wondered if monkeys on typewriters could write my application once I'd written all the tests.
AWS CodePipeline
Internally at Amazon, Pipelines (which inspired this service) was a lifesaver. Apollo (which is the inspiration for CodeDeploy) was also helpful, but should probably just be replaced by Docker or OSv at this point. But if they ever release a tool that is inspired by the Brazil build system, pack up and run for the hills. When it takes a team of devs over two years to get Python to build and run on your servers, you know your frankenstein build system is broken. It could be replaced by shell scripts and still be orders of magnitude better. Nobody deserves the horror of working with that barf sandwich.
Ask HN: How many of you Apple developers still use Objective C?
I use Objective-C exclusively (no Swift) in my App Store apps. I wrote a Swift app for a hobby/free project a few years ago and regretted it. They changed the language and deprecated some of my code, which isn't easily replaceable without a significant rewrite. The project now compiles only in Swift 4 and will die when Swift 4 support is removed from Xcode. I see no reason to use Swift. The compiler is slower and buggier. The debugger is slower and buggier. C interoperability, while it exists in Swift, can be very painful. And I don't actually ship any bugs that Swift could have theoretically saved me from. I see no gain in switching. People have been telling me since 2014 that every line of code I write in Objective-C is "technical debt". I continue to laugh at them and ask them to compile Swift code they wrote in 2014. Of course if I had to get a job, it would be a different story, but I own my company, so I can do whatever I want.
Show HN: htmz – a low power tool for HTML
I'm the creator of htmx and think this is a great library/snippet. Much closer to what htmx-like functionality in HTML would/should look like in that it is following existing norms (iframes, the target attribute) much more closely than htmx. From a practical perspective, a lot of the bulk of htmx is bound up in things like history support, collecting inputs, a lot of callbacks/events to allow people to plug into things, etc. I expect a lot of htmx-like libraries will come out now that it has some traction: it's not a super complicated idea, and many of them will pick smaller, more targeted subsets of functionality to implement. That's a good thing: the ideas of hypermedia are more important than my particular implementation.
Adrianna Tan (@[email protected])
“Then [..] work for a tech giant for your whole life, get free kombucha and massages on Wednesdays.” And now, the dream is over. All that’s left is: work for a tech giant until they fire your ass, like those 12,000 Googlers who got fired last year six months after a stock buyback that would have paid their salaries for the next 27 years.
Daniel 🦔 (@DanielW_Kiwi)
A friend of mine is not a programmer, and he is building his accounting system with GPT-3.5 generating Python for him. AI is going to create lots of work for skilled programmers to undo those sorts of messes in years to come.
josef (@[email protected])
there's been almost twenty years of work on optimizing javascript engines with JIT and complex heuristic-based GC and a wealth of feature-rich profiling and analysis tools and validation and testing frameworks for deployment and integration and syntax improvements and functional and higher-order primitives and serverside transpiled code. and it's all enabled some amazing new stuff, for example github now takes 10 seconds to display a plain text file, and you cant search properly anymore
Rivers Cuomo is an active developer on GitHub
Interesting, he seems to primarily work on a Discord bot. This is at heavy risk of confirmation bias, but I believe that writing chat bots is one of the best ways for people to get into and enjoy coding, because it's fun and rewarding, and simple enough (with an existing framework to use) that it's all just strings. For a large generation it was MySpace and the ability to customize your page heavily with HTML. I know a number of people who learned HTML for that reason. Chat bots seem like the closest modern-day equivalent (despite the main platforms making it harder, e.g. making the real-time websocket difficult to connect to and forcing the use of webhooks). 10 years ago or so, when Slack was new and had a gloriously simple API, I even wrote a framework that made it as easy as implementing one function: you could receive messages (along with metadata like the username of the sender) as strings and send replies easily as strings. It served as an entry point for a few friends who had some fun with it and learned some Ruby in the process. Anyway, if you're looking to get into coding but want to do a "real" project (or something very rewarding), start by writing simple chat bots! If you need some ideas, these are simple:

1. Start with a simple echo bot that replies to every message with the same message it received.
2. Write a bot that responds to every message with a random number between 1 and 100. For a slightly increased challenge, have it do fizzbuzz, where the nth message received is the counter.
3. Write a bot that reverses whatever message it receives, so it echoes replies but backwards.
4. Write a bot that looks up a word when the message is "define <word>" and replies with the definition from one of the many dictionary APIs out there.

Go from there!
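Platform plumbing aside, each of the bot exercises above is just a string-in, string-out function; a sketch of the first few (function names are mine):

```python
import random

def echo_bot(message: str) -> str:
    """Idea 1: reply with exactly the message received."""
    return message

def random_bot(_message: str) -> str:
    """Idea 2: reply with a random number between 1 and 100."""
    return str(random.randint(1, 100))

def fizzbuzz_bot(n: int, _message: str) -> str:
    """Idea 2 variant: fizzbuzz, where n counts the messages received so far."""
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)

def reverse_bot(message: str) -> str:
    """Idea 3: echo the message backwards."""
    return message[::-1]
```

Wiring any of these up to a real chat platform is then just "call the function on each incoming message and send back the return value", which is exactly why bots make such a gentle on-ramp.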
Heather Buchel (@[email protected])
I see a lot of folks making the argument that yeah, a lot of layoffs are happening, but it's due to irresponsible hiring. And yeah, it's true that you shouldn't hire a bunch of engineers that aren't doing anything; which usually isn't the case. I think people don't understand just how unbelievably BAD the tech grift is right now for people in leadership positions. It's not just that they hired a lot of people, it's that they also tasked them with working on the wrong things.
Apple built iCloud to store billions of databases
Sadly I never got to work on this when I was at Apple (interviewed for it though!), but hearing about this a few years ago sort of made me realize something that should have been obvious: there’s not really a difference between a database and a file system. Fundamentally they do the same thing, and are sort of just optimizations for particular problem-sets. A database is great for data that has proper indexes, a file system is great for much more arbitrary data [1]. If you’re a clever enough engineer, you can define a file system in terms of a database, as evidenced by iCloud. Personally, I have used this knowledge to use Cassandra to store blobs of video for HLS streams. This buys me a lot of Cassandra’s distributed niceties, at the cost of having to sort of reinvent some file system stuff. [1] I realize that this is very simplified; I am just speaking extremely high level.
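A toy version of the "file system in terms of a database" idea, using SQLite as a stand-in for Cassandra and chunking blobs by (path, chunk index). The schema, names, and tiny chunk size are all illustrative assumptions, not anyone's production design:

```python
import sqlite3

CHUNK = 4  # tiny chunk size for the demo; a real system would use KB/MB chunks

def make_store() -> sqlite3.Connection:
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE chunks (path TEXT, idx INTEGER, data BLOB, "
               "PRIMARY KEY (path, idx))")
    return db

def write_file(db: sqlite3.Connection, path: str, data: bytes) -> None:
    """'Write a file' = replace that path's rows with fixed-size blob chunks."""
    db.execute("DELETE FROM chunks WHERE path = ?", (path,))
    for i in range(0, len(data), CHUNK):
        db.execute("INSERT INTO chunks VALUES (?, ?, ?)",
                   (path, i // CHUNK, data[i:i + CHUNK]))

def read_file(db: sqlite3.Connection, path: str) -> bytes:
    """'Read a file' = reassemble the chunks in index order."""
    rows = db.execute("SELECT data FROM chunks WHERE path = ? ORDER BY idx",
                      (path,))
    return b"".join(row[0] for row in rows)
```

Swap SQLite for a distributed store and you inherit its replication and partitioning for free, at the cost of reinventing file-system chores (directories, permissions, partial writes) yourself, which is the trade the comment describes.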
Confession: Love my employer, but they don’t pay great.
I’ve found that the more a job pays, the better you’re usually treated.
Steve Purcell (@[email protected])
Some folks think that quietly working around unexpected cases is "defensive programming", but real defensive programming is failing fast and loudly when something seems off.
NixOS: Declarative Builds and Deployments
Having been on NixOS exclusively for a couple of years now, it's inconceivable for me to go back to a non-declarative OS. It would be analogous to going from Git back to unversioned source code. My operating environment is a piece of compiled software itself now, and is remarkably reliable and predictable. Yes, it's difficult to learn and takes more work, but it's similar to Git in that respect: powerful tools are worth the effort if it's your vocation.
Why are Apple Silicon VMs so different?
Back when I ran Windows in a KVM VM for gaming, a lot of anti-cheat systems didn't take kindly to running in a virtualized environment. Turning on HyperV to go KVM->HyperV->Windows effectively 'laundered' my VM signature enough to satisfy the anticheats, though the overall perf hit was ~10-15%.
Why are Apple Silicon VMs so different?
Doesn't Windows do it more or less the same? A lot of Windows features depend on Hyper-V, once enabled Windows is not booted directly any more, Hyper-V is started and the main Windows system runs in a privileged VM. All other VMs need to utilize the Hyper-V hypervisor, because nested virtualization is not that well supported. So even VMware then is just a front-end for Hyper-V.
Thomas 🔭✨ (@[email protected])
𝓖𝓵𝓸𝓼𝓼𝓪𝓻𝔂
Blockchain: a slow database
Crypto: an expensive slow database
NFT: an expensive slow database to store URLs
AI: a way to write slow and inefficient algorithms
LLM: a database that stores text in a slow and inefficient way
Chat GPT: an expensive imprecise query language for slow and inefficient text databases that often returns wrong results
Khalid ⚡ (@[email protected])
I wonder if the “Right to Repair” laws will extend to software. At some point, software gets abandoned, and maybe codebases should be made OSS after a certain period. This will become more critical as vehicles are more integrated with software and services that will one day shut down.
Mitchell Hashimoto (@mitchellh)
For open source in particular, the imbalance that always made me sad was when an issue reporter spends 30 seconds writing an issue that's going to take a maintainer (working for free) hours, days, or weeks to resolve and maintain, then gets mad when it's not fixed quickly. 🤔
Ron Gilbert (@[email protected])
There is no capturable metric for enjoyment. What products/games can capture is engagement, and that is then misinterpreted as enjoyment. There are a lot of products I engage with because I want what they produce, but I am unhappy.
27 years ago, Steve Jobs said the best employees focus on content, not process
Corporations, at their root, are an arbitrage on the fact that other corporations follow the bell curve. The entire goal of salaries and "teams", in my experience, is to ENSURE that high performers get diluted and averaged in with mediocre performers so the company can pretend the high performers don't exist. This was my experience at (large co). I have seen situations where a single IC is dragging a division of 30 people yet still being compensated for doing the work of one IC. Management of that group took the approach "it's a team effort!" and gets the credit for that output. Their boss looks down and sees a Director managing 30 people and getting amazing result X, where X is 90% the effort of the one superstar.

Eventually the superstar gets fed up and leaves, and gets paid what everyone else gets somewhere else, "hoping to be valued." Management still wins. They got the credit for the superstar's work. The frustrated superstar is gone. Mediocre management is still there. A decade later, nothing but the WORST and LEAST talented garbage are left. No one remotely talented would ever join that company, because it's a trap: you just get averaged in with mediocrity.

Averaging the great in with the mediocre to reduce the relative power of the spectacular is the entire point of "management." You have a team of six, pretend the work of the superstar is "everyone working together," and attempt to grow your headcount off that superstar. That has been my entire career. Never seen it go differently.
Mitchell Hashimoto (@mitchellh)
Nothing at all wrong with taking a job just for financial reasons, but people who are passionate about it consistently produce better results. This expands beyond software/hardware, too; e.g. for housework I try to always hire the people who are "weirdly" into their craft. When I was replacing my garage door (a non-standard, oversized, very heavy door), I met with a few different people, and one guy talked to me for an hour about all the different options for mounting garage doors. I just couldn't get him out of my garage; he kept commenting on different aspects of garage doors for no reason at all. Despite his being pretty annoying, honestly, I hired him because he seemed weirdly into his craft, and he did a perfect job, as expected.
Amazon's Silent Sacking - Justin Garrison
My FAANG-adjacent company is following the exact same practices. The goal is to "manage out" without paying a severance. They do this by making people miserable: fake PIPs, constant blaming, putting everything on "performance", etc. My coworker got fired this way, but I learned something amazing from him: his management was ready to cull him as soon as his project finished. This guy quickly figured this out, and instead of quitting, he essentially stopped working hard. Then he started giving fake status reports, leading management to believe that work was getting done. One fine day, he was let go. But management was left picking up the pieces after his departure. With few engineers around, it led to lots of outages. Suffice it to say, my company is losing B2B customers because it decided to fire the people who were keeping the services up and running.
It's not microservice or monolith; it's cognitive load
The moment you adopt service based teams with service based managers, say goodbye to engineers caring about working product. Say hello to cross team meetings and project management every time you want to ship a feature. It's pure vanity for a startup to think they will become the next AWS by adopting hard service-based contracts between teams.
Meta censors pro-Palestinian views on a global scale, Human Rights Watch claims
One day, there will be a biggish war, and comms between the sides will be cut off, either by law or with all comms links severed. At that point, every global business will have to figure out how to split their business in two, their database in two, their server infrastructure in two, etc., and have both halves work. I suspect that few businesses have prepared for such an eventuality, and I suspect severe disruption can be expected when it happens. For anyone designing something new today, the main prep you can do is to never use sequences for database IDs. By using random IDs, you can mostly let your replicated database partition itself, and a later merge isn't too hard as long as most users have been operating on only one side of the divide.
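The random-ID advice fits in a few lines: v4 UUIDs minted independently on both sides of a partition will (for all practical purposes) never collide, so the post-partition merge becomes a plain union. Helper names here are hypothetical:

```python
import uuid

def new_record_id() -> str:
    """A random (v4) UUID can be generated independently on either side of a
    partition; a sequence would hand out the same numbers on both sides."""
    return str(uuid.uuid4())

def merge_after_partition(side_a: dict, side_b: dict) -> dict:
    """With collision-free IDs, re-merging the split database is a plain union
    of the two halves' rows, keyed by ID."""
    merged = dict(side_a)
    merged.update(side_b)
    return merged
```

With sequences, both halves would allocate ID 1001 to different rows and the merge would need rewriting of keys and every foreign reference to them; with random IDs, that whole class of conflict disappears.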
Andreas Kling (@awesomekling)
One of my favorite things about working at Apple was getting to write native code for a very small set of target devices. It's so much easier to write good software when you know exactly what kinds of environments it will run in. And any device that needs to run your code can be on your desk within a few hours (unless you are remote 😅) IMO this is the main ingredient in the Apple "secret sauce" for software quality. 🤓🍎
Why do programmers need private offices with doors?
Not to create too much of a political tangent, but the lack of private offices, or rather the ubiquitous mandatory nature of open offices, and its universal unpopularity with non-managers, is evidence imo that tech could use a union or professional association or guild or something. Oh sure you can talk about high comp and career mobility. But where does all of the vaunted labor power of software engineers go when they ask for something as simple as a cubicle?
Why do programmers need private offices with doors? (Do Not Disturb)
I particularly enjoyed a recent company meeting that spent considerable time talking about the importance of flow state. It had an awkward pregnant pause when someone (usually very quiet) unmuted to ask, "is the policy to increase the number of days we must spend in our open-plan office kind of undermining this?". Literally all of our directors just shifted on their seats hoping another would answer that. Eventually, HR director stated "Not at all, that's what headphones are for!" Which was particularly delightful, as our tech director had only 20 minutes before stated how he would like to discourage people sitting in the office in silos with their headphones on.
Oxlint – Faster than ESLint
Would you consider removing your customisations to be closer to the workflows supported by these tools? One of the great things about go is that you're free to have an opinion, but if you disagree with go fmt or go build, your opinion is wrong.
Etsy is laying off 11% of its staff
> I appreciate that our industry has developed a norm of generous severance

What if you had unions or proper labor laws that actually guaranteed this? As a European, it's weird to see you having to rely on the "kindness" of the company to not get fcked over.
Avdi Grimm (@[email protected])
For a long time I had trouble with the concept of unionizing programmers, because it felt like we're already so privileged compared to blue-collar workers. What I realize now is that all that privilege vanishes like a mirage the moment people start talking about unionization. Or the moment interest rates go up.
Ramin Honary (@[email protected])
@Pitosalas yes, it is a pity that #Python won the popularity contest. It started out as a great language for teaching coding. Academics all thought, "Well, let's start them out with an easy language; once they learn one language, they can learn other languages more easily. They will learn the correct language for whatever job they need to do once they get a real job." But as it turns out, people want to use only the first programming language they ever learn, for everything. So they use their favorite (first) programming language for everything, even for tasks the language is not well suited to. So now we have an industry-wide situation where, instead of people learning the correct programming language for each task, they just try to hack and kludge Python until they can use it to solve any possible #programming problem. Now Python is used in many thousands of codebases where it is quite probably the worst choice of programming language for the task that code is trying to solve. This is especially true of machine learning and AI.
Launch HN: Slauth (YC S22) – auto-generate secure IAM policies for AWS and GCP
Not to knock on the OP but in general, if you are doing a startup in 2023, you cannot do it without AI, otherwise no one will take you seriously. I am not joking. AI is the new Gold Rush that blockchain used to be. Personally, I do think that AI is awesome and has a lot of great use cases but unfortunately, most VCs/investors are looking for that keyword if you wanna get funded, so I feel a lot of startups are forcing AI into their stuff.
Alex Russell (@[email protected])
A decade ago, a tribe of JS partisans took the web by the reins, forked HTML and JS syntax, and yeeted userland abstractions into the critical path because "a better user experience". This was premised on the idea that everyone's CPUs/networks would get faster the way their top-end phones did. They could not have been more wrong. JS-first web development has been a planetary-scale exercise in the rich making life harder for the less well-off.
Тsфdiиg (@tsoding)
"Users don't care about performance" They do. They just rarely have a choice. Give them to choose between a slow and a fast app and you will be blown away by the results. The problem is that all apps these days are a slow, buggy crap. It's you who don't care. Not them.
Too many terrible engineers… especially at big tech.
LC should never have been a metric used in choosing a candidate. It's like judging how good a worker they are by whether they can solve a Rubik's cube. Some people may be super fast at solving a Rubik's cube and some people haven't solved one even once in their life. But neither tells the employer how smart they are or how good a fit they are on the team.
company is ending their 401k match, how should I respond?
Great sentiment but it is all for naught. I wish it was different. Nothing said in an exit interview will help coworkers. Those informed by the interview have no voice. Also, those that decided to terminate the match don't value employee retention metrics. Only when the resignations have an impact that hiring can't resolve will they be motivated to do anything differently.
No Should be Your Default - cat /dev/brain
The reality is, if you have created a free/open source piece of software, you are putting it out there for people to use as is without any requirement of you to fix anything or accept any improvement. People have become conditioned (and one might say entitled) by GitHub's "social" model of coding to believe that all bug fixes must be accepted and all feature requests are inherently reasonable and should be accepted.
No Should be Your Default - cat /dev/brain
With software, it's very hard to deprecate or remove a feature and so your "Yes" must be closely guarded. While the wisdom of this was immediately obvious to me, the longer I've thought on this the more I realize it applies to more and more aspects of software - open source, professional development - and also life in general.
You do need a technical co-founder [video]
This is what I constantly tell my students: the hard part about building a tech product mostly isn't what beginners think makes tech hard; the hard part is wrangling systemic complexity in a good, sustainable and reliable way. Many non-tech people, for example, look at programmers and think the hard part is knowing what this garble of weird text means. But that is the easy part. And if you are a person who would think it is hard, you probably don't know about all the demons out there that will come to haunt you if you don't build a foundation that helps you actively keep them away.
I worked in Amazon HR and was disgusted at what I was seeing with PIP plans
> But I had a huge stock investment coming up. So there was no way I was going to rock the boat in any way, shape, or form just trying to get to this date. So the employer has a financial incentive program to encourage people to stay in the organization long term, and some hyper-rational VP repurposes that reward as a kind of tenure cliff forcing people out just ahead of it? All the pieces are in the article, just waiting for folks to put them together. If you're someone considering moving to a company that aggressively uses "performance management" like this ... the target of this system is you, not because you're bad at your job but because you're new. The human toll of people in positions of trust essentially gaslighting their colleagues about their performance to confiscate special comp or satisfy the gods of analytics.... Deeply misanthropic.
Labor unions are pushing hard for better pay and hours – and winning
Software engineers often have a strange attitude, thinking they don't need unions, when pro athletes who make $millions per year all have unions. There's a myth that with unions, everyone will make the same compensation, but that's not even remotely true in pro sports. The reason to have a union is that no matter how much money you make, the owners of the business have more money and power. Collective bargaining is a counterweight to the power of ownership. And it's not just about money, it's about working conditions. For example, labor unions could fight back against back-to-the-office demands, whereas without a union, employees are forced to individually consent or lose their job.
Do people have a problem with the GNU Affero v3 license?
The AGPL is often misunderstood. Here is my 3 bullet point AGPL: * Users interacting with a modified AGPL program over a network must be offered the source of the program. * Unmodified AGPL programs don't have such source-providing requirements. * Otherwise, GPL rules apply. Programs that access AGPL programs over a network don't become AGPL themselves. People understand that the GPL is "viral", so they assume the AGPL is "network viral" when it's not. The virality rules are the same as the GPL's. Once you understand this, the AGPL is actually fairly narrow. E.g. if you use an AGPL database, it probably doesn't matter, because: * Users don't access the database directly, so you don't have to provide them source. * You probably don't modify the AGPL code, so you don't have to provide source. * Any modifications you would make are fairly minor and you could just offer the source. That said, the institutions that rule your life don't understand this. E.g. some "creative" MongoDB lawyer probably scared your company's investors/CLO/CEO before, so they will insist that you don't use any AGPL code for anything.
Beto Dealmeida (@[email protected])
Thirty years of Internet have shown me that: 1. Self hosting is always worth the trouble. 2. Open source is always a better option in the long run.
Developers are not happy with .NET MAUI, but nobody in the team cares about it
Three decades of experience talking here: If you don't see a vendor like Microsoft using a GUI framework for at least 50% of their new applications, then you've made a terrible mistake in adopting it yourself. Microsoft builds its own applications on Electron, a framework built on a competitor's engine: Google's Chromium. Yes, you heard me right. Microsoft, a nearly 3 trillion dollar company, builds its desktop applications on a competitor's technology. Teams: Electron. Visual Studio Code: Electron. Azure Data Studio: Electron. Now, let's make a similar list for Microsoft MAUI apps! Umm... err... hmm...
Ansel | Darktable : crashing into the wall in slow-motion
Darktable has become the high-school computer club, where geeks have their fun. Overall it's a summary of all the worst stories of IT companies, with the difference that the project doesn't make a penny, which makes it urgent to ask ourselves why we impose this on ourselves: there are no profits to share, but everybody shares the costs. It's a chaotic and toxic working environment which would manufacture nothing but burnout if the part-time amateurs were bound to deliver results and had to work full-time. Being the only full-time dude on it, I'll let you imagine the amount of stress and energy lost staying up to date with the permanent cacophony, only to be sure not to miss the 2% actually relevant to me in the amount of noise produced by unregulated discussions.
Ansel | Darktable : crashing into the wall in slow-motion
It’s simple: the work done costs more and more work, and maintenance is not assured, as the decline in closed issues shows, because it’s simply too much. In a company, this is the moment where you need to stop the bleeding before the vaults are empty. But a team of amateurs bound to deliver no result can sustain an infinite amount of losses. Except the work created by the work gets more tedious, frustrating and difficult as time goes by, and end-users are taken hostage by a gang of self-serving pricks and will pay for it in terms of GUI complexity, needless CPU load, and the need to relearn how to achieve basic tasks with the software at least once a year.
Ansel | Darktable : crashing into the wall in slow-motion
Software implies development, maintenance, documentation and project management. That’s several layers of overhead atop the previous one. Yet the fact that the manpower in open-source projects doesn’t ask for compensation should not obscure the fact that the time spent (lost?) on the software, its use, its development, its maintenance, is in itself a non-refundable cost.
Ansel | Darktable : crashing into the wall in slow-motion
We are photographers. The fact that we need a computer to do photography is a novelty (20 years old), linked to the digital imaging technology which replaced, for all sorts of reasons (good and bad), a 160-year-old technology that was known and mastered. In the process, the fact that we need a computer and software to produce images is pure and simple overhead. Forcing people who don’t understand computers to use them to perform tasks they could perfectly well manage manually before is also a form of oppression, and dressing it up as technical progress is a form of psychological violence.
Ansel | Darktable : crashing into the wall in slow-motion
The true problem with this kind of code is that you can’t improve it without rewriting it more or less entirely: to fix it, you first need to understand it, but the reason it needs to be fixed is precisely that it’s not understandable and is dangerous long-term. We call that technical debt. In short, all the work invested in this feature will create extra work, because it is unreasonable to keep that kind of code in the middle of a code base of several hundred thousand lines and expect it not to blow up in our face one day.
Ansel | Darktable : crashing into the wall in slow-motion
Programmers understand what I’m talking about; for the others, just know that I don’t understand any better than you what this does: it’s shit code, and if there aren’t several bugs hidden in there, it will be pure luck. Hunting bugs in this shithole is archaeology at the bottom of a sewer, all the more so considering that Darktable has no developer documentation and, in the absence of meaningful comments in the code, any modification of the aforementioned code will necessarily start with a reverse-engineering phase that becomes harder and harder as time goes by.
Ansel | Darktable : crashing into the wall in slow-motion
In this story, everyone loses time thanks to an interface design that tries to be so flexible that it can’t be made safe and robust by default. In the human body, every joint has some degrees of freedom along certain axes; if every joint could revolve 340° around each axis of 3D space, the structure would be unstable from being too flexible, and unable to bear high loads. The metaphor holds for industrial software. We swim in the FLOSS cargo cult, where people love to have the illusion of choice, that is, being offered many options of which most are unusable or dangerous, at the expense of simplicity (KISS), and where the majority of users don’t understand the implications of each option (and don’t have the slightest desire to understand).
The Failed Commodification Of Technical Work — Ludicity
I think every engineering manager has either worked for or interviewed with a company that believes this stuff. Software dev is still at the craftsman[0] level. It might move out of that, eventually. But not yet, and probably not in the next 20 years or so. We haven't solved some intrinsic problems around defining a problem completely, precisely and succinctly without having to write code[1]. And getting five engineers to write a single piece of software is exactly as complex as it was when Fred Brooks wrote about it, I think the only improvement we've had since then is Git. [0] craftsperson? that doesn't feel like the right gender-neutral expression. I guess "artisanal" but that looks rude. Suggestions? [1] The "I got ChatGPT to write this application without writing a single line of code" phenomenon is interesting, but it seems like an alternate skill path - you can write code, or you can write prompts. The complexity is the same, and the amount of effort is within an order of magnitude. I'm not sure, though - I haven't managed to get ChatGPT to solve a single technical problem successfully yet.
The Failed Commodification Of Technical Work — Ludicity
Programming is still a craft, not engineering, or manufacturing. A software house should work like bespoke tailoring, or fine cabinetry, or glass blowing. There's still no better training for programming than the equivalent of master/journeyman/apprentice. Apologies for the gender specific terms, but they are specific to how tradespeople operated from medieval times. The worst thing to ever happen to the practice of business is the invention of the MBA. MBAs are imbued with the misleading axiom that management is a craft and science of its own, independent of the type of process or practice that is being managed. Combined with endless selling of the latest buzzword theories by consultants is why we end up with JIRA-Driven-Development, nonsense like t-shirt sizes, 2 hour wankfests called "Sprint Reviews", let alone all the scrumming and standing-up and backlog-refining and endless make work.
The Failed Commodification Of Technical Work — Ludicity
One key problem is nobody, none of the suits anyway, want to believe that there are essential, hard problems that can't be outsourced, can't be commodified, can't be shortcut in any way. It's the business version of the get-rich-quick scam course hucksters. The truth that there's no silver bullets can't compete.
The Failed Commodification Of Technical Work — Ludicity
There are harsh realities to grapple with, and society runs on commodification, but anyone that thinks that you can run on pure commodification, without any understanding of their specific craft, or the human complexities, needs, and frailties of the people around them, who think that you can just buy more enterprise licenses and that giving someone a salary is enough reason for them to subjugate the entirety of themselves as they turn up to work every day... Well, you're wrong, and you can fucking bite me.
The Failed Commodification Of Technical Work — Ludicity
They just buy a license for bad software, say that they've successfully implemented it since no one can really check, then leave before the ambient hatred radiating off the people forced to work with the system reaches a level that they can't tolerate.
The Failed Commodification Of Technical Work — Ludicity
"You can get rid of thousands of lines of all that SQL you hate!" - no I can't, fucko, because your application is still connecting to Postgres so it's just writing the SQL for me with another layer of licensed abstraction on top of it. Why would I pay to have more abstractions designed for you to sell software to multiple clients, you blue-suited dementor? Eight times out of ten, I want to pay you to remove them from my codebase.
Mitchell Hashimoto (@mitchellh)
10x engineers blah blah blah who cares. The engineers that impress me the most are the ones that fearlessly dive into code, domains they know absolutely nothing about and end up producing amazing work in a short amount of time anyways. Truly awe-inspiring every time I see it.
We have reached an agreement in principle for Sam to return to OpenAI as CEO
Developers are clearly the weak link today; they have given up all power over product, and it is sad, and it is why software sucks so badly. It pains the soul that value creators have let the value extractors run the show, because it is now a reality-TV, circus-like market where power is consolidating. Developers and value creators with power act as an anti-trust check on consolidation and concentration, but they have instead turned towards authoritarianism rather than anti-authoritarianism. What happened? Many think they can still get rich; those days are over, because they gave up power. Now quality of life for everyone, value creators included, is worse off. Everyone loses.
Alda Vigdís :topspicy: 🧱 (@[email protected])
Move slow and mend things.
A coder considers the waning days of the craft
I was on a team developing a critical public safety system on a tight deadline a few years ago, and I had to translate some wireframes for the admin back-end into CSS. I did a passable job but it wasn’t a perfect match. I was asked to redo it by the team lead. It had zero business value, but such was the state of our team… being pixel perfect was a source of pride. It was one of the incidents that made me stop doing front-end development. As an exercise, I recently asked ChatGPT to produce similar CSS and it did so flawlessly. I’m certainly a middling programmer when it comes to CSS. But with ChatGPT I can produce stuff close to the quality of what the CSS masters do. The article points this out: middling generalists can now compete with specialists.
Proposal: an HTML element for spoilers - Seirdy
Ad companies probably won’t want to hide ads containing sexual/erotic, anxiety-inducing, or shock content behind spoilers; they profit from what spoilers protect against. Sites with such ads probably won’t benefit from hiding such content behind spoilers if ads are exempt. A good solution would be for ads to identify themselves as such along with the psychological weaknesses they prey on (porn addictions, anxiety, eating disorders, gambling addictions, etc.) so that user-agents could selectively or globally block them. For some reason, I don’t think adtech companies would like this very much. More research is required to find a form of basic compassion that allows dominant advertising business models to exist.
An Overview of Nix in Practice
I must say that as far as I'm concerned, Nix is very much emblematic of simpler technology. Nix tries to solve problems (such as build impurities) very much by trying to fix things at the source (and contributing back to upstream!), rather than slapping things that work on top of each other, mindlessly using magic like containers. That's one of the reasons why I adore nixpkgs, that they have heroically attempted to fix problems in packages at their sources, and have very much succeeded.
Goodbye Spotify
If HN were a Spotify client, the add-comment button would randomly shift position on every visit and have one of these as its text: create, send, publish, add to thread. And sometimes if you clicked it, nothing would happen.
1 request per view enjoyer (same thing) (@htmx_org)
The US government can't legally pay the salaries required to find AI experts right now (just look at the salaries on the open positions). So all this work will end up being done by contractors who _can_ pay those salaries while the contractors skim off the top. The US really needs a special schedule for software developers if they want technical expertise in house (and they really should have technical expertise in house)
Giving a Shit as a Service (2022)
why is everybody so obsessed with scale? the good things in life don't scale, like relationships. the dream of scaling everything to a 1 billion business is just greed talking. caring about a craft and building relationships will be more fulfilling than a billion in the bank
Giving a Shit as a Service (2022)
The Forks episode of The Bear is one of my favorite stories on giving a shit: a fork polisher at a fancy restaurant learns you don’t work in food service because you love polishing forks. You do it because you want to bring people joy and polishing the forks is one of many steps to that end. You can find purpose in fork polishing through both excellence and empathy for your customers. There’s a lot of days the code I write is about as exciting as fork polishing, but you do it for your teammates and users.
Giving a Shit as a Service (2022)
I know a lot of people here are writing about how this can be done for small consulting companies, but I also saw it in Big Tech. Amazon until 2022 really genuinely exemplified this. I saw it for more than a decade leading up to this. Just an unbelievable collection of people that truly Gave A Shit. Publicly we called it "Customer Obsession" and through that lens you could move mountains around here in the pursuit of Doing The Right Thing. The first sign of trouble was 2021. Salaries skyrocketed in the industry. Amazon didn't keep up. A lot of great people left because they got obscene offers, and you know, who could blame them? Our core of "intermediate" engineers (L5 here) got decimated - why bust your ass for a promotion when you can just get a Senior offer from one of 100 over-funded Unicorns for more money than you would've made here. Sensible. Then in 2022 the stock price dropped in half, and a bunch of folks who, it seems, were only putting up with the bullshit as long as the stock grew indefinitely left too. Then 2023 brought layoffs. There's still a lot of us around that Give A Shit, but I feel like we are outnumbered more and more by those that just want to punch in and out and no longer Make History. I get it. I can't blame anyone individually. But I miss it.
What is your typical team's approach to tickets that were underestimated?
We don’t. Because the best engineering organizations in the world have long ago learned that trying to estimate work to this granularity is an exercise in futility. The most important thing to hedge against variability in estimations is to derisk projects as much as possible. The moment you find that a project is much harder than expected you pour more resources into figuring out a worst case, likely case, best case scenario. That helps you figure out planning. Also make it a point to focus on identifying cross team dependencies asap. It’s possible other teams don’t have the resources or need to rearrange priorities to make it happen. Last thing you want is to be totally blocked on one or more teams. Have a backup. Source: ex Google and Netflix tech lead, Engineering manager, and now director at a “FAANG” adjacent company. Helped ship major initiatives you’ve likely used.
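The worst-case / likely-case / best-case split this commenter describes is essentially three-point estimation; one common way to combine the three numbers is the PERT formula. A minimal sketch (the function name and sample figures are illustrative, not from the comment):

```python
def pert_estimate(best: float, likely: float, worst: float) -> tuple[float, float]:
    """Classic three-point (PERT) estimate: expected value and rough spread."""
    expected = (best + 4 * likely + worst) / 6   # likely case weighted 4x
    std_dev = (worst - best) / 6                 # crude measure of uncertainty
    return expected, std_dev

# A derisked task guessed at 2 days best case, 5 likely, 14 worst:
expected, spread = pert_estimate(best=2, likely=5, worst=14)
print(expected, spread)  # 6.0 2.0
```

The wide worst-case tail is doing the work here: it pulls the expected value above the "likely" guess and makes the uncertainty explicit instead of hiding it in a single point estimate.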
WiFi without internet on a Southwest flight
When my son was younger - maybe 9 or 10 or so, we were on a plane and he was using his phone and I looked over his shoulder and realized he was on the internet... but I hadn't paid for an internet plan. I said, "son, how are you using the internet?" He said, "oh, a kid at school showed me - if you go here" (he opened up the wifi settings where the DHCP assigned IP address is) "and start changing the numbers, eventually the internet will work." Apparently, at the time, on American Airlines, when somebody bought and paid for an internet plan, it gave them an IP address and authorized it to use the internet... if somebody else guessed your IP address (which was pretty easy, it was a 192.168 address) and spoofed it, they could take over your internet connection with no further authorization. I had to tell him not to do that, but I was kind of proud of him for having the temerity to go for it.
Are we wasting our time with these LLMs?
In the early days of the "world wide web" internet, we made geocities websites, used apache everywhere, had websites with crappy javascript that never worked because everyone turned it off, etc etc. Was that a waste of time? Hell no, because we learned a lot. The folks who got to do that had huge benefits in understanding how a lot of things worked under the hood simply because they played with it, not because they studied it. A lot of people learned to program because of the old MUDs. We're learning how this crap works. Those of us tinkering with this stuff are learning about data science, neural networks, understanding prompts and weights, getting a look under the hood at how generative AI is generating, etc. And some of us (not me lol) are enterprising enough to find ways to make money doing so. I have no doubt some folks will be getting mega-rich who are playing with this now. Not one minute of this is being wasted. Keep having fun. Keep fine-tuning. Keep learning. The knowledge will only help you down the road.
I finally got to demonstrate "fearless refactoring" in action!
And here's the big thing: mypy doesn't actually statically type check Python. You cannot statically type check Python, because even if your entire codebase and all your dependencies make ubiquitous use of type hints, your entire program is still 100% dynamically typed. Type hints and static types are completely orthogonal concepts. What mypy statically type checks is really its own language: a statically typed language whose type system is incompatible with Python's. Therefore the mypy type system has to be insanely permissive, or it would mark so much correctly typed Python code as invalid as to render it effectively unusable. In addition, the Python and mypy type systems have extremely low expressive power compared to Rust's type system, rendering most of what a Rust API encodes about the domain impossible to encode in a Python API, or at least impossible to encode nearly as well. All of this means that mypy only barely gives an inkling of a taste of the advantages of the Rust type system.
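A minimal sketch of the permissiveness being described, using `typing.Any` (the `load_config` helper is hypothetical): once `Any` enters the picture, mypy stops checking entirely, while the runtime never consulted the annotations at all.

```python
from typing import Any

def load_config() -> Any:   # Any is the escape hatch mypy must allow everywhere
    return {"retries": 3}

cfg = load_config()

# mypy accepts this assignment without complaint: Any is compatible with
# every type, so checking silently stops at the Any boundary.
port: int = cfg

# The runtime never looked at the annotation, so port is still a dict.
print(type(port).__name__)  # dict
```

A stricter checker that rejected `Any`-to-`int` assignments would flag enormous amounts of working Python, which is exactly the trade-off the comment is pointing at.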
It's okay to Make Something Nobody Wants
Products seem to be made for users, but I think this might be an illusion; they are more like a medium for self-expression. Different expressions, conceived by various minds, undergo a form of natural selection, with the surviving expression being the one that resonates most with users. I mean, the process unfolds like this: you create something not because “I think they might need this,” but because “I find this so fucking interesting.” Then, when others use your product, they feel the same emotions you did, and they say, “I find this so fucking interesting.” From this perspective, a product is like a container for emotions; the creator puts them in, duplicates it a thousand times, and users extract them from the product. You can’t be devoid of emotion and expect users to experience emotion after using it.
Mitchell Hashimoto (@mitchellh)
My favorite part about installing Windows is seeing the 47 different progress bar animation styles and font styles before first boot. It really sets the tone for the entire Windows experience.
I never want to return to Python
Python doesn't have static typing. It has type hints, which are... more like guidelines than actual rules. And last I checked, there were still many places in the standard library that didn't have type hints. Things like Python type hints and TypeScript will forever baffle me. They seem to me like putting an airbag on a motorcycle. If you want safer transportation, _just don't use a motorcycle._ (TypeScript at least has the rationale that JavaScript has long been the only option in a browser environment, but WASM is starting to open the door to viable alternatives.)
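"More like guidelines than actual rules" is literal: CPython stores annotations as metadata on the function object and never checks them. A small sketch (the `greet` function is a made-up example):

```python
def greet(name: str) -> str:
    return f"Hello, {name}"

# The interpreter records the hints but never enforces them:
print(greet.__annotations__)  # {'name': <class 'str'>, 'return': <class 'str'>}

# Passing an int violates the hint, yet runs without a murmur:
print(greet(42))  # Hello, 42
```

Any enforcement has to come from an external tool like mypy at analysis time, which is the gap the airbag-on-a-motorcycle metaphor is poking at.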
Ray (@raysan5)
For the last 12 years MANY gamedev educational institutions have focused their courses on specific game engines (mostly Unity), many of those students, now devs, will need to learn other low-level technologies in a hurry, it could be tough. Schools should learn a lesson from this
𝕤𝕠𝕗𝕚 (@sincerelysofi)
feeling of shame when you are unable to adjust to having a top panel on your desktop (as in Mac OS) or no panel at all (like in twm) because you subconsciously view bottom panels as pedestrian / Windows-like
Ask HN: Why did Visual Basic die?
Visual Basic is one of the best arguments for open source and community ownership in the history of computing, IMO. Microsoft's decision to tank it was hugely painful for companies that had made major investments in it -- no company should make that kind of investment in a proprietary platform that can be killed off by a single company and not forked and maintained by others.
(@mycoliza)
kinda feels like the main difference between Zig and C is that people who write Zig are actually *choosing* to have a bad time, while a lot of people write C because they're kinda forced to
Paul Butler (@paulgb)
When I get frustrated with Rust and work through it, I come out feeling like I learned a new fundamental truth about the universe. When I get frustrated with JS and work through it, I feel like I spent so long at a carnival game that the operator gave me a toy to get rid of me.
Teaching with AI
Those who really desire to understand how things work will be undeterred by the temptation of AI. There are two types of people: those who care to know and really understand and those who don’t. Should we really force people, past a certain point, to care when it’s clear they don’t and are only doing something because they are forced to? I would argue that people should spend more time on the things they truly care about. That’s the critical difference; when you care about something and get enjoyment and satisfaction out of it, you want to understand all the fine details and have a thirst for knowledge and true insight. When you don’t care, you take the absolute shortest path so you can make time to do whatever it is that brings you true satisfaction. That’s perfectly okay with me because I do it all the time for things I couldn’t care less about. If someone who wants to be a software engineer can’t be bothered to learn and understand the fundamentals, I’d argue that software engineering isn’t the discipline for them. The more you understand, the larger the surface area of the problem you have to explore further.
I am afraid to inform you that you have built a compiler (2022)
To some extent my entire career has been searching for and destroying said half baked implementations. This saying can be adapted to infra: “half baked, bug ridden kubernetes”, “half baked, bug ridden proxySQL”, “half baked, bug ridden redis”, the list goes on and on. In some ways I feel like my impact has been quite boring, in other ways quite vital. But it’s never made me friends with the kind of developers who look sideways at the idea that other peoples life’s work might be better than their 5 year old weekend project.
Andreas Kling (@awesomekling)
~2 years ago I became convinced that meticulously checking every heap allocation for failure would lead to robust GUI applications that don't fall apart under resource pressure. Fast-forward to today, we have made the SerenityOS codebase significantly uglier and less pleasant to work on as a direct result of pursuing this goal. At the same time, the sought-after robustness remains a hypothetical mirage. It's time to admit I was wrong about this. Not because it's impossible, but because it's costing us way more than it's giving us. On reflection, I believe the main mistake here was adopting the meticulous checks wholesale across the entire operating system. It should have instead been limited to specific, critical services and libraries. Adopting new patterns is easy. Admitting that you adopted the wrong pattern and reversing course is harder. However, I now believe we need to walk backwards a bit to make GUI programming on SerenityOS fun again. 🤓🐞
How a startup loses its spark
I really like the approach of the Netflix of 10 years ago, when it was still small. They hired mature people so they could get rid of processes. Indeed, they actively tried to de-process everything. As a result, things just happened. "Non-event" was often mentioned and expected at Netflix at that time. Case in point, active-active regions just happened in a few months. An easy-to-use deployment tool, Asgard, just happened. The VP of CDN at that time said Netflix would build its own CDN and partner with ISPs. Well, it just happened in merely 6 months with 12 people or so. Netflix said it was going to support streaming and move away from its monolithic Tomcat app, and it just happened. And the engineers there? I can't speak for others but I myself had just one meeting a week -- our team meeting, where we just casually chatted with each other, to the point that the team members stayed close and regularly meet nowadays. I also learned that the managers and directors had tons of meetings to set the right context for the team so engineers could just go wild and be productive. At that time, I thought it was natural, but it turned out it was a really high bar.
teej dv 🔭 (@teej_dv)
"I use Linux as my operating system," I state proudly to the unkempt, bearded man. He swivels around in his desk chair with a devilish gleam in his eyes, ready to mansplain with extreme precision. "Actually," he says with a grin, "Linux is just the kernel. you use GNU+Linux." I don't miss a beat and reply with a smirk, "I use Alpine, a distro that doesn't include the GNU coreutils, or any other GNU code. It's Linux, but it's not GNU+Linux." The smile quickly drops from the man's face. His body begins convulsing and he foams at the mouth as he drop to the floor with a sickly thud. As he writhes around he screams "I-IT WAS COMPILED WITH GCC! THAT MEANS IT'S STILL GNU!" I interrupt his response with "and work is being made on the kernel to make it more compiler-agnostic. Even if you were correct, you won't be for long."
A world where people pay for software
Software has no marginal cost. You can make something that's used by untold millions of people. Even if many people pirate it, enough won't that you can recoup your development cost and then some. Software is easier to produce, sell, and distribute than any physical product. You don't have to worry about warehouses filled with unsold inventory. You don't have to worry about quality control and returns. It still blows my mind how much easier it is to run a business that deals with bytes instead of atoms. The OP talks about software having no copy protection, but Amazon sells DVD players and cordless drills for $30. Imagine for a second how hard it is to compete with that. Competing with Google or Microsoft or some startup is a walk in the park in comparison. In software, the hard part is making an excellent product. And let's face it, that's where most people fail. It has nothing to do with monetization.
What Is Nix?
Dockerfiles which just pull packages from distribution repositories are not reproducible in the same way that Nix expressions are. Rebuilding the Dockerfile will give you different results if the packages in the distribution repositories change. A Nix expression specifies the entire tree of dependencies, and can be built from scratch anywhere at any time and get the same result.
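The distinction can be illustrated with a toy model in Python (illustrative only; real Nix derivation hashing is far more involved): in the Nix model, the output path is a pure function of the complete, pinned input set, so the same inputs yield the same path anywhere, any time. An unpinned `apt-get install curl` in a Dockerfile is effectively a different input set every time the repository moves.

```python
import hashlib

def store_path(name: str, inputs: dict[str, str]) -> str:
    """Toy model of content addressing: the output path is derived
    from a hash over the full, pinned set of inputs."""
    digest = hashlib.sha256(repr(sorted(inputs.items())).encode()).hexdigest()
    return f"/nix/store/{digest[:12]}-{name}"

# Same pinned inputs, same result -- today, tomorrow, on any machine.
a = store_path("myapp", {"curl": "8.5.0", "openssl": "3.2.0"})
b = store_path("myapp", {"curl": "8.5.0", "openssl": "3.2.0"})

# A repository silently bumping curl is a *different* input set --
# exactly the drift a Dockerfile rebuild absorbs without telling you.
c = store_path("myapp", {"curl": "8.6.0", "openssl": "3.2.0"})
```

Rebuilding with the first input set always lands on the same path; the bumped input set lands somewhere else, making the change visible instead of silent.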
Hailey (@[email protected])
I remain unconvinced that Docker layers are a good abstraction. Building an image that uses another image as a base? Sure, that makes sense, but keeping all those layers around and exposing them to the user as a domain concept does not. There's an ongoing runtime perf cost to supporting them, and they're just not that effective when it comes to deduplicating image contents. Consider your `bundle install` layer: yeah, you can reuse it between app versions that don't bump any gems, but the moment you bump even one gem, you're paying hundreds of MB, if not close to a GB, for that bump. I keep thinking about a paper out of the AWS Lambda team where they mention flattening layers into a single ext4 filesystem with a deterministic extent layout. Deduplicating 512 KiB chunks of this image turns out to be a lot more effective for them than layer-based deduplication ever was, plus it enables image lazy loading in a way that layers simply can't achieve.
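The chunk-level deduplication described above can be sketched with fixed-size chunks (a toy sketch: the 512 KiB size matches the paper, but the hashing and helper names are illustrative). Note that fixed-size chunking only deduplicates content that stays at the same offsets, which is exactly why the paper pairs it with a deterministic extent layout:

```python
import hashlib

CHUNK = 512 * 1024  # 512 KiB, the chunk size mentioned in the paper

def chunk_hashes(image: bytes, chunk_size: int = CHUNK) -> list[str]:
    """Split a flattened image into fixed-size chunks and hash each one."""
    return [hashlib.sha256(image[i:i + chunk_size]).hexdigest()
            for i in range(0, len(image), chunk_size)]

def dedup_ratio(images: list[bytes], chunk_size: int = CHUNK) -> float:
    """Unique chunks divided by total chunks: lower means more sharing.

    With layers, bumping one gem re-ships the whole `bundle install`
    layer; with chunking, every unchanged aligned run is still shared.
    """
    all_hashes = [h for img in images for h in chunk_hashes(img, chunk_size)]
    return len(set(all_hashes)) / len(all_hashes)
```

Two app versions differing in one gem would share every chunk outside the changed extents, instead of paying for the entire invalidated layer again.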
Looking for people's experiences moving from front end to backend
I didn’t make the transition (I’ve always been infra backend) but I’ve worked with folks who have. IMO, the sooner you make the jump the better. Infra especially is something you can spend 30 years honing your craft on and there’s still more to learn. Most of the technically challenging problems are in infra. However, I found product engineers, especially frontend ones, really struggle with the switch, because strong CS and OS fundamentals actually matter, especially at larger companies. You need to understand performance, latency, networking, and in many cases lower-level concepts too, like storage. In return, you’ll be rewarded with way more mentally stimulating work, less grinding, and better job security (of the junior engineers, the VAST majority did not go into infra/backend, so the shortage will persist for a fairly long time). So try to make the switch where you are, or be proactive about it. Don’t wait.
Glyph (@[email protected])
One of my litmus tests for a software product these days is that, if it has search, I should be able to search for a nonsense phrase and get an answer that says “no results”. Every website and app is so damn thirsty for clicks now that it will just show an infinite scroll of useless garbage no matter what I’m looking for, which means I can’t get “no results” and then refine my search quickly, I have to page through the “results” to see if they’re plausibly related to my query. Please stop it.
Sourcegraph is no longer Open Source
Never found a startup on the premise that someone else's product will be inadequate forever. The recent rewrite of GitHub search has probably made Sourcegraph irrelevant. If you recall, the original GitHub search used almost the most horrible algorithm possible: it dropped all punctuation and spacing and just searched for identifiers. No patterns allowed, no quoting allowed. One of the only metadata qualifiers was filename:xyz. Now that GitHub has improved its basic search functionality, Sourcegraph might be doomed. I used Sourcegraph at Lyft, which (at the time) had unlimited money to waste on software tools, and installed the open-source version at Databricks, but nobody cared.
:pdx_elk: (@[email protected])
The reason I hate "opsec" as a term is it feels like military larping, and I think it creates a culture and mindset around that. Digital safety is a better term, imo. We care about each other, and we want to keep each other, and ourselves, safe while also living our lives and taking measured risks.
Mitchell Hashimoto (@mitchellh)
I'm convinced everyone who actually likes JS/TS and the whole ecosystem is just suffering from Stockholm syndrome paired with being forced to use it. We're all just stuck with this reality. 😵‍💫 Layers and layers of madness, pure madness.
Why did Nix adopt Flakes?
We use it for devshells, and it’s awesome. New devs install nix and direnv and they instantly have all the right versions of all of our tooling. A first day setup process is now done in minutes instead of a day. Flakes made it possible for us to package up internal and external tools and ensure consistency across a team. I have no experience running it in production, but I imagine if you don’t want to use containers it’d be a pretty good option.
Apollo will close down on June 30th
This makes me indescribably sad. Apart from mourning the loss of a fantastic app by an awesome developer, to me it signals the end of a golden era of small indie client-only apps. Since the APIs for the likes of Reddit, Twitter (RIP Tweetbot) and others were available for free or for a reasonable fee, they spawned a whole cottage industry of developers who made a living selling alternate front ends for these services. These apps invented many of the conventions and designs that eventually percolated to the official clients. Sometimes these innovations even became platform-wide conventions (pull to refresh, anyone?). The writing was on the wall for a while, but now the door is firmly closed on that era - and we will all be poorer for it.
Diesel 2.1
It's not true that diesel is "incompatible" with async; it just does not expose an async interface. An async database interface is usually not required, for several reasons: * Your service likely does not get the amount of traffic required to care about that (the bar is far higher than the traffic most services using diesel ever see). * Even if you do get that amount of traffic, your main bottleneck is not the communication with the database itself but getting a database connection, because there are usually only a few tens of those connections. For that fixed number of connections you can easily use a thread pool with the corresponding number of threads. Additionally, as already mentioned by others: there is `diesel-async` for a complete async connection implementation.
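The thread-pool argument can be sketched (in Python for brevity; `run_query` is a hypothetical stand-in for a blocking diesel-style call, not diesel's actual API): with exactly one worker thread per pooled connection, no thread ever sits blocked waiting for a connection, so an async runtime has little left to save.

```python
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 10  # sized to match the database connection pool

def run_query(q: str) -> str:
    # Hypothetical stand-in for a blocking query that checks out
    # one pooled connection for its whole duration.
    return f"result of {q}"

# One worker thread per connection: the connection pool, not the
# threading model, is the real concurrency ceiling.
with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(run_query, (f"q{i}" for i in range(100))))
```

A hundred queries still complete; they simply queue for the ten workers the same way async tasks would queue for the ten connections.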
Read Every Single Error | Pulumi Blog
Error budgets and the SRE model are haute couture. Some preach that we should never look at errors at this level of granularity and instead use expensive tools that aggregate, categorize, and collect statistics on errors flowing through your system. But all of this automation can actually make things worse when you reach for it prematurely. Aggregating errors is a great way to gloss over important details early on. Collecting fancy metrics does not matter if your users are not happy. Cutting your teeth with the tools and processes that make sense for your level of scale is the only way to build a high-performance culture. Skipping straight to step 100 does not always help.
The Maddest My Code Made Anyone | Blog |
Programmers sometimes have that experience, as do musicians, hardware designers, film directors, novelists, painters, game designers and all other professions that create things regular people interact with casually. Consumers (specifically the sub-genre of critics) often have no real idea of what making a thing means and under which constraints it happens. They often see publishing an imperfect work to the public as an affront to their sophisticated intellect and taste, even (or especially?) if it is free. In German there is the saying "Wer macht, hat recht," which translates to "who makes is right." Complaining is simple; just shut up and do it better. Of course complaining is totally okay if we are, e.g., talking about social or political conditions, or some mandatory process you have to subject yourself to by law. But even there I hate people who just complain and leave it at that without even trying to change a thing.
Mitchell Hashimoto (@mitchellh)
I'm actively trying to work through my Nix God complex. It's been so long that when I see non-Nix users complain about issues getting software to run, I'm truly confused. It's like someone looking at a river lamenting about having to ford it while I'm riding a bicycle on a bridge
Rome v12.1: a Rust-based linter formatter for TypeScript, JSX and JSON
I've got mixed feelings about Rome. There's so much room for improvement given how ridiculously slow today's tools are. But I'm sick and tired of people in this industry dropping their toys because they're tired of working on stuff people actually use, instead of just improving what they currently have. Would it have been impossible to nudge Node.js in the direction of where Deno is today? Would it have been impossible to replace Babel with a Go implementation? I also don't want tools that want to be literally everything. Imagine if Daniel Stenberg was like, "You know what, I'm tired of cURL, let me rebuild literally the same thing in another language and give it a new name and an entirely different set of options."
The Legend of Zelda: Tears of the Kingdom Release
Mechanical sympathy. Rather than designing a game on a PC to take arbitrary advantage of modern tech and then trying to cram it down onto a more limited console platform, Nintendo ask, at design time, what the most interesting things are that they can do that would work perfectly within the constraints of the platform — and then do that. (And Nintendo engineers can have perfect knowledge of "the constraints of the platform", because 1. they built the platform; 2. it's the only platform they ever code for, never porting to anything else; and 3. for late-in-generation titles, they have been developing for it for years already, while also doing platform-SDK support for every third-party development studio.) Oh, and besides that, because they design each platform initially specifically to work well for the types of games they want to make. (This goes all the way back to the Famicom, which has hardware PPU registers that were clearly implemented specifically to make the launch-title port of Donkey Kong extremely easy to code.)
The JavaScript Ecosystem Is Delightfully Weird
Javascript is this generation's C++. It's a massive language and the only way to stay sane on a project is to agree to use a well demarcated subset of it. Nothing wrong with being C++. The reason JS is so massive and weird is because it's the language that everybody uses, or has to use at some point. Upsides and downsides.
TS to JSDoc Conversion
Lordy, I did not expect an internal refactoring PR to end up #1 on Hacker News. Let me provide some context, since a lot of people make a lot of assumptions whenever this stuff comes up! If you're rabidly anti-TypeScript and think that us doing this vindicates your position, I'm about to disappoint you. If you're rabidly pro-TypeScript and think we're a bunch of luddite numpties, I'm about to disappoint you as well. Firstly: we are not abandoning type safety or anything daft like that — we're just moving type declarations from .ts files to .js files with JSDoc annotations. As a user of Svelte, this won't affect your ability to use TypeScript with Svelte at all — functions exported from Svelte will still have all the same benefits of TypeScript that you're used to (typechecking, intellisense, inline documentation etc). Our commitment to TypeScript is stronger than ever. I _would_ say that this will result in no changes that are observable to users of the framework, but that's not quite true — it will result in smaller packages (no need to ship giant sourcemaps etc), and you'll be able to e.g. debug the framework by cmd-clicking on functions you import from `svelte` and its subpackages (instead of taking you to an unhelpful type declaration, it will take you to the actual source, which you'll be able to edit right inside `node_modules` to see changes happen). I expect this to lower the bar to contributing to the framework quite substantially, since you'll no longer need to a) figure out how to link the repo, b) run our build process in watch mode, and c) understand the mapping between source and dist code in order to see changes. So this will ultimately benefit our users and contributors. But it will also benefit _us_, since we're often testing changes to the source code against sandbox projects, and this workflow is drastically nicer than dealing with build steps.
We also eliminate an entire class of annoying papercuts that will be familiar to anyone who has worked with the uneven landscape of TypeScript tooling. The downside is that writing types in JSDoc isn't quite as nice as writing in TypeScript. It's a relatively small price to pay (though opinions on this do differ among the team - this is a regular source of lively debate). We're doing this for practical reasons, not ideological ones — we've been building SvelteKit (as opposed to Svelte) this way for a long time and it's been miraculous for productivity.
How to recover from microservices
Making a large, resilient, performant system is hard. Trying to design one for a novel problem space on day one is impossible. Heed the timeless advice of John Gall: "A complex system that works is invariably found to have evolved from a simple system that works. The inverse proposition also appears to be true: A complex system designed from scratch never works and cannot be made to work." Simplicity demands that you do not start by inviting the beast of complexity – distributed systems – to the first dance. It's possible you'll one day end up with a complex, distributed system that uses microservices with justification, but that will only happen in good conscience if you started with a simple, monolithic design.
Give It the Craigslist Test
Similar to this: I raised a seed round with a deck that was (deliberately) just black Times New Roman text on a white background, plus a few screenshots. The product was also deliberately simple and rough around the edges. I stole an idea from Joel Spolsky and made beta features in the app have graphics that were literally drawn in crayon, to make it clear they were unfinished and to make it easy to test changes. Investors liked the deck. It made it clear that what mattered was the content, not the presentation.
So this guy is now S3. All of S3
Here's how I think about it: * ActivityPub -> AT Protocol * Mastodon -> Bluesky. Right now, federation is not turned on for the Bluesky instance. There are differences in both, however. I'm not going to speak about my impressions of the Mastodon vs Bluesky teams because, frankly, Mastodon never really caught on with me, so they're probably biased. ('they' being my impressions, that is; I just realized that may be ambiguous.) At the protocol level, I haven't implemented ActivityPub in a decade, so I'm a bit behind developments there personally, but the mental model for AT Protocol is best analogized as git, honestly. Users have a PDS, a personal data server, that is identified by a domain, and signed. The location of the PDS does not have to match the domain, enabling you to do what you see here: a user with a domain as their handle, yet all the PDS data is stored on Bluesky's servers. You can make a backup of your data at any time, and move your PDS somewhere else with ease (again, once federation is actually implemented; the path there is straightforward, though). This is analogous to how you have a git repository locally and on GitHub, and you point people at the GitHub copy, but say you decide you hate GitHub and move to GitLab: you just upload your git repo there, and you're good. Same thing, except since identity is on your own domain, you don't even need to do a redirect; everything Just Works. This analogy is also fruitful for understanding current limitations: "delete a post" is currently kind of like "git revert", that is, a logical deletion rather than an actual deletion. Enabling actual deletion ("git rebase") is currently underway. Private messaging does not yet exist. Anyway, if you want to know more, the high-level aspects of the docs are very good. Like, shockingly so. They fall down a bit once you get into the details, but stuff is still changing and the team has 10,000 things to do, so it's understandable.
Mitchell Hashimoto (@mitchellh)
The idea of using verified domains as a username is so obvious in hindsight it's shocking no mainstream app I know of did this before. Proving domain ownership has been used for so many other things, of course; I'm just shocked domain-as-identity was effectively nowhere until now…
Searches for VPN Soar in Utah Amidst Pornhub Blockage
I have a favorite Utah story that I think is appropriate here. Many years ago, as a young and green consultant, I was sent to Salt Lake to help with some ASP.NET/C# app for the Utah Department of Liquor. I was told to look for the tallest building in SLC, and the warehouse did not disappoint; it was huge (well, SLC is really flat and squat, too). They showed me the warehouse full of really fancy robotic stuff (all made in Utah, and they were right to be proud of it). We got to work looking over the code of the app, and along the way they learned that I am originally from the USSR/Russia. "Oh," the devs say, "do you want to see our Russia module?" I am of course intrigued, and discover that during the organization of the 2002 SLC Winter Olympics (Mitt Romney's baby/rise to prominence), there was a huge diplomatic incident. The rules of the State of Utah at the time limited the number of bottles sold to any one person in a given transaction, and the Russian delegation was refusing to come to Utah because they would not be allowed to buy as much liquor (likely vodka) as they wanted. This got escalated to the highest levels of the State Department, and the intrepid Utah legislature found a way! They [very quickly] passed a law that any person with Russian citizenship could buy whatever the heck they wanted, in any amount. Now it was up to the poor saps in the Utah Dept. of Liquor to implement it. But you couldn't just rely on people showing a passport! No, the software team feverishly coded up the "Russian Module" that implemented passport number validation, making sure that if you did show a red passport with a double-headed eagle, its number was valid. There was serious collaboration on the numbering schemes and maybe even some proto API validation against Russian Federation servers. Yeah, a legit module. Used for two weeks, and then decommissioned as the law sunset very rapidly. So, where there is a will, there is a way. And a VPN.
“Why I develop on Windows”
> I know a lot of developers who will opt to do all of their scripting in python these days, even putting #!/bin/python3 at the head of a script so that it runs through the shell. ...which is exactly what you're meant to do. This is not an example of how bad Bash is; it shows that you didn't understand what Bash is. It's expected to use various languages to write code on Linux; nobody wants you to do things in a language that wasn't made for the task. Imagine if you had to use Python as your shell and, any time you opened a terminal, needed to import os and do something like print(os.listdir()) instead of just typing "ls" to get your directory listing. Different tools for different jobs. As for the point they try to make about Bash looking like a foreign language and having weird syntax: yes, that's the thing. It's a very specific tool called a shell, not just any old programming language that you're meant to use for things that are not shell scripts. If Python feels more natural to you, that's probably what you should be using. Don't feel like you need to use Bash for tasks bigger than a few lines of code for no reason other than that you're on a system that has it.
Horrible Code, Clean Performance
It is absolutely true that some hot-path code needs to be mangled into an ugly mess to reach performance requirements. The problem is that I have encountered people who somehow take this as a blanket justification for writing unreadable code, and who operate on a false dichotomy between readable code and performant code. It is important to keep in mind that: 1) Most code, i.e. at least 80% of the code in a codebase, will never be a performance hotspot and does not need specific optimizations (as long as the code does not do stupidly inefficient things, it's probably good enough). 2) Even in a performance-hotspot codepath, you should not write hard-to-read code beyond what is strictly necessary to achieve the required performance. In both cases, the key is to benchmark and profile to find the specific places where the ugly hacks need to be introduced, and to introduce no more of them than is strictly necessary to get the job done.
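A minimal version of "profile before you mangle", sketched in Python (the function names are invented for illustration): let the profiler name the hotspot before any ugliness is introduced anywhere.

```python
import cProfile
import io
import pstats

def hot_loop() -> int:
    # The minority of the code that actually dominates the runtime.
    return sum(i * i for i in range(200_000))

def boring_setup() -> str:
    # The other 80%+: never worth uglifying.
    return "config parsed"

def main() -> int:
    boring_setup()
    return hot_loop()

# Measure first; only functions the report names as dominant
# earn an ugly rewrite.
profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

report = io.StringIO()
pstats.Stats(profiler, stream=report).sort_stats("cumulative").print_stats(10)
print(report.getvalue())
```

The report will show `hot_loop` at the top of cumulative time, which is the evidence you need before trading readability for speed in that one place.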
Marcin Krzyzanowski (@krzyzanowskim)
TIL companies like Facebook and Amazon rely on my OpenSSL distribution for Apple systems. I didn't even have to sign an NDA or solve leetcode to influence millions of developers for free
AI-enhanced development makes me more ambitious with my projects
In Jiro Dreams of Sushi, new staff start by cooking rice perfectly and perfecting roasted seaweed before moving on to preparing egg sushi and then graduating to fish. It's not grunt work. It's how new engineers learn the ropes and gain experience doing low-risk work; it's part of the learning process that only feels like grunt work to a senior dev.
Incompetent but Nice
I used to think the same way as you, and then I started a company and had to pay out of pocket for employees, and the sad truth that I almost hate myself for admitting is that if you have to pick between incompetent but nice, and competent but a jerk, you take the jerk. And yes, multiple people will even quit because you picked the jerk over the nice guy, and I still found it's worth it to take the jerk because of how competency scales. A good/competent software engineer can genuinely do the work of many, many mediocre developers and you're almost always better off with a small number of really solid developers over a large number of nice but mediocre ones. Now of course we can always exaggerate things to an extreme and compare a racist, sexist, jerk who swears nonstop, to someone who is mildly incompetent, and there are certain principles and boundaries that are worth upholding with respect to how people treat each other regardless of their productivity, but in actuality that's not really the difficult choice you end up facing. The really difficult choice you end up facing is someone who is nice and gets along with people but is ultimately too dependent on others to do their job versus someone who works independently and does an excellent job but is very blunt and can be an asshole in regards to the expectations they hold others to. Good software developers often expect their peers to also be at a high standard and will speak in very plain, rude, and blunt language if they feel others are not pulling their weight. And finally, I have observed that in the long run, competent people tend to prefer to work with others whose skill they respect and they feel they can learn from because they're really good at their job, compared to working with someone who is pleasant but is always dependent on others. Being nice is a good short term skill to have, but people get used to those who are nice but they never get used to someone who is incompetent.
Hetzner launches three new dedicated servers
I've been using Hetzner servers for ~15 years with multiple clients and employers, and I've always been disappointed with other providers compared to what Hetzner delivers. OVH with their frequent network-level outages, the 2021 fire and so on. DigitalOcean with their way too frequent and long-lasting maintenance windows. And AWS/GCP/Azure with their obscene pricing, ridiculous SLAs and occasional hours-long outages. One application platform I managed was migrated from DO to Hetzner with huge cost savings, much better uptime and vastly higher performance, running on bare-metal servers rather than cheapo VMs. If you need more than two vCPUs and a few gigs of RAM, I see absolutely no reason to use overpriced AWS/GCP/Azure VMs.
Is Setting Up a VPS Worth It?
We used to manage 500+ servers with Ansible for almost 10 years. It was a nightmare. With so many servers, the Ansible scripts would occasionally fail on some of them (weird bugs, network issues, ...). Since the operations weren't always atomic, we couldn't just re-run the script; it required fixing things manually. Thanks to this, and to emergency patches/fixes on individual servers, we ended up with slightly different setups across the servers. This made debugging and upgrading a nightmare: can this bug happen on all the servers, or just this one because it has a different minor version of package 'x'? We switched to NixOS. It had a steep learning curve for us, with lots of doubts about whether this was the right decision. Converting all the servers to NixOS was a huge two-year task. Having all the servers run the same configuration -- committed to GitHub, fully reproducible and tested in CI, with automatic updates done via a GitHub Action -- was worth all the trouble we had learning NixOS. This entire blog post could be a NixOS config.
Leveraging Rust and the GPU to render user interfaces at 120 FPS
That's not sufficient, though. You want the 0.1% case, where you press undo a couple of times and big edit operations get reversed, to be smooth. You have to hit your frame target when a lot is happening; the individual-keypress case is easy. It's just like a video game: a consistent 60fps is much better than an average frame rate of 120fps that drops to 15fps when the shooting starts and things get blown up. You spend all the time optimizing for the worst case, where all your caches get invalidated at the same time.
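The average-versus-worst-case point is easy to demonstrate with toy frame-time numbers (all figures invented for illustration):

```python
# Frame times in milliseconds: 16.7 ms ~ 60 fps, 8.3 ms ~ 120 fps,
# 66.7 ms ~ 15 fps.
steady = [16.7] * 100          # a consistent 60 fps
spiky = [8.3] * 99 + [66.7]    # "120 fps on average", with one hitch

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

# The spiky trace wins on average frame time...
assert mean(spiky) < mean(steady)
# ...but its worst frame, the one the user actually feels, is
# roughly four times longer.
assert max(spiky) > 3 * max(steady)
```

This is why frame-pacing work targets the worst frame (or a high percentile), not the mean.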
Nix journey part 0: Learning and reference materials
The “What’s missing” section is on point. There are a lot of tutorials for helping someone learn the basics of Nix and build Baby’s First Package. There are not many tutorials about how to package real software you are actually developing. I think this is because it is (relatively) easy to explain what attrsets are or how to type `nix run some-flake` and press enter, and it is hard to explain what `LD_LIBRARY_PATH` is, or how Python environments are built, or why you should have known to apply this `.patch` file at that step, etc. It is, in the words of the authors of Zero to Nix, “fun and exciting to make a splash” by writing a completely new Nix 101 tutorial. That’s why we have half a dozen Nix 101s, very little Nix 201, and Nix 301 is searching GitHub for code snippets that solve a problem adjacent to yours.
The Lone Developer Problem
Yeah, exactly the same in my experience too. In fact, the biggest software atrocities I ever saw were team-based, with people having different opinions and wanting to modify the architecture every six months, and getting away with it because there was no vision. This is where a good team lead or technical lead, or even Fred Brooks' "surgical team", or your example of a single developer plus contributors, comes in: have one person with the vision making the difficult architectural decisions and you'll get some conceptual integrity. What I see a lot is people with little experience who learned things one way and are unable to understand or respect working code, and who want to change everything purely out of personal preference. Maybe this is where the bias against lone-developer code comes from.
The Lone Developer Problem
In my experience it's more often the other way around. Most projects I've seen with actually readable code and a consistent overall structure have been written (mostly) by a single coder, of course usually with contributions from others, but not real 'team work'. Of course there are also messy projects by single authors, and readable code bases by teams. But in the latter case: the more the responsibilities are spread, the messier the outcome (IME at least). I think in the end it comes down to the experience of the people involved. And then of course there's personal taste, one person's readable code is a complete mess to another. In any case, the post reads like the author stumbled over one messy project written by a single author and extrapolates from there to all other projects.
CoffeeScript for TypeScript
Way back in the early 2010s I was very "excited" about CoffeeScript and similar projects. They sounded like they should be great for productivity. When I actually tried to write a project in CoffeeScript, the results were the opposite of what I expected. The code was harder to read, harder to modify, harder to understand, harder to reason about. There's something about removing stuff from syntax that makes programming harder. My hypothesis is this: your brain has to spend extra effort to "decompress" the terse syntax in order to understand it, and this makes reading code unnecessarily difficult. So I fundamentally disagree with the underlying premise of these projects, which seems to be based on PG's concept of "terse is power". My experience suggests the opposite: there's power in being explicit. Type declarations are an example of such a feature: they make explicit something about the code that was implicit. Type declarations add more to the parse tree, and require you to type more, but they actually give you more power. The same can be said about being explicit in language constructs. There of course has to be a balance. If everything is way too explicit (more so than needed), then your brain has to do the opposite of what it does with terse code: spend more effort stripping away the extra fluff to get to the essence of what the code is doing. Being terse is good, up to a point. Same with being explicit. Languages that bias too strongly toward one extreme or the other tend to miss the mark. Instead of aiming for balance, they start to aim for fulfilling some higher telos.
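The trade-off can be made concrete in Python (function and type names invented for illustration): both versions compute the same thing, but the explicit one states its contract in the parse tree, where readers and tooling can see it.

```python
from dataclasses import dataclass

# Terse: correct, but the reader must reverse-engineer what flows through it.
def f(xs):
    return {x.key: x.count for x in xs}

# Explicit: more tokens, but the shape of the data is stated,
# checkable, and available to autocomplete and type checkers.
@dataclass
class WordStat:
    key: str
    count: int

def index_stats(stats: list[WordStat]) -> dict[str, int]:
    return {s.key: s.count for s in stats}
```

The extra declarations are exactly the "decompression" work the terse version pushes onto every future reader.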
The Power of “Yes, If”: Iterating on Our RFC Process – Squarespace / Engineering
Yes we should rewrite it in language Y if everyone on the team is comfortable with the language, it provides nonfunctional benefits, and has potential to drive business value. It’s just about acknowledging the conditions that would make an idea a good one. All ideas are good in a specific context. Instead of assuming everyone’s aware of the current context, state the ideal context for an idea.
Disqualified from a National Web Design Competition for Using GitHub
This is one of those important events in life where you realise that sometimes those who hold seniority over you aren't necessarily as smart as you are. This experience will help you to cultivate a healthy disrespect for authority. We all go through something like this at some point. The best thing to do is to find some sort of constructive way to channel your experience. One path I would suggest is to consider launching your own rival competition, where the judges are volunteers from industry, and the prize is an internship at a company or something like that. This would not only provide your peers with a great opportunity to get quality feedback, but also serve as a really useful experience that would help you in your future career. What have you got to lose? Perhaps you could even get GitHub to sponsor it :)
I love building a startup in Rust. I wouldn't pick it again
If you're thinking about building something in Rust, a good question to ask is, "what would I use if Rust didn't exist?" If your answer is something like Go or Node.js, then Rust is probably not the right choice. If your answer is C or C++ or something similar, then Rust is very likely the right choice. Obviously, there are always exceptions here, but this question helps you work through things a bit more objectively. Rust can be a fantastic language for many purposes, but it has a very high development cost.
Maybe people do care about performance and reliability • Buttondown
Anyway, my point is that it’s complicated, you can’t just blame it on apathetic devs or apathetic consumers. Dysfunction is omnipresent, but it’s also complex.
Maybe people do care about performance and reliability • Buttondown
More and more I feel like software is dysfunctional because everything is dysfunctional, because complex and interlocking societal factors make excellence a pipe dream. I’ve got this idea that the administrative overhead required to do something scales with the square of the size of the task, and doing something efficiently scales even faster than that. The more you scale, the more of those complex factors come into play, interacting with each other to produce emergent phenomena that make everything more difficult. I’d say you could only change the factors that lead to slow software by changing society itself, but I’m not sure that any society would have globally fast software.
Browsers are essential and how operating systems are holding them back (2022) [pdf]
> Nothing any OS vendor or browser vendor has done in the last decade has been a user-focused positive experience. They have become delivery tools for revenue only rather than information access. Succinctly put. I've felt this shift everywhere; it killed the fun and curiosity I felt when I first encountered computers and the web. I can't recommend anything in good faith. When I open a new website or program, I dread to think what it is collecting from me... who is looking at it, where it is stored... forever. Resisting just feels so futile, especially when so much of wider society expects you to use $CHATAPP or even $DATING_APP. I can't imagine a first date where I scold the lady on her use of proprietary software: "Please install this XAMPP-Mastodon-Matrix chat app from the F-Droid store or I won't speak to you again"
Ubuntu 19.10: It’s fast
I tried both Wayland and X11. I feel like I'm going crazy, because every time I mention the words Linux and HiDPI I have this same conversation, and it's been happening for years. My takeaway is always that Linux users have ridiculously low standards for what "works" when it comes to UI. The conversation usually goes something like: "I don't know, Wayland works for me with X setup." "What about the blurriness with fractional scaling?" "Oh, I'm used to it / it only happens with some programs <usually all programs using some incredibly ubiquitous UI toolkit>." Or: "What about when you move a window from one screen to another?" "Oh, I don't do that / oh, it gets a little blurry / oh, just use X11 and <insert xrandr hack to mess with the frame buffer>." Or: "What about the tearing?" "I got used to it / what tearing, I'm not gaming." Or: "What resolution are your screens?" "2K small screen and 4K big screen, I can just run the same scaling on both." I remember one time I had this conversation in person, and we failed at the "move that window to the other monitor" step when it blew up the window to 200% size on the smaller screen. "Why do you expect the window to automatically resize itself and change the font?" "Because the application is unusable when every UI element is twice as big as it should be?" "But I want my application to be unusable [paraphrase], you just think it should resize because that's what OSX does. Stop bringing your OSX mentality to it and it's fine." I think that's when I should have stopped ever hoping for anything better and stopped saying Linux and HiDPI in one sentence... but here we are...
Ubuntu 19.10: It’s fast
Well, there is fractional scaling; it just looked like garbage and had tearing. But there's also handling a mix of low- and high-DPI displays... and any solution that includes the command `xrandr` is wrong, either because of clarity issues, or tearing/performance issues, or graphical bugs in the DE, or a mix of all of the above. I don't get it: why can't we all just copy what OSX did? They got HiDPI so right, with such a flexible solution, that I had literally forgotten it was still a problem until my latest endeavor with Linux.
Will Nix Overtake Docker?
No, it definitely (but unfortunately) will not. Nix does everything Docker does better than Docker does, except, most crucially, integrate with non-Nix tooling. Nix vs. Docker is like Rust vs. JavaScript: you can point out every reason JS is terrible and Rust is better, but the common developer looking to get things done will often gravitate to the tool that gets them the biggest impact with the least upfront investment, even if that tool ends up causing major predictable headaches in the future.
Ask HN: What Next After Ubuntu?
I’ve run NixOS on my last two machines. I like it more than the alternatives, but it isn’t without flaws. At this time, you must be sold on the idea of declarative configuration and willing to learn at least the basics of how Nix the language works. It’s cool that you can Git pull and build an OS, but management of the project can be very slow. Using a pull-request model majorly slows down progress: if you need to revise changes for a new package or package update, you will make the correction and get approval, even as a maintainer, but no one will come back around to merge it. With a patch-based model, maintainers could waste less time by just making those few modifications to the patch and getting updates upstreamed faster, without the back-and-forth. That said, it’s still something I’d recommend for someone with the experience and interest. There’s never been a system I was as confident running patches on and just updating myself when stuff wasn’t working. But also, Guix is out there doing similar stuff, and you have to admire the free-software goals even if they can sometimes be impractical (I just do not like the Scheme-based syntax).
A Linux evening...
This post resonates strongly with me. I love the term "a Linux evening." This was precisely my experience when I used Linux full time: mostly it worked great, but occasionally something wouldn't work (some personal examples: the touchpad doesn't work after an OS update, the wifi card stops working, etc.) and then I'd have to spend a few frustrating hours debugging the issue. All I can think in those moments is: "You don't get this time back. Is this really how I want to spend three precious hours of my life when, on a different platform, I could avoid this hassle completely?" I know it's a tradeoff and I sacrifice a lot to live in my current Macintosh rut, but I just don't have the motivation to be my own DIY tech support whiz after a full day on computers for work.
What if you delete the “Program Files” folder in Windows? [video]
I worked on Windows during the Windows 10 timeframe. Although I left before Windows 11 was conceived, it's painfully obvious that it is just a UI reskin on top of 10. This was preordained by certain organizational choices made during my time there; namely, that the "Shell" team responsible for the start menu, desktop, and other UI tidbits[0] was completely divorced from the rest of Windows development, with its own business priorities and so on. This was the team responsible for Windows 8/8.1, so as you can imagine they were somewhat sidelined during Windows 10 development. It appears they have had their revenge, first and foremost with the promised-never-to-happen rebranding (whereby they jettisoned the Windows 10 brand, which was an embarrassment for that team and that team only). That the result is only a reskinned 10 is the natural outcome, because that is the only part of the product they have the authority or ability to change. The Shell team was trying to push this same new UI during my whole time at Msft, with at least three cancelled attempts that I was aware of even from an IC perspective. By the end, the embarrassment was contagious. [0] Plus Edge, as part of the same vestigial business unit. This explains the central position of advertising in the reskin, because Edge in all of its forms was always meant to drive ad revenue. That is the distinct business priority I mentioned earlier, which sets this organization apart from Windows (NT, win32, etc.) development proper, which was shifted to Azure.
Goodbye, data science
Unfortunately, it seemed pretty clear from the start that this is what data science would turn into. Data science effectively rebranded statistics but removed the requirement of deep statistical knowledge, allowing people to get by with a cursory understanding of how to get some Python library to spit out a result. For research and analysis, data scientists must have a strong understanding of the underlying statistical theory and at least a decent ability to write passable code. With regard to engineering ability, people with both skill sets certainly exist, but it's an awfully high bar. It is similar in my field (quant finance): the people who understand financial theory, valuation, etc. and can design and implement robust production systems are few, and you need to pay them. I don't see data science openings paying anywhere near what you would need to pay a "unicorn", so you can't really expect the folks who fill those roles to perform at that level.
Toot!.app ↙︎↙︎↙︎ (@[email protected])
Notice: I've disabled the issue tracker for now. Having a huge, never-ending list of unsolved small bugs that people keep adding to is unfortunately *massively* demotivating, and it's better for me and the development process to get rid of it for now. I have lots of things to work on at the moment, and I could not respond to 99% of the requests anyway. If you do have critical bugs, please message me instead. And if at all possible, please go easy on requesting features.
Thorsten Ball - How can you not be romantic about programming?
I think there is a lot of romanticism in computing because there is a lot of irrationality. We don't like to admit that. We pretend to be "scientists". Irrationality is as much the engine of progress as reason. Both can be directed toward good or evil ends. Ada Lovelace saw one romantic side of computing as the possibility of machines writing poetry, music and song. Today as much fear, horror and loathing as joy surrounds that idea - but that is also romantic in Mary Shelley's sense. Big-R Romantic features are in both; possibility, drama, tragedy, and rejection of reason according to a counter-enlightenment embrace of emotivism. Ours is the age of impossibility - the hopeless inevitability of the status-quo, the lack of vision for alternative systems, amidst a grinding project to render all human affairs predictable, legible, identifiable, and controlled. Today "computer love" (the romance in computing) derives from the struggle to overcome the ignorant, cowering bureaucracy to which lesser men put machines in pursuit of mediocrity and dull power.
Thorsten Ball - How can you not be romantic about programming?
If you haven’t been here long enough and try to guess how much there is and how many generations are layered on top of each other — you won’t even come close. But stay around. After a while, more and more, you’ll find yourself in moments of awe, stunned by the size and fragility of it all; the mountains of work and talent and creativity and foresight and intelligence and luck that went into it. And you’ll reach for the word “magic” because you won’t know how else to describe it and then you lean back and smile, wondering how someone could not.
Thorsten Ball - How can you not be romantic about programming?
This world of programming is held together by code. Millions and millions of lines of code. Nobody knows how much there is. Some of it is more than 30 years old, some less than a week, and chances are you used parts of both yesterday. There are lines of code floating around on our computers that haven’t been executed by a machine in years and probably won’t be for another lifetime. Others are the golden threads of this world, holding it together at the seams with no more than a dozen people knowing about it. Remove one of these and it all comes crashing down.
Thorsten Ball - How can you not be romantic about programming?
Fantastic amounts of code have been written, from beginning to end, by a single person, typing away night after night after night, for years, until one day the code is fed to a machine and, abracadabra, a brightly coloured amusement park appears on screen. Other code has been written, re-written, torn apart and stitched back together across time zones, country borders and decades, not by a single person, but by hundreds or even thousands of different people.
Is Our Definition Of Burnout All Wrong?
One of the things I've spent time helping other engineering managers understand is that burnout doesn't relate only to exhaustion. Instead, as the Maslach Burnout Inventory points out, it tends to be a three-factored issue. The MBI is a tool widely used in research studies to assess burnout, and it measures three scales:

1) *Exhaustion* measures feelings of being overextended and exhausted by one's work.
2) *Cynicism* measures an indifference or a distant attitude towards your work.
3) *Professional Efficacy* measures satisfaction with past and present accomplishments, and it explicitly assesses an individual's expectations of continued effectiveness at work.

So you can absolutely be experiencing burnout even if you're not experiencing exhaustion, if the other two scales are tipped hard enough. Among the questions that help measure Cynicism and Professional Efficacy:

* I really don't care what happens to some of my colleagues/clients.
* I have the impression that some of my colleagues/clients make me responsible for their problems.
* I have achieved many rewarding objectives in my work.

For more details about the MBI, check out
Moxie Marlinspike (@moxie)
One unique thing about software as an engineering discipline is that it offers abstractions which allow ppl to start contributing in the field w/o having to understand the whole field. To be great, though, imo understanding what’s under the abstractions is really important: 1/

These abstractions are the “black boxes” in your work. Maybe you make HTTP requests all the time, or submit queries to a DB, or read and write to files, or make a syscall, or even type useState, but have never interrogated what’s happening under the abstraction when you do. 2/

These abstractions are great for most things, but are still “leaky” at some point, and understanding their underlying complexity is incredibly valuable for being a great software dev. Here are some books I found valuable for learning about these abstractions early on: 3/

1. TCP/IP Illustrated, Volumes 1, 2, and 3: A lot has changed since this was written (in all volumes, but particularly 2 & 3), but I think it’s still a valuable resource for understanding the basis of what’s happening every time you make an HTTP request. This really pays off. 4/

2. Computer Organization and Design: The Hardware/Software Interface: The hardware/software interface is the ultimate abstraction layer. You’d be surprised how often knowing how cache lines work will help you. 5/

3. Transaction Processing: Concepts and Techniques: A lot has also changed since this was written, but it’s still a great exploration of an area that is perhaps the leakiest abstraction of all, and where understanding the underlying system is of enormous value. 6/

4. Understanding the Linux Kernel / The Design and Implementation of the 4.4BSD Operating System: Great for understanding the complexities and limitations of the filesystem, memory, network interface, etc. Abstractions that affect almost every aspect of your software. 7/

Maybe there are better references now, but studying these early on has helped me immensely, and continues to.
Abstractions are great for getting people contributing in the field quickly, but imo looking through the abstractions is hugely rewarding and will make you super effective.
Sean Hood (@[email protected])
If MySpace taught a generation HTML; is the Mastodon era going to create a generation of sysadmins?
How to build a Semantic Search Engine in Rust | by Sacha Arbonel | Nov, 2022 | Medium
We need more project-oriented tutorials like this if we want to promote Rust; that's one reason why Python and JS are so successful. I like the last part, where you left links to libraries that readers can explore further. Thanks and good job!
Building a semantic search engine in Rust
I remember when "semantic search" was the Next Big Thing (back when all we had were simple keyword searches). I don't know enough about the internals of Google's search engine to know if it could be called a "semantic search engine", but now, it gets close enough to fool me. But I feel like I'm still stuck on keyword searches for a lot of other things, like email (Outlook and mutt), grepping IRC logs, searching for products in small online stores, and sometimes even things like searching for text in a long webpage. I'm sure people have thought about these things: what technical challenges exist in improving search in these areas? Is it just a matter of integrating engines like the one linked here? Or maybe keyword searches are often Good Enough, so no one is really clamoring for something better.
Dave Temkin (@dtemkin)
We built Netflix streaming from scratch without ever spending a night in the office. Any employer that tells you that you need to do otherwise is toxic and you deserve better.
Being Ridiculed for My Open Source Project (2013)
The other day I wrote a fan letter to a developer who has been maintaining a popular and useful library for several years. In his reply, he said that this was the first fan letter he had ever received. I think we need to show Open Source developers a lot more love and a lot less snark...
In Defense of Linked Lists
>When people asking my opinion for Rust, I loved to share them the Linkedin List implementation link: This LinkedList obsession is a bit bizarre to me, and tends to come from older programmers from a time when coding interviews involved writing linked lists and balancing B-trees. To me, though, it also represents the stubbornness of C programmers who refuse to consider things like growable vectors a solved problem. My reaction to the LinkedList coders is not "well, Rust needs to maintain ownership"; it's "why does your benchmark for how easy a language is involve how easy it is to fuck around with raw pointers?" LinkedLists are a tool, but to C programmers they are an invaluable fundamental building block that shows up early in any C programmer's education, due to how simple they are to implement and the wide range of use cases they can serve. But they are technically an unsafe data structure, and if you're willing to let some of that stubbornness go and finally accept some guard rails, you have to be able to see that a data structure like a LinkedList will be harder to implement. It has nothing to do with the language; implementing LinkedLists with any sort of guard rails adds a ton of complexity, either up front (e.g. a borrow checker) or behind the scenes (e.g. a garbage collector). When you accept this fact, it becomes ludicrous to imply that a LinkedList implementation is a good benchmark for the ergonomics of a language like Rust.
Functional programming should be the future of software
I immediately distrust any article that makes sweeping claims about one-paradigm-to-rule-them-all. The reason why multiple paradigms exist is because here in the real world, the competing issues and constraints are never equal, and never the same. A big part of engineering is navigating all of the offerings, examining their trade-offs, and figuring out which ones fit best to the system being built in terms of constraints, requirements, interfaces, maintenance, expansion, manpower, etc. You won't get a very optimal solution by sticking to one paradigm at the expense of others. One of the big reasons why FP languages have so little penetration is because the advocacy usually feels like someone trying to talk you into a religion. (The other major impediment is gatekeeping)
Functional programming should be the future of software
Functional programming won't succeed until the tooling problem is fixed. 'Tsoding' said it best: "developers are great at making tooling, but suck at making programming languages. Mathematicians are great at making programming languages, but suck at making tooling." This is why Rust is such a success story, in my opinion: it is heavily influenced by FP, but developers are responsible for the tooling. Anecdotally, tooling is why I gave up on OCaml (given Rust's ML roots, I was seriously interested) and Haskell. I seriously couldn't figure out the idiomatic OCaml workflow/developer inner loop after more than a day of struggling. As for Haskell, I gave up maybe 20 minutes into waiting for dependencies to come down for a Dhall contribution I wanted to make. Institutionally, it's a hard sell if you need to train the whole team just to compile a project, vs. `make` or `cargo build` or `npm install && npm build`.
Show HN: A tool to help you remember shit you are interested in
This seems really well built. It's fast and responsive. It looks nice. But I just don't understand what I would use it for. It seems like the idea is to build a database of people, movies, Wikipedia articles and such, and then be able to find them via search/links. But I'm not at all sold on why I need this in my life. Is there a way to make the value clearer? Am I just not in the target audience? Who is going to see this and say "TAKE MY MONEY", and why? I'm thinking of products that were instant sign-ups for me... Spotify: for one price, listen to all the music on Earth whenever you want. TAKE MY MONEY! Gmail: fast email with 2 GB of storage. This was such an instant sign-up that they had to make an invite system to slow down access. Maybe I could add something like Lichess: chess training and games, with modern UX, offered open source as a public good. I mean, if you're at all interested in chess, that's an instant sign-up, right? What I'm trying to say is that presenting clear value isn't limited to big players like Spotify and Gmail; smaller companies can do it too, if the value presented is really clear. What should someone see that makes them instantly recognize they need this in their life? Because that's what I'm totally missing here.
Jony Ive on Life After Apple
“Language is so powerful,” says Ive, who often begins a new project with conversation or writing, not sketches. “If [I say] I’m going to design a chair, think how dangerous that is. Because you’ve just said ‘chair,’ you’ve said no to a thousand ideas.” The older I get the more I believe this to be the most difficult aspect of making decisions. Saying 'no' to thousands of potentialities seems scary because it's a memento mori of the finiteness of individual lives.
Do you use Nix or equivalent to manage projects and/or systems?
We use Nix very conservatively. We only use it for managing local developer environments, i.e. the build toolchain and other CLI tools (Ansible, Terraform, etc.). That has worked out amazingly for us. I’m in general a lot more skeptical about Nix for production. You clearly don’t get the kind of support you would from, for example, Ubuntu’s packages. There’s no “LTS” as far as I know for Nix, merely the stable NixOS release. Though, that being said, nixpkgs tends to be way ahead of other package managers’ versions of software. We’ve started messing around with using nixpkgs’ Docker tools for some web projects. That would be the first time we’d be using Nix in our production environment. In general, it’s really easy to go overboard with Nix and start using it really inappropriately. But if you use some discipline, it can be an *amazing* tool. It’s completely solved our Python problems related to installing Ansible. That’s invaluable.
Got promoted to Director after boss quit. Any advice?
Learn how to back off and trust others. You're not an IC anymore. Focus on enabling work getting done. You're the cat herder now. Make sure you have good cats, that they get fed enough, and that they're in the right barn or field. Don't try to catch mice, or tell your cats how to catch mice. Focus on overall velocity, removing roadblocks, and setting direction. And try not to get stressed out by the fact that you aren't directly contributing. Lots of people going through this transition have a hard time. In the past, if something wasn't going well, you could just take direct action by working harder, learning a new approach, or rethinking the problem; your hard work and thinking are what led you to success. Now your work doesn't directly lead to success anymore, and when things go bad, many new managers try to DO something about it by making changes themselves, micromanaging, demanding overtime, etc. They feel they can't control the situation, so they do SOMETHING to feel in control. But it's counterproductive and puts you in a spiral of ever-escalating issues. You need to focus on helping the tree grow, and helping it grow in the right direction. So, yeah. Good luck. :)
Laws barring noncompete clauses spreading
Over the years so many different jurisdictions around the US and the world have stated their desire to be the "next Silicon Valley" and have poured an immense amount of money and effort to make it so, whether in the form of incentives for businesses, tax breaks, education, job training, or even just straight paying smart people to move there. Every such scheme has generally failed because they refused to emulate the one key piece of California law that is necessary for a startup ecosystem to exist – banning noncompetes. "But I'll spend money to train my employees and they'll just take those skills to go work for a competitor or start their own business!" Yes, that's a feature of the system, not a bug.
Ask HN: What was being a software developer like about 30 years ago?
It was great. Full stop. A sense of mastery and adventure permeated everything I did. Over the decades those feelings slowly faded, never to be recaptured. Now I understand nothing about anything. :-) Starting in 1986 I worked on bespoke firmware (burned into EPROMs) that ran on bespoke embedded hardware. Some systems were written entirely in assembly language (8085, 6805) and other systems were written mostly in C (68HC11, 68000). Self taught and written entirely by one person (me). In retrospect, perhaps the best part about it was that even the biggest systems were sufficiently unsophisticated that a single person could wrap their head around all of the hardware and all of the software. Bugs in production were exceedingly rare. The relative simplicity of the systems was a huge factor, to be sure, but knowing that a bug meant burning new EPROMs made you think twice or thrice before you declared something "done". Schedules were no less stringent than today; there was constant pressure to finish a product that would make or break the company's revenue for the next quarter, or so the company president/CEO repeatedly told me. :-) Nonetheless, this dinosaur would gladly trade today's "modern" development practices for those good ol' days(tm).
Ask HN: What was being a software developer like about 30 years ago?
Fun! Precarious. Very slow. Like a game of Jenga, things made you nervous. Waiting for tapes to rewind, or slowly feeding in a stack of floppies, knowing that one bad sector would ruin the whole enterprise. But that was also excitement. Running a C program that had taken all night to compile was a heart-in-your-mouth moment. Hands on. They say beware a computer scientist with a screwdriver. Yes, we had screwdrivers back then. Or rather, developing software also meant a lot of changing cables and moving heavy boxes. Interpersonal. Contrary to the stereotype of the "isolated geek" rampant at the time, developing software required extraordinary communication habits, seeking other experts, careful reading, formulating concise questions, and patiently awaiting mailing list replies. Caring. Maybe this is what I miss the most. 30 years ago we really, truly believed in what we were doing... making the world a better place.
Incidents caused by unappreciated OSS maintainers or underfunded OSS projects
Our model of society is not compatible with open source. There needs to be a massive shift toward appreciating the work of volunteers and contributors; until then, these problems will amplify. And I'm not talking about GitHub Sponsors, since it's opt-in and more of a popularity contest than anything else. I'm talking about the dude who randomly appears to send a PR that fixes something important, the dude who decides overnight to open source his work but is agoraphobic, the dude who helps write documentation, the dude who helps triage issues: countless hidden people who are never rewarded.
Don't Be A Free User (Pinboard Blog)
I love free software and could not have built my site without it. But free web services are not like free software. If your free software project suddenly gets popular, you gain resources: testers, developers and people willing to pitch in. If your free website takes off, you lose resources. Your time is spent firefighting and your money all goes to the nice people at Linode.
Don't Be A Free User (Pinboard Blog)
I love free software and could not have built my site without it. But free web services are not like free software. If your free software project suddenly gets popular, you gain resources: testers, developers and people willing to pitch in. If your free website takes off, you lose resources. Your time is spent firefighting and your money all goes to the nice people at Linode.
Don't Be A Free User (Pinboard Blog)
If every additional user is putting money in the developers' pockets, then you're less likely to see the site disappear overnight. If every new user is costing the developers money, and the site is really taking off, then get ready to read about those synergies.
What “work” looks like
Software development is creative work. Creative insight can come anywhere, any time. Better ideas can make difficult things easy, and make the impossible possible. So the most important thing on a software team (or really any team creating high-technology products or services) is an environment where team members feel safe to be themselves: psychologically safe, where they can try out new things, make mistakes, fail, and not be punished or belittled; where they can share their ideas and have them improved by others, not criticized. It's an environment where team members take care of themselves so they can be creative: sleep enough, exercise enough, be with friends and family enough, play enough. You have to be at your keyboard or lab bench or whatever enough to make things. But if you are there too much, your creativity plummets. This is what I try to get across to my teams.
Why we're leaving the cloud
Of course it's expensive to rent your computers from someone else. But it's never presented in those terms. The cloud is sold as computing on demand, which sounds futuristic and cool, and very much not like something as mundane as "renting computers", even though that's mostly what it is. But this isn't just about cost. It's also about what kind of internet we want to operate in the future. It strikes me as downright tragic that this decentralized wonder of the world is now largely operating on computers owned by a handful of mega corporations. If one of the primary AWS regions goes down, seemingly half the internet is offline along with it.
Write Better Error Messages
Watched the new Quantum Leap yesterday (it's not great) and there was this really cringeworthy moment when something goes wrong with their awesome supercomputer and the screen flashes a giant "INTERNAL SYNTAX ERROR". Apparently, somebody didn't run their linter before sending people through time. Too bad.
Write Better Error Messages
Probably just me, but I am less concerned with how good my error messages are, and more concerned with trying very very hard to make the errors happen closer to the cause of the problem, rather than further away. "Fail early, fail hard": if I can make the error message happen near the beginning of a process, I can get away with making it a hard error. Hard errors in the middle of a multi-hour operation tend to annoy people.
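The "fail early, fail hard" idea can be sketched in a few lines. This is a hypothetical batch job (the function and field names are illustrative, not from the comment): every precondition is checked up front, so a hard error fires seconds into the run instead of hours in.

```python
import os

def export_reports(records, output_dir):
    """Hypothetical multi-hour export job. All validation happens before
    any expensive work starts, so failures surface near their cause."""
    if not records:
        raise ValueError("no records to export")
    if not os.path.isdir(output_dir):
        raise FileNotFoundError(f"output directory missing: {output_dir}")
    bad = sum(1 for r in records if "id" not in r)
    if bad:
        raise ValueError(f"{bad} record(s) are missing an 'id' field")

    # ...only now begin the long-running work...
    return [f"report-{r['id']}" for r in records]
```

A malformed record is rejected immediately at the top, rather than blowing up (or worse, being silently skipped) midway through the run.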
Product vs. Engineering
> What I have noticed even in top engineering companies is an interesting dichotomy. Product determines the "innovation", engineering determines how to build it. I wonder if that's because, if you let engineers do both, you end up with a mess and accomplish nothing. The top companies have product and engineering working closely together. This allows product people to go deep on optimizing their product skills and engineers to go deep on optimizing their development skills, both of which are most effective when performed in conjunction with each other as part of a strong team. There are great product-minded engineers and great engineering-minded product managers out there, but it's much easier to find people who are simply good at their domain and know how to work closely with people in other domains to get things done. Some companies try to cargo-cult this by drawing a dividing line: Product defines the "what" and engineers define the "how". Product works in isolation, hands things off to engineers, then engineers churn through tickets in isolation. This is not good at all.
Syntax Design
I find the section on "syntactic salt" interesting: > The opposite of syntactic sugar, a feature designed to make it harder to write bad code. Specifically, syntactic salt is a hoop the programmer must jump through just to prove that he knows what’s going on, rather than to express a program action. This is perhaps an uncharitable way to describe it, but the concept does ring a bell. Rust's unsafe {}, C++'s reinterpret_cast<>(), etc - all slightly verbose. More important than jumping through hoops, the verbosity helps when reading code to know that something out of the ordinary is going on.
Protein interface – how to change approach to building software?
Interviewing is outside my skill set, so take this with a grain of salt, as it's just the sort of question I'd like to answer: "We have an application that needs to run inside a vehicle, which means the power will be killed at regular, but unpredictable, intervals. How would you design this to ensure data integrity?" It's weird enough that few people will have solved it before, but it can be solved at every layer between circuit and application, so you can actively brainstorm with the candidate to draw out some of their solutions into more detail. And if they start with, "well, I'd build a react app," you can go straight into the trash can with their resume, because you can have that whole discussion without deciding on so much as a language, much less a framework, so you can see who jumps too hastily to wrong assumptions.
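One classic application-layer answer to the power-cut question is the write-temp, fsync, rename pattern, so the file on disk is always either the old version or the new one, never a torn mixture. A minimal sketch (the function name is mine, not from the comment):

```python
import os
import tempfile

def atomic_write(path, data):
    """Write `data` (bytes) to `path` so that a power cut leaves either
    the old file or the complete new file, never a partial write."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Temp file must live in the same directory so the rename is atomic
    # (rename across filesystems is not).
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force data to stable storage first
        os.replace(tmp, path)     # atomic rename on POSIX and Windows
    except BaseException:
        os.unlink(tmp)
        raise
```

For full durability on POSIX you would also fsync the containing directory after the rename; and of course this is only the filesystem layer of the problem, which is the point of the interview question.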
Digital Gardening
We run our company with a forest in mind. Client projects are gardens within the forest. We have a greenhouse for seedlings (innovation projects), we have a fire in the center, where we regularly meet and hang out. We have an outlook point, where we look out to sense what’s on the horizon… obviously, we don’t want our gardens full of weeds or trash lying around.
Moving from React to Htmx
I think this take does yourself a disservice: htmx is an extension of HTML and, in general, of the hypermedia model, and it is this model that should be contrasted with JSON data APIs. I think that you should learn JavaScript, and I certainly think you should learn HTML(!!!) and CSS. But I also think you should consider how much more can be achieved in a pure hypermedia model with the (relatively small) extensions to HTML that htmx gives you. I have a collection of essays on this topic here: Including an essay on how I feel scripting should be done in a Hypermedia-Driven Application: There is more to all this than simply avoiding JavaScript.
A Real World React – Htmx Port
> as you reach a more “app like” experience with multiple layers of state control on the front end you need to reach for a front end JS framework I think that if you fully embrace HTMX's model, you can go far further than anticipated without a JS framework. Do you really need to be managing state on the client? Is it really faster to communicate via JSON, protobuf, or whatever, rather than returning just small bits of replacement HTML, inserted so seamlessly that it's a better UI than many client-side components? Why have HTML elements react to changes in data or state, rather than just insert new HTML elements already updated with the new state? I think you're describing a "let's do React in HTMX" mindset, rather than a "let's go all in on the HTMX model" one. And I might be giving HTMX too much credit, but it has totally changed how I go about building web applications.
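The "return replacement HTML instead of JSON" idea looks roughly like this on the server side. A hypothetical endpoint (the route, IDs, and field names are mine) renders the already-updated fragment, and htmx swaps it straight into the DOM:

```python
import html

def render_todo_row(item_id, title, done):
    """Render one <tr> fragment for a hypothetical todo list. The client
    declares the behavior in attributes (hx-post, hx-target, hx-swap)
    and swaps in whatever HTML the server returns; there is no client-side
    state store or JSON-to-DOM translation layer."""
    state = "done" if done else "pending"
    return (
        f'<tr id="todo-{item_id}" class="{state}">'
        f'<td>{html.escape(title)}</td>'
        f'<td><button hx-post="/todo/{item_id}/toggle" '
        f'hx-target="#todo-{item_id}" hx-swap="outerHTML">{state}</button></td>'
        f'</tr>'
    )
```

The toggle handler just flips the flag and returns `render_todo_row(...)` again; the browser never sees raw state, only its rendered form.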
Using a Framework will harm the maintenance of your software
From my own experience, writing something without a framework often seems very elegant to yourself, but the moment you try to onboard other people to your framework-less code it becomes a nightmare. Turns out most folks don't want to get familiar with e.g. the intricacies of browser technologies, HTTP request processing or other complex things that you've reimplemented in your code; they just want to deliver working software using frameworks and conventions they know. You can think of frameworks like conventions: if enough people know them, it makes life so much easier for everyone, even though the convention might not always be the best fit. As an analogy, imagine each municipality invented their own traffic signs from first principles, because it makes maintenance easier for them, and you were tasked to drive through such a city at speed, learning the conventions as you go. An absolute nightmare. I think that's how most programmers feel about code that brings its own framework-less abstractions and technologies. So while I would've been able to write my own frameworks, I've become humble and reasonable enough to just default to something that's popular and well-known, because it will make life easier for my colleagues or employees.
Using a Framework will harm the maintenance of your software
Rails will harm the maintenance of your software* *Is really the accurate summation of the article. And yes, this is well known. Every article about a company upgrading Rails is "it took us several years and only three people died." And we know better than to use MVC nowadays. No offense to Rubyists, but in the Ruby ecosystem, I have seen a disturbing lack of absorbing information from other programming ecosystems. This article smells like that to me. If you've only used Ruby and Rails, you might not realize some of the dangers and inherent limitations of the design unless you've worked in other ecosystems.
The 4th Year of SerenityOS
You are already powerful enough! Some of our most active developers today didn't even know C++ when they started. If you're interested, look for something small that annoys you, and then see if you can't figure out enough of the code to fix it. :^)
The 4th Year of SerenityOS
I follow Andreas on twitter and he is a big inspiration for me when I go look at more challenging problems now. I have an addictive personality; so far cigarettes are the only thing that got me, and only for 4 years, but I largely stay away from anything else now because I saw how it affected members of my family and how easily someone like myself could go the same way. Because of that I very much appreciate channeling yourself into something as ambitious as an operating system instead. It's actually the same way I've built any of my best work and how I've gotten even this far in my career. The line I say is: programming keeps me sane.
The 4th Year of SerenityOS
I think it’s that most people are doomers and/or are defeated by doomerism. Most people think it’s impossible to build an OS or a web browser (and are told this when they ask for help building one). In reality engineering is straightforward; you just need someone to show you how to properly write data structures and algorithms and to break problems down. Andreas showed these kids this, reinvigorating the web-based hacker culture I grew up in. Anything is possible, and even if a problem ends up being more than you can handle, at least you learned a ton along the way. Nowadays searching for how to code leads you to a ton of tutorials about gluing modules together. I feel sorry for young people with that thirst who won’t be satisfied thanks to the commoditization of learning to code.
Take a Break You Idiot
It's funny isn't it. Recently in a job with "unlimited" vacation, because of a dubious message from one of my two bosses who was a bit of a dick, I was too scared to take a real vacation. Until Christmas. Then I decided I was going to take some. It had been a rough year, isolating from Covid, not enough money, and living in shitty circumstances. It was the first PTO I'd had in over a decade, as working as a freelancer/consultant often means no PTO, so I decided to savor it, come what may. I took just under 3 weeks, like almost everyone else: there was a shared vacation calendar where I could see everyone else's Christmas break. My reward when I got back? Low performance metrics "in December" were cited when laying me off. It wasn't just about December, but December was the month they decided to measure and "give me a chance". They didn't take into account the break, and the only way their "assessment" could be satisfied would have been to work through Christmas. I then worked my ass off to ship a technically difficult, world-record-beating feature during my notice month, which they told me if I delivered it would surely be impressive, and turn it around. I did ship it, but not until the very end of the notice period, which was too late. If they had cared, they would have seen it was on track. If they had kept me on, let me relax, and worked with me rather than their choice of how to assess work, they would now have a world-beating product. It's their choice of course, and I now don't think they were serious about trying to build a real product. I think it's a bit of a smoke-and-mirrors scheme to keep grant money flowing in. After all, in about 4 years nobody has ever run the product on the real data it is designed for, except me, and I had to pay for servers from my own pocket to run those tests. Even now, I believe I'm the only person ever to run it, or even be able to run it. 
It's been interesting to watch how the product has stayed in the doldrums since I left, and how the folks working on it are now starting to implement things for which I have had working, high-performance functionality for months in my private fork since leaving. (It's open source.) It will be particularly interesting to see if their version is ever able to run on the real world data it was created for, or if their perpetual optimism will be forever misplaced. Ironically, I'd say the company had the nicest, most helpful HR, legal and accounting teams I've ever seen at any company. There was a lot to like, and I'm sad to have had to leave. But I don't miss feeling constantly afraid there. And, as a person who really enjoys creating things, I don't miss watching another team member shipping garbage commits that usually didn't work, and doing fine, while I was the only person on the project providing real functionality but not scoring well on the right metrics, because I spent too much time solving the product's blocker problems. To score well I'd have to ship garbage too. Oh well.
Take a Break You Idiot
There was a time a dozen years ago when I was working alone on my (over-elaborate, uncontrollably sprawling) graphics software product. One time I wrote a multi-thousand-line refactoring of existing code into a new class and felt very happy about getting it done. The next day I discovered that I had already done the exact same work the previous week, just as a slightly differently named class. That wasn’t an isolated memory loss experience in those days. I ordered lunch, sat down, then five minutes later just stood up and left, assuming I’d already eaten. An hour later I realized what happened. Long-term productivity is impossible without proper rest, including regular vacations where you’re truly out of work mode preferably for a week at the minimum.
Helix: A Neovim inspired editor, written in Rust
What's preventing it is their existing codebases, mostly. IIRC, one of the first things Neovim did was throw out literally tens of thousands of lines of legacy code from Vim. Meanwhile, Helix can just add the `lsp-types` crate as a dependency and they're already a quarter of the way to making LSP work. The difference between adding something to a 30-year-old C project and a new Rust project is so massive that it can actually be the difference between "fairly straightforward" and "essentially impossible".
André Staltz - Software below the poverty line
Marx wrote a famous piece called "Fragment on Machines". It actually predates Capital volume 1. He talks about the mix of knowledge and labour to produce machines that are capable of transforming nature (doing labour). From here, Marx explores a world where labour can be produced entirely (or almost entirely) by machines; for him, machines are capable of undoing capitalism. The so-called post-scarcity society. I think the key part here is that software is actually capable of replacing large portions of labour; think about how much bookkeeping work is saved through Excel. But what happens when capital owners own all the machines, what happens to people? This is a fundamental problem that Marx explores throughout his whole work. I think OSS is actually what machines should look like for Marx, available for everyone at the cost of production and upkeep of the machines, which in our case is the cost of copying and storage of the bits that compose the software. But Marx throughout his work also explores deeply the relationship between labour and capital, and obviously producing machines requires labour! I know you're probably joking, but I think we can learn a lot about OSS from Marx. I mean, a big part of Stallman's philosophy behind the free software movement is inspired by marxist ideas.
André Staltz - Software below the poverty line
This is why I think that open source / free software is the greatest trick that late stage capitalism ever pulled. It exploits the generosity and naivety of devs who have committed to a particular ideology that, while well motivated at the start, has nevertheless turned out to be extremely easily exploited by corporations who now essentially get an enormous amount of labour for free. What's more, there is intense social pressure from large segments of the dev community to both contribute to open source and to publicly endorse and promote "open source values". Even the author refuses to acknowledge that the problem with open source is open source licensing. Dropping the non-discrimination clause in open source licenses and demanding payment for labour from large companies would be enough to solve all these issues. But that is anathema to the ideologues who dominate the conversation.
André Staltz - Software below the poverty line
There are two alternatives possible. One is that we collectively decide to stop shaming software developers for having the audacity to want some level of ownership over the product of their work. We don't shame authors for wanting copyright on their books; we don't shame musicians, artists, designers, or aerospace engineers for asking for some copyright protection for their creative babies. Yet when a software developer does it: fuck that guy! He's trying to take control of what's running on your computer (or the internet server that you're sending requests to ...). Nobody throws a hissy fit when J.K. Rowling has (gasp!) copyright over the Harry Potter books that are sitting on your Kindle. It's your Kindle! Shouldn't you have the right to copy off the words in the books and re-sell it to other people for much less money, undercutting Rowling? How dare she try to get some legal protection that says you can't do that! It's fucking ridiculous when we talk about authors that way, but somehow it's OK to talk about software developers that way. Do you think "open source authors" would make a living from their books? It's already difficult enough for new authors to get any notice; how much worse would it be if prominent authors (who were already rich) came out and founded the "Free Books Foundation" that comes out and says every young author who's trying to sell her books for money is being a greedy asshole and we should fight against them and every author needs to spend a significant portion of their free time contributing to "open books" or they're assholes? Of-fucking-course it's not sustainable. That's because it's always been OK to want copyright on your creative work. I'll be the first to say patents are a huge problem right now and we might be better off without any patent law, but copyright is not the same. 
Yes, the terms are way too long, and the family of Marvin Gaye proves that "copyright trolls" are possible, but the fundamental concept of copyright is actually critical if we want creative people to ever get a paycheck. The other alternative is Universal Basic Income, so that making "below the minimum wage" doesn't mean "fuck you, you get to die sick and homeless in a tent on the side of the highway". Then people could actually just contribute to OSS because they want to.
André Staltz - Software below the poverty line
The struggle of open source sustainability is the millennium-old struggle of humanity to free itself from slavery, colonization, and exploitation. This is not the first time hard-working honest people are giving their all for unfair compensation. This is therefore not a new problem, and it does not require complicated new solutions. It is simply a version of injustice. To fix it is not a matter of receiving compassion and moral behavior from companies, for companies are fundamentally built to do something other than that. Companies simply follow some basic financial rules of society while trying to optimize for profit and/or domination. Open source infrastructure is a commons, much like our ecological systems. Because our societies did not have rules to prevent the ecological systems from being exploited, companies have engaged in industrialized resource extraction. Over many decades this is depleting the environment, and now we are facing a climate crisis, proven through scientific consensus to be a substantial threat to humanity and all life on the planet. Open source misappropriation is simply a small version of that, with less dramatic consequences.
SQLite: QEMU All over Again?
SQLite only works as a concept because it is not networked. Nobody truly understands the vast and unsolvable problem that is random shit going wrong within the communication of an application over vast distances. SQLite works great because it rejects the dogma that having one piece of software deal with all of that shit is in any way a good idea. Back your dinky microservice with SQLite, run multiple copies, have them talk to each other and fumble about trying to get consensus over the data they contain in a very loose way. That will be much, much less difficult than managing a distributed decentralized database (I speak from experience). It's good enough for 90% of cases. Remember P2P applications? That was basically the same thing. A single process running on thousands of computers with their own independent storage, shuffling around information about other nodes and advertising searches until two nodes "found each other" and shared their data (aw, love at first byte!). It's not great, but it works, and is a lot less trouble than a real distributed database.
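The "back your service with a local SQLite file" setup from the comment is genuinely a few lines. A minimal sketch (schema and function names are mine, for illustration): each instance owns its own database file, with no network hop and no shared database server; any cross-node agreement happens loosely at the application layer.

```python
import sqlite3

def open_store(path):
    """Open (or create) this instance's private key/value store.
    Pass a filesystem path per node, or ":memory:" for tests."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)")
    return db

def put(db, key, value):
    # Insert-or-overwrite; each statement commits, so the file stays
    # consistent even if the process dies between calls.
    db.execute("INSERT OR REPLACE INTO kv (k, v) VALUES (?, ?)", (key, value))
    db.commit()

def get(db, key):
    row = db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
    return row[0] if row else None
```

Gossiping these values between copies of the service is then an application concern, which is exactly the trade the comment is arguing for.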
SQLite: QEMU All over Again?
I feel like a lot of fantastic software is made by a small number of people whose explicit culture is a mix of abnormally strong opinionatedness plus the dedication to execute on that by developing the tools and flow that feel just right. Much like a lot of other "eccentric" artists in other realms, that eccentricity is, at least in part, a bravery of knowing what one wants and making that a reality, usually with compromises that others might not be comfortable making (efficiency, time, social interaction from a larger group, etc).
The 'attention economy' corrupts science
And yet, in my career, I've noticed the rewards are increasing for being the person who is willing to focus on one thing for a long time (for several weeks, or months). For instance, I've never been the kind of software developer who could write obviously clever code. But I have written code that was admired and praised, and sometimes seen as the salvation of the company I was working for -- not because I'm especially skilled as a software developer, but only because I was willing to think about specific problems, deeply, for longer than anyone else at the company. In 2012/2013, to the extent that I helped re-invent the tech stack at, it was because I was willing to spend weeks thinking about exactly why we'd reached the limits of what we could do with various cache strategies, and then what would come next. I then introduced the idea of "an architecture of small apps", which was the phrase I used because the phrase "microservices" didn't really become widespread until Martin Fowler wrote his essay about it at the very end of 2013. Likewise, I now work as the principal software architect at, and my main contribution has been my willingness to spend weeks thinking about the flaws in the old database schema, and what we needed to do to streamline our data model and overcome the tech debt that built up over the 7 years before I was hired. We live in a world where there are large economic rewards for the kinds of people who are willing to think about one thing, deeply, for weeks and weeks or even months and months, until finally understanding a problem better than anyone else. I have to hope some young people eventually escape the attention-sucking technologies that try to sabotage their concentration, and eventually discover the satisfactions of thinking about complex problems, continuously, for months and months and months.
Ask HN: In what ways is programming more difficult today than it was years ago?
> Spending months to get the basics up and running in their React frontends just to be able to think independently of hand-holding tutorials for the most basic operations. Frontend devs who were present before the advent of the major web frameworks, and worked with the simplicity of a js script + DOM (or perhaps jQuery as a somewhat transparent wrapper), benefited from seeing the evolution of these frameworks, understanding the motivations behind the problems they solve, and knowing what DOM operations must be going on behind the curtain of these libraries. Approaching it today not from the ground up but from the high level down is imo responsible for a lot of jr web devs having a surprising lack of knowledge of basic website features. Some, probably a minority, of student web devs may get conditioned to reach for libraries for every problem they encounter, until the kludge of libraries starts to cause bugs in and of itself, or they reach a problem that no library is solving for them. I feel like this is a particularly bad outcome for web devs because the web, I feel, is uniquely accessible for aspiring developers. You can achieve a ton just piggybacking off the browser, the DOM and its API, the developer tools in the browser, etc. But not if you are convinced or otherwise forced to only approach it from the other side -- running before you crawl, or trying to set up a webpack config before you even understand script loading, etc.
Ask HN: In what ways is programming more difficult today than it was years ago?
Programming today is easier in many ways: Information is readily available for free (I recall saving up a lot of money as a kid to buy specific programming books at the book store after exhausting my library’s offerings). Compilers and tooling are free. Salaries are much higher and developer is a respected career that isn’t just “IT”. Online programming communities are more abundant and welcoming than the impenetrable IRC cliques of years past. We have a lot that makes programming today more comfortable and accessible than it was in the past. However, everything feels vastly more complicated. My friends and I would put together little toy websites with PHP or Rails in a span of weeks and everyone thought they were awesome. Now I see young people spending months to get the basics up and running in their React front ends just to be able to think independently of hand-holding tutorials for the most basic operations. Even business software felt simpler. The scope was smaller and you didn’t have to set up complicated cloud services architectures to accomplish everything. I won’t say the old ways were better, because the modern tools do have their place. However, it’s easy to look back with rose-tinted glasses on the vastly simpler business requirements and lower expectations that allowed us to get away with really simple things. I enjoy working with teams on complex projects using modern tools and frameworks, but I admit I do have a lot of nostalgia for the days past when a single programmer could understand and handle entire systems by themselves because the scope and requirements were just so much simpler.
DALL·E Now Available Without Waitlist
It's really amazing how DALL-E missed the boat. When it was launched, it was a truly amazing service that had no equal. In the months since then, both Midjourney and Stable Diffusion emerged and got to the point where they produce images of equal or better quality than DALL-E. And you didn't have to wait in a long waitlist in order to gain access! They effectively gave these tools free exposure by not allowing people to use DALL-E. Furthermore, the pricing model is much worse for DALL-E than any of its competitors. DALL-E makes you think about how much money you're losing continuously - a truly awful choice for a creative tool! Imagine if you had to pay Photoshop a cent every time you made a brushstroke. Midjourney has a much better scheme (and unlimited at only 30/month!), and, of course, Stable Diffusion is free. This is a step in the right direction, but I feel that it is too little, too late. Just compare the rate of development. Midjourney has cranked out a number of different models, including an extremely exciting new model ("--testp"), new upscaling features, improved facial features, and a bunch more. They're also super responsive to their community. In the meantime, OpenAI did... what? Outpainting? (And for months, DALL-E had an issue where clicking on any image on the homepage would instantly consume a token. How could it take so long to fix such a serious error?) You have this incredible tool everyone is so excited to use that they're producing hundred-page documents on how to get better results out of it, and somehow none of that actually makes it into the product?
fasterthanlime 🌌 (@fasterthanlime)
rustaceans really will implement TryFrom<(Lol, Lmao, GoodLuck)> instead of adding a single associated func, smh
Get in Zoomer, We're Saving React
What's really frustrating about all this is how passive and helpless the current generation of web developers seem to be in all this. It's as if they've all been lulled into complacency by convenience. They seem afraid to carve out their own ambitious paths, and lack serious gusto for engineering. If there isn't a "friendly" bot spewing encouraging messages with plenty of 👏 emoji at every turn, they won't engage.
Get in Zoomer, We're Saving React
If there's one solid criticism I've heard of React, it's this: that no two React codebases ever look alike. This is generally true, but it's somewhat similar to another old adage: that happy families all look alike, but every broken family is broken in its own particular way. The reason bad React codebases are bad is because the people who code it have no idea what they're supposed to be doing. Without a model of how to reason about their code in a structured way, they just keep adding on hack upon hack, until it's better to throw the entire thing away and start from scratch. This is no different from any other codebase made up as they go along, React or not.
Cinder is Meta's internal performance-oriented production version of CPython
To me it looks like lock-in. They chose a language good for prototyping and quick iteration, and then their codebase gets stuck with a permanent performance problem. You see the same problem in Python with regard to correctness - it's hard to refactor Python or change a large codebase and have it keep working correctly, so huge Python projects tend to ossify. It may be a rational choice in the short term, but it's still an objectively bad solution overall.
Cinder is Meta's internal performance-oriented production version of CPython
It's bizarre. I don't think it's an exaggeration to say this is the 10th project I've heard of that tries to speed up Python. Seriously, use a faster language. If you need a performant fork of Python, you're using the wrong tool for the job.
Show HN: I made 7k images with DALL-E 2 to create a reference/inspiration table
At the end of the day, unless it's opened up, DALL-E 2 will be seen as an evolutionary dead end of this tech and a misstep. OpenAI has gone from potentially one of the most innovative companies on the horizon to a dead product, now that I can spin up equivalent tech on my own machine and hook it into my workflow and tools in an afternoon, all because Stable Diffusion released their model into the wild.
1Password delisting forum posts critical of their new Electron based 1Password 8
One of the very best things I ever did while working on an Android app was to buy a dirt cheap phone. Every performance problem was obvious. Every fix was a clear improvement. And when things were acceptable there, the app absolutely screamed on modern phones. We had startup times faster than Android's launch animation with a little bit of care. Our users loved it.
Neubrutalism is taking over the web?
> "People simply get bored with how their apps and websites look after six to seven years. They need a change" Real-world objects rarely change design because of the costs involved. When they do, the change needs to justify that cost. For example, I'm not going to change the buttons on my microwave because I'm "bored" with them. The cost of changing software design is far lower, and therefore isn't driven by the same high level of justification. I strongly suspect, then, that there are two reasons for these design changes we see every couple of years in software. The first is easy, and most of us probably already agree: designers gotta design. They have to justify their salary _somehow_. The second is more philosophical. The West - and especially the U.S.A. - looks to alleviate existential crisis with distractions. Shiny new toys keep us from having to face uncomfortable truths about the nature of reality (if you're not religious).
Hydration is pure overhead
If none of this makes sense to you, don't try to make sense of it or you'll be disappointed. Do you need 2,000+ dependencies to essentially show an HTML page in a web browser? Why should you have to wait 5 minutes to generate a static website? Netlify and Vercel are well aware of these inefficiencies and offer you a "cloud" solution that promises to solve problems you shouldn't have had in the first place. If you think you need things like Gatsby or Next.js, you've been brainwashed by capitalists.
Automation is the serialization of understanding
To paraphrase the maxim, working automated systems evolve from working manual systems. But only some manual systems work. I start CI/CD by doing the whole process manually. For example, type the commands to build a Docker image, or spawn a VM, or obtain a secret. I encode all this in pseudo code, then put it in Bash (or Python). When a conditional branch appears (a different environment with different credentials), I treat it like any other code. Separate the bits that stay the same, and inject the bits that change. The problem with most CI/CD systems is that people tightly couple themselves to the tool without really understanding it - the point the article is making. They over-complicate the solution because the documentation encourages you to do that. When they want to customise, debug, or even migrate away from it, it’s very difficult.
Windows 95 – How Does It Look Today?
I lol'd at your comment. Poor UX designers. In an age of gentleness, I wish I could barge into their houses and rearrange all their furniture, toss the contents of their refrigerators into the bathtub, and spraypaint their bedrooms a cheap pink color. Because that's what they do to my computer interfaces at random intervals, and I have no power over it anymore.
Tell HN: AWS appears to be down again
We are barbarians occupying a city built by an advanced civilization, marveling at the hot baths but knowing nothing about how their builders kept them running. One day, the baths will drain, and anyone who remembered how to fill them up will have died.
Had my first "Rust Moment" today.
Wait until you go back to Python after some time in Rust. Returning whatever you feel like from a function, having potentially uninitialized variables of whatever type, and all the other things that make Python fun feel like drunk driving a 747 when you come back.
How I made Google’s data grid scroll faster with a line of CSS
I work in UX. I am constantly being given designs that don't work well with native/semantic elements. A great example is tables: as soon as the table needs some kind of animation, drag-and-drop behavior, anything like that, I can't use a "table" anymore; or it becomes some Frankenstein, Kafkaesque amalgamation that is impossible to maintain. Does the table really need an animation? (Probably not.) Drag and drop? (Probably not.) But management and the people in charge of OK'ing these designs have a 'make-it-happen' attitude, and nobody really cares about a semantic, native feel when they've invested so much into a "design system" that is largely antithetical to it. Select elements are the bane of my existence. Impossible to style. I am constantly re-implementing a "select" because it has to look a certain way. Just terrible.
My ideal Rust workflow
> How do people develop in Rust? I'm trying to learn it, but it's hard to jump into code-bases and understand the code as I cannot run snippets. I might be able to help answer this! I've spent over 10 years of my career writing production code in Lisp or Scheme, and about 5 years now writing occasional production code in Rust. So maybe I can explain how the two workflows differ. In Lisp, it's super-easy to redefine a function or test some code. You can constantly test small things as you work. And you can easily open a listener on errors and inspect the current state. It's genuinely great. In Rust, you rely much more heavily on types and tests. You begin by nailing down a nice, clean set of data structures that represent your problem. This often involves heavy use of "enum" to represent various possible cases. Once you know what your data structures look like, you start writing code. If you're using "rust-analyzer", you'll see errors marked as you type (and things will autocomplete). If you want to verify that something works, you create a function marked "#[test]", and fill it with exactly the same code you'd type into a listener. Maybe you run "cargo watch -x test" in the background to re-run unit tests on save. Then, maybe 2 hours later, you'll actually run your program. Between rust-analyzer and the unit tests, everything will probably work on the first or second try. If not, you write more "#[test]" functions to narrow down the problem. If that still fails, you can start throwing in "trace!", or fire up a C debugger. This workflow is really common in functional languages. GHC Haskell has a lovely listener, for example, but I rarely use it to actually run code. Mostly I use it to look up types. The difference is that in strongly-typed languages, especially functional ones, types get you very close to finished code. And some unit tests or QuickCheck declarations take you almost the rest of the way. 
You don't need to run much code, because the default assumption is that once code compiles, it generally works. And tests are basically just a written record of what you'd type in a listener. For understanding code, the trick is to look at the data structures and the type signatures on the functions. That will tell you most of what you want to know in Rust, and even more in Haskell. So that's why I don't particularly miss a listener when working in Rust. Does this answer your question?
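A tiny sketch of that workflow, with made-up example types: step one is nailing down the data structures with an enum over the possible cases, and step two is keeping the code you'd have typed into a listener around as a `#[test]`:

```rust
// Made-up example types for illustration. Step one of the workflow:
// nail down the data structures, leaning on `enum` for the cases.
#[derive(Debug, PartialEq)]
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // The compiler checks this match is exhaustive; adding a variant
    // later turns every stale match into a compile error.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

// Step two: the code you'd have typed into a listener, kept as a test
// so `cargo watch -x test` re-runs it on every save.
#[test]
fn rect_area_is_width_times_height() {
    assert_eq!(area(&Shape::Rect { w: 3.0, h: 4.0 }), 12.0);
}

fn main() {
    println!("{}", area(&Shape::Rect { w: 3.0, h: 4.0 }));
}
```

The test is exactly the interactive experiment, just written down, which is why it doubles as a record of what the code is supposed to do.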
GitHub stale bot considered harmful
In my experience, these auto-closing bots are the natural result of software development workflows that treat issues as tasks to be closed, rather than as data points that a user is experiencing a problem of some kind (maybe they are doing things wrong, expecting something the project doesn't provide, or triggering a real problem - the exact cause is immaterial). This treatment of issue-as-task is made worse by micro-management frameworks like Agile, which encourage metrics on how many of these issues-as-tasks are closed, which leads to ill-advised features like this that close them automatically because "Duh, no one said anything in 30 days". If I were to design this myself, I would argue that the correct way to treat an issue is not to have it have an open or closed state at all. If the issue spawns a task or related tasks, you can close those. Or you can provide feedback on the issue stating that it is invalid. The user has already experienced a problem or wants a feature; there is no value in putting up a red label that says "I'm done with this, please go away". It unnecessarily invalidates the experience of users who have provided their valuable time to report something to your software project. I think this is similar to the approach used by forums like Discourse, where a thread about a problem will usually not be closed or locked, but will just age out of current discussion if nobody brings it up.
Tech workers warned they were going to quit. Now, the problem is spiralling out of control. Tech workers complain of toxic work environments, unrealistic demands from employers, and a lack of career progression. Research suggests that they may have reached their limit.
It’s not just a lack of career progression for the technically inclined. It’s also the fact that extroverted project managers with no technical skills tend to shoot up into higher ranks despite holding a fraction of the experience of the technical staff. We’re literally being led by loud-mouthed idiots whose defining traits are that they don’t think deeply, they talk over people, and they thrive off meetings. If I have one more manager state, “I don’t understand technology, hahaha,” I’m going to scream. We’re a technology company. You work managing developers. You should understand technology! No manager working with developers in a tech company should feel comfortable admitting they don’t understand technology, let alone mention it to the whole team repeatedly. In fact, they shouldn’t have been hired in the first place. They damn sure shouldn’t be promoted!
Facts every web dev should know before they burn out and turn to painting
The thing that burns out web developers is web development. The constant shift of technologies (just for the sake of it) and the nerfed-to-oblivion environment is hell. As a web developer, after you learn the basics (~5 years) there is no feeling of where the "up" direction is. Everything feels like a sidestep. There is no feeling of permanence to your knowledge, unlike for a lawyer or a doctor. The knowledge feels like water pouring into a full container - as much of it flows out as flows in. I switched to embedded systems 7 years ago to avoid the burnout. It is better, in my opinion. Funny enough, there is a natural barrier to entry that keeps most programmers away: you have to understand how computers work, how operating systems work, and you have to be able to code in C/assembly. I have a lot of fun and actually picked up electronics as a hobby, which will help me better myself professionally. I think there is enough here to keep me entertained until I retire.
Facebook going down meant more than just a social network being unavailable
Was talking about this with a friend today, and I think this incident highlights why I sometimes get really depressed about my career and technology. I'm a Gen X-er, and I started my career in the late 90s. Before that I was a ham radio operator in junior high and high school (back when they had Morse code tests!). I remember the heady euphoria around the Internet then, and the vision of "tech utopia" was certainly the dominant one: the Internet would bring a "democratization of information" where anyone with a computer could connect to the Internet, publish a website, and communicate with people across the world. Really cool new services came online frequently. I still remember the first time I used Google, and at the time I was blown away by how good it was ("like magic!" I said) because the results were so much better than other search engines of the time. But these days, the older I get, the more I feel like tech is having a negative impact on both society at large and me personally. In the 90s we all thought the Internet would lead to a decentralization of power, but literally the exact opposite happened. Sure, telcos sucked, but there were tons of them spread across all corners of the globe. Now there is one single megacorp that a sizable portion of humanity depends on for phone/text communication. It just makes me sad. Sure, there are pluses to tech I'm ignoring here, but I just think that how reality turned out 180 degrees from the expectations of the late 90s is what really hurts.
Do programmers dream of electronic poems?
I have never been called a massive wanker, but I do often get confused stares when I try to explain this. For me, both literature (or creative writing, to be less presumptuous) and programming are ways of expressing the ideas, stories, and models that float around in my mind when I am thinking about the world. Some stories are better told with fiction, others by software. Many can be told by both, in the same way that a painter can paint the same picture with different techniques and get results that highlight different aspects of it. As with all forms of expression, it is never possible to completely transfer the inner world of my brain to that of someone else. So we use approximations. Programming and creative writing are different techniques for making those approximations, and both use text as a storage format. And thus they are naturally closely related to each other.
Enterprise Software Projects Killed the Software Developer
Elegant and clever code won't live through a maintenance cycle. I'll take a software developer who writes and structures code so that change requests and code are written in a way that keeps the DSL the same across the organization. This makes changes easy. Clever people should be writing libraries or doing research. Don't kid yourself: either you are the guy who builds the building, and it's easy because it's greenfield, or you are doing remodeling, and the hard part is making the upgrade fit the building and not look like shit.
Parser generators vs. handwritten parsers: surveying major languages in 2021
I took the compilers class at Stanford and never really understood the algorithms of bottom up parsing, or even really how grammars worked. I just made the tool work. I then went to work at a private company, and an older guy who had gone to a state school that taught recursive descent (his was the last class to teach it) taught me how to do it. In a month or so I had learned more about how grammars actually work, what ambiguity is, and so forth, than in my whole class at Stanford. I now teach compilers at a university, and I teach recursive descent.
Psst: Fast Spotify client with native GUI, without Electron, built in Rust
What's funny about having to rely on unauthorized clones to provide a fast native UX is that Spotify's original client back in 2008 started out as a beautifully light, custom-rendered native client. Few apps ever had the wow factor it did the first time I used it; it was so much lighter and more responsive than anything else of the day. I remember being perplexed at how I could search and skip to any part of a song quicker than iTunes could while looking at a local library. Everything was latency-free and instantaneous. We were building a music startup at the time, so we investigated how it worked. We were very surprised we couldn't find any evidence of an established UI toolkit. It looked as though they had built their own custom UI renderer and an optimized TCP protocol which sent back its metadata as XML. Their traffic looked like it was initially seeded from their own (or CDN) servers (for best latency), and then over time we would see some P2P traffic on the wire. Our Qt/C++ client had decent performance but was noticeably heavier than Spotify's. I was disappointed to see their native client eventually be abandoned and become yet another Chromium wrapper. I expect it fell to the pressures of a growing startup adding hundreds of developers (without the skill of the original CTO/devs), where a native UI couldn't be updated and re-iterated as fast as a web app. I wish they had maintained two desktop clients: left the native client alone to just be an audio player, and pushed all their new social features to a new flagship CEF app. It's unfortunate that the skill and desire to build fast native UIs are being lost to Electron and CEF wrappers. It seems the larger the organization, the more likely they are to build new web-rendered desktop apps, and we have to rely on unauthorized indie efforts like this one for fast, responsive native UIs.
Compiling rust is NP-hard
I worked for 3 years on a project where it took a whole week to get the code compiled, signed by an external company, and deployed to the device so that I could see the results. I just learned to work without compiling for long stretches. Over time my productivity increased and the number of bugs fell dramatically. Working this way requires you to really think about what you are doing, which is always a good idea. This was over a decade ago; now I work mostly on Java backends, and I am happy that I typically spend days or even weeks without ever compiling the code, and that it usually works the first time I run it. I can't imagine going back. It looks really strange to me to observe other developers constantly compiling and running their code just to see if it works. It kind of looks as if they don't exactly understand what they are doing, because if they did, they would be confident the implementation works. The only time I actually run a lot of compile/execute iterations is when I genuinely don't know how something works. I typically do this to learn, and I typically use a separate toy project for it.
‘Positive deviants’: Why rebellious workers spark great ideas
The fact that offering an idea that's better than what's already being done is seen as rebellious at all, as opposed to being the entire job of an engineer, or the definition of what engineers do, is not a good sign for any organization. Next they'll be talking about rebellious accountants who have recorded more numbers by the end of the day than were in the spreadsheet at the beginning, or subversive lawyers who review contracts that had not already been reviewed. Before long it will take a fifth-column delivery driver to move a pizza to a location it's never been before.
Untapped potential in Rust's type system
Interesting article, but I think the key to writing idiomatic Rust is not to stretch what the type system can do, but rather to be happy with what it can express and avoid unnecessary abstraction. The compile-time guarantees that we have to prove in Rust also serve as a hint for when not to abstract.
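As one hedged illustration of that point (all names invented): a plain newtype often expresses an invariant with no abstraction tower at all. The only way to obtain an `Email` is through validation, so every function that accepts one can rely on the invariant for free:

```rust
// Invented example: a newtype whose only constructor validates, so any
// function receiving an `Email` can rely on the invariant for free.
#[derive(Debug, Clone, PartialEq)]
struct Email(String);

impl Email {
    fn parse(raw: &str) -> Result<Email, String> {
        // Deliberately naive check; the point is where it lives, not what it is.
        if raw.contains('@') {
            Ok(Email(raw.to_string()))
        } else {
            Err(format!("not an email: {}", raw))
        }
    }

    fn as_str(&self) -> &str {
        &self.0
    }
}

fn main() {
    let ok = Email::parse("dev@example.com").unwrap();
    println!("{}", ok.as_str());
    // The invalid case never produces an `Email` value at all.
    assert!(Email::parse("not-an-address").is_err());
}
```

One struct and one method buy the guarantee; no traits, generics, or macros required.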
Rethinking the computer ‘desktop’ as a concept
The desktop is broken not because of the file/folder paradigm but because we stopped using files to represent information. Figma, Slack, and Notion should save their information to disk. You should be able to open a Notion document, or a Figma design, from your desktop, instead of through their web interface. You should be able to save a Facebook post or a tweet and its replies to disk. Why can't you? Well, for one, social media companies don't want you to save stuff locally, because they can't serve ads with local content. Furthermore, browser APIs have never embraced the file system, because there is still a large group of techies who think the browser should be for browsing documents and not virtualizing apps (spoiler: this argument is dead and nobody will ever go back to native apps again). Finally, the file system paradigm fails with shared content; you can't save a Google Doc to disk, because then how could your friends or coworkers update it? It's much easier for Google to store the data on their server so that everyone can access it, instead of you setting up some god-awful FTP-or-whatever solution so that your wife can pull up the grocery list at the store. I'm hoping the new Chrome file system API will bring a new era of web apps that respect the file system and allow you to e.g. load and save documents off your disk. However, this still won't be good enough for multiplayer apps, where many devices need to access the same content at the same time. I don't know if there is any real way to go back to the P2P paradigm without destroying NAT - WebRTC tries, but it resorts to server-based communication (TURN) when STUN fails.
Ask HN: Does anyone else find the AWS Lambda developer experience frustrating?
You've discovered what many other people have: the cloud is the new time-share mainframe. Programming in the 1960s to 80s was like this too. You'd develop some program in isolation, unable to properly run it. You'd "submit" it to the system, and it would be scheduled to run along with other workloads. You'd get a printout of the results back hours later, or even the next day. Rinse and repeat. This work loop is incredibly inefficient, and it was replaced by development that happened entirely locally on a workstation. That dramatically tightened the edit-compile-debug loop, down to seconds or at most minutes. Productivity skyrocketed, and most enterprises shifted the majority of their workload away from mainframes. Now, in the 2020s, mainframes are back! They're just called "the cloud", but not much of their essential nature has changed other than the vendor name. The cloud, just like the mainframe: - Does not provide all-local workstations. The only full-fidelity platform is the shared server. - Is closed source. Only Amazon provides AWS. Only Microsoft provides Azure. Only Google provides GCP. You can't peer into their source code; it is all proprietary and even secret. - Has a poor debugging experience. Shared platforms generally can't allow "invasive" debugging for security reasons, and their sheer size and complexity mean your visibility will always be limited. You'll never be able to get a stack trace that crosses into the internal calls of platform services like S3 or Lambda. Contrast this with typical debugging, where you can trace even into the OS kernel if you so choose. - Is generally based on the "print the logs out" feedback mechanism, with all the usual mainframe issues, such as hours-long delays.
I can only think that modern front end development has failed
What upsets and concerns me the most is when I see poorly developed SPA on really important sites. For example, government service application websites. If reddit or nytimes has a bloated, intermittently failing SPA site, that's an annoyance. When it's a form to apply for unemployment, ACA health care, DMV, or other critical services, it's a critical failure. Especially since these services are most often used by exactly the population most impacted by bloated SPA (they tend to have slow or unreliable internet and slow computers, maybe even a cheap android phone is all they have). Such sites should be using minimal or no JS. These aren't meant to be pretty interactive sites, they need to be solid bulletproof sites so people can get critical services. And I haven't even mentioned how SPA sites often lack any accessibility features (which is so much easier to implement if sticking to standard HTML+CSS and no/minimal JS).
The Space of Developer Productivity
The problem starts with the name. Developers are creating, not producing. They don't make the same widget every day. When you measure productivity instead of creativity, you hinder creativity and therefore output.
The tree-based approach to organizing documentation sucks
Documentation sucks because nothing is used for very long anymore. In the good old days (TM), software was used for much longer in pretty much the same shape. Think of GNU coreutils. In contrast, your API or your frontend code or your Amazon Lambda or your microservice is quite likely not feature-complete, does some things that should be handled by a different component, and was developed with exactly one use case in mind until it was "good enough". Thanks to Scrum, no one cares about completeness, orthogonal design, or composition of smaller parts anymore. Hence documentation has only token value. Except, maybe, end-user documentation, but I have yet to encounter a "user story" that begins with "As a user, I want to do X, read how to do X in the documentation, follow the instructions, and get the desired results."
Modules, Monoliths, and Microservices
My observation is that much of industry does not care about any of these technical or security issues. In theory microservices are technical artifacts, but what they tend to be are cultural artifacts. Microservice adoption is often driven by cargo culting, or (better) a considered decision to work around a lack of organisational cohesion. What microservices let you do is ship your org chart directly, and also map back from some functionality to an owning team. You know who to page when there's a problem and it's easier to tell who is late delivering and whose code is slow. In cultures with "lax" technical leadership (aka no everyone uses same thing mandate, I'm not judging) it lets teams use their favourite tools, for better or worse. Other org related things are the ability to have independent (per team) release schedules. Separable billing. Ability to get metrics on performance, cost and failures that can be used as indicators of team performance and promotion material. Microservices can also act as "firewalls", limiting the impact a bad hire or team can have across your codebase. None of this is intended to be negative judgement; microservices can (among other things), help teams feel a sense of agency and ownership that can be hard to maintain as org size scales up.
Why Databricks Is Winning
The one thing I see in my current company, and a growing trend with SaaS apps, is that companies are forgetting how to actually engineer. Like Boeing: the more you outsource, the less you're able to react to changing market forces and fix issues. We run Hadoop & Spark internally, but the team is underfunded and stuck in a constant cycle of fighting fires. And the result (part of a larger push at the company, driven by the same cycle of under-funding and culture issues) is that we're moving our petabytes of data into cloud providers' systems. Not only does the cost of doing this dwarf what it would take to actually fix our issues, but we're also going to lose the people who know how to design and manage petabyte-scale Hadoop clusters. We wind up in a situation where we've locked up data fundamental to our company and our position in the market with a third party, while losing the talent that would allow us to maintain full control over that data. If the service increases prices, changes its offering, or gets to a point where it doesn't meet our needs, we're fucked. It's nice that Databricks has a nice "offramp" you can take to go somewhere else, but the general idea is the same.
The web didn't change; you did
The web really didn't change. It really didn't become complex. The web development process is not one single path. There is simply more choice and more options. We, you and I, the developers, consumers and businesses are responsible for demanding more complicated (and more thorough) tools. We are not, however, beholden to complexity.
Show HN: Straw.Page – Extremely simple website builder
i'm convinced this style is the next big thing in web UI - at least for startups, simple web tools, and anything more desktop-oriented than mobile-oriented. it's such a great rejection of all the stale, boring, "clean" UI convention that we're drowning in today. it's not just nostalgia - it's fun, it's rebellious, it has real character. it shouts "I'm having fun, why shouldn't you?"
I don't want to do front-end anymore
I tell anyone asking me for career advice the same two things. The first: the deeper in the world’s dependency tree you are, the less frequently things will churn, and the longer your skills will last. TCP doesn’t change very often. [Theoretical skills][1] may be applicable for your entire career. Human skills are more durable than any technical skill. Kernels don’t change very often (but more than TCP). Databases don’t change very often (but more than kernels). There is a spectrum of skill durability, and you will burn out faster if you find that all of your skills become worthless after a very short time. Dependency forces things not to change their interface, which causes the work to shift toward performance and reliability among other things that some people find far more rewarding over time.

The second: the more people who do what you do, the worse you will be treated, the more BS you will have to put up with, the worse your pay will be, the faster you will be fired, the harder it will be to find a job that values you, etc… etc… Supply and demand applies to our labor market, and if you want to be happier, you should exploit this dynamic as heavily as possible. Avoid competition like the plague. But don’t avoid funding.

How do you avoid competition without going off into the wilderness where there is no money to be made? Hype drives funding, but it also drives a lot of competition. However, using rule #1 above, the hyped things depend on other things. Many of these dependencies are viewed as “too hard” for one reason or another. That’s the best place to be. Go where other people are afraid, but nevertheless have a lot of money depending on. All hyped things rely on things that for one reason or another are not commonly understood, and tend not to change quickly. That’s a good place to find work involving durable skills that tend to have lower competition.
Go where the dependency is high but the competition is low, and you have a better chance of being happy than people who go where the competition is high or the dependency is low. Bonus points if it’s actually “hard” because then you won’t get bored as quickly. There are areas of front-end that are high-dependency, durable, slow-changing, and low-competition. That’s where engineers are likely to be happiest. But these two principles apply to every field or zooming out to any business generally. I’m pretty happy working on new distributed systems and database storage engines for the time being. But I’m always looking for the things that are viewed as hard while also receiving significant investment, as these are the things that will ultimately give me more opportunities to live life on my own terms. [1]
Respect Your Power Users
I would also add that there are a few different types of power users. Two off the top of my head are "very active users" and "very technical users". The former can often be maintainers of communities. Example: Reddit or Discord. These same communities might end up being the main part of your product. Other examples include social media like Youtube, or even Instagram. These users need a different set of power tools than the "technical power users" do. For the "very active users", you might want to provide things like UI customization, social media linking, statistics and easy tools for moderation. Examples of tools for "technical power users" might be a large set of actions that can be custom-key-bound, a macro/scripting API, an alternate API to your service completely (REST-ful), or support for modding. You can guess what those tools will later be used for quite simply I'm sure. :)
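The "large set of actions that can be custom-key-bound" idea can be sketched in a few lines. This is a hypothetical illustration, not any particular product's API; all names (`ActionRegistry`, `archive`, the key chords) are made up:

```python
# Minimal sketch of a rebindable-action registry, the kind of hook a
# "technical power user" tool might expose. All names are illustrative.

class ActionRegistry:
    def __init__(self):
        self.actions = {}   # action name -> callable
        self.bindings = {}  # key chord -> action name

    def register(self, name, fn):
        self.actions[name] = fn

    def bind(self, chord, name):
        if name not in self.actions:
            raise KeyError(f"unknown action: {name}")
        self.bindings[chord] = name

    def dispatch(self, chord, *args):
        # Called by the UI layer when a chord is pressed.
        name = self.bindings.get(chord)
        if name is None:
            return None
        return self.actions[name](*args)

registry = ActionRegistry()
registry.register("archive", lambda item: f"archived {item}")
registry.bind("ctrl+e", "archive")        # shipped default
registry.bind("ctrl+shift+a", "archive")  # user rebinds freely
print(registry.dispatch("ctrl+e", "msg-42"))  # archived msg-42
```

Once actions are named and decoupled from keys like this, a macro/scripting API falls out almost for free: a macro is just a list of action names to dispatch in order.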
The unreasonable effectiveness of simple HTML – Terence Eden’s Blog
For some content, I think extremely simple HTML design is preferable to sexy styling and functionality. A lack of styling is a style itself, and it sends signals to a user. The following link signals to me that there is no bullshit to be found (Warren Buffett’s website): [][1] Contrast the above website with this website that is trying to sell the user something, and keep in mind that both websites are owned by the same organization: [][2] Same organization. Different goals embodied by different design choices. [1] [2]
New Intel CEO rehiring retired CPU architects
This is an encouraging move. My secondhand understanding was that Intel was losing top talent due to pressure to pay closer to median industry compensation. Top engineers recognized they were underpaid and left the company. I've been part of a similar downhill slide at a smaller company in the billion dollar revenue range. To be blunt, once the [mediocre] MBAs start realizing that the engineers are getting paid more than they are, the pressure to reduce engineering compensation is strong. Frankly, there are plenty of engineering candidates on the market who are happy with median compensation. Many of them are even great engineers and great employees. However, being a top company in a winner-take-all market requires the top engineers. The only way to attract and retain them at scale is to offer high compensation. I'm hoping that's part of what's happening here.
Pirate Bay founder thinks Parler’s inability to stay online is 'embarrassing'
I personally don't find their inability to remain online that surprising. The Pirate Bay and other torrent networks were built by people with a passion for building, maintaining and hacking things. People who, even without a solid CS background, would spend hours a day learning new things, developing distributed protocols, evading DNS blocks and hosting their content wherever they could to make it accessible - including the small server in their own garage if needed. And they are used by people who don't mind learning a new protocol or how to use a new client to get the content they want. I don't see the same amount of passion for technology and hacking among the Parler users, nor its maintainers. Those who believe in conspiracy content are people characterized by a psychological tendency to take shortcuts whenever they can in order to minimize their effort in learning and understanding new things. So when the first blocker hits they usually can't see alternative solutions, because it's not the way their brains are wired. They always expect somebody else to come up with solutions for them, and they always blame somebody else when the solution doesn't come. And even if they decided to migrate their content to the dark web or a Tor network, not many people would follow them - both because they don't have the skills, and because they don't want to acquire those skills. Plus, they'd lose the "viral network effect" they get from posting click-bait content on public networks; the new censorship-proof network would only attract a small bunch of already radicalized people. And even if they wanted to hire some smart engineers to do the job for them, we all know that engineers tend to swing to the opposite end of the ideological spectrum. Those who have built systems for escaping REAL authoritarian censorship would rightfully feel disgusted if asked to apply their knowledge to provide a safe harbour for rednecks to vomit their conspiracy-theories-fueled hate.
Moral Competence | Evan Conrad
What is most interesting to me is that the business model he rejected[1] is not just the one of his app, but essentially the one used by almost all therapists. [1] [][1]: "Unfortunately, in order for the business to work and for us to pay ourselves, we needed folks to be subscribed for a fair amount of time. But that wasn't the case and we honestly should have predicted it given my own experience: as people did better, they unsubscribed. Unfortunately, the opposite was true as well, if folks weren't doing better, but were giving it a good shot, they would stay subscribed longer. So in order to continue Quirk, a future Quirk would need to make people feel worse for longer, or otherwise not help the people we signed up to help. If the incentives of the business weren't aligned with the people, it would have been naive to assume that we could easily fix it as the organization grew. We didn't want to go down that path, so we pivoted the company." [1]
Load testing is hard, and the tools are... not great. But why? | nicholas@web
The best you can do here is probably at API and system design time, not at test time. If you design a simpler API, you're going to have far less surface area to test. If you design a system with more clearly independent pieces (distinct databases per service, for example), then it's easier to test them in isolation than in a monolith. Doing this also lets you use a simpler tool, so you get two wins!
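The payoff of cleanly independent pieces is that the load-test driver itself can stay trivial. A rough sketch, assuming a unit you can call directly (the `handle_request` function and its 1 ms of simulated work are stand-ins, not anything from the article):

```python
# Hedged sketch: load-testing one service-sized piece in isolation.
# A cleanly separated piece needs only a simple in-process driver -
# no traffic capture, no environment replication.
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):        # hypothetical unit under test
    time.sleep(0.001)               # simulate ~1ms of work
    return {"ok": True, "payload": payload}

def load_test(fn, requests=200, concurrency=20):
    latencies = []
    def timed_call(i):
        start = time.perf_counter()
        fn(i)
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        list(pool.map(timed_call, range(requests)))
    latencies.sort()
    return {
        "p50": latencies[len(latencies) // 2],
        "p99": latencies[int(len(latencies) * 0.99)],
    }

stats = load_test(handle_request)
print(f"p50={stats['p50']*1000:.1f}ms p99={stats['p99']*1000:.1f}ms")
```

Twenty lines because the piece under test is isolated; against a monolith the same measurement needs fixtures, seeded databases and network plumbing, which is where the heavyweight tools come in.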
Fostering a culture that values stability and reliability
Next time you see a git repo which is only getting a slow trickle of commits, don’t necessarily write it off as abandoned. A slow trickle of commits is the ultimate fate of software which aims to be stable and reliable. And, as a maintainer of your own projects, remember that turning a critical eye to new feature requests, and evaluating their cost in terms of complexity and stability, is another responsibility that your users are depending on you for.
Fostering a culture that values stability and reliability
There’s an idea which encounters a bizarre level of resistance from the broader software community: that software can be completed. This resistance manifests in several forms, perhaps the most common being the notion that a git repository which doesn’t receive many commits is abandoned or less worthwhile. For my part, I consider software that aims to be completed to be more worthwhile most of the time.
Beyond customization: build tools that grow with us |
When a tool is designed to be simply customizable with an abundance of settings and options, adding power means adding complexity and steepening the learning curve. If great tools are about multiplying our creativity, customization gets in the way of this mission, because it limits how flexible our tools can be, and how easily we can learn to use them. We need a better way to build tools that wrap around our workflows than simply adding levers and dials for every new option.
Coding as a tool of thought – Surfing Complexity
This article really gets to a fundamental misunderstanding I feel our whole industry has: Programming is not construction, it is design. Yeah, houses rarely collapse, but structural engineers don’t expect that their second draft will be taken out of their hands and built. Or that the fundamental requirements of their structure will be modified. I don’t mean to suggest that programming should behave more like construction. The value of programming is the design. Programming is the act of really thinking through how a process will work. And until those processes are really done and won’t change (which never happens) that design never stops.
Coding as a tool of thought – Surfing Complexity
Developers jump to coding not because they are sloppy, but because they have found it to be the most effective tool for sketching, for thinking about the problem and getting quick feedback as they construct their solution.
Coding as a tool of thought – Surfing Complexity
As software engineers, we don’t work in a visual medium in the way that mechanical engineers do. And yet, we also use tools to help us think through the problem. It just so happens that the tool we use is code. I can’t speak for other developers, but I certainly use the process of writing code to develop a deeper understanding of the problem I’m trying to solve. As I solve parts of the problem with code, my mental model of the problem space and solution space develops throughout the process.
Coding as a tool of thought – Surfing Complexity
By generating sketches and drawings, they develop a better understanding of the problem they are trying to solve. They use drawing as a tool to help them think, to work through the problem.
HTML Over The Wire | Hotwire
I'm not going to lie, when I hear "SPA", I don't think "fast"; I think "10s of megs of JavaScript, increasingly unresponsive browser tab". Maybe that's an unfair generalisation from a small percentage of poorly written SPAs, but that small percentage has really had me hankering for multiple-page websites with judicious use of JS.
Toolchains as Code
Just like Go set a new standard that languages should come with their own auto-formatter, I think rustup planted a seed that programming platforms should also come with their own tool manager. My hope for JavaScript is that eventually Node will ship with a tool manager similar to or even based on Volta.
My Engineering Axioms
Every program has state, but how that state is managed can make a world of difference. Poor management of state is a huge contributing factor to overall system complexity, and often occurs because it hasn't been thought about early enough, before it grew into a much worse version of the problem.
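The difference between unmanaged and managed state can be shown in a toy example (the `Account` class and its invariant are illustrative, not from the article):

```python
# Sketch of scattered vs. managed state.

# Scattered: module-level variables any caller can mutate, with the
# "balance never goes negative" invariant living nowhere in particular.
balance = 100
overdrawn = False

# Managed: one owner, invariant enforced at the only mutation point.
class Account:
    def __init__(self, balance):
        self._balance = balance

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        self._balance -= amount
        return self._balance

acct = Account(100)
print(acct.withdraw(30))   # 70
```

The second form is barely more code, which is the point: the cost of thinking about state ownership early is small, and it only grows once dozens of call sites have learned to poke the globals directly.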
My Engineering Axioms
Unless you're working completely alone, it's not just your ability to solve technical problems, to write good code, etc, that matters. To the contrary, they matter even less if you make the people around you unhappy and less productive. Just like learning to write good code, you have to learn "to people" good as well. Empathy is a big part of this, as is recognising that people are different – be caring, be understanding, help others and ask for help yourself, be nice. Be an engineer others want to work with.
My Engineering Axioms
Until you have a high degree of confidence that your abstraction is going to pay for itself because it solves a real, abstract problem you really do have, don't do it. Wait and learn more. Until then, repeating code can help avoid dependency, which itself makes the code easier to change independently or delete. A premature abstraction creates complexity through dependency and indirection, and can become a bottleneck to your ability to respond to change.
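The trade-off can be made concrete with a toy example (hypothetical report code, chosen only to illustrate the point):

```python
# Sketch: tolerable duplication vs. a premature abstraction.

# Duplication: slightly repetitive, but each function can change or be
# deleted independently, with no callers coupled to a shared signature.
def render_csv(rows):
    return "\n".join(",".join(map(str, r)) for r in rows)

def render_tsv(rows):
    return "\n".join("\t".join(map(str, r)) for r in rows)

# Premature abstraction: one function, but now every caller depends on
# its parameter list, and the next format (fixed-width? JSON?) won't fit
# the seps-and-stringify shape at all.
def render(rows, sep=",", line_sep="\n", stringify=str):
    return line_sep.join(sep.join(map(stringify, r)) for r in rows)

rows = [(1, 2), (3, 4)]
assert render_csv(rows) == render(rows)            # same output today...
assert render_tsv(rows) == render(rows, sep="\t")  # ...at the cost of coupling
```

If CSV and TSV later diverge (quoting rules, say), the duplicated versions just diverge; the abstracted one sprouts flags, which is the bottleneck the comment warns about.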
Write code. Not too much. Mostly functions. | Brandon's Website
Code, like food, has value. I think those of us who write it can (hopefully) agree on that. Some, though, are so afraid of writing/eating too much that they avoid writing/eating what they should. In the context of programming, I think this translates to an unhealthy fear (again, for some) of duplication. A little bit of duplication - writing something in a way that doesn't completely maximize conciseness - isn't the end of the world. Sometimes it's the best path forward. Sometimes it's okay to copy-and-modify here and there, especially when you're still figuring out what your application will end up being.
Back to the '70s with Serverless
One thing that surprised me as a latecomer to software development, coming from a visual arts background, is how much the choice of technology and working practices is purely fashion driven. The thing about fashion is that the way it develops is largely arbitrary. Changes in fashion resemble a drunken walk through possible design space, with the drunkard receiving regular shoves from "influencers" who are usually trying to sell you something. Occasionally you have a fashion "revival" where someone takes an idea from the past, gives it a new spin, and then sells it back to newcomers as the next big thing. This seems especially true in the types of startups and companies many HN readers work at or aspire to join / build - that is, ones which are low stakes / high reward[1]. I think when you combine the low-stakes nature of the VC-driven startup world with its cult of youth and the in-group conformity of young people[2], this is what you get. [1] By low stakes I mean no one will die and you won't be prosecuted if your single page app startup goes tits up. Indeed, you're supposed to "fail fast" precisely because the cost of failure is so low. Even if a VC or angel has invested a few million in you, to them that's still low stakes because they exist on an entirely different plane of wealth and you are just one of multiple bets. [2] We're going to rebel by all dressing the same, but not the same as our dad!
Playmaker: The Reality of 10x Engineer | by Ofer Karp | Nov, 2020 | Medium
A 10x engineer is an underpaid senior working as a middle/junior, or an underpaid architect/principal working as a senior engineer.
Why software ends up complex · Alex Gaynor
Taking on the responsibility of pushing back hard on poorly conceived new features is one of the important hidden skills of being an effective software developer. Programmers who just do whatever they get told like marching ants end up shooting an organization in the foot long term. You have to develop an opinion of the thing you're building/maintaining and what it should be able to do and not do. You can't count on project managers to do that. The trick to doing this effectively is to find out the problem the feature is actually trying to solve and providing a better solution. Usually the request is from end users of software and they have identified the problem (we need to do xyz) and prescribed a solution (put the ability to do xyz in a modal on this page.) But if you can look to what other software has done, do a UX review and find a way to add a feature in that solves their problem in a way that makes sense in the larger context of the software, they won't have a problem with it since it solves their problem and the codebase will take less of a hit. Unfortunately, it's a lot easier to just add the modal without complaint.
Why software ends up complex · Alex Gaynor
Every feature request has a constituency – some group who wants it implemented, because they benefit from it. Simplicity does not have a constituency in the same way; it’s what economists call a non-excludable good – everyone benefits from it. This means that supporters can always point to concrete benefits to their specific use cases, while detractors can claim only far more abstract drawbacks. The result is that objectors to any given feature addition tend to be smaller in number and more easily ignored, leading to constant addition of features, and subtraction of simplicity.
Why software ends up complex · Alex Gaynor
The most natural implementation of any feature request is additive, attempting to leave all other elements of the design in place and simply inserting one new component: a new button in a UI or a new parameter to a function. As this process is repeated, the simplicity of a system is lost and complexity takes its place. This pattern is often particularly obvious in enterprise software, where it’s clear that each new feature was written for one particularly large customer, adding complexity for all the others.
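The "one new parameter per request" failure mode looks something like this in miniature (an entirely hypothetical export function; every flag name is invented):

```python
# Sketch of additive feature creep: each customer request lands as one
# more flag, and the signature quietly becomes the product's history.

def export(data, as_pdf=False, legacy_footer=False,
           acme_corp_date_format=False, skip_empty_rows=False):
    # One customer wanted empty rows dropped; everyone else pays the
    # cost of reading (and testing around) the flag.
    rows = [r for r in data if r or not skip_empty_rows]
    # ...each remaining flag guards one customer's branch...
    return {"rows": rows, "pdf": as_pdf}

# Four booleans give 2**4 = 16 configurations; in practice only a
# handful of those combinations were ever exercised or tested.
print(export([[1], []], skip_empty_rows=True))
```

Each individual flag was the "most natural" change at the time; the combinatorial test surface is where the lost simplicity shows up.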
Can developer productivity be measured? - Stack Overflow Blog
In every organization I've worked in, it was obvious who the high performers were and who the low performers were. It was obvious to everyone. The only blind spots were people usually seriously misjudged their own performance. The problem, however, is that management is always being pushed to make objective measurements. For example, to fire someone, you have to first put him on an improvement plan with objective measurements. Otherwise, you're wide open to a lawsuit over discrimination, etc. You have to prove to a judge someone isn't performing, or that you gave raises based on performance. Management also gets pushed into these attempts at objective measurements by attempts to optimize the numbers like what works great for a manufacturing process.
Can developer productivity be measured? - Stack Overflow Blog
This assumes direct managers want productive developers - this is not my experience. The goal of managers is to increase the number of people they manage, and get more money. I have time and again done things fast only to have blocks put in place to slow things down - no one wants the job done easily so they can go home; where's the money in that? The inability to measure productivity is a direct result of this, imho.
Can developer productivity be measured? - Stack Overflow Blog
Software engineering is a creative, not a manufacturing discipline. Every one of these attempts to measure or gauge developer productivity seems to miss that point.
Why is the Google Cloud UI so slow? | DebugBear
The real answer is that Google's promotion and hiring processes don't respect front end developers. Systems programming and distributed systems are considered "hard" and worthy of reward. This explains why Google's front ends are bad, and it also explains why there's a proliferation of non-composable distributed systems inside Google. As a second order effect, the design of those back ends also make it harder to make fast front ends. And front end devs are often using tools designed for back end devs, like the Bazel build system. (Compare that to FB having online / incremental compilers for Hack, as far as I understand.) So they either don't get the best people working on front ends, or the people they have aren't doing their best work because they're looking to move into a role that may be more respected or rewarded. Before 2005, Google built two of the most innovative AJAX apps ever: GMail and Maps. People may not remember how awesome they were. When GMail came out, it was faster than Microsoft Outlook on desktop, which I was using at the time. You would click and your message would appear instantly, which was not true of desktop apps. The app startup time was also better than desktop! (i.e. seeing all your messages from a cold start) When Maps came out, people didn't believe that the scrolling and zooming could be done without a Flash app. It also had incredibly low latency. But somewhere along the way the company lost its leadership and expertise in the web front end, which I find very sad. (I worked there for many years, but not on front ends.) The slow Google+ app circa 2011 was a great example of that, although I believe the structural problem had set in before that project. I don't think there's any question that FB and even MS are significantly more accomplished in these areas. They're the "thought leaders" (React, Reason, TypeScript, etc.) 
--- edit: Also, if you want to remember what Google UI looked like circa 2005, look at sourcehut: [][1] It was fast, simple, and had a minimalist style (though some people mistake that for no style). There is probably a generation of people who are scratching their heads at that claim, but yes that's pretty much what Google used to look like: the home page, which lacked JS; News; Groups; Webmaster Tools; Ads front end to some extent, etc. [1]
Winning back the Internet by building our own | ROAR Magazine
For Cubans, who were barred from connecting their own internets to the globally-networked Internet due to the US embargo, SNET provided everything you would expect to get through your computer, like news, games, blogs, social networking and more. It had all this even though it did not connect to the Internet we are most familiar with. Meanwhile, both Guifi and NYCMesh offer their users a combination of “intra-mesh services” and content for local residents similar to SNET along with more traditional Internet access, highlighting the fact that building our own internets is not an either-or proposition, nor a zero-sum game.
An ex-Googler's guide to dev tools
In short, the build system is often a big giant hairball, and one that you should be wary of trying to disentangle before you pick off the lower hanging developer productivity fruit. It may be tempting to tackle this earlier, because Blaze was worlds better than what you're using now and Google has even helpfully open-sourced a derivative of Blaze called Bazel. But Bazel is not Blaze—for one, it lacks a massive distributed build cluster that comes free alongside it—and the world outside of Google is not Google.
An ex-Googler's guide to dev tools
The most intractable part of the software development life cycle is often CI and the build system. This is because understanding the build often involves understanding every piece of the overall codebase in a fairly nuanced way. Speeding up the build is something that various people try to do over time, and so the build code accrues a growing set of hacks and optimizations until the point is reached where the number of people who actually understand enough about what is going on to make a change with zero negative consequences is very small.
An ex-Googler's guide to dev tools
As a new member of the team, you likely don't have the influence or authority to change all the tools your team uses. Moreover, you also lack knowledge—knowledge of how and why your new team behaves the way it does and why it uses its current set of tools. Simply copy-pasting whatever worked for Google is not necessarily going to work for your new team. So learn what is working for your new team along with what isn't.
Performance Matters • Hillel Wayne
Most of us aren’t writing critical software. But this isn’t “critical software”, either: nobody will suddenly die if it breaks. You just switch back to paper PCRs. But it could have saved lives. At scale, it could have saved people dying from PCR errors. It could have saved the person the EMTs couldn’t get to because they lose an hour a week from extra PCR overhead. If it was fast enough to use.
AWS Cognito is having issues and health dashboards are still green
We hired an engineer out of Amazon AWS at a previous company. Whenever one of our cloud services went down, he would go to great lengths to not update our status dashboard. When we finally forced him to update the status page, he would only change it to yellow and write vague updates about how service might be degraded for some customers. He flat out refused to ever admit that the cloud services were down. After some digging, he told us that admitting your services were down was considered a death sentence for your job at his previous team at Amazon. He was so scarred from the experience that he refused to ever take responsibility for outages. Ultimately, we had to put someone else in charge of updating the status page because he just couldn't be trusted. FWIW, I have other friends who work on different teams at Amazon who have not had such bad experiences.
Essay: How do you describe TikTok? - Kyle Chayka Industries
Thanks for this experiment in critical writing, it's appreciated. Looking forward to more critiques of algorithms from an experiential viewpoint. Reviewing an algorithm seems to me like reviewing architecture, in that social media creates a sense of space within its platforms. You noted that TikTok feels like a canal, being close to one-dimensional (which is what makes it so pleasant). There's a careful control/management of the space which separates a well-curated feed from a lesser one. On TikTok, you can go forwards, or you can go backwards. Instagram used to be one-dimensional, but over time has sprawled into 4 or 5 dimensions, ruining it in my opinion. The algorithm has a difficult time dealing with the added complexity, and it's not very beginner-friendly anymore. Meanwhile, users tend to navigate along the dimensions they're already used to, and automated suggestions are treated as an intrusion. TikTok's success is its well-defined boundaries which give it better control over the experience. (I could comment about the American obsession with having "choice", but I'll shelve that one for now.)
How we designed Dropbox’s ATF - an async task framework - Dropbox
I can understand the need for a company to be constantly trying to add value to its product, but that tendency to keep changing so much can easily cause you to lose sight of what made you popular in the first place. I use Dropbox personally to keep documents synced between my computer and my wife's, and also to grab documents I need from the web if I'm on another computer. I occasionally share a folder if I need to give a large number of files to someone. I recently had a notification come up on the Dropbox taskbar icon and it popped up this huge window that looked like a massive Electron app. In the old days, there wasn't even a UI, just a context menu that also showed the state of the sync. For me, Dropbox provides the most benefit when it's not visible, running invisibly in the background doing its thing.
Geek Blight - Origins of the youtube-dl project
Last, but not least, tools like youtube-dl allow people to access online videos using only free software. I know there are not many free, libre and open source software purists out there. I don’t even consider myself one, by a long shot. Proprietary software is ever present in our modern lives, served to us every day in the form of vast amounts of JavaScript code for our web browsers to run, with many different and varied purposes and not always in the best interest of users. GDPR, with all its flaws and problems, is a testament to that. Accessing online videos using youtube-dl may give you a peace of mind that incognito mode, uBlock Origin or Privacy Badger can only barely grasp.
No More Free Work from Marak: Pay Me or Fork This
Seriously. What's the point of open source if companies just steal it, build billion dollar industries on top, and then lock everything down? Apple is telling us we can't run our own software on their goddamned devices, yet they built their empire on open source. Look at Facebook, Google, Amazon. They've extracted all the blood they can and given us back scraps. AWS is repackaged software you pay more for. Yes, it's managed, but you're forever a renter. They've destroyed our open web, replaced RSS with DRM, left us with streaming and music options worse than cable and personal audio libraries. The web is bloated with ads and tracking, AMP is given preference, Facebook and Twitter are testing the limits of democracy and radicalizing everyone to cancel one another. Remember when the Internet was actually pleasant? When it was nice to build stuff for others to use? Stop giving your work away for free when the companies only take.
Technical debt as a lack of understanding
I've had to explain this to non-technical stakeholders many, many times over the years, and I always use the restaurant metaphor: If you run a commercial kitchen and you only ever cook food, because selling cooked food is your business -- if you never clean the dishes, never scrape the grill, never organize the freezer -- the health inspector will shut your shit down pretty quickly. Software, on the other hand, doesn't have health inspectors. It has kitchen staff who become more alarmed over time at the state of the kitchen they're working in every day, and if nothing is done about it, there will come a point where the kitchen starts failing to produce edible meals. Generally, you can either convince decision makers that cleaning the kitchen is more profitable in the long run or you can dust off your resume and get out before it burns down.
Technical debt as a lack of understanding
Software development looks a lot like evolution. The market and business requirements are the environment that weeds out unfit software. Adapt or die. Codebases that are slow to adapt to outside changes are like species that are slow to adapt to selection pressures. And just as vestigial organs can burst from infection, companies can fail when they're unable to ship because devs are slowed down by messy code.
Technical debt as a lack of understanding
The ugly code can be dealt with. What we can't deal with is the ugly environment. The most severe technical debt is the environment - the OS, toolchains, frameworks and libraries that were fixed at the time development started. Updating the environment should be part of the cost of development, but we tend to ignore it for more immediate short-term gains, shifting the cost onto our future selves. Within a few years, the environment is too old to work with. We have to deal with bugs that were fixed upstream years ago, and reinvent features that are also present upstream. Five years pass and we seriously consider updating the environment, but since there were no updates along the way, existing code relies on old behaviours, so we have to fix all of it; that introduces no short-term gain, so updating is abandoned. Ten years pass and the software is dead.
Technical debt as a lack of understanding
The problem with analogies is that software is fundamentally new. It's not debt where you can just pay it off after the launch. It's not a mess where a cleaning crew can have it taken care of in a day or a week. It's not a structure that will collapse because you added one too many storeys. Software takes all the guardrails off of complexity. A swiss watch is a mechanical masterwork, but the complexity is limited because you have to fit the gears into a limited space. Everything else we deal with has some kind of pushback on complexity, with the possible exception of biological systems that take millions of years to change. Software can grow in complexity with no obvious bound. You can tackle any one particular bug with an extra branch to say "don't let this happen". But a gigabyte of branches is a hell of a lot of complexity. Software engineering is an attempt to wrangle that complexity through all kinds of strategies from "architecture" (another poor analogy) to type systems and OOP and FP and the actor model and everything else. Technical "debt" is really the mismanagement of complexity. It's hard to understand the costs because the costs are inherently unknown unknowns. If you mismanage complexity, then all estimates are meaningless because at any point you could hit a never-ending fractal of problems. It might be completely intractable to add any significant new feature. Developers want to ship features, call it a job well done and take some time off for Christmas. When working with technical debt, no matter how smart the developer is, it's really just luck of the draw who hits a fractal of problems and never finishes and who doesn't and converges on a solution (and when it's bad enough, the latter just never happens).
So you want to buy a farm?
Also, I suspect many of us here "spent" time learning programming as children/teenagers and honed it as early twenty-somethings. At those stages of life time is essentially free and unlimited. You can easily pull all-nighters and 40 hour hacking weekends and 80 hour weeks - and you do it because it's exciting and fun, and it has only very minor opportunity costs - you might miss a school or college assignment deadline, or a few shifts at your minimum wage part time job. Your bedroom at your parents' house or your college dorm is paid for already (even if just by usurious student loans). Once you get to the "disillusioned with the damned tech industry" stage of your life though, you have responsibilities and rent/loans/bills to pay and probably family you need/want to spend time with, and a circle of friends who are in the same stage of life and can't on zero notice order in pizza and mountain dew and hack from 6pm on Friday through to midnight Sunday, catching only naps on the couch as needed. I reckon there's almost as much of a hill to climb for a "woodworker since junior high" looking at programming as a way out of a woodworking career they've become jaded with - as there is for a thirty-something software engineer dreaming of building timber boats for a living instead of being part of "The best minds of my generation are thinking about how to make people click ads." -- Jeff Hammerbacher (But yeah, you don't need to buy new timber when you accidentally "move fast and break things" as a programmer. On the other hand, at least the tools you buy as a woodworker will still work and be useful in a decade or century's time...)
Write code that is easy to delete, not easy to... — programming is terrible
If we see ‘lines of code’ as ‘lines spent’, then when we delete lines of code, we are lowering the cost of maintenance. Instead of building re-usable software, we should try to build disposable software.
Technical debt as a lack of understanding
You can’t expect people to be productive in something that was a culmination of rushed code, poorly understood requirements, and shortcuts made by people who no longer work there. At that point your technical debt balloon has popped, you are in possession of a toxic asset, and it’s time to pay the piper.
Technical debt as a lack of understanding
Knowledge management is so important in organizations, but they rarely undergo that critical step of reorganizing to reflect the current understanding. Need evidence? Take a look at your nearest corporate wiki. I can almost guarantee it’s a mess because most companies should never have wikis. Successful wikis, like Wikipedia, are powered by an army of editors and most organizations will never prioritize that much time or content strategy. Poorly managed knowledge leaves organizations with the memory of goldfish. I can’t tell you how many new product initiative meetings I’ve been in where no one remembers the meeting about the exact same thing from two quarters ago. It’s like Groundhog Day, but you’re having the same meetings over and over.
Technical debt as a lack of understanding
In a go-go-go product cycle, that loss of understanding begins to create problems that have literal and figurative costs. A general sense of confusion builds and builds. The developer economics are fairly simple to quantify; either you slow down and pay someone to refactor and document the code after every major iteration, or you pay every developer who works on the project until the end of time to stare at the code for a few hours and wonder what the hell is going on. That dumbfounded staring at the codebase compounds over time. Organizationally, you pay in velocity and turnover; talented people are going to leave after a few rounds of bullshit.
Microservices – architecture nihilism in minimalism's clothes
In my opinion, microservices are all the rage because they're an easily digestible way for doing rewrites. Everyone hates their legacy monolith written in Java, .NET, Ruby, Python, or PHP, and wants to rewrite it in whatever flavor of the month it is. They get buy in by saying it'll be an incremental rewrite using microservices. Fast forward to six months or a year later, the monolith is still around, features are piling up, 20 microservices have been released, and no one has a flipping clue what does what, what to work on or who to blame. The person who originally sold the microservice concept has left the company for greener pastures ("I architected and deployed microservices at my last job!"), and everyone else is floating their resumes under the crushing weight of staying the course. Proceed with caution.
Microservices – architecture nihilism in minimalism's clothes
Microservices are popular because managing large teams is a pain in the ass and creating a small team to spin off some new business case is really easy to manage. You get budget, you create the new team, if it sucks, reorganize or fire the team (and offload the services to other teams). I'm telling you, it's all Conway's Law. We literally just don't want to think about the design in a complex way, so we make tiny little apps and then hand-wave the complexity away. I've watched software architects get red in the face when you ask them how they're managing dependencies and testing for 100s of interdependent services changing all the time, because they literally don't want to stop and figure it out. Microservices are just a giant cop-out so somebody can push some shit into production without thinking of the 80% maintenance cost.
Keeping Netflix Reliable Using Prioritized Load Shedding
I think you’re talking about SPAs specifically. Many have race conditions in frontend code that are not revealed on fast connections, or when all resources load with the same speed/consistency. Open the developer console next time it happens; I bet you’ll find a “foo is not a function” or similar error caused by something not having initialized yet and the code not properly awaiting it. If an SPA’s core loop errors out, loading will halt, or even a previously loaded or partially loaded page will go blank or partially blank. Refreshing it will load already-retrieved resources from cache and often “fixes” the problem.
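A minimal sketch of the class of bug described above, with all names hypothetical: code that calls into a script which may not have finished loading yet, so it works on fast connections and throws on slow ones.

```typescript
// A module populated by an async script load (think: a <script> tag
// that finishes at an unpredictable time).
let widget: { render: () => string } | undefined;

// Simulates the script finishing after delayMs milliseconds.
function loadWidget(delayMs: number): Promise<void> {
  return new Promise((resolve) =>
    setTimeout(() => {
      widget = { render: () => "rendered" };
      resolve();
    }, delayMs)
  );
}

// Buggy: runs immediately and assumes the script has loaded. A fast
// connection hides the bug; a slow one throws a TypeError -- the
// "foo is not a function" class of error.
function onReadyBuggy(): string {
  return widget!.render();
}

// Fixed: keep the load promise around and await it before touching
// the module, so timing no longer matters.
async function onReadySafe(loaded: Promise<void>): Promise<string> {
  await loaded;
  return widget!.render();
}
```

The fix doesn't speed anything up; it just makes the dependency's readiness explicit and awaits it.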
If Not SPAs, What?
I think there is another layer to that conversation. Frameworks become bureaucratic and boring because they are developed by large teams for large teams. Most developers are working on small projects and need more fun and less maintenance of huge amounts of boilerplate code that recreates the browser. The framework that I feel makes development less ugly is Svelte. But still, I really don't like the idea of heavy client-side websites. It really makes everything more complicated and the user's device slower. I love the simplicity of Turbolinks, I love how clean Svelte code is and I am trying to figure out the "glue"
Sharp tools for emergencies and the --clowntown flag
This seems like a good compromise to me. The tools that provide safety eventually fail, but you need social pressure to avoid devs saying ‘f*** it. We’ll do it live.’ every day.
Sharp tools for emergencies and the --clowntown flag
The last thing you want is to normalize the use of a safety override. Best practices in software aren't usually "written in blood" like they are with "real" engineering disciplines, but they still need to be considered. The number of outages, privacy leaks, data loss events and other terrible things could be greatly reduced if we could just learn from our own collective history.
Sharp tools for emergencies and the --clowntown flag
In particular, "clowntown" made it out of the spoken realm and back into the computers in the form of command-line arguments you could pass to certain tools. By using them, you were affirming that whatever you were asking it to do was in fact broken, crazy, goofy, wacky, or maybe just plain stupid, but you needed it to happen anyway. It was a reminder to stop and think about what you were doing, and why you had to resort to that flag in the first place. Then, when the fire was out, you should go back and figure out what can be done to avoid ever having to do that again.
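A sketch of how such a flag might look in a tool's argument handling; the command and names here are invented for illustration, not taken from the original tools:

```typescript
// A dangerous operation that refuses to run unless the operator
// explicitly passes the override flag, "clowntown"-style.
function deleteAllShards(args: string[]): string {
  const override = args.includes("--clowntown");
  if (!override) {
    // Refuse by default and name the safety valve, so using it stays
    // a deliberate, grep-able act rather than the normal path.
    return "refused: pass --clowntown if you really mean it";
  }
  // Leave a loud trace so someone follows up after the fire is out.
  console.error("CLOWNTOWN override in use -- file a follow-up task");
  return "deleted";
}
```

The point is that the dangerous path still exists, but reaching it requires an explicit affirmation that shows up in shell history and logs, which is exactly the reminder the comment describes.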
Software correctness is a lot like flossing • Hillel Wayne
One reason I don’t like the “developers don’t care” excuse is that it’s too nihilistic. If that’s the case, there is nothing that we can do to encourage people to use better correctness techniques. Changing “developers don’t care” would mean changing the fundamental culture of our society, which is way above our pay grades. On the other hand, if adoption is a “flossing problem”, then it’s within our power to change. We can improve our UI/UX, we can improve our teaching methods, and we can improve our auxiliary tooling.
Surviving disillusionment - spakhm's newsletter
If you work in technology, the monastery can be distant and vague, whereas Paul from marketing wants to circle back with you here and now. Then, as you circle back again and again, the monastery recedes further into the distance, and the drudgery appears closer and closer, until it occupies your entire field of vision and you can't see anything else.
Surviving disillusionment - spakhm's newsletter
But sitting at a mandated retrospective or mindlessly gluing APIs together doesn't put me over the moon. It makes me feel the opposite (whatever the opposite of being over the moon is). And so, engineers are faced with two realities. One reality is the atmosphere of new technology, its incredible power to transform the human condition, the joy of the art of doing science and engineering, the trials of the creative process, the romance of the frontier. The other reality is the frustration and drudgery of operating in a world of corporate politics, bureaucracy, envy and greed, a world so depressing that many people quit in frustration, never to come back.
Surviving disillusionment - spakhm's newsletter
Once you observe the darker side of human nature in the technology industry, you cannot forget or unsee it. The subsequent cynicism can be so disheartening that the romance of the computer revolution is beaten out of people completely. I've met many engineers with extraordinary talent who decided to stop making software. They wanted to program computers all their lives. They were born for it. After spending six, eight, ten years in the industry, they quit for good. Now they're running breweries and hydroponic farms, with no desire to ever again touch a compiler, let alone get back into the fray.
Be prolific
It's the same with software, I imagine, for several reasons. 1. Writing more code (and being conscious of it) makes you a better engineer. You'll run into more issues that you will fix and, hopefully, remember. 2. If you'd take the art example and say "Paint 20 cubist pieces", and then transfer that to "Write 20 authentication servers", each iteration you'll benefit from what you learned and be able to 'clean up' the code. It's essentially writing 20 PoCs where each PoC improves on the last one. EDIT: Writing more versions also allows you to explore more ideas without fear. If you have to write "one good version" you'll be less prone to exploring 'exotic' ideas. So you'd benefit from that as well.
Forcing Functions in Software Development
At an agency, we used to run our web apps on some crappy '08-model laptops with a gig of memory and outdated browsers. If the webapp ran there without major hitches, it was considered good enough. It made everyone on the team think hard about optimizing even before a single line of code was written. It really did force excessive simplicity and not jumping on new libs/frameworks just because we can.
You Reap What You Code
When we first adopt a new piece of technology, the thing we try to do, or tend to do, is to start with the easy systems first. Then we say "oh that's great! That's going to replace everything we have." Eventually, we try to migrate everything, but it doesn't always work. So an approach that makes sense is to start with the easy stuff to prove that it's workable for the basic cases. But also try something really, really hard, because that would be the endpoint. The endgame is to migrate the hardest thing that you've got. If you're not able to replace everything, consider framing things as adding it to your system rather than replacing. It's something you add to your stack. This framing is going to change the approach you have in terms of teaching, maintenance, and in terms of pretty much everything that you have to care about, so you avoid the common trap of deprecating a piece of critical technology with nothing to replace it. If you can replace a piece of technology then do it, but if you can't, don't fool yourself. Assume the cost of keeping things going.
You Reap What You Code
The curb cut effect was noticed as a result of the various American laws about accessibility that started in the 60s. The idea is that to make sidewalks and streets accessible to people in wheelchairs, you would cut the part of the curb so that it would create a ramp from sidewalk to street. The thing that people noticed is that even though you'd cut the curb for handicapped people, getting around was now easier for people carrying luggage, pushing strollers, on skateboards or bicycles, and so on. Some studies saw that people without handicaps would even deviate from their course to use the curb cuts. Similar effects are found when you think of something like subtitles which were put in place for people with hearing problems. When you look at the raw number of users today, there are probably more students using them to learn a second or third language than people using them with actual hearing disabilities. Automatic doors that open when you step in front of them are also very useful for people carrying loads of any kind, and are a common example of doing accessibility without "dumbing things down." I'm mentioning all of this because I think that keeping accessibility in mind when building things is one of the ways we can turn nasty negative surprises into pleasant emerging behaviour. And generally, accessibility is easier to build in than to retrofit. In the case of the web, accessibility also lines up with better performance.
Knolling | Andri is…
An efficient team is, invariably, a team that keeps the code tidy and all external aspects of it up to date. Always be knolling. This does not directly contribute to the solution or success of the current task, but the current task is not your entire job responsibility. In the long run, your job is to complete tasks consistently and in accordance with specifications. If you’re held up by ancillary tasks such as upgrading dependencies or unwinding an abstraction that was meant to solve duplication that turned out to be incidental, then you have failed to keep a tidy system.
The open source paradox - <antirez>
Just as a writer will do her best work on the novel that, maybe, nobody will pay a single cent for, and not on copywriting for a well-known company, programmers are likely to spend more energy on their open source side projects than during office hours writing another piece of a project they feel is stupid, boring, and pointless.
How To Be An Anti-Casteist
The strict social hierarchical dynamics of Indian culture is damaging to a lot of workplaces. 2nd generation Indians are great. The Indians that are from lower castes or from oppressed groups like Christians or Muslims are really great. But the higher castes are extremely insular, and treat anyone of any race poorly. This might be taboo, but every time I see a situation where there are multiple Indians in a reporting chain, I run. If you have an Indian above and below, you will be bypassed on work, undermined, and given absurd directions, almost designed to drive you out. Then there is the case where if an Indian gets into management, they will start filling everything with their friends. Other management positions, they will start fighting to bring in some contractors from some place like Infosys. It's the death knell of the IT division at the company. Being on a team where you are the only non-Indian means you will be an outcast. You'll not be invited to meetings, they'll talk in their native tongue to exclude you. I've been the only white guy working with Chinese, and they don't do that. I've been in similar situations with Africans and African Americans and they will welcome you right along. This is the truth, no matter how politically incorrect it is, and every time you walk into an IT office and there are 80% Indians, that's the reason.
Distance traveled |
There are so many forces pushing us to move as fast as possible, but little about doing good work is about getting places as fast as we can.
Is revenue model more important than culture?
This is why I always gravitate towards software projects that are centered around making money (within ethical bounds, of course). The closer to the bottom line my code is, the larger the sales and support team is around my code, and the more customers there are (real paying customers, not internal employees who like to be called customers) using my code, the better. It may sound overly hard-nosed and cynical to some people, but I find it's just the opposite. The drive to make more money is the only thing that trumps every other petty motivation people follow at work. It trumps favoritism, empire building, and intra-office rivalries. It trumps good ol' boys networks and tech bro networks. Money brings people into the same room who would never normally be in a room together, and they do it willingly. It forces people in power to listen to small fries. While money corrupts on an individual level, it purifies on an institutional level. Its universally accepted value allows a variety of individual motives to flourish. This seems to change once a company goes public and hits a certain size, as the flow of money becomes less and less tied to actual sales and consumer behavior and more and more based on financial engineering and stock price.
Update on Mozilla WebThings
Mozilla seems to be really underperforming in upper management - all of these initiatives that have failed have resulted in engineering layoffs. When will the business unit leaders responsible for repeated failure be let go and replaced?
We need physical audio kill switches
I've been thinking the same about power switches lately. If I turn a flashlight, or an old radio on or off, I flip a switch and get the result I want. With my 65 EUR gamepad, or 300 EUR headphones, I hold a button and wait several seconds for the result. Why has UX regressed so much in these areas?
“I no longer build software”
Add me to the woodworking ex-developers. I built a website that pays the bills, and now I have a lot of time on my hands. I am finishing my first piece of furniture today. It's pretty scary to work without an undo button. The physical world isn't just instructions, but movements. A little twitch can ruin a cut. A clumsy movement can dent a piece of wood you spent an hour sanding. You truly experience the meaning of "measure twice, cut once". Resources also feel tangibly limited. You can't just spin up another server, you must drive across town to buy more lumber. I still enjoy coding though. My passion for it returned once I could do it on my own time, without stakeholders, sprints, meetings, deadlines or even schedules. I sit down and work until the coffee wears off, then go do something else. It's a hobby again. I don't think programming is the problem. Anything you do 40 hours a week for other people will get to you just the same. Programming is a pretty sweet gig, all things considered.
React is becoming a black box
This is a symptom of a lot of developers thinking they have to write code exactly like everyone else (or at least, strictly adhere to "best practices"). It's a very subtle disease, but I've noticed it again and again over the years. Reading between the lines, this is a criticism of hooks if they're viewed as a wholesale replacement for classes; from experience I'd argue they're not—they're just a convenient tool for simplifying common patterns. I'd imagine the author knows that to be the case and instead of just using classes where appropriate (or where they wanted), they had to rationalize using hooks because of the aforementioned "but everybody else is using hooks" problem. I suffered from this behavior for years before I realized it was impeding my work. The term that came to mind for the phenomenon was "The Invisible Developer:" a non-existent developer sitting over your shoulder always judging you for your programming choices. That developer doesn't exist. If instead how "in fashion" your code is is the standard on your team: you're on the wrong team.
Why Johnny Won't Upgrade · Jacques Mattheij
More often than not automatic updates are not done with the interest of the user in mind. They are abused to the point where many users - me included - would rather forego all updates (let alone automatic ones) simply because we apparently can not trust the party on the other side of this transaction to have our, the users, interests at heart.
Don't marry your design after the first date - Tom Gamon
The more time you spend in the problem space, the more information you can gather and the better decision you can make when the time comes. For example, you can probably start working on your domain logic without knowing how the data is going to be served to the client, or what particular flavour of database you are going to use. Once you have chosen a database, by carefully encapsulating the access logic, if it turns out that this database isn’t the one, it is much easier to part ways amicably.
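The encapsulation idea can be sketched like this, with illustrative names not taken from the article: domain logic depends only on a narrow interface, so the database decision is deferred and, if it turns out to be wrong, reversible.

```typescript
// The narrow seam between domain logic and storage.
interface UserStore {
  save(id: string, name: string): void;
  find(id: string): string | undefined;
}

// Enough to start writing domain logic today, before any database
// has been chosen.
class InMemoryUserStore implements UserStore {
  private rows = new Map<string, string>();
  save(id: string, name: string): void { this.rows.set(id, name); }
  find(id: string): string | undefined { return this.rows.get(id); }
}

// Domain logic knows nothing about the concrete store, so a
// Postgres-backed implementation can replace InMemoryUserStore
// later without any edits here.
function renameUser(store: UserStore, id: string, newName: string): boolean {
  if (store.find(id) === undefined) return false;
  store.save(id, newName);
  return true;
}
```

Swapping in a real database later means writing one new class that implements `UserStore`; `renameUser` and the rest of the domain logic never change, which is what makes parting ways with the first choice amicable.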
Show HN: HyScale – An abstraction framework over Kubernetes
This, a hundred times. Do yourself a favour and use Dhall/Cue/Jsonnet to develop some abstractions that fit your workload and environment. There is not much value proposition in a tool like this if you can use a slightly lower-level, more generic tool (like a configuration-centric programming language, which is actually a full-fledged programming language) to accomplish the same goal in a more flexible and more powerful fashion, one that leaves you space for evolution and unforeseen structure changes. The idea of tools mandating what 'environments' are is absurd, as it's pretty much always different for everyone (and that's good!).
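A rough sketch of the "build your own abstraction" idea in a general-purpose language; the field names mimic Kubernetes but are trimmed down, so treat this as an illustration rather than a drop-in manifest generator:

```typescript
// A small description of what you actually care about...
interface Workload {
  name: string;
  image: string;
  replicas?: number;
}

// ...expanded by a plain function into the verbose object a
// higher-level tool would otherwise template for you.
function deployment(w: Workload) {
  return {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name: w.name },
    spec: {
      replicas: w.replicas ?? 1, // your default, not the tool's
      template: {
        spec: { containers: [{ name: w.name, image: w.image }] },
      },
    },
  };
}
```

Because the abstraction is a plain function, adding an unforeseen field or a per-environment variation is an ordinary code change rather than a fight with a tool's fixed notion of what an 'environment' is.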
The software industry is going through the “disposable plastic” crisis
In the micromanaged world of agile, ticket velocity is more important than any other metric. At least everywhere I've worked. Open source is the only place I regularly see high quality code. There the devs are allowed to love their code like pets not cattle.
The software industry is going through the “disposable plastic” crisis
The lie we tell ourselves is that the quality of code matters to non-engineers. It seems it doesn't. The most uncomfortable truth of our field is that there is no floor for how bad code can be, yet still make people billions of dollars. Because that's the outcome everyone else is seeking - making money. They don't care how good the code is. They care about whether it's making money or not.
The software industry is going through the “disposable plastic” crisis
People blame developers but it's all driven by a product mentality that favors rapid iterations and technical debt to run business experiments on customers. Slow-and-steady, carefully written software isn't tolerated within many product orgs these days.
Dear Google Cloud: Your Deprecation Policy Is Killing You
It is a total hassle to keep up with Googlers changing everything constantly. It's not just GCP it's every platform they control. Try keeping a website on the right side of Chrome policies, a G Suite app up, a Chrome extension running. Thousands of engineers chasing promotions by dropping support for live code. If it was their code they wouldn't do it. The org is broken. If you want to see what mature software support looks like, check out Microsoft. Win32 binaries I wrote in college still run on Win 10. Google looks unimpressive by comparison. But they all got promoted!
The day I accidentally built a nudity/porn platform
- anything that allows file upload -> porn / warez / movies / any form of copyright violation you care to come up with
- anything that allows anonymous file upload -> child porn + all of the above
- anything that allows communications -> spam, harassment, bots
- anything that measures something -> destruction of that something (for instance, Google: the links between pages)
- any platform where the creator did not think long and hard about how it might be abused -> all of the abuse that wasn't dealt with beforehand
- anything that isn't secured -> all of the above

Going through a risk analysis exercise and detecting the abuse potential of whatever you are trying to build prior to launching it can go a long way towards ensuring that doesn't happen. React very swiftly to any 'off label' uses for what you've built and shut down categorically any form of abuse, and you might even keep it alive. React too slowly and before you know it your real users are drowned out by the trash. It's sad, but that's the state of affairs on the web as we have it today.
Stefan Hajnoczi: Why QEMU should move from C to Rust
Rust has a reputation for being a scary language due to the borrow checker. Most programmers have not thought about object lifetimes and ownership as systematically and explicitly as required by Rust. This raises the bar to learning the language, but I look at it this way: learning Rust is humanly possible, writing bug-free C code is not.
In spite of an increase in Internet speed, webpage speeds have not improved
>The more modern style of heavier client-side js apps lets you use software development best practices to structure, reuse, and test your code in ways that are more readable and intuitive. Sadly, this is probably where the core of the problem lies. "It makes code more readable and intuitive" is NOT the end goal. Making your job easier or more convenient is not the end goal. Making a good product for the user is! Software has got to be the only engineering discipline where people think it's acceptable to compromise the user experience for the sake of their convenience! I don't want to think too closely about data structures, I'll just use a list for everything: the users will eat the slowdown, because it makes my program easier to maintain. I want to program a server in a scripting language, it's easier for me: the users will eat the slowdown and the company budget will eat the inefficiency. And so on.
Laws of UX
It feels like modern website design conflates “better UX” with “surface level attractiveness” Craigslist is a great example, original reddit is another example: my UI/UX designer friend considers original reddit to be quote “ugly and horrible”, and while there definitely could be some improvements, the reddit redesign (which I know my friend would come up with something similar to) is quite literally orders of magnitude worse, but is aesthetically “nicer”. Original reddit looks ugly, but everything you want from an interface is there once you get through a 3 minute learning curve: information dense, enough white space (but not too much), consistent behaviour, fast, respects scrolling, etc etc. Where did we go “wrong” with web design that what we have now is seemingly worse? And what does a good balance of “actually functionally useful” and “aesthetically pleasing” look like?
The Fear Of Missing Out - Christine Dodrill
Infinite scrolling and live updating pages that make it feel like there's always something new to read. Uncountable hours of engineering and psychological testing spent making sure people click and scroll and click and consume all day until that little hit of dopamine becomes its own addiction. We have taken a system for displaying documents and accidentally turned it into a hulking abomination that consumes the souls of all who get trapped in it, crystallizing them in an endless cycle of checking notifications, looking for new posts on your newsfeed, scrolling down to find just that something you think you're looking for.
Why are CEOs failing software engineers?
CEOs can't really communicate with developers or designers if they have no practical experience with development or design. Lack of mutual respect can make it very hard to find a balance between giving enough creative freedom and setting deadlines. They may either give too much creative freedom to avoid problems (expensive in the short term), or don't give enough to play safe (toxic & expensive in the long term).
The Trick
The problem is the relative social status between the client and Geon. If the client had heart palpitations and Geon was a cardiologist, this wouldn't happen. You would not have Mr Alpha explaining to the doctor how he needs to do the scan and the surgery, and being very cross when he didn't get his way. Even though Mr Alpha probably cares more about his heart working than a user interface. The same goes for pilots and other professionals, they get less crap than they would if they didn't have some sort of status that prevents most of the I-know-best crowd from sticking their heads in. For some reason, software doesn't have that feel to it. In many places, it's a sort of implementation detail, where the generals have already decided the strategy, and the devs just have to follow the orders. It would be good with some cultural change around what people think devs do and what you can say to them.
Why Tacit Knowledge is More Important Than Deliberate Practice
And so if you are a programmer, or designer, or businessperson, an investor or a writer reading about deliberate practice, you may be asking: “Well, what about my field? What if there are no established pedagogical techniques for me?” And if you have started to ask this question, then you have begun travelling a more interesting path; this is really the right question to ask. The answer, of course, is that the field of NDM is a lot more useful if you find yourself in one of these fields. The process of learning tacit knowledge looks something like the following: you find a master, you work under them for a few years, and you learn the ropes through emulation, feedback, and osmosis — not through deliberate practice. (Think: Warren Buffett and the years he spent under Benjamin Graham, for instance). The field of NDM is focused on ways to make this practice more effective. And I think much of the world pays too much attention to deliberate practice and to cognitive bias research, and not enough to tacit knowledge acquisition.
Anxiety Driven Development
I think the serenity prayer, sans unnecessary theological content, is relevant here: grant me the serenity to accept the things I cannot change, the courage to change the ones I can, and the wisdom to know the difference. For a lot of software products, there is no winning in the long run. You've got good product-market fit and customer loyalty, but your code base is a huge mess and the hard technical problems are solved by third-party libraries. Your tech is a liability, and eventually someone with better tech will be smart enough to study your customers (or the students who will eventually replace your inevitably-retiring customers on the front lines) and push adoption going forward. And this is okay. The advantage corporations have over government institutions is that they can be created and destroyed with much less friction. If you're lucky, your growth curve looks like a double-sigmoid table-top. More probably it looks like an asymmetric Gaussian. What it doesn't look like is an exponential. Understand where your product is in its life-cycle, and maximize ROI.
Do you feel JS developers are pushing a bit too hard to use NodeJS everywhere? | Lobsters
There’s a huge funnel problem for computer science at the moment. Go and Rust have some pretty serious evangelical marketing teams, but they are a drop in the ocean compared to the emergent ultramarketing behemoth that feeds JavaScript to the new developer. Part of this is that JS is constantly “new and modern”, with the implication that it’s a bandwagon you’ll be safe on, unlike some of the old cobwebbed bandwagons. Constant change and “improvement” is itself a safety generator. Another part is that it’s so easy to get to hello, webpage. The sweet spot on the racket is enormous. Every computer, including your phone, comes with at least one and usually several JS interpreters. Frictionlessness drives adoption. The problem is that JS is, violently, garbage for most purposes. It’s a local maximum that has essentially everyone trapped, including the next generation. It’s not clear how we escape from this one.
Microsoft Defender SmartScreen is hurting independent developers
Application signing is a mafia protection racket, plain and simple. If you aren't signed by an "authority", every user is told by default that your code is unsafe until you pay money. It is 100% analogous to thugs walking into your store saying "It would be a real shame if something were to happen to scare people away." The message is "We Protected You" and "Unsafe". WHY? Because "WE don't recognize it." Application signing certificates cost money. Always. And if you're making something for free, either out of the goodness of your heart or because you like making things, that money has to come out of your pocket just so the thugs don't stand in front of your door with bats. Nobody should be ok with that. AND FUN FACT: malicious or incompetent actors can and do also pay money.
Rust: Dropping heavy things in another thread can make your code 10000 times faster | Lobsters
I would say that this is a kind of premature optimization in 99.9% of cases. Where are we going as software developers if we have to spawn a thread just to release memory? Maybe I’m just old-fashioned, or maybe I’m just shocked.
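For readers who haven't seen the trick being debated, a minimal sketch of it looks something like this (the collection size and element type are illustrative, not from the article):

```rust
use std::thread;

fn main() {
    // A large structure whose Drop (deallocation) is expensive to run inline.
    let big: Vec<String> = (0..100_000).map(|i| i.to_string()).collect();

    // Normally `big` would be freed at end of scope, blocking this thread.
    // Moving it into a spawned thread shifts that cost off the hot path.
    let handle = thread::spawn(move || drop(big));

    // ... latency-sensitive work continues here immediately ...

    // Optionally wait for the deallocation before the process exits.
    handle.join().unwrap();
}
```

Whether this is ever worth the extra thread depends on the allocator and the size of the structure, which is exactly the commenter's point: reaching for it by default is premature optimization.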
You can't tie your shoes while running | Fatih's Personal Blog
At this point you may be on board with everything I said, but you’re still reluctant to stop the world and ignore business needs until you’re done. Unfortunately, you can’t tie your shoes while running. I’m not saying that you should do your improvements secretly; communicate this need to the business. Technical debt is a reality of the world, and it can grind development to a halt. It’s your responsibility to take the time to pay it back and increase the speed; don’t expect the push to come from the business. One reason I believe the boy scout rule sounds attractive is that in a large team, it’s hard to communicate a best practice to everyone involved. You know in your heart that doing things the current way is bad for everyone, but you don’t feel up to the task of getting everyone on board. Maybe there are too many people; maybe you don’t feel senior enough. So you just sweep around your own front door and feel like you made a positive change. But remember: the next person who reads your code will see two different ways of doing the same thing, and they will be confused.
You can't tie your shoes while running | Fatih's Personal Blog
It’s easy to imagine that if you keep doing the improvement as you touch the code, at some point you’ll cover the whole codebase. This is a wrong assumption. Some parts of the code are never touched until the project is rewritten or taken out of commission. We don’t care about what will happen eventually; we care about making the code better now. Even if it were possible to apply an improvement incrementally over the lifetime of a project, it still wouldn’t make sense, because there won’t be a single improvement over that lifetime. There will be a bunch of them at the same time, and while it may be possible to keep one in mind, it’s not humanly possible to juggle many of them. Your codebase will become a graveyard of many ways of doing the same thing. One underrated quality of every codebase is consistency, the law of least surprise. At any point in time, you want everything to be consistent. Business requirements are hard enough already; you don’t want to take on more challenge by adding different paradigms into the mix.
The best tool for the automation job
Do you believe in the abilities of your tech team? Do you believe in your organization’s ability to train and develop talent? If yes, then finding developers shouldn’t be a problem – just hire smart junior developers and train them well. If not, it’s time for some organizational soul-searching. If you’re a startup and don’t have the time, you hopefully have a senior person already on your founding team. Take a couple of months to train some smart junior people, and you’ve tripled or quadrupled your dev group for the cost of one or two seniors. And you’ve developed a training culture and infrastructure, so you can hire and train more juniors much more easily. You’ve traded a small group of senior devs for a) the staff you needed, b) a learning, improvement-based culture, and c) a much easier path to more staff in the future.
Where Did Software Go Wrong? | Jesse Li
Software cannot be divorced from the human structures that create it, and for us, that structure is capitalism. To quote Godfrey Reggio, director of Koyaanisqatsi (1982), “it’s not the effect of, it’s that everything exists within. It’s not that we use technology, we live technology. Technology has become as ubiquitous as the air we breathe, so we are no longer conscious of its presence” (Essence of Life 2002).
Where Did Software Go Wrong? | Jesse Li
These examples give us a decent idea of what software is good for. On its own, it never enables anything truly new, but rather changes the constant factors of speed and marginal cost, and raises the barrier for participation arbitrarily high. Once the software train begins to leave the station, we have no choice but to jump and hang on, lest we get run over or left behind—and we are not sure which is worse.
Where Did Software Go Wrong? | Jesse Li
For many of us fortunate enough to stay home during the coronavirus outbreak, our only interface with the world outside our families and homes—the relays of connection between us, our families, communities and societies—have been filtered through our screens and earbuds. It is apparent now more than ever exactly what software does for us, and what kinds of inequalities it reinforces. Through Instacart, Amazon Fresh, and other grocery delivery services, we can use an app to purchase a delivery driver’s body for an hour to expose themself to the virus on our behalf. Unsatisfied with even this, some developers have written scripts to instantly reserve the scarce delivery slots on these services. One developer wrote to Vice’s Motherboard “I designed the bot for those who find it extremely inconvenient in these times to step out, or find it not safe for themselves to be outside. It is my contribution to help flatten the curve, I really hope this’ll help reduce the number of people going out” (Cox 2020). Is that right? Does a bot really reduce the number of people going out, or does it merely change the demographics of who gets to stay home, favoring those with the resources and technical skills to run a Python script and Selenium WebDriver? With a constant and limited number of delivery slots, Joseph Cox points out that these bots create “a tech divide between those who can use a bot to order their food and those who just have to keep trying during the pandemic” (2020).
Where Did Software Go Wrong? | Jesse Li
And that is exactly it: in the modern world, our social interactions, our devices, governments, and markets, are circulations and flows of the same realities under the same rules. Our software creates new problems—problems that we’ve never had before, like fake news, cyberbullying, and security vulnerabilities—and we patch them over with yet more layers of code. Software becomes quasi-cause of software. These are echoes of the same voices in a positive feedback loop, growing louder and less coherent with each cycle—garbage in, garbage out, a thousand times over.
The State of Go
Not only that, but rarely am I the sole developer on a library I'm supporting. Go takes away so much of the "individuality" of code. On most teams I've been on with Python and Java, I can open up a file and immediately tell who wrote it based on style and other idiosyncrasies. It's a lot harder with Go, and that's a very good thing.
Where Did Software Go Wrong? | Jesse Li
Every time we dive into a codebase, speak with a mentor, take a course, or watch a conference talk, we are deliberately adding new voices to the little bag of voices in our mind. This is not purely a process of consumption: in internalizing voices, we form counter-words, mentally argue with them, and ventriloquize them through our own work—in a word, we engage in a dialogue. Next time you settle down to read some code, listen carefully for the voices inside the code and the voices inside your mind, however faint they sound. I can hear the voice of a senior engineer from my last job every time I write a type definition.
Where Did Software Go Wrong? | Jesse Li
Software is at once a field of study, an industry, a career, a process of production, and a process of consumption—and only then a body of computer code. It is impossible to separate software from the human and historical context that it is situated in. Code is always addressed to someone. As Structure and Interpretation of Computer Programs puts it, “programs must be written for people to read, and only incidentally for machines to execute” (Abelson et al. 1996). We do not write code for our computers, but rather we write it for humans to read and use. And even the purest, most theoretical and impractical computer science research has as its aim to provoke new patterns of thought in human readers and scholars—and these are formulated using the human-constructed tools of mathematics, language, and code. As software engineers, we pride ourselves in writing “readable” or “clean” code, or code that “solves business problems”—synonyms for this property of addressivity that software seems to have. Perhaps the malware author knows this property best. Like any software, malware is addressed to people, and only incidentally for machines to execute. Whether a sample of malware steals money, hijacks social media accounts, or destabilizes governments, it operates in the human domain. The computer does not care about money, social media accounts, or governments; humans do. And when the malware author obfuscates their code, they do so with a human reader in mind. The computer does not care whether the code it executes is obfuscated; it only knows opcodes, clocks, and interrupts, and churns through them faithfully. Therefore, even malware—especially malware—whose code is deliberately made unreadable, is written with the intention of being read.
ROI in companies that decided to switch to Rust
I'm CTO of a legal tech firm, Clausehound, and we're almost fully migrated to Rust (from a Frankenstein's monster of WordPress PHP). We've built a web application that organizes legal language, bringing a lot of software thinking (e.g. git-like versioning of contract drafts) to law, and the clarity demanded by Rust has been a huge benefit. Our PaaS offering is a GraphQL API that lets you explore knowledge about contracts, so there are a lot of very strict relationships defined that Rust has been perfect for, vs the willy-nilly, force-everything-into-a-string-or-hashmap approach that PHP forced on us. The learning curve was pretty steep, and there's no way we can afford any devs who come with Rust experience already, so we've had to do lots of education in-house. Ownership has, unsurprisingly, been the big concept to teach. The flip side is that the more mature our product becomes, the more good examples devs can find throughout the codebase, because odds are someone has already used a similar approach to borrowing to what they need. I'm fortunate in that I'm in a position to make the decision myself, but I can see a huge drawback at some organizations. Rust is, in many ways, a major shift for some companies, and often people organize themselves around valuing abstractions instead of the value that abstraction provides. E.g. I can't think of a single test we were running on our PHP that even applies to Rust, since every case we were checking (and many more) is enforced at compile time. A lot of organizations are weird about testing: the purpose of testing isn't really to find bugs, it's to maximize user satisfaction and minimize risk by finding where expected behaviour differs from actual. But often, managers will look at total bugs found, or worse, total tests written, as a success metric.
I can promise you that when you don't need to write PHPUnit, Jest, etc. tests just to make sure a variable actually is what you say it is, you'll find fewer bugs in testing and have a harder time writing lots of tests. Tests are just one easy example. Every org is going to have a bunch of metrics they care about a lot that won't make half as much sense on Rust. You're going to need to do a lot of work (attending exec meetings, reading sales materials, etc.) to find the places where you can match the Rust ROI to what they're measuring. You may need to question many of the metrics themselves, which is usually a big uphill battle. If you'd like to chat about it, I'm happy to talk.
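To make the compile-time point concrete: with real sum types, a whole class of "is this value actually what I think it is?" tests disappears, because exhaustiveness is enforced by the compiler. A minimal sketch (the domain names here are illustrative, not from any actual codebase):

```rust
// A draft's status is exactly one of these variants; there is no
// "unexpected string value" case left for a unit test to guard against.
#[derive(Debug, PartialEq)]
enum DraftStatus {
    InReview { reviewer: String },
    Approved { version: u32 },
    Rejected { reason: String },
}

fn describe(status: &DraftStatus) -> String {
    // `match` must cover every variant; adding a new variant later
    // turns every unhandled call site into a compile error.
    match status {
        DraftStatus::InReview { reviewer } => format!("in review by {reviewer}"),
        DraftStatus::Approved { version } => format!("approved as v{version}"),
        DraftStatus::Rejected { reason } => format!("rejected: {reason}"),
    }
}

fn main() {
    let s = DraftStatus::Approved { version: 2 };
    println!("{}", describe(&s)); // approved as v2
}
```

In the stringly-typed PHP equivalent, each branch would typically need a test asserting the status is one of the allowed values; here that assertion is the type definition itself.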
The Imperial High Modernist Cathedral vs The Bazaar – Ben Podgursky
At risk of appropriating the suffering of Soviet peasants, there’s another domain where the impositions of high modernism parallel closely with the world of software: the mechanics of software development. First, a definition. Metis is a critical but fuzzy concept in Seeing Like a State, so I’ll attempt to define it here. Metis is the on-the-ground, hard-to-codify, adaptive knowledge workers use to “get stuff done”. In the context of farming, it’s: “I have 30 variants of rice, but I’ll plant the ones suited to a particular amount of rainfall in a particular year in this particular soil, otherwise the rice will die and everyone will starve to death.” Or in the context of a factory, it’s: “Sure, that machine works, but when it’s raining and the humidity is high, turning it on will short-circuit, arc through your brain, and turn the operator into pulpy organic fertilizer.” And so forth. In the context of programming, metis is the tips and tricks that turn a mediocre new graduate into a great (dare I say, 10x) developer. Using ZSH to get git color annotation. Knowing that “yeah, Lambda is generally cool and great best practice, but since the service is connected to a VPC with fat layers, the bursty traffic is going to lead to horrible cold-start times, customers abandoning you, the company going bankrupt, sales execs forced to live on the streets catching rats and eating them raw.” Etc. Trusting developer metis means trusting developers to know which tools and technologies to use. It means not viewing developers as sources of execution independent of the expertise and tools which turned them into good developers.
The push for new and shiny solutions to old known problems - Jesper Reiche
In my book, simplicity always wins. You win by subtracting complexity, not by adding it. Start with the simplest possible solution to the problem and see where it gets you. Simple makes it cheaper. Simple solutions are easier to implement, easier to test, faster to ship, and hence faster to get feedback on. Once you have this feedback, whether from unit tests, a proof of concept, or user tests, you can decide to add complexity if your simple solution proves too slow or too rudimentary. Always start with the simple solution.
buggy culture · Muvaffak
The moment code is reviewed, approved, and merged, it’s not yours anymore. If a bug occurs, it belongs to everyone who contributed to the software. Still not convinced? Think of it this way: do you pay the developer who wrote the “Buy Now” button extra every time a sale happens, to congratulate them on how the code they wrote brings in sales? No, because you know every sale is enabled by all parts of the web site collectively. The same goes for bugs; they are caused by the whole codebase being the way it is, not by that one if statement.
Ask HN: Who Regrets Choosing Elixir?
I've used Elixir since 2015 and I find Elixir to be unusable for any kind of intelligent domain modelling, but that's primarily because it's dynamically typed and has no concept of real sum types, etc., not necessarily because it's any worse at this than Ruby. Any codebase beyond a fairly small size will become harder and harder to work with, to an unreasonable degree, in my experience, and any perceived "velocity" gained from the dynamic nature of it is paid for doubly by the lack of safety you get beyond toy projects. I'm only slightly more in favor of Erlang as a choice, mostly because it's simpler than Elixir and doesn't have as much of the obfuscation of logic that Elixir adds, but in reality it's also a bad choice for any bigger system. The runtime is absolutely great, but the languages on it are generally not good enough.
Leave Scrum to Rugby, I Like Getting Stuff Done - Qvault
Sprints are useful like achievements in video games are useful; they make us feel warm and fuzzy inside. Motivation is a powerful tool, don’t misunderstand me. The problem is that those warm fuzzies are mostly for the sake of the management team. It makes them feel in control and informed. They know exactly what was done and when it was completed. I’m not against management being informed… but at what cost?
Fragile narrow laggy asynchronous mismatched pipes kill productivity
> Complexity. This is the enemy, the second enemy is bad attempts to reduce complexity which often end up adding more complexity than they take away, just harder to find. This is true at every level of the systems design process - often by trying to make a system "simpler" i.e. less complex for the end user, the complexity is shifted further down the stack into application code, or even to the underlying infrastructure. It's easy for those of us with technical backgrounds to see the beauty and simplicity in well designed interfaces, but as the realm of computing and computer interaction shifts away from technical to non-technical people, we start to absorb some of that complexity into our systems design to make up for knowledge shortcomings of end users. Your example of sed being better than the "fancy data tools" I feel is a good one - whilst sed is incredibly powerful for this use case, if the consumer of what needs to be run there only knows how to use excel, it's often required to create these abstraction layers to allow the end user to do their own primary function/role.
Second-Guessing the Modern Web
I’m tempted to step back and evaluate this on another level. Our industry is very big, and any industry that gets that big will be able to house a lot of people just for the sake of it. If you think we have a large number of fresh frontend people, understand they are hired almost in one-to-one correspondence with fresh product/business people. Modern product development is essentially a polishing job on every component that Twitter Bootstrap or jQuery UI ever invented. Over and over, we dress up a modal, with a slider, with a ‘user flow’, with some tooltips, and so on, and allow the process to masquerade as real design/engineering. There’s so much money in this industry that we can hire entire teams to basically take a Bootstrap component and theme it. This gets passed off as product development, and from the developer side, it gets passed off as engineering. If this is the level of masquerading occurring, why would a frontend developer ever ask ‘what’s the right solution here?’. Something similar is happening on the backend and in infrastructure. It too will take on a mask behind devops and data science and start pumping out what are probably straight-up SQL queries and cron jobs. This will get passed off as design and engineering as well. We’re too big.
Ask HN: Name one idea that changed your life
"Premature optimization is the root of all evil." More and more, I'm realizing this applies more broadly than just to code. Abstraction is a form of optimization and shouldn't be done before the space has been properly explored to know what abstractions should be built. Standardization is a form of optimization and shouldn't be proposed until there's a body of evidence to support what's being standardized. Failure to validate a product before building it? Premature optimization. Building infrastructure without understanding the use case? Premature optimization. Building tools before using them for your end product/project? Premature optimization. This advice comes in different forms: "Progress over perfection", "Iterate quickly", "Move fast and break things", "Don't let perfection be the enemy of good enough", etc., but I find the umbrella statement of not prematurely optimizing encompasses them all.
Tools/practices to manage deeply nested thought-stacks? | Lobsters
Breadth-first, not depth. Defer relentlessly. Check in with your primary goal regularly. Time-box. The trick with making meaningful progress and not spinning out on these tangents is *pausing to recognize them as tangents*. Only execute on a sub-task if it is *necessary* to complete your immediate goal. If a sub-task can be deferred, do that; you can evaluate whether it is still useful later. Capturing tangents to get them out of your head should alleviate some of the pull that they have on you: they won’t be forgotten, but they don’t need to be done now. And always be asking the question “Is this helping me solve my immediate problem?”. Why did you want the interactive debugger? Probably for more context. For debugging specifically, always ask whether there is a dumber/simpler way to find concrete information. Just sitting and thinking through the specific context you think you need might have let you continue with print debugging and short-circuited the tangent. The other tactic that can help is, when you start a sub-goal, estimate how much time it is worth to you and set a timer. Say you had valued the interactive debugger at 20 minutes; the timer goes off, and you realize you were about to re-install your interpreter. That is a good moment to re-evaluate. And having the concrete time box prevents you from losing an entire afternoon to a chain of those. As a reminder, maybe put a sticky note in front of you with your current goal, and keep checking in that you are still really working towards it. As for tooling, OmniFocus and similar apps have quick-capture features for things you can defer until later, and outliners like Workflowy literally let you nest these tangents, which can be a visual signal when you’ve gone too far. But I think the crux of your question is more about focus and process and less about the tools.
Complexity Has to Live Somewhere
The trap is insidious in software architecture. When we adopt something like microservices, we try to make it so that each service is individually simple. But unless this simplicity is so constraining that your actual application inherits it and is forced into simplicity, it still has to go somewhere. If it's not in the individual microservices, then where is it? Complexity has to live somewhere. If you are lucky, it lives in well-defined places. In code where you decided a bit of complexity should go, in documentation that supports the code, in training sessions for your engineers. You give it a place without trying to hide all of it. You create ways to manage it. You know where to go to meet it when you need it. If you're unlucky and you just tried to pretend complexity could be avoided altogether, it has no place to go in this world. But it still doesn't stop existing. With nowhere to go, it has to roam everywhere in your system, both in your code and in people's heads. And as people shift around and leave, our understanding of it erodes. Complexity has to live somewhere. If you embrace it, give it the place it deserves, design your system and organisation knowing it exists, and focus on adapting, it might just become a strength.
Server-Side Rendering is a Thiel Truth
Client-side rendering is (obviously) necessary to support complex interactions with extremely low latency: Figma or Google Docs could only be client-side apps. It is useful for write-heavy applications people use interactively for long periods: email, chat. It is harmful for read-only, or read-mostly, applications: harmful to the implementors as it imposes unnecessary cost, and harmful to users as it's likely slower, less likely to use the web platform correctly, and less accessible. Inappropriate use of client-side rendering is why, to find out my electricity bill, I have to wait ~10 seconds while a big React app loads and then hits up 5 REST endpoints. So is your app mostly forms or displaying content? User preference panels? Mortgage applications? Implement it with server-side rendering, with a sprinkling of JS to implement widgets the web lacks. If only part of your app requires low-latency interactions, use client-side rendering only there. P.S. Don't believe it can be fast? Have a quick wander around the D Forum: it's many, many times faster than most client-side rendered apps I use. Oh, and GitHub (source: I worked there) is overwhelmingly server-side rendered (with Rails, gasp), and so is Stack Overflow. It's quite surprising that this is a Thiel truth.
Agile's early evangelists wouldn't mind watching Agile die
IMO Agile has become regulatory capture. It's a means by which non-engineers can extract value from a booming market which doesn't directly benefit from their skills. That being said, I think there is a lot of wisdom in the original Agile manifesto. The core principles are solid, but the methodology has clearly been co-opted by consultants and supported by management looking to increase the headcount under themselves. I've often struggled to understand why my team is made up of only 20% engineers, with the other 80% pretending to create value by holding meetings to tell engineers what to build next, when I feel it's your clients that should be doing that. Ultimately it's engineering that becomes the constrained resource, which leads to technical debt in favor of pushing out product features. I would venture a guess that most engineers have used (critically) more software in their lives than any non-technical person driving the development of the product. Why then are engineers not the most consulted people on the efficacy and value of new features? I think there is a big myth out there that engineers are incapable of directly handling client feedback.
Stop apologizing for bugs – Dan Slimmon
Everyone knows that all code has bugs. Code is written under constraints. Deadlines. Goals other than quality. Imperfect knowledge of the future. Even your own skill as an engineer is a constraint. If we all tried to write perfect, bugless code, we’d never accomplish anything. So how does it make sense to apologize for bugs? This rule I’ve made for myself forces me to distinguish between problems caused by constraints and problems caused by my own faults. If I really think I caused a problem through some discrete action (or lack of action), then that’s something I’ll apologize for. But if I wrote code that got something done, and it just so happens that it didn’t work in a given situation, then I have nothing to apologize for. There was always bound to be something.