> Developing and testing a virtual version of Unix on OS/32 has practical advantages. There was no need for exclusive use of the machine; [...]. And the OS/32 interactive debugger was available for breakpointing and single-stepping through the Unix kernel just like any other program.
A port of Unix v6, from before it was really meant to be portable. A lovely systems programming story.
A very amusing system programmer's lament.
Ignore the title. It's not actually a rant about Skype sucking, but a really cool article series on someone writing their own codec + packet-loss tolerant UDP networking for a prototype video conferencing app.
Micro-optimizing lockless message passing between threads.
Then use this to replace locks on data structures. Instead of data structures being shared, they're owned by a specific server process. If a client needs to operate on a data structure, it asks the owning server to do it instead (sketched in the code below). Assuming heavy contention, this'll be much faster since fewer cache coherency roundtrips are required.
(Obviously not widely applicable, due to the scheme requiring busylooping to work well.)
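Not from the article, but a rough Go sketch of the ownership pattern itself. The article is about micro-optimizing the underlying queues; an ordinary channel stands in here for its busy-polled lock-free ring, and the counter server is a made-up example:

```go
package main

import "fmt"

// request asks the owning server goroutine to bump a counter and
// report the new value; reply is how the result comes back.
type request struct {
	key   string
	reply chan int
}

// server owns the map outright; no other goroutine ever touches it,
// so no locks are needed on the data itself.
func server(requests <-chan request) {
	counters := make(map[string]int)
	for req := range requests {
		counters[req.key]++
		req.reply <- counters[req.key]
	}
}

func main() {
	requests := make(chan request)
	go server(requests)

	// A client never operates on the map directly; it asks the owner to.
	reply := make(chan int)
	requests <- request{key: "hits", reply: reply}
	fmt.Println("hits =", <-reply)
}
```

The point being that the map's cache lines only ever live on the server's core; clients pay for passing small request messages rather than for bouncing the shared data structure between cores.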
This'll go into the hall of fame of great debugging stories.
The PDP-11 was designed to be a small computer, yet its design has been successfully extended to high-performance models. This paper recollects the experience of designing the PDP-11, commenting on its success from the point of view of its goals, its use of technology, and on the people who designed, built and marketed it.
A lovely mid-life postmortem for the PDP-11.
(Via Dave Cheney; a useful companion piece putting the paper in its historical context, but not a replacement for reading the original.)
Could you replace B-Tree/hash/bloom filter database indexes with machine learning models? The depressing answer appears to be that it's viable. I thought the systems programmer was going to be the last job in the world!
But assuming this is the state of the art (rather than a more typical "this is what we were deploying 5 years ago" Google paper), it's not quite practical yet. CPUs aren't efficient enough, and the communication overhead with GPUs/TPUs is too large. But that's an architecture problem that will get solved.
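For a feel of the core trick, a model predicts roughly where a key sits in sorted storage, and the model's known worst-case error bounds the corrective search. Here's a toy Go sketch (mine, not the paper's; a single linear fit stands in for the recursive hierarchy of models the paper actually trains):

```go
package main

import (
	"fmt"
	"sort"
)

// learnedIndex approximates "key -> position in a sorted slice" with a line
// fitted through the endpoints, plus the worst-case prediction error seen at
// build time, so lookups only search a small window around the guess.
type learnedIndex struct {
	keys   []int
	slope  float64
	offset float64
	maxErr int
}

func build(keys []int) *learnedIndex {
	n := len(keys)
	li := &learnedIndex{keys: keys}
	li.slope = float64(n-1) / float64(keys[n-1]-keys[0])
	li.offset = -li.slope * float64(keys[0])
	for i, k := range keys {
		if d := li.predict(k) - i; d > li.maxErr {
			li.maxErr = d
		} else if -d > li.maxErr {
			li.maxErr = -d
		}
	}
	return li
}

// predict is the "model": here a single linear fit.
func (li *learnedIndex) predict(key int) int {
	p := int(li.slope*float64(key) + li.offset)
	if p < 0 {
		return 0
	}
	if p >= len(li.keys) {
		return len(li.keys) - 1
	}
	return p
}

// lookup predicts a position, then binary-searches only inside the model's
// known error bounds -- the "model plus bounded local search" idea.
func (li *learnedIndex) lookup(key int) (int, bool) {
	p := li.predict(key)
	lo, hi := p-li.maxErr, p+li.maxErr+1
	if lo < 0 {
		lo = 0
	}
	if hi > len(li.keys) {
		hi = len(li.keys)
	}
	i := lo + sort.SearchInts(li.keys[lo:hi], key)
	if i < hi && li.keys[i] == key {
		return i, true
	}
	return 0, false
}

func main() {
	keys := []int{2, 3, 5, 8, 13, 21, 34, 55, 89}
	li := build(keys)
	fmt.Println(li.lookup(21)) // 5 true
}
```

Swap the straight line for a small learned model and keep the error bounds, and you have the range-index idea in miniature.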