A tagged pointer setup for detecting buffer overflows with no branches or extra memory accesses.
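The post's exact scheme isn't reproduced here, but as a rough illustration of the idea, here's a hypothetical sketch of my own construction (tag layout and helper names are all made up): keep a countdown of the remaining capacity in the otherwise-unused top 16 bits of an x86-64 user-space pointer, and turn "countdown hit zero" into a non-canonical address with plain arithmetic, so an out-of-bounds dereference faults in hardware rather than going through a compare-and-branch.

```c
// Hypothetical sketch, not the linked post's scheme. Assumes Linux x86-64,
// where user-space addresses fit in 47 bits, and remaining <= 65535.
#include <stdint.h>
#include <stdio.h>

#define TAG_SHIFT 48
#define ADDR_MASK ((1ULL << TAG_SHIFT) - 1)

// Build a tagged pointer: low 48 bits = address, top 16 bits = slots left.
static uint64_t tag_ptr(void *p, uint16_t remaining) {
    return ((uint64_t)(uintptr_t)p & ADDR_MASK)
         | ((uint64_t)remaining << TAG_SHIFT);
}

// Dereference: branch-free arithmetic sets bit 63 exactly when the
// countdown is zero, yielding a non-canonical address that traps on access.
static char deref(uint64_t tp) {
    uint64_t tag  = tp >> TAG_SHIFT;
    uint64_t dead = ((tag - 1) >> 63) << 63;   // bit 63 set iff tag == 0
    return *(volatile char *)(uintptr_t)((tp & ADDR_MASK) | dead);
}

// Advance one element: address +1 and countdown -1 in a single add.
static uint64_t advance(uint64_t tp) {
    return tp + 1 - (1ULL << TAG_SHIFT);
}

int main(void) {
    char buf[4] = "abc";
    uint64_t p = tag_ptr(buf, 4);
    for (int i = 0; i < 4; i++) {        // four in-bounds reads succeed
        printf("%d\n", deref(p));
        p = advance(p);
    }
    deref(p);                            // fifth read: tag is 0 -> SIGSEGV
    return 0;
}
```

The check costs a few extra ALU ops per access, but no branches and no memory accesses beyond the payload load itself, which is the property the post advertises.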
Actually just the opposite of the title: a lovely story about why you sometimes need to write software to build a product, rather than just snapping together some Lego blocks.
Mmap is so great! Just do a single system call, and all the IO is hidden behind the scenes. Oh, wait. Turns out that sometimes that transparency isn't what you want at all. (Specifically, sometimes you want non-blocking IO. As it happens, I found this post while looking for material on the MAP_POPULATE | MAP_NONBLOCK combination that Linux once supported.)
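For the record, here's roughly what that combination looked like from userspace; a minimal sketch assuming a hypothetical input file data.bin. MAP_POPULATE prefaults the mapping, and MAP_NONBLOCK was meant to restrict that to pages already in the page cache so the call wouldn't block on disk; per mmap(2), since Linux 2.6.23 the combination simply populates nothing at all.

```c
#define _GNU_SOURCE             // for MAP_POPULATE and MAP_NONBLOCK
#include <sys/mman.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDONLY);   // hypothetical input file
    if (fd < 0) { perror("open"); return 1; }
    off_t len = lseek(fd, 0, SEEK_END);

    // Historically: eagerly map whatever is already resident in the page
    // cache, without blocking on disk IO for the rest. On modern kernels
    // MAP_NONBLOCK makes MAP_POPULATE a no-op, so everything is lazy again.
    void *p = mmap(NULL, len, PROT_READ,
                   MAP_PRIVATE | MAP_POPULATE | MAP_NONBLOCK, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    // Non-resident pages still fault in lazily (and block!) on first touch.
    printf("first byte: %d\n", ((unsigned char *)p)[0]);
    munmap(p, len);
    close(fd);
    return 0;
}
```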
Robin Hood hash tables with linear probing are just sorted arrays. What a great way to think about it.
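A minimal sketch of why that's true: the Robin Hood displacement rule ("take from the rich") keeps every occupied run ordered by each key's home bucket, i.e. by hash value, so scanning the table reads like a (circularly) sorted array. Table size, hash constant, and names below are all illustrative; no deletion or resizing.

```c
#include <stdint.h>
#include <stdio.h>

#define CAP 16                  // power of two, tiny for illustration

typedef struct { uint32_t key; int used; } Slot;

static uint32_t home(uint32_t key) { return (key * 2654435761u) & (CAP - 1); }

// Probe distance from a key's home bucket, accounting for wraparound.
static uint32_t dist(uint32_t slot, uint32_t key) {
    return (slot - home(key)) & (CAP - 1);
}

static void insert(Slot *t, uint32_t key) {
    uint32_t i = home(key), d = 0;
    while (t[i].used) {
        uint32_t rd = dist(i, t[i].key);
        if (rd < d) {                     // resident is "richer": evict it
            uint32_t evicted = t[i].key;  // and keep probing with its key
            t[i].key = key;
            key = evicted;
            d = rd;
        }
        i = (i + 1) & (CAP - 1);
        d++;
    }
    t[i].key = key;
    t[i].used = 1;
}

int main(void) {
    Slot t[CAP] = {0};
    for (uint32_t k = 1; k <= 10; k++) insert(t, k);
    // Within each contiguous occupied run, home buckets are nondecreasing:
    // the "sorted array" view of the table.
    for (uint32_t i = 0; i < CAP; i++)
        if (t[i].used)
            printf("slot %2u: key %2u home %2u\n", i, t[i].key, home(t[i].key));
    return 0;
}
```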
> This makes it surprisingly hard to answer a very simple question: what is the fastest join algorithm in 2015? In this paper we will try to develop an answer. We start with an end-to-end black box comparison of the most important methods. Afterwards, we inspect the internals of these algorithms in a white box comparison. We derive improved variants of state-of-the-art join algorithms by applying optimizations like software write-combine buffers, various hash table implementations, as well as NUMA-awareness in terms of data placement and scheduling.
> Zobrist hashing starts by randomly generating bitstrings for each possible element of a board game, i.e. for each combination of a piece and a position (in the game of chess, that's 12 pieces × 64 board positions, or 14 × 64 if a king that may still castle and a pawn that may capture en passant are treated separately). Now any board configuration can be broken up into independent piece/position components, which are mapped to the random bitstrings generated earlier. The final Zobrist hash is computed by combining those bitstrings using bitwise XOR.
And obviously the punchline is that, given a board state and its hash, computing the hash of a successor state is just XORing the original hash with the bitstrings of the moved piece: once for its source square, once for its destination (a small sketch follows below).

Using genetic programming to choose when to use which heuristic for directing an iterative deepening A* search. Though I have to say that the game trees this paper talks about seem ludicrously small for 2009 (1.5M states).
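To make the Zobrist punchline concrete, here's a small sketch (table sizes and the toy PRNG are just for illustration): the incremental update is two XORs, and it agrees with a full rehash of the board.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PIECES 12               // 6 piece types x 2 colors
#define SQUARES 64

static uint64_t zobrist[PIECES][SQUARES];

static uint64_t rand64(void) {  // crude, but good enough for a demo
    return ((uint64_t)rand() << 32) ^ ((uint64_t)rand() << 16)
         ^ (uint64_t)rand();
}

static void init_zobrist(void) {
    for (int p = 0; p < PIECES; p++)
        for (int s = 0; s < SQUARES; s++)
            zobrist[p][s] = rand64();
}

// Full hash of a position: board[s] holds a piece index, or -1 for empty.
static uint64_t hash_board(const int board[SQUARES]) {
    uint64_t h = 0;
    for (int s = 0; s < SQUARES; s++)
        if (board[s] >= 0) h ^= zobrist[board[s]][s];
    return h;
}

// Incremental update for a non-capturing move: two XORs, no rescan.
static uint64_t move_hash(uint64_t h, int piece, int from, int to) {
    return h ^ zobrist[piece][from] ^ zobrist[piece][to];
}

int main(void) {
    init_zobrist();
    int board[SQUARES];
    for (int s = 0; s < SQUARES; s++) board[s] = -1;
    board[12] = 0;                            // some piece on square 12

    uint64_t h  = hash_board(board);
    uint64_t h2 = move_hash(h, 0, 12, 20);    // move it to square 20

    board[12] = -1; board[20] = 0;
    printf("incremental == full rehash: %s\n",
           h2 == hash_board(board) ? "yes" : "no");
    return 0;
}
```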
Running multiple unrelated algorithms for Sokoban solving in parallel, on the assumption that the different algorithms have different blind spots. You get better average and worst cases by giving each of x algorithms 1/x of the CPU than by running just one at 100% (a toy sketch of the pattern is below). Assuming the selection is diverse enough, of course.
The paper also had some ideas about having the different algorithms exchange information, but that seems very complicated and didn't appear to pan out.
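A toy sketch of the portfolio pattern (solver names and timings are made-up stand-ins, not from the paper): run each candidate in its own thread and take whichever finishes first; when they share a core, the scheduler gives each of x threads roughly 1/x of the CPU time.

```c
// Compile with -pthread. The sleeps stand in for real solvers, modeling
// instances that are easy for one method and hopeless for another.
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

typedef struct { const char *name; unsigned secs; } Job;

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static const char *winner = NULL;

static void *run(void *arg) {
    Job *j = arg;
    sleep(j->secs);                       // "solve" the instance
    pthread_mutex_lock(&lock);
    if (!winner) { winner = j->name; pthread_cond_signal(&done); }
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    Job jobs[] = { {"IDA*", 3}, {"greedy BFS", 1}, {"pattern DB", 2} };
    pthread_t tid[3];
    for (int i = 0; i < 3; i++) pthread_create(&tid[i], NULL, run, &jobs[i]);

    pthread_mutex_lock(&lock);
    while (!winner) pthread_cond_wait(&done, &lock);
    printf("first solver to finish: %s\n", winner);
    pthread_mutex_unlock(&lock);

    // A real portfolio would cancel the losers here instead of waiting.
    for (int i = 0; i < 3; i++) pthread_join(tid[i], NULL);
    return 0;
}
```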