<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>metajack.im</title>
  <link rel="alternate" type="text/html" href="https://metajack.im/" />
  <link rel="self" type="application/atom+xml" href="https://metajack.im/atom.xml" />
  <id>https://metajack.im/atom.xml</id>
  <updated>2019-01-24T15:23:02Z</updated>

  
  <entry>
    <title>Servo Talk at LCA 2017</title>
    <link rel='alternate' type='text/html' href='/2017/01/18/servo-talk-at-lca-2017/' />
    <id>tag:metajack.im:/2017/01/18/servo-talk-at-lca-2017/</id>
    <updated>2017-01-18T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>Watch my Linux.conf.au 2017 talk about the Servo constellation and WebRender.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>My talk from Linux.conf.au was just posted, and you
can <a href="https://www.youtube.com/watch?v=an5abNFba4Q">go watch it</a>. In it I cover
some of the features of Servo that make it unique and fast, including the
constellation and WebRender.</p>

<figure><figcaption>Servo Architecture: Safety &amp; Performance by Jack
Moffitt,  LCA 2017, Hobart, Australia.</figcaption><iframe width="560" height="315" src="https://www.youtube.com/embed/an5abNFba4Q?rel=0" frameborder="0" allowfullscreen=""></iframe></figure>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Servo Interview on The Changelog</title>
    <link rel='alternate' type='text/html' href='/2016/11/21/servo-interview-on-the-changelog/' />
    <id>tag:metajack.im:/2016/11/21/servo-interview-on-the-changelog/</id>
    <updated>2016-11-21T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>Listen to me talk about Servo on the Changelog podcast.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>The Changelog has just published an
<a href="https://changelog.com/podcast/228">episode about Servo</a>. It covers the
motivations and goals of the project, some aspects of Servo performance and
use of the Rust language, and even has a bit about our wonderful community. If
you’re curious about why Servo exists, how we plan to ship it to real users, or
what it was like to use Rust before it was stable, I recommend giving it a
listen.</p>

<audio src="https://cdn.changelog.com/uploads/podcast/228/the-changelog-228.mp3" preload="none" class="changelog-episode" data-src="https://changelog.com/podcast/228/embed" data-theme="night" controls=""></audio>
<p><a href="https://changelog.com/podcast/228">The Changelog
228: Servo and Rust with Jack Moffitt</a> – Listen on <a href="https://changelog.com/">Changelog.com</a></p>
<script async="" src="//cdn.changelog.com/embed.js"></script>

]]>
    </content>
  </entry>
  
  <entry>
    <title>Building Rust Code - Using Make Part 2</title>
    <link rel='alternate' type='text/html' href='/2013/12/19/building-rust-code-using-make-part-2/' />
    <id>tag:metajack.im:/2013/12/19/building-rust-code-using-make-part-2/</id>
    <updated>2013-12-19T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>Continuing my posts about building Rust code, I improve upon the make-based solution.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>This series of posts is about building Rust code. In the
<a href="https://metajack.im/2013/12/12/building-rust-code--using-make/">last post</a> I
showed some nice abstractions for using Make with Rust. Today I’ll show an
improved version of this integration.</p>

<h1 id="new-rust-compiler-flags">New Rust Compiler Flags</h1>

<p>After landing the <code class="highlighter-rouge">pkgid</code> attribute work there was much community discussion
about how that feature could be improved. The net result was:</p>

<ul>
  <li><code class="highlighter-rouge">pkgid</code> was renamed to <code class="highlighter-rouge">crate_id</code>, since it’s being used to identify a crate and
not a package, which is a grouping of crates. Actually, a package is still a
pretty fluid concept in Rust right now.</li>
  <li>The <code class="highlighter-rouge">crate_id</code> attribute can now override the inferred name of the crate
with new syntax. A <code class="highlighter-rouge">crate_id</code> of <code class="highlighter-rouge">github.com/foo/rust-bar#bar:1.0</code> names the
crate <code class="highlighter-rouge">bar</code> which can be found at <code class="highlighter-rouge">github.com/foo/rust-bar</code>. Previously the
crate name was inferred to be the last component of the path,  <code class="highlighter-rouge">rust-bar</code>.</li>
  <li>The compiler has several new flags to print out this information, saving
tooling the bother of parsing it out and computing crate hashes itself. You
can use <code class="highlighter-rouge">--crate-id</code>, <code class="highlighter-rouge">--crate-name</code>, and <code class="highlighter-rouge">--crate-file-name</code> to get the
value of the <code class="highlighter-rouge">crate_id</code> attribute, the crate’s name, and the output filenames
the compiler will produce.</li>
</ul>
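<p>To make the new naming scheme concrete, here’s a quick sketch (not from the compiler; <code class="highlighter-rouge">--crate-id</code> and friends are the authoritative way to get these values) of picking a <code class="highlighter-rouge">crate_id</code> apart with plain shell parameter expansion:</p>

```shell
# Split a crate_id of the form path#name:version into its parts.
# Illustrative only; rustc's --crate-id/--crate-name flags are the
# real source of truth for these values.
crate_id='github.com/foo/rust-bar#bar:1.0'

path=${crate_id%%#*}      # before '#'  -> github.com/foo/rust-bar
fragment=${crate_id#*#}   # after '#'   -> bar:1.0
name=${fragment%%:*}      # before ':'  -> bar
version=${fragment#*:}    # after ':'   -> 1.0

echo "$path $name $version"
# → github.com/foo/rust-bar bar 1.0
```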

<p>These changes made a good thing even better.</p>

<h1 id="magical-makefiles-version-2">Magical Makefiles Version 2</h1>

<p>The
<a href="https://github.com/metajack/rust-geom/blob/makefile-abstract-2/Makefile"><code class="highlighter-rouge">Makefile</code></a>
hasn’t changed much, but here is a much simpler
<a href="https://github.com/metajack/rust-geom/blob/makefile-abstract-2/rust.mk"><code class="highlighter-rouge">rust.mk</code></a>
that the new compiler flags enable:</p>

<div class="highlighter-rouge"><pre class="highlight"><code>define RUST_CRATE

_rust_crate_dir = $(dir $(1))
_rust_crate_lib = $$(_rust_crate_dir)lib.rs
_rust_crate_test = $$(_rust_crate_dir)test.rs

_rust_crate_name = $$(shell $(RUSTC) --crate-name $$(_rust_crate_lib))
_rust_crate_dylib = $$(shell $(RUSTC) --crate-file-name --lib $$(_rust_crate_lib))

.PHONY : $$(_rust_crate_name)
$$(_rust_crate_name) : $$(_rust_crate_dylib)

$$(_rust_crate_dylib) : $$(_rust_crate_lib)
    $$(RUSTC) $$(RUSTFLAGS) --dep-info --lib $$&lt;

-include $$(patsubst %.rs,%.d,$$(_rust_crate_lib))

ifneq ($$(wildcard $$(_rust_crate_test)),)

.PHONY : check-$$(_rust_crate_name)
check-$$(_rust_crate_name): $$(_rust_crate_name)-test
    ./$$(_rust_crate_name)-test

$$(_rust_crate_name)-test : $$(_rust_crate_test)
    $$(RUSTC) $$(RUSTFLAGS) --dep-info --test $$&lt; -o $$@

-include $$(patsubst %.rs,%.d,$$(_rust_crate_test))

endif

endef
</code></pre>
</div>

<p>No more nasty sed scripts necessary, but of course, the crate hash is still
computable if you want to do it yourself for some reason.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Building Rust Code - Using Make</title>
    <link rel='alternate' type='text/html' href='/2013/12/12/building-rust-code-using-make/' />
    <id>tag:metajack.im:/2013/12/12/building-rust-code-using-make/</id>
    <updated>2013-12-12T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>Another post about building Rust code, this time with Make.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>This series of posts is about building Rust code. In the
<a href="https://metajack.im/2013/12/11/building-rust-code--current-issues/">first post</a>
I covered the current issues (and my solutions) around building Rust using
external tooling. This post will cover using Make to build Rust projects.</p>

<h1 id="the-example-crate">The Example Crate</h1>

<p>For this post, I’m going to use the
<a href="https://github.com/mozilla-servo/rust-geom">rust-geom</a> library as an
example. It is a simple Rust library used by
<a href="https://github.com/mozilla/servo">Servo</a> to handle common geometric tasks
like dealing with points, rectangles, and matrices. It is pure Rust code, has
no dependencies, and includes some unit tests.</p>

<p>We want to build a dynamic library and the test suite, and the <code class="highlighter-rouge">Makefile</code>
should be able to run the test suite by using <code class="highlighter-rouge">make check</code>. As much as
possible, we’ll use the same crate structure that
<a href="https://github.com/mozilla/rust/blob/master/doc/rustpkg.md">rustpkg</a> uses so
that once rustpkg is ready for real use, the transition to it will be
painless.</p>

<h1 id="makefile-abstractions">Makefile Abstractions</h1>

<p>Did you know that <code class="highlighter-rouge">Makefile</code>s can define functions? It’s a little clumsy, but
it works and you can abstract a bunch of the tedium away. I’d never really
noticed them before dealing with the Rust and Servo build systems, which use
them heavily.</p>

<p>By using shell commands like <code class="highlighter-rouge">shasum</code> and <code class="highlighter-rouge">sed</code>, we can compute crate hashes,
and by using Make’s <code class="highlighter-rouge">eval</code> function, we can dynamically define new
targets. I’ve created a <code class="highlighter-rouge">rust.mk</code> which can be included in a <code class="highlighter-rouge">Makefile</code> that
makes it really easy to build Rust crates.</p>
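<p>To see the <code class="highlighter-rouge">define</code>/<code class="highlighter-rouge">call</code>/<code class="highlighter-rouge">eval</code> trick in isolation, here’s a minimal sketch (not from <code class="highlighter-rouge">rust.mk</code> itself; <code class="highlighter-rouge">CRATE_VARS</code> is a made-up macro) that generates a throwaway <code class="highlighter-rouge">Makefile</code> stamping out per-crate variables and runs it:</p>

```shell
# Demonstrate Make's define/call/eval pattern: a function-like macro is
# expanded with call and then parsed as new Makefile text by eval.
# Assumes GNU make is on PATH; CRATE_VARS is a hypothetical example macro.
tmp=$(mktemp -d)
cat > "$tmp/Makefile" <<'EOF'
define CRATE_VARS
$(1)_lib := $(1)/lib.rs
$(1)_test := $(1)/test.rs
endef

$(eval $(call CRATE_VARS,geom))
$(info geom_lib is $(geom_lib))

all: ;
EOF

out=$(cd "$tmp" && make -s)
echo "$out"
```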

<h1 id="magical-makefiles">Magical Makefiles</h1>

<p>Let’s look at a
<a href="https://github.com/metajack/rust-geom/blob/makefile-abstract/Makefile"><code class="highlighter-rouge">Makefile</code> for rust-geom</a>
which uses <code class="highlighter-rouge">rust.mk</code>.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>include rust.mk

RUSTC ?= rustc
RUSTFLAGS ?=

.PHONY : all
all: rust-geom

.PHONY : check
check: check-rust-geom

$(eval $(call RUST_CRATE, .))
</code></pre>
</div>

<p>It includes <code class="highlighter-rouge">rust.mk</code>, sets up some basic variables that control the compiler
and flags, and then defines the top level targets. The magic bit comes from the
call to <code class="highlighter-rouge">RUST_CRATE</code> which takes a path to where a crate’s <code class="highlighter-rouge">lib.rs</code> and
<code class="highlighter-rouge">test.rs</code> are located. In this case the path is the current directory, <code class="highlighter-rouge">.</code>.</p>

<p><code class="highlighter-rouge">RUST_CRATE</code> finds the <code class="highlighter-rouge">pkgid</code> attribute in the crate and uses this to compute
the crate’s name, hash, and the output filename for the library. It then
creates a target with the same name as the crate name, in this case
<code class="highlighter-rouge">rust-geom</code>, and a target for the output file for the library. It uses the
Rust compiler’s support for dependency information so that it will know
exactly when it needs to recompile things.</p>

<p>If the crate contains a <code class="highlighter-rouge">test.rs</code> file, it will also create a target that
compiles the tests for the crates into an executable as well as a target to
run the tests. The executable will be named after the crate; for rust-geom it
will be named <code class="highlighter-rouge">rust-geom-test</code>. The check target is also named after the
crate, <code class="highlighter-rouge">check-rust-geom</code>.</p>

<p>The files <code class="highlighter-rouge">lib.rs</code> and <code class="highlighter-rouge">test.rs</code> are the files rustpkg itself uses by
default. This <code class="highlighter-rouge">Makefile</code> does not support the <code class="highlighter-rouge">pkg.rs</code> custom build logic, but
if you need custom logic, it is easy enough to modify this example. One
benefit of following in rustpkg’s footsteps here is that this same crate
should be buildable with rustpkg without modification.</p>

<h1 id="behind-the-scenes">Behind the Scenes</h1>

<p><a href="https://github.com/metajack/rust-geom/blob/makefile-abstract/rust.mk"><code class="highlighter-rouge">rust.mk</code></a>
is a little ugly, but not too bad. It defines a few helper functions like
<code class="highlighter-rouge">RUST_CRATE_PKGID</code> and <code class="highlighter-rouge">RUST_CRATE_HASH</code> which are used by the main
<code class="highlighter-rouge">RUST_CRATE</code> function. The syntax is a bit silly because of the use of <code class="highlighter-rouge">eval</code>
and the need to escape <code class="highlighter-rouge">$</code>s, but it shouldn’t be too hard to follow if you’re
already familiar with Make syntax.</p>

<div class="highlighter-rouge"><pre class="highlight"><code>RUST_CRATE_PKGID = $(shell sed -ne 's/^\#\[ *pkgid *= *"\(.*\)" *];$$/\1/p' $(firstword $(1)))
RUST_CRATE_PATH = $(shell printf $(1) | sed -ne 's/^\([^\#]*\)\/.*$$/\1/p')
RUST_CRATE_NAME = $(shell printf $(1) | sed -ne 's/^\([^\#]*\/\)\{0,1\}\([^\#]*\).*$$/\2/p')
RUST_CRATE_VERSION = $(shell printf $(1) | sed -ne 's/^[^\#]*\#\(.*\)$$/\1/p')
RUST_CRATE_HASH = $(shell printf $(strip $(1)) | shasum -a 256 | sed -ne 's/^\(.\{8\}\).*$$/\1/p')

ifeq ($(shell uname),Darwin)
RUST_DYLIB_EXT=dylib
else
RUST_DYLIB_EXT=so
endif

define RUST_CRATE

_rust_crate_dir = $(dir $(1))
_rust_crate_lib = $$(_rust_crate_dir)lib.rs
_rust_crate_test = $$(_rust_crate_dir)test.rs

_rust_crate_pkgid = $$(call RUST_CRATE_PKGID, $$(_rust_crate_lib))
_rust_crate_name = $$(call RUST_CRATE_NAME, $$(_rust_crate_pkgid))
_rust_crate_version = $$(call RUST_CRATE_VERSION, $$(_rust_crate_pkgid))
_rust_crate_hash = $$(call RUST_CRATE_HASH, $$(_rust_crate_pkgid))
_rust_crate_dylib = lib$$(_rust_crate_name)-$$(_rust_crate_hash)-$$(_rust_crate_version).$(RUST_DYLIB_EXT)

.PHONY : $$(_rust_crate_name)
$$(_rust_crate_name) : $$(_rust_crate_dylib)

$$(_rust_crate_dylib) : $$(_rust_crate_lib)
    $$(RUSTC) $$(RUSTFLAGS) --dep-info --lib $$&lt;

-include $$(patsubst %.rs,%.d,$$(_rust_crate_lib))

ifneq ($$(wildcard $$(_rust_crate_test)),)

.PHONY : check-$$(_rust_crate_name)
check-$$(_rust_crate_name): $$(_rust_crate_name)-test
    ./$$(_rust_crate_name)-test

$$(_rust_crate_name)-test : $$(_rust_crate_test)
    $$(RUSTC) $$(RUSTFLAGS) --dep-info --test $$&lt; -o $$@

-include $$(patsubst %.rs,%.d,$$(_rust_crate_test))

endif

endef
</code></pre>
</div>

<p>If you wanted, you could add the crate’s target and the check target to the
<code class="highlighter-rouge">all</code> and <code class="highlighter-rouge">check</code> targets within this function, simplifying the main
<code class="highlighter-rouge">Makefile</code>. You could also have it generate an appropriate <code class="highlighter-rouge">clean-rust-geom</code>
target as well.</p>

<p>It’s not going to win a beauty contest, but it will get the job done nicely.</p>

<h1 id="next-up">Next Up</h1>

<p>In the next post, I plan to show the same example, but using
<a href="http://cmake.org/">CMake</a>.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Building Rust Code - Current Issues</title>
    <link rel='alternate' type='text/html' href='/2013/12/11/building-rust-code-current-issues/' />
    <id>tag:metajack.im:/2013/12/11/building-rust-code-current-issues/</id>
    <updated>2013-12-11T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>There are still a number of issues building Rust code with external tooling while we wait for rustpkg.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>As <a href="https://github.com/mozilla/rust/blob/master/doc/rustpkg.md">rustpkg</a> is
still in its infancy, most Rust code tends to be built with make, other tools,
or by hand. I’ve been working on updating Servo’s build system to something a
bit more reliable and fast, and so I’ve been giving a lot of thought to build
tooling with regards to Rust.</p>

<p>In this post, I want to cover what the current issues are with building Rust
code, especially with regards to external tooling. I’ll also describe some
recent work I did to address these issues. In the future, I want to cover
specific ways to integrate Rust with a few different build tools.</p>

<h1 id="current-issues">Current Issues</h1>

<p>Building Rust with existing build tools is a little difficult at the
moment. The main issues are related to Rust’s attempt to be a better systems
language than the existing options.</p>

<p>For example, Rust uses a larger compilation unit than C and C++ compilers, and
existing build tools are designed around single file compilation. Rust
libraries are output with unpredictable names. And dependency information must
be maintained by hand.</p>

<h2 id="compilation-unit">Compilation Unit</h2>

<p>Many programming languages compile one source file to one output file and then
collect the results into some final product. In C, you compile <code class="highlighter-rouge">.c</code> files to
<code class="highlighter-rouge">.o</code> files, then archive or link them into <code class="highlighter-rouge">.lib</code>, <code class="highlighter-rouge">.a</code>, <code class="highlighter-rouge">.dylib</code>, and so on
depending on the platform and whether you are building an executable, static
library, or shared library. Even Java compiles <code class="highlighter-rouge">.java</code> inputs to one or more
<code class="highlighter-rouge">.class</code> outputs, which are then normally packaged into a <code class="highlighter-rouge">.jar</code>.</p>

<p>In Rust, the unit of compilation is the crate, which is a collection of
modules and items. A crate may consist of a single source file or an arbitrary
number of them in some directory hierarchy, but its output is a single
executable or library.</p>

<p>Using crates as the compilation unit makes sense from a compiler point of
view, as it has more knowledge during compilation to work from. It also makes
sense from a versioning point of view, as all of the crate’s contents go
together. Using crates as the compilation unit allows for cyclic dependencies
between modules in the same crate, which is useful for expressing some things. It
also means that separate declaration and implementation pieces are not needed,
such as the header files in C and C++.</p>

<p>Most build tools assume a model similar to that of a typical C compiler. For
example, make has pattern rules that can map an input to an output based on
filename transformations. These work great if one input produces one
output, but they don’t work well in other cases.</p>
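<p>As a quick illustration of that one-input-one-output model (a made-up example, assuming GNU make with <code class="highlighter-rouge">.RECIPEPREFIX</code> support so the heredoc doesn’t need literal tabs):</p>

```shell
# A classic make pattern rule: each %.txt input maps to one %.up output.
# Hypothetical example files; .RECIPEPREFIX avoids tab characters in recipes.
tmp=$(mktemp -d)
cat > "$tmp/Makefile" <<'EOF'
.RECIPEPREFIX := >
%.up: %.txt
>@tr a-z A-Z < $< > $@
EOF

printf 'hello\n' > "$tmp/hello.txt"
(cd "$tmp" && make -s hello.up)
out=$(cat "$tmp/hello.up")
echo "$out"
# → HELLO
```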

<p>Rust still has a main input file, the one you pass to the compiler, so this
difference doesn’t have a lot of ramifications when using existing build
tools.</p>

<h2 id="output-names">Output Names</h2>

<p>Compilers generally have an option for what to name their output files, or
else they derive the output name with some simple formula. C compilers use the
<code class="highlighter-rouge">-o</code> option to name the output; Java just names the files after the classes
they contain. Rust also has a <code class="highlighter-rouge">-o</code> option, which works like you expect, except
in the case of libraries where it is ignored.</p>

<p>Libraries in Rust are special in order to avoid naming collisions. Since
libraries often end up stored centrally, only one library can have a given
name. If I create a library called libgeom it will conflict with someone
else’s libgeom. Operating systems and distributions end up resolving these
conflicts by changing the names slightly, but it’s a huge annoyance. To avoid
collisions, Rust includes a unique identifier called the crate hash in the
name. Now my Rust library libgeom-f32ab99 doesn’t conflict with
libgeom-00a9edc.</p>

<p>Unfortunately, the current Rust compiler computes the crate hash by hashing
the link metadata, such as name and version, along with the link metadata of
its dependencies. This results in a crate hash that only the Rust compiler is
realistically able to compute, making it seem pseudo-random. This causes a
huge problem for build tooling, as the output filename for libraries is
unknown.</p>

<p>To work around this problem when using make, the Rust and Servo build systems
use a dummy target called <code class="highlighter-rouge">libfoo.dummy</code> for a library called foo, and after
running <code class="highlighter-rouge">rustc</code> to build the library, it creates the <code class="highlighter-rouge">libfoo.dummy</code> file so
that make has some well known output to reason about. This workaround is a bit
messy and pollutes the build files.</p>

<p>Here’s an
<a href="https://github.com/metajack/rust-geom/blob/makefile-dummy/Makefile">example</a>
of what a <code class="highlighter-rouge">Makefile</code> looks like with this <code class="highlighter-rouge">.dummy</code> workaround:</p>

<div class="highlighter-rouge"><pre class="highlight"><code>RUSTC ?= rustc

SOURCES = $(shell find . -name '*.rs')

all: librust-geom.dummy

librust-geom.dummy: lib.rs $(SOURCES)
    @$(RUSTC) --lib $&lt;
    @touch $@

clean:
    @rm -f *.dummy *.so *.dylib *.dll
</code></pre>
</div>

<p>While this works, it also has some drawbacks. For example, if you edit a file
during a long compile, the <code class="highlighter-rouge">libfoo.dummy</code> will get updated after the compile
is finished, and rerunning the build won’t detect any changes. The timestamp
of the input file will be older than the final output file that the build tool
is checking. If the build system knew the real output file name, it could
compare the correct timestamps, but that information has been locked inside
the Rust compiler.</p>
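<p>The hazard is easy to reproduce with nothing but timestamps (a sketch; <code class="highlighter-rouge">libfoo</code> is a made-up crate name):</p>

```shell
# Reproduce the stale-build hazard of the .dummy workaround: an edit made
# while a long compile is running ends up older than the dummy file, so a
# timestamp-based tool concludes there is nothing left to rebuild.
tmp=$(mktemp -d)
touch "$tmp/lib.rs"          # the edit lands mid-compile...
sleep 1
touch "$tmp/libfoo.dummy"    # ...then the compile finishes and touches the dummy

if [ "$tmp/libfoo.dummy" -nt "$tmp/lib.rs" ]; then
    verdict="up to date (edit silently missed)"
else
    verdict="rebuild needed"
fi
echo "$verdict"
# → up to date (edit silently missed)
```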

<h2 id="dependency-information">Dependency Information</h2>

<p>Build systems need to be reliable. When you edit a file, it should trigger the
correct things to get rebuilt. If nothing changes, nothing should get
rebuilt. It’s extremely frustrating if you edit a file, rebuild the library,
and find that your code changes aren’t reflected in the new output for some
reason or that the library is not rebuilt at all. Reliable builds need
accurate dependency information in order to accomplish this.</p>

<p>There’s currently no way for external build tools to get dependency
information about Rust crates. This means that developers tend to list
dependencies by hand which is pretty fragile.</p>

<p>One quick way to approximate dependency info is just to recursively find every
<code class="highlighter-rouge">*.rs</code> in the crate’s source directory. This can be wrong for multiple reasons;
perhaps the <code class="highlighter-rouge">include!</code> or <code class="highlighter-rouge">include_str!</code> macros pull in files that
aren’t named <code class="highlighter-rouge">*.rs</code>, or conditional compilation may omit several files.</p>
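<p>Here’s a small sketch of that failure mode (the <code class="highlighter-rouge">lib.rs</code> contents are made up for illustration): a file pulled in with <code class="highlighter-rouge">include_str!</code> never shows up in the glob, so the approximated dependency list is wrong:</p>

```shell
# Show why "find every *.rs" under-approximates a crate's true inputs.
# Hypothetical crate: lib.rs embeds shader.glsl via include_str!, but the
# glob only sees files ending in .rs.
tmp=$(mktemp -d)
cat > "$tmp/lib.rs" <<'EOF'
static SHADER: &'static str = include_str!("shader.glsl");
EOF
printf 'void main() {}\n' > "$tmp/shader.glsl"

found=$(find "$tmp" -name '*.rs')
echo "$found"    # lists lib.rs only; shader.glsl is missed
```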

<p>This is similar to dealing with header dependencies by hand when working with
C and C++ code. C compilers have options to generate dependency info to deal
with this, which is used by tools like CMake.</p>

<p>The price of inaccurate or missing dependency info is an unreliable build and
a frustrated developer. If you find yourself reaching for <code class="highlighter-rouge">make clean</code>, you’re
probably suffering from this.</p>

<h1 id="making-it-better">Making It Better</h1>

<p>It’s possible to solve these problems without sacrificing the things we want
and falling back to doing exactly what C compilers do. By making the output
file knowable and handling dependencies automatically, we make build tool
integration easy and the resulting builds reliable. This is exactly what I’ve
been working on the last few weeks.</p>

<h2 id="stable-and-computable-hashes">Stable and Computable Hashes</h2>

<p>The first thing we need is to make the crate hash stable and easily computable
by external tools. Internally, the Rust compiler uses
<a href="https://131002.net/siphash/">SipHash</a> to compute the crate hash, and takes
into account arbitrary link metadata as well as the link metadata of its
dependencies. SipHash is not something easily computed from a <code class="highlighter-rouge">Makefile</code> and
the link metadata is not so easy to slurp and normalize from some dependency
graph.</p>

<p>I’ve just landed a <a href="https://github.com/mozilla/rust/pull/10593">pull request</a>
that replaces the link metadata with a package identifier, which is a crate
level attribute called <code class="highlighter-rouge">pkgid</code>. You declare it like
<code class="highlighter-rouge">#[pkgid="github.com/mozilla-servo/rust-geom#0.1"];</code> at the top of your
<code class="highlighter-rouge">lib.rs</code>. The first part, <code class="highlighter-rouge">github.com/mozilla-servo</code>, is a path, which serves
as both a namespace for your crate and a location hint as to where it can be
obtained (for use by rustpkg for example). Then comes the crate’s name,
<code class="highlighter-rouge">rust-geom</code>. Following that is the version identifier <code class="highlighter-rouge">0.1</code>. If no <code class="highlighter-rouge">pkgid</code>
attribute is provided, one is inferred with an empty path, a 0.0 version, and
a name based on the name of the input file.</p>

<p>To generate a crate hash, we take the SHA256 digest of the <code class="highlighter-rouge">pkgid</code>
attribute. SHA256 is readily available in most languages or on the command
line, and the <code class="highlighter-rouge">pkgid</code> attribute is very easy to find by running a regular
expression over the main input file. The first eight digits of this hash are
used for the filename, but the full hash is stored in the crate metadata and
used as part of the symbol hashes.</p>
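<p>Putting those two steps together in shell looks something like this (a sketch; I’m assuming <code class="highlighter-rouge">sha256sum</code> from coreutils here, where <code class="highlighter-rouge">rust.mk</code> uses the equivalent <code class="highlighter-rouge">shasum -a 256</code>):</p>

```shell
# Compute a crate hash as described: pull the pkgid attribute out of
# lib.rs with a regular expression, SHA-256 the string, and keep the
# first eight hex digits. The lib.rs here is a made-up example.
tmp=$(mktemp -d)
cat > "$tmp/lib.rs" <<'EOF'
#[pkgid="github.com/mozilla-servo/rust-geom#0.1"];
EOF

pkgid=$(sed -ne 's/^#\[ *pkgid *= *"\(.*\)" *];$/\1/p' "$tmp/lib.rs")
hash=$(printf '%s' "$pkgid" | sha256sum | cut -c1-8)
echo "$pkgid -> $hash"
```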

<p>Since the crate hash no longer depends on the crate’s dependencies, it is
stable so long as the <code class="highlighter-rouge">pkgid</code> attribute doesn’t change. This should happen
very infrequently, for instance when the library changes versions.</p>

<p>This makes the crate hash computable by pretty much any build tool you can
find, and means rustc generates predictable output filenames for libraries.</p>

<h2 id="dependency-management">Dependency Management</h2>

<p>I’ve also got a <a href="https://github.com/mozilla/rust/pull/10698">pull request</a>,
which should land soon, to enable rustc to output make-compatible dependency
information similar to the <code class="highlighter-rouge">-MMD</code> flag of gcc. To use it, you give rustc the
<code class="highlighter-rouge">--dep-info</code> option and for an input file of <code class="highlighter-rouge">lib.rs</code> it will create a <code class="highlighter-rouge">lib.d</code>
which can be used by make or other tools to learn the true dependencies.</p>

<p>The <code class="highlighter-rouge">lib.d</code> file will look something like this:</p>

<div class="highlighter-rouge"><pre class="highlight"><code>librust-geom-da91df73-0.0.dylib: lib.rs matrix.rs matrix2d.rs point.rs rect.rs side_offsets.rs size.rs
</code></pre>
</div>

<p>Note that this list of dependencies will include code pulled in via the
<code class="highlighter-rouge">include!</code> and <code class="highlighter-rouge">include_str!</code> macros as well.</p>

<p>Here’s an
<a href="https://github.com/metajack/rust-geom/blob/makefile-depinfo/Makefile">example</a>
of a handwritten <code class="highlighter-rouge">Makefile</code> using dependency info. Note that this uses a
hard-coded output file name, which works because the crate hash is stable unless
the <code class="highlighter-rouge">pkgid</code> attribute is changed:</p>

<div class="highlighter-rouge"><pre class="highlight"><code>RUSTC ?= rustc

all: librust-geom-851fed20-0.1.dylib

librust-geom-851fed20-0.1.dylib: lib.rs
    @$(RUSTC) --dep-info --lib $&lt;

-include lib.d
</code></pre>
</div>

<p>Now make will notice when you change any of the <code class="highlighter-rouge">.rs</code> files without your needing
to explicitly list them, and the dependency info will be updated automatically
as your code changes. A little <code class="highlighter-rouge">Makefile</code> abstraction on top of this can make it
quite nice and portable.</p>

<h1 id="next-up">Next Up</h1>

<p>In the next few posts, I’ll show examples of integrating the improved Rust
compiler with some existing build systems like make,
<a href="http://cmake.org/">CMake</a>, and <a href="http://gittup.org/tup/">tup</a>.</p>

<p>(Update: the next post covers <a href="https://metajack.im/2013/12/12/building-rust-code--using-make/">building Rust with Make</a>.)</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Seven Web Frameworks in Seven Weeks in Beta</title>
    <link rel='alternate' type='text/html' href='/2013/08/21/seven-web-frameworks-in-seven-weeks-in-beta/' />
    <id>tag:metajack.im:/2013/08/21/seven-web-frameworks-in-seven-weeks-in-beta/</id>
    <updated>2013-08-21T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>My new book is now in beta, with five chapters available.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>I’m happy to announce that my new book
<a href="http://pragprog.com/book/7web/seven-web-frameworks-in-seven-weeks">Seven Web Frameworks in Seven Weeks: Adventures in Better Web Apps</a>
is now available in beta from Pragmatic Programmers. My co-author Fred Daoud
and I cover a wide variety of frameworks in different styles and
languages. The book covers Sinatra (Ruby), CanJS (JavaScript), AngularJS
(JavaScript), Ring (Clojure), Webmachine (Erlang), Yesod (Haskell),
and Immutant (Clojure). This first beta contains the first five of those.</p>

<p><img src="http://imagery.pragprog.com/products/299/7web.jpg" width="250" height="300" /></p>

<p>I hope you enjoy it!</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Servo Update: Navigation, Scrolling, GPU Rendering, Underlines, and more</title>
    <link rel='alternate' type='text/html' href='/2013/05/26/servo-update-navigation-scrolling-gpu-rendering-underlines-and-more/' />
    <id>tag:metajack.im:/2013/05/26/servo-update-navigation-scrolling-gpu-rendering-underlines-and-more/</id>
    <updated>2013-05-26T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary>Servo has been making rapid progress thanks to two new interns.</summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>Servo is changing rapidly, and with two new interns joining the team the pace
will only accelerate. The last few weeks have seen some big changes starting
to land in the tree.</p>

<p>The Servo team welcomes and encourages new contributors and I’ll note
particular projects where new contributors can easily get involved
below. These aren’t the only places you can help, of course, but I thought it
might be useful to know a few good places to start.</p>

<h1 id="navigation-and-scrolling">Navigation and Scrolling</h1>

<p><a href="https://twitter.com/pcwalton">Patrick Walton</a> has landed the beginnings of
navigation and scrolling support. The
<a href="https://github.com/mozilla-servo/rust-alert">rust-alert</a> library provides
simple popup dialog support, and using this, you can now hit <code class="highlighter-rouge">Ctrl-L</code> to bring
up a dialog to enter a new
URL. <a href="https://github.com/mozilla-servo/rust-glut">rust-glut</a> also got keyboard
handler support. Note that this only works on Mac OS X right now due to
missing support for Linux in rust-alert.</p>

<p>Scrolling is another important UI feature, and you can now pan the content in
the window. Servo does not currently draw parts of the content that were
previously hidden as they are scrolled into view, but that should be simple to add.</p>

<p><strong>For new contributors:</strong> If you’re looking to get started hacking on Servo or
just want to learn more about Rust, adding popup dialogs on Linux to
rust-alert would be a good project. Adding drawing of previously hidden areas
to the scrolling code should also be an easy project for someone.</p>

<h1 id="underlined-text">Underlined Text</h1>

<p>Eric Atkinson, one of Servo’s new interns, has just landed his first pull
request, adding the first bits of CSS’s <code class="highlighter-rouge">text-decoration</code> support for
<code class="highlighter-rouge">underline</code>.</p>

<p><strong>For new contributors:</strong> Eric didn’t know any Rust or anything about Servo
internals before he started last week. It doesn’t take much to get started,
and there is lots of low hanging fruit to pick on the Servo tree. For example,
based on Eric’s <code class="highlighter-rouge">underline</code> work, it should be fairly easy to add
<code class="highlighter-rouge">strike-through</code>.</p>

<h1 id="performance-metrics">Performance metrics</h1>

<p>Tim Kuehn, another of Servo’s new interns, has also been busy in his first
week. He started overhauling how performance data is collected in
Servo. Instead of simply timing bits of code and outputting the results to the
console, there is now a separate task that handles performance data.</p>
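<p>The pattern described here, workers shipping timing samples over a channel to a
dedicated profiling task rather than printing them inline, can be sketched roughly
as follows. This is an illustrative Python sketch of the idea, not Servo’s actual
Rust code; the category names are made up.</p>

```python
# Sketch: a separate "task" (thread) that aggregates timing samples
# sent over a channel, instead of workers printing to the console.
import queue
import threading
import time
from collections import defaultdict

samples = queue.Queue()       # channel from workers to the profiler task
totals = defaultdict(float)   # aggregated seconds per category

def profiler_task():
    # Dedicated task: drain timing samples and aggregate them.
    while True:
        item = samples.get()
        if item is None:      # shutdown sentinel
            break
        category, seconds = item
        totals[category] += seconds

profiler = threading.Thread(target=profiler_task)
profiler.start()

def timed(category, fn, *args):
    # Time a piece of work and ship the sample to the profiler.
    start = time.perf_counter()
    result = fn(*args)
    samples.put((category, time.perf_counter() - start))
    return result

timed("layout", sum, range(1000))
timed("render", sum, range(1000))
samples.put(None)             # tell the profiler to finish
profiler.join()
print(dict(totals))
```

Because the samples flow through one consumer, it is easy to later swap the
aggregation for systematic output or cross-browser comparison, as suggested below.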

<p><strong>For new contributors:</strong> We’re not doing anything with this data yet, but
we should be. It should be a pretty easy project to start outputting it more
systematically and doing something with the results. Another idea would be just
to report numbers for different platforms and compare them to similar
numbers from other browsers so we know where we should improve.</p>

<h1 id="gpu-rendering">GPU Rendering</h1>

<p>The first parts of GPU accelerated rendering have started to land in Servo,
specifically updates to <a href="https://github.com/mozilla-servo/skia">Skia</a> and
<a href="https://github.com/mozilla-servo/rust-azure">Azure</a> to support
framebuffer-backed draw targets. These framebuffers render to textures which
are shared with the GPU-based compositor. This avoids needing to render to CPU
memory and then upload textures to the GPU. There is still a bug or two to
work out with tiling support, but I expect GPU rendering to land in the tree
pretty soon.</p>

<h1 id="miscellaneous">Miscellaneous</h1>

<p>Servo now has continuous integration via Bors, the wonderful CI bot that the
Rust team has already been using for some time. Not only that, but Servo’s
Bors is now running on Mozilla’s release engineering infrastructure instead of
being hosted by the Rust team. This should keep the tree building cleanly from
now on. If you’ve previously had trouble compiling Servo, now would be a good
time to try again.</p>

<p>Patrick Walton has been heavily refactoring Servo’s directory layout and many
of its subsystems. The <code class="highlighter-rouge">util</code> and <code class="highlighter-rouge">net</code> libraries were split out from the <code class="highlighter-rouge">gfx</code>
library, and compositing was made quite a bit simpler. He has also refactored
layout and is working on splitting Servo into more libraries, which makes it
both easier to understand and faster to build. Much documentation has been added
in these refactorings.</p>

<p>Samsung continues to work on Android support, improving the Rust compiler
along the way. That work should land in the tree in the near future.</p>

<p>Give all these new things a try and report any issues you find. The team hangs
out in <code class="highlighter-rouge">#servo</code> on irc.mozilla.org and is happy to answer questions or help
you get started hacking on Servo.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Servo Update: Upgrading Rust, GPU Rendering, and Automation</title>
    <link rel='alternate' type='text/html' href='/2013/04/12/servo-update-upgrading-rust-gpu-rendering-and-automation/' />
    <id>tag:metajack.im:/2013/04/12/servo-update-upgrading-rust-gpu-rendering-and-automation/</id>
    <updated>2013-04-12T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>I’ve been working on Servo for three weeks now. There’s an enormous amount of
work to do, and I want to capture what’s going on and how it’s
progressing. This should be the first of many such updates on the project.</p>

<h1 id="day-one">Day One</h1>

<p>When I arrived, Servo no longer built at all, at least not on OS X. Servo often
requires bleeding-edge versions of Rust, and backwards-incompatible changes to
Rust still happen on a regular basis. Since the contributors to Rust work on
different platforms, some platforms can get left behind when porting to a new
Rust compiler. This was particularly acute this time because Rust 0.6 contained
a lot of syntax changes, mostly things removed from the language, and many
pieces of Servo were using syntax that was deprecated in Rust 0.5 and finally
deleted entirely in 0.6.</p>

<h1 id="upgrading-to-rust-06">Upgrading to Rust 0.6</h1>

<p>Rust 0.6 removed a lot of keywords and syntax from the language. Porting Servo
required modifying all the constants, many function declarations, many import
statements, etc. These changes were largely mechanical. There were a few
changes that weren’t so easy.</p>

<p>Mutable fields are being removed from the language, and mutability will be
controlled by the mutability of the struct itself. Not all of these had to be
removed in Servo, but many of them did, and removing them often required
slightly changing the data structures and their type signatures. In some cases
this was trivial, but in a few cases these changes needed more care. In
particular, lots of these changes bumped up against the Rust borrow checker,
which ensures it’s safe to hand out pointers to memory. There are still some
bugs in the borrow checker, and workarounds are not always straightforward.</p>

<p>It took me about a week and a half to work my way through all the dependent
libraries and Servo itself, at which point I had a build. By the end of that
second week I had landed the language upgrade to Servo as well as some Rust
library changes that were needed. The end result is that Servo is now using
Rust 0.6 syntax, but it requires a post-0.6 version of Rust because those Rust
changes did not land quite in time for the 0.6 release.</p>

<h1 id="gpu-rendering">GPU Rendering</h1>

<p>Servo uses many forms of parallelism, but one bit of low hanging fruit is to
move to a fully GPU rendering path. Currently compositing is done on the GPU,
but rendering to the various layers is done on the CPU. This is how most
current browsers operate as well.</p>

<p>We’re moving to rendering on the GPU as well which should speed up some things
a bit. Instead of rendering in parallel to several layers, Servo will render
directly into textures on the GPU which the compositor can use without doing
CPU to GPU memory transfers.</p>

<p>This required upgrading the rendering stack to a newer version of Azure
(Mozilla’s drawing library) and a new version of Skia (the specific backend
that Azure uses on OS X, Linux, and Android). Now that this part is done,
we’ll be adding texture layers to the renderer and switching drawing to those.</p>

<h1 id="automation">Automation</h1>

<p>We’re setting up build and testing automation for Servo now, which should help
ensure Servo remains buildable on all platforms. Rust has an amazing set of
tools for this already, which we are hoping to reuse fully. Buildbot machines
run builds and tests, and a GitHub bot called Bors handles dispatching builds
for patches that have been reviewed and merging pull requests that have passed
tests.</p>

<p>For now this work will be on Linux, but we hope to expand it to cover OS
X and Android as well in the near future. Once Servo is a little farther
along, we plan to put up nightly snapshots so more people can follow along
with our progress.</p>

<h1 id="other-work">Other Work</h1>

<p>There’s tons of other work in progress on both Servo and Rust. The DOM
bindings are getting improved, a new Rust scheduler that will make performance
and I/O better is in progress, a more optimized C FFI in Rust should also
land soon, and the rustpkg package manager is shaping up which we’ll be
switching to for more and more of Servo as it matures.</p>

<p>We need more help in lots of areas. Please join us in IRC in
<a href="http://chat.mibbit.com/?server=irc.mozilla.org&amp;channel=%23servo">#servo</a> or
on the <a href="https://lists.mozilla.org/listinfo/dev-servo">mailing list</a>. We’ll be
trying to mark bugs and projects that are well suited for new contributors. If
you want to work on Servo and write Rust code all the time,
<a href="http://careers.mozilla.org/en-US/position/obMdXfwR">we’re hiring</a>.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Joining Mozilla</title>
    <link rel='alternate' type='text/html' href='/2013/03/22/joining-mozilla/' />
    <id>tag:metajack.im:/2013/03/22/joining-mozilla/</id>
    <updated>2013-03-22T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>On Monday, I join Mozilla to work on
<a href="https://github.com/mozilla/servo">Servo</a>, a new and experimental web browser
engine built on <a href="http://www.rust-lang.org/">Rust</a>, a new systems programming
language. I am perhaps the first professional Rust programmer.</p>

<p>This will also be the first time in over a decade that I’m not working in a
small company or a startup (usually both). I’ve been thinking for a while that
it would be nice to work for a company that has real resources to solve
problems, as opposed to being at the mercy of venture capitalists or the whims
of users. Mozilla’s mission statement is one that is easy for me to get
behind, and they are doing very interesting things at
<a href="http://www.mozilla.org/en-US/research/">Mozilla Research</a>.</p>

<p>I enjoy working on difficult and important projects, and it’s hard for me to
imagine much that is more difficult or important than web browsers. It’s an
added bonus to be working in and (hopefully) contributing to a new programming
language. I also love working with smart people, and Mozilla seems to have
those in abundance.</p>

<p>This is going to be awesome.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Digital Audio and Sampling Explained</title>
    <link rel='alternate' type='text/html' href='/2013/02/26/digital-audio-and-sampling-explained/' />
    <id>tag:metajack.im:/2013/02/26/digital-audio-and-sampling-explained/</id>
    <updated>2013-02-26T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p><a href="http://xiph.org">Xiph.org</a> has just posted the second in its
<a href="http://video.xiph.org">series of videos on digital media concepts and techniques</a>. It’s
packed with information and demonstrations, and you’re sure to learn a huge
amount. As an added bonus, it’s hosted by Monty, the creator of Ogg Vorbis
(and many other amazing things). You couldn’t ask for a more qualified
teacher.</p>

<p>Watch below, or <a href="http://video.xiph.org/vid2.shtml">on Xiph.org</a>.</p>

<video controls="" width="640" height="360">
  <source src="http://downloads.xiph.org/video/Digital_Show_and_Tell-360p.ogv" type="video/ogg" /> 
  <source src="http://downloads.xiph.org/video/Digital_Show_and_Tell-360p.webm" type="video/webm" />
</video>

<p>There is also a <a href="https://wiki.xiph.org/Digital_Show_and_Tell/Episode_02">detailed write up</a>.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>XEP Lookups on DuckDuckGo</title>
    <link rel='alternate' type='text/html' href='/2012/07/09/xep-lookups-on-duckduckgo/' />
    <id>tag:metajack.im:/2012/07/09/xep-lookups-on-duckduckgo/</id>
    <updated>2012-07-09T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>The search engine <a href="https://duckduckgo.com/">DuckDuckGo</a> recently added
the ability for developers to build instant answer plugins for its
results. This project is called <a href="http://duckduckhack.com/">DuckDuckHack</a>.</p>

<p>Instant answers on DuckDuckGo are really nice, in that they highlight
a specific result in a context-sensitive way. For example, Stack
Overflow questions that match will show up with the highest-rated
answer at the top of the page, and Wikipedia articles will be
presented as a title and abstract.</p>

<p>I decided to play with DuckDuckHack and added a plugin for XMPP
Extension Proposal (XEP) lookups. I do XEP lookups often when I’m
answering people’s XMPP-related questions, and this plugin makes the
XEP’s title and abstract appear as an instant answer. The plugin was
recently merged into the tree, and it is now live on DuckDuckGo itself.</p>

<p>Try it by
<a href="https://duckduckgo.com/?q=xep+45">searching for XEP 45</a>. View the
<a href="https://github.com/duckduckgo/zeroclickinfo-fathead/tree/master/xep">code on GitHub</a>.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>The Numbers Behind the Twitter Data Silo</title>
    <link rel='alternate' type='text/html' href='/2012/01/30/the-numbers-behind-the-twitter-data-silo/' />
    <id>tag:metajack.im:/2012/01/30/the-numbers-behind-the-twitter-data-silo/</id>
    <updated>2012-01-30T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>The
<a href="https://metajack.im/2012/01/12/the-potentially-dark-future-of-search/">dark future of search</a>
is being foreshadowed by this Twitter vs. Google fight. The latest
Twitter volley at Google is this quote (seen on
<a href="http://gigaom.com/2012/01/30/costolo-twitter-google/">GigaOm</a>) from
Twitter CEO Dick Costolo:</p>

<blockquote>
  <p>“Google crawls us at a rate of 1300 hits per second… They’ve indexed
3 billion of our pages,” Costolo said. “They have all the data they
need.”</p>
</blockquote>

<p>There’s no doubt that 1,300 hits per second is a large number, but
let’s put that in perspective:</p>

<ul>
  <li>In
<a href="http://mashable.com/2010/02/22/twitter-50-million-tweets/">February 2010</a>,
Twitter was at 50 million tweets per day. This is just under 600
tweets per second.</li>
  <li>In <a href="http://blog.twitter.com/2011/06/200-million-tweets-per-day.html">June 2011</a>, Twitter was at 200 million tweets per day. This is
over 2,300 per second.</li>
  <li>In <a href="http://techcrunch.com/2011/10/17/twitter-is-at-250-million-tweets-per-day/">October 2011</a>, Twitter hit 250 million tweets per day or just
under 3,000 per second.</li>
  <li>They have <a href="http://blog.twitter.com/2011/12/yearinreview-tweets-per-second.html">spikes</a> of over 7,000 tweets per second, with the
<a href="https://twitter.com/#!/twittercomms/status/146751974904311808">largest</a> (so far) being just over 25,000 tweets per second.</li>
</ul>
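<p>The per-second figures above follow from dividing each reported daily volume by
the 86,400 seconds in a day:</p>

```python
# Convert Twitter's reported tweets-per-day milestones to tweets per second.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

milestones = {
    "February 2010": 50_000_000,   # 50 million tweets/day
    "June 2011": 200_000_000,      # 200 million tweets/day
    "October 2011": 250_000_000,   # 250 million tweets/day
}

for date, per_day in milestones.items():
    print(f"{date}: {per_day / SECONDS_PER_DAY:,.0f} tweets/sec")
# February 2010: 579 tweets/sec (just under 600)
# June 2011: 2,315 tweets/sec (over 2,300)
# October 2011: 2,894 tweets/sec (just under 3,000)
```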

<p>For part of 2010, Google was perhaps able to keep up with the stream
at 1,300 requests per second. Somewhere between February and June, the
average volume of tweets outpaced them.</p>

<p>Let’s assume that they kept pace until June 2011, and that on June 1,
Twitter jumped from somewhere in the range of 1,300 tweets per second to
their reported 2,300 tweets per second. Google then falls behind by 1,000
tweets every second.</p>

<p>By the end of the year, Google missed 15.5 billion tweets. That puts them
two months behind, assuming they didn’t skip any and the tweet volume did
not increase. But it did increase, by 25% or so by October, and surely
it has grown more since then.</p>

<p>If Google has only indexed 3 billion pages so far, they have
approximately 12 days of tweets at current volume. It’s pretty hard to
rationalize the 3 billion pages number against the 1,300 per second
number. Was Google indexing at a much slower rate before? Did they not
start until a few months ago?</p>
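<p>That 12-day estimate is a straight division of Google’s claimed index size by
the October 2011 daily volume:</p>

```python
# Google's reported index size versus Twitter's late-2011 volume.
indexed_pages = 3_000_000_000  # "They've indexed 3 billion of our pages"
tweets_per_day = 250_000_000   # October 2011 rate

days_of_tweets = indexed_pages / tweets_per_day
print(days_of_tweets)  # 12.0 -- roughly 12 days of tweets at current volume
```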

<p>Of course Google may be getting multiple tweets per request, perhaps
by crawling the timelines of important users. But this means that they
probably get a lot of requests that don’t give them any new tweets, or
else the timeliness of the data is poor.</p>

<p>No matter how you slice it, it appears Google would be unable to keep
up. Even if they were keeping up now, Twitter’s growth probably sets a
time limit on how long keeping up remains possible.</p>

<p>Perhaps Google is super clever, and can index only the right
tweets. I think that it’s more probable they have “enough” data to
surface results for the super popular topics, and miss nearly
everything in the long tail of the distribution. I expect that this
adversely affects search quality, which one suspects is a high
priority for the world’s best search engine.</p>

<p>Google is no saint. They are guilty of the same data
hoarding. If you ran these numbers for YouTube indexing, I think you
would find the situation is much worse. I imagine that most of these
data silo companies purposefully set their crawl rates too low for
anyone to achieve high-quality search results.</p>

<p>In the case of Twitter, the end result for users is even worse because
Twitter’s own attempts at search are terrible and are getting worse
over time. At least Google makes a decent YouTube search, even if no
one else can.</p>

<p>Even if Google could get all the tweets, they still would have very
little to no Facebook data. I still think the best strategy in this
situation for them is to create their own social data and use that
instead. It’s a tough road, but they seem to have little choice.</p>

<p>In the end, it’s not about Google or Twitter or Facebook, but the
stifling of innovation and competition around data. We can only hope
that some federated solution or some data-liberal company wins out in
the end.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>The More Things Change: A Review of The Soul of a New Machine</title>
    <link rel='alternate' type='text/html' href='/2012/01/20/the-more-things-change/' />
    <id>tag:metajack.im:/2012/01/20/the-more-things-change/</id>
    <updated>2012-01-20T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>Already in my career I’ve experienced enormous passion, burnout,
extraordinary dedication to my team and projects, and depression. I’m
sure many others have as well. Has it always been this way with
technology? I often wonder if this rollercoaster is necessary,
healthy, or normal.</p>

<p>I recently saw a recommendation for <a href="http://www.amazon.com/Soul-New-Machine-Tracy-Kidder/dp/0316491977/?tag=metajack-20">The Soul of a New Machine</a>,
which tells the story of a team of engineers at <a href="http://en.wikipedia.org/wiki/Data_General">Data General</a>
who built a new 32-bit computer in the late 1970s. The book is
fascinating. Thirty years later, many of its descriptions of the
project and the way the team worked and was treated could apply to any
modern project.</p>

<p>The plot summary will no doubt sound familiar to you: A team of mostly
young, mostly male engineers works grueling hours to build something
amazing in too short an amount of time. They succeed, albeit a bit
over their original schedule. Despite the project’s commercial
success, the team is denied both recognition and financial rewards and
many end up leaving the company. Almost all of them ultimately enjoyed
it and would (and did) do it again.</p>

<p>There were many pieces of this story that resonated with me.</p>


<h2 id="work-is-a-drug">Work is a Drug</h2>

<p>On overworking, <a href="http://en.wikipedia.org/wiki/Tom_West">Tom West</a>, the manager of the team in the book,
says:</p>

<blockquote>
  <p>That’s the bear trap, the greatest vice. Your job. You can justify
just about any behavior with it. Maybe that’s why you do it, so you
don’t have to deal with all those other problems.</p>
</blockquote>

<p>Why deal with the unpredictable world, when the controllable world of
creation is available? It’s code as escapist drug, and I love to get
high on it. Mundane things like cleaning my house, and more
serious ones like taking care of my health, are all easy to avoid
while fixing bugs or starting a new project.</p>

<p>It’s both possible and important to find a balance.</p>

<p>The team’s secretary, who was much more than her title suggests,
suffered and succeeded with the rest of the team. Even she says:</p>

<blockquote>
  <p>I would do it again. I would be very grateful to do it again. I
think I would take a cut in pay to do it again.</p>
</blockquote>

<p>Even as I recover from projects that burned me out, I am constantly
thinking about how to do new ones. In fact, while I’m doing any
project, I’m already thinking about doing another. This sounds like
drugs again. But they are good drugs.</p>

<h2 id="harassment-and-treatment-of-women">Harassment and Treatment of Women</h2>

<p>The book describes how some team members tormented the lone female
engineer. This is something that still happens today, and it’s
terrible. And people then wonder why there are so few women in our
industry.</p>

<p>In addition to that, at the end, when they handed out the peer awards,
their award to the woman was for putting up with them, not for any of
her actual accomplishments.</p>

<p><a href="http://societyofwomenengineers.swe.org/index.php?option=com_content&amp;task=view&amp;id=88&amp;Itemid=78">Betty Shanahan</a>
was that lone woman, and it seems to me that she deserved more than
just an award for having thick skin. She’s the CEO of the Society of Women
Engineers, and she was “a member of the design team for the first
parallel processing minicomputer and manager of hardware design for
subsequent systems.” She later moved to the business side of
technology, and I wonder if that had anything to do with her having to
put up with the Eagle team’s harassment.</p>

<h2 id="how-something-is-done-is-important-too">How Something is Done is Important Too</h2>

<p>Often we judge things by their properties, but one can also rightly
judge something by how it is made. Shoes made with child labor are
worse than those made in other ways.</p>

<p>Kidder, the book’s author, discusses this:</p>

<blockquote>
  <p>In <em>The Nature of the Gothic</em> John Ruskin decries the tendency of
the industrial age to fragment work into tasks so trivial that they
are fit to be performed only by the equivalent of slave
labor. Writing in the nineteenth century, Ruskin was one of the
first, with Marx, to have raised this now-familiar complaint. In the
Gothic cathedrals of Europe, Ruskin believed, you can see the
glorious fruits of free labor given freely. What is usually meant by
the term craftsmanship is the production of things of high quality;
Ruskin makes the crucial point that a thing may also be judged
according to the conditions under which it was built.</p>
</blockquote>

<p>By this kind of measure, is the work many teams do good? Is the Eagle
computer that Tom West’s team built really a success, given that the team
worked heavy overtime, suffered divorces and other problems, and in the
end received little to no reward?</p>

<p>I think it’s time for entrepreneurs and workers in our industry to
demand better. Our outputs will be better if they are made
sustainably, and not just by the measure above. In retrospect, maybe
the reviewers of <a href="https://en.wikipedia.org/wiki/L.A._Noire">LA Noire</a> should have taken into account
the <a href="https://en.wikipedia.org/wiki/L.A._Noire#Staff_complaints">trials</a> of its developers; it certainly would not have
fared well.</p>

<h2 id="freedom-of-expression">Freedom of Expression</h2>

<p>I want to hire <a href="http://paulgraham.com/word.html">resourceful</a> people. I want to describe a
general outline of a design and not have to describe it in intricate
detail in order for them to build it.</p>

<p>It turns out that this is critical for happiness. If we’re told
exactly how to do something, it takes much of the creativity and fun
out of the work.</p>

<blockquote>
  <p>Engineers are supposed to stand among the privileged members of
industrial enterprises, but several studies suggest that a fairly
large percentage of engineers in America are not content with their
jobs. Among the reasons cited are the nature of the jobs themselves
and the restrictive ways in which they are managed. Among the terms
used to describe their malaise are <em>declining technical challenge;
misutilization; limited freedom of action; tight control of working
conditions</em>.</p>
</blockquote>

<p>You must trust those you work with to be resourceful. If you don’t
trust them, you will end up micromanaging them into unhappiness, and
you will also remove their valuable creative input from your product.</p>

<p>There is a balance to be struck with feedback. The Eagle engineers
thought that the managers didn’t appreciate their efforts, but in
reality, some of this was them trying to stay out of the way. Kidder
asked Tom West’s boss:

<blockquote>
  <p>Had the Eagle project always interested him or had it grown in
importance gradually?</p>

  <p>“From the start it was a very important project.”</p>

  <p>Was he pleased with the work of the Eclipse group?</p>

  <p>“Absolutely!” His voice falls. “They did a hell of a job.”</p>

  <p>But some members of the team felt that they had been rather
neglected by the company.</p>

  <p>“That doesn’t surprise me,” he says. “That’s frequently the
case. There’s often a conflict in people’s minds. How much direction
do they want?”</p>
</blockquote>

<p>I’ve had this same issue with investors as well. You don’t want them
to meddle with your company or your product, but you also want their
advice and guidance. It’s possible to go too far in either direction,
but mostly you hear about stories where investors meddle too much. I
personally think it’s probably better to err on the side of too little
help than to end up with too much meddling.</p>

<h2 id="the-venture-capitalists">The Venture Capitalists</h2>

<p>Even thirty years ago, the VCs had a bad rap. Tom West was asked in a
<a href="http://www.wired.com/wired/archive/8.12/soul.html">Wired article</a> years after the book’s publication why he stayed
at Data General until he retired:</p>

<blockquote>
  <p>“You could do new products and companies within the company, rather
than shag some venture capitalist and kill yourself for five years.”
To be an entrepreneur, he says, “you have to be interested in
networking, even with fools.”</p>
</blockquote>

<p>This is another reason why I would prefer to bootstrap companies if at
all possible.</p>

<p>Tom West ended up working on many interesting projects at Data
General, but ultimately, none of them got the support or recognition
they deserved. The other members of the Eagle team spread out and
started or worked for new companies, and in general seemed much
happier.</p>

<h2 id="final-thoughts">Final Thoughts</h2>

<p>In the end, it’s both a fascinating tale of heroism and creativity and
a saddening tale of undervalued and underpaid engineers. I am both
emboldened to keep following my passions and more mindful of their
dangers. My troubles are not unique, nor even modern. Thirty years
after this book was written, I feel like it could have been written
yesterday.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>The Potentially Dark Future of Search</title>
    <link rel='alternate' type='text/html' href='/2012/01/12/the-potentially-dark-future-of-search/' />
    <id>tag:metajack.im:/2012/01/12/the-potentially-dark-future-of-search/</id>
    <updated>2012-01-12T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>Twitter sees Google’s latest Google+ feature, integration into Google
search, as anti-competitive, and it probably is. However, it brings to
the surface some real issues with the future of search and of data.</p>

<p>Twitter’s argument:</p>

<blockquote>
  <p>We’re concerned that as a result of Google’s changes, finding this
information will be much harder for everyone. We think that’s bad
for people, publishers, news organizations and Twitter users.</p>
</blockquote>

<p><a href="https://plus.google.com/u/0/116899029375914044550/posts/24uqWqvALud">Google’s response</a>
was:</p>

<blockquote>
  <p>We are a bit surprised by Twitter’s comments about Search plus Your
World, because they chose not to renew their agreement with us last
summer (<a href="http://goo.gl/chKwi">http://goo.gl/chKwi</a>), and since then
we have observed their rel=nofollow instructions.</p>
</blockquote>

<p>People have been digging into the semantics of nofollow (see
<a href="http://marketingland.com/schmidt-google-not-favored-happy-to-talk-twitter-facebook-integration-3151">Danny Sullivan</a> and
<a href="http://luigimontanez.com/2012/how-rel-nofollow-works/">Luigi Montanez</a>),
but there is a much bigger issue.</p>

<p>Google and other established and up-and-coming search engines have no
real way to include lots of data in their index. It’s easy to imagine
that the lack of access to Twitter and Facebook data was a motivator
for Google+ in the first place.</p>

<p>Lots of sites now generate enough data that it is unrealistic to crawl
them. For example, YouTube has more new content every day than they
allow anyone to crawl. Twitter is essentially the same. This means
there is no way to index this data without special arrangements with
the provider. Twitter has closely guarded their firehose of data, but
at least they have some mechanism to obtain it. YouTube, as far as I
am aware, has no such mechanism.</p>

<p>My team and I ran into this problem head on trying to build Collecta,
a real-time search engine. Access to the data was a primary blocker
for many features and product ideas, and over the too short life of
that company, access became significantly more difficult, not easier.</p>

<p>Google can build an effective search, even a real-time one, for
YouTube, but no one else can. Twitter can build search for their data,
but few others can, and their data access policies can and do change
on a whim.</p>

<p>If Google believes that microblogging data will improve their search
product, then a reasonable strategy to obtain that data is to try and
build their own microblogging service to generate it. I can’t fault
Google for trying. If I thought Collecta could have effectively
competed against Twitter for their audience, I would certainly have
attempted that as well.</p>

<p>Google, Twitter, Facebook and others are hoarding silos of otherwise
public data. Not only is this artificially limiting the features of
their products, but it squashes the potential for new and exciting
search applications. The search services that have sprung up either
limit themselves to your own data, aggregate results from
service-specific search APIs, exist at the mercy of data providers, or
make do with a tiny subset of the data. I don’t think Google could
have built their own search engine if the Web were similarly hostile.</p>

<p>One could argue for requiring these bits of data to be openly
available, but unlike the data of the past, this data is expensive to
publish and consume. Most of these services may not even have a
mechanism to publish the data, even internally. Simply receiving the
YouTube or Twitter firehoses (and not counting video or image media)
would require significant engineering effort, and the rate of data
generation is only accelerating.</p>

<p>I think we must push for open access to data, even if it is
costly. These data wars benefit very few. If things don’t change, the
future of search is dark.</p>
]]>
    </content>
  </entry>
  
  <entry>
    <title>Strophe.js 1.0.2 Released</title>
    <link rel='alternate' type='text/html' href='/2011/06/19/strophejs-102-released/' />
    <id>tag:metajack.im:/2011/06/19/strophejs-102-released/</id>
    <updated>2011-06-19T00:00:00Z</updated>

    <author>
      <name>Jack Moffitt</name>
      <uri>https://metajack.im</uri>
      <email>jack@metajack.im</email>
    </author>

    <summary></summary>
    <content type='html' xml:lang='en' xml:base='https://metajack.im/'>
      <![CDATA[<p>I’ve just tagged and released Strophe.js 1.0.2. You can find it on the
<a href="http://strophe.im/strophejs">new Strophe.js site</a>.</p>

<p>Please consider upgrading as soon as possible, as a security problem
was found in Strophe.js 1.0.1. The DIGEST-MD5 SASL method used a
constant client nonce due to a bug in Strophe’s use of the underlying
MD5 library. I don’t know of any exploits for this bug, but it could
compromise your site’s security.</p>
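For illustration, here is a minimal sketch of what generating a fresh client nonce (cnonce) per authentication attempt looks like, which is what DIGEST-MD5 expects and what the constant-cnonce bug broke. This is not Strophe.js’s actual code, just an assumed example using plain browser-era JavaScript:

```javascript
// Hypothetical sketch: DIGEST-MD5 requires a fresh, unpredictable
// cnonce for each authentication attempt. The 1.0.1 bug effectively
// reused the same value every time. Math.random() mirrors the
// browser JS of the era; modern code would prefer
// crypto.getRandomValues for cryptographic-quality randomness.
function generateCnonce() {
  var bytes = [];
  // Collect 16 random byte values.
  for (var i = 0; i < 16; i++) {
    bytes.push(Math.floor(Math.random() * 256));
  }
  // Hex-encode them into a 32-character string.
  return bytes.map(function (b) {
    return (b < 16 ? "0" : "") + b.toString(16);
  }).join("");
}
```

A reused cnonce lets an observer correlate or replay parts of the challenge-response exchange, which is why upgrading matters even without a known exploit.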

<p>Much of the credit for this release goes to the many people who have
sent contributions and pull requests in the last year. The community’s
effort continues to make Strophe.js better and better.</p>
]]>
    </content>
  </entry>
  
</feed>
