Thursday, April 24, 2014

Bitcoin by analogy

Hackers are the animals that can detect a storm coming or an earthquake. They just know, even though they don’t know why, and there are two big things hackers are excited about now and can’t articulate why – Bitcoin and 3D printing.

Paul Graham, Wall Street Journal
Bitcoin is the first purely digital, decentralized money. It has been on my mind a lot lately and apparently, I'm not the only one. Paul Graham called it a paradigm shift; Marc Andreessen believes Bitcoin is as big of a technological breakthrough as PCs and the Internet; Ben Bernanke said virtual currencies may hold long-term promise; Chris Dixon is investing millions in it; Google is interested in Bitcoin; Apple is afraid of it. In short, Bitcoin is something you should be paying attention to.

Using Bitcoin is easy: to pay with Bitcoin, you just install an app called a Bitcoin wallet on your computer or mobile device and click some buttons in the app to send money to other Bitcoin users over the Internet:

To receive money, you just give people your Bitcoin address, or create a QR code. For example, here's mine:

Understanding what's happening under the hood is a bit trickier. I read a number of articles, but still couldn't wrap my head around many of the details. Therefore, in the spirit of "the best way to learn is to teach", I forced myself to write this blog post as a learning tool.

The post is broken up into a series of questions about Bitcoin:
  1. Validity: is Bitcoin really money?
  2. Decentralized money: how can you have a currency that no one controls?
  3. Decentralized mint: how is money created in Bitcoin?
  4. Decentralized transactions: how do you transfer money in Bitcoin?
  5. Decentralized ledgers: how are transactions recorded in Bitcoin?
  6. Further reading: where can I find more info?
Also, in the spirit of "if you can't explain it simply, you don't understand it well enough", I've tried to make the key Bitcoin concepts accessible to audiences without a programming background. Most sections in this post start with a simple analogy for Bitcoin that involves no tech whatsoever before diving into the tech details.

Of course, I'm a Bitcoin novice myself, so if after reading this you're still confused, or I've made any errors or omissions, please leave a comment!


Before discussing how Bitcoin works, let's answer a common question: how can Bitcoin be considered money? It was created by a bunch of computer programmers with no government backing; it's purely digital, so it exists only on computers and has no "intrinsic value"; it might not even fit the standard definition of money because it's not a stable store of value and rarely used as a unit of account.

Despite all that, Bitcoin is still used as a medium of exchange: thousands of merchants are willing to accept Bitcoin in trade for real goods or services. Why? Because these merchants see Bitcoin as an effective medium of exchange and they believe that other merchants will feel the same way in the future. Or, to put it another way:
It’s not as much that the Bitcoin currency has some arbitrary value and then people are trading with it; it’s more that people can trade with Bitcoin (anywhere, everywhere, with no fraud and no or very low fees) and as a result it has value.

Marc Andreessen, Why Bitcoin Matters
Decentralized money (analogy)

Yap is a nation in the South Pacific that uses a unique form of money called Rai stones, which are circular disks carved out of limestone, with a hole in the middle, that can be up to 12 feet in diameter and weigh up to 8,800 lbs!

Trading stones of this size is difficult: no one wants to cart around a 4 ton stone every time they make a purchase. As a result, the Yapese came up with a clever solution: they decided to determine ownership by verbal agreement. Whenever there was a trade, the parties involved would communicate to the rest of the tribe the amount of stone that had been exchanged. The stones wouldn't actually move from one house to another, but the knowledge of who owned what was memorized and handed down through oral history.

At this point, I must apologize to historians: in the rest of the post, I'm going to completely reinvent Yap history so I can use it as an effective analogy for understanding Bitcoin.

As a first step, imagine that a tribe on Yap struggled to accurately track Rai ownership purely through memory and oral history. After many arguments and fights over who owns what, they decided they were going to write down every transaction in a book called a ledger, which would be considered the source of truth.

The chief appointed one of the tribesmen as the bookkeeper. If Alice wanted to pay Bob 10 lbs of Rai, Alice would go to the bookkeeper's house and announce the transaction. The bookkeeper would go through the ledger, make sure Alice actually owned 10 lbs of Rai, and if she did, add the new transaction to the book.

Everything worked well for a while, but gradually, problems appeared: the bookkeeper started charging transaction fees; trade would sometimes halt entirely because the bookkeeper was on vacation or sick; pressured by the chief, the bookkeeper would charge very high fees or completely ban certain transactions, especially with other tribes; sometimes, after a dispute, the bookkeeper would seize someone's assets entirely. Eventually, the bookkeeper became one of the richest, most powerful, and most controversial figures in society: despite rumors of corruption, fraud, and favoritism, who would dare question the person who controls all the money?

10 families in the tribe, upset with the bookkeeper's behavior, decided to find a new way to manage their money. Since a single person cannot be trusted to maintain the ledger, these families had a radical idea: every family would maintain its own ledger!

For example, if Alice wanted to pay Bob 10 lbs of Rai, Alice would go to the center of town and announce the new transaction to all the other families. Each family would then check their own ledger, make sure Alice really had 10 lbs of Rai, and if she did, add the new transaction to their ledger. Since each family now kept a ledger, no one family had more power than any other!

Decentralized money (tech details)

All modern currencies, such as the US Dollar and Euro, are centralized: that is, they are controlled by a small number of institutions, such as governments, banks, and credit card companies. For example, if Alice wants to transfer $10 to Bob, she'll log in to her bank's website, wait for the bank to verify her identity, send the transfer information to the bank's servers, and wait a few days for the bank to clear the transaction, at which point the bank will update her balance and send the info over to Bob's bank. In short, even though it's Alice's money, the entire process is controlled by the bank's rules and procedures:

This centralized approach has the same problems as the Yap bookkeeper: even though it's your money, a small group of institutions controls almost everything about it, including who can spend money, when they can spend it, what they can spend it on, where they can spend it, what fees and taxes are imposed, and so on.

Bitcoin offers an alternative without all of these limitations: a decentralized currency. As in the Yap analogy, Bitcoin uses a distributed ledger approach. Of course, instead of 10 families, Bitcoin consists of thousands of computers, each of which maintains its own ledger; and instead of someone yelling from the town center, these computers communicate with each other by sending messages over the Internet.

For example, if Alice was transferring 10 Bitcoins to Bob, she'd click some buttons in her Bitcoin wallet, and it would broadcast this transaction to all other Bitcoin users:

This collection of computers is known as the Bitcoin network, and anyone can join it by installing the Bitcoin software, which downloads the global ledger onto your computer. This allows you to see every transaction that has ever happened in Bitcoin's history! You can even browse them online on block explorer websites. This is a very different model of privacy than you get with banks, so you may want to check out Bitcoin privacy and Is Bitcoin anonymous for more info.

When each of these computers - I'll refer to them as "nodes" to indicate that they are running Bitcoin software that automatically handles all of these tech details - receives Alice's message, it'll check its ledger to make sure Alice owns 10 Bitcoins, and if she does, add the new entry to the ledger.

Decentralized mint (analogy)

In a decentralized system, there is no mint to create money. So how do more Rai stones enter circulation? The answer is mining!

There is no limestone on the Yap islands themselves, so the villagers have to sail to distant islands to find it. The limestone is scarce and randomly distributed, so finding it can take a long time and is mostly a matter of luck. Once a villager finds some stone, they bring it back, display it in front of their home, and announce it to the other villagers to make sure it gets entered in their ledgers.

Decentralized mint (tech details)

How do Bitcoins get created? The answer here is also mining!

However, Bitcoin mining does not involve any pick axes: it's purely digital. Any Bitcoin node that solves a computationally expensive math problem gets Bitcoin as a reward! Much like the stone mining, these math problems can take a long time to solve, and which node solves it first is mostly a matter of luck. However, once some lucky node finds a solution, it will broadcast it out to the rest of the Bitcoin network, and all other nodes will record in their ledgers that the lucky node has earned some new Bitcoin.

The problems to solve involve cryptographic hash functions and the math for them is beyond the scope of this post. However, I'll describe them at a high level so we can see how they are useful to Bitcoin.

If you pass a string of text through a cryptographic hash function, it will convert it into a different string of text, called the "digest", in a totally unpredictable manner:

cryptographic-function("Hello World") = 124610xktj1l32kxjcj24j1
Given the slightest change to the text, such as adding an exclamation point, you get a totally different digest as output:

cryptographic-function("Hello World!") = 444lkxckl15lsck51lk234
The most important feature of cryptographic hash functions is that they are "one way": for a given string T, you'll always get back the same digest D; however, given an arbitrary digest D, there is no way to figure out what was the original text T. In other words, there is no way to "reverse" the hash function.
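
To make this concrete, here's a quick sketch using Python's built-in hashlib module (the exact digest values don't matter; the point is that a one-character change scrambles the entire output):

```python
import hashlib

def sha256(text):
    # Hash a string and return the hex-encoded digest
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

d1 = sha256("Hello World")
d2 = sha256("Hello World!")  # one character added...
print(d1)
print(d2)                    # ...and the digest is completely different
```

Both digests are 64 hex characters long, but they share no recognizable structure: there is no way to predict one from the other.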

In Bitcoin mining, you pass two pieces of data into the SHA-256 cryptographic hash function:
  1. Information about a block, B: we'll discuss blocks and the block chain a little later
  2. A random guess, R
sha-256(B, R) = digest
The goal is to find the right R so that you get back a digest that starts with some required number of leading zeroes:

sha-256(B, R) = 00000000000001234sdfxxc1414...
Since cryptographic hash functions are "one way", there is no way to know what value(s) of R will produce a digest that starts with zeroes. All you can do is repeatedly guess random values of R until you accidentally stumble across one that works. Since SHA-256 has 2^256 possible outputs, as the number of required leading zeroes goes up, the odds of any one guess being right become extremely small.
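
Here's a toy version of this guessing game in Python, using a difficulty of just 4 leading zeroes so it finishes in a fraction of a second (Bitcoin's real difficulty is vastly higher):

```python
import hashlib
from itertools import count

def mine(block_data, difficulty):
    # Keep trying successive values of R until sha-256(B, R) starts
    # with `difficulty` leading zeroes
    target = "0" * difficulty
    for r in count():
        digest = hashlib.sha256(f"{block_data}{r}".encode()).hexdigest()
        if digest.startswith(target):
            return r

r = mine("some block data", 4)
digest = hashlib.sha256(f"some block data{r}".encode()).hexdigest()
print(digest)  # starts with "0000"
```

With 4 hex zeroes, a valid R turns up after roughly 65,000 guesses on average; each additional required zero multiplies the work by 16.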

In fact, the problem is intentionally designed - and occasionally recalibrated (see Blockchain stats) - to take a very long time: a single computer will have to guess non-stop, on average, for several years to find the right value of R. However, with all the nodes on the Bitcoin network guessing, the average time to find the right value of R is roughly 10 minutes.

Spending so much CPU time and energy on useless calculations may seem wasteful, but as we'll see later, the fact that the calculations are expensive is essential in establishing a consistent timeline.

A few interesting notes on Bitcoin mining:
  1. There is a pre-determined, fixed supply of Bitcoin: roughly every four years, the number of new Bitcoins that can be mined drops by half, and the total supply will max out at 21 million.
  2. After all Bitcoins have been mined, the reward for mining will switch to small fees on each Bitcoin transaction. These are expected to be significantly smaller than bank or credit card fees.
  3. The number of nodes participating in Bitcoin mining today means it is not practical or cost effective to try to do mining on your home computer. Instead, the recommendation is to join a mining pool and even invest in dedicated mining hardware.
  4. The fixed supply of Bitcoin is not a problem as you can pay with tiny fractions of a Bitcoin, all the way down to 0.00000001 BTC, known as a Satoshi.
Decentralized transactions (analogy)

Each of the 10 families maintains a ledger by hand that lists every transaction that has ever happened. Each transaction consists of 3 sections:
  1. From: the name of the person sending money.
  2. To: the name of the recipient of the money.
  3. Amount: how many pounds of Rai stone to transfer.
To find out Alice's balance, you start at the beginning of the book and go through every transaction, adding up transactions where she received money, and subtracting transactions where she sent money. When Alice announces a new transaction, before adding it to the book, you can use this balance calculation to make sure she has enough money for the transaction.
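
This balance calculation is easy to sketch in code. Here's a hypothetical mini-ledger in Python (the names and amounts are invented for illustration):

```python
# Each entry is (from, to, pounds of Rai); "mint" marks newly mined stone
ledger = [
    ("mint",  "Alice", 7),
    ("mint",  "Alice", 3),
    ("Alice", "Bob",   4),
]

def balance(person, ledger):
    # Add transactions where `person` received money,
    # subtract ones where they sent it
    total = 0
    for sender, recipient, amount in ledger:
        if recipient == person:
            total += amount
        if sender == person:
            total -= amount
    return total

def can_spend(person, amount, ledger):
    # Check a proposed transaction against the computed balance
    return balance(person, ledger) >= amount

print(balance("Alice", ledger))        # 6
print(can_spend("Alice", 10, ledger))  # False: Alice only has 6 lbs left
```

Note that no balances are stored anywhere; they are always recomputed from the full transaction history.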

Decentralized transactions (tech details)

The Yap transactions contained three pieces of data: From, To, and Amount. With Bitcoin, the To and From fields are tricky: we can't just look to the town center to see who is announcing a new transaction and in a decentralized world, there is no central system to store usernames and passwords.

To manage and verify identity in a decentralized fashion, Bitcoin uses public key cryptography. Public key cryptography involves lots of math that is beyond the scope of this post, so this section is just a brief intro to make it clear how it applies to Bitcoin.

In this form of cryptography, there are two keys, or long strings of letters and numbers, that are mathematically linked:
  1. Public key: a public identifier that can be freely shared with others.
  2. Private key: a secret or password that must never be shared with anyone.
When you install a Bitcoin wallet on your computer, it will automatically generate a public and private key pair for you. You can freely share your public key: in fact, the public key is your identity or address in Bitcoin.

Public/private keys can be used for several tasks, but the main one we care about is "authenticity": that is, we can use them to mathematically verify that a message really came from the person we expect and that the message contents have not been modified along the way.

Every time Alice sends a message, she can pass the contents of the message, along with her private key, through an encrypt function:

encrypt("Hello World", Alice's private key) = n67n54n6l10xf15
The value she gets back, called a digital signature, is a completely unrecognizable string of letters and numbers. Without Alice's private key, there is no way to guess what that string should be. Moreover, if you change the input text at all, even by a single character, such as adding an exclamation point to "Hello World", the signature will change in totally unpredictable ways:

encrypt("Hello World!", Alice's private key) = vk34jxl140501025
This is similar to the cryptographic hash functions we saw earlier, except that the mathematical link between the private and public keys means the operations are reversible: Alice can decrypt the digital signature to get back the original text; more importantly for our use case, Bob can use Alice's public key to verify the signature:

verify("n67n54n6l10xf15", "Hello World", Alice's public key) = valid or invalid
If the signature is "valid", then Bob can be confident that it was really Alice who sent the message and that the message is exactly as she originally created it.
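
To make sign/verify concrete, here's a toy sketch in Python using textbook RSA with tiny primes (Bitcoin actually uses a different scheme, ECDSA, and real keys are hundreds of bits long; these values are laughably insecure and purely illustrative):

```python
import hashlib

# Toy RSA keypair built from tiny textbook primes
p, q = 61, 53
n = p * q                  # public modulus (3233)
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent: part of the public key
d = pow(e, -1, phi)        # private exponent: the secret (2753)

def digest(message):
    # Hash the message, then reduce it into the tiny keyspace
    return int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n

def sign(message, priv):
    # Only the holder of the private key can produce this value
    return pow(digest(message), priv, n)

def verify(message, signature, pub):
    # Anyone can check the signature using just the public key
    return pow(signature, pub, n) == digest(message)

sig = sign("Hello World", d)
print(verify("Hello World", sig, e))   # True
print(verify("Hello World!", sig, e))  # almost certainly False: the text changed
```

The key property is the asymmetry: producing `sig` requires the private exponent `d`, but checking it requires only the public pair `(e, n)`.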

We now know enough to take a look at a typical Bitcoin transaction. For example, if Alice was sending 10 Bitcoins to Bob, the message might look something like this:

Signature: mn546yhg (signed with Alice's private key)

Inputs: nhn3891a (transaction where Alice got 7 BTC)
        vc4232v32 (transaction where Alice got 3 BTC)

Outputs: To: 60sdfs951sdfxo66 (Bob's public key)
         Amount: 10 Bitcoins

The message consists of 3 sections:
  1. Signature: Alice includes a digital signature with her messages so that other Bitcoin nodes can verify the message really came from her.
  2. Inputs: this is a list of the signatures of transactions already in the ledger where Alice was the recipient of Bitcoins. In other words, these are the funds Alice is using in this transaction, a total of 10 Bitcoins.
  3. Outputs: this is a list of how the funds in the inputs should be distributed. To keep calculations simple, you are required to redistribute all the funds in the inputs. You can include more than one recipient in the outputs section - including yourself, if you need change. In this case, Alice is sending 10 Bitcoins to a single recipient, Bob, identified by his public key.
Since each transaction references a previous transaction in its "inputs" section, it is possible to follow the graph of transactions all the way back to the beginning of Bitcoin. This is the mechanism for checking the ownership of bitcoins!

For example, to calculate a user's balance, we use an approach very similar to the Yap ledger: you go through every transaction, add up the ones where the user was a recipient, and subtract the ones where they were a sender. To check that a new message is valid, such as Alice's transfer to Bob, you can check that the inputs refer to valid transactions already in the ledger where Alice was the recipient.
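
Here's a hypothetical sketch of that validation logic in Python (a real node also verifies the sender's digital signature and that each input hasn't already been spent; those checks are omitted here):

```python
# Simplified ledger keyed by transaction signature, using the ids
# from Alice's example message above
ledger = {
    "nhn3891a":  {"to": "alice-pubkey", "amount": 7},
    "vc4232v32": {"to": "alice-pubkey", "amount": 3},
}

def validate(tx, sender_pubkey, ledger):
    inputs = [ledger.get(txid) for txid in tx["inputs"]]
    # 1. Every input must refer to a real transaction that paid the sender
    if any(i is None or i["to"] != sender_pubkey for i in inputs):
        return False
    # 2. The outputs must redistribute exactly the total of the inputs
    return sum(i["amount"] for i in inputs) == sum(o["amount"] for o in tx["outputs"])

tx = {
    "inputs":  ["nhn3891a", "vc4232v32"],            # Alice's 7 + 3 BTC
    "outputs": [{"to": "bob-pubkey", "amount": 10}], # all 10 go to Bob
}
print(validate(tx, "alice-pubkey", ledger))  # True
```

If Alice tried to spend inputs that weren't addressed to her, or to pay out more than her inputs contain, every node would independently reject the transaction.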
Decentralized ledgers (analogy)

Over time, the number of families using the distributed ledger system grew from 10 to 30. The homes for the new families were spread out, so instead of one town center, there were now 3 town centers, one for every 10 houses. Therefore, when Alice wanted to transfer 10 lbs of Rai to Bob, she had to announce the transaction 3 times.

Eventually, the villagers noticed that this led to a problem: the order of transactions was not consistent across all ledgers! For example, Alice wants to transfer 10 lbs of Rai to Bob. She announces it at village center #1 and heads off to announce it at the other village centers. Bob, who lives near village center #1, hears Alice's announcement and, excited to finally have some money, immediately starts his own transaction to transfer 10 lbs of Rai to Carole. Bob announces his transaction at village center #1, where all the families now have the following transaction order:
  1. Alice -> Bob, 10 lbs Rai
  2. Bob -> Carole, 10 lbs Rai
Bob then heads off to village center #2 to make the same announcement. However, it turns out that Alice went to village center #3 first and hasn't made it to #2 yet. As a result, the families in village center #2 have the following transaction order:
  1. Bob -> Carole, 10 lbs Rai
  2. Alice -> Bob, 10 lbs Rai
The ledgers in different village centers are out of sync! Moreover, since Bob had no money before Alice's transaction, the families in village center #2 reject his transaction, even though the families in village center #1 accepted it!

We need a way to establish a consistent order of transactions in all ledgers in a decentralized way. For example, simply letting a single family choose the order wouldn't work, since that family could use that power to their advantage (e.g. by constantly delaying a rival family's transactions). This is a hard problem, but the ingenious villagers once again came up with a clever solution: use random chance!

Here's the idea: when a new transaction comes in, the first step is to add it to an "unverified" transactions list. For example, when Alice announces her transfer of 10 lbs of Rai to Bob, the ledger and unverified lists would look like this:

Every time a family mines new Rai stone, to get all the families to recognize the new stone in their ledgers, this family must pick one transaction to move from the unverified list to the ledger.

This mechanic accomplishes several goals at once:
  1. Since limestone is scarce and randomly distributed, it's a matter of luck which family will get to "verify" the next transaction, so no family can control transaction order to their advantage.
  2. Every family now has a strong incentive to participate in maintaining ledgers: it's the only way that their newly mined limestone will be allowed to enter circulation!
  3. Since new limestone is randomly distributed and takes a long time to find, the odds of two families overlapping in finding new stone are essentially zero*. This gives us a consistent ordering for transactions: in the case above, either all village centers accept Bob's transfer to Carole or all of them reject it.
* To be honest, point #3 is not entirely accurate: an overlap would be rare, but could still happen. Unfortunately, this is one place where I can't find a simple analogy that captures the math Bitcoin uses to handle these occasional overlaps. Nevertheless, the main point still holds: a random, unpredictable process means no family will be able to control the order of multiple transactions in a row.

Decentralized ledgers (tech details)

The naive strategy of immediately adding new transactions to the ledger didn't work for the Yap when they got too big; since Bitcoin is several orders of magnitude bigger (thousands of nodes, multiple transactions per second), the strategy definitely doesn't work for Bitcoin. So how does Bitcoin establish a consistent timeline amongst distributed ledgers?

It turns out that this is a general problem in distributed systems research known as the Byzantine Generals Problem. As we'll discuss next, Bitcoin offers the first practical (read: probabilistic) solution to this problem - see Bitcoin & the Byzantine Generals Problem for more info.

The ledger in Bitcoin is called the block chain. Here's a rough idea of what a block chain might look like:

The block chain consists of a series of blocks (3 are shown above), where each block contains:
  1. Transactions: transactions or messages sent between users.
  2. Proof of work: this is the digest from Bitcoin mining!
  3. Previous reference: a reference to the digest of the previous block.
Notice how each block has a reference to the previous block: this chain of references is what defines the timeline in the Bitcoin network. The transactions in a single block happened "at the same time" (there must be no dependencies between them); the transactions in previous blocks happened earlier. This is different than the Yap ledger, where order is implicit from the order the transactions are written in the ledger.

You can follow the "previous references" from block to block, all the way back to the very first block, known as the genesis block. The genesis block was part of the seed data when Bitcoin was first created; how do all other blocks get added in a consistent order across all computers? Just as we discussed in the Yap analogy section, when new messages arrive at a Bitcoin node, they initially go into an "unverified" bucket:

Any node in the Bitcoin network can put several unverified transactions into a block and send it out to the rest of the network as the proposed next block in the chain. The catch is that the proposed block must include a "proof of work", which is the solution to a computationally expensive math problem involving cryptographic hash functions. Sound familiar? That's right, this is Bitcoin mining!

Just like the Yap families propose the next transaction when they mine new Rai stones, it is the Bitcoin miners who propose new blocks for the block chain when they mine new Bitcoin. Here are the rules: take all the text from several unverified transactions T, plus the digest of the most recent block in the ledger D, plus a random guess R, and do the following SHA-256 calculation:

sha-256(T, D, R) = digest
The miners keep guessing different values of R until they find a digest with the required number of leading zeroes. The first miner to find it gets a reward of Bitcoin: to receive it, the miner must send out the new block, which includes the digest as the "proof of work", to all other Bitcoin nodes. Assuming the new block is valid, it becomes a part of the block chain:

In the example above, block 54 is now part of the block chain - that is, the Bitcoin timeline - and all the transactions in it, including Alice's, are considered "verified".
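
Putting the pieces together, here's a toy Python sketch of mining a block and linking it to its predecessor via the proof-of-work digest (difficulty is lowered to 4 leading zeroes so it runs instantly; block contents are invented for illustration):

```python
import hashlib
from itertools import count

def mine_block(transactions, prev_digest, difficulty=4):
    # Guess values of R until sha-256(T, D, R) starts with enough zeroes
    for r in count():
        digest = hashlib.sha256(
            f"{transactions}|{prev_digest}|{r}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return {"transactions": transactions,
                    "previous": prev_digest,
                    "proof_of_work": digest}

block_53 = mine_block("...earlier transactions...", "0" * 64)
block_54 = mine_block("Alice -> Bob, 10 BTC", block_53["proof_of_work"])
print(block_54["previous"] == block_53["proof_of_work"])  # True: the chain link
```

Because block 54's proof of work is computed over block 53's digest, tampering with block 53 would invalidate block 54's proof of work too, and so on down the chain.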

What if multiple nodes come up with a proof of work at the same time? This is a rare occurrence, but if it happens, the network will temporarily have multiple possible paths in the block chain:

The solution is simple: Bitcoin nodes always accept the longest available chain.

In the example above, some parts of the network will be mining a new block that has 56 as its previous reference, and others will be mining a new block with 57 as its previous reference. Eventually, someone will complete a proof of work on one of these paths, making it the longest one. For example, if the first proof of work calculation to finish was for a new block on the block 57 path, the network would switch to this path, and the transactions in block 56 would get put back into the unverified bucket:

Of course, it's possible that two blocks, one on each path, will be found simultaneously again, but a) this is even more unlikely and b) it just means that the block chain stays diverged for a little while longer while we wait for yet another block to be found. Eventually, some path will end up longer, and the network will converge on it.
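
In code, the fork-resolution rule is almost embarrassingly simple. Here's a sketch in Python, with chains represented as plain lists of block numbers:

```python
def choose_chain(chains):
    # Fork resolution: always prefer the longest available chain
    return max(chains, key=len)

path_a = [51, 52, 53, 54, 55, 56]      # the fork ending in block 56
path_b = [51, 52, 53, 54, 55, 57, 58]  # block 58 was mined on top of 57 first
print(choose_chain([path_a, path_b]))  # [51, 52, 53, 54, 55, 57, 58]
```

Nodes that were extending the block 56 path abandon it, and block 56's transactions return to the unverified bucket.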

Since nodes always accept the longest path, couldn't an attacker create their own block chain with lots of fraudulent transactions and get the whole network to adopt it, so long as it was longer? For example, if Mallory managed to generate blocks 59, 60, and 61 while the network was still working on 57 and 58, then Mallory's fraudulent blocks would be accepted and all the others would be dropped:

Attacks of this sort are very unlikely to succeed because Mallory is in a race against the entire Bitcoin network to generate those blocks. This is why the proof of work calculation is intentionally designed to be very expensive. Mallory would have to control more computing power than all other nodes on the Bitcoin network, combined, to have a viable chance of winning this race for a single block, let alone 3.

This is the real reason Bitcoins are offered as a reward to miners: they are an incentive to make the network as big and active as possible, so an attacker has no chance of being faster than the rest of the network. In fact, most attackers will find it more profitable to use their computing resources to mine bitcoins legitimately instead of taking the risk of trying to add fraudulent blocks to the block chain.

The odds of an attacker succeeding get even smaller as time goes on. For example, if you want to tamper with a transaction in block 51, you'll have to recalculate the proof of work for that block; since the proof of work calculation for block 52 includes the proof of work from block 51, you now also have to recalculate the proof of work for block 52 as well; and then block 53; and in the meantime, you're racing the rest of the network, which has since added blocks 54 and 55, so your chain is not the longest! This means that older blocks - those further back in the chain - are more "secure" than newer ones.

In fact, if Bob is a merchant, he can tune the level of risk he's willing to tolerate by deciding how many blocks must elapse before he considers a transaction verified. If it's a tiny transaction, a single block (~10 minutes) may be enough; if it's a large transaction, waiting for 6 blocks (~1 hour) may be more advisable. For a merchant, waiting 1 hour to avoid fraud may still be better than the situation today with credit cards, where a chargeback may appear a month after the transaction.

An important disclaimer: while Bitcoin's design seems sound from a security standpoint, it's still susceptible to fraud as a result of user error, just like any other system. The difference with Bitcoin is that it's a decentralized system: when something goes wrong, there is no one you can call for help.

For example, if you accidentally send Bitcoin to the wrong address, there is no way to get that money back. If you fall for a phishing attack, there is no fraud department to report it to. If you lose your private key (e.g. in a hard drive crash), you lose access to any Bitcoins associated with it, and there is nothing anyone can do to help you get them back. In fact, if a private key is lost, those Bitcoins are no longer accessible to anyone on the Bitcoin network; they are gone, effectively out of circulation, forever!

Further reading

In the spirit of giving credit where it's due, these are the resources that helped me put this post together, and may help you learn more:

Monday, April 14, 2014

Got fast download but slow upload speeds? Here's a fix.

If you've found that your download speed is great, but your upload speed is abysmal, I've got a possible solution for you. I struggled with this issue for a while and decided to write down my findings in a blog post in case I, or anyone else, runs into this in the future.

In fact, this is the second such blog post I'm writing: a couple years ago, I hit the inverse issue and documented the solution in a blog post called Got slow download but fast upload speeds over wireless? Here's a fix. That post has had several hundred thousand views and helped many people (check out the comments - I even got a marriage proposal), so I'm hoping this post will be useful too!

Here's your tldr: upgrade your router's firmware.


I noticed that on all my devices - a Macbook Pro, iPhone, Windows desktop - webpages were sometimes taking a long time to load; it was a bit intermittent, but everything from google maps to gmail suddenly got very sluggish. I have one of their higher tier Internet plans from Comcast, so this was pretty disappointing.

I ran a bandwidth test, and the results were roughly the same across all of my devices:

At 57 Mb/s, the download speed was great; however, the upload speed was a mere 0.17 Mb/s, which is pretty much unusable. In fact, I had to re-run the test several times, as occasionally, the upload portion of the test would get stuck and never complete.

The solution

I tried rebooting the router and the cable modem and tweaking a bunch of settings, but nothing helped. I also checked with Comcast to ensure there were no issues or outages in my area, and of course, everything was fine.

Finally, I stumbled upon the solution: a firmware upgrade. My router, a Cisco/Linksys E1200, was using firmware version 2.0.02. I went over to Linksys' support page, found my router, and saw that a newer version, 2.0.06, was available. Here's a snippet from the release notes:

The notes for version 2.0.04 are especially interesting, as they fix bugs with WMM (which was the cause of problems in my previous blog post), QoS, and more.

I figured it was worth a shot, downloaded the 2.0.06 firmware, and installed it through my router's admin UI. The instructions for upgrading the firmware will not be the same for all routers, but here's roughly what you need to do:
  1. Go to your router's admin page and log in. If you've never done this, look for instructions that came with your router or do a google search to find the default username and password.
  2. Click on "administration".
  3. Click on "firmware upgrade".
  4. You should see a page like this:
  5. Click "Choose File" and select the firmware file you downloaded.
  6. Click "Start Upgrade". DO NOT unplug your router or click anything else in the meantime; let the upgrade complete!
  7. Wait a minute or so for your router to reboot.
The results

After the router restarted, I re-ran my speed test, and the results were much nicer:

The download speed is still a zippy 57 Mb/s, but now the upload speed is fast too, at 11 Mb/s, roughly 65x faster than what it was before. Woohoo!

I hope you found the post helpful. If your router has a different firmware upgrade process, leave a comment with the steps you followed so others can find it. Happy web browsing!

Wednesday, April 9, 2014

Six programming paradigms that will change how you think about coding

Every now and then, I stumble across a programming language that does something so different that it changes how I think about coding. In this post, I want to share some of my favorite finds.

This is not your grandma's "functional programming will change the world!" blog post: this list is much more esoteric. I'd wager most readers haven't heard of the majority of the languages and paradigms below, so I hope you have as much fun learning about these new concepts as I did.

Note: I have only minimal experience with most of the languages below: I find the ideas behind them fascinating, but claim no expertise in them, so please point out any corrections and errors. Also, if you've found any new paradigms and ideas not covered here, please share them!

Update: this post hit the front page of r/programming and HN. Thank you for the great feedback! I've added some corrections below.

Concurrent by default

Example languages: ANI, Plaid

Let's kick things off with a real mind bender: there are programming languages out there that are concurrent by default. That is, every line of code is executed in parallel!

For example, imagine you wrote three lines of code, A, B, and C:

In most programming languages, A would execute first, then B, and then C. In a language like ANI, A, B, and C would all execute at the same time!

Control flow or ordering between lines of code in ANI is merely a side effect of explicit dependencies between lines of code. For example, if B had a reference to a variable defined in A, then A and C would execute at the same time, and B would execute only after A finished.
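As a rough analogy (this is Python with a thread pool, not ANI syntax; the futures here are just stand-ins for ANI's dataflow semantics), you can picture each line as a task that starts immediately and only blocks when it needs another line's result:

```python
# Rough Python analogy of "concurrent by default" (not ANI syntax):
# each "line" is submitted as a task; B blocks on A's result because
# of a data dependency, while C runs independently of both.
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor() as pool:
    a = pool.submit(lambda: 21)              # line A: produces a value
    b = pool.submit(lambda: a.result() * 2)  # line B: depends on A, so it waits
    c = pool.submit(lambda: "independent")   # line C: no dependencies

print(b.result())  # prints 42, but only after A has finished
```

The ordering between A and B falls out of the data dependency alone; C is free to interleave with either.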

Let's look at an example in ANI. As described in the tutorial, ANI programs consist of "pipes" and "latches" that are used to manipulate streams and data flows. The unusual syntax is tough to parse, and the language seems dead, but the concepts are pretty interesting.

Here's a "Hello World" example in ANI:

In ANI terminology, we are sending the "Hello, World!" object (a string) to the std.out stream. What happens if we send another string to std.out?

Both of these lines of code execute in parallel, so they could end up in any order in the console. Now, look what happens when we introduce a variable on one line and reference it later:

The first line declares a "latch" (latches are a bit like variables) called s that contains a string; the second line sends the text "Hello, World!" to s; the third line "unlatches" s and sends the contents to std.out. Here, you can see ANI's implicit program sequencing: since each line depends on the previous one, this code will execute in the order it is written.
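A loose Python analogy (again, not ANI syntax): the latch behaves like a single-slot channel, and the read on the last line blocks until a value has been sent, which is exactly what forces the three lines to run in order:

```python
# Rough Python analogy of the latch example: s acts as a single-slot
# channel; the get() blocks until the put() has happened, so the "lines"
# are sequenced by their data dependency, not by their position.
import queue
import threading

s = queue.Queue(maxsize=1)                                       # line 1: declare the latch s
threading.Thread(target=lambda: s.put("Hello, World!")).start()  # line 2: send text to s
msg = s.get()                                                    # line 3: "unlatch"; blocks until line 2 ran
print(msg)
```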

The Plaid language also claims to support concurrency by default, but uses a permissions model, as described in this paper, to set up control flow. Plaid also explores other interesting concepts, such as Typestate-Oriented Programming, where state changes become a first-class citizen of the language: you define objects not as classes, but as a series of states and transitions that can be checked by the compiler. This seems like an interesting take on exposing time as a first-class language construct, as discussed in Rich Hickey's Are we there yet talk.

Multicore is on the rise and concurrency is still harder than it should be in most languages. ANI and Plaid offer a fresh take on this problem that could lead to amazing performance gains; the question is whether "parallel by default" makes concurrency easier or harder to manage.

Update: the description above captures the basic essence of ANI and Plaid, but I used the terms "concurrent" and "parallel" interchangeably, even though they have different meanings. See Concurrency Is Not Parallelism for more info.

Dependent types

Example languages: Idris, Agda, Coq

You're probably used to type systems in languages like C and Java, where the compiler can check that a variable is an integer, list, or string. But what if your compiler could check that a variable is "a positive integer", "a list of length 2", or "a string that is a palindrome"?

This is the idea behind languages that support dependent types: you can specify types that can check the value of your variables at compile time. The shapeless library for Scala adds partial, experimental support (read: probably not ready for primetime) for dependent types to Scala and offers an easy way to see some examples.

Here is how you can declare a Vector that contains the values 1, 2, 3 with the shapeless library:

This creates a variable l1 whose type signature specifies not only that it's a Vector that contains Ints, but also that it is a Vector of length 3. The compiler can use this information to catch errors. Let's use the vAdd method in Vector to perform a pairwise addition between two Vectors:

The example above works fine because the type system knows both Vectors have length 3. However, if we tried to vAdd two Vectors of different lengths, we'd get an error at compile time instead of having to wait until run time!
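To give a feel for what first-class dependent types look like, here's a sketch in Lean, a dependently typed language in the same family as Idris, Agda, and Coq (the names Vec and vAdd are mine for illustration, not taken from shapeless):

```lean
-- A vector type whose length is part of the type itself (Lean 4 syntax).
inductive Vec (α : Type) : Nat → Type where
  | nil  : Vec α 0
  | cons : α → Vec α n → Vec α (n + 1)

-- Pairwise addition: the type checker guarantees at compile time that
-- both inputs (and the output) have the same length n, so adding two
-- vectors of different lengths is rejected before the program ever runs.
def vAdd : Vec Nat n → Vec Nat n → Vec Nat n
  | .nil,       .nil       => .nil
  | .cons a as, .cons b bs => .cons (a + b) (vAdd as bs)
```

Note that vAdd needs no runtime length check: a mismatched call simply doesn't type-check.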

Shapeless is an amazing library, but from what I've seen, it's still a bit rough, only supports a subset of dependent typing, and leads to fairly verbose code and type signatures. Idris, on the other hand, makes types a first class member of the programming language, so the dependent type system seems much more powerful and clean. For a comparison, check out the Scala vs Idris: Dependent Types, Now and in the Future talk:

Formal verification methods have been around for a long time, but were often too cumbersome to be usable for general-purpose programming. Dependent types in languages like Idris, and perhaps even Scala in the future, may offer lighter-weight and more practical alternatives that still dramatically increase the power of the type system in catching errors. Of course, no dependent type system can catch all errors due to inherent limitations from the halting problem, but if done well, dependent types may be the next big leap for static type systems.

Concatenative languages

Example languages: Forth, cat, Joy

Ever wonder what it would be like to program without variables and function application? No? Me neither. But apparently some folks did, and they came up with concatenative programming. The idea is that everything in the language is a function that pushes data onto a stack or pops data off the stack; programs are built up almost exclusively through functional composition (concatenation is composition).

This sounds pretty abstract, so let's look at a simple example in cat:

Here, we push two numbers onto the stack and then call the + function, which pops both numbers off the stack and pushes the result of adding them back onto the stack: the output of the code is 5. Here's a slightly more interesting example:

Let's walk through this line by line:
  1. First, we declare a function foo. Note that functions in cat specify no input parameters: all parameters are implicitly read from the stack. 
  2. foo calls the < function, which pops the first item on the stack, compares it to 10, and pushes either True or False back onto the stack. 
  3. Next, we push the values 0 and 42 onto the stack: we wrap them in brackets to ensure they get pushed onto the stack unevaluated. This is because they will be used as the "then" and "else" branches (respectively) for the call to the if function on the next line. 
  4. The if function pops 3 items off the stack: the boolean condition, the "then" branch, and the "else" branch. Depending on the value of the boolean condition, it'll push the result of either the "then" or "else" branch back onto the stack. 
  5. Finally, we push 20 onto the stack and call the foo function.
  6. When all is said and done, we'll end up with the number 42.
For a much more detailed introduction, check out The Joy of Concatenative Languages.
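Since the cat snippets above were embedded as images, here is a small Python simulation of the same stack semantics (the function names and quoting-via-lambdas are my own illustration; this is not cat syntax):

```python
# A tiny Python simulation of the stack semantics walked through above
# (illustrative only -- this is not actual cat code).
stack = []

def push(x):
    stack.append(x)

def lt():
    # pops the top two items, pushes (second < top)
    b, a = stack.pop(), stack.pop()
    push(a < b)

def if_():
    # pops the "else" branch, the "then" branch, and the condition,
    # then runs whichever branch the condition selects
    else_b, then_b, cond = stack.pop(), stack.pop(), stack.pop()
    (then_b if cond else else_b)()

def foo():
    push(10); lt()            # is the argument less than 10?
    push(lambda: push(0))     # quoted "then" branch
    push(lambda: push(42))    # quoted "else" branch
    if_()

push(20)
foo()
result = stack.pop()
print(result)  # 42: since 20 < 10 is False, the "else" branch ran
```

Notice that foo declares no parameters: it just consumes whatever is on the stack, which is the defining trait (and the main readability hazard) of the paradigm.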

This style of programming has some interesting properties: programs can be split and concatenated in countless ways to create new programs; the remarkably minimal syntax (even more minimal than LISP) leads to very concise programs; and there is strong metaprogramming support. I found concatenative programming to be an eye-opening thought experiment, but I'm not sold on its practicality. It seems like you have to remember or imagine the current state of the stack instead of being able to read it from the variable names in the code, which can make it hard to reason about the code.

Declarative programming

Example languages: Prolog, SQL

Declarative programming has been around for many years, but most programmers are still unaware of it as a concept. Here's the gist: in most mainstream languages, you describe how to solve a particular problem; in declarative languages, you merely describe the result you want, and the language itself figures out how to get there.

For example, if you're writing a sorting algorithm from scratch in C, you might write the instructions for merge sort, which describes, step by step, how to recursively split the data set in half and merge it back together in sorted order: here's an example. If you were sorting numbers in a declarative language like Prolog, you'd instead describe the output you want: "I want the same list of values, but each item at index i should be less than or equal to the item at index i + 1". Compare the previous C solution to this Prolog code:
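The Prolog code itself was embedded as an image, but the same "describe the result, not the steps" idea can be sketched in Python: state the property the output must satisfy and brute-force search for a permutation that has it (which is also why this naive approach is O(n!)):

```python
from itertools import permutations

def naive_sort(xs):
    # Declarative spec: "a permutation of xs where each item is <= the next."
    # We only state the property of the result and search for a permutation
    # that satisfies it, rather than describing how to rearrange elements.
    for p in permutations(xs):
        if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
            return list(p)

print(naive_sort([3, 1, 2]))  # [1, 2, 3]
```

A real Prolog engine does essentially this kind of search for you, which is what makes the style so concise and, without care, so slow.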

If you've used SQL, you've done a form of declarative programming and may not have realized it: when you issue a query like select X from Y where Z, you are describing the data set you'd like to get back; it's the database engine that actually figures out how to execute the query. You can use the explain command in most databases to see the execution plan and figure out what happened under the hood.

The beauty of declarative languages is that they allow you to work at a much higher level of abstraction: your job is just to describe the specification for the output you want. For example, the code for a simple sudoku solver in Prolog just lists out what each row, column, and diagonal of a solved sudoku puzzle should look like:

Here is how you would run the sudoku solver above:

The downside, unfortunately, is that declarative programming languages can easily hit performance bottlenecks. The naive sorting algorithm above is likely O(n!); the sudoku solver above does a brute force search; and most developers have had to provide database hints and extra indices to avoid expensive and inefficient plans when executing SQL queries.

Symbolic programming

Example languages: Aurora

The Aurora language is an example of symbolic programming: the "code" you write in these languages can include not only plain text, but also images, math equations, graphs, charts, and more. This allows you to manipulate and describe a large variety of data in the format native to that data, instead of describing it all in text. Aurora is also completely interactive, showing you the results from each line of code instantly, like a REPL on steroids.

The Aurora language was created by Chris Granger, who also built the Light Table IDE. Chris outlines the motivation for Aurora in his post Toward a better programming: some of the goals are to make programming more observable, direct, and reduce incidental complexity. For more info, be sure to see Bret Victor's incredible talks: Inventing on Principle, Media for Thinking the Unthinkable, and Learnable Programming.

Update: "symbolic programming" is probably not the right term to use for Aurora. See the Symbolic programming wiki for more info.

Knowledge-based programming

Examples: Wolfram Language

Much like the Aurora language mentioned above, The Wolfram Language is also based on symbolic programming. However, the symbolic layer is merely a way to provide a consistent interface to the core of the Wolfram Language, which is knowledge-based programming: built into the language is a vast array of libraries, algorithms, and data. This makes it easy to do everything from graphing your Facebook connections, to manipulating images, to looking up the weather, processing natural language queries, plotting directions on a map, solving mathematical equations, and much more.

I suspect the Wolfram Language has the largest "standard library" and data set of any language in existence. I'm also excited by the idea that Internet connectivity is an inherent part of writing the code: it's almost like an IDE where the auto-complete function does a Google search. It'll be very interesting to see if the symbolic programming model is as flexible as Wolfram claims and can truly take advantage of all of this data.

Update: although Wolfram claims the Wolfram Language supports "symbolic programming" and "knowledge programming", these terms have slightly different definitions. See the Knowledge level and Symbolic Programming wikis for more info. 

Wednesday, April 2, 2014

So long, and thanks for all the t-shirts

In 2009, I joined LinkedIn as a Software Engineer. 5 years, 25 t-shirts, 50 hackdays, 4000 employees, several hundred million members, a billion dollars in revenue, and 1 IPO later, I'm moving on to my next play. My last day will be in a couple weeks, and for the next few months after that, I'm going to relax, travel, and think.

LinkedIn transformed my career. I got the chance to work on amazing projects, including LinkedIn Recruiter, Hackdays, Resume Builder, the LinkedIn Platform, the LinkedIn Engineering Blog, Incubator, Play at LinkedIn, LinkedIn Open Source, and much more. I learned many new technologies, saw amazing talks, and traveled the world. But most importantly, I got to work with an incredible group of people. Relationships matter, and the relationships I built at LinkedIn will be with me for the rest of my career.

Thank you to everyone who made it possible.

I leave you with some photos that capture a few of the highlights of my time at LinkedIn.

We hit 50 million members shortly after I joined (October, 2009)
Holiday Party in San Francisco (December, 2009)
The monthly hackdays were an amazing way to learn and grow

Launched a redesigned LinkedIn Recruiter (February, 2010)
My desk (December, 2010)
Winners of LinkedIn's first Innovator Challenge in February, 2011
The IT team put on incredible parties, such as this Tron party in April, 2011

The first Intern Hackday (July, 2011)

100 million members (March, 2011)
The IPO (May, 2011)
Launched the LinkedIn Engineering Blog (June, 2011)
Launched Apply with LinkedIn (July, 2011)
Town Hall with President Obama (September, 2011)
Talent Connect Conference in Vegas (October, 2011)
Halloween at the office (October, 2011)
Veteran's Hackday (November, 2011)
Project Inversion (November, 2011)
Holiday party at Giants' Stadium (December, 2011)
My desk (December, 2011)
Celebrating 150 million members (February, 2012)
Linux at LinkedIn t-shirt (February, 2012)
The LinkedIn gym, complete with bumper plates (March, 2012)
A nice gift from Reid (March, 2012)
The first DevelopHer Hackday (June, 2012)

A visit to LinkedIn's NYC office in the Empire State Building (July, 2012)
The Berlin Hackday (October, 2012)
Launched LinkedIn Incubator (December, 2012)

200 million members (January, 2013)
Behind the scenes (January, 2013)
Toronto Hackday (February, 2013)
Jeff gives out iPads to all employees (February, 2013)
Announcing the Play Framework at LinkedIn (February, 2013)

Hackday and Incubator presentation in Latvia (February, 2013)
Matthew models a few of the t-shirts we've collected over the years (April, 2013)
Innovating (April, 2013)
Our new horizontally scalable infrastructure (June, 2013) 
Amsterdam Hackday (April, 2013) 
Hackers (April, 2013)
New cafe opens and we get amazing breakfast every day (August, 2013)
The lunch ain't bad either (August, 2013)
Dreamer Hackathon (November, 2013)
Play t-shirts (August, 2013)
Play keynote at Ping Conference in Budapest (January, 2014)
Engineering at LinkedIn (February, 2014)

Eat. Hack. Sleep.