Monday, April 14, 2014

Got fast download but slow upload speeds? Here's a fix.

If you've found that your download speed is great, but your upload speed is abysmal, I've got a possible solution for you. I struggled with this issue for a while and decided to write down my findings in a blog post in case I, or anyone else, runs into this in the future.

In fact, this is the second such blog post I'm writing: a couple years ago, I hit the inverse issue and documented the solution in a blog post called Got slow download but fast upload speeds over wireless? Here's a fix. That post has had several hundred thousand views and helped many people (check out the comments - I even got a marriage proposal), so I'm hoping this post will be useful too!

Here's the tl;dr: upgrade your router's firmware.

Symptoms

I noticed that on all my devices - a MacBook Pro, an iPhone, and a Windows desktop - webpages were sometimes taking a long time to load; it was a bit intermittent, but everything from Google Maps to Gmail suddenly got very sluggish. I have one of Comcast's higher tier Internet plans, so this was pretty disappointing.

I ran a bandwidth test on http://www.speedtest.net/ and the results were roughly the same across all of my devices:


At 57 Mb/s, the download speed was great; however, the upload speed was a mere 0.17 Mb/s, which is pretty much unusable. In fact, I had to re-run the test several times, as occasionally, the upload portion of the test would get stuck and never complete.

The solution

I tried rebooting the router and the cable modem and tweaking a bunch of settings, but nothing helped. I also checked with Comcast to ensure there were no issues or outages in my area, and of course, everything was fine.

Finally, I stumbled upon the solution: a firmware upgrade. My router, a Cisco/Linksys E1200, was using firmware version 2.0.02. I went over to Linksys' support page, found my router, and saw that a newer version, 2.0.06, was available. Here's a snippet from the release notes:

The notes for version 2.0.04 are especially interesting, as they fix bugs with WMM (which was the cause of problems in my previous blog post), QoS, and more.

I figured it was worth a shot, downloaded the 2.0.06 firmware, and installed it through my router's admin UI. The instructions for upgrading the firmware will not be the same for all routers, but here's roughly what you need to do:
  1. Go to http://192.168.1.1 and log in to your router. If you've never done this, look for the instructions that came with your router or do a Google search to find the default username and password.
  2. Click on "administration".
  3. Click on "firmware upgrade".
  4. You should see a page like this:
  5. Click "Choose File" and select the firmware file you downloaded.
  6. Click "Start Upgrade". DO NOT unplug your router or click anything else in the meantime; let the upgrade complete!
  7. Wait a minute or so for your router to reboot.
The results

After the router restarted, I re-ran my speed test, and the results were much nicer:


The download speed is still a zippy 57 Mb/s, but now the upload speed is fast too, at 11 Mb/s, or roughly 65x faster than what it was before. Woohoo!

I hope you found the post helpful. If your router has a different firmware upgrade process, leave a comment with the steps you followed so others can find it. Happy web browsing!

Wednesday, April 9, 2014

Six programming paradigms that will change how you think about coding

Every now and then, I stumble across a programming language that does something so different that it changes how I think about coding. In this post, I want to share some of my favorite finds.

This is not your grandma's "functional programming will change the world!" blog post: this list is much more esoteric. I'd wager most readers haven't heard of the majority of the languages and paradigms below, so I hope you have as much fun learning about these new concepts as I did.

Note: I have only minimal experience with most of the languages below: I find the ideas behind them fascinating, but I claim no expertise in them, so please point out any errors or needed corrections. Also, if you've found any new paradigms and ideas not covered here, please share them!

Update: this post hit the front page of r/programming and HN. Thank you for the great feedback! I've added some corrections below.

Concurrent by default

Example languages: ANI, Plaid

Let's kick things off with a real mind bender: there are programming languages out there that are concurrent by default. That is, every line of code is executed in parallel!

For example, imagine you wrote three lines of code, A, B, and C:

In most programming languages, A would execute first, then B, and then C. In a language like ANI, A, B, and C would all execute at the same time!

Control flow or ordering between lines of code in ANI is merely a side effect of explicit dependencies between lines of code. For example, if B had a reference to a variable defined in A, then A and C would execute at the same time, and B would execute only after A finished.

Let's look at an example in ANI. As described in the tutorial, ANI programs consist of "pipes" and "latches" that are used to manipulate streams and data flows. The unusual syntax is tough to parse, and the language seems dead, but the concepts are pretty interesting.

Here's a "Hello World" example in ANI:

In ANI terminology, we are sending the "Hello, World!" object (a string) to the std.out stream. What happens if we send another string to std.out?

Both of these lines of code execute in parallel, so they could end up in any order in the console. Now, look what happens when we introduce a variable on one line and reference it later:

The first line declares a "latch" (latches are a bit like variables) called s that contains a string; the second line sends the text "Hello, World!" to s; the third line "unlatches" s and sends the contents to std.out. Here, you can see ANI's implicit program sequencing: since each line depends on the previous one, this code will execute in the order it is written.
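This dependency-driven ordering is easy to simulate outside ANI. Here's a sketch in plain Python (a hypothetical scheduler, not ANI itself): each "line" declares what it reads and what it writes, and runs as soon as everything it reads is available.

```python
# Hypothetical simulation of dependency-driven ordering: each "line"
# names the values it reads and the value it writes, and runs as soon
# as all of its inputs have been computed.
def run_dataflow(lines):
    done = {}            # name -> computed value
    pending = list(lines)
    order = []           # the order the lines actually ran in
    while pending:
        for line in pending:
            name, reads, fn = line
            if all(r in done for r in reads):
                done[name] = fn(*[done[r] for r in reads])
                order.append(name)
                pending.remove(line)
                break
    return order, done

# A defines a value; B reads it, so B must wait for A; C is independent.
lines = [
    ("B", ["A"], lambda a: a + 1),
    ("A", [],    lambda: 41),
    ("C", [],    lambda: "independent"),
]
order, results = run_dataflow(lines)
# B always runs after A; C could run at any point.
```

Written order doesn't matter here, only the data dependencies, which is the essence of the ANI model.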

The Plaid language also claims to support concurrency by default, but uses a permissions model, as described in this paper, to set up control flow. Plaid also explores other interesting concepts, such as Typestate-Oriented Programming, where state changes become a first class citizen of the language: you define objects not as classes, but as a series of states and transitions that can be checked by the compiler. This seems like an interesting take on exposing time as a first class language construct as discussed in Rich Hickey's Are we there yet talk.
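To make typestate more concrete, here's a runtime analogue in Python (a hypothetical File example; a typestate-aware compiler would reject the invalid transitions before the program ever runs, rather than at run time as here):

```python
class File:
    """Hypothetical typestate sketch: a File moves between the states
    'closed' -> 'open' -> 'closed'. A typestate compiler tracks this
    statically; in plain Python we can only check it at run time."""
    def __init__(self):
        self.state = "closed"

    def open(self):
        assert self.state == "closed", "can only open a closed file"
        self.state = "open"
        return self

    def read(self):
        assert self.state == "open", "can only read an open file"
        return "data"

    def close(self):
        assert self.state == "open", "can only close an open file"
        self.state = "closed"
        return self

f = File().open()
data = f.read()   # legal: the file is in the 'open' state
f.close()
# calling f.read() now would fail: the file is back in 'closed'
```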

Multicore is on the rise and concurrency is still harder than it should be in most languages. ANI and Plaid offer a fresh take on this problem that could lead to amazing performance gains; the question is whether "parallel by default" makes concurrency easier or harder to manage.

Update: the description above captures the basic essence of ANI and Plaid, but I used the terms "concurrent" and "parallel" interchangeably, even though they have different meanings. See Concurrency Is Not Parallelism for more info.

Dependent types


Example languages: Idris, Agda, Coq

You're probably used to type systems in languages like C and Java, where the compiler can check that a variable is an integer, list, or string. But what if your compiler could check that a variable is "a positive integer", "a list of length 2", or "a string that is a palindrome"?

This is the idea behind languages that support dependent types: you can specify types that can check the value of your variables at compile time. The shapeless library for Scala adds partial, experimental support (read: probably not ready for primetime) for dependent types to Scala and offers an easy way to see some examples.

Here is how you can declare a Vector that contains the values 1, 2, 3 with the shapeless library:

This creates a variable l1 whose type signature specifies not only that it's a Vector that contains Ints, but also that it is a Vector of length 3. The compiler can use this information to catch errors. Let's use the vAdd method in Vector to perform a pairwise addition between two Vectors:

The example above works fine because the type system knows both Vectors have length 3. However, if we tried to vAdd two Vectors of different lengths, we'd get an error at compile time instead of having to wait until run time!
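Python can't enforce this at compile time, but a runtime sketch conveys the idea (a hypothetical SizedVector class, not the actual shapeless API):

```python
class SizedVector:
    """Hypothetical runtime analogue of a length-indexed vector: the
    length is carried alongside the values and checked on every
    operation. A dependent type system does this check at compile time."""
    def __init__(self, *values):
        self.values = values
        self.length = len(values)

    def vAdd(self, other):
        # In a dependently typed language, a length mismatch would be
        # a compile-time error; here we can only fail at run time.
        if self.length != other.length:
            raise TypeError("length mismatch: %d vs %d"
                            % (self.length, other.length))
        return SizedVector(*[a + b for a, b in zip(self.values, other.values)])

l1 = SizedVector(1, 2, 3)
l2 = SizedVector(1, 2, 3)
l3 = l1.vAdd(l2)                 # OK: both vectors have length 3
# l1.vAdd(SizedVector(1, 2))     # would raise TypeError: lengths differ
```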

Shapeless is an amazing library, but from what I've seen, it's still a bit rough, only supports a subset of dependent typing, and leads to fairly verbose code and type signatures. Idris, on the other hand, makes types a first class member of the programming language, so the dependent type system seems much more powerful and clean. For a comparison, check out the Scala vs Idris: Dependent Types, Now and in the Future talk:


Formal verification methods have been around for a long time, but were often too cumbersome to be usable for general purpose programming. Dependent types in languages like Idris, and perhaps even Scala in the future, may offer lighter-weight and more practical alternatives that still dramatically increase the power of the type system in catching errors. Of course, no dependent type system can catch all errors due to inherent limitations stemming from the halting problem, but if done well, dependent types may be the next big leap for static type systems.

Concatenative languages

Example languages: Forth, cat, Joy

Ever wonder what it would be like to program without variables and function application? No? Me neither. But apparently some folks did, and they came up with concatenative programming. The idea is that everything in the language is a function that pushes data onto a stack or pops data off the stack; programs are built up almost exclusively through functional composition (concatenation is composition).

This sounds pretty abstract, so let's look at a simple example in cat:

Here, we push two numbers onto the stack and then call the + function, which pops both numbers off the stack and pushes the result of adding them back onto the stack: the output of the code is 5. Here's a slightly more interesting example:

Let's walk through this line by line:
  1. First, we declare a function foo. Note that functions in cat specify no input parameters: all parameters are implicitly read from the stack. 
  2. foo calls the < function, which pops the top item off the stack, compares it to 10, and pushes either True or False back onto the stack. 
  3. Next, we push the values 0 and 42 onto the stack: we wrap them in brackets to ensure they get pushed onto the stack unevaluated. This is because they will be used as the "then" and "else" branches (respectively) for the call to the if function on the next line. 
  4. The if function pops 3 items off the stack: the boolean condition, the "then" branch, and the "else" branch. Depending on the value of the boolean condition, it'll push the result of either the "then" or "else" branch back onto the stack. 
  5. Finally, we push 20 onto the stack and call the foo function.
  6. When all is said and done, we'll end up with the number 42.
 For a much more detailed introduction, check out The Joy of Concatenative Languages.
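To get a feel for the semantics, the walkthrough above can be simulated with a tiny stack machine (a Python sketch, not the real cat implementation):

```python
def run(program, stack):
    """Tiny stack machine mimicking the cat walkthrough above: numbers
    and quoted (bracketed) branches are pushed; everything else is an
    operator or a user-defined word."""
    for op in program:
        if isinstance(op, (int, list)):
            stack.append(op)                  # push unevaluated
        elif op == "<":
            b, a = stack.pop(), stack.pop()
            stack.append(a < b)               # e.g. 20 10 < pushes (20 < 10)
        elif op == "if":
            else_b, then_b, cond = stack.pop(), stack.pop(), stack.pop()
            run(then_b if cond else else_b, stack)
        else:
            run(WORDS[op], stack)             # call a user-defined word

# define foo { 10 < [0] [42] if }
WORDS = {"foo": [10, "<", [0], [42], "if"]}

stack = []
run([20, "foo"], stack)   # push 20, then call foo
# 20 < 10 is False, so the "else" branch leaves 42 on the stack
```

Note that foo takes no named parameters: it just consumes whatever happens to be on the stack, which is exactly the property discussed below.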

This style of programming has some interesting properties: programs can be split and concatenated in countless ways to create new programs; the syntax is remarkably minimal (even more minimal than LISP), which leads to very concise programs; and there is strong metaprogramming support. I found concatenative programming to be an eye opening thought experiment, but I'm not sold on its practicality. It seems like you have to remember or imagine the current state of the stack instead of being able to read it from the variable names in the code, which can make it hard to reason about the code.

Declarative programming

Example languages: Prolog, SQL

Declarative programming has been around for many years, but most programmers are still unaware of it as a concept. Here's the gist: in most mainstream languages, you describe how to solve a particular problem; in declarative languages, you merely describe the result you want, and the language itself figures out how to get there.

For example, if you're writing a sorting algorithm from scratch in C, you might write the instructions for merge sort, which describes, step by step, how to recursively split the data set in half and merge it back together in sorted order: here's an example. If you were sorting numbers in a declarative language like Prolog, you'd instead describe the output you want: "I want the same list of values, but each item at index i should be less than or equal to the item at index i + 1". Compare the previous C solution to this Prolog code:

If you've used SQL, you've done a form of declarative programming and may not have realized it: when you issue a query like select X from Y where Z, you are describing the data set you'd like to get back; it's the database engine that actually figures out how to execute the query. You can use the explain command in most databases to see the execution plan and figure out what happened under the hood.
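To make this concrete, here's a minimal example using Python's built-in sqlite3 module (the table and data are invented for illustration):

```python
import sqlite3

# Set up a throwaway in-memory database with some sample rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 30), ("bob", 17), ("carol", 45)])

# Declarative: describe the rows you want, not how to find them.
rows = conn.execute(
    "SELECT name FROM users WHERE age >= 18 ORDER BY name").fetchall()

# Ask the engine how it decided to execute the query.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM users WHERE age >= 18").fetchall()
```

The SELECT statement never mentions loops, scans, or indexes; those decisions show up only in the query plan.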

The beauty of declarative languages is that they allow you to work at a much higher level of abstraction: your job is just to describe the specification for the output you want. For example, the code for a simple sudoku solver in prolog just lists out what each row, column, and diagonal of a solved sudoku puzzle should look like:

Here is how you would run the sudoku solver above:

The downside, unfortunately, is that declarative programming languages can easily hit performance bottlenecks. The naive sorting algorithm above is likely O(n!); the sudoku solver above does a brute force search; and most developers have had to provide database hints and extra indices to avoid expensive and inefficient plans when executing SQL queries.
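To see where the O(n!) comes from, here's a direct generate-and-test rendering of the sorting specification (a Python sketch, not Prolog): the "program" is just the description of a sorted permutation.

```python
from itertools import permutations

def declarative_sort(xs):
    """Describe the result, not the algorithm: the answer is whichever
    permutation of xs satisfies 'each item is <= the next one'. A naive
    solver like this one tries up to n! candidates."""
    for p in permutations(xs):
        if all(p[i] <= p[i + 1] for i in range(len(p) - 1)):
            return list(p)

declarative_sort([3, 1, 2])
```

This is why declarative engines need clever search strategies (or hints and indices, in the SQL case) to stay fast.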

Symbolic programming


Example languages: Aurora

The Aurora language is an example of symbolic programming: the "code" you write in these languages can include not only plain text, but also images, math equations, graphs, charts, and more. This allows you to manipulate and describe a large variety of data in the format native to that data, instead of describing it all in text. Aurora is also completely interactive, showing you the results from each line of code instantly, like a REPL on steroids.


The Aurora language was created by Chris Granger, who also built the Light Table IDE. Chris outlines the motivation for Aurora in his post Toward a better programming: some of the goals are to make programming more observable, direct, and reduce incidental complexity. For more info, be sure to see Bret Victor's incredible talks: Inventing on Principle, Media for Thinking the Unthinkable, and Learnable Programming.

Update: "symbolic programming" is probably not the right term to use for Aurora. See the Symbolic programming wiki for more info.

Knowledge-based programming


Example languages: Wolfram Language

Much like the Aurora language mentioned above, the Wolfram Language is also based on symbolic programming. However, the symbolic layer is merely a way to provide a consistent interface to the core of the Wolfram Language, which is knowledge-based programming: built into the language is a vast array of libraries, algorithms, and data. This makes it easy to do everything from graphing your Facebook connections to manipulating images, looking up the weather, processing natural language queries, plotting directions on a map, and solving mathematical equations.


I suspect the Wolfram Language has the largest "standard library" and data set of any language in existence. I'm also excited by the idea that Internet connectivity is an inherent part of writing the code: it's almost like an IDE where the auto-complete function does a Google search. It'll be very interesting to see if the symbolic programming model is as flexible as Wolfram claims and can truly take advantage of all of this data.

Update: although Wolfram claims the Wolfram Language supports "symbolic programming" and "knowledge programming", these terms have slightly different definitions. See the Knowledge level and Symbolic Programming wikis for more info. 

Wednesday, April 2, 2014

So long, and thanks for all the t-shirts


In 2009, I joined LinkedIn as a Software Engineer. 5 years, 25 t-shirts, 50 hackdays, 4000 employees, several hundred million members, a billion dollars in revenue, and 1 IPO later, I'm moving on to my next play. My last day will be in a couple weeks, and for the next few months after that, I'm going to relax, travel, and think.

LinkedIn transformed my career. I got the chance to work on amazing projects, including LinkedIn Recruiter, Hackdays, Resume Builder, the LinkedIn Platform, the LinkedIn Engineering Blog, Incubator, Play at LinkedIn, LinkedIn Open Source, and much more. I learned many new technologies, saw amazing talks, and traveled the world. But most importantly, I got to work with an incredible group of people. Relationships matter, and the relationships I built at LinkedIn will be with me for the rest of my career.

Thank you to everyone who made it possible.

I leave you with some photos that capture a few of the highlights of my time at LinkedIn.

We hit 50 million members shortly after I joined (October, 2009)
Holiday Party in San Francisco (December, 2009)
The monthly hackdays were an amazing way to learn and grow

Launched a redesigned LinkedIn Recruiter (February, 2010)
My desk (December, 2010)
Winners of LinkedIn's first Innovator Challenge in February, 2011
The IT team put on incredible parties, such as this Tron party in April, 2011

The first Intern Hackday (July, 2011)

100 million members (March, 2011)
The IPO (May, 2011)
Launched the LinkedIn Engineering Blog (June, 2011)
Launched Apply with LinkedIn (July, 2011)
Town Hall with President Obama (September, 2011)
Talent Connect Conference in Vegas (October, 2011)
Halloween at the office (October, 2011)
Veteran's Hackday (November, 2011)
Project Inversion (November, 2011)
Holiday party at Giants' Stadium (December, 2011)
My desk (December, 2011)
Celebrating 150 million members (February, 2012)
Linux at LinkedIn t-shirt (February, 2012)
The LinkedIn gym, complete with bumper plates (March, 2012)
A nice gift from Reid (March, 2012)
The first DevelopHer Hackday (June, 2012)

A visit to LinkedIn's NYC office in the Empire State Building (July, 2012)
The Berlin Hackday (October, 2012)
Launched LinkedIn Incubator (December, 2012)

200 million members (January, 2013)
Behind the scenes (January, 2013)
Toronto Hackday (February, 2013)
Jeff gives out iPads to all employees (February, 2013)
Announcing the Play Framework at LinkedIn (February, 2013)

Hackday and Incubator presentation in Latvia (February, 2013)
Matthew models a few of the t-shirts we've collected over the years (April, 2013)
Innovating (April, 2013)
Our new horizontally scalable infrastructure (June, 2013) 
Amsterdam Hackday (April, 2013) 
Hackers (April, 2013)
New cafe opens and we get amazing breakfast every day (August, 2013)
The lunch ain't bad either (August, 2013)
Dreamer Hackathon (November, 2013)
Play t-shirts (August, 2013)
Play keynote at Ping Conference in Budapest (January, 2014)
Engineering at LinkedIn (February, 2014)

Eat. Hack. Sleep.


Monday, March 10, 2014

The Ultimate Guide to Getting Started with the Play Framework

http://www.playframework.com/

Are you trying to get started with the Play Framework? Struggling to wrap your head around Futures, SBT, Scala, Functional Programming, or Iteratees? Then you've come to the right place.

This post is a collection of the best resources I've found for getting started with Play. I've broken it down by category to make it easier to browse and jump to the topic you're most interested in.

The list below is not meant to be comprehensive documentation, but rather, a collection of resources that cover the main areas where new Play users tend to get stuck. If you've got any great resources that are missing from the list below, leave a comment!

Introduction to Play

  1. Introduction to Play Framework for Java Developers: official video intro to building apps with Play.
  2. Play Framework Documentation: the official docs are a must read to get a solid starting point.
  3. The Play Framework at LinkedIn: Performance and Productivity at Scale: video intro to Play and why LinkedIn uses it. See the accompanying slides and blog post.
  4. Typesafe Activator: very easy way to get started with Play. A simple script you run to generate Play app skeletons from the many available templates - including Hello Play (Scala), Hello Play (Java), Realtime and Reactive Play apps, Play with Slick, and Play with AngularJS - and an in-browser UI that interactively walks you through changing, running, and testing those apps.
Scala and Functional Programming

  1. Scala Documentation: lots of good resources for learning Scala.
  2. Scala API docs: expect to spend a lot of time reading these.
  3. Functional Programming Principles in Scala: terrific Coursera course on functional programming basics with Scala, taught by Scala creator Martin Odersky.
  4. http://www.simplyscala.com: learn Scala in the browser.
  5. http://scalatutorials.com: another in-browser introduction to Scala.
  6. Twitter Scala School: great series of language guides and best practices.
  7. 10 recipes for turning imperative Java code into functional Scala code: a mini guide for translating your Java code into equivalent idiomatic Scala code.
  8. Play Framework: Democratizing Functional Programming for modern Web Programmers: functional programming and the motivation behind Play 2.0.
Non-blocking I/O, concurrency

  1. Play Framework: async I/O with Java and Scala: a guide to writing async code with Play, with examples in both Java and Scala.
  2. Play Framework: async I/O without the thread pool and callback hell: an overview of why Play uses non-blocking I/O and how to manage non-blocking code without callbacks.
  3. Play is for Performance: a great talk about Play performance; remember kids, async isn't faster, but it can be more efficient with resources.
  4. Scala Futures: official docs on Scala Futures.
  5. Akka documentation: Play is built on top of Akka, which provides abstractions for managing concurrency, many of which Play uses under the hood and you can use in your own apps.
Real time web, streaming, Iteratees

  1. Play, Scala, and Iteratees vs. Node.js, JavaScript, and Socket.io: a side by side comparison of building the same websockets app with Play/Iteratees on one side and Node.js/Socket.io on the other. 
  2. Composable and Streamable Play apps: video intro to how to break Play apps down into composable pieces and significantly reduce page load time by using BigPipe style streaming with Enumerators. See the accompanying slides and code.
  3. Functional I/O with Play Iteratees: video intro on how Iteratees work.
  4. Non-blocking, composable and reactive realtime web: a video with lots of examples of how to build realtime web apps on top of Play.
  5. Understanding Play2 Iteratees for Normal Humans: an intro to Iteratees that's more human-friendly than the official Play Iteratee docs.
Build system, SBT, and deployment

  1. SBT in Action: the best intro to SBT I've seen. Must watch if you're getting started (video). 
  2. SBT documentation: the official documentation is dense, but important. The Getting Started Guide is required reading, or you will be very confused.
  3. SBT keys: the list of all settings and tasks built into SBT. Very useful for figuring out what you can tweak and what hooks are available.
  4. SBT Defaults: the default values for all the settings and tasks built into SBT.
  5. Play Keys: the list of all custom settings and tasks that Play adds on top of SBT's defaults.
  6. Play Settings: the default values for the custom settings and tasks Play adds to SBT.  
  7. Hooking in to Play! Framework’s sbt plugin lifecycle: nice guide on a few SBT hooks provided by Play.
  8. Separate multi-project deployment packages in Play! Framework: how to break a Play app into multiple SBT projects.
  9. Deploying Play apps at Coursera: how Coursera deploys its Play apps to Amazon EC2 (video).
  10. Creating a standalone version of your application: official documentation on how to turn your Play app into a standalone package.
App structure, injection, plugins

  1. Structure your Play app with the Cake Pattern: a step by step introduction of using Scala's Cake Pattern to inject dependencies in your Play apps without any extra libraries.
  2. Using Guice with Play! Framework 2.1 for easy Dependency Injection: an intro to using Guice to inject dependencies into your Play app.
  3. Play Framework Modules: a list of open source modules for Play.
  4. Writing modules for Play 2: a guide to creating new modules for Play.
  5. Writing a Play 2.0 Module: a guide to creating Play plugins.
Database access

  1. Slick documentation: the official docs for Slick, which is the recommended DB library for Scala.
  2. Managing database evolutions: official docs on how Play manages DB schemas.
  3. Using Scala Slick at FortyTwo: how FortyTwo uses Slick in their Play apps.
  4. Configuring Play's thread pools: Play is built for non-blocking I/O, but all JDBC libraries are blocking, so this guide is a must-read on how to configure your Play app if you're doing blocking DB queries. 
Where to get more info

  1. Play Framework mailing list: very active google group plus mailing list that is great for discussions.
  2. StackOverflow: ask and answer all your Play related questions.
  3. Play Framework source code: use Play's github repo to browse the source code, file bugs, and submit pull requests.
  4. Commercial support from Typesafe: the folks at Typesafe provide terrific Play expertise.