From Mercurial to Git

December 1st, 2011 - Brendon Rapp

In a previous post, I detailed how we converted Subversion repositories into Git repositories.

As I mentioned in that post, we also used Mercurial before moving to Git. In the interest of consolidating everything to one VCS, we needed to convert these as well.

We came across a mention on the Git wiki of a tool called hg-to-git, which, by all appearances, has only been released to the world in the form of a mailing list post: Mercurial to git converter.

We didn’t anticipate much success with this years-old tool copy-pasted to the mailing list, so we turned our efforts elsewhere.

Most blog posts out there suggest using fast-export.

At the time, when we tried running fast-export (on OS X as well as Linux), it invariably failed and spat out an error:

AttributeError: 'httprepository' object has no attribute 'branchtags'

Interestingly, searching Google for this error yielded just one hit, a Github issue, which went unresolved. However, this issue did close with the recommendation to move on to a newer tool, a Mercurial extension called hg-git, which is where we headed next.

Spoiler alert – hg-git is where we finally achieved success.

Installation

The following are the exact steps we took to use hg-git, on a machine running Ubuntu 10.10:

1. Install dependencies

Ensure setuptools and the Python 2.6 header files are installed:

$ sudo aptitude install python-setuptools python2.6-dev

2. Install hg-git

Install hg-git with setuptools

$ sudo easy_install hg-git

3. Enable hg-git as a Mercurial extension

Edit ~/.hgrc and add the lines:

[extensions]
hgext.bookmarks =
hggit =

You can confirm that it is installed correctly by running “hg help” and looking underneath “enabled extensions:”. You should see “bookmarks” and “hggit”.
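
For reference, the tail of the “hg help” output should look something like this (an abbreviated excerpt – the exact descriptions may vary by Mercurial and hg-git version):

$ hg help
...
enabled extensions:

 bookmarks  track a line of development with movable markers
 hggit      push and pull from a Git server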

Conversion

Converting a repo to Git involves:

1. Create a Git repository to push to

We were using Gitosis to manage our Git installation, so the first step was to create a new repo in gitosis.conf. The details of setting up a Git repo are beyond the scope of this article – if you’re doing this, I assume you have a working Git installation or are using a hosted service, and already know how to generate a new remote repo.

2. Check out the Mercurial repository

$ hg clone http://path-to-your-repo

3. Make a bookmark to tell Git which branch is “master”

The main branch of a Git repository is called “master”. This isn’t true of Mercurial. So, for the import, we need to add a bookmark of the name “master”, which hg-git will use to identify what the Git “master” branch should be.

In most cases, this will be the Hg default branch, “default”.

$ hg bookmark -r default master

Note: in the case of one Hg repo which had originally been a Subversion repo itself, there was no “default” branch. Instead, the branch name “trunk” had been preserved in whatever conversion process we had done originally. So in that case, we ran…

$ hg bookmark -r trunk master

… which let Git know that “trunk” is what should be made into the master branch.
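
Either way, you can double-check that the bookmark landed on the right branch before pushing (the revision number and hash below are just placeholders):

$ hg bookmarks
   master                    742:1d4f5a8c09b3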

4. Run the push command

$ hg push git+ssh://(git repo location)

Note the git+ssh:// prefix. Normally when using git push over ssh, we just use user@hostname:repo-name.git, and the git+ssh is implied. For the “hg push” command, however, it was necessary to state it explicitly.
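
So, for a hypothetical repo named “myproject” on a Gitosis server at git.example.com, the full push command would look something like:

$ hg push git+ssh://git@git.example.com/myproject.git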

And that’s it. Once the push completes, the remote Git repo has everything it needs, and can be interacted with purely through Git from then on.

It was pretty impressive to see a repository get converted from Subversion to Mercurial to Git and maintain its history, branches and tags. We’re happy Git users now, and I’m thankful for how smooth both svn2git and hg-git made that transition.

A Good Drive is Hard to Find

November 29th, 2011 - Brendon Rapp

It’s rare to see the cost of a piece of computing equipment ever go up. Parts and systems are usually at their most expensive the day they’re released, and then proceed to drop sharply until they are no longer made. It’s the unyielding march of technology – Moore’s Law in action.

This past month, however, has seen skyrocketing prices of a critical PC component: the hard drive.

The reason? A large percentage of hard drive production takes place in Thailand, which has been hit by monsoon season flooding.

An aerial view of a flooded Bangkok

The floods, which began in late July, are now slowly receding. Hard drive production has been impacted since then, but it has taken until now for existing inventory to dry up and for prices to rise.

According to TechRadar, prices have risen as much as 150% and a shortfall of 70 million drives is projected for the final quarter of 2011.

Many of us now rely less on our own local storage and more on cloud storage, and those services are large consumers of hard drives. Analysts are forecasting problems as we head into 2012 and those services find it more difficult to meet their drive demands.

The flood’s impact on the tech industry is, of course, merely a secondary story to the loss of life and the hardship of those affected. So far, 621 people have died as a result of the flooding, and far more have suffered from the destruction of their homes and communities. If you would like to help, donations can be made directly to the Thai Red Cross (link goes to the Thai Red Cross’s English language page).

The Two Questions

November 23rd, 2011 - Brendon Rapp

Not those two questions

When posting job openings, especially on public forums like Craigslist, we are flooded with responses. These responses tend to fall into three categories:

1. Obvious Spam

Why iPhone scares and beats its competitor?

Broken English, a nonsensical introduction, and a random soup of claimed expertise – all the hallmarks of Obvious Spam. This is my favorite spam response, which we’ve received multiple times, from a different domain name each time. I still chuckle at “Why iPhone scares and beats its competitor?”

Despite adding language to my postings to explicitly state that we’re looking to fill a staff position, these spammy responses from freelancers and agencies come in every time. There’s little you can do about them, particularly when posting to somewhere like Craigslist. Simply file them into the Rejects folder and move on.

2. Resume Blasts

Some responses actually come from individuals, but are stock messages that simply get blasted to every job that gets posted in a Craigslist category (or, worse, every job that gets posted, period).

These respondents generally aren’t actually reading what they’re responding to, but rather are attacking their joblessness problem with the shotgun approach.

3. Legitimate Responses

These are the responses you want – applicants who responded specifically to your job, and (hopefully) are sufficiently qualified for consideration.

The Problem

Obvious spam is usually easy enough to spot, but it takes closer inspection to separate the mass-blasted responses from the applicants who actually read your posting, particularly if a mass-blasted response falls within your posting’s general area (or if the auto-blaster cleverly mined some keywords from the post).

Hiring is enough of a hassle as it is – time spent on it is time not spent Getting Work Done and knocking things off my task list. After becoming annoyed with the time spent digging through responses trying to find the valid ones, I implemented a very simple test.

The Solution – The Two Questions

At the bottom of my postings, along with the request for a resume, I ask two simple questions:

1) What software tools do you typically use for development?
2) How do you keep up with and learn about new development technologies?

The purpose of the questions is multi-faceted, but the most immediate thing they do is make the first pass of filtering responses a lot easier. I simply skim in search of answers to the questions. If they’re there, the person took the time to actually read what they were responding to, and their response merits being read in full.

If the answers are not there, the person is simply mass replying. Or, equally bad, they responded specifically to this post but could not be bothered to read it (or are incapable of following simple directions). It’s a very quick and easy filter, which our creative director has taken and adapted for her job postings as well.

If you’re reading this and saying, “Van Halen brown M&Ms contract rider!”, good on you. Subconsciously, that was probably an inspiration, although I didn’t make the connection until after I had started using it.

I chose these questions because they’re simple to answer, but reveal a lot about the respondent. A developer who doesn’t have much to say about their tools likely hasn’t done much rigorous work, or has simply not cared enough to use anything beyond what they’ve been handed. Also, I’m looking for developers who are constantly learning and staying current, rather than someone who has learned one trick and sticks with it. From those responses, I can generally sort out the non-hackers from the ones that merit interviewing.

Not Bad, For a Start

We haven’t yet jumped into crafting cool little challenges for prospective applicants, as companies like Bandcamp and Instagram have done, but maybe that’s the next step. (When I have time. Whenever that is.)

For a very simple, no-effort thing to attach to a job posting, however, this technique has really paid off. It’s little extra burden on the applicant (most seem to enjoy it) and it makes the garbage responses easier to filter out at a glance.

Google Apps SMTP without SSL, part 2

November 19th, 2011 - Brendon Rapp

In part 1, I talked about how we were able to use Google’s servers for SMTP without SSL, so that our firewall appliance would be able to send logs and notifications.

The upside to the approach in part 1 is that it required nothing but using a different SMTP server name and port. The downsides, however, are that the messages are being sent over the wire “in the clear” (unencrypted), and that the account being used to send mail doesn’t record the outgoing messages in its Sent Mail folder (which is handy for confirming that messages are being sent, if there is a problem with receiving them).

So, for the next network-enabled device we encountered that lacked SSL support for SMTP, I took a different route.

On an internal server, I set up stunnel – an SSL tunneling proxy. With it, I was able to make this server act as a go-between for this networked device and Google’s SSL-requiring SMTP server.

Here’s the relevant section of the stunnel.conf file, which opens a listening port (225) on the server and establishes the SSL tunnel to Google’s SMTP server:

# in /etc/stunnel/stunnel.conf
[ssmtp]
accept = 225
connect = smtp.gmail.com:465

On the device doing the sending, I filled in all configuration settings as normal for using Gmail’s SMTP (authentication, etc), but changed the SMTP server address to my tunneling server’s IP address on our internal network, and set the port number to the port I opened with stunnel (225, in the above instance).
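
In other words, where the device’s mail settings would normally point straight at Gmail, they now point at the tunnel box (the addresses here are hypothetical):

SMTP server: 192.168.0.10 (the internal server running stunnel)
SMTP port: 225
Username/password: your Google Apps credentials, as usual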

I fired up the stunnel daemon on the server, and the SSL-challenged device was able to send mail at will. It was pretty surprising how relatively painless the setup was (the only issue I encountered was the stunnel.conf being very touchy about syntax), and how transparent the solution was once in place. Frankly, I forget that it’s there until I do something else on that server and see the daemon running.
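
An easy way to sanity-check the tunnel is to connect to the local port and look for Gmail’s SMTP banner – stunnel takes care of the SSL half, so a plain telnet session works (banner abbreviated here):

$ telnet localhost 225
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
220 smtp.gmail.com ESMTP ...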

So, there’s another solution for using Gmail/Google Apps’ SMTP service on non-SSL capable devices. This one requires a server to be online and running the stunnel daemon at all times that you want the ability to send mail, but you regain the benefit of having your outgoing mail going over the wire from you to Google through an encrypted connection.

Default VirtualHost in Apache

November 15th, 2011 - Brendon Rapp

We host multiple sites per server. One issue I ran into recently involved a DNS address record (an “A” record) pointing to our web server with a name that was no longer being served by any VirtualHost.

What was happening was that a site on one of our VirtualHosts was appearing when someone attempted to browse to that address! Obviously, we don’t want any of our sites appearing under a different name than the one defined as its ServerName.

The issue here is how Apache matches requests to server names. Let’s say that www.example.com points to our web server, but there’s no longer a VirtualHost in Apache with that ServerName. When the user browses to www.example.com, DNS turns that name into an IP address – the IP address of our web server. The request goes to our webserver, asking, “hey, give me www.example.com”.

Apache attempts to find a VirtualHost with a ServerName or ServerAlias that matches www.example.com. If it fails to find a match, Apache serves up the first VirtualHost on that port.

An easy way to deal with this is to define a new VirtualHost to catch these wayward requests, and put it at the front of the line. VirtualHost configs are loaded in the order of their filenames, so simply creating a “000default” file with a VirtualHost pointing to a landing page does the trick.
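
Here’s a minimal sketch of such a catch-all VirtualHost (names and paths are illustrative, not our actual config):

# in /etc/apache2/sites-enabled/000default
<VirtualHost *:80>
    ServerName catchall.example.com
    DocumentRoot /var/www/default-landing-page
</VirtualHost>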

Using JSHint to improve JavaScript code quality

November 11th, 2011 - Brendon Rapp

JavaScript can be a harsh mistress. One of the best books on JavaScript is based on the idea of using the language’s good parts and avoiding the plentiful “bad parts”. One startup recently learned this lesson the hard way, when a simple missing ‘var’ statement ground their big launch to a halt.

One tool that can help with this is JSHint. JSHint is a JavaScript code quality tool. It is a community-driven fork of JSLint, which has fallen out of favor due to a growing divergence between the style opinions of its creator and the community at large. If this factoid interests you, see Anton Kovalyov’s blog post, “Why I Forked JSLint to JSHint”.

JSHint will parse JavaScript code and flag various syntax and semantic errors. It will find issues that are technically valid JavaScript but likely to cause problems or reflect poor style.

JSHint exists primarily as a website to paste your code into for checking, but copy-pasting code is hardly convenient. There is, however, a command-line interface for JSHint, powered by Node.js. We will install this so that we can run the JSHint tool locally.

These instructions are for Mac OS X with Homebrew, but should be easily adaptable to other platforms.

1. Install Node.js

$ brew install node

2. Install NPM – the Node Package Manager
If we try to install through Homebrew, it tells us:

$ brew install npm
npm can be installed thusly by following the instructions at http://npmjs.org/

To do it in one line, use this command:
curl http://npmjs.org/install.sh | sh

So, let’s do that.

$ curl http://npmjs.org/install.sh | sh

3. Install JSHint using NPM
We’ll use the -g flag to install globally (i.e. to /usr/local/bin):

$ npm install jshint -g

OK, now we’ve got JSHint installed. We can run it from the command line like so:

$ jshint my-script.js
my-script.js: line 15, col 61, Missing semicolon.
my-script.js: line 31, col 84, Don't make functions within a loop.
my-script.js: line 98, col 25, Bad for in variable 'index'.
my-script.js: line 191, col 23, Expected a conditional expression and instead saw an assignment.
my-script.js: line 208, col 41, Use '===' to compare with '0'.
my-script.js: line 227, col 37, Bad escapement.
my-script.js: line 421, col 2, Mixed spaces and tabs.
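
JSHint’s checks can also be tuned per-file with special comments at the top of a script. For example, something like the following enables a couple of stricter checks and declares jQuery as a known global so its use isn’t flagged (a small sketch – see the JSHint docs for the full option list):

/*jshint eqeqeq: true, curly: true */
/*global jQuery: false */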

All of this gets us started, but checking scripts manually from the command line, while better than copy-pasting to a webpage, still isn’t particularly convenient. It would be much nicer if we could hook JSHint into our editor and make it part of our workflow.

Many editors do indeed have ways of doing exactly this:

Vim

jshint.vim allows you to run JSHint from within Vim. It will open the JSHint results in a window split, and selecting the error in the JSHint window will allow you to jump to the corresponding line in your edit buffer.

Syntastic is a plugin that supports various code quality tools, and JSHint is one of the supported tools. Syntastic automatically detects the presence of JSHint on the system, and will Just Work once you enable the plugin’s behavior in your Vim config.
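
For reference, these are the sort of lines you’d add to your Vim config to get the error signs and the location list (option names may differ between Syntastic versions – check its docs):

" in ~/.vimrc
let g:syntastic_enable_signs = 1
let g:syntastic_auto_loc_list = 1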

Sublime Text 2

SublimeLinter is a plugin for Sublime Text 2 that facilitates running various “lint”-style code quality tools from Sublime Text. JSHint is one of the supported tools.

TextMate

JsLintMate is a TextMate plugin for running JSHint (or JSLint) from within TextMate. Install and hit Ctrl+Shift+L to trigger JSHint.

Notepad++

Notepad++ doesn’t appear to have a plugin supporting JSHint, but does have a JSLint plugin.

Emacs

My Vim fandom prevents me from recognizing Emacs as a valid editor choice, but Emacs users are people too, and so they get jshint-mode.

Big Fat IDEs

If you prefer IDEs to text editors, there’s a plugin for Visual Studio, and the PhoneGap mobile web development plugin for Eclipse adds JSHint functionality to Eclipse. I couldn’t find much for NetBeans, outside of a blog post (“Integrating JSLint More Tightly into NetBeans”) on using the jslint4java Java wrapper for JSLint.

Whatever your code-writing preference, JSHint should be fairly easy to integrate into your workflow. Personally, I am using it with Vim and the Syntastic plugin, and having the JSHint messages pop up whenever I save a JavaScript file with warnings/errors is incredibly convenient. It is now just another automatic tool in my development toolkit.

Moving from Subversion to Git

November 8th, 2011 - Brendon Rapp

Like many businesses that have grown into software development, Jaguar began with a fairly simple dev tool stack that has grown gradually since then. One of the first things I did upon taking this job was to establish a version control server. Distributed systems like Git were starting to grow in popularity, but for simplicity befitting my own inexperience, I opted to go with the tried-and-true choice, Subversion.

Subversion served us well, but over time, we’ve run into the usual set of issues:

  • Merging is messy
  • Branches being part of the filesystem is inelegant, ugly clutter
  • No offline functionality
  • “.svn” folders everywhere, all over the brand new rug

After having gained the experience of a couple years of SVN usage, I decided it was time to look at the new breed of alternatives. Specifically, I wanted to embrace “branchy” development workflows.

We experimented with Mercurial for a while, but we settled on Git. Git performed better with large repos that Mercurial would choke on (at that time, at least), and while Git originally lacked a native Windows port, the C rewrite and msysgit have solved that issue.

That brought us to the problem of this blog post: how to get Subversion repositories into our new Git setup.

There are a number of articles on this topic, but there is a better, less-involved way to get the job done.

Of course, we could always just create a new Git repository, check out everything from SVN (or, if we’re smart, use svn export, so that we don’t get all those darn .svn/ folders), dump it into the newly created Git repo, and call it done. But if we do this, we lose our commit history, our branches and tags, and everything else VCS-related from the repo’s past SVN life.

The tool we used was svn2git, a Ruby gem that provides a command-line tool for easy conversion of SVN repos to Git repos, bringing all the history and branches along for the ride.

To convert a repo, we simply do:

$ svn2git http://svn.example.com/path/to/repo

The svn2git command by default assumes a standard SVN layout: three subfolders named “trunk”, “branches”, and “tags”.

However, if this is not the case, we can communicate this to svn2git with command line switches:

$ svn2git http://svn.example.com/path/to/repo --trunk trunk --nobranches --notags

And if there are no subfolders at all, but rather the repo is in the root folder itself, there’s a switch for that too:

$ svn2git http://svn.example.com/path/to/repo --rootistrunk

There are even more settings, including the ability to supply passwords or filter out files in the conversion process, which are explained in the svn2git README.

Here at Jaguar, we had two basic repo structures to convert: the standard trunk/branches/tags layout, and the “root is trunk” layout, where the content is all in the root folder and not in any organizational subfolders.

In order to help preserve history, we needed an authors file. This will map SVN usernames to identities as used in Git.

# authors.txt
joe = Joe User
tom = Tom Selleck

You can either put this file somewhere world-readable and point to it with the --authors switch (i.e. --authors ~/authors.txt), or store it as ~/.svn2git/authors (no .txt extension) and svn2git will load it automatically.
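
For example, keeping the file in your home directory and passing it explicitly:

$ svn2git http://svn.example.com/path/to/repo --authors ~/authors.txt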

NOTE: svn2git turns your current working directory into a Git repo containing the files from the SVN repo you’re pulling from. That means you probably want to create an empty directory for your project and change to that directory before running svn2git. Make sure you’re *not* running “git init” – svn2git does this itself, so your directory should not be made into a Git repo ahead of time.

The step-by-step repo conversion process breaks down to this:

1. svn2git checks out into your current directory, so make a new directory for your project and change to it

$ mkdir myproject
$ cd myproject

2. Run your svn2git command

$ svn2git (svn-path) (svn2git options)

3. Add remote for destination central git repo

$ git remote add origin (git-path)

4. Push to origin/master

$ git push origin master

At the end of Step 2, you will have a fully functional Git repo on your system. If you aren’t trying to move the repo to a remote Git server, you can skip steps 3 and 4. These steps aren’t specific to using svn2git, but are simply what you do in order to get a local Git repo moved over to a remote server.
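
Once the conversion finishes, it’s worth a quick sanity check that the history, branches, and tags actually made the trip:

$ git log --oneline | tail    # the oldest commits, from the repo’s SVN days
$ git branch -a
$ git tag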

It has actually been over a year since we used svn2git, but the project remains in active development. svn2git 2.0 appears to have added a feature for actively mirroring SVN repos to Git, as opposed to just doing a one-time conversion.

No One is Immune from QA Slip-Ups

November 2nd, 2011 - Brendon Rapp

When I hit the final Submit button to push our first iPhone application up to Apple for review, I was nervous. Had I remembered everything? Had I done it all correctly? The confusing way that Apple handles certificates and signing your binaries did not help.

Happily, the app submission was correct, and the working application showed up on the App Store a few days later. Still, the trepidation of being the final person to touch something before pushing it live is very real, especially when pushing to an environment where you don’t have the control to make an instant fix if something goes wrong.

It appears that no one is too big to get bitten by a missed last-minute error. Today, Google finally released a native Gmail client for iOS to the App Store. However, the release contained a bug which, along with breaking notification functionality, generated an error message for every user when first running the application:

Earlier today we launched a new Gmail app for iOS. Unfortunately, it contained a bug which broke notifications and caused users to see an error message when first opening the app. We’ve removed the app while we correct the problem, and we’re working to bring you a new version soon.

I don’t know if this makes me feel better, or simply justified all over again for feeling nervous.

QA (Quality Assurance) is an ongoing process. As we have grown at Jaguar, our QA practices have required revisiting and revising. We have added to our practices incrementally, and we continue to add to them. The challenge is in introducing changes with minimal disruption. It used to be a no-brainer to make changes during quieter times between large projects, but we don’t seem to have those anymore! (A good problem to have, but a challenge for revising practices.)

Much like your technology stack, your QA processes aren’t something you can just “set and forget”. They’re either being continuously improved, or they’re decaying.

Try Out Programming Languages from Your Browser

October 25th, 2011 - Brendon Rapp

repl.it – that’s REPL as in Read-Eval-Print-Loop – is a neat online project which allows users to play around with various programming language interpreters directly from their browser.

Each language runs in an interpreter built on top of JavaScript, and the interpreters run entirely client-side in the browser’s JavaScript environment. repl.it currently supports 16 languages, including Ruby, Python, Scheme, JavaScript, Lua, some old favorites like QBasic, and even some toy languages like LOLCODE.

The project is open source and all the code is available on GitHub. Many of the interpreters are written using Emscripten, an LLVM bytecode to JavaScript compiler.

Goodbye, John McCarthy

October 24th, 2011 - Brendon Rapp

The field of computer science is not having a very good month.

John McCarthy, pioneer in the field of artificial intelligence and inventor of the Lisp programming language, has passed away.

“Uncle” John McCarthy was a legendary figure at MIT during an era thoroughly chronicled in the book Hackers: Heroes of the Computer Revolution. It was during his time at MIT that McCarthy created the Lisp language, which remains one of the oldest programming languages still in use today. Along with Marvin Minsky, McCarthy founded what would later be known as the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL). McCarthy later started the Stanford Artificial Intelligence Laboratory (SAIL) when he left MIT to become a professor at Stanford.

Beyond his AI work and Lisp itself, McCarthy was behind some important ideas in computer science. McCarthy invented garbage collection (as part of his development of Lisp) and conducted the initial research on time-sharing systems. Today, most computer users run multi-user operating systems, and many modern programming languages implement garbage collection. McCarthy’s 1961 speech on time-sharing systems, in which he compared computing time to utilities like electricity and water, very closely anticipated the modern world of cloud computing and services like Amazon AWS.

Lisp and its many dialects remain dominant in the field of artificial intelligence, and people continue to develop new Lisp dialects (a notable recent example being Clojure).

Paul Graham’s Roots of Lisp essay is required reading today.

Here is a 1984 broadcast featuring McCarthy along with other prominent computer scientists in the AI field:

And here is McCarthy speaking at the SAIL (Stanford AI Lab) Reunion, talking about the origins of SAIL: