Smart terminal, dumb terminal

November 21, 2009

Saw this in my Twitter feed today (retweeted by @andrew_chen):

@sachinrekhi: Funny thing about Chrome OS is that it's analogous to going back to the days of Terminals where data & computation was server side

I don't know a lot about the history of computing, but haven't there been major shifts back and forth over the decades? The terminology I've heard includes "dumb terminal" vs. "smart terminal", "thin client" vs. "thick client", and "centralized" vs. "decentralized" computing. There's always been a major question of how much computation and data should be at the endpoints, and how much in relatively centralized resources, whether it's mainframes and their terminals, Web apps and browsers, or mobile apps running on smartphones.

I'd guess such shifts are natural, depending on the availability and cost of bandwidth, storage, memory, and CPU. Will they continue? The Web enables a hybrid approach that seems to provide the best of both worlds: apps and data live in the cloud, but some code (JavaScript, Flash) is downloaded and run by the endpoints. Hosting apps centrally affords many advantages, especially the ability to deploy software updates almost instantly, by a process that is usually effortless or even invisible to the user. Downloading some code (especially UI code) allows maximum performance and responsiveness, but—this is critical—that code, too, can be updated instantly and seamlessly. (Projects like Google Gears are even providing offline access without sacrificing this essential advantage. I love offline access in Gmail; for me, it was the biggest feature Gmail lacked compared to Outlook. Most services, such as Dropbox and Evernote, still achieve offline access using desktop clients, but I suspect this will change eventually.)
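
To make that split concrete, here is a minimal sketch of the hybrid model in browser JavaScript. The endpoint, element id, and message format are made up for illustration and don't belong to Gmail, Gears, or any other service mentioned here; the point is simply that the script below is code the browser downloads and runs, while the data it renders stays on the server until requested.

```javascript
// Hypothetical sketch of the hybrid model: UI code runs client-side for
// responsiveness, while the data and application logic stay server-side.
// "/api/messages" and the "messages" element are illustrative, not real.
function loadMessages() {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/messages", true); // data lives on the server
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      render(JSON.parse(xhr.responseText)); // render locally, no page reload
    }
  };
  xhr.send();
}

function render(messages) {
  var list = document.getElementById("messages"); // assumes a <ul id="messages">
  list.innerHTML = "";
  for (var i = 0; i < messages.length; i++) {
    var item = document.createElement("li");
    item.textContent = messages[i].subject;
    list.appendChild(item);
  }
}
```

And because that script is itself fetched from the server, deploying a new version of it updates every user on their next page load, which is exactly the instant-update advantage described above.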

I don't think this hybrid approach could have been done from the first days of computing; it had to wait for a certain level of bandwidth, memory, and CPU (not to mention advances in programming languages and software design—could we have had Web apps running JavaScript before OO design?). The mobile world isn't quite at this point yet: Web apps work on mobile, but they're usually too slow and unresponsive, so most apps are native. (This is a major disadvantage for mobile apps; no mobile app can fully practice continuous deployment, and the iPhone App Store approval process greatly exacerbates the problem, as Paul Graham points out in a recent essay.) But in maybe five or ten years, mobile devices and networks will be powerful enough to just run Web apps, and software will shift again.

Does anyone know enough about the history of computing to provide some perspective here?

On a related note, read Naval of Venture Hacks on why you will eventually dump your laptop and just use your smartphone.

UPDATE: See also the Jargon File on "wheel of reincarnation" (thanks @mjgardner).

These days I do most of my writing at The Roots of Progress. If you liked this essay, check out my other work there.
