Why are scanners so slow?

Last week over lunch, my 18-year-old son Eli asked me, “Why are scanners so slow? They don’t even have as much to do as a copy machine. The copy machine has to move paper, put ink on the page. The scanner only has to scan.” He was referring to flatbed desktop scanners; we have a couple at work and one at home.

I’m not sure where this observation came from, but he was right. My first reaction was to explain all the extra things a scanner does that a photocopier doesn’t (letting you select the area you want scanned, de-skewing, scanning at different resolutions, optical character recognition, and so on). But as we talked, I came to understand what’s wrong with these devices and why I don’t like them. As Eli said, they are unnecessarily slow. But they are also unnecessarily complex. Each of the ones we have has three or four buttons on the front.

Then there is the user interface. You can tell (just from the panels, menus, forms, and widgets) that a lot of work went into these UIs, but that doesn’t make them useful. I like to think I’m some sort of power user for most things related to PCs, but I won’t invest the time needed to master this software just to scan one or two documents per month. I’ve found some combination of buttons and sequence of choices on the multiple screens that gives me an acceptable outcome. Most of the time. God forbid someone uses it in between my occasional uses, because then invariably the settings are changed, and it takes me about as long to find the scanned document in my file system as it would to type it.

So Eli and I started designing the desktop scanner of the future. Scanner manufacturers: you may have this design free of charge (send me a prototype if you’d like). The scanner has one button: “Scan.” (We went through a few designs where you could pick resolution or color, but as you’ll see, those distinctions aren’t worth making.) This business of “warming up” is pretty lame; just have an on/off switch.
When you turn it on, it warms up. Then, with the machine warm, you hit the “Scan” button and a window pops up on the desktop with the image in full color at the highest resolution the machine is capable of. (I can hear the engineers saying, “It’s so wasteful to scan the whole flatbed if all you want is a photo or a receipt,” or “It’s wasteful to scan at a higher resolution than you need.” Get over it. We’ll waste a few machine cycles to save some real time.)

The desktop application is just the image with a simple menu: you can save the image anywhere you’d like, in any of dozens of well-known formats. You can print it. You can do some standard image manipulation such as cropping, reducing resolution, or adjusting color. And you can pipeline the image over to some other program, either one included with the scanner or whatever you have installed; imagine “send to…” Photoshop, an OCR program, or your email as an attachment. That’s it. This would be a far more useful product, and one that is cheaper and easier to build and program.

This reminds me: I had the same reaction to the user interface of a digital dictation device I owned. I know they spent a lot on the UI. It would have been more useful if, the moment the device was plugged into a USB port, it simply appeared in the file explorer.

You might be wondering why I’m writing this in our SOA blog. The first reason is a desire to make the world a better place: I would buy another scanner like this, even though I already have three, for the ease of use I’d get from it. But there is a broader message, one that echoes our credo at Semantic Arts: let’s start taking complexity out of our systems. Most services and most applications get worse as more “features” are added. Our prescription for software: figure out the essential raison d’être of each service, get that right, and leave out all the other crap. This works for devices, too. Eli will thank you for it.
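The one-button workflow described above can be sketched in a few lines of code. This is a hypothetical illustration, not any real scanner’s API: `scan_full_bed` stands in for the hardware call, and the image is modeled as a plain list of pixel rows. The point it demonstrates is the design argument itself: capture everything once at maximum quality, then derive the cheaper versions (crop, lower resolution) afterward, so the user never has to choose settings up front.

```python
# Hypothetical sketch of the one-button scanner workflow: scan the
# whole bed at full resolution first, derive what you need second.
# All names and dimensions here are invented for illustration.

def scan_full_bed(width=850, height=1100):
    """Stand-in for the hardware call: return a full-bed grayscale
    image as a list of rows (filled with a synthetic gradient)."""
    return [[(x + y) % 256 for x in range(width)] for y in range(height)]

def crop(image, left, top, right, bottom):
    """Select the region of interest after the fact."""
    return [row[left:right] for row in image[top:bottom]]

def downsample(image, factor):
    """Reduce resolution after the fact by keeping every Nth pixel."""
    return [row[::factor] for row in image[::factor]]

# One button: scan everything once...
page = scan_full_bed()
# ...then derive whatever the job needs, with no up-front settings.
receipt = downsample(crop(page, 0, 0, 400, 300), 4)
```

Everything after the scan (the crop, the downsample, a “send to…” handoff) is ordinary desktop software operating on an image that is already as good as it will ever be, which is why the distinctions between resolution and color modes aren’t worth surfacing as buttons.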
