2011-01-19
Moore's law and iPad-sized "retina displays"
Normally I don't bother to engage in Apple product speculation, but in this case, I have some actual knowledge (from a brief foray into working at a semiconductor company, some time ago) that might be interesting to share, and Apple is a good excuse. Of course, that was a long time ago - before LCD monitors had any significant popularity - and I'm just a software guy, so I might totally misinterpret everything.
But if I were to take what I think I know and conjecture wildly without looking at any reference material, it would look like this:
LCD-style displays (or anything in that family: a two-dimensional array of transistors that light up or darken or whatever to form an image) are all built using something like semiconductor crystal growth processes.
Semiconductor processes more or less conform to Moore's Law, one statement of which is: the price per transistor approximately halves every 18 months. (It is often stated in terms of "performance," but we have already long passed the point where doubling the number of transistors automatically improves performance. And in a display, we care about the literal number of transistors anyway.)
Why does the price per transistor halve? I'm not really sure, but the short answer is "because we get better at growing perfect (imperfection-free) semiconductor crystals at a smaller size." So there are two variables: how small are the transistors? And how perfect are the transistors?
Roughly, the rate of imperfections per unit area of crystal is about constant - very low, but not quite zero - so if you can pack your transistors into a smaller space, the chance that an imperfection lands in the space you use goes down.
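To put a number on that, here's a minimal sketch using the usual textbook simplification - a Poisson defect model, which is my assumption here, not something from the post: if imperfections land randomly at a roughly constant density, the fraction of defect-free dice drops off exponentially with die area. The defect density and die areas below are made up, just to show the shape of the curve.

    import math

    def yield_fraction(defects_per_cm2, die_area_cm2):
        # Fraction of dice with zero imperfections, assuming imperfections are
        # scattered randomly and independently at a constant density (Poisson).
        expected_defects = defects_per_cm2 * die_area_cm2
        return math.exp(-expected_defects)

    # Hypothetical numbers: 0.2 defects/cm^2, dice of various sizes.
    for area in (0.5, 1.0, 2.0, 4.0):
        print(f"{area:.1f} cm^2 die -> {yield_fraction(0.2, area):.0%} come out perfect")

Shrink the transistors and the same circuit fits in a smaller die, so its expected number of imperfections - and therefore the scrap rate - goes down.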
Transistor sizes ("processes") come in steps. These are the sizes you hear about every time Intel builds a new fab: 0.04 micron, and so on. When a new process is introduced, it'll be buggy and produce a higher rate of imperfect transistors for a while; with practice, the manufacturing engineers get that error rate down, saving money because more of the chips on a particular wafer of silicon will be imperfection-free.
By the way, that's how it works: you buy these wafers of silicon, then you "print" circuits on them using one of many extremely crazy technologies, then you chop up the wafer and make chips out of it. Some of the chips will work (no imperfections), and some of them won't (at least one imperfection). Testing chips is a serious business, because it can sometimes be really hard to find the one transistor out of 25 million that only misbehaves sometimes (say, at the wrong temperature) because of an imperfection. But they manage to do it. Very cool.
Also by the way, multicore processors are an interesting way to reduce the cost of imperfections. Normally, a single imperfection makes your chip garbage. If you print multiple cores on the same chip, it massively increases the area of the chip, making the probability of failure much higher. For example, a quad-core processor takes roughly 4x the area, so if p is the probability of failure for one core, the probability of overall success is (1-p)^4. So if one core has a 90% success rate (where I worked, that was considered very good :)), four cores have only a 65% success rate, and so on. Similarly, a single-core megacomplicated processor four times as big as the original would have only a 65% success rate. But if you're building a multi-core processor, when you find that one of the cores is damaged, you can disable just that one core, and the rest of the processor is still good! So as a manufacturer, you save tons of money versus just throwing it away.
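As a quick sanity check on those numbers, here's the binomial arithmetic for a four-core chip - my own sketch, reusing the post's hypothetical 90% per-core success rate - including how much you can salvage if you're allowed to ship the chip with one bad core disabled:

    from math import comb

    p_good = 0.90   # hypothetical per-core success rate, as in the text
    n = 4           # cores per chip

    # All four cores perfect: (1-p)^4 where p is the per-core failure rate.
    all_good = p_good ** n   # ~0.66

    # At least three cores good: the chip can still be sold as a
    # three-core part after disabling the one damaged core.
    at_least_three = sum(
        comb(n, k) * p_good**k * (1 - p_good)**(n - k)
        for k in range(3, n + 1)
    )   # ~0.95

    print(f"all 4 cores good:  {all_good:.0%}")
    print(f"3+ cores good:     {at_least_three:.0%}")

So instead of scrapping about a third of your quad-core chips, you scrap around 5% and sell the partly-damaged rest at a discount.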
Remember those three-core AMD processors that people found out they could hack into being four-core AMD processors? I wouldn't, if I were you.
Anyway, where was I? Oh, right.
So every now and then, Intel will build a new fab with a newer, smaller process, and start making their old chips in the new process, which results in a vastly lower imperfection rate, which means far fewer of the chips will be thrown away, which means they make much more money per chip, which means it's time to cut the price. Along with improved manufacturing processes etc, this happens at approximately a 2x ratio every 18 months, if Mr. Moore was correct. And he seems to have been, more or less, although some of us might wonder whether it's a self-fulfilling prophecy at this point.
Now here's the interesting part: LCD-like displays, being based on similar crystal-growing methods, follow similar trends!
A "higher DPI" - more dots in less space - corresponds to a new crystal growth process. Initially, there will be a certain number of imperfections per unit area; over time, those wonderful manufacturing engineers will get the error rate down lower.
Remember when LCD monitors used to come with "dead pixels"? The small print in your monitor's warranty - at the time, maybe not now - said that up to 1, or 2, or 5 dead pixels was "unavoidable" and not grounds for returning the product to the manufacturer. Why? Because it was just way too expensive to make it perfect. Maybe the probability of at least one dead pixel in a 17" screen was, say, 80%, but the probability of two or more dead pixels was only 50%. If you allow zero dead pixels, you have to throw away 80% of your units; if you allow one, you only throw away 50%; and so on. Accepting those dead pixels made a huge difference to profitability. LCD-like displays might never have become cheap enough if we hadn't put up with the stupid dead pixels.
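Here's a rough sketch of that trade-off. I'm assuming dead pixels follow a Poisson distribution and picking a made-up average of 1.6 dead pixels per panel, which happens to land near the 80% and 50% figures above; none of these numbers come from a real manufacturer.

    import math

    def prob_at_most(k_allowed, mean_dead_pixels):
        # Probability a panel has <= k_allowed dead pixels, assuming dead
        # pixels occur independently at a constant average rate (Poisson).
        return sum(
            math.exp(-mean_dead_pixels) * mean_dead_pixels**k / math.factorial(k)
            for k in range(k_allowed + 1)
        )

    lam = 1.6  # hypothetical average dead pixels per panel
    for allowed in (0, 1, 2, 5):
        keep = prob_at_most(allowed, lam)
        print(f"allow up to {allowed} dead pixels -> keep {keep:.0%}, scrap {1 - keep:.0%}")

With those made-up numbers, demanding perfection scraps about 80% of panels, while tolerating one dead pixel scraps only about half - the difference between an unaffordable product and an affordable one.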
But those were the old days. Nowadays, thanks to greatly improved processes and lots of competition, we've gotten used to zero dead pixels. Yay capitalism. So let's assume that from now on, all displays will always have zero dead pixels.
The question now is: how long until I can get an iPad with double the resolution?
Well, now we apply Moore's Law.
If you watched the Android vs. iPhone debate over the last couple of years, you saw a series of Android phones with successively higher DPI in the same physical area, followed by the iPhone 4, which had just a slightly higher DPI than the most recent Android phone at the time, but which happened to be double the resolution of the original iPhone, because the iPhone hadn't been trying to keep up. (Now that the pixels are so tiny that they're invisible, are Android phones still coming out with even higher resolutions? I haven't heard.)
The so-called "retina display" wasn't a revolution, of course - it was just the next step in displays. (Holding off until you could get an integer 2x multiplier, however, was frickin' genius. Sometimes the best ideas sound obvious in retrospect.)
Each of those successively higher-resolution screens was a "new process" in semiconductor speak. With each new process came a higher density of imperfections per unit area, which is why smaller screens were the first to hit these crazy-high densities. We can assume that the biggest screen at "retina" density as of the iPhone 4's release - 960x640 in June 2010 - was the state of the art at the time. We can also assume that the iPad's 1024x768, released in March 2010, was the biggest you could go with the best process available for a screen that size at the time.
Let's assume that the iPad wants to achieve a similar DPI before it hops to the next resolution. I think it's safe to assume they'll wait until they can do a 2x hop like they did with the iPhone. However, the exact DPI isn't such a good assumption; people tolerate lower DPI on a bigger screen. But since I don't have real numbers, let's just assume the same DPI as the iPhone 4, ie. the same "process."
We want to achieve 1024x768 times two = 2048x1536 = (2.13*960)x(2.40*640) = about 5x the physical area of the iPhone 4.
Moore's law says we get 2x every 18 months. So 5x is 2**2.32, ie. 2.32 doubling periods, or 3.48 years.
By that calculation, we can expect to see a double-resolution iPad 3.48 years from its original release date in March 2010, ie. Christmas 2013. (We can expect that Apple will have secret test units long before that, as they would have with the iPhone 4, but that doesn't change anything since they do it consistently. We can also assume that if you're willing to pay zillions of dollars, you could have a large display like that - produced by lucky fluke in an error-prone process - much sooner. And of course you'll get almost-as-good-but-not-retina very-high-res Android tablets sooner than that.)
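Here's that arithmetic written out as a tiny script, restating the assumptions above: same DPI as the iPhone 4, panel area (and hence difficulty) proportional to pixel count, and one doubling every 18 months. The exact ratio comes out closer to 5.1x than 5x, which nudges the answer to roughly 3.5 years - same ballpark.

    import math

    DOUBLING_PERIOD_YEARS = 1.5   # Moore's law: 2x every 18 months

    def years_until(target_pixels, baseline_pixels):
        # How many years until a target_pixels panel is as affordable as a
        # baseline_pixels panel (the current state of the art) is today,
        # assuming constant DPI so area scales with pixel count.
        doublings = math.log2(target_pixels / baseline_pixels)
        return doublings * DOUBLING_PERIOD_YEARS

    iphone4 = 960 * 640          # state of the art, June 2010
    retina_ipad = 2048 * 1536    # double-resolution iPad

    print(f"area ratio: {retina_ipad / iphone4:.2f}x")                        # ~5.12x
    print(f"years from March 2010: {years_until(retina_ipad, iphone4):.1f}")  # ~3.5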
Note: all of the above assumes that Apple will choose to define "retina display" as the same DPI for the iPad as for the iPhone. They probably won't. They're masters of reality distortion, so I'd bet on them being able to deliver their newly-redefined "retina display" iPads at least a year sooner, at a lower DPI. So let's say sometime in 2012 instead. But not 2011.
By the way, if we grew the screen further, to a doubled version of what's now a 1600x1200 display, that's another 2.44x the area beyond the double-resolution iPad, so 1.29 more doublings, which is about 2 more years. So we can have retina-quality laptop or desktop monitors in, say, Christmas 2015. Since I've been wanting 200dpi monitors for most of my life, I'll be looking forward to it.
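Same back-of-the-envelope for the monitor case, under the same assumptions (a doubled 1600x1200 panel is 3200x2400):

    import math

    retina_ipad = 2048 * 1536      # ~Christmas 2013, per the estimate above
    retina_monitor = 3200 * 2400   # a doubled 1600x1200 display

    extra_doublings = math.log2(retina_monitor / retina_ipad)   # ~1.29
    print(f"{extra_doublings * 1.5:.1f} more years")             # ~1.9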
Bonus Prediction
I bet "2x" mode for viewing iPhone apps on the iPad 2 will look a lot prettier than on the iPad 1, though. After all, iPhone 4 screens already have twice the pixels, so "scaling it up" by 2x isn't actually scaling it up, it's just displaying the same pixels at a larger size. The iPad 2 will probably pretend to be an iPhone 4 to iPhone apps.