February 2011 Archives

The developer preview of Mac OS X Lion (the big-cat name for version 10.7), to which I do not have access, includes the amazing option to install Mac OS X Server. What once was a $1000 (since dropped to $500) standalone product will most likely be included in the standard-issue Mac OS X package. You know, the one that costs about $99 to buy. Amazing.

While others (via DaringFireball.net) have doubted that this is really the case, I’m going to go on record as saying it’s true, and I’ll even guess as to why.

Let’s gather some dots and then connect them, shall we?

First, Apple has all but eliminated its line of server products. Other than the dedicated Mac mini Server, there are no more server-specific hardware products to be had. The Mac Pro Server is essentially the same box as the regular Mac Pro, after all. The Xserve is dead. The Xserve RAID is dead, too. With no serious computing iron to offer, a server room is going to be devoid of Apple products. There’s food for thought…

Second, Apple is building a huge data center with lots of serious serving power for the iTunes Store. They also know a lot about serving data between computers from running MobileMe. To me, that adds up to a lot of learned-by-experience knowledge about something called cloud computing, where the data are not necessarily in one particular place but are spread all over. As long as you can’t tell whether your data live locally or in “the cloud,” it shouldn’t matter where they actually are.

Third, Apple has some intellectual property brewing around cloud computing. Googling “Apple patent cloud computing” yields a pretty satisfying list of things to look at. Apple has its head in the cloud, quite literally.

Fourth, rumor has it that Apple is going to introduce a plan targeted at small businesses, supplying faster turnarounds and loaner machines for a very reasonable $500 per year. Neat.

Fifth, Apple will be supplying Mac OS X Server technologies with every one of its desktop machines. Every one of them. Not just a few high-end machines. All of ‘em.

Finally, the cursor blinks at the same rate that it used to and I still type slowly, even though the processor power available to me, the user, has grown immensely. We may have gigaflops and terabytes on our desktops, but we still vastly underutilize them in a typical business setting.

Now let’s connect the dots.

(Oooo! That sorta’ looks like a unicorn!)

I think Apple is aiming to eliminate the server room entirely. Furthermore, I believe that with Server on every desk (and eventually I think at least some of it will be a default part of installation, mostly hidden), Apple will move the server room out to the front office. What once was one or more pieces of dedicated server hardware and software will be distributed across the machines in the workgroup or business. This approach makes use of Apple’s cloud technologies and will utilize all of that unused—but already bought!—computing power that we have on each user’s desk.

When? It won’t happen instantly, or even soon. No, I’ll peg the release date for this massive shift in computing to the release of version 11 of Mac OS X—what’s that, two, three years? It’ll take a big shift in mindset for Apple’s customers to accept this kind of radical change in how we think of “servers.” The technology also has to catch up with the plan: a new underlying file system is almost certainly a prerequisite for this kind of thing (ZFS, anyone?). And all of it will require a positive track record, which Apple is presumably starting to build right now with OS X Lion.

I don’t think I’m going out on a limb here. Think about the advantages to users, and about how I suspect Apple sees things: it makes good business sense for Apple and delivers a better experience for the rest of us.

For one, there’s no need to buy super-powerful hardware to do server stuff, especially if you have spare gigabytes and gigaflops sitting around idle. Why buy redundant capacity? Not buying more computers saves the business some money and narrows Apple’s product line significantly. That’s good both for us and for Apple.

By getting rid of the server room, we users save money and space and possibly the time or expense of a dedicated system administrator. Why not administer the whole thing yourself? Or perhaps just hire an Apple-certified Consultant on an as-needed basis and pay the minimal fee per year to get your business-essential hardware replaced/repaired quickly? That sounds remarkably inexpensive to me. That’s another good deal for Apple, its consultant network, and us users, too.

If you are going to buy into this cloud server thing, presumably you’re somewhat locked into the Apple ecosystem. That benefits Apple, certainly, but it could also be perceived as a benefit to the consumer, much as the tight integration of iPod/iPhone/iPad with iTunes has proven beneficial to the user experience.

And what if that big data center in North Carolina were to sprout a twin? Could that become a backup for your business data cloud? Short answer: yeah. Can you say subscription model?

As a final thought, I don’t believe that Apple has any interest in big business with this initiative. As we’re often told these days, the heartbeat of America is the country’s small businesses (sorry, Chevy) and that’s a huge market. Maybe this will be trickle-up technology, but I doubt it: the Microsoft juggernaut has that one wrapped up for the foreseeable future, and I think the sysadmin community (which certainly “knows this can’t possibly work”) will be extremely resistant to the decentralized server model, at least one that is this decentralized. Who knows? It may not work on the big business scale. But I think… well, never mind. You can guess what I think.

I’m sure that I’m missing a few benefits and a few points which suggest that Apple really is headed in this direction.

And I might be completely and totally wrong.

But, Oh! How I don’t want to be wrong… It’s just too cool an idea for it not to be real.

(Sorry I’ve turned comments off for the time being. Stupid spammers thought I needed to see their crap on my blog, so until I get hooked up with Disqus, things will be quiet. Let me know via E-mail—contact info over there on the left under “Pages”—and I’ll post your comments as part of my original posts.)

This article (via DaringFireball.net) does a good job of introducing people to the basics of Thunderbolt, the Apple-adopted, Intel-developed 10Gbps super-fast bus with a daisy-chain architecture. But I have one big, hairy question that I’d like to see addressed before I jump whole-hog onto the bandwagon here.

What about hubs? Because without them, the daisy-chain architecture is just as hobbled as FireWire’s is.

The subject of hubs was mentioned in the comments on the article, but as I write this, nobody has provided an answer. If my machine has one port and all of my peripherals have to daisy-chain off of it, what happens when I want to take a device out of the middle? As it stands now, I lose my video, which has to be downstream of that device—for the time being, anyway—and any downstream hard drives get dismounted in a not-so-nice way.
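To make the worry concrete, here’s a toy sketch of a single-port daisy chain in plain Python. It’s purely illustrative and has no relation to how any real Thunderbolt controller or driver behaves: pull a device out of the middle and everything behind it falls off the bus.

    # Toy model of a single-port daisy chain (illustrative only; not how
    # any real Thunderbolt controller or driver actually works).

    chain = ["Mac port", "backup drive", "RAID box", "Cinema Display"]

    def unplug(chain, device):
        """Remove a device and report what gets orphaned downstream."""
        idx = chain.index(device)
        still_connected = chain[:idx]   # upstream of the removed device
        orphaned = chain[idx + 1:]      # everything daisy-chained behind it
        return still_connected, orphaned

    still_connected, orphaned = unplug(chain, "backup drive")
    print("Still on the bus:", still_connected)  # ['Mac port']
    print("Just went dark:", orphaned)           # ['RAID box', 'Cinema Display']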

And if Nikon is really introducing a Thunderbolt-based DSLR, where do I plug it in? If my monitor doesn’t have a pass-through Thunderbolt port, then the camera has to have two ports on it (unlikely—they are small, but not that small), and I have to disconnect my chain and add a cable to insert the camera. Ick.

Finally, what if one of the devices in my chain goes tango uniform? Does it take down everybody downstream or, worse yet, upstream, too? I have somewhat-old FireWire drives that have the ability to cause my Xserve to go kerplooey! (requiring a cold restart) when they decide to go out for lunch without permission. Though USB is slower, it never, ever did that, so those drives sit on USB these days. (They are for backups, not serving, so I don’t notice the performance hit, but I am glad for the reliability.)

Don’t get me wrong: I like the underlying technology, which essentially externalizes the PCIe bus, and that’s a really cool thing. But I gotta’ see how this works in real life before I say it’s not “very, very frightening.”

(OK, a little hyperbole there just for the sake of the lyrical reference.)

This article tells how Microsoft essentially bought Nokia for a whole lotta’ money.

If I had to guess, I’d say this transaction was what Motorola was hoping for when it split the consumer mobile unit from its enterprise unit. I bet there are some people in Arizona who wish they’d managed to get Microsoft’s billions headed their way.

But now this news surfaces: Motorola Buys Android Security Company 3LM. The message to me? “We’re serious about Android.”

If the first article’s stated rumors of a bidding war between Google and Microsoft for Nokia are indeed true, then it’s only a matter of time before Google finds Motorola attractive enough—less than perfect, but made more attractive with acquisitions like the 3LM one—and buys it up.

At least, that’s my guess.

John Gruber and I have briefly conversed about what the resolution of an upgraded-resolution iPad will be. He firmly believes it will be 2x the current resolution because it’s just easier. (True, it is.) But I believe that some future revision of the iPad will have a higher-resolution display that is not a fixed integer multiple of the current resolution, for reasons I’ve outlined previously. Two more things make me think I’m right.

First, Apple itself. With 10.4 and 10.5, resolution independence was introduced in phases and was “developer only.” (Via DaringFireball.net) 10.6 was supposed to be more resolution-independent than the previous two releases, but… alas, not so much. That’s bad.

Or… is it? Wouldn’t it be a convenient alignment of things if iOS 5 and Mac OS X 10.7 introduced resolution independence at about the same time? They share the same rendering engines, after all, and many other parts of their architecture. If both OSs finished off making their interfaces resolution-independent together, it would make good use of resources within the company. Not that Apple has to think frugally, but if they’re trying to converge the two platforms somehow…

I’m just sayin’.

Second, WebOS Enyo apparently does some resolution-independent jiggery-pokery, according to this Engadget note. Not that I understand exactly what this means, because I am not about to plunge into the WebOS SDK documentation, but they are making a concerted effort to attract developers by allowing devs to write once for multiple target resolutions. That’s powerful stuff.

Now, Apple does not encourage devs to write once for iPad and iPhone/iPod (because the interfaces themselves should really be tailored to the different screen sizes), though it does work. Instead, Apple would likely take the same approach as HP and promote resolution independence as the bridge between future higher-resolution devices and the past’s lower-resolution devices. WebOS is ahead of both Mac OS and iOS here—there’s some real competition, finally.
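For what it’s worth, the core idea behind resolution independence is simple enough to sketch: lay the interface out in abstract units and let a per-display scale factor map those units onto physical pixels at render time. Here’s a minimal illustration in Python; it’s my own toy, not anything Apple or HP has published, and the panel densities other than today’s 132ppi are hypothetical.

    # Minimal sketch of resolution independence: lay out in abstract
    # "points" and convert to device pixels with a per-display scale
    # factor. Panel densities other than 132ppi are hypothetical.

    def points_to_pixels(points, device_ppi, base_ppi=132.0):
        """Map a length in layout points to physical pixels on a given panel."""
        scale = device_ppi / base_ppi   # need not be a whole number
        return round(points * scale)

    button_width_pt = 200               # one layout, many screens
    for name, ppi in [("today's 132ppi panel", 132),
                      ("a hypothetical 190ppi panel", 190),
                      ("a doubled 264ppi panel", 264)]:
        print(f"{name}: {points_to_pixels(button_width_pt, ppi)}px wide")

The layout code never changes; only the scale factor does, which is why a non-integer multiple is technically workable even if a clean doubling is easier on the graphics pipeline.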

So that’s it. I further stick my neck out on the subject of the iPad X’s resolution.

I sure hope I’m right…

There’s been a lot of guessing about the resolution of the iPad 2. Everybody who has been speculating (except me, it seems) has been saying that in order for Apple to be happy with the product, they’ll have to call it a “Retina Display,” and that in order to do so, they’ll have to double the iPad’s 1024x768 resolution. That is, after all, what they did to the iPhone 3GS’s display to achieve Retina Display status for the iPhone 4.

Yes, this makes sense for lots of good reasons, not the least of which is that graphics work better when the scaling factor is a nice, round two. With the more limited hardware of the iPhone, doubling made a lot of sense. But as others have noted, a doubled iPad resolution would be a whole helluva’ lot of pixels to push, and that ain’t happening in the iPad. Not yet, anyway.

I, on the other hand, am guessing that Apple will choose something else and that there’s nothing special about “times two.” They’ll choose manufacturable, cost-effective, and, above all, marketable over “times two” any day. Technically, I believe “times x” is completely feasible, as I outlined in a previous entry, especially if the iPad 2 has more horsepower than an iPhone 4.

So I’ll guess that Apple will define “Retina Display” as it properly should be defined, i.e., based on the resolving power of the human eye, and not on an arbitrary number of pixels “times two.”

The thing that most people miss in their guessing is that the average user doesn’t hold an iPad at one foot as she might an iPhone 4. No, the typical iPad user holds it at about 1.5’ to 2’—lap distance or so. If we start with Bryan Jones’ assertion that the eye can resolve about 287ppi at one foot and scale that triangle up to 1.5’, then the resolution required is only about 190ppi. The current iPad, as stated by Apple, is 132ppi at 1024x768, so scaling that display up to 190ppi means an iPad Retina Display would need a minimum of about 1475x1106 pixels, which is pretty close to the SXGA+ standard of 1400x1050.

If Apple uses a 2’ figure for average use distance, then 142ppi will do, or about 1101x826. But that’s too close to the current 1024x768 to make any kind of marketable difference, and it’s at the hairy edge of acceptable Retina Display resolution. If anything, they’ll shoot for over, not under.

Anyway, 1400x1050 isn’t even twice as many pixels to push, and it’s a much more reasonable expectation for the iPad 2’s hardware.
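For anyone who wants to check the arithmetic, here’s the whole back-of-the-envelope calculation in a few lines of Python. It assumes exactly what I assumed above: the eye resolves about 287ppi at one foot, that limit scales linearly with viewing distance, and you scale today’s 1024x768, 132ppi panel up until it reaches the target density. The rounding lands within a pixel or so of the figures I quoted.

    # Back-of-the-envelope "Retina" math for the iPad, assuming the eye's
    # ~287ppi limit at 1 foot scales linearly with viewing distance.

    CURRENT_PPI = 132.0                  # Apple's stated ppi for today's iPad
    CURRENT_RES = (1024, 768)

    def scaled_resolution(target_ppi):
        """Scale today's panel up until it reaches the target pixel density."""
        scale = target_ppi / CURRENT_PPI
        return tuple(round(n * scale) for n in CURRENT_RES)

    print(287 / 1.5)               # ~191ppi needed at 1.5 ft; call it 190
    print(scaled_resolution(190))  # (1474, 1105): a pixel or so shy of 1475x1106
    print(287 / 2.0)               # 143.5ppi needed at 2 ft; the text rounds to 142
    print(scaled_resolution(142))  # (1102, 826): essentially 1101x826

    # Pixel-count comparison: an SXGA+-class panel vs. a straight doubling.
    base = 1024 * 768
    print(1400 * 1050 / base)      # ~1.87x today's pixels
    print(2048 * 1536 / base)      # exactly 4x today's pixels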

Bottom line? Here’s my bet: if—and that’s a big if—Apple changes the iPad resolution for the iPad 2, it’ll land somewhere between 175ppi and 200ppi, favoring the upper end of that range.

Speaking of bets, John Gruber hasn’t taken me up on my bet. I don’t expect him to—heck, he’s a celebrity! But I think my wager requires clarification: I’m not betting on whether Apple will change the iPad resolution with the iPad 2, but rather that if they do, it’ll be to something other than (and less than) “2x.” So that’s the wager: if they do, it’s less than two.