Category: Mobile technologies


In 2010 I was quite enamored of the idea of “netbook” computers – low-powered laptops that not only had low price tags and long battery life, but also performed most tasks quite well, especially if the user was taking advantage of open source software, from Linux as the operating system to OpenOffice.org as the office suite. To prove the point, I wrote an entire book (When the Best is Free) on a netbook without any problems at all. It was so clear to me that this was the next big device that I predicted every student would soon have a netbook, especially since netbooks were cheaper than textbooks. In fact, since the introduction of the device, the cost of an average collection of textbooks has been far greater than the cost of a netbook computer. The future of this new device was clear (at least to me).

I was wrong – very wrong. The revolution did not unfold as I foresaw it. Netbooks became (and remain) a niche market, and I think this is because of the emergence of a truly disruptive technology – the tablet. Led by the famously successful release of the Apple iPad, and followed by Android-based devices from many other vendors (e.g., Samsung, Lenovo, Toshiba, Sony, Asus, Acer and others), tablets have caused netbooks to fade from view.

This phenomenon is quite interesting. First, as with most disruptive technologies, the tablet was not as powerful as the netbook. Furthermore, it generally cost more to buy, had less storage, and operated on a completely different premise. While the netbook had a clear evolutionary path from traditional laptops, the tablet did not. Devoid of a traditional keyboard, the tablet operated with “soft keys” displayed on a multi-touch display. Gestures (pinches, swipes, etc.) previously seen in science fiction (e.g., Minority Report, Star Trek: The Next Generation) became commonplace. These new ways of interacting with computer displays were readily adopted, and traditional mouse movements were no longer needed.

So where did tablet computers come from? The origins of tablets had more to do with the evolution of advanced MP3 players (the Apple iPod touch, for example) than with the evolution of computers. At the time of its introduction, the tablet had the potential to be a high-tech fad – like the Apple Newton from 1993. Instead, even with its limitations, the tablet became an overnight success. One could argue that this was a logical outgrowth of the popularity of smartphones (such as those based on the Android and iOS operating systems), but I think the tablet is generally seen in a different light. With screens ranging from 7” to 10”, tablets provide enough real estate to support web browsing and some limited text editing. While many people find the light weight and long battery life of tablets compelling, and web-based tools (plus a few downloaded apps) sufficient for their day-to-day needs, most will still need a powerful laptop for creating documents, relying on tablets for casual work on the road.

The long battery life is one clear advantage, but another advantage of tablets is rarely mentioned. Because traditional clamshell laptops use a separate keyboard, users have to be sitting down, typically at a desk or table, in order to use the device. Tablets, on the other hand, can be used while standing: holding the tablet in one hand, the user can make the gestures needed to navigate applications with the other. This, plus the reduced weight, is quite compelling for some. Personally, if I had all my presentations running from my tablet, I would leave my laptop at home when I travel for speaking engagements. My tablet fits in the seat back of an airplane and lets me watch several movies with enough power left over to run a 60-minute presentation without recharging.

Tablets also seem to meet the needs of a large number of people who never need the power of a real laptop. For them the tablet makes a great deal of sense, even though many high-end tablets cost as much as fully featured laptops. Of course, as tablets evolve, prices will drop and capabilities will increase, further cementing their role in creating a true disruption in personal computing.

All of which raises the question of what happens next. As tablets continue their evolutionary path, there will someday be another technology introduced that will have the same impact on tablets as tablets had on netbooks. Such is the nature of technological development.

Tablets have largely eclipsed the netbook market, and some (myself included) have argued that this format of device will disrupt education profoundly. In saying this, I am in no way suggesting that radically new technologies will not emerge. In fact, they already have – even if they are not commercially available yet.

For example, in 2009, MIT grad student Pranav Mistry gave a TED presentation in which he showed his SixthSense technology, a special necklace holding a camera and a projector to facilitate augmented-reality explorations of really neat things. For example, if you picked up a book and looked at the cover, information about that book, including reviews, would appear projected on the book itself. His video is a whirlwind tour of amazingly cool stuff that seemed like science fiction at the time – only he had made it work in the laboratory, in preparation for turning it into products.

Fast forward a year or so and the focus shifts to Google. While the public face of Google Labs has been closed down, Google is continuing to explore cutting-edge ideas. One shot across the bow was a free app called Google Goggles that lets you use your smartphone to do many of the things done by Pranav Mistry’s system. Take a photo of a book cover, for example, and it not only recognizes the book, but provides links to reviews and even a link to Amazon in case you want to get your own copy. Stand in front of a landmark building, take a picture, and get links to information about the building. Take a snapshot of a Sudoku puzzle, and it recognizes it as a puzzle and asks if you would like the solution.

Last year buzz started to build around the idea that the Google Goggles software was going to get its own dedicated hardware – a pair of glasses with a built-in heads-up display. For example, a recent article on the New York Times blog describes some of the possible features such a device would have. This would be a truly hands-free device, using head gestures to send commands to the system. While this format probably tops out on the nerd scale – which is probably why I think it is cool – it may in fact represent the new face of computing.

Only it isn’t new.

In July 1945, President Roosevelt’s science advisor, Vannevar Bush, wrote an article for The Atlantic in which he described his vision for the future. One of his ideas was the following:

“Certainly progress in photography is not going to stop. Faster material and lenses, more automatic cameras, finer-grained sensitive compounds to allow an extension of the minicamera idea, are all imminent. Let us project this trend ahead to a logical, if not inevitable, outcome. The camera hound of the future wears on his forehead a lump a little larger than a walnut. It takes pictures 3 millimeters square, later to be projected or enlarged, which after all involves only a factor of 10 beyond present practice. The lens is of universal focus, down to any distance accommodated by the unaided eye, simply because it is of short focal length. There is a built-in photocell on the walnut such as we now have on at least one camera, which automatically adjusts exposure for a wide range of illumination. There is film in the walnut for a hundred exposures, and the spring for operating its shutter and shifting its film is wound once for all when the film clip is inserted. It produces its result in full color. It may well be stereoscopic, and record with two spaced glass eyes, for striking improvements in stereoscopic technique are just around the corner.”

Of course he was thinking in terms of the photography of his time, which was film-based. He was aware of photocells and even speculated about their use in photographic elements. Instead of Bush’s “walnut,” Google is opting (it seems) to use glasses – something well accepted in our society.

No matter how it all shapes up, it seems the time is ripe for wearable computing.  And it would be foolish to think Google is alone.  Apple’s iPod nano comes with wrist straps, using arms instead of noses as the support for wearable technology.

Of course these technologies are not going to replace computers any more than tablets have – they will be additional tools that open new opportunities for creativity and productivity – and may even have a place in education.

Only time will tell.

Brazilians love their technology. I remember, on my first visit to the country decades ago, seeing people mark their seats at a buffet by leaving their cell phones on the table. In fact, Brazil was probably among the first countries to have cell phones outnumber wired lines, although that was largely due to the difficulty of getting a new wired phone line at the time.

But technological romance remains quite high.  Our local shopping center’s Apple store is full of people.  Samsung’s store in the same center is also quite busy.  Even Nokia, whose future remains uncertain, gets some traffic – and this is not just window shopping!  The number of iPhones, Galaxy tablets, and iPads coming out the door is amazing to see.  In fact, a recent study by Accenture shows that Brazilians are three times more likely than the global average to be purchasing a tablet in 2012.

This caught me by surprise, even given the explosive growth of this sector worldwide.

While tablets are coming into US schools at a fairly good pace, some Brazilian schools are listing them as back-to-school accessories along with crayons and paper notebooks. The explosion is not restricted to private schools. In Pernambuco (the state in the northeast of the country where I am), the government is purchasing 170,000 tablets in a pilot with second- and third-year high school students. Nationwide, other pilots in the public sector are adding 350,000 more tablets to the mix, with the goal of bringing these devices to every student in the country.

Now if tablets were cheap devices, this would be one thing, but they are not.  The duty on imported electronics is so high that, for example, Apple products are nearly twice as expensive in Brazil compared with their price in the US.  Of course, with the rapid growth in sales volume, Toshiba and other major players are opening Brazilian factories to avoid duties and thus bring the price down.

The alpha-geek in me loves to see all this activity.  I’m an avid and active tablet user myself.  But when it comes to education, huge projects are taking a big risk if they are not thought out in advance.  For example, what is the wireless telecommunications infrastructure of the school?  Can it handle a thousand kids online at the same time?  How will the tablets be used?  If they are just glorified textbooks, much cheaper alternatives exist.  If the uses are more in support of creativity and inquiry, what tools will the tablets have?  Most importantly, how (and when) will teachers be provided not just with the mechanics of tablet use, but with the pedagogical support to transform education in rich ways?

Unless these questions are thought through, the huge influx of tablets will likely fail to effect permanent change. With the right support, though, we may see the consumer-driven romance with technology (especially among the young) produce benefits that far exceed the cost of these devices, and that is a result worth seeking.

Today Apple unveiled a free iBooks 2 application for the iPad that brings interactive textbooks to the popular tablet computer. According to Philip Schiller, Apple’s senior vice president of marketing, “Education is deep in Apple’s DNA,” which is confusing to me, since textbooks are a major component of an education that has been flawed since the late Middle Ages, and one would think that Apple’s DNA would recognize that schooling and education are sometimes at odds with each other.

“With iBooks 2 for iPad, students have a more dynamic, engaging and truly interactive way to read and learn.”  This quote is pure and utter garbage.  What is new about canned content from Pearson and the other companies drooling at the prospects of finding new ways to view children as bodies with wallets, and education as the memorization of mindless material that, most likely, can be found in better form in ten minutes with a well-crafted Google search?

He said the iPad is “rapidly being adopted by schools across the US and around the world” and 1.5 million iPads are already being used in educational institutions.  This should make us cry.  Apple has clearly lost its soul.

Back in the early days when Apple really cared about education, a variety of creative ideas were encouraged both inside and outside the company, all centered on the idea that computers let us do things we simply couldn’t do before at all. Languages like Logo were supported, along with other creative tools such as HyperStudio, and some internal projects as well (especially Cocoa, which spun off and became Stagecast Creator).

Then along comes the iPad – a potential game changer being driven into schools by the students themselves. Scratch, an amazing programming environment for kids (and grownups) developed by Mitch Resnick’s group at the MIT Media Lab, was REMOVED from the iTunes store. And now, the offerings of the old-guard publishers will be featured. The message is clear – “school is fine the way it has always been – now buy some new toys that require no changes in the system at all.”

This didn’t happen by accident. Careful thought went into Apple’s perspective on how tablets should be used by children. Today they decided that the iPad should be a costly version of the Amazon Kindle Fire. While this may be a lucrative move on Apple’s part, it destroys any semblance of Apple caring one whit about real learning. It is as if Dewey, Piaget, Papert and other giants in the field had never been born.

The bright spot is that the MIT folks are currently working on bringing some of their creative projects for kids to the Android platform, so this is not a condemnation of tablet strategies in general, only of Apple’s astounding march to the 19th century (as so aptly put by my friend and colleague, Gary Stager).

I bear no ill will toward Apple, only sadness in their decision to sell out the nation’s youth to curry favor with the very publishers that have done everything in their power to hold education to the past – at any cost.

This is a sad day indeed.

The Las Vegas Consumer Electronics Show opens on January 10, and there are rumbles that this show will feature lots of ultrathin laptops similar to the MacBook Air. Last year was supposedly the year of the tablet, but the rollout didn’t take place until months later, leaving Apple with the market pretty much to itself. Of course that has changed, with everyone from Toshiba to Samsung offering quite powerful tablets at reasonable prices. Schools, in particular, seem eager to jump on the tablet bandwagon and, while a good case can be made for this, my guess is that much of the early enthusiasm was generated by the freshness of the product category.

And some of these tablet installations are huge!  The Brazilian State of Pernambuco is placing an order for 130,000 tablets as a trial run for high school students to use!  Other projects on the drawing board are larger than that.  Everyone who can create code is getting up to speed on the Android OS and educational apps of all kinds are in various states of preparation – apps that go way beyond e-books or other applications reflective of the outmoded educational practices found today.

So, if the tablet is just now starting to emerge as a big seller (and it is), what is the rush to create a new class of ultrathin laptops that will cost a bundle, and do nothing you can’t do with the laptops we already have?  My guess is that this move is just to embrace an idea and hope it becomes a trend.

We saw this with netbooks – a technology I endorsed when it came out. Netbooks never achieved their potential because the price differential was not big enough to keep people from buying full-sized laptops. The death blow, though, was the tablet – a truly portable device that can be used while walking around.

And that brings me to an important point.  I was an early fan of the Netbook, and it didn’t take off.  I am a current fan of tablets, so what are the chances I will get this one right?  I think my chances are pretty good.  The relationship kids have with tablets is different from the one they have with laptops of any kind.  That is true for adults as well.  Yes, tablets do not currently offer the rich variety of software found on laptops, but that is starting to change.

CES may be where the dreams of Ultrabook designers get shared, but I’m sticking with tablets as a dominant platform for the foreseeable future.

Painting over rust

In 1972, Alan Kay gave a speech at the ACM conference on the design of a computer for children (http://mprove.de/diplom/gui/kay72.html).  This presentation introduced the world to the Dynabook, a concept of Alan’s from the 60’s that he was pursuing at Xerox PARC in the 70’s.

His comment at the time was that much of what passes for “change” in education (and elsewhere) is simply “painting over rust.” It looks pretty for a day or two, but then the paint falls off and you are back where you started. When we look at the world of personal computing since the 1970’s, we’ve seen lots of attempts to force-fit failed educational models inside the new tools, giving the illusion of change where none existed. Like Seymour Papert, Kay was one of the few visionaries who understood from the beginning that the power of computers in kids’ hands came from the artifacts they created themselves. This model (Papert calls it “constructionism”) holds that it is in the act of creating something that a child shows her true learning. Whether (as Papert suggests) it is a sand castle, a poem, or a computer program, the point is the same – the student is not treated as some vessel to be pumped full of stuff. Instead, the child’s mind should be triggered to do what comes naturally – to make observations about the world around her, and to create and test models of this world in the quest for understanding.

Which brings us to tablets today. All across the world, we are seeing huge installations of tablets as the next big thing in education. While there is much to like about these devices (their true portability, long battery life, etc.), I am still waiting to see the kind of child-appropriate programming environment envisioned by Kay and by Papert (to name two examples) with which children can build and run their own models. This software exists on netbooks, laptops, and all the other computers we now seem to have put on the back burner and, as a result, we may be (in the short term) taking a huge step backwards. Search for Logo, Squeak or Scratch to see what I mean. At this point, precious little exists to let kids harness the true power of the tablets they will be getting.

Textbook publishers love tablets.  Be afraid.  Be very afraid.  This romance is destined to drive tablet use as a distribution medium for the same content that has failed to meet the needs of all learners for generations while creating the illusion of newness.  It is, in fact, just another layer of paint over the rust.

Will this change? Apple banned Scratch (a Logo-ish language for kids developed at MIT) from the iPad. This was one of the most stupid decisions that company ever made. In the Android world, I expect Scratch to appear sometime in the next few months (at least that is my hope). There is a language called Frink that runs on Android devices and, while not based on Logo, still allows kids to write their own programs.

As schools race to embrace tablets, let’s stand up and ask: “Are you painting over rust?”  That is a question worth asking.

Economic recovery

I’m watching something emerge here in Brazil that could bounce the US economy back to full recovery pretty fast if we were to implement it as well! Brazil’s educational system is putting hundreds of thousands of tablets in kids’ hands as part of a pilot project, prepping for a nationwide roll-out on a grand scale. What makes this interesting is not just the technology, nor even the government’s funding of innovative educational programs. There is something more going on here!

The government has decided, since this effort is funded by taxpayers, that every tablet purchased by schools will be made in Brazil – no imports allowed.  As a result, we are seeing huge investments in high tech manufacturing.  If Toshiba wants to play (and they do) their devices will need to be made in Brazil by Brazilian workers.  The same goes for any other company.

Imagine what a policy like this would do for the United States – ANYTHING the government buys needs to be made in the USA.  Since governments at all levels spend a lot of money, this infusion of new business would put Americans back to work in droves.

Of course, this wouldn’t happen overnight.  I think that companies should be given 6 months to ramp up their assembly capacity.  Next they need to look at the components themselves and increase (or in many cases, rebuild) their manufacturing capacity in this area.

The spreading of US tax-funded purchases around the world in search of the cheapest deal is a luxury we can’t afford.  And, while the example from Brazil involves computer technology, I think the US should expand this vision to everything the government buys.

Will US-built products cost more?  Given the trade agreements we have, and our race to outsource to low-wage nations, yes they probably will.  But I’m asking myself what the real cost of products is when we factor in the cost to our jobs?  And, of course, we as individuals would remain free to buy products made anywhere we want.  My proposal only applies to those things purchased by the government using our tax dollars.

Is this a good idea? What do you think?

Thoughts on Apple

One of my colleagues recently said that “Apple is art; the rest is a kludge.” While the second part of his statement is arguable, the first is not – Apple IS art, and this art has been finely sculpted over decades to its present shape. The artist behind Apple is Regis McKenna, the public relations guru who shaped Apple from the launch of its first commercial computer. He took two scruffy kids, each bright in his own way, and built a mythos around them whose stories are still being told around the fire. The creation myth involved two geniuses – one purely technical (Steve Wozniak) and one focused on marketing (Steve Jobs). This was a much easier story to craft than one suggesting that each of these individuals might have additional strengths. The idea was to make each of these characters bigger than life. Woz was in the back, busily designing the future, while Jobs was in front, leading the company to the point where Apple became synonymous with “Personal Computer” – a task that was quite a challenge in the late 1970’s, when several other vendors also produced personal computers using the same 6502 processor chip.

Regis was successful. Apple, through its story, built a following. People became attached to their Apple II computers and remained loyal even though other options were available. Price was not a factor – the original computer with a display and disk drive cost about $2,000, which, in the late 70’s, was quite a bit of money. But Apple owners saw themselves as part of the future – a future where they could do wonderful things. Of course, the entry of IBM into the market with its original PC attracted business customers who had already subscribed to the IBM myth that no one was ever fired for buying IBM.

But Apple raised the bar to new heights with the introduction of the Macintosh in 1984, bringing “the computer for the rest of us” to the masses. Instead of a clunky command-line interface, the Mac provided a graphical user interface developed at Xerox PARC, and in one stroke Apple established itself as a leader in this brave new world of computing – even when competing against the products of the company from which it had appropriated those ideas in a personnel raid on PARC. The Mac was important for another reason – it allowed the creation of the “reality distortion field” around Apple and its band of loyal customers. They saw the Mac as the ticket to coolness, and it provided a sense of belonging to a global community of like-minded folks. Performance of the technology was never the issue – loyalty withstood all onslaughts. If Steve Jobs proclaimed that something Apple did was “insanely great,” then it was. No questions asked.

As one of the folks at PARC during the early days (I tell people I’ve been a Mac user since 1973), I was a big fan of Apple’s products from the start, and was one of the outside testers of the original Mac.  I maintained Mac loyalty for a long time, moving to the Windows platform only when I started consulting for HP.  The reality is that the Mac was a better system for many reasons.  It almost always booted up each time you turned it on (in contrast to seeing the dreaded “blue screen of death” Windows users know all too well).  If the number of Mac titles was small, they were generally well crafted – especially in the creative domains.  In fact, there are some astounding Mac-only tools on the market today that, by themselves, justify purchasing this platform.

This loyalty has reaped amazing rewards for Apple over the years. Apple customers do not make decisions based on price – they continue to buy into the myth created decades before – a myth that has been embellished over the years. Apple’s failures (and there have been a few) are not mentioned in polite company. I say this while looking at the two Apple Newtons in my basement, which I see as the first pocket-sized tablet computers. Unfortunately, they were too underpowered to generate a lot of sales.

But there is a danger with over-reliance on myths – reality has a way of sneaking up on people.  Apple’s coolness in the face of Windows was due as much to the ineptitude of Microsoft as to anything Apple did.  But this is changing, and changing fast.

Take the iPad – a huge commercial success even though the first version was flawed by its exclusion of a camera. Fortunately, two forces made the original iPad a success – the continued glow of the reality distortion field, and the absence of real competition. But now competition is emerging, and the alternatives are at least as elegant as the iPad, and are definitely not in the “kludge” category.

The rapid growth of quality tablets based on Google’s Android 3 technology is making Apple run just to stand still.  Little details such as automatic backup to the “cloud,” and the ability to update the operating system without tethering it to another computer are features just added to the new version of iOS software – but these features have been in Android 3 from the beginning.

Yes, it is true that Google has an evolving mythology of its own – and the stories driven by this myth are continuing to evolve.  One could even argue that Google has created its own reality distortion field.

All this means is that there will be epic battles between these titans of technology – battles in which consumers will be collateral beneficiaries as we reap the best each has to offer in our quest to have our personal technologies exceed our expectations.

In fact, one could argue that this growing battle is insanely great!

With the explosive growth of tablet sales around the world, the debate has started regarding access to these devices by young children – from toddlers on up. Watching the ease with which my four-year-old granddaughter uses both Android and iPad tablets, I find this an interesting question – one that becomes more interesting as we see price points drop to the point where many parents will be getting powerful (and inexpensive) tablets for their kids this holiday season. Name-brand seven-inch tablets have already broken the $200 price barrier, and the prices will continue to drop for lower-end devices.

Make no mistake, though, these cheap tablets are powerful devices – not just for web searching, but as platforms for everything from painting programs to puzzles and (soon) programming languages for kids like MIT’s Scratch.

Arguing about whether kids should have access to these new devices reminds me of the arguments against children watching television that we (or, more likely, our parents) had during the rapid rise of that medium. Yet the revolution today is far greater in scope than television. Looking at just smartphones, for example, more than 50% of all new phones sold are Android-based devices. With a subscriber base of 5.3 billion cell phone accounts in the world, the impact of this technology overwhelms that of television, which is in only 1.6 billion homes (according to the data I’ve been able to find).

So, if the tablet argument is like the discussions in the past, it is because we recognize how pervasive this new technology has become.  Today’s kids increasingly expect to be able to move things on a screen by swiping their finger across it.  They seem to be coming prewired for the game.

To the issue of suitability, I would make the following argument.  It is not access to the devices that we should be caring about – that will happen anyway.  Our focus should be on the things children do with these tools.  As I’ve said for decades, the hammer used to create Michelangelo’s Pieta is not that different from the one used by the vandal who tried to destroy it.  The tool’s use is the issue – not the tool itself.

One can argue that, unlike television, tablet use is interactive and, therefore, more engaging. But that raises the question: engagement toward what end?

On this topic, MIT’s Seymour Papert has had plenty to say over the years. For decades, Professor Papert has argued that the real power of computers (and, by implication, today’s tablets) becomes unleashed when children use them to build programs of their own design. He argued that Logo (a language whose development he fostered, and a precursor to Scratch) had “no floor, and no ceiling,” meaning that a novice could start working with the language right away and continue using it over the years as his or her sophistication increased. In this sense, Logo was like a natural language in which users increase their sophistication over time. He even demonstrated a version of Logo for pre-literate children, reinforcing the idea that age was not a factor.
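
To make the “no floor, and no ceiling” idea concrete, here is a minimal sketch – not Papert’s Logo itself, but Python’s turtle module, which borrows Logo’s turtle-graphics model – showing how a beginner’s first commands and a more experienced child’s reusable procedure can live in the same language. The particular shapes and numbers are purely illustrative.

    # An illustrative sketch using Python's turtle module (a descendant of
    # Logo's turtle graphics), not Papert's actual Logo.
    import turtle

    t = turtle.Turtle()

    # "No floor": a beginner's first program – eight commands draw a square.
    t.forward(100); t.right(90)
    t.forward(100); t.right(90)
    t.forward(100); t.right(90)
    t.forward(100); t.right(90)

    # "No ceiling": later, the same idea wrapped in a reusable procedure
    # with parameters, the way a more experienced child might write it.
    def polygon(pen, sides, length):
        for _ in range(sides):
            pen.forward(length)
            pen.right(360 / sides)

    polygon(t, 6, 80)   # a hexagon; polygon(t, 4, 100) redraws the square
    turtle.done()       # keep the drawing window open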

My recommendation is that we all pay close attention to the goings-on at MIT (http://mitmobilelearning.org/) whose new lab will be home to some amazing projects, many of which will appeal to children of all ages.