Tuesday 12 November 2013

Tech trends, with double insights

This is a good article: a summary of, and opinion on, Gartner's latest tech predictions.

I agree with most of the opinions. Gartner makes its money from research and, as such, is prone to exaggerate some preferred predictions - possibly in the hope that the very act of proffering them will lead to them occurring. Given Gartner's influence, that's not unreasonable, but it is a bias. So it's always healthy to view such research through the lens of a few other opinions.

I agree with most of Chris Taylor's opinions, with some caveats of my own. It will be fun to review this post in a year's time...

Mobile device diversity & management: I'm with Chris on this. Enablement always trumps caution in technology, for better or worse. Technologists are always more interested in whether they can do something than in whether they should, especially in software (that's why it's 'soft', right?). So ubiquity will win out by 2018. The only constraint will be vendor lock-in, not policies or standards.

Mobile apps & applications: I think JavaScript is coming to the enterprise, so more enterprise apps will become web-based (or, more specifically, more desktop software companies will be able to build desktop-equivalent apps for the browser). However, mobile-native apps will continue in the consumer space, because there are a lot of vested skills in iOS and Android out there, and because companies like the vanity of having an app.

Internet of everything: it's seeping out, rather than exploding as many enthusiasts predicted it would. All it needs is one killer app, though, and it could rapidly come into its own. I imagine that app to be something like a cheap, SD card-sized Raspberry Pi, with sensors, micro USB connectivity (two ports: one for power, one for devices) and really simplistic, user-friendly dev software.
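
To make that concrete, here's roughly the level of simplicity I have in mind for the dev side - a throwaway sketch of mine in Python, where the sensor reading is faked with a random number and the endpoint URL is a placeholder, not any real product's API:

    # Sketch of "really simplistic" dev software: read a sensor, push the value
    # to the web, repeat. The point is the brevity, not the specific API.
    import json, random, time, urllib.request

    def read_temperature():
        # stand-in for reading a real sensor attached to the device
        return 20 + random.random() * 5

    while True:
        reading = {"sensor": "greenhouse-temp", "value": read_temperature()}
        req = urllib.request.Request(
            "http://example.com/readings",   # placeholder endpoint
            data=json.dumps(reading).encode(),
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)          # fire it at the web
        time.sleep(60)                       # once a minute is plenty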

Hybrid cloud and IT as service broker: I'm mostly with Chris on this. The issue with cloud is not security (though that's a perception that needs addressing); it is integration. But, living in a developing country, I would not be as dismissive of the connectivity and jurisdictional issues.

Cloud/client architecture: this is just the same as the old client/server arguments. It ebbs and flows in roughly 5-10 year cycles: it's all about the client, then all about the server etc. I see no reason for that to change just because those servers are moving to the cloud.

Era of personal cloud: Agree, but I prefer to see it in more simplistic terms. As Marc Andreessen once wrote (in the WSJ, I think): software is eating the world. As more of our media is encapsulated in software (be it music, documents, patient records, news feeds, address books, hotel bookings etc.) the amorphous, invisible nature of software is taking hold in a very personal way.

Software defined anything: see above. Same thing, just the developer side of it.

Web-scale IT: see above. Same thing, just the enterprise side of it.

Smart machines: a bit ambitious of Gartner to make predictions out to 2020 (that's a century in IT terms!), but I agree with the gist that there will be some inflection point ahead based on smart machines, when we'll look back on Siri and Google Now as quaint precursors to the main event.

3D printing: not convinced that this is any more than a cottage-industry enabler. It's like printing: nearly everyone has a printer in their office and in their home, but we only really use them for office and home stuff. If we want business cards or brochures we still go to the professionals. Same with 3D printing: the key inhibitors will always be the cost of materials and mass customisation by the big producers (why print my own trainers when Nike can build me a custom pair for $120?). So the main benefit may be contextual: it stimulates the customisation efforts of mass producers.

Chris' own list is good for 2014, I think. That's the near-term direction things are heading.


Wednesday 23 October 2013

OS X Mavericks: free, because it's not worth paying for


Apple's reality distortion field often engendered a wry smile, even from hardened Apple fans. You knew they were stretching the truth horribly, but you forgave them because you knew that on most other dimensions they were surpassing your expectations.

The trouble with hyperbole is that it can tip from ballsy to bravado very quickly. Leading edge may become bleeding edge. Cool dude abruptly becomes 'bit of a dick'. So when I read the headline "Apple Just Ended the Era of Paid Operating Systems", my instincts prickled. Is it really a whole new OS? Or is it like Windows 8.1, which has some pretty drastic changes over Windows 8, but which Microsoft decided to release as a free update?

Having duly installed the new OS X Mavericks, I could see little visible difference. No whizzy intro sequence like OS X of old. Hardly a 'new' OS, then. More an upgrade, which is fine unless you're spending a few $mill on pitching it as a new OS, with all the hyperbole trimmings.

So what has changed?

There were just a couple of extra icons added, without my permission, to my dock: Maps and iBooks, neither of which I use or have any interest in using. I live in Barbados, which Apple Maps, until recently, rendered as a triangular blob. It's slightly better now, but I still live in a field according to Maps, so I can't get directions to anywhere. So I'll stick to Google Maps/Earth, and forego the 'amazing' integration with Calendar, until I move to Cupertino, where I'm sure it works.

I live in the middle of a field, apparently.

As for iBooks, I think Apple's foray into books is faintly reminiscent of Microsoft's foray into music: Zune vs iTunes? Pah. iBooks vs Kindle? Good luck, iBooks, you will need it. Try supporting a couple of non-Apple devices for a start.

Almost all the other new Mavericks items are furniture re-arrangement:

  • Finder: Tags? Already had them, they're just more prominent. Tabs? Long overdue - 3rd party Finder replacement apps have had them for years. Still not as good as Windows Explorer, I'm afraid.
  • Safari: reading list and bookmarks on a sidebar? So what? And a Sharing sidebar? Because it's too much effort to open Twitter in another tab?
  • Multiple displays: different to AirPlay... because you get the menu bar on each screen? Jaw-dropping.
  • iCloud keychain: so we're syncing my keychain to my iCloud account. That's just like syncing my bookmarks only with more encryption, right?
  • Notifications: like my phone. So my mac is now almost as good as my iPhone? (But is it as good as my Android phone notifications? Or Google Now?)

Apple have always been confident in their dictatorial stance, eg. "you don't need removable storage or spare batteries in your phone". And after some initial whinging you usually come round to their way of thinking, which grows into an admiration that perhaps they know your tech needs better than you do yourself.  I had that with Apple from 2001 to 2011.

With this latest 'new' OS I think they're starting to look like a dick. Have the balls to save the hyperbole for when it really matters, Apple.


Friday 7 June 2013

Beware, Android users

The most sophisticated piece of mobile malware yet seen has recently been found by Kaspersky Lab.


Not only can it do real damage (eg. sending SMSes to premium-rate numbers), it has no user interface, gives no indication of the privileges it has (circumventing all of Android's usual 'this app can...' lists) and cannot be removed from compromised devices!

Its authors exploited no fewer than three previously unknown critical vulnerabilities, and the code is heavily obfuscated to frustrate analysis.

Details here (Securelist.com).

Kaspersky have notified Google, but this could cast serious doubt on the enterprise use of Android.

Thursday 30 May 2013

The battle for the living room


This article (ReadWriteWeb) makes a good case for the next big consumer electronics battleground being in the living room.

It mentions, but doesn't really expand on, the key success criteria too: content, intelligence and user experience. Two of these are radically different from traditional media. User experience is generational: older generations are used to being served content, to 'tuning in' to a broadcast, whereas kids these days are very selective, and fickle, about their content - they have to be, because there's so much of it. So how to find the middle ground? That will be the critical challenge, and I suspect Apple will crack it first because that's what they excel at.

Intelligence on the web is massive, literally: every click or tap you make is tracked by something, somewhere. At the very least, it's the site you are on; more likely it is also tracked by Facebook (if your browser is aware of your Facebook account), Google (ditto) and a few of the common advertising cookie trackers. Attach that to your TV and movie watching habits and that's a considerable portion of people's lives fully mapped. But data capture is only half the story. To engage and monetize you have to use the data intelligently. That's where Google have the advantage.
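
As a crude sketch of the mechanism (my own illustration, not any particular ad network's code): a third-party 'tracking pixel' served from one domain sets a cookie the first time it sees you, then reads that same cookie back from every other site that embeds it, tying all those visits to a single profile. In Python it's barely twenty lines:

    # Toy cross-site tracking pixel. Any page embedding
    # <img src="http://tracker.example:8000/pixel.gif"> makes the browser send
    # this server its tracker cookie plus a Referer header, so one server sees
    # your visits across many unrelated sites.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from http.cookies import SimpleCookie
    import uuid

    class Pixel(BaseHTTPRequestHandler):
        def do_GET(self):
            cookie = SimpleCookie(self.headers.get("Cookie", ""))
            visitor = cookie["id"].value if "id" in cookie else str(uuid.uuid4())
            page = self.headers.get("Referer", "unknown page")
            print("visitor %s was just on %s" % (visitor, page))  # the 'intelligence'
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Set-Cookie", "id=%s; Path=/" % visitor)
            self.end_headers()
            self.wfile.write(b"GIF89a")   # not a real image, but enough for a sketch

    HTTPServer(("", 8000), Pixel).serve_forever()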

Content (which the article mainly focuses on) is the same issue as traditional media: it is still king, but cheaper to produce than ever. Sure, good stuff is still expensive, but that's about production, not distribution.

There's one other aspect that the article does not mention, but that I think will be vital: social. Sharing content is a big thing already, but there's still some friction when you do it. Imagine a monetized version where you get a micropayment whenever you successfully share a piece of content (like, say, a movie trailer). There's also the participation aspect of social: gaming. Currently, there are very few multi-platform online games: an Xbox player cannot play with a PlayStation player online. On some games they can play against PC players, but for quick-reaction games the PC players have the advantage of a richer, more responsive interface (a mouse has a greater, more accurate range of movement, and a keyboard has more programmable combinations than a gamepad). This limitation is tolerable because consoles are primarily about games, with media playing being a bonus feature. In the future living room, audiences will not tolerate being restricted to sharing only with others on the same platform.

Thursday 23 May 2013

Optimum smart phone size?

I think smart phones are getting too big.

Usually technological progress is about miniaturisation: packing the same or more into less. The computer industry, while adhering vigorously to this principle - to the extent of inventing a law about it (Moore's Law) - also has a propensity to get ahead of itself. PC specs in the 1990s were all about megahertz and gigahertz processors. These days clock speeds are pretty standard: about 1.5GHz for small laptops, 2.5GHz for larger laptops, and 3GHz for desktops. They've plateaued.

Now think about the progress of mobile phones. The first mobile I had was a Nokia 2110 that was the size and weight of a slim brick. It was cool because, as well as making calls, you could manage your contacts. Phones were less about features and more about size/weight, coverage and cost. Then came feature phones, with cameras and basic apps. Software and computer companies stirred at the potential: a new battle ground. The smartphone was born.

Software, by its nature, evolves rapidly and has an insatiable appetite for hardware. The old PC expression was that "Intel builds them, and Microsoft fills them" - no sooner was the latest PC processor released than the latest Windows OS would require it to run properly. And the software process was always linear: more features = better.

But mobile is different. A modern mobile already has enough processing power for me to write a book, do my accounts and even edit and shoot a short film. But would I do any of that on my phone? Not if I valued my eyesight, thumbs and patience. What I want in a phone is still just portability and connectivity. Sure, I want to connect in more ways than ever before: text, voice, video and all the social gloop in between, but try carrying your phone around in airplane mode for a couple of days. It rapidly devolves into a toy: a Walkman or a Game Boy. Amusing, yes, but not essential.

So, it's really just a screen to all the things I want to connect to. A little window to my online world. So I sort of understand the implied trend that this little window should be as big and bright and hi-res as possible.

But what about that other essential attribute: portability? If I can't wear it and it doesn't fit in my jeans pocket, it's not portable. Pouches on belts do not count: I'm not on manoeuvres, nor am I a hobbit. While the latest big 5-inch-screen, flat smartphones look slick and fun to fondle in the shop, once you get one and put a protective case on it you're basically trying to carry a 1980s calculator in your pocket. I may have been a geek in the '80s but I only ever had a calculator in my bag for maths lessons.

So why bigger? Because marketers need a killer feature to brag about, and screen size is simple to understand and literally visual. There's another, more cunning, reason: battery life. Mobile processors are getting faster, hotter and more power-hungry. That kills battery life, so make the battery bigger - but don't make the phone thicker, because thick is ugly. So make the screen bigger. People think they are getting more screen, and they are, but really it's there to make room for the bigger battery that the bigger processor (and bigger screen!) demands.
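
A quick back-of-envelope makes the point (my own rough numbers, assuming a 16:9 panel and that battery footprint scales with screen area at a fixed thickness):

    # At fixed thickness, battery volume scales with footprint area, and a
    # 16:9 screen's area scales with the square of its diagonal.
    def screen_area(diagonal_inches, aspect=(16, 9)):
        w, h = aspect
        scale = diagonal_inches / (w ** 2 + h ** 2) ** 0.5
        return (w * scale) * (h * scale)           # square inches

    small, big = screen_area(4.0), screen_area(5.0)
    print("4-inch screen: %.1f sq in" % small)     # ~6.8
    print("5-inch screen: %.1f sq in" % big)       # ~10.7
    print("extra room for battery: %d%%" % round((big / small - 1) * 100))  # ~56%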

Apple did a clever thing with the iPhone 5. They made it bigger, but only taller: they kept it narrow, so that it could still fit in a pocket. Also, the screen is still small enough for your thumb to travel to all corners without having to adjust a one-handed grip, unlike all the bigger-screen smartphones, which effectively require two hands to use.

The final silliness: screen density. The latest smartphones have over 400 pixels per inch, yet at typical viewing distances the naked eye can only effectively discern around 300 ppi. So all those extra pixels are for the marketing feature lists, not for your eyes.
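
For the record, the arithmetic behind those numbers, using a 5-inch 1080p panel (typical of 2013's flagships) as the example:

    # Pixel density = pixels along the diagonal / diagonal length in inches.
    w_px, h_px, diagonal_in = 1920, 1080, 5.0
    ppi = (w_px ** 2 + h_px ** 2) ** 0.5 / diagonal_in
    print("%.0f ppi" % ppi)   # ~441 ppi, comfortably past the ~300 ppi the eye
                              # can resolve at a typical viewing distance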

Where I'd like to see it go is where Mark Shuttleworth's Ubuntu is dabbling: using your phone as a PC. Hook it up to a display, keyboard and mouse, and you have a proper PC. Take off the peripherals, and you're back to a simple touch interface. Add a secure wireless display standard to all displays (TVs and monitors) and you have no need of oversized screens. The touchscreens of today will become like the little outer screens on the old clamshell mobiles. That's worth putting in your pocket.

Wednesday 8 May 2013

China hacking

While watching this video ( http://bloomberg.com/share/video/mB_vR3VcTA$pjOWVcVRO7w source: Bloomberg) it struck me as ironic that China's supposed control over its people could work so badly against it, at least from a diplomatic perspective.

The telling stat is that 40% of the world's hacks emanate from China, but only 10% from the US. Yet nobody accuses "the US" of hacking, because state-sponsored naughty behavior is not what the US is about, right? Whereas when a hack comes from China, it is heavily implied that it is state-sponsored.

Maybe they just have script kiddies and organized crime too? There are 1.2 billion Chinese, so they could have hacking clubs larger than formal cybercrime entities in the West. Certainly, the organized crime is bigger. Also, given China's regime, those hackers would have to be pretty adept at covering their tracks, so you'd imagine they'd figure out how to point the blame at the regime's institutions - especially as it would suit the West's ideology to blame the Chinese state rather than a bunch of very good hackers who happen to be from China.

Maybe it is state-funded Chinese hacking, and they are probing ways to disrupt western economies and defense industries. While I can appreciate the latter, I don't think they'd need to resort to hacking to disrupt western economies. So who has been hacking Wall Street firms? Organized criminals seems a better bet to me.

Thursday 21 February 2013

Biometric security

Nearly 7 years ago, all 23 chromosome pairs of the human genome were mapped, after over 10 years' effort and billions of dollars spent.

Now, you can get your DNA mapped for about $5000-$10,000 and that's expected to drop to $1000 soon. And the data amounts to about 8.5GB - small enough to store on your phone. This is amazing science and bodes well for disease cures and longevity, but what about security and identity?
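
That size figure passes a rough sanity check (my own back-of-envelope, not the sequencing companies' numbers):

    # The human genome is roughly 3.2 billion base pairs, and you carry two
    # copies (one set of chromosomes from each parent).
    base_pairs = 3.2e9
    copies = 2
    bytes_per_base = 1            # one character (A, C, G or T), uncompressed
    size_gb = base_pairs * copies * bytes_per_base / 1e9
    print("%.1f GB" % size_gb)    # ~6.4 GB raw; with annotations and metadata
                                  # it lands in the ballpark of the quoted 8.5GB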

Right now, if I handed a thumbdrive to you with my personal genomic sequence on it, there's not much you could do with it, except take it to some clever people to find out what diseases I'm genetically prone to. But what about in the future? Science fiction writers like to rave about DNA fingerprinting: locks only unlocked by your DNA, or guns imprinted with your DNA so that only you can fire them. Crimes solved because the perp's DNA was found at the scene.

Now, what if you could use my DNA to grow a replica of my hand, my blood, my eye, etc. - enough to trick the lock, or the gun, or the forensics team? How would I then prove I was me?

The worrying thing about DNA for identity is that you can't just reset it, like a password. It IS you. That 8.5GB dataset is your recipe. Maybe the boffins have thought of that, or maybe not? From a security access perspective it's no big deal: we've had multi-factor security for years, whereby access requires something you have (eg. a card), and something you know (eg. a PIN). But if someone takes your card, you can cancel it. If someone takes your DNA, what can you do?
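
For comparison, the card-and-PIN model in code - a toy sketch with invented names and data - where the crucial property is that the 'something you have' can be cancelled and reissued, which is precisely what a DNA factor could never offer:

    # Toy two-factor check: something you have (a card) + something you know (a PIN).
    import hashlib

    REVOKED_CARDS = {"card-1234"}     # lost or stolen cards can simply be cancelled
    PIN_HASHES = {"card-5678": hashlib.sha256(b"4921").hexdigest()}

    def access_granted(card_id, pin):
        if card_id in REVOKED_CARDS:                  # the 'have' factor is resettable...
            return False
        expected = PIN_HASHES.get(card_id)
        if expected is None:
            return False
        return hashlib.sha256(pin.encode()).hexdigest() == expected  # ...and so is the 'know' factor

    print(access_granted("card-5678", "4921"))        # True
    print(access_granted("card-1234", "4921"))        # False - that card was cancelled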

Monday 7 January 2013

Apple Developers

My team has an iPhone app that was originally developed over a year ago. It's not great - it was developed by a third party who clearly didn't understand our core product - but it was a good start. The project manager at the time got us onto the Apple Enterprise Developer Program because it was envisaged that we'd only distribute the app per client, through their existing secure web interface.

I arrived and decided that we needed to make the app generic (rather than recompiling it for each client) and distribute it through the App Store. The Apple Enterprise Developer Program was up for annual renewal in December, so I renewed, thinking that, since it cost double the Basic Developer Program, it would be an extension of the basic Program - that I could simply log into iTunes Connect and publish our app to the App Store.

But no. I had to pay extra for the Basic Developer Program. Not only that, but the only way I could find this out was by phoning their support people. The Developer website was curiously dumb (user unfriendly, Apple?) about my options. It also listed the Programs I was enrolled in as though I could join multiple Programs, yet it seems I could not. Even the support person seemed surprised that I could not add the Basic Developer Program to my existing Enterprise Developer Program license.

So now I have to start the entire licensing process from scratch. If you've never done it, it is a palaver involving Dun & Bradstreet numbers, certificates galore (user certificates, app certificates etc.) and patience. Lots of patience.

All this so that I have the privilege of paying them 30c of each App Store-earned $ - the so-called Apple Tax. I'm not averse to paying taxes if they are going towards a good cause, but when they're forcing me to pay twice for effectively the same thing (the Basic Developer Program), and then taxing my revenue stream, I have to ask why I should bother.

The reason most people bother is that it's the "Jesus Phone", the darling of trendsetters and jetsetters all over the world. But the sheen is starting to fade, and the competition has not only closed the gap but appears to be evolving faster. Faced with such adversity, the key audience you must do everything to retain is your loyal acolytes: the developers. That is the true essence of Microsoft's longevity, and Apple would do well to learn from it.