Monday, December 12, 2011

After touchscreens ...

In this post, I'll share with you my vision of the after-touchscreen.

Over the years, touchscreens have evolved from the very thick screens that supported only one point of contact to the thinner ones massively used today, which can support multipoint input (more than one point detected at a time).

Since Apple launched the iPhone and its gestures, more and more touchscreens support multipoint input. I think most of the touchscreens used in mobile phones or tablet PCs now support multitouch. Most of these screens rely on a technology that requires physical contact between the user and the screen.

Now I'll focus on Microsoft: the first Microsoft product that illustrates the direction of the after-touchscreen is Microsoft Surface. It still relies on contact (users touch the screen, objects rest on it), but it uses infrared sensors to determine the position of each point. I think that is important, because for me it marks the beginning of the after-touchscreen.
The second product is the famous Microsoft Kinect. The Kinect was made to interface with the Xbox; it also works with sensors, but here the user doesn't have to touch the screen at all to interact with the information displayed on it. This new direction is confirmed by the fact that Kinect is now being used for PC applications: an open-source driver has been written, and Microsoft has released an official SDK.
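
To give an idea of what that open-source route enables, here is a minimal sketch (assuming the OpenKinect libfreenect driver, its Python wrapper, and numpy are installed) that reads one depth frame straight from the sensor, with no contact involved:

```python
# Minimal sketch: reading a depth frame from the Kinect through the
# open-source libfreenect driver (OpenKinect project), assuming its
# Python wrapper and numpy are installed.
import freenect
import numpy as np

# Grab one depth frame synchronously: a 2D array of raw 11-bit values
# where lower means closer and 2047 means "no reading".
depth, timestamp = freenect.sync_get_depth()

# The closest sensed point is a crude stand-in for the user's hand:
# no touch event is needed, the sensor observes the action itself.
row, col = np.unravel_index(np.argmin(depth), depth.shape)
print("Closest point at pixel (%d, %d), raw depth %d" % (row, col, depth[row, col]))
```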

And I think the after-touchscreen is exactly that kind of contactless interface: no need to analyze variations on the screen surface to determine the action; the sensor analyzes the action directly.


It may be interesting to create screens controlled by body movements: in the "Kinect Effect" video, for example, surgeons in a hospital browse digitized X-ray images on a screen with hand gestures during surgery. In all such situations, this method could replace the touchscreen to avoid disease transmission (caused by touching the screen).
And I don't think a Kinect is mandatory: I hope all gesture-detection technologies will be open enough that anyone with a plain webcam can use them.
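
To show that ordinary hardware can already do part of the job, here is a rough sketch using OpenCV and its Python bindings: simple frame differencing spots where something (a hand, for example) is moving in front of a plain webcam. Real gesture recognition needs much more than this, but the sensing itself requires no special device:

```python
# Rough sketch: detecting motion with a plain webcam using OpenCV
# frame differencing. This only locates movement; a real gesture
# recognizer would be built on top of this kind of signal.
import cv2

cap = cv2.VideoCapture(0)              # default webcam
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Pixels that changed between two frames are candidate motion.
    diff = cv2.absdiff(gray, prev)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Bounding box of the moving region, e.g. a waving hand.
    points = cv2.findNonZero(mask)
    if points is not None:
        x, y, w, h = cv2.boundingRect(points)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("motion", frame)
    prev = gray
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```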

Another example uses the Cisco TelePresence technology: basically, each telepresence endpoint is a screen with a camera and a microphone. If an application lets the camera analyze gestures, or if an improved Kinect-like camera is installed at each endpoint, it may be possible to have a more immersive collaboration experience: a collaborative whiteboard controlled at every endpoint by the movements of each user.
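
To make the idea concrete, here is a purely hypothetical sketch; none of the names or the port below come from Cisco's actual products. Each endpoint is assumed to run its own gesture tracker and simply broadcasts normalized hand coordinates over UDP, so every screen can draw the same shared strokes:

```python
# Hypothetical sketch of a shared whiteboard fed by gesture positions.
# Nothing here is Cisco's actual API: each endpoint is assumed to run
# its own gesture tracker and push normalized hand coordinates around.
import json
import socket

WHITEBOARD_PORT = 9999        # made-up port for this sketch

def send_stroke_point(sock, endpoint_id, x, y):
    """Broadcast one hand position (x, y in [0, 1]) to the other endpoints."""
    msg = json.dumps({"endpoint": endpoint_id, "x": x, "y": y})
    sock.sendto(msg.encode(), ("255.255.255.255", WHITEBOARD_PORT))

def receive_stroke_points():
    """Listen for stroke points from every endpoint and hand them to the renderer."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", WHITEBOARD_PORT))
    while True:
        data, _ = sock.recvfrom(1024)
        point = json.loads(data.decode())
        # A real client would draw this point on the shared canvas.
        print("draw", point["endpoint"], point["x"], point["y"])

# Sender side: enable broadcast, then stream the tracked hand position.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
send_stroke_point(sender, endpoint_id="paris-room", x=0.42, y=0.67)
```

Keeping the payload down to a normalized (x, y) pair would mean the gesture hardware at each endpoint (Kinect, webcam, or otherwise) stays interchangeable.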

I think those ideas may become reality in the next 24 months (if there is no apocalypse in 2012 :D).

And after ... maybe that:


But that's "After Kinect ..." :D
