Augmented reality (AR): the term does not exactly roll off the tongue. But the concepts behind the technology are beginning to change how we think about ourselves, objects and the people in the world that surrounds us.
I am no expert on AR, but over the past few months I have seen enough examples of the way mobile devices change our reality to start wondering whether what I am looking at is really what I think it is. With Google Glass, people will see a data layer that is not visible to the naked eye. Through an iOS or Android device, a person can now use apps that provide a different context for playing games, monitoring environments or tracking one's own brain activity.
I asked people developing technology for the AR world what they see emerging. Here's what they said:
Vikas Reddy, co-founder of Occipital, wrote in an email interview that AR has not quite lived up to its potential because of the limited ability to track and map the real world. But as computer vision algorithms and hardware improve, he said, the camera will become the most important sensor and input mechanism, not just for AR but for all computing:
Think about how much visual information each person processes on a daily basis while going about their lives. Almost none of this information is accessible for computation yet.
Today, your smartphone's computational reach into its surroundings ends at its touchscreen surface. To your device, the real world isn't a canvas of interactivity. Soon, however, computer vision will be used to make real-world environments computationally interactive and fun, thereby extending the computational reach of your device into the visual space around you.
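To make that idea a little more concrete, here is a minimal sketch, assuming OpenCV and a webcam are available, of the kind of low-level signal such a system starts from: detecting trackable corner features in a single camera frame. The parameter values are illustrative only, not Occipital's.

```python
# Rough illustration (not Occipital's code): find trackable corner features
# in one camera frame with OpenCV -- the kind of raw visual signal an AR
# system builds on when it maps the world around the device.
import cv2

cap = cv2.VideoCapture(0)   # default camera
ok, frame = cap.read()
cap.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Shi-Tomasi corner detection; the thresholds here are illustrative.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=10)
    found = 0 if corners is None else len(corners)
    print(f"Found {found} trackable features in this frame")
else:
    print("No camera frame available")
```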
At the Blur Conference, Sphero CEO Paul Berberian gave me a demo of a new game called "Sharky the Beaver," which TechCrunch's Romain Dillet wrote about earlier this month. The game is built around Sphero's robotic ball, which serves as a rolling marker that the user controls through a Bluetooth-enabled device. As the ball rolls across the floor, the user sees Sharky on screen, bouncing around and eating cupcakes. By combining two streams of data, the experience moves between the real world and the virtual one fairly seamlessly.
The technology behind Sharky is available to developers as an SDK. A likely outcome is a library of avatars that people control via the little, flashing robotic balls. A furniture company, for instance, might create a set of avatars that let people see how tables and chairs would look by rolling the ball around the living room.
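As a hypothetical sketch only (the names SpheroBall, ARScene and BallPose are invented stand-ins, not the actual Sphero SDK), here is roughly how the two data streams described above might fit together: commands go out to the physical ball over Bluetooth, while camera tracking comes back to anchor the virtual avatar.

```python
# Hypothetical sketch -- SpheroBall and ARScene are invented stand-ins, not
# the real Sphero SDK. The point is the two data streams: commands going out
# to the physical ball, and tracking coming back to place the virtual avatar.
from dataclasses import dataclass

@dataclass
class BallPose:
    x: float  # estimated position in the room, from camera tracking
    y: float

class SpheroBall:                        # stand-in for the Bluetooth-controlled ball
    def roll(self, heading_deg: float, speed: float) -> None:
        print(f"roll heading={heading_deg} speed={speed}")

class ARScene:                           # stand-in for the camera/rendering layer
    def track_marker(self) -> BallPose:
        return BallPose(x=1.2, y=0.4)    # would come from computer vision
    def draw_avatar(self, name: str, pose: BallPose) -> None:
        print(f"draw {name} at ({pose.x:.1f}, {pose.y:.1f})")

ball, scene = SpheroBall(), ARScene()
ball.roll(heading_deg=90, speed=0.3)                 # stream 1: drive the real ball
scene.draw_avatar("Sharky", scene.track_marker())    # stream 2: overlay the virtual one
```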
I also had the chance to talk at Blur with InteraXon co-founder Ariel Garten about the company's brain-sensing headband, which lets your brainwaves serve as a way to monitor concentration levels or as a means of controlling the window shades or lights in a house. Its first in-house app helps with brain fitness for "better attention skills, improving your memory, reducing anxiety, building a more positive attitude and staying motivated."
Chris Aimone, InteraXon's CTO, told me in an email how this kind of technology intersects with AR:
There are a number of excellent ways that brainwaves and AR fit together. People generally refer to two kinds of AR: glasses-style AR, where one wears a pair of glasses and the world is augmented or mediated on the screen; and iPhone-camera-style AR, where one holds up an iPhone and new layers are added to the scene.
Google Glass-style AR provides the opportunity for collecting brainwave data because you have a continuous-wear device that can continuously record brainwave signals. Adding brainwaves to this environment lets the system show real-time information about you, presented all the time. For example, it could continuously register and stream your level of stress throughout the workday. It also allows the computer system to do a better job of presenting contextually aware overlays: it can provide content and augmentations that take into consideration not just information informed by place or visual input, but also the context of the user. Many of these systems are "context aware," and adding the context and state of the user informs what kind of information is presented and in what way. For example, are you sleepy and therefore want information about hotels in the area? Are you cognitively maxed out, so only pertinent info should be presented?
Brainwaves in an AR system also allow for real-time neurofeedback. This would let you know your brain state and give you the opportunity to optimize it, choosing and being guided into the desired state as you go about your day.
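As a rough illustration of the "context aware" idea Aimone describes (hypothetical code, not InteraXon's API), a glasses-style system could use coarse, normalized estimates of the wearer's state to decide what the overlay shows:

```python
# Hypothetical sketch -- not InteraXon's API. It illustrates the idea from the
# quote above: use a coarse estimate of the wearer's state to decide which
# overlay a glasses-style AR display should present.

def pick_overlay(stress: float, drowsiness: float) -> str:
    """Both inputs are assumed 0..1 scores a headband might estimate."""
    if drowsiness > 0.7:
        return "nearby hotels and coffee"      # sleepy: surface rest options
    if stress > 0.7:
        return "only urgent notifications"     # maxed out: trim the overlay down
    return "full ambient layer"                # relaxed and alert: show everything

print(pick_overlay(stress=0.2, drowsiness=0.9))  # -> nearby hotels and coffee
print(pick_overlay(stress=0.8, drowsiness=0.1))  # -> only urgent notifications
```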
But what is the future of augmented reality? Amber Case, cyborg anthropologist and co-founder of Geoloqi, said augmented reality will become interesting when the barriers to creating custom objects, animations, apps and experiences are drastically lowered. As with Flash or the App Store, AR becomes interesting when these experiences become very personal or are shared between friends.
She added:
Games and tacky 3D animations will only go so far in AR. The real measure of AR is when it solves real-world problems that may seem boring and everyday, with a realistic and minimal interface. When designing for AR, think of the minimum viable interface instead of the shiny one and work from there. Most AR has had an exciting "wow" factor that lasts for about 15 seconds; it is a big jump from there to useful everyday applications. Think of the interface of Google: there's practically nothing there. It doesn't get in the way of interaction; it exposes the data in such a way that it can be interacted with.
Bonus! If you want to think about the future of AR, think about how it can be abused or pranked with. People think about the negative things, but the focus is always on adults. Think of kids growing up with this tech and the ability to code. Think of a future in which AR bullying is a fun prank for kids who are just learning to hack and code. A bunch of kids can put an AR "kick me" sign on some other kid, or augment them in some other way, and share that layer of reality with a small group of friends. Someone takes a picture and gets a bunch of upvotes from friends. This is AR plus social permissions. The person who is being made fun of can't see the augmentation, but they understand what's happening and have to retaliate.