Response to "Invisible Apps"

While perusing one of my favorite tech websites, TechCrunch, I found an interesting article by Eli Portnoy that was too close to my research to ignore: "Where Are The Invisible Apps?" As a quick summary, the article reflects on a prediction from Matthew Panzarino that apps would become increasingly invisible. The article described invisible apps as:

...apps that would live in the background, anticipating our needs based on sensor and contextual data, and do things for us before we even had to ask. What an exciting vision.

The article then outlines several barriers that applications currently face in order to realize this dream: privacy, battery constraints, programming complexity, and the need for intelligent reasoning from raw sensor data. These are all important considerations, but there is a greater problem that makes invisible apps harder to realize.

The problem is the siloed nature of applications.

Each application greedily grabs the sensor data it needs, might do some minimal computation locally, and then ships it off to its own backend. This approach works fine for current applications, but as we demand more intelligence and background processing, we will need to increase the frequency of sampling and the sharing of data to give our devices enough knowledge to reason. In other words, we need to collect more data and share it across applications and devices to make our devices smarter. Let me outline why the siloed approach conflicts with this need, particularly with respect to the problems outlined in the original article.
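To make the pattern concrete, here is a minimal sketch of the siloed loop every app tends to repeat; all of the class, sensor, and endpoint names here are hypothetical, purely for illustration:

```java
import java.util.Map;

/** A sketch (hypothetical names throughout) of the siloed pattern every app repeats today. */
public class SiloedApp {

    /** Stand-in for whatever sensor access the platform exposes. */
    interface Sensors { double[] readLocation(); }          // {lat, lon}

    /** Stand-in for the app's own private backend. */
    interface Backend { void upload(Map<String, Object> sample); }

    private final Sensors sensors;
    private final Backend backend;

    SiloedApp(Sensors sensors, Backend backend) {
        this.sensors = sensors;
        this.backend = backend;
    }

    void onWakeup() {
        // 1. Greedily grab the raw data this app cares about.
        double[] loc = sensors.readLocation();

        // 2. Do some minimal computation locally.
        boolean nearHome = Math.abs(loc[0] - 30.28) < 0.01;

        // 3. Ship it off to the app's own warehouse; no other app (and not the user)
        //    can see or reuse this context afterwards.
        backend.upload(Map.of("loc", loc, "nearHome", nearHome));
    }

    public static void main(String[] args) {
        // Every app on the device runs its own copy of this loop against its own servers.
        new SiloedApp(() -> new double[] {30.2849, -97.7341},
                      sample -> System.out.println("POST https://example-app.invalid " + sample))
                .onWakeup();
    }
}
```

Multiply that loop by every app on a device and the duplication of sampling, storage, and uploading becomes the core of the problems below.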

Privacy

The more we sense about a user, the more frightening it becomes how much that data reveals. Users are just starting to stand up for their data privacy, and they are unlikely to allow 3x or 10x more private sampling to happen outside their control. With the current siloed approach, users aren't in control of their data once it's sent off to data warehouses. All they can do is grant access to sensors or decline to use an application. Even for existing apps it seems ridiculous that our only options are to grant Facebook Messenger* access to the microphone, location, camera, drawing over other apps, control of the phone's sleep state, audio settings, and more, or not use the app at all. This kind of access control is much too simplistic for the next era of applications.

*I will say that Facebook Messenger is one of my favorite recent apps. The ridiculous permission list example is just to demonstrate the state of current app permissions.
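To illustrate how coarse that model is compared to what richer sensing would require, here is a hypothetical sketch of finer-grained, user-scoped grants; the Grant type and its fields are invented for illustration, not an existing platform API:

```java
import java.time.Duration;

/** Hypothetical sketch of finer-grained permissions than today's all-or-nothing model. */
public class ContextPermissions {

    enum Granularity { CITY, NEIGHBORHOOD, EXACT }

    /** A grant the user can scope and revoke, instead of "allow everything or don't install". */
    record Grant(String app, String sensor, Granularity granularity, Duration retention) { }

    public static void main(String[] args) {
        // The user decides how much each app sees and how long the data is kept.
        Grant messenger  = new Grant("messenger", "location", Granularity.CITY, Duration.ofDays(1));
        Grant runTracker = new Grant("run-tracker", "location", Granularity.EXACT, Duration.ofDays(30));

        System.out.println(messenger);
        System.out.println(runTracker);
    }
}
```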

Battery Constraints

Devices continue to amaze with advances in battery life and computational power. We are now able to consider the possibility of continually polling for location and other sensor data. However, if each app gathers its own data selfishly, the result will be hugely inefficient with respect to battery life and storage. We need a centralized content provider that frequently samples the sensors and delivers the data to apps when they need it.
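As a rough sketch of the idea (again with invented names), a single shared provider could own the sampling schedule and hand cached readings to every app, so the hardware is only woken once per reading:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch of a shared sensor provider: sample once, serve every app. */
public class SharedSensorProvider {

    /** One reading per sensor, refreshed on the provider's own schedule. */
    private final Map<String, Object> latest = new ConcurrentHashMap<>();

    /** Called by the provider's battery-aware sampling loop, not by individual apps. */
    void refresh(String sensor, Object reading) {
        latest.put(sensor, reading);
    }

    /** Apps read the cached value instead of waking the hardware themselves. */
    Object query(String sensor) {
        return latest.get(sensor);
    }

    public static void main(String[] args) {
        SharedSensorProvider provider = new SharedSensorProvider();
        provider.refresh("location", new double[] {30.2849, -97.7341});

        // Two different apps reuse the same sample; the GPS was only powered once.
        Object forWeatherApp = provider.query("location");
        Object forFitnessApp = provider.query("location");
        System.out.println(forWeatherApp == forFitnessApp);  // same cached reading
    }
}
```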

Programming Complexity

Increasing the number of sensors and the frequency of sampling will only further complicate the current APIs. We could no longer get by with enumerations of four sensing states (i.e., No Power, Low Power, Balanced Power, and High Accuracy). We need a higher level of abstraction, that of context, for programmers to efficiently and effectively deliver awesome applications.
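Those four states correspond to the kind of power-level priorities Android exposes for location requests today. The sketch below contrasts that style of tuning with a hypothetical context-level request; the ContextEngine interface is invented here to illustrate the abstraction, not an existing library:

```java
/** A hypothetical contrast between power-level tuning and a context-level abstraction. */
public class ContextAbstraction {

    // Today: the programmer reasons in hardware power states, roughly the four
    // priorities exposed for location requests.
    enum SensingPriority { NO_POWER, LOW_POWER, BALANCED_POWER, HIGH_ACCURACY }

    // With a context engine (invented API): the programmer asks a question about the user
    // and lets the engine decide which sensors to sample, how often, and at what cost.
    interface ContextEngine { boolean isUserCommuting(); }

    static void today(SensingPriority priority) {
        // ...configure a request, register listeners, fuse the readings yourself...
        System.out.println("Sampling at " + priority);
    }

    static void withContext(ContextEngine engine) {
        if (engine.isUserCommuting()) {
            System.out.println("Prefetch the evening playlist");
        }
    }

    public static void main(String[] args) {
        today(SensingPriority.BALANCED_POWER);
        withContext(() -> true);   // stub engine that claims the user is commuting
    }
}
```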

Reasoning from Sensors

This one is particularly important, so I'll split it into two parts:

  1. Sensor data will not be enough to make devices smarter.
    Sensors play an important role in context, but we will also need the domain knowledge each app acquires through use. This would allow for reasoning across applications, both within domains and across them. For example, your budgeting app might see purchases made through Venmo and automatically update your budget, or a music app might recommend you listen to a band's album immediately after you purchase concert tickets. It can go even deeper, deriving mood from contextual application data supplemented with sensor data. This kind of reasoning across apps is difficult, if not impossible, given current public siloed APIs (see the sketch after this list).

  2. Context is most effective when it is shared.
    Not only is sharing between apps on a particular device important (as shown above), but so is sharing between devices. Knowing that everyone you pass has purchased coffee from a particular store might encourage you to try their brew, or passing a fellow runner might give you information about a new running route. Ultimately, the people around you have rich context profiles to share, and tapping into this resource will make our apps that much smarter.
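Here is a minimal sketch of what such a cross-app query might look like; the shared event store and every name in it are hypothetical, meant only to illustrate reasoning over another app's domain knowledge:

```java
import java.util.List;

/** Hypothetical sketch: one app reasoning over context contributed by another app. */
public class CrossAppContext {

    record ContextEvent(String sourceApp, String domain, String detail) { }

    /** Stand-in for a shared, user-controlled context store on the device. */
    static List<ContextEvent> sharedContext = List.of(
            new ContextEvent("venmo", "purchase", "coffee: $4.50"),
            new ContextEvent("ticketing", "purchase", "concert: The Example Band"));

    public static void main(String[] args) {
        // A budgeting app updates itself from another app's purchase events...
        sharedContext.stream()
                .filter(e -> e.domain().equals("purchase"))
                .forEach(e -> System.out.println("Budget entry from " + e.sourceApp() + ": " + e.detail()));

        // ...and a music app could react to the concert purchase by queueing that band's album.
    }
}
```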

So what am I doing about it? Let me give a quick summary of my related research. We are building a mobile architecture that can support the storage and indexing of vast amounts of contextual sensor data by onloading the data (storing it on the device). Our approach aims to give control of the data back to the user and gives them the opportunity to grant apps access to their entire historical context, so apps can reason in a more holistic and intelligent manner. We want to create a centralized context engine that consumes sensor data and application usage in order to provide applications with rich contextual information across domains. If you would like to chat more about this, feel free to email me: nathanielwendt [at] utexas [dot] edu.
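As a very rough illustration of the kind of interface we have in mind (everything below is a hypothetical sketch, not our actual implementation), an app would query the on-device engine over the user's history rather than pulling raw sensor streams itself:

```java
import java.time.Instant;
import java.util.List;

/** Hypothetical sketch of an on-device context engine API (not our actual implementation). */
public class OnDeviceContextEngine {

    record ContextSlice(Instant time, String place, String activity) { }

    private final List<ContextSlice> history;

    OnDeviceContextEngine(List<ContextSlice> history) {
        this.history = history;   // stored and indexed on the device, not in a remote warehouse
    }

    /** Apps ask questions over the user's whole history, subject to the user's grants. */
    List<ContextSlice> where(String activity) {
        return history.stream().filter(s -> s.activity().equals(activity)).toList();
    }

    public static void main(String[] args) {
        OnDeviceContextEngine engine = new OnDeviceContextEngine(List.of(
                new ContextSlice(Instant.parse("2015-07-01T07:30:00Z"), "town lake trail", "running"),
                new ContextSlice(Instant.parse("2015-07-01T09:00:00Z"), "office", "working")));

        // e.g. a running app reasons over the user's full running history, on the device.
        System.out.println(engine.where("running"));
    }
}
```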

As a brief aside, the author of the original article, Eli Portnoy, is the co-founder of Sense360, which seems to be trying to solve the problem posed by the article (a clever play). I don't have enough insight into whether or how they are working to solve some of the problems discussed in this post, but it is interesting to see a startup working in this space.
