Context-Aware Applications

Historically, computers have assumed a simple interaction model with their users:

  • The user provides input to the computer
  • The computer processes the input
  • The computer presents the user with the output

That is a straightforward one-to-one interaction, nearly always initiated by the user; the computer behaves reactively, not proactively. This simple way of working was fine when a computer had a relatively simple set of features and functionalities and could count on the user’s full, undivided attention for the tasks that needed to be performed.

Nowadays, however, computers are used for a very wide range of functions, rarely limited to pure computing; they include general organizational, communicative, and other kinds of capabilities, each oriented toward a single task. At the same time, many new form factors are appearing for devices and applications that want to interact with the same user and bring new, sometimes overlapping, functionalities into the picture.

This abundance of available technology offers so many possibilities that the average user is unable, or unwilling to spend the time, to select the functions and features that are most useful in a particular situation, so applications are not immediately relevant to the user. The multitude of applications sending information to the user causes information overload and exposes a common flaw: they are not adapted to sharing the user’s attention with other applications, and they don’t converse well in a group, as they don’t listen to each other, but only to the user.

Information overload leaves the user continuously interrupted and switching tasks, which reduces productivity. At its worst, users must consciously ignore their computers to keep working efficiently, valuable information can be lost, and the applications are effectively rendered useless. A need has arisen to prioritize users’ attention and present them with the right functionality at the right moment, or the right piece of information in the right context.

For this to become possible, computers need to be able to perceive the user’s situation, understand it well enough to decide which functionalities are most useful to offer or perform, proactively perform them without the user’s explicit request, and finally present the output to the user if it is relevant. They should also move from a simple one-to-one communication model to a many-to-many interaction in which they co-ordinate their efforts to be of value to the user. They have to become “context aware”.

Sounds like science fiction? It doesn’t need to be – a lot of the necessary technology is already available.

For a wide range of industries, different kinds of technologies have been developed that could be used to collect data on a user’s “context” (a sketch of how these channels might combine follows the list):

  • Sensors to perceive information about the geophysical environment: location, spatial surroundings, noise, light, meteorological conditions
  • Medical technologies originally designed for monitoring patients to have input on the user’s biophysiological condition or emotional state
  • Algorithms like the ones used by social networks to learn about the user’s social situation: relationship, mood, peer groups
  • Networks detecting the proximity of other devices to determine the digital infrastructure in which the user finds him or herself
  • Relevant news that could influence behavior: traffic, flight status, stock prices, or the current phone conversation
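The diversity of these channels suggests a common data model. Below is a minimal, hypothetical Python sketch of one way to merge them into a single “context snapshot”; every name and field is illustrative, not a standard, and each channel is optional because no single device can be assumed present.

    # Hypothetical sketch: merging the channels above into one snapshot.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class ContextSnapshot:
        # Geophysical environment (sensors)
        location: Optional[tuple[float, float]] = None   # (latitude, longitude)
        noise_db: Optional[float] = None
        light_lux: Optional[float] = None
        # Biophysiological condition (medical-style monitors)
        heart_rate_bpm: Optional[int] = None
        # Social situation (social-network-style algorithms)
        peer_group: Optional[str] = None
        # Digital infrastructure (proximity detection)
        nearby_devices: list[str] = field(default_factory=list)
        # Relevant news feeds (traffic, flights, stocks, ...)
        alerts: list[str] = field(default_factory=list)

        def usable(self) -> bool:
            """A snapshot remains usable even when most channels are missing."""
            return any([self.location, self.heart_rate_bpm, self.nearby_devices])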

Thanks to the many different kinds of devices that can be equipped with computers today (phone, tablet, glasses, watch, shoes, other wearables…), the data can be gathered through a variety of channels. The Internet already spans the planet to provide that interconnectivity. A lot of work still needs to be done, however, to standardize the communication between devices and applications, so they can accept contextual inputs from whatever digital infrastructure is available: everything should still work if, for example, you forgot your phone or bought another brand’s watch.
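To make “work with whatever is available” concrete, here is a hypothetical sketch of a pluggable source interface: a hub polls whichever sources respond and simply skips the rest. The interface and class names are invented for illustration.

    # Hypothetical sketch: the hub degrades gracefully when a device is absent.
    from typing import Protocol

    class ContextSource(Protocol):
        name: str
        def available(self) -> bool: ...
        def read(self) -> dict: ...      # channel-specific key/value readings

    class ContextHub:
        def __init__(self, sources: list[ContextSource]) -> None:
            self.sources = sources

        def gather(self) -> dict:
            """Merge readings from every reachable source; skip the rest."""
            context: dict = {}
            for source in self.sources:
                if source.available():   # e.g. False when the phone was forgotten
                    context.update(source.read())
            return context

    class PhoneGPS:
        name = "phone-gps"
        def available(self) -> bool:
            return False                 # the phone was left at home today
        def read(self) -> dict:
            return {"location": (51.05, 3.72)}

    hub = ContextHub([PhoneGPS()])
    print(hub.gather())                  # -> {} : the system keeps working regardless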

It is important that the gathered data is correctly interpreted in order to reach a good understanding of the user’s situation. Devices and applications need pre-set profiles for common situations and locations, but they also need the ability to learn and adapt. This requires a fair amount of artificial intelligence. Nevertheless, current advances in business intelligence and big-data analysis have shown that we can build self-learning programs with the computing power to arrive at a good understanding of human concepts. We should be able to do the same to identify someone’s habits, recognize and categorize someone’s tasks, assign relative importance to a person’s contacts, and so on.
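As one illustration of what “learning habits” could mean at its simplest, a frequency count over observed behavior already captures regular patterns; heavier machine learning would refine this. The sketch below is hypothetical, with made-up task names.

    # Hypothetical sketch: predict the likely task from past observations.
    from collections import Counter, defaultdict
    from typing import Optional

    class HabitModel:
        def __init__(self) -> None:
            # For each hour of the day, count which task the user was doing.
            self.by_hour: dict[int, Counter] = defaultdict(Counter)

        def observe(self, hour: int, task: str) -> None:
            self.by_hour[hour][task] += 1

        def likely_task(self, hour: int) -> Optional[str]:
            """The most frequently observed task at this hour, if any."""
            counts = self.by_hour.get(hour)
            return counts.most_common(1)[0][0] if counts else None

    model = HabitModel()
    for hour, task in [(9, "email"), (9, "email"), (9, "stand-up"), (13, "lunch")]:
        model.observe(hour, task)
    print(model.likely_task(9))          # -> "email"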

To be able to listen to each other and be relevant in the “group conversation” with their user, applications need standards to interconnect and transfer knowledge to each other. Semantic technology could be used to share applications’ understanding of a user’s context, so they can add to each other’s knowledge, creating a complete picture.
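One way to picture that sharing, in the spirit of semantic-web triples: applications publish subject-predicate-object statements into a common store, and each statement extends the picture the others see. The sketch is hypothetical and its vocabulary made up; a real system would rely on an agreed ontology.

    # Hypothetical sketch: a shared store of subject-predicate-object facts.
    Triple = tuple[str, str, str]

    class SharedContextStore:
        def __init__(self) -> None:
            self.facts: dict[Triple, str] = {}   # fact -> application that asserted it

        def assert_fact(self, app: str, fact: Triple) -> None:
            """Any application can add to the common picture of the user."""
            self.facts[fact] = app

        def query(self, predicate: str) -> list[Triple]:
            return [fact for fact in self.facts if fact[1] == predicate]

    store = SharedContextStore()
    store.assert_fact("calendar", ("user", "isIn", "meeting"))
    store.assert_fact("phone", ("user", "isAt", "office"))
    # The mail app now knows the user is in a meeting without asking them:
    print(store.query("isIn"))           # -> [('user', 'isIn', 'meeting')]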

All of this can happen implicitly: the user doesn’t need to provide any input, and the applications don’t need to take action or provide the user with any output unless they believe it is relevant, (a) given their understanding of the context, (b) based on what they know or have learnt about the user’s preferences, and (c) based on what they know about the other applications’ activities.
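Conditions (a) through (c) amount to a gate that every potential action must pass. A hypothetical sketch, where the predicates and the threshold are placeholders for whatever models an application actually maintains:

    # Hypothetical sketch: act only when all three relevance conditions hold.
    def should_act(context: dict, preferences: dict,
                   peer_activity: set, action: str) -> bool:
        # (a) Does the current context call for this action at all?
        if context.get("user_busy", False):
            return False
        # (b) Has the user, implicitly or explicitly, welcomed this kind of action?
        if preferences.get(action, 0.0) < 0.5:   # learnt preference score in [0, 1]
            return False
        # (c) Is another application already handling it?
        if action in peer_activity:
            return False
        return True

    # The user is free, likes traffic alerts, and no other app has sent one:
    print(should_act({"user_busy": False}, {"traffic_alert": 0.9},
                     set(), "traffic_alert"))    # -> True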

A lot of work still needs to be done, however, to ensure a good co-ordination of context-aware applications. The final goal is that together, they behave like a good management assistant who is well-organized, proactively gets ready for what’s next, is perfectly aware of their manager’s preferences, and does a hell of a job making the manager’s life easier. Think Tony Stark’s J.A.R.V.I.S., but non-fictional, and less of a wise guy – unless you want him to be.

Context-aware applications create opportunities for companies to improve their business. In its simplest form, thanks to their implicit behavior, context-aware applications can be used to monitor employees: for example, measuring the time they spend on certain tasks and thus identifying priorities for efficiency improvements. Safety is another reason to want computers to understand context: wristbands could warn surgeons that they haven’t properly washed their hands, and monitored truck drivers who seem drowsy could be told to make a stop and take a break.

As the number of devices a user owns, carries, and wears grows, no single device or application will maintain “primary device” status similar to the role PCs held for decades. Instead, the user’s preferred cloud provider could become the manager of context for applications. By aggregating data from industries, technologies, sensors, and news using shared services and computation systems, our applications can become incrementally more context aware, able to participate in group conversations, limit information overload, and provide outputs that are immediately relevant.

In this new model, smart devices could become better at predicting needs and therefore make people more efficient. Technology and applications will appear more empathic, and society will be better prepared for interaction with others. There will be less individual task management when technology can take action on our behalf in response to a wider view of our current situation and context.