CAVE Language is a whiteboard-, napkin- and presentation-friendly visual design language for describing the behavior of Contextual Applications. Contextual Applications react to the context of the user - their specific circumstances - and respond dynamically according to that context.
CAVE Language was designed with the following goals in mind:
CAVE should adapt well to different formats as needed. It should look fine in highly polished documents or presentations in the boardroom, but at the same time be useful written on the back of a napkin. It should be practical to write CAVE drawings on a whiteboard while discussing application design. In other words, precise usage of CAVE is okay, but sloppy usage of CAVE is also okay.
CAVE is a language, not a methodology. CAVE isn't prescribing best practices for contextual application development, it's just providing a means of expressing them. Additionally, nothing in CAVE should prevent you from using your favorite methodology, nor should it interfere with other existing practices such as agile development or user-centered design.
CAVE should be suitable for sparing use, to describe a simple idea with minimal effort. At the same time you should also be able to use CAVE extensively, to describe large, complex systems, without encountering limitations in the language.
Contextual applications are built by teams of people with different skill sets. CAVE should serve everyone on the team, allowing each member to understand the application in the way they specifically need in order to do their job. CAVE drawings should connect the team, effectively acting as a canonical representation of the application.
CAVE language is primarily concerned with three areas:
A contextual app has to deal with the collection, acquisition and processing of data. CAVE Language data symbols show data and its origins, be they sensors, external data sources or user input. A CAVE data drawing can answer the question, "Where does this data come from?"
From raw data, meaningful context can be inferred. Context builds on data and, through inferences, creates insights into user context. For example, a CAVE context drawing might show how an inference maps user behavioral data to an insight about a user's affinity.
A contextual application will observe user context and respond in a specific way. CAVE modal response drawings can show how user context triggers a corresponding modal response.
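The three areas above form a pipeline: data is collected, context is inferred from it, and a modal response is triggered by that context. As a purely illustrative sketch (not part of CAVE itself, which is a drawing language), the flow might look like this in code; every name here - the sensor source strings, the `infer_context` rule, the response table - is a hypothetical example, not a prescribed API:

```python
# Hypothetical sketch of the data -> context -> modal response pipeline
# that CAVE drawings describe. All names and thresholds are illustrative.
from dataclasses import dataclass

# Area 1: data and its origins (here, a single sensor reading).
@dataclass
class DataPoint:
    source: str   # e.g. "gps_speed", "heart_rate", "user_input"
    value: float

# Area 2: an inference that maps raw data to a context insight.
def infer_context(points: list[DataPoint]) -> str:
    speed = next((p.value for p in points if p.source == "gps_speed"), 0.0)
    return "driving" if speed > 20 else "stationary"

# Area 3: a modal response triggered by the inferred context.
def modal_response(context: str) -> str:
    responses = {
        "driving": "switch to voice interface",
        "stationary": "show full visual interface",
    }
    return responses.get(context, "default interface")

readings = [DataPoint("gps_speed", 45.0)]
print(modal_response(infer_context(readings)))  # -> switch to voice interface
```

A CAVE data drawing, context drawing and modal response drawing would each depict one stage of a flow like this, without committing the team to any particular implementation.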
We are witnessing the rise of contextual applications. In the past, applications, whether they were websites, desktop applications or otherwise, operated without regard to the circumstances of their users. Today, however, we are seeing a new generation of applications that react and respond dynamically to the context and circumstances of their users.
Contextual applications have the potential to interact with users in a more effective, efficient and authentic manner than ever before. Additionally, contextual applications can leverage a wider variety of user interfaces (such as voice and gestural control).
This trend towards contextual applications is driven by five forces identified by Robert Scoble and Shel Israel in their 2013 book The Age of Context.
The proliferation of mobile devices has embedded computing capabilities into a wide variety of real-world contexts. Using mobile phones, tablet computers, smart watches, Google Glass, etc., we are now building applications meant to be used while shopping, jogging, driving and so on. This is a far cry from the historical uniformity of application context, where an application designer could assume that the user was sitting at a desk in front of a computer.
The widespread adoption of social media such as Facebook, Twitter, LinkedIn and so forth has created a massive graph of human interaction data, enabling introspection into various behaviors and affinities of individuals as they interact with their peers. This data can form the basis of insights about users, to which applications can strategically respond.
The reduced cost of data collection and storage, combined with new techniques for analyzing data, has made automated insights into user circumstances, preferences and behavior more practical than ever before. The availability of user data makes dynamic responses by applications possible.
A significant force accompanying the proliferation of data and mobile devices is the ubiquity of sensors. GPS, micro-location, motion sensors, biometric sensors and, of course, cameras all power the opportunity for passive collection of vast amounts of contextual data (e.g., Where is the user? Are they moving? Is their heart beating fast?). These sensors are becoming increasingly inexpensive, and they are everywhere.
Knowing a user's physical location is key to understanding their context. Whether a user is at work, at home, in transit, or performing an errand, their location implies much about their context. We now have significant amounts of metadata associated with geographical locations, which can be used to make contextual inferences.
A contextual application responds dynamically to user context. While the business case and the technology for building contextual applications exist, what is lacking is a design language in which to articulate them. To describe dynamic response to user context, various attempts have been made to press existing design and specification formats into service, such as heavily annotated wireframes, UML state diagrams and flowcharts, but none of these is optimized for contextual application design. The problem is exacerbated when trying to describe common modal functionality that can be expressed through multiple user interfaces, or when dealing with a purely contextual interface such as voice control (e.g., Siri or Google Now).
The lack of a language to articulate contextual applications creates a design bottleneck: the desire and capability to build such applications exists, but there is no way to describe them.
The best way to approach an abstract application design problem is to use a foundational metaphor upon which a system of interaction can be built. Some historical examples of foundational metaphors include the *desktop* metaphor in personal computing, and the web *page*. While there is nothing intrinsic in a computer operating system that requires the concept of a desktop, nor anything intrinsic in the networked exchange of files that requires the concept of a page, both metaphors are used to build an organizational model in the application designer's mind.
For contextual applications, the best foundational metaphor is the natural language conversation. A conversation conducted between two individuals is a perfect example of ongoing dynamic response to context. A conversational participant gathers context concerning their counterpart based both on explicitly communicated information as well as inferences that they make based on their counterpart's actions and circumstances. In response to this context, the participant dynamically alters their behavior in real time.
Describing how an application behaves as a participant in a two-way conversation is the best way to organize and specify the behavior of a contextual application.