Falling Asleep with Angry Birds, Facebook and Kindle –

A Large Scale Study on Mobile Application Usage


Matthias Böhmer, DFKI GmbH, Saarbrücken, Germany
Brent Hecht, Northwestern University, Evanston, IL, USA
Johannes Schöning, DFKI GmbH, Saarbrücken, Germany
Antonio Krüger, DFKI GmbH, Saarbrücken, Germany
Gernot Bauer, Fachhochschule Münster, Münster, Germany

ABSTRACT
While applications for mobile devices have become extremely important in the last few years, little public information exists on mobile application usage behavior. We describe a large-scale deployment-based research study that logged detailed application usage information from over 4,100 users of Android-powered mobile devices. We present two types of results from analyzing this data: basic descriptive statistics and contextual descriptive statistics. In the case of the former, we find that the average session with an application lasts less than a minute, even though users spend almost an hour a day using their phones. Our contextual findings include those related to time of day and location. For instance, we show that news applications are most popular in the morning and games are at night, but communication applications dominate through most of the day. We also find that despite the variety of apps available, communication applications are almost always the first used upon a device's waking from sleep. In addition, we discuss the notion of a virtual application sensor, which we used to collect the data.

Author Keywords
Mobile apps, usage sensor, measuring, large-scale study.

ACM Classification Keywords
H.5.2 User Interfaces: Evaluation/methodology

General Terms
Human Factors, Measurement

INTRODUCTION
Mobile phones have evolved from single-purpose communication devices into dynamic tools that support their users in a wide variety of tasks, e.g. playing games, listening to music, sightseeing, and navigating. In this way, the mobile phone has become increasingly analogous to a "Swiss Army Knife" [15, 17] in that mobile phones provide a plethora of readily-accessible tools for everyday life. The number of available applications for mobile phones – so-called "apps" – is steadily increasing. Today, there are more than 370,000 apps available for the Android platform and 425,000 for Apple's iPhone¹. The iPhone platform has seen more than 10 billion app downloads².

¹ Wikipedia: List of digital distribution platforms for mobile devices, http://tiny.cc/j0irz
² http://www.apple.com/itunes/10-billion-app-countdown/

Despite these large numbers, there is little public research available on application usage behavior. Very basic questions remain unanswered. For instance, how long does each interaction with an app last? Does this vary by application category? If so, which categories inspire the longest interactions with their users? The data on context's effect on application usage is equally sparse, leading to additional interesting questions. How does the user's context – e.g. location and time of day – affect her app choices? What type of app is opened first? Does the opening of one application predict the opening of another? In this paper, we provide data from a large-scale study that begins to answer these basic app usage questions, as well as those related to contextual usage.

In addition to the descriptive results above, an additional contribution of this paper is our method of data collection. All of the data for this paper was gathered by AppSensor, our "virtual sensor", which is part of a large-scale deployment of an implicit feedback-based mobile app recommender system called appazaar [4]. appazaar is designed to tackle the problem presented by the fact that, as mentioned above, an enormous number of apps are available. Based on the user's current and past locations and app usage, the system recommends apps that might be of interest to the user. Within the appazaar app we deployed AppSensor, which does the job vital to this research of measuring which apps are used in which contexts.
In the next section, we describe work related to this paper. Section three provides an overview of AppSensor and other aspects of our data collection process. In section four, we present our basic and context-related findings. Discussion of implications for design, as well as the limitations of our study, is the topic of section five. Finally, we conclude by highlighting major findings and describing future work.

RELATED WORK
Work related to this paper includes that on mobile user needs, mobile device usage, and deployments in the wild. For instance, Church and Smyth [6] analyzed mobile user needs and concluded that context – in the form of location and time – is important for mobile web search. Cui and Roto [7] investigated how people use the mobile web. They found that the timeframe of web sessions is rather short in general but browser use is longer if users are connected to a WLAN. Verkasalo [18] showed that people use certain types of mobile services in certain contexts. For example, they mostly use browsers and multimedia services when they are on the move but play more games while they are at home.

Froehlich et al. [10] presented a system that collects real usage data on mobile phones by keeping track of more than 140 types of events. They provide a method for mobile experience sampling and describe a system for gathering in-situ data on a user's device. The goal of Demumieux and Losquin [8] was to collect objective data on the usage of and interactions with mobile phones to incorporate the findings into the design process. Their framework is capable of tracking the high-level functionality of phones, e.g. calling, playing games, and downloading external programs. However, both of these studies were very limited in number of users (maximum of 16), length of study (maximum 28 days), and number of apps.

Similar to this work, McMillan et al. [16] and Henze et al. [12] make use of app stores for conducting deployment-based research. McMillan et al. [16] describe how they gather feedback and quantitative data to design and improve a game called Yoshi. Their idea is to inform the design of the application itself based on a large amount of feedback from end-users. Henze et al. [12] designed a map-based application to analyze the visualization of off-screen objects. Their study is also designed as a game with tasks to be solved by the players. The players' performances within different tasks are used to evaluate different approaches for map visualizations. However, app-store-based research is so far limited to single applications and has a strong focus on research questions that are specific to the deployed apps themselves. In this work, we focus on gaining insights into general app usage by releasing an explorative app to the Android app store.

Another similar approach to this work is followed by the AppAware project [11]. The system shows end-users "which apps are hot" by aggregating world-wide occurrences of app installation events. However, since AppAware only gathers the installation, update, and deinstallation of an application, the system is not aware of the actual usage of a specific app.

In summary, this research is unique (to our knowledge) in that it combines the approach of large-scale, in-the-wild user studies with the fine-grained measuring of app usage. In this way, we are able to (1) study large numbers of users and (2) study large numbers of applications, all over a long time period. Previous work has had to make sacrifices in at least one of these dimensions, as Table 1 shows. Furthermore, the mobile phones used in related studies have been mostly from the last generation, i.e. they could not be customized by the end-users in terms of installing new applications.

APPSENSOR AND DATA COLLECTION
In this section, we describe our data collection tool, AppSensor. Because context is a known important predictor of the utility of an application [3], AppSensor has been designed from the ground up to provide context attached to each sample of application usage.

Lifecycle of a Mobile App
In order to understand the AppSensor's design, it is important to first give the AppSensor's definition of the lifecycle of a mobile application (Figure 1). The AppSensor understands five events in this lifecycle: installing, updating, uninstalling, opening, and closing the app.

Figure 1. The lifecycle of a mobile app on a user's device according to different states and events.

The first event that we can observe is an app's installation. It reveals that the user has downloaded an app, e.g. from an app market. Another event that is observable is the update of an app, which might be interpreted as a sign of enduring interest in the application. However, since updates are sometimes done automatically by the system and the update frequency strongly depends on the release strategy of the developer, the insight into usage behavior that can be gained from update events is relatively low. The last event we can capture is the uninstall event, which expresses the opposite of the installation event: a user does not want the app anymore.

However, these maintenance events only occur a few times per app. For some apps, there might even be only a single installation event (e.g. when the user has found a good app) or even none at all (e.g. for preinstalled apps like the phone app). Maintenance events are also of limited utility for understanding the relationship between context and app usage. For instance, a user might install an app somewhere but use it elsewhere (e.g. an app for sightseeing that is installed at home before traveling).
                                Users    Apps   Days  Comment
Verkasalo [18]                    324     ~14     67  Investigation of contextual pattern of mobile device usage.
Froehlich et al. [10]            4-16       -   7-28  System for collecting in-situ data (pre-installed).
Demumieux and Losquin [8]          11       -      2  A study with a strong focus on device usage (distributed via SMS).
Girardello & Michahelles [11]  19,000       -      -  Measuring popularity instead of usage (released to Android Market).
McMillan et al. [16]            8,674       1    154  Exploring world-wide field trials (released to iPhone App Store).
Henze et al. [12]               3,934       1     72  Evaluation of off-screen visualization (released to Android Market).
AppSensor (this paper)          4,125  22,626    127  Large scale study on app usage (released to Android Market).

Table 1. Overview of related app-based studies conducted in-situ on participants' devices. The table shows fine grained usage analysis (rows 1-3) and large-scale studies (rows 4-6).

Instead, AppSensor is designed to continuously sample a user's application usage. In other words, we are especially interested in the two app states of being used and not being used, which can both be inferred from the open and close events. These events naturally appear much more often and in a much shorter period of time than the maintenance events. They enable us to observe app usage on a more fine-grained level and provide a much more accurate understanding of context's effects on app usage.

In order to gather data on the being used and not being used states, AppSensor takes advantage of the fact that the Android operating system can report the most recently started application. Because of this feature, we know the app with which the user is currently interacting. We are thus able to infer which single app is in the state of being used owing to the fact that the Android operating system only shows one app to the user (as does the iPhone OS). Therefore, we can presume that all other applications are consequently in the state of not being used in terms of not showing their graphical interface. In this study, we do not consider background applications that are not interacted with through a graphical user interface, e.g. background music apps that can be controlled through gestures.
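The paper does not show how AppSensor reads this value. On Android devices of that era, the most recently started task can be queried through the ActivityManager (this requires the GET_TASKS permission); the class below is a hypothetical sketch of such a reader, with null standing in for "no app is used".

```java
import java.util.List;

import android.app.ActivityManager;
import android.content.Context;

/** Reads the package name of the app currently shown to the user, or null if it cannot be determined. */
public class ForegroundAppReader {
    private final ActivityManager activityManager;

    public ForegroundAppReader(Context context) {
        this.activityManager = (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
    }

    public String currentPackage() {
        List<ActivityManager.RunningTaskInfo> tasks = activityManager.getRunningTasks(1);
        if (tasks == null || tasks.isEmpty() || tasks.get(0).topActivity == null) {
            return null; // corresponds to the sensor value for "no app is used"
        }
        return tasks.get(0).topActivity.getPackageName();
    }
}
```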
Formal Description of AppSensor
As noted above, the AppSensor is meant to be a sensor that indicates the currently used application at a given time t. Formally speaking, the sensor can be described as follows: Let A = {a1, ..., an} be the set of apps that are available for a mobile device and let A* = A ∪ {ε} be the set of applications with which a user can interact, where ε means that the user is currently not using any application. For most current platforms, e.g. Google's Android, this set is usually defined by the applications available on the corresponding application stores. Since the number of applications is growing, this set is not static, but has a defined number n of elements. With time given as t, the AppSensor shall provide the following values:

    as(t) = ai  if app ai is used,
    as(t) = ε   if no app is used.

With respect to the lifecycle of mobile apps, the value as(t) describes the application with which a user is currently interacting. The value is distributed on the nominal scale given by the set A* of available applications. Therefore, the only conclusion that can be drawn on the mere sensor data of two measures at times t1 and t2 is a comparison on whether the application a user is running is the same as before (if as(t1) = as(t2)) or whether it has changed (if as(t1) ≠ as(t2)).
Implementation and Deployment
AppSensor is implemented as a background service within Android and is installed by end users as part of the appazaar application. This app traces context information that is available directly on the user's device (e.g. location, local time, previous app interaction) and app usage at the same time. The recommender algorithms of appazaar rely on this data and appazaar's app was the means for enabling the data collection reported in this paper. The applied sampling rate is 2 Hz: AppSensor collects data every 500 ms in a loop that starts automatically as soon as the device's screen is turned on and stops when the screen is turned off again. When the device goes into standby mode³, we consider which app was left open and omit the standby time from the application's usage time. The measured data is written to a local database on the user's device and only periodically uploaded to a server. In case of connectivity failure, the data is kept in the database and attached to the next transmission.

³ Determined by screen-off and screen-on events.
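A minimal sketch of how such a 2 Hz, screen-gated sampling loop could look as an Android background service. It reuses the hypothetical ForegroundAppReader from the previous sketch, and storeSample is a placeholder for the local database described above; none of this is the actual appazaar code.

```java
import android.app.Service;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.Handler;
import android.os.IBinder;

/** Samples the foreground app every 500 ms while the screen is on. */
public class AppSensorService extends Service {
    private static final long SAMPLE_INTERVAL_MS = 500; // 2 Hz, as described above

    private final Handler handler = new Handler();
    private ForegroundAppReader reader;

    private final Runnable sampler = new Runnable() {
        public void run() {
            storeSample(System.currentTimeMillis(), reader.currentPackage());
            handler.postDelayed(this, SAMPLE_INTERVAL_MS);
        }
    };

    private final BroadcastReceiver screenReceiver = new BroadcastReceiver() {
        public void onReceive(Context context, Intent intent) {
            if (Intent.ACTION_SCREEN_ON.equals(intent.getAction())) {
                handler.post(sampler);            // resume sampling when the screen turns on
            } else {
                handler.removeCallbacks(sampler); // screen off: stop, so standby time is not counted
            }
        }
    };

    @Override
    public void onCreate() {
        super.onCreate();
        reader = new ForegroundAppReader(this);
        IntentFilter filter = new IntentFilter(Intent.ACTION_SCREEN_ON);
        filter.addAction(Intent.ACTION_SCREEN_OFF);
        registerReceiver(screenReceiver, filter); // screen events must be registered at runtime
        handler.post(sampler);
    }

    @Override
    public void onDestroy() {
        unregisterReceiver(screenReceiver);
        handler.removeCallbacks(sampler);
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started service, not bound
    }

    private void storeSample(long timestamp, String packageName) {
        // Placeholder: append to the local database; a separate job uploads batches to the server.
    }
}
```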
The first version of appazaar was released to the Android Market in May 2010. In August 2010, we released a version with the AppSensor as presented in this paper. Of course, the data collected by AppSensor is primarily designed to provide "the best app recommendation" within the appazaar application, i.e. to inform the recommendation process of apps to a user in a given context [5]. For security and privacy reasons, the system uses hash functions to anonymize all personal identifiers before the data is collected, and we do not query any additional personal information like the name, age or sex from the user.
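The paper does not state which hash function is used. The sketch below illustrates the general idea with SHA-256 (an assumption on our part): the identifier is digested on the device, so only the hash ever leaves it.

```java
import java.math.BigInteger;
import java.security.MessageDigest;

/** Illustrative one-way anonymization of a personal identifier before upload. */
public class Anonymizer {
    public static String anonymize(String identifier) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            byte[] digest = sha.digest(identifier.getBytes("UTF-8"));
            return String.format("%064x", new BigInteger(1, digest)); // hex-encoded digest
        } catch (Exception e) {
            throw new RuntimeException("hashing not available", e);
        }
    }
}
```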
Application Categorization
In order to get a more high-level understanding of our data, we felt it was necessary to add categories to the applications opened by our users. To do so, we mined the Android Market for each app's category (see Table 2). As such, the categories are largely given by the apps' developers: they – as domain experts – assign their apps to the categories when uploading them to the Android Market. The only exception to this rule occurred in some minor manual modifications.
For instance, we merged all games of the categories Arcade & Action, Brain & Puzzle, Cards & Casino, and Comics into one Games category. Due to the special nature of browsers – they do not have a clear-cut domain scope – we have separated them into their own dedicated Browsers category. For some apps, no categories are available on the Android Market. These are either test applications by developers that appear only on a few devices, applications that are distributed via other channels (e.g. pre-installed by device manufacturers), default Android apps (e.g. settings), or apps that have been taken out of the market and whose category was not available anymore⁴. We manually added categories for some apps where possible. For the branded Twitter clients of some vendors (e.g. HTC), we added the category of the original Twitter app (Social). To the default apps responsible for handling phone calls we added the Communication category. As we did with the browser, we also put the settings app into its own category (Settings) due to its special nature. Since the main menu on Android phones is itself also an app and is treated as such from the system's perspective, we additionally removed such launcher apps from the results, since they give little insight into app usage behavior. Finally, it is important to note that each app can only have one category.

⁴ We crawled the Android Market on February 3rd, 2011.
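A small sketch of how such a category cleanup could be expressed in code. The override table and the merge set are hypothetical stand-ins for the manual rules described above, not the authors' actual mapping.

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Maps an app to the single analysis category used in the results. */
public class CategoryNormalizer {
    // The game-related market categories listed above are folded into one Games category.
    private static final Set<String> GAME_SUBCATEGORIES = new HashSet<String>(
            Arrays.asList("Arcade & Action", "Brain & Puzzle", "Cards & Casino" /* ... */));

    // Manual overrides for apps whose market category is missing or misleading (examples only).
    private static final Map<String, String> PACKAGE_OVERRIDES = new HashMap<String, String>();
    static {
        PACKAGE_OVERRIDES.put("com.android.phone", "Communication"); // default phone app
        PACKAGE_OVERRIDES.put("com.android.settings", "Settings");   // settings app gets its own category
    }

    public static String categorize(String packageName, String marketCategory) {
        String override = PACKAGE_OVERRIDES.get(packageName);
        if (override != null) return override;
        if (marketCategory == null) return "Unknown";                // not available on the Market
        if (GAME_SUBCATEGORIES.contains(marketCategory)) return "Games";
        return marketCategory;
    }
}
```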
Characteristics of Final Dataset
The results reported in this paper are based on data from the 4,125 users who used appazaar between August 16th, 2010 and January 25th, 2011. The users were spread out geographically, although most stayed in the United States or Europe during our study (see Figure 2). Within the timeframe of 163 days, they generated usage events for 22,626 different applications, and the deployment of our AppSensor was able to measure 4.92 million values for application usage. We advertised appazaar on Facebook and Twitter, and two posts about the system appeared on two well-known technology blogs (Gizmodo and ReadWriteWeb), helping us reach a growing number of users.

Figure 2. The geographic distribution of our users. Data classes determined via ESRI ArcMap's 'natural breaks' algorithm, a well-known standard in cartography and geovisualization that is helpful in accurately displaying the underlying distribution of the data.

RESULTS
This section is divided into two parts: (1) basic descriptive statistics on application usage behavior and (2) context-sensitive statistics. In the second section, we look at several different forms of context, including an application's place in an "app use chain", as well as more standard contextual variables such as time and location. In both sections, our primary resolution of analysis is the "application category" as defined above, but in the second section we do highlight some interesting application-level temporal patterns.

Basic Descriptive Statistics
On average, our users spent 59.23 minutes per day on their devices. However, the average application session – from opening an app to closing it – lasted only 71.56 seconds.

Table 2 shows the average usage time of apps by category, which ranged from 36.47 seconds for apps of unknown category and 31.91 seconds for apps of the Finance category up to 274.23 seconds for the Libraries & Demos category. The most-used Libraries & Demos apps as measured by total usage time are inherent apps of the operating system (Google Services Framework, default Updater, Motorola Updater). It was interesting to see that this category has a much longer average session than the Games category, whose most used applications are Angry Birds, Wordfeud FREE⁵, and Solitaire. On the low end of the session length spectrum of apps with known categories, we found the Finance category. The most used apps of this category are for personal money management (Mint.com Personal Finance), stock market (Google Finance app), and mobile banking (Bank of America). The briefness of the average session in this category does not speak well for the success rate of financial applications on the Android platform.

⁵ Despite its name, Wordfeud FREE is a full game and not a demo version, since it provides the same full functionality as the non-free version. The only difference is that it is free and contains advertisements.

Table 2. Number of apps investigated in our study and average usage time of every category's apps from opening to closing.
Figure 3. Total number of recorded app utilizations during a day.

Figure 4. Daily average usage duration of opened apps per launch in minutes.

Contextual Results: Application Usage over Time
AppSensor allows us to record temporal information about application usage. Figure 3 shows the total number of application launches in our sample according to hour of the day. It can be seen that total application usage (in terms of launches) is at its maximum in the afternoon and evening, peaking around 6pm. Our participants generally start using applications in the morning between 6am and 7am, and their activity grows approximately linearly until 1pm. Activity then increases slowly to a peak at around 6pm. The application usage minimum is around 5am, although it never falls below 16% of its maximum.

Figure 4 shows the average time people spent with an application once they opened it with regard to the hour of the day. There is a peak around 5am with 6.26 minutes of average app usage time. The average application session is less than a minute, however, reaching a minimum of around 40 seconds at 5pm. Interestingly, the graph in Figure 4 is nearly opposite that in Figure 3. This means that when people actively start to use their devices, they spend less time with each application. This might be due to apps that people explicitly leave active while sleeping with standby mode prevented, but there are other possible explanations.

Figure 5 shows the change in the relative usage of the application categories over the course of the day in terms of number of app launches. Mobile devices are most likely to be used for communication every hour of the day, especially in the afternoon and evening (11am-10pm) with a probability of more than 50%. News apps have the highest probability of being used in the morning (from 7am to 9am). Around 11am, finance apps briefly become quite prominent. After communication winds down late in the evening, games have their highest probability of use. Social applications also have their highest probability of use in the late evening (from 9pm to 1am). Sports apps seem to play their most important role in the afternoon (2pm-5pm) and evening (8pm-10pm). During the early morning, when total application usage is at its lowest, people share their time with apps of various categories. This is also the time when communication app use share is minimized.

Contextual Results: Chains of App Usage
An important contextual variable in usage behavior is the set of zero or more applications used before an application is opened and the zero or more applications used afterwards. We defined an "application chain" as a sequence of apps that are used without the device being in standby mode for longer than 30 seconds. In total, we can distinguish 1,841,226 such sessions in our data set. Examples include one in which a user started with Grocery iQ (Shopping), switched to GrubHub Food Delivery (Lifestyle), and ended with Epicurious Recipe App (Lifestyle). Another user started with the AroundMe (Lifestyle) app and then continued with Find A Starbucks (Shopping), Google Maps (Travel), Find A Starbucks, Google Maps, Find A Starbucks, Dolphin Browser HD (Browser), Find A Starbucks, Google Maps, Find A Starbucks, and Google Maps.
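A minimal sketch of how a time-ordered log of foreground-app samples could be cut into such chains. It approximates the standby criterion by a simple 30-second gap between consecutive samples; UsageEvent and the class names are our own illustration, not the authors' code.

```java
import java.util.ArrayList;
import java.util.List;

/** One foreground-app sample: which package was visible at which time (epoch milliseconds). */
class UsageEvent {
    final long timestamp;
    final String packageName;

    UsageEvent(long timestamp, String packageName) {
        this.timestamp = timestamp;
        this.packageName = packageName;
    }
}

/** Splits a time-ordered usage log into application chains ("sessions"). */
public class ChainSplitter {
    private static final long MAX_GAP_MS = 30 * 1000; // gaps longer than 30 seconds end a chain

    public static List<List<UsageEvent>> split(List<UsageEvent> log) {
        List<List<UsageEvent>> chains = new ArrayList<List<UsageEvent>>();
        List<UsageEvent> current = new ArrayList<UsageEvent>();
        UsageEvent previous = null;
        for (UsageEvent event : log) {
            if (previous != null && event.timestamp - previous.timestamp > MAX_GAP_MS) {
                chains.add(current);                    // close the running chain
                current = new ArrayList<UsageEvent>();  // and start a new one
            }
            current.add(event);
            previous = event;
        }
        if (!current.isEmpty()) chains.add(current);
        return chains;
    }
}
```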

Figure 6 demonstrates the distribution of application chains by the number of applications that occur in the chain. As the y-axis of Figure 6 is on a log scale, it can be seen that the majority of sessions (68.2%) only contain a single application. In other words, people turn on their phone, use a single app, and put their phone back in standby. This tendency towards the use of a small number of applications during an interaction with the mobile device is further evidenced by the fact that only 19.5% of application chains contain two apps, and only 6.6% contain three.

We also looked into the number of unique apps used within a session, as can be seen in Figure 7. The first bar in this figure is of course identical to the first bar in the preceding figure. We found a maximum of 14 unique apps in an app chain. A vast majority of users use a very small number of unique apps during an interaction with their device. Thus – according to our analysis of sessions – people who use more than 14 apps in sequence tend to re-use apps they have already used before within the same session.

Examining the amount of time our users spent in each application chain, we found that 49.8% of all recorded sessions are shorter than 5 seconds. The longest session we observed has a length of 59 minutes and 48 seconds. Between these two end points, the curve has a similar exponential decay to that in Figures 6 and 7.
Figure 5. Hourly relative app usage by category in terms of launches. Each cell value refers to the percentage of app launches done by our users within each hour for each category. Colors are normalized by row, with green indicating each category's maximum percentage of application time, and white indicating each category's minimum. For example, games reach their peak in the evening (green) and trough in the morning (white).

Figure 6. Number of apps used in a session. We aggregated sessions longer than 40 apps since the graph flattens out and scarcity increases. Maximum length is 237.

Figure 7. Occurrences of sessions according to number of unique apps used within a session.

Probably the most revealing statistic in our analysis of application chains is that for nearly half of all chains (49.60%) the first application belongs to the category Communication (as Figure 8 shows). Digging deeper, we found that 15.7% of the chains within our sample were initiated with an SMS application (9.5% the default SMS app, 6.2% an app called Handcent SMS), 9.6% with the phone application, and 5.9% with the standard mail application. Interestingly, a browser was only used first in 5.9% of the application chains.

Figure 8. Categories of first used app within a session.

Figure 9 shows the transition probabilities between application categories in an application chain. Accordingly, the diagonal of the figure indicates transitions from one app to another in the same category. As such, the values along the diagonal are non-zero. This graph considers only those sessions where two or more apps have been used. For each app, it is very likely that the app used next is a communication app, except for News and Lifestyle applications. Apart from these two categories, the probability that the next app is a communication app is at least 23.2% for all categories. For communication apps, there is a 66.5% chance that the next app will be a communication app again. This is the highest probability for users to stay within one category. Next are Tools, with a probability of 15.7% of staying within Tools, and Games, with a probability of 15.1% of staying within Games. It can also be seen that apps from the category Tools are entered relatively frequently from apps of any category.

Figure 9. Transition probabilities in app chains. The transitions are from categories in a row to categories in a column. The diagonal indicates transitions between apps in the same category. The probability ranges from yellow (low) to green (high).

There are also some important unique connections between application categories. Most notably, a browser is opened quite frequently following the use of a news application. The connection between Lifestyle and Shopping applications is also quite strong, with Lifestyle applications frequently leading the user to enter into a Shopping application. The reverse is also true, but to a lesser extent.
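Figure 9 can be thought of as a row-normalized matrix of category-to-category transition counts. The sketch below shows one way to accumulate such counts from chains of category labels; it illustrates the computation only and is not the authors' analysis code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Estimates P(next category | current category) from chains of category labels. */
public class TransitionMatrix {
    private final Map<String, Map<String, Integer>> counts = new HashMap<String, Map<String, Integer>>();

    /** Counts every adjacent pair in a chain; chains with a single app contribute nothing. */
    public void addChain(List<String> categoryChain) {
        for (int i = 0; i + 1 < categoryChain.size(); i++) {
            String from = categoryChain.get(i);
            String to = categoryChain.get(i + 1);
            Map<String, Integer> row = counts.get(from);
            if (row == null) {
                row = new HashMap<String, Integer>();
                counts.put(from, row);
            }
            Integer c = row.get(to);
            row.put(to, c == null ? 1 : c + 1);
        }
    }

    /** Returns P(to | from), i.e. the transition count normalized by the row total. */
    public double probability(String from, String to) {
        Map<String, Integer> row = counts.get(from);
        if (row == null || !row.containsKey(to)) return 0.0;
        int total = 0;
        for (int c : row.values()) total += c;
        return (double) row.get(to) / total;
    }
}
```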
Contextual Results: Application Usage by Location
We also found clear empirical evidence for location as a covariate of app usage behavior. This occurs across changes in both administrative regions (e.g. USA vs. Europe) and functional regions (e.g. airports vs. outside of airports). We present some initial findings from our spatial analysis below.

We examined 13,190 samples recorded by the AppSensor that occurred while a user was located within the spatial footprint of a known airport in the United States. We found that while in the airport, users were 2.78 times more likely to be using a browser (by usage time) than a user located in the United States as a whole. This may suggest that certain functions related to air travel may not be sufficiently migrated into native applications (e.g. looking up flight status). On the other hand, users were less likely to be using games, tool applications, or reference applications while in airports. This was somewhat surprising, especially given that the Kindle app belongs to the reference category.

When traveling at speeds greater than 25 kph, we found, not surprisingly, that users were more than 2.26 times more likely (by usage time) to be using an app of the Multimedia category, to which most music-related applications belong. Interestingly, we found that they were less likely (0.83) to be using apps in the Travel category.

We found some interesting differences between users in the United States and in Europe. European users are 1.21 times more likely to be found using a browser (by usage time). Americans, however, spent relatively much more time with sports, health, and reference applications. Social and news apps were the most equally used.
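We read factors such as 2.78 (browsers in airports) or 2.26 (Multimedia above 25 kph) as ratios of usage-time shares inside a region or condition versus a reference population; under that assumed interpretation, the computation is a single division of shares:

```java
/** Compares the usage-time share of one category inside a region against a reference population. */
public class RegionComparison {
    /**
     * Returns how many times more (or less) likely a category is to be in use, by usage time,
     * in the region than in the reference; values below 1.0 mean "less likely".
     */
    public static double usageTimeRatio(double categorySecondsInRegion, double totalSecondsInRegion,
                                        double categorySecondsInReference, double totalSecondsInReference) {
        double regionShare = categorySecondsInRegion / totalSecondsInRegion;
        double referenceShare = categorySecondsInReference / totalSecondsInReference;
        return regionShare / referenceShare;
    }
}
```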
Specific Application Usage
Although we focused our analysis at the application category level, we did analyze several important and/or well-known individual applications. Figure 10 shows the usage times of specific applications with regard to time. In contrast to Figure 5, the numbers in Figure 10 are not normalized by total usage over all apps within the hours, but by each app's total usage per day.

Previously, we saw that social apps in general have their highest probability of being used in the evening. This is somewhat true for Facebook, but its usage time is spread out throughout the whole day. The same goes for Twitter, although it is not as much of a late-night activity.

A somewhat surprising finding can be found in the usage of the Google Maps app (Travel), which has a relatively strong peak in the early evening hours. Traffic checking is perhaps one possible cause, although one would expect this pattern to be repeated during the morning commute. Another interesting result comes from the built-in Music app's use, which is somewhat focused in the morning hours.

Weather checking is, not surprisingly, largely a morning activity, as is the checking of one's calendar. On the other hand, users' desire to fling Angry Birds⁶ at pigs is absent in the morning, and only picks up in the early afternoon and into the evening. Kindle usage behavior is even more focused in the late evening.

⁶ See http://market.android.com/details?id=com.rovio.angrybirds

Another interesting phenomenon emerged from the study of two different alarm clock apps. It seems that alarm clock apps are mostly used – i.e. being the only active app on the device presenting its user interface – during the night (from 2am until 9am). One reason for this might be that people "use" the app while sleeping, e.g. as a desk clock preventing the device from going into standby mode.

More generally speaking, Figure 10 shows that some apps have spikes in usage, whereas others are more broadly employed throughout the day.

DISCUSSION
Implications for Design
The results reported here could be used to improve the design of mobile applications and mobile operating systems. For instance, designers of "launcher" apps (like the home screen on the iPhone and Android) could vary app icon position and size based on time of day and/or location. This same idea could apply with regard to an application chain, with the last app opened providing the context rather than time/location. Similarly, app developers could design smart links between apps that are used frequently in sequence. Since people often navigate from lifestyle apps to shopping apps, the designers of the former might implement links to shopping apps. Additionally, the AppSensor gives insights into the apps' contexts of use. For instance, the design of apps can be optimized if the developers know whether an app is used only while commuting or solely in the evening.

Our results show that mobile phones are still first and foremost communication devices. This is not only due to phone calls, as smart phones provide a variety of new ways to communicate (e.g. instant messengers, email, voice over IP). Nevertheless, this finding certainly qualifies the mobile phones as "Swiss Army Knives" line of thinking. That said, when people are not sleeping during the late hours of the night they make more use of the non-communication functionality provided by different kinds of apps. Additionally, they spend more time within an app once they have opened it in the night.

Our results also suggest that users frequently switch between already used apps in application chains rather than only opening new apps. This suggests that there is a functional cohesion between the particular utilizations of single apps. As such, mobile phone operating systems should better support navigation between very recently used apps.

Making Use of the AppSensor
The AppSensor gives rise to examining the eco-system of apps residing on a user's device; this has the potential to inform the design and customization of novel applications as well as of new devices themselves.

One may apply the AppSensor for inferring a user's context based on his actually used apps. According to Dey [9], context-awareness involves adapting services according to a user's context. For instance, the users' needs for mobile services – i.e. apps in our case – depend on their locations [14]. We propose that by adding the AppSensor to context reasoning one can decrease the uncertainty of context recognition. For instance, even though two people may be walking through the same pedestrian mall in a famous city (i.e., same location), if they use different apps (e.g. a shopping list app vs. a sightseeing app) we can distinguish between the shopper and the tourist. Even without any meta-information on the used apps themselves, it would be possible to compare the contexts of two or more people. For instance, if two users are constantly swapping between a map app and a restaurant guide app they might be in the same activity – probably looking for a restaurant.

Context-aware recommender systems that suggest mobile applications can be made more efficient by exploiting an AppSensor. This was our main motivation for conducting the presented study. For instance, recommender systems that follow a post-filtering approach – i.e. applying knowledge on context-aware dependencies after using basic techniques like collaborative filtering [1, 13] – can exploit the time-dependent usage share as a factor on the estimated ranking of apps.
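One way such a post-filter could use the time-dependent usage share is to re-weight a context-free score by how over- or under-represented the app's category is at the current hour. The weighting below is our own illustration, not the appazaar algorithm.

```java
/** Post-filtering sketch: re-weight a base recommendation score by time-of-day usage share. */
public class TimeOfDayPostFilter {
    /**
     * @param baseScore   score from a context-free recommender (e.g. collaborative filtering)
     * @param hourlyShare usage share of the app's category per hour of day (24 values summing to 1)
     * @param hourOfDay   hour for which the recommendation is made (0-23)
     */
    public static double rescore(double baseScore, double[] hourlyShare, int hourOfDay) {
        double uniformShare = 1.0 / 24.0;
        // Boost categories that are over-represented at this hour, damp under-represented ones.
        return baseScore * (hourlyShare[hourOfDay] / uniformShare);
    }
}
```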
Limitations
Some apps have a more general purpose that is not well understood by AppSensor. For instance, a web browser can be used for everything from public transportation route planning to looking up a word in a dictionary. The meaning that can be deduced from such applications can be regarded as limited or imprecise. For these cases, the insight that the AppSensor provides on the user's context might be limited. However, most services that are provided via a browser are also available within dedicated applications. Since many users seem to prefer to employ native apps instead of websites on mobile devices [2], this should not have a large negative impact.
Figure 10. Application usage time throughout the day. Within each row (i.e., for each app) low usage is indicated by white, increasing through yellow and reaching a peak at red. Percentages indicate the usage time of each app and are normalized within each row.

The current design of the AppSensor is not capable of tracking multitasking. For instance, if a user is listening to music – with the player running in the background from the operating system's perspective – and is browsing the Internet at the same time, the AppSensor will return the browser as the open application. Similarly, on the Android platform we have the problem that applications' widgets are part of the home screen application. Therefore we cannot measure the widget-based usage of apps. However, most widgets are simply entry points into apps.

While we have no detailed information on the participants due to the domain of the underlying platform appazaar – i.e. supporting people to find new apps – we may assume that some of our users are early adopters with a high affinity to apps. Thus, our participants in general may have a slightly higher affinity toward app usage than the general population.

Like every sensor, the AppSensor is not error-free. For instance, it might return values that do not relate to the user's current activity. A user might leave and put away the device with an app still running. The uncertainty of the reasoned context will increase with the time that the user has not used her device. However, most devices go into standby after some time of non-usage, as long as the user does not intentionally use an app that prevents standby. Moreover, app usage that occurs when standby mode is purposefully disabled can also be valued as valid usage.

Furthermore, the AppSensor cannot be used to reason on a user's context when no application is used at all, i.e. when the device is in standby or turned off. Secondly, the sensor is obviously only available during active usage of the device. Otherwise it can only be deduced that the user is currently not using his device. Thirdly, the AppSensor's accuracy also depends on its sample rate. This impacts the quality of the measured data. The sample rate needs to be chosen depending on how often a user is switching between different applications. If the swapping frequency is higher than the sample rate, the accuracy will decrease. However, at a high frequency the system load might increase and impact power consumption. We believe our sample rate is correctly positioned given these constraints, as we conducted an informal pre-study on how fast one can start a new app.

Of course, our findings cannot be transferred to general usage of the underlying services. For instance, it might be the case that people use Facebook during the day on their stationary PC or laptop, and use their mobile device when they are lying in bed in the evening.

Whether or not an AppSensor is widely deployable within a system strongly depends on the underlying operating system and the policies of the device's vendor. The AppSensor used in this paper was implemented on the Android platform because Android provides the required openness. The sensor itself needed to be implemented as a background service, which is not possible on every device. For these and other reasons an AppSensor is not possible on Apple's iPhone, or at least cannot be deployed in the wild.

CONCLUSION
In this paper, we conceptualized and studied the AppSensor: a framework for analyzing smart phone application usage. For the first time – to the best of our knowledge – the method of deployment-based research by means of app store deployments was combined with fine-grained data collection on mobile application usage. In contrast to physical sensors (e.g. GPS for locations), we defined a virtual sensor for measuring the usage of mobile applications. This public deployment of AppSensor provided us with the data of more than 4,100 users over a period longer than four months. In short, this paper included the following contributions (amongst others):

• a descriptive analysis of our sample data showing that (among other findings) mobile device users spend almost an hour a day using apps but spend less than 72 seconds with an app at a time (on average), and that average usage time differs extensively between app categories,

• a context-related analysis that led to the following conclusions (among other findings): (1) mobile phones are still used mostly for communication (text and voice); (2) some apps have somewhat intense spikes in relative usage (e.g. music and social apps), whereas others are more broadly employed throughout the day; (3) when people actively use their devices they spend less time with each app; (4) short sessions with only one app are much more frequent than longer sessions with two or more apps, and the first app within a session is very likely to be an app for communication; (5) when people are traveling they are more likely to use multimedia apps and they are surprisingly less likely to use travel apps,

• the conceptual design of our research method, namely the AppSensor as a virtual sensor for measuring mobile app usage.
We believe that the MobileHCI community should be aware of this data set. Therefore it is our plan to make the whole data set available to the community⁷, allowing others to draw their own conclusions and perform their own analysis that may go beyond what we have found in the data. To our knowledge this is the first attempt to analyze application usage at this scale, and we believe that our work provides data to verify and deepen findings of the sort that Demumieux and Losquin [8] and Verkasalo [18] have presented in their previous but smaller studies.

⁷ Access to the anonymized data set is available upon request. You can try the app and browse the data on your Android

For future work, we will use the findings of this paper to further inform the design of the appazaar recommender system. The chain of previously used apps will provide much information about users' tasks and intentions. Developing models that are able to predict the next-to-be-used app will dramatically increase the usefulness of an app recommender system. We are also working to better understand our location-based results with more detailed spatial analysis.

REFERENCES
1. Adomavicius, G., and Tuzhilin, A. Context-aware recommender systems. In Recommender Systems Handbook, F. Ricci, L. Rokach, B. Shapira, and P. B. Kantor, Eds. Springer US, Boston, MA, 2011, ch. 7, 217–253.
2. AppsFire.com. Infographic: iOS Apps vs. Web Apps. http://blog.appsfire.com/infographic-ios-apps-vs-web-apps, accessed on Feb. 15, 2010.
3. Barkhuus, L., and Polichar, V. Empowerment through seamfulness: smart phones in everyday life. Personal and Ubiquitous Computing (Dec. 2010), 1–11.
4. Böhmer, M., Bauer, G., and Krüger, A. Exploring the design space of recommender systems that suggest mobile apps. In Proc. of Workshop CARS '10 (2010).
5. Böhmer, M., Prinz, M., and Bauer, G. Contextualizing mobile applications for context-aware recommendation. In Adj. Proc. of Pervasive '10 (2010).
6. Church, K., and Smyth, B. Understanding mobile information needs. In Proceedings of the 10th international conference on Human computer interaction with mobile devices and services, MobileHCI '08, ACM (New York, NY, USA, 2008), 493–494.
7. Cui, Y., and Roto, V. How people use the web on mobile devices. In Proceedings of the 17th international conference on World Wide Web, WWW '08, ACM (New York, NY, USA, 2008), 905–914.
8. Demumieux, R., and Losquin, P. Gather customer's real usage on mobile phones. In Proceedings of the 7th international conference on Human computer interaction with mobile devices and services, MobileHCI '05, ACM (New York, NY, USA, 2005), 267–270.
9. Dey, A. K. Understanding and using context. Personal and Ubiquitous Computing 5, 1 (Feb. 2001), 4–7.
10. Froehlich, J., Chen, M. Y., Consolvo, S., Harrison, B., and Landay, J. A. MyExperience: a system for in situ tracing and capturing of user feedback on mobile phones. In Proceedings of the 5th international conference on Mobile systems, applications and services, MobiSys '07, ACM (New York, NY, USA, 2007), 57–70.
11. Girardello, A., and Michahelles, F. AppAware: which mobile applications are hot? In Proceedings of the 12th international conference on Human computer interaction with mobile devices and services, MobileHCI '10, ACM (New York, NY, USA, 2010), 431–434.
12. Henze, N., Poppinga, B., and Boll, S. Experiments in the wild: public evaluation of off-screen visualizations in the Android Market. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries, NordiCHI '10, ACM (New York, NY, USA, 2010), 675–678.
13. Herlocker, J. L., Konstan, J. A., and Riedl, J. Explaining collaborative filtering recommendations. In Proceedings of the 2000 ACM conference on Computer supported cooperative work, CSCW '00, ACM (New York, NY, USA, 2000), 241–250.
14. Kaasinen, E. User needs for location-aware mobile services. Personal and Ubiquitous Computing 7, 1 (May 2003), 70–79.
15. Kray, C., and Rohs, M. Swiss Army Knife meets Camera Phone: tool selection and interaction using visual markers. In Proc. of Joint Workshops MIRW '07 and MGuides '07 (2007).
16. McMillan, D., Morrison, A., Brown, O., Hall, M., and Chalmers, M. Further into the wild: running worldwide trials of mobile systems. In Pervasive Computing, P. Floréen, A. Krüger, and M. Spasojevic, Eds., vol. 6030 of Lecture Notes in Computer Science. Springer Berlin / Heidelberg, Berlin, Heidelberg, 2010, ch. 13, 210–227.
17. Satyanarayanan, M. Swiss army knife or wallet? IEEE Pervasive Computing 4, 2 (Apr. 2005), 2–3.
18. Verkasalo, H. Contextual patterns in mobile service usage. Personal and Ubiquitous Computing 13, 5 (2009).