ABSTRACT
A significant amount of research work in user interface design exists, with a proportion of this extendable to mobile phone platforms. Some works investigate the effect of user ability on interface generation for mobile applications. Other works analyze how different contexts and mobile platforms affect the generation of these interfaces. However, most of these existing works require a significant degree of context requirements modeling before interface reconfiguration takes place. Few on-the-fly reconfiguration approaches exist that learn from user interactions as well as contextual information received by a mobile phone. With the explosive growth of new applications for the mobile phone, its user interface is quickly becoming flooded with application widgets. This work investigates some on-the-fly approaches that learn and formulate rules from user interactions and contextual information received by the mobile phone. Performance evaluations demonstrate how a simple neural network-based engine is able to improve the prediction accuracy of the interface reconfiguration in a mobile phone.
Keywords: mobile phone, context-aware, intelligent interface, widget reconfiguration, neural network, rules
1. INTRODUCTION
People are becoming increasingly dependent on mobile information devices. With the increased pervasiveness of wireless connectivity and technology advancements, the smart mobile phone is progressively taking on more important roles to process, collate and delegate information. A contemporary mobile handset typically comes with many integrated features which previously were available only on desktop PCs or laptops, such as internet access, email, word processing, and video viewing. An increasing number of mobile phones are also equipped with additional hardware like sensors to extend their capabilities, e.g. the accelerometer in the Samsung Blade S5600v [31], the motion recognition sensor in the Samsung S310 [29], the proximity sensor in the HTC Touch Pro 2 [25], and the Nokia 5500's tilt sensor that supports novel game genres [28]. Together with better processing power, these mobile phones have become mini multimedia computers proffering support for an increasing spectrum of new applications and features. However, these technological enhancements to a mobile phone also herald a new set of user problems. As the number of supported widgets increases, widget management becomes increasingly complex. Locating relevant or interesting widgets becomes a chore as the user interface gets cluttered with irrelevant widgets [17]. In a recent study [22], most new mobile phone owners indicated that they were averse to using new services that were confusing or difficult to access. Contemporary mobile phones address these problems partially via screen organization tools like window managers, widget toolbars [31] or multiple home screen pages or scenes [23]. Usage of these tools requires proficiency with the mobile phone's key controls to be able to correctly and efficiently reorganize the screen's application widgets. For example, in the HTC Hero [24], each scene must be pre-specified by the user with appropriate widgets for different contexts, e.g. work, travel, social etc., and is non-adaptive.
When changes occur over time in a mobile phone user's lifestyle and/or roles, new applications may need to be downloaded, existing applications may be upgraded or older applications rendered obsolete. In such cases, manual widget re-organization would have to be performed repeatedly with the above tools. This can be both tedious and time consuming. From a mobile phone user's perspective, some degree of intelligent widget control should be provided as one goes about his/her daily activities. For instance, a user driving to work will more likely use the phone in hands-free mode for voice communications instead of SMS or email. A student who normally uses the phone for entertainment and social interaction will likely not use it during curriculum time. It is also less likely that an employee would want to invoke a game application or view a movie at the work place than when travelling via public transport or resting at home in the evening. Similarly, a mobile business executive or tourist may place more emphasis on GPS and location-based applications and functionality than an office-bound employee, albeit using different categories of services. We ask the following questions: Does a user need all or even most of a mobile phone's capability? Does a user need the same subset of applications and functionality all the time? Can a mobile phone be made to learn and recognize a user's context?
It is quite apparent that different subsets of a mobile phone's capabilities appeal to different users depending on roles/interests. On a regular basis, many users also make use of specific applications and functionality in a fairly deterministic pattern depending on context. A typical mobile phone user's context may be defined in terms of usage pattern, date, time of day, and location as a basis. With the aid of suitable sensor inputs,
additional contextual information may be gleaned, e.g. how far the user has moved from the previous location, how fast the user is moving now, heart rate (stress level) etc. However, at the time of this paper, there is no reported work that offers a learning engine with dynamic context-aware reconfiguration of a mobile phone interface. As a consequence, mobile phone users would have to constantly navigate through and manually reconfigure a complex and confusing set of excess widgets that they either do not need or no longer use.
This paper proposes an intelligent, context-aware interface reconfiguration engine prototype for the mobile phone called SmartIFace and is organized as follows. Section 1 provides the background of the existing situation and associated problems, while Section 2 presents the motivation and challenges behind the research. Section 3 presents a review of related research. Design details of the system architecture and learning engine are in Sections 4 and 5. Section 6 presents the system test results as well as a performance analysis and evaluation. The paper concludes with Section 7, followed by the references.
SUPPLE
Supple [11] automatically generates user interfaces according to the user's physical ability, device and usage. Supple is able to create personalized interfaces better suited to the contexts of individual users. Users with physical limitations and impairments form a particular group targeted by Supple. A subsystem called Ability Modeler performs a one-time assessment of a person's motor ability and builds a model of those abilities. The result is used during interface generation. These automatically generated interfaces greatly improve the speed, accuracy and satisfaction of users with motor impairments. A subsystem called Arnauld is also used to obtain user responses to generate a cost function that closely approximates the desired behavior. An optimization algorithm determines the user interface that satisfies the platform device's constraints while minimizing the cost function. By supplying different device constraints and cost functions, different styles of user interfaces may be produced. However, Supple's functionality is currently restricted to window-based interfaces found on desktop platforms. Although Supple is written in Java, it is currently unable to run on a mobile phone as it uses libraries from Java SE.
GAZE-X
Gaze-X [16] is an agent-based intelligent system that supports multimodal human-computer interaction. The system comprises two main parts: context modeling and context sensing. It models users' actions and emotions and then adapts the interface to support the user in his activities. The context information, known as W5+ (who, where, what, when, why, how), is obtained through a number of human communication modalities, such as speech, eye gaze direction, face and facial expression, and a number of standard interface modalities like mouse moves, keystrokes and active software identification. Various commercially available solutions such as voice
recognition software and image processing software are required for the multimodal inputs. The inference engine used is based on case-based reasoning, a type of lazy learning method [3]. Lazy learning methods store the current input data and postpone the generalization of data until an explicit request is made. The case-base used is a dynamic, incrementally self-organizing, event-content-addressable memory that allows fact retrieval and event evaluation based on user preferences and generalizations formed from prior inputs. Based on the evaluation, Gaze-X will execute the most appropriate user-supportive action. The case-based reasoning can also unlearn actions according to user instructions, thereby increasing its expertise in user-profiled, user-supportive, intelligent interaction. Gaze-X runs in either unsupervised or supervised modes. In the unsupervised mode, the user's affective state is used to decide on his satisfaction with the executed action, and adaptive, user-supportive actions are executed one at a time. In the supervised mode, the user explicitly confirms that a preferred action has been executed and may provide feedback to the system. Gaze-X has to be set up initially in the supervised mode to build up the profile of the user using the system. Once the system has captured enough cases of the user, the system is able to operate correctly in the unsupervised mode. Gaze-X currently runs only on desktop platforms based on Linux, Windows, or Mac OS X.
MIMIC
Eisenstein et al. [10] proposed a set of model-based techniques that may aid designers to build UIs across several platforms, while respecting the unique constraints posed by each platform. The approach isolates the features that are common to the various contexts of use and specifies how the user interface should adjust when the context changes. Knowledge bases are created that describe various components of the user interface, including the presentation, the platform, the task structure, and the context. The knowledge base can then be used to automatically produce user interfaces matching the requirements of each context of use. The user interface is defined as an implementation-neutral description by the MIMIC modeling language. MIMIC is a formal declarative modeling language that comprises three components: platform model, presentation model, and task model. The platform model describes the computer systems running the user interface and the platform's constraint information. The platform model can then be used to generate a set of user interfaces for each platform. It can also be dynamic and change according to context changes. The presentation model describes the visual appearance of the user interface. It describes the hierarchy of windows and their widgets, stylistic choices and the selection and placement of these widgets. The task model represents the structure of the tasks that the user may be performing. It is hierarchically decomposed into subtasks, and information regarding goals, preconditions and postconditions may be supplied. The connections, especially those between the platform and presentation models, are critical to the determination of the interface's interactive behavior. These connections also describe how the various platform constraints will influence the visual appearance of the user interface. Several techniques were described for the creation of connections between the various models and the interpretation of these. However, the automated generation of task-optimized presentation structures using MIMIC has not been developed yet.
DYNAMO-AID
Dynamo-AID (Dynamic Model-Based User Interface Development) [4-5] is a design process that includes a proposed runtime architecture to develop user interfaces for context-aware systems. However, the designer must first construct the declarative models which specify the interactions. Next, the models are serialized to an XML-based high-level user interface description language to be exported to the runtime architecture. After serialization, the designer can render a prototype to adjust any undesirable aspects of the interface. Lastly, the final user interface can be deployed on the target platform. An XML-based meta-language, DynaMOL, is used for the exportation and transportation of models to the runtime architecture. A preliminary implementation of the runtime architecture has been tested on top of the Dygimes framework (Dynamically Generating Interfaces for Mobile Computing Devices and Embedded Systems) [6]. The architecture consists of a dialog controller which takes care of changes to the user interface. These changes can be caused by user interaction or context changes.
SITUATIONS
Situations [7] is an extension to the context-aware infrastructure called the Context Toolkit [8]. It supports easier building of context-aware applications by facilitating access to application states and behavior. It exposes an API to the internal logic of a context-aware application. Context information from sensors or users is made available to the application, and application logic is used to acquire and analyze inputs and issue or execute context outputs when appropriate. The application logic consists of creating a situation with a description of the information it is interested in and the actions associated with the situation. It would consist of a number of context rules, and the situation handles the remaining logic: discovery of individual sources of context and data, determining when input is relevant, and executing the appropriate services. Context input is handled by widgets and context output by services. The capture of context input is made easier by the fully declarative mechanism provided by Situations references. Situations listeners provide all the necessary functionalities to obtain real-time execution. A default listener called Tracer provides indications of the current status of variables, the origins of the variables, and the current state of the context rule. Traditional context-aware applications would need to implement customized methods to access application logic. Situations provide standard access and allow arbitrary applications to provide intelligibility and control interaction for context-aware applications, and interfaces can be built independent of the main application.
4. REVIEW SUMMARY
Table 1 summarizes the characteristics of the systems reviewed. From the reviews, it can be seen that the common design approach is based on capturing or abstracting contextual information via models or situations and then translating these to a required platform. Two main techniques were used predominantly for context modeling: model-based and rule-based. A certain amount of expertise may be found in some of these approaches via the production, storage and maintenance of case-bases and context rules. But these approaches focus mainly on the use of context information for interface generation. None of the systems incorporates dynamic learning capability. Only Gaze-X has a feedback module which enables the user to manually change context actions. In this paper, a rule-based technique is chosen for the specification of contextual information, while dynamic learning capability is implemented with neural network techniques.
5. SYSTEM DESIGN
This section details the design of the system architecture and the learning process. We first present the problem formulation.
PROBLEM FORMULATION

We define the following for a mobile phone:

W_T = W_D ∪ W_N

where W_T is the set of all widgets for the mobile phone, W_D is the set of all widgets displayed on the interface, and W_N is the set of all widgets not displayed on the interface.

Let C(u) represent the context tuple for user u and let P(w, C(u)) be the probability of selecting widget w based on the context C(u). In our SmartIFace prototype, we define:

C(u) = [time, location, traffic, heart rate, widget_use[day]]

Then, if k widgets are to be displayed such that W_D = {w_1, w_2, ..., w_k}, we require the displayed widgets to be those with the highest selection probabilities, i.e. P(w_i, C(u)) ≥ P(w_j, C(u)) for all w_i ∈ W_D and w_j ∈ W_N.
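As a minimal illustration of this requirement, W_D can be formed by ranking widgets by their estimated selection probabilities for the current context and taking the top k. The sketch below assumes a simple map of learned probabilities; the class and method names are ours, not from the SmartIFace source:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch: choose the k widgets with the highest P(w, C(u)).
    public class WidgetSelector {

        // probabilities: widget name -> P(w, C(u)) for the current context
        public static List<String> topK(Map<String, Double> probabilities, int k) {
            List<Map.Entry<String, Double>> entries =
                new ArrayList<>(probabilities.entrySet());
            // Sort in descending order of selection probability
            entries.sort((a, b) -> Double.compare(b.getValue(), a.getValue()));
            List<String> displayed = new ArrayList<>();
            for (int i = 0; i < Math.min(k, entries.size()); i++) {
                displayed.add(entries.get(i).getKey());
            }
            return displayed; // W_D; the remaining widgets form W_N
        }

        public static void main(String[] args) {
            Map<String, Double> p = new HashMap<>();
            p.put("email", 0.9);
            p.put("game", 0.2);
            p.put("gps", 0.7);
            System.out.println(topK(p, 2)); // prints [email, gps]
        }
    }

Any tie-breaking or hysteresis policy, for example to keep widgets from flickering in and out of W_D, would sit on top of this ranking.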
6. SYSTEM ARCHITECTURE
The system architecture shown in Figure 1 comprises the following modules: GUI Manager, Learning Engine, RMS (Record Management System) and Rule-Base Engine. The mobile phone user interacts with the screen, and that interaction is usually captured by the phone OS. The proposed system can be viewed as interposed
between the screen and the phone OS. External Inputs simulates the supply of sensor- and location-based information to the system. A timer module (not shown) simulates the progression of time.
Figure 1: System architecture. User interaction from the Screen flows to the GUI Manager; External Inputs (location, time, traffic and heart condition) supply contextual information; the Learning Engine issues reconfiguration commands and reads and writes rules stored via the RMS.
Simulated external inputs include GPS location information, time of day, traffic conditions and heart condition. The GUI manager records user interaction when application widgets are accessed. Together, these form the contextual information that is passed to the learning engine. Interface re-configuration is based on commands from the learning engine referencing rules in the rule-base engine. The RMS handles rule storage in the mobile phone.
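Read as module boundaries, the architecture suggests interfaces along the following lines. This is a sketch of our reading of Figure 1; all type and method names are illustrative assumptions, not the actual SmartIFace API:

    // Sketch of the module boundaries in Figure 1 (illustrative names).
    interface GuiManager {
        void recordWidgetAccess(String widgetId);    // user interaction from the screen
        void applyReconfiguration(String[] widgets); // reconfig. command from the engine
    }

    interface LearningEngine {
        void onContext(ContextTuple c);              // contextual information received
        String[] decideDisplayedWidgets();           // consults the rule-base engine
    }

    interface RuleBaseEngine {
        String fire(ContextTuple c);                 // returns the fired rule's action
        void updateRule(ContextTuple c, String[] widgets);
    }

    interface Rms {                                  // Record Management System
        void storeRule(String ruleId, byte[] serialized);
        byte[] loadRule(String ruleId);
    }

    class ContextTuple {
        String time, location, traffic, heartCondition;
    }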
7. LEARNING PROCESS
The learning engine will add widgets, remove widgets or maintain the current screen state based on the results of its learning algorithm. The learning engine communicates with the rule-base engine for rule updates and references. The rule-base engine accepts parameters from the learning engine, fires the appropriate rules and returns the resultant action from the fired rule. The learning process flow is illustrated in Figure 2.
Figure 2: Learning process flow (Start; check Manual Mode; if not manual, check whether 7 days have elapsed; if so, retrieve the usage pattern, process the pattern in the learning engine for each context and update all context rules; otherwise update the ticker; Stop).
The learning engine is an integral part of the simulation process. The simulation process flow is illustrated in Figure 3. Besides simulating time progression, it will continuously capture the user's widget interactions and pass this information to the learning engine for processing. Changes to the context will cause context rules to be updated as shown in the rule-base engine. The learning engine learns during an initial period of k days. In our simulation, we set k to 7 for a learning period of 1 week. After this initial period, the rule-base engine continuously communicates with the learning engine at a preset timing. As this may cause some latency in the mobile device's operation, the preset timing was set to midnight when, it is assumed, user interaction activity would be at its minimum. The timing can, however, be set to any appropriate user-specified timing. After the learning engine has completed pattern processing and returned the results, a decision will be made on whether to re-configure the screen widgets or maintain the current display status. The action taken by the learning engine is determined by the type of learning algorithm implemented (explained in the next section). After the learning engine has performed its action, all widgets for the specified context will be processed for display state changes before the rule-base fires the appropriate rule and returns the action associated with the rule. The rule-base includes helper methods to support rule storage management.
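The preset midnight timing could, for instance, be realized with a plain java.util.Timer, as in this sketch (the class name and scheduling details are our assumptions, not the original code):

    import java.util.Calendar;
    import java.util.Timer;
    import java.util.TimerTask;

    // Sketch: run the nightly learning pass at midnight, then every 24 hours.
    public class LearningScheduler {
        public static void scheduleNightly(final Runnable learningPass) {
            Calendar next = Calendar.getInstance();
            next.set(Calendar.HOUR_OF_DAY, 0);  // midnight
            next.set(Calendar.MINUTE, 0);
            next.set(Calendar.SECOND, 0);
            next.add(Calendar.DAY_OF_MONTH, 1); // the coming midnight

            new Timer().scheduleAtFixedRate(new TimerTask() {
                public void run() { learningPass.run(); }
            }, next.getTime(), 24L * 60 * 60 * 1000);
        }
    }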
Three learning algorithms were developed for the learning engine: Minimal Intelligence (MI), Single Layer Perceptron with Error Correction (SLP) and Multi Layer Perceptron with Back Propagation (MLP). Witmate [30] and Joone [27] make use of several libraries not supported by Java ME, such as the Math class (no support for logarithmic, exponential and power functions), file input/output (text files are not supported in Java ME), and event handling. As Witmate is a commercial program, no source code is available. Joone, on the other hand, is open source and may be used for the creation of neural networks. Joone code, however, cannot be pre-verified by the Java ME platform. Therefore, to ensure complete compliance with the pre-verification process, customized code was developed.
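As an example of the kind of customized code this entails: CLDC's Math class lacks exp(), so the sigmoid activation needs a hand-rolled approximation. One plausible workaround (our sketch, not the actual SmartIFace code) is a truncated Maclaurin series:

    // Sketch: sigmoid on CLDC, where Math.exp() is unavailable.
    public class SigmoidApprox {

        // e^x via a truncated Maclaurin series; adequate for the small
        // weighted sums a perceptron produces on normalized inputs.
        static double exp(double x) {
            double term = 1.0, sum = 1.0;
            for (int n = 1; n <= 20; n++) {
                term *= x / n; // accumulates x^n / n!
                sum += term;
            }
            return sum;
        }

        static double sigmoid(double x) {
            return 1.0 / (1.0 + exp(-x));
        }

        public static void main(String[] args) {
            System.out.println(sigmoid(0.0)); // 0.5
            System.out.println(sigmoid(2.0)); // approx. 0.88
        }
    }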
9. MINIMAL INTELLIGENCE
To reduce processing overheads, the MI algorithm uses only the most recent user activity for all widgets in the phone to decide whether to update rules, as shown in Figure 4. Each widget has an indicator in the rule, and the widget will be displayed for the context if the indicator is 1. The algorithm first checks if the current user activity for each widget is present or absent (1 or 0). If user activity is present and the rule indicator is 0 (the user accessed the widget but the widget is not displayed), it will include the widget in the rule. However, if user activity is absent and the rule indicator is 1 (the user did not access the widget but the widget is displayed), the widget is removed from the rule. MI does not track the user activity pattern over time; it only uses the most recent user activity data on all widgets to set the rules.
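Rule maintenance under MI thus reduces to overwriting each indicator with the latest activity bit. A minimal sketch (the bit-array encoding is our assumption):

    // Sketch of the MI update described above.
    public class MinimalIntelligence {

        // activity[i] == 1 if widget i was accessed in the latest period;
        // rule[i]     == 1 if widget i is displayed for this context.
        static void updateRule(int[] activity, int[] rule) {
            for (int i = 0; i < rule.length; i++) {
                if (activity[i] == 1 && rule[i] == 0) {
                    rule[i] = 1; // accessed but hidden -> add to rule
                } else if (activity[i] == 0 && rule[i] == 1) {
                    rule[i] = 0; // displayed but unused -> remove from rule
                }
            }
        }

        public static void main(String[] args) {
            int[] activity = {1, 0, 1};
            int[] rule     = {0, 0, 1};
            updateRule(activity, rule);
            // rule is now {1, 0, 1}
            System.out.println(java.util.Arrays.toString(rule));
        }
    }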
Figure 5: Single Layer Perceptron with Error Correction process flow (the sigmoid function output is compared against the thresholds: if the result is at or above the upper threshold and a rule change is required, the widget is added to the rule; if the result is at or below the lower threshold and a rule change is required, the widget is removed from the rule; otherwise the current state is maintained).
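A per-widget perceptron with the classic error-correction (delta) rule is consistent with this flow. The sketch below is our reading of it; the feature encoding, learning rate and names are assumptions:

    // Sketch: single-layer perceptron with error correction for one widget.
    // Inputs x are the context features; the sigmoid output is compared
    // against the upper/lower thresholds as in the flow above.
    public class SlpWidget {
        double[] w;             // one weight per context feature
        double bias = 0.0;
        final double eta = 0.1; // learning rate (assumed value)

        SlpWidget(int features) { w = new double[features]; }

        double output(double[] x) {
            double s = bias;
            for (int i = 0; i < w.length; i++) s += w[i] * x[i];
            // Math.exp used for brevity; on CLDC the series approximation
            // shown earlier would substitute.
            return 1.0 / (1.0 + Math.exp(-s));
        }

        // Error-correction step toward observed usage (1 = used, 0 = not used).
        void train(double[] x, double target) {
            double err = target - output(x);
            for (int i = 0; i < w.length; i++) w[i] += eta * err * x[i];
            bias += eta * err;
        }

        // Returns +1 (add widget), -1 (remove widget) or 0 (maintain state).
        int decide(double[] x, double upper, double lower) {
            double y = output(x);
            if (y >= upper) return 1;
            if (y <= lower) return -1;
            return 0;
        }
    }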
The steps for the back propagation are (for learning data E and expected output C):

1. Compute the forward propagation of E through the network (compute the weighted sums of the network, S, and the inputs, u, of every cell).
2. From the output, make a backward pass through the intermediate layers, computing the error values:
   a. For output cells o: error_o = (C_o − u_o) u_o (1 − u_o)
   b. For all hidden cells i: error_i = (Σ_m w_{m,i} error_o) u_i (1 − u_i), where m ranges over all cells connected to hidden cell i, w is the given weight and u the cell input.
3. Lastly, update the weights within the network as follows:
   a. For weights connecting hidden to output layers: w = w + η · error_o · u_o
   b. For weights connecting hidden to input layers: w = w + η · error_i · u_i

The forward pass through the network computes the cell inputs and an output. The backward pass computes the gradient, and the weights are then updated so that the error is minimized. The learning rate, η, limits the amount of change that may take place for the weights. Although it may take longer for a smaller learning rate to converge, it can minimize the chance of overshooting the target. If the learning rate is set too high, the network may not converge at all. The process flow of MLP is shown in Figure 6.
Figure 6: Multi Layer Perceptron with Back Propagation Process Flow
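Concretely, one training step of the equations above, for a network with one hidden layer and a single sigmoid output, might look as follows (layer sizes, initialization and learning rate are illustrative; this is a sketch, not the original engine):

    // Sketch: one forward + backward pass of standard sigmoid backprop.
    public class MlpStep {
        static double sigmoid(double s) { return 1.0 / (1.0 + Math.exp(-s)); }

        double[][] wIn;   // [hidden][input] weights, input -> hidden
        double[] wOut;    // [hidden]        weights, hidden -> single output
        double eta = 0.5; // learning rate

        MlpStep(int in, int hidden) {
            wIn = new double[hidden][in];
            wOut = new double[hidden];
            java.util.Random r = new java.util.Random(42);
            for (int h = 0; h < hidden; h++) {
                wOut[h] = r.nextDouble() - 0.5;
                for (int i = 0; i < in; i++) wIn[h][i] = r.nextDouble() - 0.5;
            }
        }

        // Train on example e with expected output c; returns the network output.
        double train(double[] e, double c) {
            int hidden = wOut.length;
            double[] u = new double[hidden]; // hidden cell outputs
            double s = 0.0;
            for (int h = 0; h < hidden; h++) { // forward pass
                double sh = 0.0;
                for (int i = 0; i < e.length; i++) sh += wIn[h][i] * e[i];
                u[h] = sigmoid(sh);
                s += wOut[h] * u[h];
            }
            double uo = sigmoid(s); // network output

            double errO = (c - uo) * uo * (1 - uo); // output error
            for (int h = 0; h < hidden; h++) {
                // hidden error, computed before the outgoing weight changes
                double errH = wOut[h] * errO * u[h] * (1 - u[h]);
                wOut[h] += eta * errO * u[h];   // hidden -> output update
                for (int i = 0; i < e.length; i++)
                    wIn[h][i] += eta * errH * e[i]; // input -> hidden update
            }
            return uo;
        }

        public static void main(String[] args) {
            MlpStep net = new MlpStep(3, 4);
            double[] x = {1, 0, 1};
            for (int i = 0; i < 10000; i++) net.train(x, 1.0);
            System.out.println(net.train(x, 1.0)); // approaches 1.0
        }
    }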
PERFORMANCE TESTING
Testing focuses on the prediction accuracy of usage patterns by the MI, SLP and MLP learning algorithms. A usage pattern comprises widget interaction activity, user location, time, traffic conditions and heart condition. Table 2 summarizes the overall performance.
DETERMINATION OF SUITABLE THRESHOLDS

The SLP and MLP algorithms require setting appropriate thresholds to determine the output of the neural network. Selecting an appropriate threshold improves the prediction accuracy of user activity. However, this is complicated as the output of the sigmoid activation function used is non-linear. Hence, the upper and lower thresholds were designed to be adaptive according to the usage data patterns specified for learning.
Figure 10: SLP Prediction Accuracy for weekly repeating usage pattern

Figure 11: SLP Prediction Accuracy for daily repeating usage pattern

Figure 12: MLP Prediction Accuracy for weekly repeating usage pattern
Test results show that MLP has similar performance to SLP when the usage pattern is regular, as with the daily repeating data set (Figure 13). For weekly usage patterns, however, its performance generally trails SLP although the average is similar (Figure 10 and Figure 12). The main reason for this result is that MLP needs to learn from existing data. When the data does not exhibit a significant level of repeating usage patterns, conflicting trends may arise and cause learning errors. There are a number of error correction algorithms that can be used with the MLP, including Back Propagation, the Delta rule and the Perceptron rule. Alsmadi et al. [1] have examined the Back Propagation, Delta rule and Perceptron algorithms and found that Back Propagation gave the best result with the MLP as it is designed to reduce the error between the actual output and the desired output in a gradient descent manner. Besides error correction, there are other parameters that may affect the performance of the MLP. The number of hidden layers used and the number of hidden neurons in the hidden layers will in some ways affect the performance of the neural network and the accuracy of the results. Much research has been done in this area, but so far there is no single solution to all problems in deciding the best selection of these parameters. Bishop [2] states that an MLP with one hidden layer is
sufficient to approximate any mapping to arbitrary accuracy as long as there are sufficiently large numbers of hidden neurons. However, there is an optimal number of hidden neurons to be used for different networks. Currently, there is no fixed answer to the optimal number of hidden layers and hidden neurons to be used. When there are significant processing steps to be operated on the inputs before obtaining the outputs, then there may be a benefit to having multiple hidden layers. Zurada [20] stated that the number of hidden neurons depends on the dimension n of the input vector and on the number M of separable disjoint regions in the n-dimensional Euclidean input space. He stated that there is a relationship between M, n and J (the number of hidden neurons) such that
M(J, n) = Σ_{i=0}^{n} C(J, i), where C(J, i) denotes the binomial coefficient.

It was proved in [19] that the maximum number of hidden nodes is closely related to the number N of training pairs and the input dimension d in the following formula:

n = C · √(N / (d · log N)), where C is a constant.
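For example, under this relationship a network with J = 3 hidden neurons on an n = 2 dimensional input can separate at most M(3, 2) = 1 + 3 + 3 = 7 disjoint regions.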
The approach is to try an increasing sequence of C to obtain different numbers of hidden nodes, train the neural network for each n, and observe the n which generates the smallest root mean squared error. Haykin [12] stated that the optimal number of hidden neurons is a number that would yield a performance near to the Bayesian classifier. His tests showed that an MLP neural network using two hidden neurons is already reasonably close to the Bayesian performance (for his test problem). There are also some rule-of-thumb methods specified in [1] for determining the number of hidden neurons. In our research, we also performed tests to see if there is an improvement in performance with different numbers of hidden neurons. Table 3 shows the result of using different numbers of hidden neurons and their respective prediction accuracies. It is apparent that there is no significant performance improvement observed when using more hidden neurons. Each hidden neuron added to the hidden layer also introduced more lag into the system, as more time is required to calculate the output.
With different numbers of hidden neurons, it was observed that there is an average of 5% increase in the processing time required with each new hidden neuron added. This is an especially important consideration as the processing power available on the mobile platform is very limited. If there is not a big improvement in performance from using large numbers of neurons and layers, then it would be better to use the minimum required. For the daily repeating usage pattern, MLP's performance is similar to the SLP algorithm in that it is able to achieve over 90% accuracy due to consistency in the input data patterns (Figure 13). This consistency in the usage data also enables better training of the neural network. However, the MLP algorithm is observed to introduce a considerable amount of lag into the application due to this training. The error correction for MLP is based on mean-squared error reduction (the number of iterations required to achieve the acceptable output). To achieve good mean-squared error reduction, the number of iterations must be about 10,000. During testing with the 15 widgets, an average lag of about 200 ms was incurred for every learning period. This lag may become significant if more widgets and contexts are involved, since the learning duration is proportional to the product of the number of widgets and the number of contexts. Table 4 summarizes the performance analysis and evaluation of the 3 learning algorithms.
Figure 13: MLP Prediction Accuracy for daily repeating usage pattern
Table 4: Performance analysis and evaluation of the MI, SLP and MLP learning algorithms by speed and complexity.
CONCLUSION
In this paper, we have presented the design and development of an intelligent interface reconfiguration engine that is context-aware. Widget reconfiguration is done dynamically without the need for modeling effort. Test results show that both the Single Layer Perceptron with Error Correction and the Multi Layer Perceptron with Back Propagation can be used for context-aware reconfiguration of the mobile phone interface. However, the Single Layer Perceptron with Error Correction offers a practical yet effective solution for a resource-constrained mobile phone. It offers low computational overheads with reasonable prediction accuracy for the typical mobile phone user. Although competitive performance is offered by the MLP, a period of learning with existing data is required. Together with higher computational overheads, it may not be suitable as an on-the-fly approach. Future work would include investigating the effectiveness of approaches that include fuzzy logic engines and/or the Kohonen neural network, as well as deploying the system on an actual mobile phone integrated with suitable wireless sensor device inputs.
REFERENCES

[1] Alsmadi, M. K. S., Omar, K. B., & Noah, S. A. (2009). Back Propagation Algorithm: The Best Algorithm Among the Multi-Layer Perceptron Algorithm. International Journal of Computer Science and Network Security, 9(4), 378-383.

[2] Bishop, C. (1995). Neural Networks for Pattern Recognition. Oxford University Press, ISBN: 0198538642, 116-160.

[3] Bontempi, G., Birattari, M., & Bersini, H. (2002). New Learning Paradigms in Soft Computing. Physica-Verlag Studies in Fuzziness and Soft Computing Series, 97-136.

[4] Clerckx, T., Luyten, K., & Coninx, K. (2004). DynaMo-AID: A Design Process and a Runtime Architecture for Dynamic Model-Based User Interface Development. Engineering Human Computer Interaction and Interactive Systems, Lecture Notes in Computer Science, 3425, (pg 871-876). Springer.

[5] Clerckx, T., Winters, F., & Coninx, K. (2005). Tool Support for Designing Context-Sensitive User Interfaces using a Model-Based Approach. Proceedings of the 4th International Workshop on Task Models and Diagrams for User Interface Design, (pg 11-18). ACM Press.

[6] Coninx, K., Luyten, K., Vandervelpen, C., Bergh, J. V. D., & Creemers, B. (2003). Dygimes: Dynamically Generating Interfaces for Mobile Computing Devices and Embedded Systems. 5th International Symposium on Human-Computer Interaction with Mobile Devices and Services, Lecture Notes in Computer Science, 2795, (pg 257-272). Springer.

[7] Dey, A. K., & Newberger, A. (2009). Support for Context-Aware Intelligibility and Control. Proceedings of the 27th International Conference on Human Factors in Computing Systems, (pg 859-868). ACM Press.

[8] Dey, A. K., Abowd, G. D., & Salber, D. (2001). A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction Journal, 16(2-4), 97-166.

[9] Du, W., & Wang, L. (2008). Context-Aware Application Programming for Mobile Devices. Proceedings of the Canadian Conference on Computer Science and Software Engineering, Vol. 290, (pg 215-227).

[10] Eisenstein, J., Vanderdonckt, J., & Puerta, A. (2001). Applying Model-Based Techniques to the Development of UIs for Mobile Computers. Proceedings of the 6th International Conference on Intelligent User Interfaces, (pg 69-76). ACM Press.

[11] Gajos, K. Z., & Weld, D. S. (2004). SUPPLE: Automatically Generating Personalized User Interfaces. Proceedings of the 9th International Conference on Intelligent User Interfaces, (pg 93-100). ACM Press.

[12] Haykin, S. S. (1999). Neural Networks: A Comprehensive Foundation. 2nd Edition, Prentice Hall.

[13] Heaton, J. (2008). Introduction to Neural Networks with Java. 2nd Edition, Heaton Research.

[14] Henricksen, K., & Indulska, J. (2004). A Software Engineering Framework for Context-Aware Pervasive Computing. Proceedings of the 2nd IEEE International Conference on Pervasive Computing and Communications, (pg 77-86). IEEE Computer Society.

[15] Kurniawan, S., Mahmud, M., & Nugroho, Y. (2006). A Study of the Use of Mobile Phones by Older Persons. Proceedings of the Conference on Human Factors in Computing Systems, (pg 989-994). ACM New York.

[16] Maat, L., & Pantic, M. (2007). Gaze-X: Adaptive Affective Multimodal Interface for Single-User Office Scenarios. Proceedings of the ICMI 2006 and IJCAI 2007 International Conference on Artificial Intelligence for Human Computing, (pg 171-178). Springer-Verlag.

[17] Satyanarayanan, M. (1996). Fundamental Challenges in Mobile Computing. Proceedings of the Fifteenth Annual ACM Symposium on Principles of Distributed Computing, (pg 1-7). ACM New York.

[18] Schmidt, A. (2006). Implicit Human Computer Interaction Through Context. Personal and Ubiquitous Computing, 4(2-3), 191-199.

[19] Xu, S., & Chen, L. (2008). A Novel Approach for Determining the Optimal Number of Hidden Layer Neurons for FNN's and Its Application in Data Mining. Proceedings of the 5th International Conference on Information Technology and Applications, (pg 683-686). Springer-Verlag.

[20] Zurada, J. M. (1992). Introduction to Artificial Neural Systems. PWS Publishing Company.

[21] Android SDK. Retrieved Feb 2011, from http://developer.android.com/index.html

[22] BBC News. Retrieved Feb 2011, from http://news.bbc.co.uk/2/hi/technology/7833944.stm

[23] HTC Hero. Retrieved Feb 2011, from http://www.mobilitysite.com/2009/07/htc-hero-widgets/

[24] HTC Hero Scenes. Retrieved Feb 2011, from http://www.gsmarena.com/htc_hero-review-382p3.php

[25] HTC Touch Pro 2. Retrieved Feb 2011, from http://pockethacks.com/htc-touch-pro2-proximity-sensor-demo

[26] iPhone SDK. Retrieved Feb 2011, from http://developer.apple.com/iphone/

[27] Joone. Retrieved Feb 2011, from http://sourceforge.net/projects/joone/

[28] Nokia 5500 tilt sensor. Retrieved Feb 2011, from http://tech2.in.com/india/reviews/smart-mobile/nokia-5500-sport/3824/1

[29] S310 motion sensor. Retrieved Feb 2011, from http://www.mobilefanatic.net/2006/06/samsung-motion-sensor.html

[30] Witmate. Retrieved Feb 2011, from http://www.witmate.com/

[31] Samsung Blade S5600v. Retrieved Feb 2011, from http://www.dintz.com/review-samsung-blade-gt-s5600v/