
Research Activities in Wearable & Pervasive Computing

Below you can find a summary of my research activities, organized topically. This work was carried out mostly during my postdoc years at the Wearable Computing Laboratory at ETH Zürich, with collaborations with other institutes on specific topics.

This list is sporadically updated - for more up-to-date information, check my publications page.

Objectives

My activities center on context awareness - especially human activity and gesture recognition - in wearable and pervasive computing, with emphasis on novel adaptive and learning algorithms and on multi-sensor correlation and fusion, targeting miniature embedded platforms.

Keeping in mind the impressive adaptation capabilities found in some bio-inspired systems, I have a particular interest in methods that improve the real-world robustness of context-aware systems.

I co-initiated and coordinate the EU FP7 FET-Open project OPPORTUNITY, where we investigate novel methods for context awareness in opportunistic sensor configurations. I am active in the EU FP7 FET-Proactive project SOCIONICAL, where I am interested in the machine understanding of crowds by means of sensor networks and complexity science models.

Activity Recognition in Opportunistic Sensor Configurations [2009-]

Nowadays a large number of sensors are readily available on the body, in objects carried by the user (watches, cellphones), and in the environment. Thus, rather than thinking about which sensor modality to deploy and where - the dominant approach until now - the question becomes how to exploit the information of multiple already deployed sensors for context awareness. We call this context recognition in opportunistic sensor configurations.

Activity recognition in opportunistic sensor configurations requires rethinking the activity recognition chain to make it more modular, more flexible, and more adaptive. Most of this work is carried out within the EU FP7 FET-Open project OPPORTUNITY.


OPPORTUNITY

Activity recognition in opportunistic sensor configurations: objectives and approach

I make the case for activity recognition to move from statically defined sensor configurations towards an opportunistic use of the resources available on and around the user. The wide availability of sensors in our living environment, in objects, and soon in our clothing makes this realistic, or will soon make it so. This leads to a rethinking of the traditional activity recognition chain. I outline research directions to improve the activity recognition chain in papers that essentially summarize the objectives of the EU FP7 FET-Open project OPPORTUNITY.

This work is done within the EU FP7 FET-Open project OPPORTUNITY.




OPPORTUNITY

A reference dataset for opportunistic activity recognition

In order to develop and benchmark algorithms for activity recognition in opportunistic sensor configurations, we collected a large dataset of complex activities in highly rich sensor environments. Thus, opportunistic sensor configurations can be investigated and compared through simulations.

We deployed 15 wireless and wired networked sensor systems comprising 72 sensors of 10 modalities - in the environment, in objects, and on the body. We acquired data from 12 subjects performing morning activities, yielding over 25 hours of sensor data. Over 11000 object interactions and 17000 environment interactions occurred.



This work is done within the EU FP7 FET-Open project OPPORTUNITY.




Sensor network

Ensemble classifiers for scalable performance and robustness to faults

In order to exploit a large number of sensors (on the body, in objects, and in the environment) - possibly of different modalities - we rely on ensemble classifiers, where the decisions of classifiers operating on individual sensor nodes are combined into an overall decision about the user's activities or gestures.

We showed that this approach allows scalable performance - by including different combinations of nodes at run-time - and that it brings intrinsic robustness to faults. These characteristics are important in opportunistic activity recognition systems.
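In its simplest form, the fusion step can be sketched as a majority vote over per-node decisions. The following is an illustrative sketch (the function name and labels are not from the original work):

```python
from collections import Counter

def fuse_decisions(node_decisions):
    """Combine per-node class decisions by majority vote.

    node_decisions: one class label per sensor node; None stands for a
    faulty or absent node. Missing nodes simply contribute no vote, so
    the ensemble degrades gracefully rather than failing outright.
    """
    votes = Counter(d for d in node_decisions if d is not None)
    if not votes:
        return None  # no node delivered a decision
    return votes.most_common(1)[0][0]
```

Because nodes that drop out contribute nothing rather than corrupting the result, the same fusion rule works for any subset of nodes available at run-time.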

This work was done together with Piero Zappi, Elisabetta Farella, Luca Benini and Gerhard Tröster




Sensor network

Network-level power-performance trade-off in activity recognition by dynamic sensor selection

Based on previous insights on the scalable performance offered by ensemble classifiers on multiple on-body sensors, we developed a power-performance management mechanism that allows the recognition accuracy of an activity recognition system to be traded off dynamically (at run-time) against the network's power consumption.

This work was done together with Piero Zappi, Elisabetta Farella, Luca Benini and Gerhard Tröster, partly within the EU FP7 FET-Open project OPPORTUNITY.




OPPORTUNITY

Unsupervised classifier self-calibration


We developed a new online unsupervised classifier self-calibration algorithm. As contexts re-occur, the self-calibration algorithm adjusts the decision boundaries through online learning to better reflect the class statistics, effectively tracking and adjusting to classes that drift in the feature space.

We applied this method to the problem of activity recognition despite changes in on-body sensor placement. Such changes alter the class distribution in the feature space, which usually adversely affects activity recognition systems. We showed that unsupervised classifier self-calibration can provide robustness against moderate displacement of sensors, such as that occurring when doing physical activities or wearing sensors over extended periods of time.
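As an illustration of the principle (a simplified sketch, not the published algorithm), a nearest-centroid classifier can self-calibrate by nudging the winning centroid towards each unlabelled sample, thereby following gradual drift:

```python
import numpy as np

class SelfCalibratingNCC:
    """Nearest-centroid classifier that tracks class drift online.

    After each unlabelled sample, the centroid of the predicted class is
    nudged towards that sample, so gradual shifts in the feature
    distribution (e.g. from a displaced sensor) are followed over time.
    Illustrative sketch only.
    """

    def __init__(self, centroids, rate=0.05):
        self.centroids = {c: np.asarray(m, dtype=float)
                          for c, m in centroids.items()}
        self.rate = rate

    def classify(self, x):
        x = np.asarray(x, dtype=float)
        label = min(self.centroids,
                    key=lambda c: np.linalg.norm(x - self.centroids[c]))
        # self-calibration step: pull the winning centroid towards the sample
        self.centroids[label] += self.rate * (x - self.centroids[label])
        return label
```

The learning rate controls the trade-off between tracking drift quickly and being robust to occasional misclassified samples.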

This work was done together with Kilian Förster and Gerhard Tröster within the EU FP7 FET-Open project OPPORTUNITY.




OPPORTUNITY

Online user-adaptive gesture recognition using brain-decoded signals


Activity and context recognition in pervasive and wearable computing ought to continuously adapt to the changes typical of open-ended scenarios, such as changing users, sensor characteristics, user expectations, or user motor patterns due to learning or aging. System performance inherently relates to the user's perception of the system's behavior. Thus, the user should guide the adaptation process, and this should happen automatically, transparently, and unconsciously.

We devised an online learning mechanism taking a simple binary reward signal as input (correct/incorrect behavior) to guide adaptation.

We show how this method can improve the recognition accuracy of a user-independent gesture recognition system towards that of a user-specific system. The signal guiding adaptation is the error-related potential - a signal picked up by electroencephalography (EEG) that is emitted in the brain when a person experiences an unexpected behavior. Thus, in effect, the system becomes a closed-loop gesture recognition system capable of self-improvement without explicit user feedback.

This work was done together with Kilian Förster, Ricardo Chavarriaga, Andrea Biasiucci, José del R. Millán, and Gerhard Tröster within the EU FP7 FET-Open project OPPORTUNITY.




OPPORTUNITY

Unsupervised lifelong adaptation in activity recognition systems


A robust activity and context-recognition system must be capable of operating over a long period of time, exploiting new sources of information as they become available and evolving in an autonomous manner, coping with changes in the number and type of available sensors. For instance, as new smart sensors are deployed in the environment, in objects or in clothing, they should be capable of learning from pre-existing smart sensors how to recognize the user's context. Thus programming of new smart sensors is avoided, new sensors introduced in an environment automatically provide for fault-tolerance, and overall the activity recognition system may become capable of coping with unpredictable changes typical in open-ended environments.

We investigate ContextCells - sensor nodes capable of activity recognition, online learning, and exchanging contextual information with each other. We show the basic principles, and demonstrate how a wearable sensor may autonomously learn to recognize user activities from ambient sensors, without explicit programming, by capturing context instances as they naturally arise.

This work was done together with Alberto Calatroni and Gerhard Tröster within the EU FP7 FET-Open project OPPORTUNITY.



Social context - Sensing collective behavior [2009-]

Nowadays almost every person has a cellphone equipped with sensors. Sensing on such a massive scale opens up avenues for new applications.

Understanding human collective behavior is a challenging multidisciplinary problem with a large number of applications: assistance in emergency and disaster scenarios, urban space planning, as well as smart-traffic management.

Motivated by these possibilities, we investigate how to recognize human collective behaviors from sensors currently included in mobile phones, as well as additional modalities available in our research platforms. We also consider the new possibilities to provide smart assistance based on human collective behavior sensing, e.g. in an emergency situation.
This work is conducted within the EU FP7 FET-Proactive project SOCIONICAL.

Social context

Decentralized Detection of Group Formations from Wearable Acceleration Sensors

Among collective behaviors, the formation of groups of people walking together is a natural occurrence. Addressing a group of persons as a whole - rather than sending messages to individual persons - may become extremely valuable in emergency situations, for instance to direct a group to a specific exit.

In this feasibility study we showed how to detect the formation of groups of people from a single motion sensor, such as the one found in a mobile phone. We relied on a pairwise correlation-based algorithm that is suitable for decentralized operation. Eventually, more sensors and more advanced algorithms may allow us to detect other and more complex types of collective behavior.
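A minimal sketch of such a pairwise test, assuming a Pearson correlation over a window of acceleration magnitudes (the threshold, window length, and signals are illustrative choices, not the published parameters):

```python
import numpy as np

def same_group(acc_a, acc_b, threshold=0.7):
    """Decide whether two people walk together, one accelerometer each.

    acc_a, acc_b: equally long 1-D sequences of acceleration magnitude
    sampled at the same rate. People walking together exhibit correlated
    gait rhythms; a Pearson correlation above `threshold` flags a likely
    group.
    """
    r = np.corrcoef(np.asarray(acc_a, dtype=float),
                    np.asarray(acc_b, dtype=float))[0, 1]
    return bool(r > threshold)
```

Since each decision involves only a pair of devices, the test can run in a decentralized fashion, each phone comparing its own signal with those of its neighbours.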

This work was done together with Martin Wirz and Gerhard Tröster within the EU FP7 FET-Proactive project SOCIONICAL




Cognitive-Affective Context Recognition

The community has placed special attention on the recognition of human behavior, gestures, or location as important aspects of context. However, the user's contextual dimension also encompasses cognitive-affective aspects, an area pioneered by Rosalind Picard.





Electrodermal activity

Effect of physical activity on the electrodermal activity after a startle event [2008]

Electrodermal activity is commonly used to infer affective aspects such as fear or stress. This signal is however challenging to interpret in daily-life situations, in particular due to the co-influence of physical activity, which may lead to variability in electrode-skin contact pressure or sweating.

We characterized the effect of physical activity (walking) on the galvanic skin response in subjects that are purposely startled - a situation for which an increase in electrodermal activity is expected.

This work was done together with Johannes Schumm, Marc Bächlin, Cornelia Setz, Bert Arnrich and Gerhard Tröster




Eye movement analysis

Cognitive awareness from eye movements [2009-]

Eye movements have the particularity of being both consciously and unconsciously controlled. Unconscious eye movements are controlled by the oculomotor plant. These eye movements serve to build a mental map of the environment, and are affected by the subject's situation and activities, but also by his cognitive state, such as his proficiency at carrying out a task or his previous exposure to a situation.

Taking a fresh look at the vast amount of research related to visual perception, we outlined some new applications enabled by the machine detection of the user's cognitive state - in particular memory. We defined the experimental setup required to assess the consequences of prior exposure to a situation on eye movements - thus, in effect, the user's memory of a situation. We showed that significant changes in eye movement patterns do occur, and now investigate machine learning techniques for the classification of these patterns.

This work was done together with Andreas Bulling and Gerhard Tröster


Context processing in sensor networks

Programming sensor nodes for distributed activity recognition is challenging when a large number of possibly heterogeneous sensors are involved. We investigate ways to reduce the complexity of describing and executing context recognition algorithms in a distributed manner on sensor nodes.




Tiny Task Network

Programming model & execution engine: Tiny Task Network (TITAN) [2005-2009]

We developed TITAN, a programming model and execution engine to simplify the distributed execution of context recognition algorithms in sensor networks. A context recognition algorithm is represented as an interconnected service graph. Each service can run on different nodes and the communication between nodes (or within a node) is handled transparently.

The key characteristic of TITAN is that it allows dynamic run-time reconfiguration, whereby the execution of a service may be transparently moved from one node to another. Thus, TITAN is well suited for dynamic and open-ended environments.
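The programming model can be illustrated by a toy, single-node service-graph executor (in TITAN the same graph is partitioned across sensor nodes and services can migrate at run-time; the names below are hypothetical):

```python
def run_graph(services, graph, inputs):
    """Execute a service graph in dependency order (single-node sketch).

    services: service name -> function taking the list of upstream outputs.
    graph: service name -> list of upstream names (sensor inputs appear
    only in `inputs`). Everything runs locally here, purely to
    illustrate the programming model.
    """
    results = dict(inputs)
    remaining = [s for s in graph if s not in results]
    while remaining:
        for s in list(remaining):
            if all(u in results for u in graph[s]):
                results[s] = services[s]([results[u] for u in graph[s]])
                remaining.remove(s)
    return results
```

For example, a graph `{"mean": ["acc"], "active": ["mean"]}` computes a feature from raw acceleration and feeds it to a detector; because services only see their upstream outputs, where each service executes is an implementation detail, which is precisely what enables transparent migration.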

This work was done together with Clemens Lombriser and Gerhard Tröster within the EU FP6 project e-Sense and EU FP7 project SENSEI




Tiny Task Network

Service discovery, composition and system modeling [2005-2009]

Based on the TITAN framework, we showed how complex services can be composed from simpler ones that are discovered at run-time, and presented an approach to model the resulting systems.

This work was done together with Clemens Lombriser and Gerhard Tröster within the EU FP6 project e-Sense and EU FP7 project SENSEI


Activity recognition

Activity recognition is an important aspect of context. We investigate methods and sensor modalities supporting activity recognition.




Recognition chain

Activity spotting using string matching [2005-2008]

Spotting complex gestures of a person while they perform work-related or daily activities remains challenging. Several methods have been proposed in the community to detect specific time series within a continuous stream of data; to date, however, this remains a very challenging problem.

We investigated a pattern matching method based on the matching of strings: a string template (corresponding to the gesture of interest) is matched against a continuous string of symbols provided by the sensors. The matching distance (edit distance) between the template and the sensor string indicates the occurrence of a gesture of interest.

This method is efficiently implemented using dynamic programming, has the potential for efficient hardware implementation, and can easily scale to new gestures without retraining the entire system. The method was demonstrated on the recognition of complex manipulative gestures in a car production scenario.
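The spotting step corresponds to classic approximate string matching with a free start position in the stream; a compact dynamic-programming sketch (symbol alphabet and threshold are illustrative):

```python
def spot(template, stream, max_dist=1):
    """Spot a gesture template in a continuous symbol stream.

    Approximate string matching with a "free" start anywhere in the
    stream: the first row of the dynamic-programming table is zero, so a
    match may begin at any stream position. Stream positions where the
    edit distance drops to `max_dist` or below are reported as possible
    gesture end points.
    """
    m, n = len(template), len(stream)
    prev = [0] * (n + 1)                # free start: no cost to begin anywhere
    for i in range(1, m + 1):
        curr = [i] + [0] * n            # skipping i template symbols costs i
        for j in range(1, n + 1):
            cost = 0 if template[i - 1] == stream[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # delete template symbol
                          curr[j - 1] + 1,     # insert stream symbol
                          prev[j - 1] + cost)  # match / substitute
        prev = curr
    return [j - 1 for j in range(1, n + 1) if prev[j] <= max_dist]
```

Adding a gesture only means adding a template string; the matching machinery and the other templates are untouched, which is why the approach scales without retraining the entire system.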

This work was done together with Thomas Stiefmeier and Gerhard Tröster within the EU project WearIT@Work




Recognition chain

Event-based activity recognition [2005-2006]


High-level human activities are usually composed of activity primitives. These primitives can often be sensed with simple sensors and detectors - thus generating events. Eventually, the high-level activity can be inferred from the event statistics.

We showed how an event-based approach can be used to detect high-level activities of industrial workers in a car production scenario.
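A minimal sketch of the idea: build a frequency histogram of the detected primitive events and compare it to per-activity profiles (the vocabulary, profiles, and L1 distance below are illustrative choices, not the published method):

```python
from collections import Counter

def event_histogram(events, vocabulary):
    """Normalised frequency histogram of activity-primitive events."""
    counts = Counter(events)
    total = sum(counts.values()) or 1
    return [counts[e] / total for e in vocabulary]

def classify_activity(events, profiles, vocabulary):
    """Pick the high-level activity whose event profile is closest (L1)."""
    h = event_histogram(events, vocabulary)
    def l1(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return min(profiles, key=lambda activity: l1(h, profiles[activity]))
```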

This work was done together with Thomas Stiefmeier, Clemens Lombriser and Gerhard Tröster within the EU project WearIT@Work




Eye movement analysis

Activity recognition and HCI from eye movements [2006-]


For many activities (e.g. reading), the eyes show characteristic movement patterns. We have shown that several human activities - e.g. reading, watching a movie, writing text, copying a text - can be distinguished by classifying eye movement patterns using machine learning techniques. This shows that eye movement is a promising additional modality for inferring the user's context, alongside existing approaches.

This work was done together with Andreas Bulling and Gerhard Tröster



Bio-inspired techniques in human activity recognition

Bio-inspired techniques - mimicking basic biological behaviors such as learning, evolution, or development - tend to achieve high robustness in adverse conditions and have a great capacity to exploit their operating environment. These properties may be advantageous in dynamic, open-ended environments.





CTRNN

Evolving Continuous Time Recurrent Neural Networks as Signal Predictors for Gesture Recognition [2007]

Continuous time recurrent neural networks (CTRNNs) are capable of complex temporal dynamic processing. We show that CTRNNs can be evolved to act as signal (time series) predictors for motion sensor data, and eventually be used for gesture recognition.

The CTRNNs receive the current motion sensor data and attempt to predict the value of future samples. We use artificial evolution to evolve the network topology as a way to maximize the CTRNN's performance at predicting future sensor sample values. The user's gestures are classified based on the degree of match between predicted and actual values.
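The prediction-based classification step can be sketched generically: each gesture has its own predictor (an evolved CTRNN in the original work; any callable stands in here), and the gesture whose predictor best anticipates the incoming signal wins:

```python
def classify_by_prediction(signal, predictors):
    """Pick the gesture whose predictor best anticipates the signal.

    predictors: gesture -> function mapping the samples seen so far to a
    prediction of the next sample. The gesture with the lowest
    cumulative prediction error over the signal is returned.
    """
    errors = {}
    for gesture, predict in predictors.items():
        err = 0.0
        for t in range(1, len(signal)):
            err += abs(predict(signal[:t]) - signal[t])
        errors[gesture] = err
    return min(errors, key=errors.get)
```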

This work was done together with Gonzalo Bailador and Gerhard Tröster




Genetic Programming

Evolving robust discriminative features for gesture recognition with genetic programming [2009]

The performance of classifiers is ultimately limited by the discriminative power of the underlying features. Features are usually taken from related work or derived from domain-specific knowledge. Yet, ultimately, the space of possible features is limited to those conceivable by the designer.

By using genetic programming, one can explore outside of the space of traditional features, and thus potentially find more robust or more discriminative features.

We show that features can be evolved to improve robustness to on-body sensor placement variations, and that they may outperform traditional features.

This work was done together with Kilian Förster and Gerhard Tröster



Gait analysis

Gait analysis is an important topic in the biomedical community, be it to infer the onset of degenerative illnesses or to assess the progress of rehabilitation. With a different purpose, gait analysis is also promising in the wearable and pervasive computing community: gait is a promising carrier of contextual information, it is omnipresent, and it can be sensed from unobtrusive sensors integrated in floors or shoes. We investigate what kind of contextual information may be inferred from gait and which new applications it enables, and outline the challenges and limitations.





Gait

Gait-based authentication: promises and challenges [2009]

Template-based approaches using acceleration signals have been proposed for gait-based biometric authentication. In daily life a number of real-world factors affect the users’ gait and we investigate their effects on authentication performance.

We analyze the effect of walking speed, different shoes, extra load, and natural variation over days on the gait. We introduce a statistical Measure of Similarity (MOS) suited for template-based pattern recognition. The MOS and actual authentication show that these factors may affect the gait of an individual at a level comparable to the variations between individuals. A change in walking speed of 1 km/h, for example, yields the same MOS of 20% as the between-individuals MOS. This limits the applicability of gait-based authentication approaches. We identify how these real-world factors may be compensated, and we discuss the opportunities for gait-based context awareness in wearable computing systems.

This work was done together with Marc Bächlin and Gerhard Tröster




Gait

Preventing back-pain: detection of on-body load placement from gait alterations [2009]

Back problems are often the result of carrying loads in inappropriate manners: loads that are too heavy, carried for extended periods of time, or inappropriately placed on the body. Load, however, also influences our gait, which opens up the possibility of detecting on-body load and load placement from sensors unobtrusively integrated in shoes or garments.

We assessed the feasibility of detecting on-body load placement from gait alterations, and the challenges to address to improve robustness.


This work was done together with Marco Benocci, Marc Bächlin, Elisabetta Farella, Luca Benini and Gerhard Tröster



Localization

Most localization techniques in wearable/pervasive computing use radio signals (GPS, GSM fingerprinting, etc.) or dead-reckoning integrating the user's acceleration. We proposed and characterized a few alternative approaches.

Vision-based dead-reckoning

Vision-based dead-reckoning [2007]

We investigated a dead-reckoning system for wearable computing that tracks the user's trajectory in an unknown, non-instrumented environment by integrating the optical flow. Only a single inexpensive camera worn on the body is required, which may be reused for other purposes such as HCI.

We characterized the accuracy of the approach in a number of operating conditions, and outlined the many trade-offs inherent to the approach, e.g. between camera resolution, sample rate, motion model, optical flow computation, and camera placement.
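The core of any such dead-reckoning scheme is the integration of per-frame displacements, which also shows why small estimation errors accumulate into drift. A minimal sketch, assuming scale recovery and the motion model are handled upstream:

```python
def integrate_trajectory(flow_displacements, start=(0.0, 0.0)):
    """Integrate per-frame optical-flow displacements into a trajectory.

    flow_displacements: iterable of (dx, dy) ground-plane displacements
    estimated from the optical flow between consecutive frames. Returns
    the list of positions; any per-frame error accumulates over time,
    which is the drift inherent to dead reckoning.
    """
    x, y = start
    trajectory = [(x, y)]
    for dx, dy in flow_displacements:
        x, y = x + dx, y + dy
        trajectory.append((x, y))
    return trajectory
```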

This work was done together with Reto Jeny, Patrick de la Hamette and Gerhard Tröster



Healthcare applications





Wearable Assistant for PD patients

Wearable assistant for patients with Parkinson disease and Freezing of Gait syndrome [2008-2009]

Freezing of gait (FOG) is one common impaired motor skill in advanced Parkinson disease patients. It typically manifests as a sudden and transient inability to move. FOG is associated with substantial social and clinical consequences for patients: it is a common cause of falls, it interferes
with daily activities, and significantly impairs quality of life.

We developed a context-aware wearable assistant for Parkinson's disease patients with freezing of gait (FOG). An embedded computer detects freezing
from on-body wireless motion sensors and provides context-aware auditory stimulation to support resuming walking.

We developed the wearable platform, and investigated in a pilot study aspects such as user acceptance and technical performance. We have obtained positive initial results motivating follow-up studies on a larger scale to assess clinical effectiveness.

This work was done within the EU FP6 project Daphnet together with Marc Bächlin, Meir Plotnik, Jeff Hausdorff and Gerhard Tröster


Technical Platforms

To foster user acceptance, wearable sensors should be small, light, and unobtrusive. They should also be easy to deploy, easy to interface, and robust, so as to acquire high-quality data with little technical setup overhead. A wearable computer is usually desired to collect data from the sensors, run context-recognition algorithms, and interface with the user. We developed several platforms to support our research activities.





SensorButton

Miniature networked SensorButtons [-2005]

We developed a wearable platform that addresses miniaturization and low-power objectives: it is a miniature networked SensorButton with the form factor of a button, so that it can be integrated in garments in an unobtrusive way. It has several sensor modalities useful in wearable computing, on-board processing power, and a wireless link for sensor networking or communication with a base station.

This platform was used to further research in power aware algorithms for wearable computing and energy harvesting solutions.

This work was done by Nagendra Bharathula, together with Jamie Ward, Mathias Stäger, Paul Lukowicz and Gerhard Tröster




Daphnet wearable computer

Daphnet wearable computer [2006-2009]

We developed a platform for combined physiological signal acquisition and context awareness, targeting medical applications. It is wearable, has extended battery life, offers enough computational power to run context-recognition algorithms, and is flexible, so that a variable number of sensors can be added to it (an internal bay allows for custom input/output extension boards). The core of the platform is an Intel XScale PXA270 processor running a Linux 2.6 operating system.

This platform was used, among others, to realize an assistant for Parkinson's disease patients with freezing of gait.

This work was done within the EU FP6 project Daphnet together with Marc Bächlin and Gerhard Tröster, and builds upon previous platforms developed by Lauffer, Ossevoort, Macaluso, Winkelmann, Amft and Lukowicz.




NTMotion:AccGyro
NTECG
NTKBD

"Nanotera" Bluetooth Sensors [2006-]

I developed a number of hardware components for data acquisition and the rapid prototyping of context-aware applications. This "kit" of sensors comprises:
  • NTMotion:Acc: acceleration sensor node
  • NTMotion:AccGyro: acceleration and rate of turn sensor node
  • NTECG: ECG sensor
  • NTKBD: annotation keypad
  • Charging/programming station
It is designed to be easy to use, rapid to deploy, easy to interface with computers and wearable devices (standard Bluetooth RFCOMM interface), and to form an integrated system. Integration means that all wireless nodes share a common high-level architecture, a common firmware, a common charging/programming station, and common communication protocols.
Furthermore, a suite of software was developed to allow the acquisition of data from the sensors.

This kit is part of a larger effort to develop an educational kit for wearable computing, with focus on activity recognition.

So far, this platform has been used at the Wearable Computing Laboratory, ETH Zürich, in the following projects:
  • Student education (master level) in the Wearable Systems 1 lecture
  • Data collection in the EU FP7 FET-Open project OPPORTUNITY (w/ Kilian Förster, Alberto Calatroni)
  • Data collection in the EU FP7 FET-Proactive project SOCIONICAL (w/ Martin Wirz)
  • Activity recognition from fused on-body sensor data (w/ Piero Zappi)
  • Activity spotting (w/ Bernd Tessendorf)
  • Activity recording on mobile phones (w/ Clemens Lombriser, Mirco Rossi)
  • Wearable assistant for Parkinson disease patients with freezing of gait (w/ Marc Bächlin)
  • Wearable snowboard assistant (w/ Thomas Holleczek)
  • Stress detection (w/ Cornelia Setz, Johannes Schumm, Bert Arnrich)

Part of this work was done with Marc Bächlin with contributions from Johannes Schumm and Urban Suppiger, within the Swiss funded NanoTera project "Educational kit for wearable computing"





Eye movement analysis

Wearable electrooculography [2006-]

We have investigated technological solutions to sense eye movements using electro-oculography while minimizing user disturbance, by integrating sensing and electronics into the frame of goggles. We have shown that it is possible to recognize a number of user activities by means of eye movement using machine learning techniques.

This work was done together with Andreas Bulling and Gerhard Tröster




Smart shirt

Technology for the Rapid Prototyping of Smart Shirts [2006-2009]

Activity and gesture recognition requires the on-body placement of a number of sensors. In order to simplify the deployment of sensors on body (e.g. during data acquisition sessions in the lab, or trials with patients) a solution integrated into clothing is desirable.

We developed a technology to rapidly design and prototype smart shirts, based on ultraminiature motion sensors attached to the clothing with silicone gel. The sensors are interfaced by tiny wires to a body-worn signal processing platform that can log data or send it to a PC for further processing.

This technology allows prototype smart shirts to be produced at limited cost and in a short time, which can be used e.g. to validate sensor modalities and placement. After validation, a more textile-integrated solution can be envisioned.

This platform was used to investigate sensor placement variability when sensors are attached to clothing, to record motion data from subjects performing upper-body rehabilitation movements, and to monitor the posture of kids.

This work was done with Holger Harms and Gerhard Tröster