Lecture Notes in Computer Science
Commenced Publication in 1973
Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max Planck Institute for Informatics, Saarbruecken, Germany
7014
João Gama Elizabeth Bradley Jaakko Hollmén (Eds.)
Advances in Intelligent Data Analysis X
10th International Symposium, IDA 2011
Porto, Portugal, October 29-31, 2011
Proceedings
Volume Editors

João Gama
University of Porto, Faculty of Economics
LIAAD-INESC Porto, L.A.
Rua de Ceuta, 118, 6°, 4050-190 Porto, Portugal
E-mail: [email protected]

Elizabeth Bradley
University of Colorado, Department of Computer Science
Boulder, CO 80309-0430, USA
E-mail: [email protected]

Jaakko Hollmén
Aalto University School of Science
Department of Information and Computer Science
P.O. Box 15400, 00076 Aalto, Finland
E-mail: [email protected]

ISSN 0302-9743; e-ISSN 1611-3349
ISBN 978-3-642-24799-6; e-ISBN 978-3-642-24800-9
DOI 10.1007/978-3-642-24800-9
Springer Heidelberg Dordrecht London New York
Library of Congress Control Number: 2011938470
CR Subject Classification (1998): H.3, H.4, I.2, F.1, H.2.8, J.3, I.4
LNCS Sublibrary: SL 3 – Information Systems and Applications, incl. Internet/Web and HCI
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
We are proud to present the proceedings of the 10th International Symposium on Intelligent Data Analysis (IDA 2011), which was held during October 29-31, 2011, in Porto, Portugal. The IDA series started in 1995 and was held biennially until 2009. The year 2010 was special: the IDA symposium refocused on the original goals of the series. This focus was continued and consolidated this year, as outlined in the Call for Papers: “IDA 2011 solicits papers on all aspects of intelligent data analysis, particularly papers on intelligent support for modeling and analyzing complex, dynamical systems. Many important scientific and social systems are poorly understood, particularly their evolving nature. Intelligent support for understanding these kinds of systems goes beyond the usual algorithmic offerings in the data mining literature and includes data collection; novel modes of data acquisition, such as crowd sourcing and other citizen science mechanisms; data cleaning, semantics and markup; searching for data and assembling datasets from multiple sources; data processing, including workflows, mixed-initiative data analysis, and planning; data and information fusion; incremental, mixed-initiative model development, testing and revision; and visualization and dissemination of results...” The IDA community responded enthusiastically to this call and the symposium program was accordingly very strong. We received 73 submissions from authors in 29 countries. Each submission was evaluated by at least three members of the Program Committee; acceptance was based on these experts' assessment of the quality and novelty of the work. Members of the Senior Program Committee were asked to solicit and advocate for papers that reported on preliminary but highly novel work. Of the 73 submitted papers, 19 were accepted for oral presentation and 16 for poster presentation. The meeting was hosted by the Department of Computer Science, Faculty of Science, University of Porto in Portugal. Marking the 10th edition of IDA, we were honored by the participation of three distinguished invited speakers. Xiaohui Liu from Brunel University in the United Kingdom – one of the founders of the IDA symposium series – presented a talk on “Intelligent Data Analysis: Keeping Pace with Technological Advances”. Michael R. Berthold from the University of Konstanz in Germany presented “From Pattern Discovery to Discovery Support: Creativity and Heterogeneous Information Networks”. Carla Gomes from Cornell University in the United States presented “Computational Sustainability”. These invited talks were complemented by a special General Chair's Lecture presented by Ricardo Baeza-Yates from Yahoo! Research Barcelona in Spain, entitled “Web Mining or the Wisdom of the Crowds”.
We wish to express our gratitude to all authors of submitted papers for their intellectual contributions; to the Program Committee members and the additional reviewers for their effort in reviewing, discussing, and commenting on the submitted papers; to the members of the IDA Steering Committee for their ongoing guidance and support; and to the Senior Program Committee for their active involvement. We thank Richard Van De Stadt for running the submission website and handling the production of the proceedings. We gratefully acknowledge those who were involved in the local organization of the symposium: Pedro Pereira Rodrigues (Chair), Carlos Ferreira, Raquel Sebastião, Márcia Oliveira, and Petr Kosina. The 2011 IDA symposium would not have been possible without the support of the following organizations:

- Associação Portuguesa para a Inteligência Artificial
- Centro Internacional de Matemática
- Fundação para a Ciência e a Tecnologia
- INESC Porto
- KNIME – Konstanz Information Miner
- LIAAD – The Laboratory of Artificial Intelligence and Decision Support
- University of Porto
- Yahoo! Research Center, Barcelona
We especially thank Ericsson for funding the IDA Frontier Prize, a newly established award for a novel, visionary contribution to data analysis in the understanding of complex systems.

August 2011
João Gama
Elizabeth Bradley
Jaakko Hollmén
Organization
Conference Chair
João Gama, University of Porto, Portugal
Program Chairs
Elizabeth Bradley, University of Colorado, USA
Jaakko Hollmén, Aalto University, Finland
Poster Chairs
Fazel Famili, CNRC, Canada
Alípio Jorge, University of Porto, Portugal
Local Organization Chair
Pedro Pereira Rodrigues, University of Porto, Portugal
Publicity Chairs
David Weston, Imperial College, UK
Carlos Ferreira, University of Porto, Portugal
Frontier Prize Chairs
Michael R. Berthold, University of Konstanz, Germany
Niall M. Adams, Imperial College, UK
Webmasters
Luis Matias, University of Porto, Portugal
Petr Kosina, University of Porto, Portugal
Local Organization
Carlos Ferreira, LIAAD-INESC Porto, University of Porto
Raquel Sebastião, LIAAD-INESC Porto, University of Porto
Márcia Oliveira, LIAAD-INESC Porto, University of Porto
Petr Kosina, LIAAD-INESC Porto, University of Porto
Senior Program Committee Members
Niall M. Adams, Imperial College, UK
Michael R. Berthold, University of Konstanz, Germany
Elizabeth Bradley, University of Colorado, USA
Paul Cohen, University of Arizona, USA
João Gama, University of Porto, Portugal
Jaakko Hollmén, Aalto University, Finland
Frank Klawonn, Ostfalia University of Applied Sciences, Germany
Joost Kok, Leiden University, The Netherlands
Xiaohui Liu, Brunel University West London, UK
Hannu Toivonen, University of Helsinki, Finland
David Weston, Imperial College, UK
Program Committee Members
Henrik Boström, Stockholm University, Sweden
Jean-François Boulicaut, INSA Lyon, France
Andre de Carvalho, University of Sao Paulo, Brazil
Jose del Campo, University of Malaga, Spain
Bruno Crémilleux, University of Caen, France
Werner Dubitzky, University of Ulster, UK
Sašo Džeroski, Jožef Stefan Institute, Ljubljana, Slovenia
Fazel Famili, IIT - National Research Council Canada, Canada
Ad Feelders, Utrecht University, The Netherlands
Ingrid Fischer, University of Konstanz, Germany
Johannes Fürnkranz, Technische Universität Darmstadt, Germany
Gemma Garriga, INRIA, France
Ricard Gavaldà, Technical University of Catalonia (UPC), Spain
Gerard Govaert, University of Technology of Compiegne, France
Kenny Gruchalla, National Renewable Energy Lab, USA
Lawrence Hall, University of South Florida, USA
Howard Hamilton, University of Regina, Canada
Eyke Hüllermeier, University of Marburg, Germany
Alípio Jorge, University of Porto, Portugal
Eamonn Keogh, University of California Riverside, USA
Wesley Kerr, University of Arizona, USA
Rudolf Kruse, Otto von Guericke University Magdeburg, Germany
Pedro Larranaga, Universidad Politecnica de Madrid, Spain
Manuel Martin-Merino, Pontifical University of Salamanca, Spain
Ernestina Menasalvas, Universidad Politecnica de Madrid, Spain
Maria-Carolina Monard, University of Sao Paulo, Brazil
Giovanni Montana, Imperial College, UK
Miguel Prada, University of Leon, Spain
J. Sunil Rao, The University of Miami, USA
Victor Robles, Technical University of Madrid, Spain
Vítor Santos Costa, University of Porto, Portugal
Roberta Siciliano, University of Naples Federico II, Italy
Myra Spiliopoulou, Otto von Guericke University Magdeburg, Germany
Stephen Swift, Brunel University, UK
Evimaria Terzi, Boston University, USA
Allan Tucker, Brunel University, UK
Juha Vesanto, Xtract Ltd., Finland
Ricardo Vigário, Aalto University, Finland
Additional Referees
Satrajit Basu, Igor Braga, Ivica Dimitrovski, Dora Erdos, Carlos Ferreira, Valentin Gjorgjioski, Pascal Held, Frederik Janssen, Dragi Kocev, John N. Korecki, Mikko Korpela, Petr Kosina, Mikko Kurimo, Jefrey Lijffijt, Christian Moewes, Bruno Nogueira, Márcia Oliveira, Jonathon Parker, Georg Ruß, Leander Schietgat, Raquel Sebastião, Newton Spolaôr, Matthias Steinbrecher, Janne Toivola, Lucas Vendramin
Table of Contents
Invited Papers

Bisociative Knowledge Discovery
   Michael R. Berthold ..... 1

Computational Sustainability (Abstract)
   Carla P. Gomes ..... 8

Intelligent Data Analysis: Keeping Pace with Technological Advances (Abstract)
   Xiaohui Liu ..... 9

Selected Contributions

Comparative Analysis of Power Consumption in University Buildings Using envSOM
   Serafín Alonso, Manuel Domínguez, Miguel Angel Prada, Mika Sulkava, and Jaakko Hollmén ..... 10

Context-Aware Collaborative Data Stream Mining in Ubiquitous Devices
   João Bártolo Gomes, Mohamed Medhat Gaber, Pedro A.C. Sousa, and Ernestina Menasalvas ..... 22

Intra-firm Information Flow: A Content-Structure Perspective
   Yakir Berchenko, Or Daliot, and Nir N. Brueller ..... 34

Mining Fault-Tolerant Item Sets Using Subset Size Occurrence Distributions
   Christian Borgelt and Tobias Kötter ..... 43

Finding Ensembles of Neurons in Spike Trains by Non-linear Mapping and Statistical Testing
   Christian Braune, Christian Borgelt, and Sonja Grün ..... 55

Towards Automatic Pathway Generation from Biological Full-Text Publications
   Ekaterina Buyko, Jörg Linde, Steffen Priebe, and Udo Hahn ..... 67

Online Writing Data Representation: A Graph Theory Approach
   Gilles Caporossi and Christophe Leblay ..... 80

Online Evaluation of Email Streaming Classifiers Using GNUsmail
   José M. Carmona-Cejudo, Manuel Baena-García, José del Campo-Ávila, Albert Bifet, João Gama, and Rafael Morales-Bueno ..... 90

The Dynamic Stage Bayesian Network: Identifying and Modelling Key Stages in a Temporal Process
   Stefano Ceccon, David Garway-Heath, David Crabb, and Allan Tucker ..... 101

Mining Train Delays
   Boris Cule, Bart Goethals, Sven Tassenoy, and Sabine Verboven ..... 113

Robustness of Change Detection Algorithms
   Tamraparni Dasu, Shankar Krishnan, and Gina Maria Pomann ..... 125

GaMuSo: Graph Base Music Recommendation in a Social Bookmarking Service
   Jeroen De Knijf, Anthony Liekens, and Bart Goethals ..... 138

Resampling-Based Change Point Estimation
   Jelena Fiosina and Maksims Fiosins ..... 150

Learning about the Learning Process
   João Gama and Petr Kosina ..... 162

Predicting Computer Performance Dynamics
   Joshua Garland and Elizabeth Bradley ..... 173

Prototype-Based Classification of Dissimilarity Data
   Barbara Hammer, Bassam Mokbel, Frank-Michael Schleif, and Xibin Zhu ..... 185

Automatic Layout Design Solution
   Fadratul Hafinaz Hassan and Allan Tucker ..... 198

An Alternative to ROC and AUC Analysis of Classifiers
   Frank Klawonn, Frank Höppner, and Sigrun May ..... 210

The Algorithm APT to Classify in Concurrence of Latency and Drift
   Georg Matthias Krempl ..... 222

Identification of Nuclear Magnetic Resonance Signals via Gaussian Mixture Decomposition
   Martin Krone, Frank Klawonn, Thorsten Lührs, and Christiane Ritter ..... 234

Graphical Feature Selection for Multilabel Classification Tasks
   Gerardo Lastra, Oscar Luaces, Jose Ramon Quevedo, and Antonio Bahamonde ..... 246

A Web2.0 Strategy for the Collaborative Analysis of Complex Bioimages
   Christian Loyek, Jan Kölling, Daniel Langenkämper, Karsten Niehaus, and Tim W. Nattkemper ..... 258

Data Quality through Model Checking Techniques
   Mario Mezzanzanica, Roberto Boselli, Mirko Cesarini, and Fabio Mercorio ..... 270

Generating Automated News to Explain the Meaning of Sensor Data
   Martin Molina, Amanda Stent, and Enrique Parodi ..... 282

Binding Statistical and Machine Learning Models for Short-Term Forecasting of Global Solar Radiation
   Llanos Mora-López, Ildefonso Martínez-Marchena, Michel Piliougine, and Mariano Sidrach-de-Cardona ..... 294

Bisociative Discovery of Interesting Relations between Domains
   Uwe Nagel, Kilian Thiel, Tobias Kötter, Dawid Piatek, and Michael R. Berthold ..... 306

Collaboration-Based Function Prediction in Protein-Protein Interaction Networks
   Hossein Rahmani, Hendrik Blockeel, and Andreas Bender ..... 318

Mining Sentiments from Songs Using Latent Dirichlet Allocation
   Govind Sharma and M. Narasimha Murty ..... 328

Analyzing Parliamentary Elections Based on Voting Advice Application Data
   Jaakko Talonen and Mika Sulkava ..... 340

Integrating Marine Species Biomass Data by Modelling Functional Knowledge
   Allan Tucker and Daniel Duplisea ..... 352

A Stylometric Study and Assessment of Machine Translators
   V. Suresh, Avanthi Krishnamurthy, Rama Badrinath, and C.E. Veni Madhavan ..... 364

Traffic Events Modeling for Structural Health Monitoring
   Ugo Vespier, Arno Knobbe, Joaquin Vanschoren, Shengfa Miao, Arne Koopman, Bas Obladen, and Carlos Bosma ..... 376

Supervised Learning in Parallel Universes Using Neighborgrams
   Bernd Wiswedel and Michael R. Berthold ..... 388

iMMPC: A Local Search Approach for Incremental Bayesian Network Structure Learning
   Amanullah Yasin and Philippe Leray ..... 401

Analyzing Emotional Semantics of Abstract Art Using Low-Level Image Features
   He Zhang, Eimontas Augilius, Timo Honkela, Jorma Laaksonen, Hannes Gamper, and Henok Alene ..... 413

Author Index ..... 425
Bisociative Knowledge Discovery

Michael R. Berthold

Nycomed Chair for Bioinformatics and Information Mining,
Dept. of Computer and Information Science, University of Konstanz, Konstanz, Germany
[email protected]

Abstract. Data analysis generally focusses on finding patterns within a reasonably well-connected domain of interest. In this article we focus on the discovery of new connections between domains (so-called bisociations), supporting the creative discovery process in a novel way. We motivate this approach, show the difference to classical data analysis and conclude by briefly illustrating some types of domain-crossing connections along with illustrative examples.
1 Motivation
Modern data analysis enables users to discover complex patterns of various types in large information repositories. Together with data mining schemas such as CRISP-DM and SEMMA, the user participates in a cycle of data preparation, model selection, training, and knowledge inspection. Many variations on this theme have emerged in the past, such as Explorative Data Mining, Visual Analytics, and many others, but the underlying assumption has always been that the data the methods are applied to model one (often rather complex) domain. Note that by domain we do not mean a single feature space (Multi View Learning and Parallel Universes are just two of many types of learning methods that operate on several spaces at the same time); instead we want to emphasize the fact that the data to be analyzed represent objects that are all regarded as representing properties under one more or less specific aspect. However, methods that support the discovery of connections between previously unconnected (or only loosely coupled) domains have not received much attention in the past. Yet, in order to really support the discovery of novel insights, finding connections between previously unconnected domains promises true potential. Research on (computational) creativity strongly suggests that this type of “out of the box thinking” is an important part of the human ability to achieve truly creative discoveries. In this paper we summarize some more recent work focusing on the discovery of such domain-crossing connections. To contrast with the finding of “within domain” patterns (also termed associations), we use the term bisociation as coined by Arthur Koestler in [4] to stress the difference. We argue that Bisociative Knowledge Discovery represents an important challenge in our quest to build truly creative discovery support systems.
2 Bisociation
Defining bisociation formally is, of course, a challenge. An extensive overview of related work, links to computational creativity and related areas in AI, as well as a more thorough formalization can be found in [3]. Here we concentrate only on the parts essential for the remainder of this paper and only intuitively motivate the background. Boden [2] distinguishes three different types of creative discoveries: Combinatorial, Exploratory, and Transformational Creativity. Whereas the second and third categories can be mapped onto (explorative) data analysis, or at least the discovery process within a given domain, Combinatorial Creativity nicely represents what we are interested in here: the combination of different domains and the creative discovery stemming from new connections between those domains. Informally, bisociation can be defined as (sets of) concepts that bridge two otherwise not – or only very sparsely – connected domains, whereas an association bridges concepts within a given domain. Of course, not all bisociation candidates are equally interesting, and in analogy to how Boden assesses the interestingness of a creative idea as being new, surprising, and valuable [2], a similar measure of interestingness can be specified when the underlying set of domains and their concepts are known. Going back to Koestler, we can summarize this setup nicely: “The creative act is not an act of creation in the sense of the Old Testament. It does not create something out of nothing; it uncovers, selects, re-shuffles, combines, synthesizes already existing facts, ideas, faculties, skills. The more familiar the parts, the more striking the new whole.” Transferred to the data analysis scenario, this puts the emphasis on finding patterns across domains, whereas finding patterns in the individual domains themselves is a problem that has been tackled already for quite some time. Put differently, he distinguishes associations that work within a given domain (called a matrix by Koestler) and are limited to repetitiveness (here: finding other/new occurrences of already identified patterns) from bisociations, which find novel connections crossing independent matrices (domains).
3 Types of Bisociation
Obviously the above still remains relatively vague, and for concrete implementations the types of bisociative patterns that are sought need to be specified more precisely. In the past years a number of bisociation types have emerged in the context of Bisociative Knowledge Discovery: Bridging Concepts, Bridging Graphs, and Bridging by Structural Similarity. Since these ideas are also addressed in other areas of research, additional types most likely exist in those fields as well.
3.1 Bridging Concepts
The most natural type of bisociation is represented by a single concept that links two domains; Figure 1 illustrates this.
Fig. 1. Bridging concept (from [3])
Such bridging concepts do not need to exist in the context of a network-based representation as suggested by the figure but can also be found in other representations. In [6], for instance, different textual domains were analyzed to find bisociative terms that link different concepts from the two domains. An example of a few bridging concepts is shown in Figure 2. Here a well-known data set containing articles from two domains (migraine and magnesium) was searched for bridging terms. This example reproduces an actual discovery in medicine.
Fig. 2. Bridging concepts - an example reproducing the Swanson discovery (from [6])
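To make the idea concrete in a network-based setting, the following sketch shows one simple way to search a concept graph for candidate bridging concepts: nodes outside both domains whose neighbors fall into both of them. The toy graph, the node names, and the scoring heuristic are purely illustrative assumptions and are not the method used in [6].

```python
import networkx as nx

# Toy concept graph; node names are purely illustrative.
G = nx.Graph()
G.add_edges_from([
    ("migraine", "serotonin"), ("migraine", "vascular spasm"), ("migraine", "aura"),
    ("magnesium", "calcium channel"), ("magnesium", "vascular spasm"),
    ("magnesium", "enzyme activity"),
])

domain_a = {"migraine", "serotonin", "aura"}                     # e.g., migraine literature
domain_b = {"magnesium", "calcium channel", "enzyme activity"}   # e.g., magnesium literature

def bridging_score(graph, node, dom_a, dom_b):
    """Simple heuristic: a node is a bridging candidate if it lies outside
    both domains but has neighbors in each of them."""
    if node in dom_a or node in dom_b:
        return 0
    nbrs = set(graph.neighbors(node))
    a, b = len(nbrs & dom_a), len(nbrs & dom_b)
    return 0 if min(a, b) == 0 else a + b

scores = {n: bridging_score(G, n, domain_a, domain_b) for n in G.nodes}
bridges = sorted((n for n, s in scores.items() if s > 0), key=scores.get, reverse=True)
print(bridges)  # ['vascular spasm'] -- a candidate bridging concept
```

In a realistic setting the two domains would be large term sets extracted from the respective literatures, and such a score would typically be combined with a check that the two domains are otherwise only sparsely connected.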
3.2 Bridging Graphs
More complex bisociations can be modeled by bridging graphs; Figure 3 illustrates this.
Fig. 3. Bridging graphs (from [3])
Here two different domains are connected by a (usually small) subset of concepts that have some relationship among themselves. In a network-based representation one would identify a relatively dense subgraph connecting two domains, but such “chains of evidence” connecting two domains can also be formalized in other representations. Two examples of bridging graphs are shown in Figure 4 (the data stems from Schools-Wikipedia, see [7] for details). One can nicely see how the two concepts “probability space” and “arithmetic mean” connect the domain of movies with some more detailed concepts in the statistics domain. This is at first glance surprising but finds its explanation in the (in both cases also somewhat “creative”) use of those concepts in the two films or the series of films dominated by one actor. The second example nicely bridges physical properties and usage scenarios of phonographs.
Fig. 4. Bridging graphs - two examples (from [5])
3.3 Bridging by Structural Similarity
The third, most complex type of bisociation does not rely on some straightforward type of link connecting two domains but models such connections on a higher level. In both domains two subsets of concepts can be identified that share a structural similarity. Figure 5 illustrates this – again in a network-based representation, but other types of structural similarity can be defined here as well.
Fig. 5. Bridging by graph similarity (from [3])
An interesting example of such structural similarities can be seen in Figure 6. Again, the demonstration data set based on Schools-Wikipedia was used. The two nodes slightly off center (“Euclid” on the left and “Plato” on the right) are farther apart in the original network but share structural properties such as being closely connected to the hub of a subnetwork (“mathematics” vs. “philosophy”). Note that also “Aristotle” fills a similar role in the philosophy domain.
Fig. 6. Bridging by graph similarity - example (from [7])
3.4 Other Types of Bisociation
The bisociation types discussed above are obviously not complete. The first two types are limited to a 1:1 match on the underlying structures and require the two domains to already have some type of (although sparse) neighborhood relation. Only the third type allows matching on a more abstract level, finding areas of structural similarity and drawing connections between those. Other, even more abstract types of bisociation certainly exist, but more direct bisociation types can likely be defined as well.
4 Bisociation Discovery Methods
In order to formalize the types of bisociations and develop methods for finding them, a more detailed model of the knowledge space needs to be available. When dealing with various types of information and the ability to find patterns in those information repositories, a network-based model is often an appropriate choice due to its inherent flexibility. A number of methods can be found in [1]. We hasten to add, however, that this is not the only way to model domains and bisociations; [1] also contains contributions that find bisociations in non-network-type domains. It is interesting to note that quite a few of the existing methods in the machine learning and data analysis areas can be used with often only minor modifications. For instance, methods for item set mining can be applied to the detection of concept graphs, and measures of bisociation strength can also be derived from other approaches to model interestingness. Quite a bit of Bisociative Knowledge Discovery can rely on existing methods, but the way those methods are applied is often radically different. Instead of searching for patterns that have reasonably high occurrence frequencies, we are often interested in the exact opposite: bisociations are, at their heart, something new that so far exists only in very faint ways, if at all.
5 Outlook
Bisociative Knowledge Discovery promises great impact, especially in those areas of scientific research where data gathering still outpaces model understanding. Once the mechanisms are well understood, the task of data analysis tends to change and the focus shifts much more strongly to (statistically) significant and validated patterns. However, in the early phase of research, the ability to collect data usually far outpaces the experts' ability to make sense of those gigantic data repositories and use them to form new hypotheses. Current methods fall short of offering true, explorative access to patterns within, but in particular across, domains – the framework sketched here (and more substantially founded in [1]) can help to address this shortcoming. Much work still needs to be done, however, as many more types of bisociations can be formalized and many of the
existing methods in the Machine Learning and Data Analysis/Mining community are waiting to be applied to these problems. One very interesting development here is the network-based bisociation discovery methods, which nicely begin to bridge the gap between solidly understood graph-theoretical algorithms and overly heuristic, poorly controllable methods. Putting those together can lead to the discovery of better understood bisociative (and other) patterns in large networks. The Data Mining Community has been looking for an exciting “Grand Challenge” for a number of years now. Bisociative Knowledge Discovery could offer just that: inventing methods and building systems that support the discovery of truly new knowledge across different domains will have immense impact on how research in many fields can be computer supported in the future.

Acknowledgements. The thoughts presented in this paper would not have emerged without countless, constructive and very fruitful discussions with the members of the BISON Project and the BISON Group at Konstanz University. In particular, I want to thank Tobias Kötter, Kilian Thiel, Uwe Nagel, Ulrik Brandes, and our frequent guest bison Werner Dubitzky for many discussions around the nature of bisociation and creative processes. Most of this work was funded by the European Commission in the 7th Framework Programme (FP7-ICT-2007-C FET-Open, contract no. BISON-211898).
References
1. Berthold, M.R. (ed.): Bisociative Knowledge Discovery, 1st edn. LNCS. Springer, Heidelberg (in preparation)
2. Boden, M.A.: Précis of the creative mind: Myths and mechanisms. Behavioural and Brain Sciences 17, 519–570 (1994)
3. Dubitzky, W., Kötter, T., Schmidt, O., Berthold, M.R.: Towards creative information exploration based on Koestler's concept of bisociation. In: Berthold, M.R. (ed.) Bisociative Knowledge Discovery, 1st edn. LNCS. Springer, Heidelberg (in preparation)
4. Koestler, A.: The Act of Creation. Macmillan, NYC (1964)
5. Nagel, U., Thiel, K., Kötter, T., Piatek, D., Berthold, M.R.: Bisociative discovery of interesting relations between domains. In: Proceedings of IDA, the 10th Conference on Intelligent Data Analysis, Porto, Portugal (in press)
6. Sluban, B., Juršič, M., Cestnik, B., Lavrač, N.: Exploring the power of outliers for cross-domain literature mining. In: Berthold, M.R. (ed.) Bisociative Knowledge Discovery. LNCS. Springer, Heidelberg (in preparation)
7. Thiel, K., Berthold, M.R.: Node similarities from spreading activation. In: Berthold, M.R. (ed.) Bisociative Knowledge Discovery, 1st edn. LNCS. Springer, Heidelberg (in preparation)
Computational Sustainability

Carla P. Gomes

Cornell University, Ithaca, NY, USA
[email protected]

Abstract. Computational sustainability [1] is a new interdisciplinary research field with the overall goal of developing computational models, methods, and tools to help manage the balance between environmental, economic, and societal needs for sustainable development. The notion of sustainable development — development that meets the needs of the present without compromising the ability of future generations to meet their needs — was introduced in Our Common Future, the seminal report of the United Nations World Commission on Environment and Development, published in 1987. In this talk I will provide an overview of computational sustainability, with examples ranging from wildlife conservation and biodiversity, to poverty mitigation, to large-scale deployment and management of renewable energy sources. I will highlight overarching computational challenges at the intersection of constraint reasoning, optimization, data mining, and dynamical systems. Finally I will discuss the need for a new approach that views computational sustainability problems as “natural” phenomena, amenable to a scientific methodology, in which principled experimentation, to explore problem parameter spaces and hidden problem structure, plays as prominent a role as formal analysis.
Acknowledgments. The author is the lead Principal Investigator of an Expedition in Computing grant on Computational Sustainability from the National Science Foundation (NSF award number: 0832782). The author thanks NSF for the research support and the grant team members for their many contributions towards the development of a vision for computational sustainability, in particular, Chris Barrett, Antonio Bento, Jon Conrad, Tom Dietterich, John Guckenheimer, John Hopcroft, Ashish Sabharwal, Bart Selman, David Shmoys, Steve Strogatz, and Mary Lou Zeeman.
Reference
[1] Gomes, C.P.: Computational Sustainability: Computational methods for a sustainable environment, economy, and society. The Bridge, National Academy of Engineering 39(4) (Winter 2009)
Intelligent Data Analysis: Keeping Pace with Technological Advances

Xiaohui Liu

School of Information Systems, Computing and Mathematics, Brunel University, London UB8 3PH, UK
[email protected]

Abstract. Over the past few decades, we have witnessed significant advances in technology that have done much to change the way we live and communicate, e.g., medical and biotechnology, the Internet and mobile technology. Often these technologies lead to a huge amount of data being generated, and making the best use of these technologies often depends on how best to interpret these data in the context of problem solving and complex systems. Intelligent Data Analysis is needed to address the interdisciplinary challenges concerned with the effective analysis of data [1-3]. In this talk, I will look into a range of real-world complex systems shaped by technological change and explore the role of IDA in these systems, in particular, how to ensure that quality data are obtained for analysis, how to handle human factors and domain knowledge with care, how to meet challenges in modelling dynamic systems, and how to consider all of these when analysing complex systems [4-10].
References
1. Liu, X.: Intelligent Data Analysis: Issues and Challenges. The Knowledge Engineering Review 11(4), 365–371 (1996)
2. Berthold, M., Hand, D.J.: Intelligent Data Analysis: an Introduction, 2nd edn. Springer, Heidelberg (2007)
3. Cohen, P., Adams, N.: Intelligent Data Analysis in the 21st Century. In: Adams, N.M., Robardet, C., Siebes, A., Boulicaut, J.-F. (eds.) IDA 2009. LNCS, vol. 5772, pp. 1–9. Springer, Heidelberg (2009)
4. Liu, X., Cheng, G., Wu, J.: Analysing Outliers Cautiously. IEEE Transactions on Knowledge and Data Engineering 14, 432–437 (2002)
5. Swift, S., Tucker, A., Vinciotti, V., Martin, M., Orengo, C., Liu, X., Kellam, P.: Consensus Clustering and Functional Interpretation of Gene Expression Data. Genome Biology 5, R94 (2004)
6. Chen, S., Liu, X.: An Integrated Approach for Modeling Learning Patterns of Students in Web-Based Instruction: A Cognitive Style Perspective. ACM Transactions on Computer Human Interaction 15(1), 1–28 (2008)
7. Wang, Z., Liu, X., Liu, Y., Liang, J., Vinciotti, V.: An Extended Kalman Filtering Approach to Modelling Nonlinear Dynamic Gene Regulatory Networks. IEEE/ACM Transactions on Computational Biology and Bioinformatics 6(3), 410–419 (2009)
8. Ruta, A., Li, Y., Liu, X.: Real-Time Traffic Sign Recognition from Video by Class-Specific Discriminative Features. Pattern Recognition 43(1), 416–430 (2010)
9. Fraser, K., Wang, Z., Liu, X.: Microarray Image Analysis: an Algorithmic Approach. Chapman & Hall/CRC, London (2010)
10. Liang, J., Wang, Z., Liu, X.: Distributed State Estimation for Discrete-Time Sensor Networks with Randomly Varying Nonlinearities and Missing Measurements. IEEE Transactions on Neural Networks 22(3), 486–496 (2011)
Comparative Analysis of Power Consumption in University Buildings Using envSOM

Serafín Alonso(1), Manuel Domínguez(1), Miguel Angel Prada(1), Mika Sulkava(2), and Jaakko Hollmén(2)

(1) Grupo de Investigación SUPPRESS, Universidad de León, León, Spain
    {saloc,manuel.dominguez,ma.prada}@unileon.es
(2) Department of Information and Computer Science, Aalto University School of Science, Espoo, Finland
    {mika.sulkava,jaakko.hollmen}@tkk.fi
Abstract. Analyzing power consumption is important for economic and environmental reasons. Through the analysis of electrical variables, power could be saved and, therefore, better energy efficiency could be reached in buildings. The application of advanced data analysis helps to provide a better understanding, especially if it enables a joint and comparative analysis of different buildings which are influenced by common environmental conditions. In this paper, we present an approach to monitor and compare electrical consumption profiles of several buildings from the Campus of the University of León. The envSOM algorithm, a modification of the self-organizing map (SOM), is used to reduce the dimension of data and capture their electrical behaviors conditioned on the environment. After that, a Sammon's mapping is used to visualize global, component-wise or environmentally conditioned similarities among the buildings. Finally, a clustering step based on the k-means algorithm is performed to discover groups of buildings with similar electrical behavior.

Keywords: Power consumption, Environmental conditions, Data mining, Exploratory analysis, Self-Organizing Maps, envSOM, Sammon's mapping, k-means.
1 Introduction
Power consumption has become an issue due to its continuous growth in recent years, which also increases pollution. Since buildings account for 40% of the power consumption in the European Union, the reduction of their consumption is very important to reduce the electricity dependency and comply with the Kyoto Protocol. In that sense, public buildings should set an example [1]. Some authors argue that 20% of the electricity used by buildings could be saved just by repairing malfunctions and avoiding unnecessary operation, i.e., by achieving energy efficiency [2].
Miguel A. Prada carried out most of his work while a researcher at the Dept. of Information and Computer Science of the Aalto University School of Science.
Table 1. Description of the buildings

ID  Description               ID  Description               ID  Description
1   Data Center               11  Veterinary School         21  Bank
2   Radio                     12  Cafeteria I               22  Cafeteria II
3   Engineering School I      13  Biology School            23  Molecular Biology Center
4   Lecture Hall              14  Service Center            24  Agricultural Facilities
5   Engineering School II     15  Farm Development          25  Dormitory
6   School of Arts            16  Animal Facilities         26  Administration Offices
7   Law School                17  School of Sports          27  Environmental Center
8   School of Education       18  Sports Center I           28  Mining Eng. School
9   School of Labor Studies   19  Sports Center II          29  Language Center
10  Veterinary Hospital       20  Central Library           30  Business School
Optimization and fault detection are only possible if information about the relevant variables is available for analysis and monitoring. This feedback has proven to be effective as a self-teaching tool for saving electricity [3]. A detailed understanding of the energy use allows us to find where it is wasted and to forecast demand. In large organizations, such as universities, it might result in a significant reduction of the electricity bill, especially under the current open market where tariffs can be negotiated. Nevertheless, monitoring is usually carried out for individual buildings and offers simple plots of the measurements of each variable. The result is a high number of graphs which are difficult to analyze. The application of data mining techniques helps to provide a better understanding [4,5,6,7]. We claim that data analysis of the electricity consumption in a whole organization will be useful if it enables a joint and comparative analysis of the different buildings with respect to a group of common environmental conditions. In this paper, we present an approach to analyze the electrical consumption profiles of several buildings from the Campus of the University of León with regard to the environment. The Campus consists of 30 buildings distributed throughout the city of León, Spain, most of them located in the same area (see Figure 1). These buildings, listed in Table 1, are used for teaching, research or administrative services. The aim is to analyze and compare the power consumption of all buildings to find out relationships with the environment, which can be considered common since there are only insignificant variations among the buildings. The envSOM algorithm, a modification of the self-organizing map (SOM) [8] that we proposed recently for the analysis of data sets conditioned on a common environment [9], is used for that matter, along with an additional projection and clustering. The analysis may lead to achieving energy efficiency, detecting malfunctions and obtaining information to negotiate tariffs and reduce the electricity bill. A monitoring system has been developed to measure and store the electrical data in the whole Campus. This platform is based on a 3-layer structure [10]. In the physical layer, electrical meters are installed in each building. They are connected to each other by means of a communication network based on the Modbus open protocol, which gives flexibility and facilitates the integration [11].
Fig. 1. Location of the buildings
An acquisition service collects data from the meters and stores the information in a database at the middle layer. Clients can access the data through the Internet and download them for further off-line analysis. This paper is structured as follows: in Section 2, the proposed approach is presented; the experimental results are described in Section 3; finally, the conclusions are drawn in Section 4.
2 Analysis of Electrical Data
The proposed approach consists of several steps which are explained below. The aim is to provide a summarized and joint view of the buildings which enables their comparative analysis and visual pattern recognition. Data are preprocessed to remove erroneous samples. To avoid short time gaps due to network delays and answer timeouts from meters, all samples are approximated to the nearest ones according to the desired sampling period of 2 minutes. Other samples can be missing due to network disruption or problems with acquisition and storage. However, this drawback is not severe because the SOM has been proven to work with partial or incomplete training data [12]. The variables used in the analysis are selected according to prior knowledge and correlation analysis, because some electrical variables are strongly correlated. The selected process variables are the average of voltages between phases, active power, power factor, current unbalance, the average of the total harmonic distortions (THD) of voltages, the average of the total harmonic distortions of currents and the energy per area. The remaining variables are either less relevant or highly correlated with other selected variables, e.g., current with power and energy. Since an important aim of this work is to analyze the influence of the environment in the power consumption of each building, relevant environmental variables must also be included. The weather-related variables will very likely influence the electrical profile, so temperature, humidity and solar radiation have
been included in the analysis. Apart from these variables, some temporal features may strongly affect the behavior. For that reason, both the time of the day (decomposed as coordinates X and Y of the hour hand in a clock-like representation to avoid the discontinuity between maximum and minimum values [13]) and the type of day (working day or holiday) are used as environmental variables.
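As an illustration of this preprocessing and feature construction, the following sketch aligns raw samples to a 2-minute grid and derives the clock-like time encoding and the type of day. It is only a sketch under assumed column and index conventions; the authors' implementation (in Matlab, see Section 3) is not reproduced here, and holiday calendars are ignored.

```python
import numpy as np
import pandas as pd

def add_environmental_features(df):
    """df: DataFrame indexed by timestamp with raw meter/weather readings.
    Column names below are illustrative assumptions."""
    # Approximate each grid point by the nearest available sample (2-minute period);
    # remaining gaps stay NaN, which the (env)SOM training can tolerate.
    df = df.resample("2min").nearest(limit=1)

    # Clock-like encoding of the time of day: X and Y coordinates of the hour
    # hand, avoiding the discontinuity between 23:59 and 00:00.
    seconds = df.index.hour * 3600 + df.index.minute * 60 + df.index.second
    angle = 2 * np.pi * seconds / (24 * 3600)
    df["hour_x"] = np.sin(angle)
    df["hour_y"] = np.cos(angle)

    # Type of day: 1 for working days, 0 for weekends (a holiday calendar
    # would be needed for a faithful reproduction).
    df["working_day"] = (df.index.dayofweek < 5).astype(int)
    return df
```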
2.1 EnvSOM: Discovering Environmental Patterns in Electrical Data
The goal is to analyze how the electrical consumption behaves with regard to the common environment and to compare the buildings with each other. The selected approach provides visual information that helps experts to achieve a qualitative understanding of the behavior of the buildings and the relationships among them. The SOM is an excellent tool for pattern recognition and visualization in engineering [14], which provides useful visualization tools such as the component planes or the u-matrix [15]. However, the comparative analysis of several processes that share common variables is not possible unless they are stacked in a single data set, since different individual maps will organize differently. Even then, the SOM will not place the emphasis on organizing the map with regard to the environmental variables, making it difficult to interpret a process variable with respect to an environmental one. Furthermore, worse results should be expected as the number of buildings and variables grows. For that reason, a variant of the SOM called envSOM [9] is used in this work. This modification of the algorithm models the process variables conditioned on the environmental ones, i.e., it approximates the probability density function of a data set given the environmental conditions. The purpose of this approach is to achieve maps that are similarly organized for all buildings, given the common variables, in order to facilitate visual data mining and the comparison between buildings. The envSOM algorithm consists of two consecutive phases based on the traditional SOM [8]:

1. First phase: A traditional SOM is trained using the selected variables for all buildings, i.e., the environmental variables plus the electrical variables of every building. Only the environmental variables are used for computing the best matching unit, so the electrical variables must be masked out in the winner search. Thus, the winner c is selected using

   c(t) = \arg\min_i \| x(t) - m_i(t) \|_{\omega}, \quad i = 1, 2, \ldots, M,    (1)

   where \| x(t) - m_i(t) \|_{\omega}^2 = \sum_k \omega_k [x_k(t) - m_{ik}(t)]^2.

   The current input is represented by x, m_i denotes the codebook vectors, and \| \cdot \| denotes the Euclidean norm. M and t are, respectively, the number of units and the time. The binary mask \omega is used to indicate which
variables are used for computing the winner; the values \omega_k are 1 or 0, depending on whether the component corresponds to an environmental variable or not. The update rule is not modified. The result obtained from this training is a map where only the environmental components are organized. This way, we achieve an accurate model which represents the common environment in the best possible way. Moreover, the resulting values of the electrical features will be used for initialization in the second phase.

2. Second phase: A new SOM is trained for each building using all its variables, i.e., the environmental conditions and the electrical features. It should be noted that the environmental components are the same for each building. They are initialized with the codebooks that result from the first phase. In this stage, every component takes part in the winner computation, so no mask is applied. However, unlike in the first phase, the update process is modified so that only the electrical variables are updated, since the environmental variables are already well organized. For that reason, the following equation is used:

   m_i(t+1) = m_i(t) + \alpha(t) h_{ci}(t) \Omega [x(t) - m_i(t)]    (2)
The new mask, \Omega, is also a vector with binary values, where \Omega_k is 0 when it corresponds to an environmental variable and 1 otherwise. Therefore, after this phase, all variables will be organized properly. The advantage over the traditional SOM is that the resulting map is topologically ordered according to the environmental model.
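A minimal sketch of the two masked training phases is given below. It uses a plain online SOM in NumPy with heuristic learning-rate and neighborhood schedules, so it only illustrates the masking idea; the study itself relied on a Matlab implementation based on the SOM Toolbox (see Section 3), and all parameter choices here are assumptions.

```python
import numpy as np

def train_masked_som(X, codebook, grid, win_mask, upd_mask, n_epochs=100,
                     alpha0=0.5, sigma0=5.0, seed=0):
    """One envSOM phase: winners are found on components where win_mask == 1,
    codebooks are updated only on components where upd_mask == 1.
    X: (n_samples, dim) data, NaNs allowed; codebook: (n_units, dim);
    grid: (n_units, 2) coordinates of the units on the map lattice."""
    rng = np.random.default_rng(seed)
    n_steps = n_epochs * len(X)
    for t, idx in enumerate(rng.integers(0, len(X), n_steps)):
        x = X[idx]
        valid = ~np.isnan(x)                                   # tolerate missing samples
        d = np.nansum(win_mask * valid * (x - codebook) ** 2, axis=1)
        c = np.argmin(d)                                       # masked BMU (Eq. 1)
        frac = t / n_steps                                     # decreasing schedules
        alpha = alpha0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        h = np.exp(-np.sum((grid - grid[c]) ** 2, axis=1) / (2 * sigma ** 2))
        delta = np.where(valid, x - codebook, 0.0)
        codebook += alpha * h[:, None] * (upd_mask * delta)    # masked update (Eq. 2)
    return codebook

# Hypothetical envSOM usage with 6 environmental + 7 electrical components:
# env = np.r_[np.ones(6), np.zeros(7)]; elec = 1.0 - env
# phase 1: train_masked_som(X_all_buildings, cb, grid, win_mask=env, upd_mask=np.ones(13))
# phase 2 (per building): train_masked_som(X_b, cb.copy(), grid, win_mask=np.ones(13), upd_mask=elec)
```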
2.2 Comparative Analysis of the Buildings
The envSOM captures the electrical behavior of the buildings in a special way, because the electrical components are topologically ordered according to the common environmental conditions, as mentioned above. Therefore, a correspondence between electrical components can be assumed for all buildings. The visualizations of the component planes and u-matrices of each envSOM can be useful to detect patterns and correlations or to check the magnitude of the variables. Although the visual analysis could focus on these tools, this kind of visualization is inefficient when a high number of buildings or environmental conditions must be considered. For that reason, an alternative visualization is included. A set of similarity matrices can be defined from the codebook vectors of the envSOM in order to discover similarities or differences in the electrical behavior of the buildings and find their causes. Their projection to a two-dimensional space displays a cloud of points that represents similarity with respect to one or more variables. Similarities can also be conditioned on environmental variables. This visualization can be enriched with information from clustering, which makes it very flexible for discovering or confirming knowledge.

Computation of Similarity Matrices. Let N be the number of buildings. An N × N matrix with information about similarities between the electrical behavior of all buildings can be defined with respect to one or all variables and with or
without considering the environment. The matrices allow us to compare the buildings with each other. The metric used to measure the similarities between buildings is the L1 distance, because the Euclidean one would overemphasize large distances. There are three possibilities for computing a similarity matrix, depending on the desired comparison between buildings:

1. Global similarity matrix: this matrix summarizes the L1 distances over all electrical components of the codebook vectors. The similarity between buildings B(p) and B(q) is calculated using

   GS_{pq} = L_1(B(p), B(q)) = \sum_i \sum_k | m_{ik}^p - m_{ik}^q |,    (3)

   where m_{ik} corresponds to the weight of neuron i and k is the electrical component.

2. Component-wise similarity matrix: this matrix contains the L1 distances with respect to a single electrical variable of the codebook vectors. Note that several similarity matrices, one for each electrical variable (e.g., the average of voltages, active power, power factor, etc.), can be computed using

   VS_{pq}^k = L_1(B^k(p), B^k(q)) = \sum_i | m_{ik}^p - m_{ik}^q |,    (4)

   where k is the given variable.

3. Conditioned similarity matrix: this matrix contains the L1 distances for one (or all) of the electrical components, conditioned on the environmental variables, so that different matrices can be obtained for different conditions and thresholds within each condition. For instance, it could be interesting to analyze power consumption when the temperature is high or low, during different time periods (morning, afternoon and night), etc. The matrix is computed using

   CS_{pq}^k = L_1(B^k(p), B^k(q)) \mid v = \sum_i | m_{ik}^p - m_{ik}^q |, \quad \nu_1 < v < \nu_2,    (5)

   where v is the environmental variable and \nu_1 and \nu_2 are the lower and upper threshold values for the desired range.

Projection to a Low-Dimensional Space. The abovementioned matrices contain useful information about the differences among the buildings. Nevertheless, they are quite difficult to interpret, so a projection to a visualization space is very useful. Sammon's mapping was applied to each similarity matrix [16], although other nonlinear projection methods could also be useful for this task [17]. Sammon's mapping is a nonlinear dimensionality reduction method that preserves the mutual distances among data points, as multidimensional scaling does, but emphasizing the short distances. This way, each building can be represented by one point in a 2D graph. The proximity of the points in the visualization space indicates similarity in the original space of similarity matrices and, consequently, among the envSOM models of the buildings with regard to the specified variables or environmental conditions.
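The following sketch illustrates how such similarity matrices and their two-dimensional projection could be computed from the trained codebooks. As an assumption, scikit-learn's metric MDS is used here as a stand-in for Sammon's mapping, and the selection of electrical components and map units is left to the caller.

```python
import numpy as np
from sklearn.manifold import MDS

def similarity_matrix(codebooks, elec_idx, unit_mask=None):
    """codebooks: (N_buildings, n_units, dim) envSOM codebooks sharing the same
    environmental organization. elec_idx: indices of the electrical component(s)
    to compare (all of them -> global matrix, a single one -> component-wise).
    unit_mask: optional boolean (n_units,) selecting map units whose environmental
    components fall in a desired range -> conditioned matrix."""
    n, n_units, _ = codebooks.shape
    if unit_mask is None:
        unit_mask = np.ones(n_units, dtype=bool)
    S = np.zeros((n, n))
    for p in range(n):
        for q in range(n):
            diff = codebooks[p][unit_mask][:, elec_idx] - codebooks[q][unit_mask][:, elec_idx]
            S[p, q] = np.abs(diff).sum()          # L1 distance (Eqs. 3-5)
    return S

def project_buildings(S, seed=0):
    # 2D embedding of the buildings from their mutual distances
    # (metric MDS as an approximation of Sammon's mapping).
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(S)
```

A conditioned matrix would be obtained by passing a unit_mask derived from the shared environmental components, e.g., the units whose temperature component lies between the two chosen thresholds.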
Clustering of Buildings. As a complementary method, a clustering process can be useful to separate the buildings of the Campus in groups according to the power consumption. The k-means technique has been chosen for this purpose [18]. It is applied directly to the vectorized codebook matrix of the envSOM corresponding to each building. Note that a mere clustering of the time series provides information about daily profiles but does not take into account the weather-related variables, unlike the proposed approach. As for the similarity matrices, a global, a component-wise or a conditioned clustering can be performed by using solely the corresponding vectors. The clustering process is repeated for different numbers of clusters (between 2 and 10). The minimization of two criteria is used to select the optimal number of clusters: the Davies and Bouldin index and the within-cluster sums of point-to-centroid distances [19].
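A sketch of this model-selection loop is shown below, using scikit-learn's k-means and Davies-Bouldin score on the vectorized codebooks; the library choice and parameter values are assumptions, not the authors' tooling.

```python
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_buildings(vectors, k_range=range(2, 11), seed=0):
    """vectors: (N_buildings, n_units * n_selected_components) flattened codebooks."""
    results = {}
    for k in k_range:
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(vectors)
        results[k] = {
            "labels": km.labels_,
            "inertia": km.inertia_,  # within-cluster sum of squared point-to-centroid distances
            "db_index": davies_bouldin_score(vectors, km.labels_),
        }
    # pick the number of clusters that minimizes the Davies-Bouldin index
    best_k = min(results, key=lambda k: results[k]["db_index"])
    return best_k, results
```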
3 Experimental Results
An experiment was performed to validate the proposed approach for the comparative analysis of the electrical behavior of university buildings. For that purpose, electrical data from 30 buildings of the University of León were acquired during one year (from March 2010 to February 2011) with a sampling period of 2 minutes. A preprocessing step substituted the erroneous samples by NaNs, whereas external weather-related data, provided by AEMET (the Spanish meteorological agency), were linearly interpolated at the same sampling period and added to the electrical data set. The variables considered for the analysis are the ones described in Section 2, i.e., 7 electricity-related features and 6 common environmental features. Therefore, the whole data set contains over 56 million values. The whole data set was normalized to the 0-1 range in all buildings to guarantee that all variables contribute equally to the envSOM training. The first phase of the envSOM algorithm is applied jointly to all buildings and therefore uses a 216-dimensional SOM with 6 environmental and 210 electrical variables. The winner selection mask, ω, is also 216-dimensional and masks out the electrical variables: [1 1 1 1 1 1 0 0 . . . 0]. In the second phase, an individual 13-dimensional SOM (with 6 environmental and 7 electrical variables) is trained for each building. The update mask, Ω, is [0 0 0 0 0 0 1 1 1 1 1 1 1] for each SOM. Both phases use the same free parameters, selected heuristically. The learning rate, α(t), decreases in time, the neighborhood function, h_{ci}(t), is implemented as a Gaussian, and the initialization is linear along the greatest eigenvectors in both phases. The number of training epochs or iterations is fixed to 100, a value high enough to achieve a good organization. The dimensions of the SOMs are 70 × 50, i.e., they have 3500 units. As a result, 30 envSOM maps have been obtained, one per building. The envSOM algorithm has been implemented in Matlab as a modification of the SOM Toolbox [20]. Component planes are generated from the envSOMs. The ones corresponding to the environmental variables are shown in Figure 2a. These components define the environment which conditions the electrical behavior of the buildings. Note that they are common for all buildings. It can be seen that the annual range of the weather variables is wide, as expected in a town with a continental climate.
[Figure 2 appears here. Panel (a): component planes of the common environmental variables: Hour (X coordinate), Hour (Y coordinate), Temperature (ºC), Humidity (%), Solar radiation (W/m2), Type of day (holiday/working day). Panel (b): electrical variables of Building B1 (Data Center): Voltage (V), Power (kW), Power factor, Current unbalance (%), THD voltage (%), THD current (%), Energy per area (Wh/m2). Panel (c): the same electrical variables for Building B2 (Radio).]
Fig. 2. Component planes corresponding to the common environmental variables and the electrical variables of two different buildings. Dark colors indicate low values and light colors correspond to high values.
of the weather variables is wide, as expected in a town with a continental climate. As expected, there is also a correlation between temperature, humidity and solar radiation. The coordinates X and Y take values between -1 and 1, which correspond to the sine and cosine of the angle formed by the hour hand of the clock, used to represent the time as a continuous variable. The type of day divides the map into two different zones: working days (1) and holidays (0). Regarding electrical variables, the component planes of the 30 buildings cannot be displayed due to space limitations. However, as an example, the planes of the electrical components of buildings B1 and B2 are shown in Figure 2b and Figure 2c, respectively. These buildings show a very different electrical behavior. For instance, the power variable presents large differences not only in its range but also in its distribution. B1 shows a strong correlation between power consumption, temperature and solar radiation, which can be explained by the server cooling and air conditioning systems in that building. B2 shows worse behavior than B1 regarding power factor and current unbalance: lower power factor values (the power company imposes penalties when it drops below 0.95) and higher current unbalance. However, in both cases, the maximum values of current unbalance take place at night or on holidays, when power demand is low. Differences in voltages and their THDs indicate that the electric supply of the buildings comes from different points. With respect to the average THD of currents, values in B1 are very distinct from values in B2. The quality of currents is admissible in B1. On the contrary,
poor current quality appears very often in B2. The energy per area ratio in B1 is greater than this ratio in B2, due to the high consumption of the server rooms. The two-dimensional displays used to compare the buildings are computed from the envSOMs. First, 30 × 30 similarity matrices are calculated using the L1 distance as explained in Section 2.2. The resulting matrices are projected to the 2-dimensional space by means of Sammon's mapping. The k-means algorithm is applied to the codebook vectors of the envSOMs to cluster the buildings and the result is used to label the 2D visualizations. The number of clusters is selected between 2 and 10 according to the value of the Davies-Bouldin index and the error calculated as within-cluster sums of point-to-centroid distances, using only one round. Three kinds of comparisons are possible, depending on the similarity matrix used (global, component-wise or conditioned). We present and discuss one result for each case below. Figure 3a shows the global comparison among the 30 buildings of the University of León. It can be seen that the results of the projection and clustering agree to a large extent. In this case, the optimal number of clusters, i.e., the number of different electrical profiles in the Campus buildings, is 6. Clusters 2 (squares) and 3 (diamonds) include most of the buildings. This result suggests that most of the buildings behave similarly. That confirms the expected behavior, since most of those buildings are used for teaching and research. Furthermore, the buildings used to exemplify the usefulness of the component planes, B1 and B2, are projected very far from each other and belong to different clusters (3 or diamonds and 5 or stars, respectively). This again indicates that these buildings have very different electrical behavior, as explained above. Cluster 4 (right-pointed triangles) only contains one building (B19), which has a very poor current quality. Cluster 6 (left-pointed triangles) groups two buildings, B10 and B16, with an anomalous power factor. This helped to detect a measurement fault in the corresponding meters. The component-wise comparisons can also be very useful to understand the behavior with respect to the relevant electrical variables. The main variable which determines the consumption, and therefore the billing, is active power. For that reason, the results for this component are shown in Figure 3b. The number of clusters according to power is 5. Most of the buildings have similar power profiles and are grouped into clusters 1 (circles), 2 (squares) and 4 (right-pointed triangles). B1 and B11 are depicted far from the remaining buildings because of their high consumption: the Veterinary School (B11) due to its large area and its many laboratories with electrical machinery and cooling systems, and the Data Center (B1) due to its large number of servers, air conditioning and cooling systems. Referring to the first example, B2 is placed far away from B1 and belongs to a different cluster. This also has a simple explanation: B2 demands very little power. Finally, comparisons conditioned on the environmental variables can be carried out to better understand the influence of the environment on the distribution of the electrical variables. As an example of that, Figure 3c presents the results obtained from the comparison of the buildings with regard to the power variable
[Figure 3 appears here. Panel (a): "Global comparison between buildings", a 2D projection labelled with building identifiers (B1-B30) and a legend of Clusters 1-6. Panel (b): "Component-wise comparison between buildings for active power", with a legend of Clusters 1-5. Panel (c): "Conditioned comparison between buildings: Power (kW) | high temperature", with a legend of Clusters 1-4. Axes in all panels: X Dimension and Y Dimension.]
(a) Global comparison and clustering of buildings. (b) Comparison and clustering of buildings with regard to active power. (c) Comparison and clustering of buildings with regard to active power conditioned on high temperature (21-31 ºC).
Fig. 3. Visual displays of building similarities and clusters
and conditioned on a certain range of high temperatures. The number of clusters is 4. It can be seen that most buildings are projected closer to each other and grouped into clusters 1 (circles) and 3 (diamonds). A likely explanation is that high temperatures usually occur in the summer season, when activity in the buildings is low. The remaining buildings, scattered across the graph, have air conditioning systems that remain somewhat active for research or services in summer. Note that B1 and B2 have different power demands when the temperature is high.
4 Conclusions
The approach presented in this paper has proven useful for analyzing and comparing the electrical behavior of the buildings of the University of León. The envSOM algorithm is used for dimensionality reduction because it models electrical data conditioned on the environment and enables the comparison among buildings with a common environment. The resulting data are projected again by means of Sammon's mapping and clustered using k-means to obtain either a global, summarized view or a detailed view of consumption in the whole Campus. Component planes, comparison and cluster maps can be used as visualization tools. The results help to understand qualitatively the power consumption of the Campus buildings and can serve as a basis for more specific analyses. They reveal that the number of different electrical behaviors in the Campus buildings is 6. Considering the power demand, it can be concluded that there are 5 different profiles. An identical tariff should be applied to buildings within the same cluster. If possible, these buildings should also be grouped into a single billing point to take advantage of their similar behaviors. The proposed data analysis allowed us to discover and confirm relationships between electrical variables and environmental ones. For example, power consumption increases when the temperature is high in buildings with large air-conditioning and cooling systems. On the other hand, this data analysis also helped to detect faults in the meters. Two of them presented an erroneous connection that caused errors in power factor measurements.

Acknowledgments. This work was supported in part by the Spanish Ministerio de Ciencia e Innovación (MICINN) and with European FEDER funds under grants DPI2009-13398-C02-01 and DPI2009-13398-C02-02. We thank the Agencia Estatal de Meteorología (AEMET) of the Government of Spain for providing the weather data for this study.
References
1. European Parliament: Directive 2010/31/EU of the European Parliament and of the Council of 19 May 2010 on the energy performance of buildings (recast). Official Journal of the European Union 53(L153) (2010)
2. Gershenfeld, N., Samouhos, S., Nordman, B.: Intelligent infrastructure for energy efficiency. Science 327(5969), 1086–1088 (2010)
3. Darby, S.: The effectiveness of feedback on energy consumption. Technical report, Environmental Change Institute, University of Oxford (2006)
4. Sforna, M.: Data mining in a power company customer database. Electric Power Systems Research 55(3), 201–209 (2000)
5. Chicco, G., Napoli, R., Piglione, F., Postolache, P., Scutariu, M., Toader, C.: Load pattern-based classification of electricity customers. IEEE Transactions on Power Systems 19(2), 1232–1239 (2004)
6. Figueiredo, V., Rodrigues, F., Vale, Z., Gouveia, J.: An electric energy consumer characterization framework based on data mining techniques. IEEE Transactions on Power Systems 20(2), 596–602 (2005)
7. Verdú, S., García, M., Senabre, C., Marín, A., Franco, F.: Classification, filtering, and identification of electrical customer load patterns through the use of self-organizing maps. IEEE Transactions on Power Systems 21(4), 1672–1682 (2006)
8. Kohonen, T.: Self-Organizing Maps. Springer, New York (1995)
9. Alonso, S., Sulkava, M., Prada, M., Domínguez, M., Hollmén, J.: EnvSOM: A SOM algorithm conditioned on the environment for clustering and visualization. In: Laaksonen, J., Honkela, T. (eds.) WSOM 2011. LNCS, vol. 6731, pp. 61–70. Springer, Heidelberg (2011)
10. Eckerson, W.W.: Three tier client/server architectures: Achieving scalability, performance, and efficiency in client/server applications. Open Information Systems 3(20), 46–50 (1995)
11. Kastner, W., Neugschwandtner, G., Soucek, S., Newmann, H.: Communication systems for building automation and control. Proceedings of the IEEE 93(6), 1178–1203 (2005)
12. Samad, T., Harp, S.A.: Self-organization with partial data. Network: Computation in Neural Systems 3, 205–212 (1992)
13. Fan, S., Chen, L., Lee, W.J.: Short-term load forecasting using comprehensive combination based on multimeteorological information. IEEE Transactions on Industry Applications 45(4), 1460–1466 (2009)
14. Kohonen, T., Oja, E., Simula, O., Visa, A., Kangas, J.: Engineering applications of the self-organizing map. Proceedings of the IEEE 84(10), 1358–1384 (1996)
15. Vesanto, J.: SOM-based data visualization methods. Intelligent Data Analysis 3(2), 111–126 (1999)
16. Sammon Jr., J.W.: A non-linear mapping for data structure analysis. IEEE Transactions on Computers 18, 401–409 (1969)
17. Lee, J.A., Verleysen, M.: Nonlinear Dimensionality Reduction. Information Science and Statistics. Springer, Heidelberg (2007)
18. Xu, R., Wunsch, D.: Survey of clustering algorithms. IEEE Transactions on Neural Networks 16(3), 645–678 (2005)
19. Halkidi, M., Batistakis, Y., Vazirgiannis, M.: On clustering validation techniques. Intelligent Information Systems 17(2), 107–145 (2001)
20. Vesanto, J., Himberg, J., Alhoniemi, E., Parhankangas, J.: SOM toolbox for Matlab 5. Technical Report A57, Helsinki University of Technology (2000)
Context-Aware Collaborative Data Stream Mining in Ubiquitous Devices

João Bártolo Gomes1, Mohamed Medhat Gaber2, Pedro A.C. Sousa3, and Ernestina Menasalvas1

1 Facultad de Informatica - Universidad Politecnica Madrid, Spain
[email protected] [email protected]
2 School of Computing, University of Portsmouth, England
[email protected]
3 Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, Portugal
[email protected]

Abstract. Recent advances in ubiquitous devices open an opportunity to apply new data stream mining techniques to support intelligent decision making in the next generation of ubiquitous applications. This paper motivates and describes a novel context-aware collaborative data stream mining system, CC-Stream, that allows intelligent mining and classification of time-changing data streams on-board ubiquitous devices. CC-Stream explores the knowledge available in other ubiquitous devices to improve local classification accuracy. Such knowledge is associated with context information that captures the system state for a particular underlying concept. CC-Stream uses an ensemble method where the classifiers are selected and weighted based on their local accuracy for different partitions of the instance space and their context similarity in relation to the current context.

Keywords: Collaborative Data Stream Mining, Context-awareness, Concept Drift, Ubiquitous Knowledge Discovery.
1 Introduction and Motivation
Continuing technological advances and the popularity of ubiquitous devices, such as smart phones, PDAs (Personal Digital Assistants) and wireless sensor networks, open an opportunity to perform intelligent data analysis in ubiquitous computing environments [12,7,10]. This work is focused on collaborative data stream mining on-board ubiquitous devices in complex ubiquitous environments. The goal is to learn an anytime, anywhere classification model that represents the underlying concept from a stream of labelled records [6,7]. Such a model is used to predict the label of the incoming unlabelled records. However, in real-world ubiquitous applications, it
The research is partially financed by project TIN2008-05924 of the Spanish Ministry of Education.
is common for the underlying concept of interest to change over time [20]. The problem of learning from time-changing data streams is generally known in the literature as concept drift [25,11,19,20,8]. An effective data stream mining system must recognise and adapt to changes by continuously learning the different time-changing concepts [20,8]. Therefore, learning systems should be able to adapt to concept changes without explicitly being informed about such changes. For example, they may use available contextual features [24,9] or the performance of the base learner [8] to adapt to concept drift.

Context awareness is an important part of ubiquitous computing [17,5,1]. In most ubiquitous applications concepts are associated with context, which means that they may reappear when a similar context occurs. For example, a weather prediction model usually changes according to the seasons. The same applies to product recommendation or text filtering models, where user interest and behaviour might change over time due to fashion, the economy, the spatio-temporal situation or any other context [25,9,20]. This has motivated some works [24,9] to analyse how to use context to track concept changes.

In this work, we propose to use the knowledge available in other devices and context information to collaboratively improve the accuracy of local predictions. The data mining problem is assumed to be the same in all the devices; however, the feature space and the data distributions are not static, as is assumed in traditional data mining approaches [13,20]. We are interested in understanding how context and the knowledge available in other devices can be integrated to improve local predictive accuracy in a ubiquitous data stream mining scenario [7].

As an illustrative example, collaborative spam filtering [2] is one of the possible applications for the proposed collaborative learning approach. Each ubiquitous device learns and maintains a local filter that is incrementally learnt from a local data stream based on features extracted from the incoming mails. In addition, usage patterns and user feedback are used to supervise the filter that represents the target concept (i.e., the distinction between spam and ham). The ubiquitous devices collaborate by sharing their knowledge with others, which can improve local predictive accuracy. Furthermore, the dissemination of knowledge is faster, as devices new to the mining task, or that have access to fewer labelled records, can anticipate spam patterns that were observed in the community, but not yet locally. Moreover, the privacy and computational issues that would result from sharing the original mail are minimised, as only the filters are shared. Consequently, this increases the efficiency of the collaborative learning process.

However, many challenges arise from this scenario; the major ones are:
– how the knowledge from the community can be exploited to improve local predictiveness, while resolving possible conflicts, as ultimately the local knowledge should have priority over the community (i.e., user interests can be unique);
– how to integrate context information in the collaborative learning process; and
– how to adapt to changes in the underlying concept and provide anytime, anywhere data stream classification in ubiquitous devices.
To address these challenges, in this paper, we propose an incremental ensemble approach (CC-Stream) where the models from the community are selected and weighted based on their local accuracy for different partitions of the instance space and their context similarity to the current context. The rest of the paper is organised as follows. Section 2 reviews the related work. Section 3 provides the preliminaries of the approach and the problem definition, which is followed by the description of the CC-Stream system in Section 4. Finally, in Section 5, our conclusions and future work are presented.
2 Related Work

2.1 Collaborative Data Stream Mining
In collaborative and distributed data mining, the data is partitioned and the goal is to apply data mining to different, usually very small and overlapping, subsets of the entire data [3,18]. In this work, our goal is not to learn a global concept, but to learn the concepts of other devices, while maintaining a local or subjective point of view. Wurst and Morik [26] explore this idea by investigating how communication among peers can enhance the individual local models without aiming at a common global model. The motivation is similar to what is proposed in domain adaptation [4] or transfer learning [16]. However, these assume a batch scenario. When the mining task is performed in a ubiquitous environment [7,10], an incremental learning approach is required.

In ubiquitous data stream mining, the feature space of the records that occur in the data stream may change over time [13] or be different among devices [26]. For example, in a stream of documents where each word is a feature, it is impossible to know in advance which words will appear over time, and thus what is the best feature space to represent the documents with. Using a very large vocabulary of words is inefficient, as most of the words will likely be redundant and only a small subset of words is finally useful for classification. Over time, it is also likely that new important features appear and that previously selected features become less important, which changes the subset of relevant features. Such change in the feature space is related to the problem of concept drift, as the target concept may change due to changes in the predictiveness of the available features.
2.2 Context-Aware Data Mining
Context representation in information systems is a problem studied by many researchers as they attempt to formally define the notion of context. Schmidt et al. [17] defined context as the knowledge about the users and device state. Moreover, Dey [5] defines context as ‘Context is any information that can be used to characterise the situation of an entity’. In contrast, Brezillon and Pomerol [1] argue that there is no particular knowledge that can be objectively called context, as context is in the eye of the beholder. They state that ‘knowledge that can be qualified as ‘contextual’ depends on the context!’. In addition, Padovitz
et al. [15] proposed a general approach that models context using geometrical spaces called Context Spaces, which allows the inference of situations in context-aware environments. The context spaces representation is used for the context representation in the system proposed in this paper. Context dependence has been recognised as a problem in several real-world domains [22,24,9]. Turney [22] was among the first to introduce the problem of context in machine learning. Widmer [24] exploits what are referred to as contextual clues (based on the Turney [22] definition of primary/contextual features) and proposes a meta-learning method to identify such clues. Contextual clues are context-defining attributes or combinations of attributes whose values are characteristic of the underlying concept. When more or less systematic changes in their values are observed, this might indicate a change in the target concept. The method automatically detects contextual clues on-line, and when a potential context change is signalled, the knowledge about the recognised context clues is used to adapt the learning process in some appropriate way. However, if the hidden context is not represented in the contextual clues it is not possible to detect and adapt to change.
2.3 Ensemble Approaches for Data Stream Mining
Ensemble approaches have been applied successfully to improve classification accuracy in data mining problems and particularly in data streams, where the underlying concept changes [19,23,14]. CC-Stream is a context-aware ensemble approach to exploit the community knowledge in a ubiquitous data stream mining scenario [7]. In terms of the learning algorithm, the proposed system is most related to what has been proposed in [27] and [21], as both approaches consider concept drift, select the best classifier for each instance based on its position in the instance space, and are able to learn from data streams. However, their base classifiers are learnt sequentially from chunks of a stream of training records, while we explore how the knowledge available in the ubiquitous environment can be used and adapted to represent the current underlying concept for a given device. Moreover, these approaches do not consider the integration of context.
3 Preliminaries
Predictive and Context Feature Space Definitions. Let X be the space of all attributes and their possible values and Y the set of possible (discrete) class labels. Following the general idea of Turney [22] and the definitions of Widmer [24]:

Definition 1 (Predictive features). A feature (attribute-value combination) a_i : v_ij is predictive if p(c_k | a_i = v_ij) is significantly different from p(c_k) for some class c_k ∈ Y.
Definition 2 (Predictive attributes). An attribute a_i is predictive if one of its values v_ij (i.e., some feature a_i : v_ij) is predictive.

Definition 3 (Contextual features). A feature a_i : v_ij is contextual if it is predictive of predictive features, i.e., if p(a_k : v_kl | a_i = v_ij) is significantly different from p(a_k : v_kl) for some feature a_k : v_kl that is predictive.

Definition 4 (Contextual attributes). An attribute a_i is contextual if one of its values v_ij is contextual.

Such notions are based on a probability distribution for the observed classes given the features. However, when the probability distribution is unknown it is often possible to use background knowledge to distinguish between predictive and contextual features [22].
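As a rough illustration of Definition 1, the following sketch flags attribute-value pairs whose empirical conditional class distribution deviates from the class prior by more than a fixed margin; the margin threshold and the pandas-based estimation are our own simplifying assumptions, since the significance test is left unspecified.

```python
import pandas as pd

def predictive_features(df: pd.DataFrame, target: str, min_diff: float = 0.1):
    """Return (attribute, value) pairs whose conditional class distribution
    p(c | a = v) differs from the prior p(c) by more than min_diff."""
    prior = df[target].value_counts(normalize=True)
    flagged = []
    for attr in df.columns.drop(target):
        for value, group in df.groupby(attr):
            cond = (group[target].value_counts(normalize=True)
                    .reindex(prior.index, fill_value=0.0))
            if (cond - prior).abs().max() > min_diff:
                flagged.append((attr, value))
    return flagged
```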
3.1 Problem Definition
This work is focused on collaborative data stream mining between ubiquitous devices. Each ubiquitous device d aims to learn the underlying concept from a local stream DS_d of labelled records where the set of class labels Y is fixed. However, the feature space X is not static. Let X_d ⊆ X be the feature space for DS_d and its ith record X_i = (x_i, y_i) with x_i ∈ X_d and y_i ∈ Y. We assume that the underlying concept is a function f_d that assigns each record x_i to the true class label y_i. This function f_d can be approximated using a data stream mining algorithm to train a model m_d at device d from the DS_d labelled records. The model m_d returns the class label of an unlabelled record x, such that m_d(x) = y ∈ Y. The aim is to minimise the error of m_d (i.e., the number of predictions different from f_d). However, the underlying concept of interest f_d may change over time and the number of labelled records available for that concept is sometimes limited. To address such situations, we propose to exploit models from the community and use the available labelled records from DS_d to obtain a model m_d. We expect m_d to be more accurate than a model built using the local labelled records alone. The incremental learning of m_d should adapt to changes in the underlying concept. Moreover, we assume context information is available in the ubiquitous devices.
3.2 Context Integration
In situations where contextual information is related to the underlying concepts, such knowledge could be exploited to improve the adaptation to the underlying concept. Nevertheless, these relations are not known a priori, and it is even possible that, given the available context information, it is not possible to find such relations. Still, in many real-world problems we find examples where the available context information does not explain all global concept changes, but partially explains some of them. For example, user preferences often change with time or location. Imagine a user who has different interests on weekends and weekdays, or when at home and at work. In general, different concepts can recur due
to periodic context (e.g., seasons, locations, weekdays) or non-periodic context (e.g., rare events, fashion, economic trends). When exploiting context relations to improve local predictive accuracy, one could argue that context information should simply be added as additional attributes in the base learner. However, that would increase the dimensional complexity of the problem and introduce noise into the learning process, since concepts may change due to factors that may not be expressed as context attributes. Therefore, we believe that such context information should be integrated carefully at a meta-learning level (as discussed in [22,24,9]).
3.3 Context Representation
The context representation and similarity measure we propose to integrate in our approach are inspired by the Context Spaces model [15], where a context state is represented as an object in a multidimensional Euclidean space. A context state c_i in context space C is defined as a tuple of N context attribute-values, c_i = (a_i1, ..., a_iN), where a_in represents the value of context attribute a_n for the ith context state c_i ∈ C. The available context information depends on the learning environment and data mining problem. Context information can represent simple sensors (e.g., time, temperature, humidity) or a more complex context (e.g., season, location, gait) defined by domain experts or inferred by other means beyond the scope of the problem discussed in this work. We assume that at any time a context provider can be asked for the current context state.

Context Similarity. Context similarity is not a trivial problem [15]: while it is relatively straightforward to measure the (dis)similarity between two values of a continuous attribute, the same is not as easy for categorical attributes, and even less so when integrating the heterogeneous attribute similarities into a (dis)similarity measure between context states. For the purposes of this work the degree of similarity between context states c_i and c_j, based on the Euclidean distance, is defined as

    |c_i - c_j| = Σ_{k=1}^{N} dist(a_ik, a_jk)

where a_ik represents the kth attribute-value in context state c_i. For numerical attributes the distance is defined as

    dist(a_ik, a_jk) = (a_ik - a_jk)^2 / s^2

where s is the estimated standard deviation for a_k. For nominal attributes the distance is defined as

    dist(a_ik, a_jk) = 0 if a_ik = a_jk, and 1 otherwise.
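A minimal sketch of this distance, assuming context states are given as flat tuples together with per-attribute metadata (the function signature and variable names are illustrative, not taken from the paper):

```python
def context_distance(c_i, c_j, stds, nominal):
    """Distance between two context states.
    stds[k]: estimated standard deviation of numeric attribute k;
    nominal[k]: True if attribute k is nominal."""
    total = 0.0
    for k, (a, b) in enumerate(zip(c_i, c_j)):
        if nominal[k]:
            total += 0.0 if a == b else 1.0          # nominal: 0/1 mismatch
        else:
            total += (a - b) ** 2 / stds[k] ** 2     # numeric: normalized squared difference
    return total
```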
4 CC-Stream
In this work, a context-aware collaborative learning system, CC-Stream, for ubiquitous data stream mining is proposed. CC-Stream uses an ensemble to combine the knowledge of different models in order to represent the current underlying concept. There is a large number of ensemble methods to combine models, which can be roughly divided into: i) voting methods, where the class that gets most votes is chosen [19,23,14]; and ii) selection methods, where the "best" model for a particular instance is used to predict the class label [27,21].

Algorithm 1. CC-Stream Training
Require: Data stream DS of labelled records, Context Provider context, window w of records.
1: repeat
2:   c_i = context.getCurrentContext();
3:   Add next record DS_i from DS to w;
4:   if w→numRecords > wMaxSize then
5:     forget(w→oldestRecord);
6:   end if
7:   r = getRegion(DS_i);
8:   for all Model → m_j do
9:     prediction := m_j.classify(DS_i);
10:    if prediction = DS_i→class then
11:      updateRegionCorrect(r, m_j);
12:      addContext(c_i, m_j);
13:    end if
14:    updateRegionTotal(r, m_j);
15:  end for
16: until END OF STREAM
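A minimal Python sketch of the per-region bookkeeping behind Algorithm 1 might look as follows; the class and method names are illustrative, and the single shared sliding window is a simplifying assumption.

```python
from collections import defaultdict, deque

class RegionStats:
    """Sliding-window accuracy estimates per (region, model)."""
    def __init__(self, window_size=1000):
        self.window = deque(maxlen=window_size)      # (region, model_id, correct)
        self.correct = defaultdict(int)
        self.total = defaultdict(int)

    def update(self, region, model_id, correct):
        if len(self.window) == self.window.maxlen:   # forget the oldest record
            r, m, c = self.window[0]
            self.total[(r, m)] -= 1
            self.correct[(r, m)] -= int(c)
        self.window.append((region, model_id, correct))
        self.total[(region, model_id)] += 1
        self.correct[(region, model_id)] += int(correct)

    def accuracy(self, region, model_id):
        t = self.total[(region, model_id)]
        return self.correct[(region, model_id)] / t if t else 0.0
```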
The CC-Stream system uses a selection method that partitions the instance space X into a set of regions R. For each region, an estimate of the models' accuracy is maintained over a sliding window. This estimated value is updated incrementally as new labelled records are observed in the data stream or new models are available. This process (Algorithm 1) can be considered a meta-learning task where we try to learn, for each model from the community, how well it represents the local underlying concept for a particular region r_i ∈ R. Furthermore, for each context state obtained from the context provider the models that are accurate for that context are associated with it. Figure 1 shows the regions that result from the partition of the instance space. Moreover, it shows an illustrative context space with some possible context states that are associated with particular models (i.e., different colours). For the classification of a new unlabelled record x_i, CC-Stream uses the best model prediction. The best model is considered to be the one that is more accurate for the partition r_i that contains the new record and that is associated
Algorithm 2. CC-Stream Classification
Require: Data stream DS of unlabelled records, Context Provider context.
1: repeat
2:   c_i = context.getCurrentContext();
3:   Get DS_i from DS;
4:   r := getRegion(DS_i);
5:   for all Model → m_j do
6:     model := argmax_j(getAccuracy(m_j, r), getContext(m_j, c_i));
7:   end for
8:   return model.classify(DS_i);
9: until END OF STREAM
with a context similar to the current one, as detailed in Algorithm 2. The accuracy for a region r_i is the average accuracy for each value of its attributes. The next sections explain how the regions are created using the attribute values, how the base learner deals with the dynamic feature space in each device, and how context is integrated.
4.1 Creating Regions
An important part of CC-Stream is to learn, for each region of the instance space X, which model m_j performs best. This way, m_j's predictions can be used with confidence to classify incoming unlabelled records that belong to that particular region. The instance space can be partitioned in several ways; here we follow the method used by Zhu et al. [27], where the partitions are created using the different values of each attribute. For example, if an attribute has two values, two
Fig. 1. Partition of the instance space into regions, and the context states associated with classifiers
estimators of how the classifiers perform for each value are kept. If the attribute is numeric, it is discretised and the regions use the values that result from the discretisation process. This method has shown good results and it represents a natural way of partitioning the instance space. However, there is an increased memory cost associated with a larger number of regions. To minimise this cost the regions can be grouped into coarser ones, aggregating attribute values into a larger partition. This is illustrated in Figure 1, where the values V4 and V5 of attribute A1 are grouped into regions r41 to r45.
4.2 Base Learner
In CC-Stream the knowledge base of each device uses local knowledge and knowledge received from the community. The local model is incrementally built using the incoming records of the data stream. This model is treated similarly to the models received from the community. Its global performance is monitored and is expected to increase with the number of training records [6]. Once its performance is stable, the model can be shared with other devices. The base learner algorithm is used to learn a model that represents the underlying concept of the data stream. Any classification algorithm able to learn incrementally and to handle a dynamic feature space (i.e., each feature is treated independently) can be used for this task. The choice of base learner can be made according to the nature of the data to be mined, choosing the algorithm that best suits it (e.g., high accuracy, noise handling, memory consumption, processing time). Some popular algorithms that can be used as base learners are the Naive Bayes and Nearest Neighbour algorithms [24,13].
4.3 Integration of Context
To exploit the context information available in ubiquitous devices, we propose to learn which models are able to represent the current underlying concept given a certain context state. When a new labelled record is processed in the data stream, the context provider is asked for the current context state. For each context state we keep estimators of model performance (described in Algorithm 1). Such information is used when the system is asked to classify a new unlabelled record. This procedure is described in Algorithm 2. In line 6 we are interested in maximising the following function:

    argmax_j(getAccuracy(m_j, r), getContext(m_j, c_i)) = w_r × getAccuracy(m_j, r) + w_c × getContext(m_j, c_i)

where getAccuracy(m_j, r) is the accuracy of model m_j for region r and getContext(m_j, c_i) is the context performance of model m_j in situations with a context similar to the occurring context state c_i. This is calculated using the context state c_j that is associated with model m_j and that is the nearest (i.e., smallest distance, using the distance function defined in Section 3.3) to c_i. The weights w_r and w_c are given to the different factors of the selection function.
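A possible sketch of this selection step, assuming region-accuracy and context-performance lookups are available (the weight values and all names here are illustrative, not prescribed by the paper):

```python
def select_model(models, region, current_context, region_accuracy, context_score,
                 w_r=0.7, w_c=0.3):
    """Pick the model maximizing w_r * region accuracy + w_c * context score.
    region_accuracy(m, region): sliding-window accuracy of model m in the region;
    context_score(m, context): performance of m under contexts closest to `context`."""
    def score(m):
        return (w_r * region_accuracy(m, region)
                + w_c * context_score(m, current_context))
    return max(models, key=score)
```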
In contrast with the information about the models' accuracy for the different regions of the instance space, which is kept over a sliding window (because of concept drift), the context information associated with the models is kept over the entire learning process, as it captures higher-level patterns (i.e., the relation between context and models/concepts) of the long-term process [24].

Resource Awareness. Resource-awareness is an important issue in ubiquitous data stream mining [7]. In such a dynamic ubiquitous usage scenario, it is possible for CC-Stream to receive more knowledge from the community over time than it can keep in memory. In such situations we propose to discard past models that have the lowest performance to allow the integration of new, promising models.
5 Conclusions and Future Work
This paper discusses collaborative data stream mining in ubiquitous environments to support intelligent decision making in the next generation of ubiquitous applications. We propose the CC-Stream system, an ensemble approach that combines the knowledge of different models available in the ubiquitous environment. It incrementally learns which models are more accurate for certain regions of the feature space and context space. CC-Stream is able to locally adapt to changes in the underlying concept using a sliding window of the models' estimates for each region. Moreover, as base learners CC-Stream uses incremental algorithms that consider each individual feature independently. This way it is able to handle the heterogeneity across the devices' models and to deal with the dynamic feature space of the data stream. As future work we would like to evaluate CC-Stream with synthetic and real data sets in terms of accuracy and resource consumption. The context similarity function is something we also plan to evaluate and improve. Moreover, we plan to develop a news recommender application that uses CC-Stream to deliver personalised content to a smart phone user while reducing information overload.

Acknowledgments. The work of J.P. Bártolo Gomes is supported by a PhD grant of the Portuguese Foundation for Science and Technology (FCT) and a mobility grant from the Consejo Social of UPM that made possible his stay at the University of Portsmouth. This research is partially financed by project TIN2008-05924 of the Spanish Ministry of Science and Innovation. Thanks to the FCT project KDUDS (PTDC/EIA-EIA/98355/2008).
References
1. Brezillon, R., Pomerol, J.C.: Contextual knowledge sharing and cooperation in intelligent assistant systems. Travail Humain 62, 223–246 (1999)
2. Cortez, P., Lopes, C., Sousa, P., Rocha, M., Rio, M.: Symbiotic Data Mining for Personalized Spam Filtering. In: IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technologies, WI-IAT 2009, vol. 1, pp. 149–156. IEEE, Los Alamitos (2009)
3. Datta, S., Bhaduri, K., Giannella, C., Wolff, R., Kargupta, H.: Distributed data mining in peer-to-peer networks. IEEE Internet Computing, 18–26 (2006)
4. Daumé III, H., Marcu, D.: Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research 26(1), 101–126 (2006)
5. Dey, A.K., Abowd, G.D., Salber, D.: A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction 16(2), 97–166 (2001)
6. Domingos, P., Hulten, G.: Mining high-speed data streams. In: Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 71–80. ACM, New York (2000)
7. Gaber, M.M., Krishnaswamy, S., Zaslavsky, A.: Ubiquitous data stream mining. In: Current Research and Future Directions Workshop Proceedings held in conjunction with The Eighth Pacific-Asia Conference on Knowledge Discovery and Data Mining, Sydney, Australia. Citeseer (2004)
8. Gama, J., Medas, P., Castillo, G., Rodrigues, P.: Learning with drift detection. In: Bazzan, A.L.C., Labidi, S. (eds.) SBIA 2004. LNCS (LNAI), vol. 3171, pp. 286–295. Springer, Heidelberg (2004)
9. Harries, M.B., Sammut, C., Horn, K.: Extracting hidden context. Machine Learning 32(2), 101–126 (1998)
10. Hotho, A., Pedersen, R., Wurst, M.: Ubiquitous Data. In: May, M., Saitta, L. (eds.) Ubiquitous Knowledge Discovery. LNCS, vol. 6202, pp. 61–74. Springer, Heidelberg (2010)
11. Hulten, G., Spencer, L., Domingos, P.: Mining time-changing data streams. In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 97–106. ACM, New York (2001)
12. Kargupta, H., Bhargava, R., Liu, K., Powers, M., Blair, P., Bushra, S., Dull, J., Sarkar, K., Klein, M., Vasa, M., et al.: VEDAS: A Mobile and Distributed Data Stream Mining System for Real-Time Vehicle Monitoring. In: Proceedings of SIAM International Conference on Data Mining, vol. 334 (2004)
13. Katakis, I., Tsoumakas, G., Vlahavas, I.: On the utility of incremental feature selection for the classification of textual data streams. In: Bozanis, P., Houstis, E.N. (eds.) PCI 2005. LNCS, vol. 3746, pp. 338–348. Springer, Heidelberg (2005)
14. Kolter, J.Z., Maloof, M.A.: Dynamic weighted majority: An ensemble method for drifting concepts. The Journal of Machine Learning Research 8, 2755–2790 (2007)
15. Padovitz, A., Loke, S.W., Zaslavsky, A.: Towards a theory of context spaces. In: Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops, 2004, pp. 38–42 (2004)
16. Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 1345–1359 (2009)
17. Schmidt, A., Beigl, M., Gellersen, H.W.: There is more to context than location. Computers & Graphics 23(6), 893–901 (1999)
18. Stahl, F., Gaber, M.M., Bramer, M., Yu, P.S.: Pocket Data Mining: Towards Collaborative Data Mining in Mobile Computing Environments. In: 2010 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), vol. 2, pp. 323–330. IEEE, Los Alamitos (2010)
19. Street, W.N., Kim, Y.S.: A streaming ensemble algorithm (SEA) for large-scale classification. In: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 377–382. ACM, New York (2001)
20. Tsymbal, A.: The problem of concept drift: definitions and related work. Computer Science Department, Trinity College Dublin (2004)
21. Tsymbal, A., Pechenizkiy, M., Cunningham, P., Puuronen, S.: Dynamic integration of classifiers for handling concept drift. Inf. Fusion 9, 56–68 (2008)
22. Turney, P.D.: Exploiting context when learning to classify. In: Brazdil, P.B. (ed.) ECML 1993. LNCS, vol. 667, pp. 402–407. Springer, Heidelberg (1993)
23. Wang, H., Fan, W., Yu, P.S., Han, J.: Mining concept-drifting data streams using ensemble classifiers. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 226–235. ACM, New York (2003)
24. Widmer, G.: Tracking context changes through meta-learning. Machine Learning 27(3), 259–286 (1997)
25. Widmer, G., Kubat, M.: Learning in the presence of concept drift and hidden contexts. Machine Learning 23(1), 69–101 (1996)
26. Wurst, M., Morik, K.: Distributed feature extraction in a p2p setting - a case study. Future Generation Computer Systems 23(1), 69–75 (2007)
27. Zhu, X., Wu, X., Yang, Y.: Effective classification of noisy data streams with attribute-oriented dynamic classifier selection. Knowl. Inf. Syst. 9, 339–363 (2006)
Intra-Firm Information Flow: A Content-Structure Perspective

Yakir Berchenko1,2, Or Daliot3, and Nir N. Brueller4

1 The Leslie and Susan Gonda Multidisciplinary Brain Research Center, Bar Ilan University, Ramat Gan 52900, Israel
2 University of Cambridge, Department of Veterinary Medicine, Madingley Road, Cambridge CB3 0ES, United Kingdom
3 Faculty of Law, Hebrew University of Jerusalem, Mt Scopus, Jerusalem 91905, Israel
4 Faculty of Management, Tel-Aviv University, Ramat-Aviv, Tel-Aviv 69978, Israel
[email protected]

Abstract. This paper endeavors to bring together two largely disparate areas of research. On one hand, text mining methods treat each document as an independent instance despite the fact that in many text domains, documents are linked and their topics are correlated. For example, web pages of related topics are often connected by hyperlinks and scientific papers from related fields are typically linked by citations. On the other hand, Social Network Analysis (SNA) typically treats edges between nodes according to "flat" attributes in binary form alone. This paper proposes a simple approach that addresses both these issues in data mining scenarios involving corpora of linked documents. According to this approach, after assigning weights to the edges between documents, based on the content of the documents associated with each edge, we apply standard SNA and network theory tools to the network. The method is tested on the Enron email corpus and successfully discovers the central people in the organization and the relevant communications between them. Furthermore, our findings suggest that due to the non-conservative nature of information, conservative centrality measures (such as PageRank) are less adequate here than non-conservative centrality measures (such as eigenvector centrality).

Keywords: Natural language processing, social network analysis, information flow.
1 Introduction
The study of information flow in an organization or a social network is linked to issues of productivity and efficiency and to drawing useful conclusions about the business processes of the organization. It can lead to insights on employee interaction patterns within the organization at different levels of the organization's hierarchy. Historically, research addressing this subject has been conducted by social scientists and physicists [1,2,3] and previous work has emphasized binary
or "flattened" interaction data [4]. For example, in [5] the authors studied communications in a modern organization by analyzing a data set with millions of electronic mail messages, exchanged between many thousands of employees, to explore the role of boundaries between individuals in structuring communications within the firm. Such studies, however, tend to ignore the richness of the linguistic content of the interactions (words, topics, etc.). On the other hand, research in natural language processing (NLP) has focused on analyzing documents in corpora without taking into consideration the fact that they actually represent interactions of some sort (see [6] for a review of natural language processing). Obviously, in reality, each document could be part of a larger networked object composed of entities (e.g. authors) and links between them (e.g. correspondence).

Against this backdrop we introduce a simple and general approach which utilizes both the content of the documents in a given corpus and the links present in the data. Thus we are able to focus on specific topics and better characterize the flow of information regarding each of these topics. This allows us to find, for example, the central people in a network (organization) and their meaningful communications in our target topics. Briefly, the method comprises the following three steps:

– Assign each document a score according to the relevance of its content to the topic of interest.
– Build a weighted network, where the weight of each edge is determined according to the scores of the documents associated with it.
– Apply one of the SNA or network theory methods for studying weighted networks (e.g. calculate the centrality).

Although there is a rich and well-established methodology for the first and third steps (see [6] for an NLP review, and [7] for a review of network theory), they have never been combined. In a sense, the intuitive and simple second step is our major contribution here. In the remainder of this paper we review related work (section 2) and provide more detail on our method (section 3), which we tested (section 4) on the Enron database [9].
2 Related Work
In [4] the authors modeled the spread of information in an email correspondence network essentially as a random branching process with offspring distributed as the number of individuals to whom a person sends emails. The underlying premise in [4] was that a large fraction of the population will receive the information if and only if the branching process is supercritical (see [8] for an introduction to branching processes). A more advanced approach was taken in [3], which presented a descriptive analysis of a cellular-phone network of one-to-one human communication. Onnela et al. [3] considered a network weighted
according to the number of conversations between two "nodes" (or the total duration of the conversations), under the plausible assumption that this is indicative of the amount of information passed along that edge. Similarly, a recent study of communications in a large firm [5] analyzed millions of emails by measuring dyad-level probabilities of communications. The subject of centrality was under the spotlight in [2], where Newman studied scientific coauthorship networks with edges connecting a pair of nodes if they wrote a paper together. Although Newman [2] allowed the edges to have weights depicting the number of joint papers by two authors, his work suffers from the same shortcomings as [4] and [3]: the weights are only vaguely related to the amount or sort of information passed along an edge. Another unresolved issue, in the case of Newman [2], is how papers in a certain discipline should be chosen as a basis for creating coauthorship networks. In [2] Newman selected these somewhat arbitrarily (from four publicly available databases of papers), whereas our approach allows us to examine a given network and weigh each paper according to its relevance to the discipline and subdiscipline we are interested in. Perhaps the closest approach to the one taken here is the interesting Bayesian generative model suggested in [10] that describes both links and the text messages exchanged on these links. However, though [10] is alert to different salient topics appearing in the data (Enron email corpus), structurally the authors relied mostly on the isolated sender-recipient substructure. Thus, the recursive nature of the network and the relations between its nodes was ignored. We feel, however, that recursive and global measures such as PageRank [14] and eigenvector centrality are more appropriate. Indeed, even in the realm of standard NLP, where links between documents are absent, researchers have introduced artificial links to facilitate clustering of documents [13]. The use of both the content of web pages and the global network structure was the key rationale in [14], which introduced PageRank and Google. In order to rank web pages according to their relevance to the user's query, Google actually combines two different values, which in a sense are orthogonal. For each page, the score assigned after considering only the content of the page, and the score assigned by PageRank, representing the centrality of the page in the network, are combined in some way. As acknowledged in [14], figuring out how to calculate the first score and combine it with the second "is something of a black art". Adapting our approach to this problem is straightforward: instead of using the "random surfer" model (which goes uniformly at random from a web page to one of the pages it points to) [14] we use a "content-biased surfer". This new "surfer" simply goes from one page to another one it points to with a probability that is proportional to the relevance of its content to the query.
3 Method
Let us recall the definitions of eigenvector centrality and PageRank centrality, and rephrase the last paragraph of section 2 with more rigor. Given a network, possibly of web pages connected by hyperlinks, we denote the adjacency matrix
of the network by A, i.e., a_ij = 1 if node j is pointing to node i, and a_ij = 0 otherwise. We denote by Ã the column-normalized adjacency matrix of the network, i.e., ã_ij = a_ij / Σ_k a_ik. The premise underlying eigenvector centrality is that the importance (centrality) of a web page is proportional to the sum of the pages pointing to it; thus if c_i is the centrality of node i then c_i = λ Σ_j a_ij c_j, where λ is some constant factor. Rewriting the last equation in matrix notation we get C = λAC, where C is a vector containing the c_i's. Now, assuming that A is a primitive matrix¹, the Perron-Frobenius theorem guarantees the existence of a unique (normalized) right-eigenvector having positive entries. This eigenvector is exactly our centrality vector C (hence the name). A similar approach lies behind PageRank centrality, only now one needs to solve the equation C = λÃC. However, because in the general case the matrix Ã (and A) is not primitive, Ref. [14] introduced the following matrix: M := (1 − α)Ã + αE, where each entry of the matrix E is equal to 1/n (n being the number of nodes) and α is some parameter (typically α ≈ 0.15). Another intuitive basis for this choice of M is random walks on graphs, or the "random surfer" model. It is easy to see that M is the transition matrix for a "random surfer" who simply keeps clicking on successive links at random, and once in a while (with probability α) jumps to a new page chosen from the entire network uniformly at random. Thus the leading eigenvector of M (associated with the largest eigenvalue, λ_max = 1 in this case), which solves C = MC, corresponds to the standing probability of the random walk on the graph in the long time limit [14].

What happens when a user presents Google with a query on a specific topic? PageRank is based solely on the location of web pages in the graph structure of the Web, regardless of their content, so an additional component is required. While it is not specified exactly how Google resolves this issue, the general strategy is to give each page i a score s_i based on its relevance to the query (word frequencies, etc.) and then somehow combine c_i and s_i to get a final rank r_i. Here we ask the following question: why should the surfer keep clicking on links completely at random? What happens if he chooses a link with a probability that is proportional to the relevance of its destination to the topic of the surfer's interest? Thus the transition matrix for our "content-biased surfer" (after weighing with E to ensure primitivity) is M := (1 − α)W̃ + αE, where w̃_ij = s_i / Σ_j s_j. Now we are able to extract the rank for each page immediately from the leading eigenvector of M (see Fig. 1).

Adapting our method to other applications is straightforward; in the case of coauthorship networks, as in [2], we do the following: 1) Assign each paper, k, a score s_k according to its relevance to the topic (discipline) of interest. 2) Build a matrix (weighted network) W with entries
¹ A nonnegative square matrix is said to be a primitive matrix if there exists k such that all the entries of A^k are positive. For more on primitive matrices and their use in ranking, see [15,16].
    w_ij = Σ_{k ∈ P_ij} s_k
where P_ij is the set of papers written together by authors i and j (and possibly others). 3) Find the central authors in the network. As a proof of concept we illustrate, on a collection of NIPS (Neural Information Processing Systems) conference papers², how to find the most central authors in two different subdisciplines: the computer-science oriented and the biological-modeling oriented. In step 1 we assigned each paper k a score s_k^cs according to its relevance to the computer-science subdiscipline (respectively, a score s_k^bm for the biological-modeling subdiscipline). This was done by summing the frequencies of occurrence in the paper of the following words: [computation, information, processing, unit, computational, distributed, component] (respectively, the words: [transduction, ion, channel, cell, voltage, membrane, neuron]). We built a weighted network W^cs (respectively, W^bm) as in step 2, ignoring papers with a single author (i.e. w_ii = 0), and performed centrality analysis as above (with similar results by PageRank and eigenvector centrality). We found that the most central author in the computer-science subdiscipline was Terry Sejnowski and the most central author in the biological-modeling subdiscipline was Christof Koch. Indeed, Terry Sejnowski has an h-index ("an index to quantify an individual's scientific research output" [18]) of 91 and his most cited paper "An Information-Maximization Approach to Blind Separation and Blind Deconvolution" has been cited 3560 times, and Christof Koch has an h-index of 55 and his most cited paper "A Model of Saliency-Based Visual Attention for Rapid Scene Analysis" has been cited 1113 times³.

A third, more sophisticated, centrality measure, which takes into consideration the lack of symmetry in a network and assumes there are two kinds of nodes, hubs and authorities, is the famous hubs-authorities algorithm by Kleinberg [11]. Here one computes one score for each node as a hub ("sender") by solving H = λ W^T W H and another score for each node as an authority ("recipient") by solving U = λ W W^T U, where W^T is the transpose of the matrix W.
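For illustration, a minimal sketch of the content-biased transition matrix and its leading eigenvector obtained by power iteration; the function name, the handling of dangling nodes, and the fixed iteration count are our own assumptions rather than details from the paper.

```python
import numpy as np

def content_biased_centrality(adjacency, scores, alpha=0.15, iters=200):
    """adjacency[i, j] = 1 if node j points to node i; scores[i] = content
    relevance of node i. Returns the leading eigenvector of the damped,
    content-weighted transition matrix, found by power iteration."""
    n = adjacency.shape[0]
    W = adjacency * scores[:, None]          # weight edge j -> i by relevance of i
    col_sums = W.sum(axis=0)
    col_sums[col_sums == 0] = 1.0            # dangling nodes: avoid division by zero
    M = (1 - alpha) * (W / col_sums) + alpha / n
    c = np.full(n, 1.0 / n)
    for _ in range(iters):
        c = M @ c
        c /= c.sum()
    return c
```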
4 Enron Email Corpus
Below we report the results of applying the method to the Enron Email Dataset. At the end of 2001 it was revealed that the reported financial condition of the Enron Corporation, an American energy company, was sustained substantially by institutionalized and systematic accounting fraud. During an investigation by the Federal Energy Regulatory Commission the Enron email dataset was made public, including various personal as well as official emails. The database was later collected and prepared by Melinda Gervasio at SRI for the CALO project.
Fig. 1. Architecture for combining content and network structure for ranking. a) Given a dataset and a topic that we are interested in, the standard approach decomposes the data into two components, orthogonal in a sense. The content of each page is evaluated according to its relevance to the topic to give a score $s_i$, while the position of the document in the network is used to yield a centrality score $c_i$. These two scores are then combined to yield a final rank $r_i$. b) In the approach presented here we first combine the content-based scores and the network structure, producing a weighted network W. We then calculate the centrality on W.
William Cohen from CMU posted the dataset on the web for researchers (http://www.cs.cmu.edu/~enron/). This version of the dataset contains around 517,000 emails from 151 users (encompassing altogether more than 20,000 users) divided into 3500 folders. A report on yet another Enron database and dataset characteristics is available in [9]. In order to test our method, we focused on a subject we expected to be present in the Enron email correspondence network, with prominent role players and significant communications: the legal domain. We manually selected 290 emails related to legal affairs and identified 127 Enron employees associated with the legal domain (most of them in-house lawyers, but also a few of their secretaries). Together with another 290 emails discussing a variety of topics, we used a quarter of each group (70 emails) to train a Support Vector Machine (SVM) [12] to assign a value $temp_i$ to each email i according to its relevance to the legal domain. The distribution of the scores was roughly bell-shaped in the range [−1.4, 1.4]. In order to both increase the entropy of the distribution of the scores [6] and obtain positive weights for building the network, we raised a parameter b to the power of $temp_i$, thus obtaining a score $s_i = b^{temp_i}$. We then built a matrix (weighted network) W with entries
$w_{ij} = \sum_{k \in P_{ij}} s_k$
where $P_{ij}$ is the set of emails sent from j to i (and possibly others). When applying centrality analysis to the network W we find the following:
Fig. 2. a) The number of employees related to the legal domain discovered versus the number of people checked. Most of the employees were discovered very early, and all (127) before checking 5000 (out of >20,000). b) Centrality scores combined with NLP methods improve information retrieval. A false-alarm / misdetection curve comparing quality of classification by SVM only (blue) and our method (green). Intuitively this portrays a path starting at the top left corner (1, 0) and checking the 440 test emails one at a time - if classified correctly the path continues one step down, else it continues a step to the right (thus a random classifier would result in the diagonal dashed grey line). (Panel b axes: 1−Specificity versus 1−Sensitivity; legend: SVM only, SVM + centrality, Random.)
4.1 Using Topic Dependent Centrality We Can Find the Important People for a Given Topic
First we verified that indeed the most central people in the network are the most important and relevant people in the organization for the topic of interest. For this purpose we sorted the >20,000 users according to their centrality score (hubs-authorities) in decreasing order. The 127 employees related to the legal domain were indeed ranked high on the list; when plotting the number of "lawyers" discovered versus the number of people checked, Figure 2a demonstrates that most of the employees related to the legal domain were discovered very early, and all before checking 5000.
4.2 Centrality Scores Combined with NLP Methods Improve Information Retrieval; In Particular Non-conservative Ones
To test whether we could also find the important communications on legal affairs, we assigned a score $e_k$ to each email k, combining the score for its content $s_k$ and the centrality score $c_i$ of its sender (with similar results when considering the recipient as well), for example $e_k = c_i s_k$. We then ranked the 440 emails in the test set and plotted a false-alarm/misdetection curve (ROC or DET curve). Figure 2b shows that combining centrality scores (hubs-authorities) with SVM improves information retrieval in comparison to SVM alone.
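A compact sketch of this ranking step is given below. It is our own illustration rather than the authors' code: the hub score of the sender is used as the centrality $c_i$, the parameter b and the number of hubs-authorities iterations are arbitrary choices, and emails are assumed to be given as (sender, recipient) pairs together with their SVM decision values.

import numpy as np

def rank_emails(emails, temp, b=2.0, iters=100):
    # emails: list of (sender, recipient) user indices, one pair per email
    # temp:   SVM decision value temp_k per email
    s = b ** np.asarray(temp, dtype=float)        # content scores s_k = b**temp_k
    n = 1 + max(max(i, j) for i, j in emails)     # number of users
    W = np.zeros((n, n))
    for k, (snd, rcv) in enumerate(emails):       # w_ij sums the scores of emails j -> i
        W[rcv, snd] += s[k]
    h = np.ones(n)                                # hub ("sender") scores
    u = np.ones(n)                                # authority ("recipient") scores
    for _ in range(iters):                        # hubs-authorities iteration on W
        u = W @ h
        u /= np.linalg.norm(u)
        h = W.T @ u
        h /= np.linalg.norm(h)
    return np.array([h[snd] * s[k] for k, (snd, _) in enumerate(emails)])  # e_k = c_i * s_k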
An interesting phenomenon we noticed at this point (not shown) is that this improvement was much more significant when using eigenvector centrality or hubs-authorities centrality as opposed to PageRank centrality. In addition, there was a slight advantage of hubs-authorities centrality over eigenvector centrality. The reason for this is probably the non-conservative nature of information and the fact that it can be created (or destroyed) out of nothing; it is therefore not adequate to apply conservative centrality measures, such as PageRank, which normalize and equalize the amount of information leaving each node. Hubs-authorities centrality, with its added sensitivity to the lack of symmetry in the structure of the data, an email correspondence network, thus emerges as the slightly more suitable of the two non-conservative centrality measures.
5 Discussion
We described here a new approach for combining NLP and SNA in order to study networks of linked documents. This approach should be useful when trying to find the key individuals in an organization and to study the flow of information. Our method could inform future studies such as [5]. That study found that women, mid- to high-level executives, and members of the executive management, sales and marketing functions are most likely to participate in cross-group communications. In effect, these individuals bridge the lacunae between distant groups in the company's social structure. Extending this analysis to include context sensitivity could facilitate similar endeavors by differentiating between domains in which the organization benefits from high-quality communications and those in which the organization lacks connectivity. Perhaps the most advanced work attempting a combination of the sort described here is [19], where a generative probabilistic model was combined with PageRank to mine a digital library for influential authors. However, our study suggests that, due to the non-conservative nature of information, PageRank is not the best suited for such a purpose and that better results would be obtained by choosing a non-conservative centrality measure such as eigenvector centrality. Our findings and observations regarding this non-conservative nature of information should lead to major improvements over previous work [21,20].

Acknowledgements. We would like to thank colleagues from our institutions, especially Yoed Kenett, for insightful discussions. YB is supported by the National Institute of Nursing Research, grant NR10961.
References
1. Wasserman, S., Faust, K.: Social network analysis: Methods and applications. Cambridge University Press, Cambridge (1994)
2. Newman, M.E.J.: Who is the best connected scientist? A study of scientific coauthorship networks. In: Ben-Naim, E., Frauenfelder, H., Toroczkai, Z. (eds.) Complex Networks, pp. 337–370. Springer, Berlin (2004)
3. Onnela, J.-P., Saramäki, J., Hyvönen, J., Szabó, G., Argollo de Menezes, M., Kaski, K., Barabási, A.-L., Kertész, J.: Analysis of a large-scale weighted network of one-to-one human communication. New J. Phys. 9, 179 (2007)
4. Wu, F., Huberman, B.A., Adamic, L.A., Tyler, J.R.: Information flow in social groups. Physica A 337, 327–335 (2004)
5. Kleinbaum, A.M., Stuart, T.E., Tushman, M.L.: Communication (and Coordination?) in a Modern, Complex Organization. Harvard Business School Working Paper, no. 09-004 (July 2008)
6. Manning, C.D., Schütze, H.: Foundations of Statistical Natural Language Processing, 1st edn. The MIT Press, Cambridge (1999)
7. Boccaletti, S., Latora, V., Moreno, Y., Chavez, M., Hwang, D.U.: Complex networks: structure and dynamics. Physics Reports 424, 175–308 (2006)
8. Athreya, K.B., Ney, P.E.: Branching Processes. Courier Dover Publications (2004)
9. Shetty, J., Adibi, J.: The Enron email dataset database schema and brief statistical report (Technical Report). Information Sciences Institute (2004)
10. McCallum, A., Corrada-Emmanuel, A., Wang, X.: Topic and Role Discovery in Social Networks. In: IJCAI (2005)
11. Kleinberg, J.M.: Authoritative sources in a hyperlinked environment. Journal of the ACM 46(5), 604–632 (1999)
12. Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and other kernel-based learning methods. Cambridge University Press, Cambridge (2000)
13. Kurland, O., Lee, L.: Respect my authority! HITS without hyperlinks, utilizing cluster-based language models. In: Proceedings of SIGIR 2006, pp. 83–90 (2006)
14. Page, L., Brin, S., Motwani, R., Winograd, T.: The PageRank citation ranking: bringing order to the web. Tech. rep., Stanford Digital Library Technologies Project (1998)
15. Burgess, M., Canright, G., Engø-Monsen, K.: Mining location importance from the eigenvectors of directed graphs (2006), http://research.iu.hio.no/papers/directed.pdf
16. Langville, A.N., Meyer, C.D.: Deeper inside PageRank. Internet Mathematics Journal (2004)
17. Rosen-Zvi, M., Griffiths, T., Steyvers, M., Smyth, P.: The author-topic model for authors and documents. In: Proceedings of the Conference on Uncertainty in Artificial Intelligence, vol. 20 (2004)
18. Hirsch, J.E.: An index to quantify an individual's scientific research output. PNAS 102(46), 16569–16572 (2005)
19. Mimno, D., McCallum, A.: Mining a digital library for influential authors. In: Joint Conference on Digital Libraries, JCDL (2007)
20. Frikh, B., Djanfar, A.S., Ouhbi, B.: An intelligent surfer model combining web contents and links based on simultaneous multiple-term query. In: Computer Systems and Applications, AICCSA 2009 (2009)
21. Richardson, M., Domingos, P.: Combining Link and Content Information in Web Search. Web Dynamics, 179–194 (2004)
Mining Fault-Tolerant Item Sets Using Subset Size Occurrence Distributions

Christian Borgelt 1 and Tobias Kötter 2

1 European Centre for Soft Computing, c/ Gonzalo Gutiérrez Quirós s/n, E-33600 Mieres (Asturias), Spain
  [email protected]
2 Dept. of Computer Science, University of Konstanz, Box 712, D-78457 Konstanz, Germany
  [email protected]

Abstract. Mining fault-tolerant (or approximate or fuzzy) item sets means to allow for errors in the underlying transaction data in the sense that actually present items may not be recorded due to noise or measurement errors. In order to cope with such missing items, transactions that do not contain all items of a given set are still allowed to support it. However, either the number of missing items must be limited, or the transaction's contribution to the item set's support is reduced in proportion to the number of missing items, or both. In this paper we present an algorithm that efficiently computes the subset size occurrence distribution of item sets, evaluates this distribution to find fault-tolerant item sets, and exploits intermediate data to remove pseudo (or spurious) item sets. We demonstrate the usefulness of our algorithm by applying it to a concept detection task on the 2008/2009 Wikipedia Selection for schools.
1 Introduction and Motivation
In many applications of frequent item set mining one faces the problem that the transaction data to analyze is imperfect: items that are actually contained in a transaction are not recorded as such. The reasons can be manifold, ranging from noise through measurement errors to an underlying feature of the observed process. For instance, in gene expression analysis, where one may try to find co-expressed genes with frequent item set mining [9], binary transaction data is often obtained by thresholding originally continuous data, which are easily affected by noise in the experimental setup or limitations of the measurement devices. Analyzing alarm sequences in telecommunication data for frequent episodes can be affected by alarms being delayed or dropped due to the fault causing the alarm also affecting the transmission system [18]. In neurobiology, where one searches for assemblies of neurons in parallel spike trains with the help of frequent item set mining [11,4], ensemble neurons are expected to participate in synchronous activity only with a certain probability. In this paper we present a new algorithm to cope with this problem, which efficiently computes the subset size occurrence distribution of item sets, evaluates this distribution to find fault-tolerant item sets, and uses intermediate data to remove pseudo (or spurious) item sets.
Fig. 1. Different types of item sets illustrated as binary matrices (items × transactions): perfect/standard, fault-tolerant, and pseudo/spurious.
The rest of this paper is organized as follows: in Section 2 we review the task of fault-tolerant item set mining and some approaches to this task. In Section 3 we describe how our algorithm traverses the search space and how it efficiently computes the subset size occurrence distribution for each item set it visits. In Section 4 we discuss how the intermediate/auxiliary data that is available in our algorithm can be used to easily cull pseudo (or spurious) item sets. In Section 5 we compare our algorithm to two other algorithms that fall into the same category, and for certain specific cases can be made to find the exact same item sets. In addition, we apply it to a concept detection task on the 2008/2009 Wikipedia Selection for Schools to demonstrate its practical usefulness. Finally, in Section 6 we draw conclusions and point out possible future work.
2 Fault-Tolerant or Approximate Item Set Mining
In standard frequent item set mining only transactions that contain all of the items in a given set are counted as supporting this set. In contrast to this, in fault-tolerant item set mining transactions that contain only a subset of the items can still support an item set, though possibly to a lesser degree than transactions containing all items. Based on the illustration of these situations shown on the left and in the middle of Figure 1, fault-tolerant item set mining has also been described as finding almost pure (geometric or combinatorial) tiles in a binary matrix that indicates which items are contained in which transactions [10]. In order to cope with missing items in the transaction data to analyze, several fault-tolerant (or approximate or fuzzy) frequent item set mining approaches have been proposed. They can be categorized roughly into three classes: (1) error-based approaches, (2) density-based approaches, and (3) cost-based approaches.

Error-Based Approaches. Examples of error-based approaches are [15] and [3]. In the former the standard support measure is replaced by a fault-tolerant support, which allows for a maximum number of missing items in the supporting transactions, thus ensuring that the measure is still anti-monotone. The search algorithm itself is derived from the famous Apriori algorithm [2]. In [3] constraints are placed on the number of missing items as well as on the number of (supporting) transactions that do not contain an item in the set. Hence it is related to the tile-finding approach in [10]. However, it uses an enumeration search scheme that traverses sub-lattices of items and transactions, thus ensuring a complete search, while [10] relies on a heuristic scheme.
Density-Based Approaches. Rather than fixing a maximum number of missing items, density-based approaches allow a certain fraction of the items in a set to be missing from the transactions, thus requiring the corresponding binary matrix tile to have a minimum density. This means that for larger item sets more items are allowed to be missing than for smaller item sets. As a consequence, the measure is no longer anti-monotone if the density requirement is to be fulfilled by each individual transaction. To overcome this, [19] require only that the average density over all supporting transactions exceed a user-specified threshold, while [17] define a recursive measure for the density of an item set.

Cost-Based Approaches. In error- or density-based approaches all transactions that satisfy the constraints contribute equally to the support of an item set, regardless of how many items of the set they contain. In contrast to this, cost-based approaches define the contribution of transactions in proportion to the number of missing items. In [18,5] this is achieved by means of user-provided item-specific costs or penalties, with which missing items can be inserted. These costs are combined with each other and with the initial transaction weight of 1 with the help of a t-norm. In addition, a minimum weight for a transaction can be specified, by which the number of insertions can be limited.

Note that the cost-based approaches can be made to contain the error-based approaches as a limiting or extreme case, since one may set the cost/penalty of inserting an item in such a way that the transaction weight is not reduced. In this case limiting the number of insertions obviously has the same effect as allowing for a maximum number of missing items.

The approach presented in this paper falls into the category of cost-based approaches, since it reduces the support contribution of transactions that do not contain all items of a considered item set. How much the contribution is reduced and how many missing items are allowed can be controlled directly by a user. However, it treats all items the same, while the cost-based approaches reviewed above allow for item-specific penalties. Its advantages are that, depending on the data set, it can be faster, admits more sophisticated support/evaluation functions, and allows for a simple filtering of pseudo (or spurious) item sets.

In pseudo (or spurious) item sets a subset of the items co-occur in many transactions, while the remaining items do not occur in any (or only very few) of the (fault-tolerantly) supporting transactions (illustrated in Figure 1 on the right; note the regular pattern of missing items compared to the middle diagram). Despite the ensuing reduction of the weight of the transactions (due to the missing items), the item set support still exceeds the user-specified threshold. Obviously, such item sets are not useful and should be discarded by requiring, for instance, a minimum fraction of supporting transactions per item. This is easy in our algorithm, but difficult in the cost-based approaches reviewed above.

Finally, note that a closely related setting is the case of uncertain transactional data, where each item is endowed with a transaction-specific weight or probability, which indicates the degree or chance with which it is a member of the transaction. Approaches to this related, but nevertheless fundamentally different problem, which we do not consider here, can be found in [8,13,7].
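As a small illustration of the cost-based idea (not taken from [18,5]), the support contribution of a single transaction under the product t-norm could be computed as follows; the penalty values and the minimum-weight cutoff are placeholders that a user would have to choose.

def transaction_weight(itemset, transaction, penalty, w_min=0.0):
    # Start from weight 1 and multiply in a penalty (t-norm) for every item
    # of the set that would have to be inserted into the transaction.
    w = 1.0
    for item in itemset:
        if item not in transaction:
            w *= penalty.get(item, 0.0)   # item-specific insertion penalty
            if w < w_min:                 # optional minimum transaction weight
                return 0.0
    return w

# e.g. transaction_weight({'a', 'b', 'c'}, {'a', 'c'}, {'b': 0.5}) yields 0.5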
global variables:                        (* may also be passed down in recursion *)
  lists : array of array of integer;     (* transaction identifier lists *)
  cnts  : array of integer;              (* item counters, one per transaction *)
  dist  : array of integer;              (* subset size occurrence distribution *)
  iset  : set of integer;                (* current item set *)
  emin  : real;                          (* minimum evaluation of an item set *)

procedure sodim (n: integer);            (* n: number of selectable items *)
var i : integer;                         (* loop variable *)
    t : array of integer;                (* to access the transaction id lists *)
    e : real;                            (* item set evaluation result *)
begin
  while n > 0 do begin                   (* while there are items left *)
    n := n - 1; t := lists[n];           (* get the next item and its trans. ids *)
    for i := 0 upto length(t)-1 do begin (* traverse the transaction ids *)
      inc(cnts[t[i]]);                   (* increment the item counter and *)
      inc(dist[cnts[t[i]]]);             (* the subset size occurrences, *)
    end;                                 (* i.e., update the distribution *)
    e := eval(dist, length(iset)+1);     (* evaluate subset size occurrence distrib. *)
    if e >= emin then begin              (* if the current item set qualifies *)
      add(iset, n);                      (* add current item to the set *)
      report the current item set iset;
      sodim(n);                          (* recursively check supersets *)
      remove(iset, n);                   (* remove current item from the set, *)
    end;                                 (* i.e., restore the original item set *)
    for i := 0 upto length(t)-1 do begin (* traverse the transaction ids *)
      dec(dist[cnts[t[i]]]);             (* decrement the subset size occurrences *)
      dec(cnts[t[i]]);                   (* and then the item counter, *)
    end;                                 (* i.e., restore the original distribution *)
  end;
end;                                     (* end of sodim() *)

Fig. 2. Simplified pseudo-code of the recursive search procedure
3 Subset Size Occurrence Distribution
The basic idea of our algorithm is to compute, for each visited item set, how many transactions contain subsets with 1, 2, . . . , k items, where k is the size of the considered item set. We call this the subset size occurrence distribution of the item set, as it states how often subsets of different sizes occur. This distribution is evaluated by a function that combines, in a weighted manner, the entries which refer to subsets of a user-specified minimum size (and thus correspond to a maximum number of missing items). Item sets that reach a user-specified minimum value for the evaluation measure are reported. Computing the subset size occurrence distribution is surprisingly easy with the help of an intermediate array that records for each transaction how many of the items in the currently considered set are contained in it. In the search, which is a standard depth-first search in the subset lattice that can also be seen as a
Fig. 3. Updating the subset size occurrence distribution with the help of an item counter array, which records the number of contained items per transaction
divide-and-conquer approach (see, for example, [5] for a formal description), this intermediate array is updated every time an item is added to or removed from the current item set. The counter update is carried out with the help of transaction identifier lists, that is, our algorithm uses a vertical database representation and thus is closely related to the well-known Eclat algorithm [20]. The updated fields of the item counter array then give rise to updates of the subset size occurrence distribution, which records, for each subset size, how many transactions contain at least as many items of the current item set.

Pseudo-code of the (simplified) recursive search procedure is shown in Figure 2. Together with the recursion the main while-loop implements the depth-first/divide-and-conquer search by first including an item in the current set (first subproblem — handled by the recursive call) and then excluding it (second subproblem — handled by skipping the item in the while-loop). The for-loop at the beginning of the outer while-loop first increments the item counters for each transaction containing the current item n, which thus is added to the current item set. Then the subset size occurrence distribution is updated by drawing on the new values of the updated item counters. Note that one could also remove a transaction from the counter for the original subset size (after adding the current item), so that the distribution array elements represent the number of transactions that contain exactly the number of items given by their indices. This could be achieved with an additional dec(dist[cnts[t[i]]]) as the first statement of the for-loop. However, this operation is more costly than forming differences between neighboring elements in the evaluation function, which yields the same values (see Figure 4 — to be discussed later).

Figure 3 illustrates the update. The top row shows the list of transaction identifiers for the current item n (held in the pseudo-code in the local variable t), which is traversed to select the item counters that have to be incremented. The second row shows these item counters, with old and unchanged counter values shown in black and updated values in blue. Using the new (blue) values as indices into the subset size occurrence distribution array, this distribution is updated. Again old and unchanged values are shown in black, new values in blue. Note that dist[0] always holds the total number of transactions.

An important property of this update operation is that it is reversible. By traversing the transaction identifiers again, the increments can be retracted,
global variables:                        (* may also be passed down in recursion *)
  wgts : array of real;                  (* weights per number of missing items *)

function eval (d: array of integer,      (* d: subset size occurrence distribution *)
               k: integer) : real;       (* k: number of items in the current set *)
var i: integer;                          (* loop variable *)
    e: real;                             (* evaluation result *)
begin
  e := d[k] · wgts[0];                   (* initialize the evaluation result *)
  for i := 1 upto min(k, length(wgts)) do           (* traverse the distribution *)
    e := e + (d[k-i] - d[k-i+1]) · wgts[i];
  return e;                              (* weighted sum of transaction counters *)
end;                                     (* end of eval() *)

Fig. 4. Pseudo-code of a simple evaluation function
thus restoring the original subset size occurrence distribution (before the current item n was added). This is exploited in the for-loop at the end of the outer while-loop in Figure 2, which restores the distribution, by first decrementing the subset size occurrence counter and then the item counter for the transaction (that is, the steps are reversed w.r.t. the update in the first for-loop).

Between the for-loops the subset size occurrence distribution is evaluated and, if the evaluation result reaches a user-specified threshold, the extended item set is actually constructed and reported. Afterwards supersets of this item set are processed recursively and finally the current item is removed again. This is in perfect analogy to standard frequent item set algorithms like Eclat or FP-growth. The advantage of this scheme is that the evaluation function has access to fairly rich information about the occurrences of subsets of the current item set. While standard frequent item set mining algorithms only compute (and evaluate) dist[k] (which always contains the standard support) and the JIM algorithm [16] computes and evaluates only dist[k], dist[1] (the number of transactions that contain at least one item of the set), and dist[0] (the total number of transactions), our algorithm knows (or can easily compute as a simple difference) how many transactions miss 1, 2, 3 etc. items. Of course, this additional information comes at a price, namely a higher processing time, but in return one obtains the possibility to compute much more sophisticated item set evaluations.

A very simple example of such an evaluation function is shown in Figure 4: it weights the numbers of transactions in proportion to the number of missing items. The weights can be specified by a user and are stored in a global weights array. We assume that wgts[0] = 1 and wgts[i] ≥ wgts[i+1]. With this function fault-tolerant item sets can be found in a cost-based manner, where the costs are represented by the weights array. An obvious alternative—inspired by [16]—is to divide the final value of e by dist[1] in order to obtain an extended Jaccard measure. In principle, all measures listed in [16] can be generalized in this way, by simply replacing the standard support (all items are contained) by the extended support computed in the function shown in Figure 4.
Note that the extended support computed by this function as well as the extended Jaccard measure that can be derived from it are obviously anti-monotone, since each element of the subset size occurrence distribution is anti-monotone (if elements are paired from the number of items in the respective sets downwards), while dist[1] is monotone. This ensures the correctness of the algorithm.
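The pseudo-code of Figures 2 and 4 translates almost directly into other languages. The following Python sketch is our own rendering, not the SODIM implementation itself; it assumes a vertical representation in which lists[i] holds the identifiers of the transactions containing item i, and it reports every qualifying item set together with its extended support.

def eval_dist(dist, k, wgts):
    # extended support: weighted sum over transactions with 0, 1, 2, ... missing items
    e = dist[k] * wgts[0]
    for i in range(1, min(k, len(wgts) - 1) + 1):
        e += (dist[k - i] - dist[k - i + 1]) * wgts[i]
    return e

def sodim(lists, n_trans, wgts, emin):
    # depth-first search with the reversible counter update described above
    cnts = [0] * n_trans                      # items of the current set per transaction
    dist = [n_trans] + [0] * len(lists)       # dist[0] = total number of transactions
    iset, out = [], []

    def recurse(n):
        while n > 0:
            n -= 1
            t = lists[n]                      # transaction ids of the next item
            for tid in t:                     # update counters and distribution
                cnts[tid] += 1
                dist[cnts[tid]] += 1
            e = eval_dist(dist, len(iset) + 1, wgts)
            if e >= emin:                     # current item set qualifies
                iset.append(n)
                out.append((tuple(sorted(iset)), e))
                recurse(n)                    # recursively check supersets
                iset.pop()
            for tid in t:                     # retract the update (reversibility)
                dist[cnts[tid]] -= 1
                cnts[tid] -= 1

    recurse(len(lists))
    return out

With wgts = [1.0] this reduces to standard frequent item set mining with minimum support emin; choosing, for example, wgts = [1.0, 0.5] lets transactions with one missing item contribute half their weight.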
4 Removing Pseudo/Spurious Item Sets
Pseudo (or spurious) item sets can result if there exists a set of items that is strongly correlated (no or almost no missing items) and supported by many transactions. Adding an item to this set may not reduce the support enough to let it fall below the user-specified threshold, even if this item is not contained in any of the transactions containing the correlated items. As an illustration consider the right diagram in Figure 1: the third item is contained in only one of the eight transactions. However, the total number of missing items in this binary matrix (and thus the extended support) is the same as in the middle diagram, which we consider as a representation of an acceptable fault-tolerant item set.

In order to cull such pseudo (or spurious) item sets from the output, we added to our algorithm a check whether all items of the set occur in a sufficiently large fraction of the supporting transactions. This check can be carried out in two forms: either the user specifies a minimum fraction of the support of an item set that must be produced from transactions containing the item (in this case the reduced weights of transactions with missing items are considered) or he/she specifies a minimum fraction of the number of supporting transactions that must contain the item (in this case all transactions have the same unit weight). Both checks can fairly easily be carried out with the help of the vertical transaction representation (transaction identifier lists), the intermediate/auxiliary item counter array (with one counter per transaction) and the subset size occurrence distribution: one simply traverses the transaction identifier list for each item in the set to check and computes the number of supporting transactions that contain the tested item (or the support contribution derived from these transactions). The result is then compared with the total number of supporting transactions (which is available in dist[m], where m is the number of weights; see Figure 4) or with the extended support (the result of the evaluation function shown in Figure 4). If the result exceeds a user-specified threshold (given as a fraction or percentage) for all items in the set, the item set is accepted, otherwise it is discarded (from the output, but still processed recursively, because these conditions are not anti-monotone and thus cannot be used for pruning).

In addition, it is often beneficial to filter the output for closed item sets (no superset has the same support/evaluation) or maximal item sets (no superset has a support/evaluation exceeding the user-specified threshold). In principle, this can be achieved with the same methods that are used in standard frequent item set mining. In our algorithm we consider closedness or maximality only w.r.t. the standard support (all items contained), but in principle, it could also be implemented w.r.t. the more sophisticated measures. Note, however, that this
Fig. 5. Execution times on the BMS-Webview-1 data set. Light colors refer to an insertion penalty factor of 0.25, dark colors to an insertion penalty factor of 0.5. (Panels: one insertion and two insertions; axes: log(time/s) versus absolute support; curves: SaM, RElim, SODIM.)
notion of closedness differs from the notion introduced and used in [6,14], which is based on δ-free item sets and is a mathematically more sophisticated approach.
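A minimal sketch of the per-item check (in its unit-weight variant) is given below; it reuses the cnts array and the transaction identifier lists of the search sketch above, and min_items stands in for the maximum number of missing items allowed, which is an assumption of this illustration.

def passes_item_check(itemset, lists, cnts, min_items, min_fraction):
    # A transaction supports the set if it contains at least min_items of its items;
    # every item must occur in at least min_fraction of these supporting transactions.
    n_support = sum(1 for c in cnts if c >= min_items)   # could also be read off dist
    if n_support == 0:
        return False
    for item in itemset:
        hits = sum(1 for tid in lists[item] if cnts[tid] >= min_items)
        if hits < min_fraction * n_support:
            return False
    return True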
5 Experiments
We implemented the described item set mining approach as a C program, called SODIM (Subset size Occurrence Distribution based Item set Mining), that was essentially derived from an Eclat implementation (which provided the initial setup of the transaction identifier lists). We implemented all measures listed in [16], even though for these measures (in their original form) the JIM algorithm is better suited, because they do not require subset occurrence values beyond dist[k], dist[1], and dist[0]. However, we also implemented the extended support and the extended Jaccard measure (as well as generalizations of all other measures described in [16]), which JIM cannot compute. We also added optional culling of pseudo (or spurious) item sets, thus providing possibilities far surpassing the JIM implementation. This SODIM implementation has been made publicly available under the GNU Lesser (Library) General Public License (http://www.borgelt.net/sodim.html).

In a first set of experiments we tested our implementation on artificially generated data. We created a transaction database with 100 items and 10000 transactions, in which each item occurs in a transaction with 5% probability (independent items, so co-occurrences are entirely random). Into this database we injected six groups of co-occurring items, which ranged in size from 6 to 10 items and which partially overlapped (some items were contained in two groups). For each group we injected between 20 and 30 co-occurrences (that is, in 20 to 30 transactions the items of the group actually co-occur). In order to compensate for the additional item occurrences due to this, we reduced (for the items in the groups) the occurrence probabilities in the remaining transactions (that is, the transactions in which they did not co-occur) accordingly, so that all items shared the same individual expected frequency. In a next step we removed from each co-occurrence of a group of items one group item, thus creating the noisy
instances of item sets we try to find with the SODIM algorithm. Note that due to this deletion scheme no transaction contained all items in a given group and thus no standard frequent item set mining algorithm is able to detect the groups, regardless of the used minimum support threshold. We then mined this database with SODIM, using a minimum standard support (all items contained) of 0, a minimum extended support of 10 (with a weight of 0.5 for transactions with one missing item) and a minimum fraction of transactions containing each item of 75%. In addition, we restricted the output to maximal item sets (based on standard support), in order to suppress the output of subsets of the injected groups. This experiment was repeated several times with different databases generated in the way described above. We observed that the injected groups were always perfectly detected, while only rarely a false positive result, usually with 4 items, was produced.

In a second set of experiments we compared SODIM to the two other cost-based methods reviewed in Section 2, namely RElim [18] and SaM [5]. As a test data set we chose the well-known BMS-Webview-1 data, which describes a web click stream from a leg-care company that no longer exists. This data set has been used in the KDD cup 2000 [12]. By properly parameterizing these methods, they can be made to find exactly the same item sets. We chose two insertion penalties (RElim and SaM) or downweighting factors for missing items (SODIM), namely 0.5 and 0.25, and tested with one and two insertions (RElim and SaM) or missing items (SODIM). The results, obtained on an Intel Core 2 Quad Q9650 (3GHz) computer with 8 GB main memory running Ubuntu Linux 10.04 (64 bit) and gcc version 4.4.3, are shown in Figure 5. Clearly, SODIM outperforms both SaM and RElim by a large margin, with the exception of the lowest support value for one insertion and a penalty of 0.5, where SODIM is slightly slower than both SaM and RElim. It should be noted, though, that this does not render SaM and RElim useless, because they offer options that SODIM does not, namely the possibility to define item-specific insertion penalties (SODIM treats all items the same). On the other hand, SODIM allows for more sophisticated evaluation measures and the removal of pseudo/spurious item sets. Hence all three algorithms are useful.

To demonstrate the usefulness of our method, we applied it also to the 2008/2009 Wikipedia Selection for schools (http://schools-wikipedia.org/), which is a subset of the English Wikipedia (http://en.wikipedia.org) with about 5500 articles and more than 200,000 hyperlinks. We used a subset of this data set that does not contain articles belonging to the subjects "Geography", "Countries" or "History", resulting in a subset of about 3,600 articles and more than 65,000 hyperlinks. The excluded subjects do not affect the chemical subject we focus on in our experiment, but contain articles that reference many articles or that are referenced by many articles (such as United States with 2,230 references). Including the mentioned subject areas would lead to an explosion of the number of discovered item sets and thus would make it much more difficult to demonstrate the effect we are interested in.
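Referring back to the first set of experiments, the following sketch shows one way such an artificial database could be generated; the group placement, the random seed, and the omission of the background-rate compensation step are simplifications of the procedure described above.

import random

def make_artificial_data(n_items=100, n_trans=10000, p=0.05,
                         groups=(range(0, 8), range(20, 30)), n_coinc=25, seed=0):
    # independent 5% background activity plus injected group co-occurrences,
    # from each of which one randomly chosen group item is removed again
    rng = random.Random(seed)
    data = [set(i for i in range(n_items) if rng.random() < p) for _ in range(n_trans)]
    for group in groups:
        group = set(group)
        for tid in rng.sample(range(n_trans), n_coinc):
            data[tid] |= group
            data[tid].discard(rng.choice(sorted(group)))   # no transaction keeps the full group
    return data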
Table 1. Results for different numbers of missing items

Missing items   Transactions   Chemical elements   Other elements   Not referenced
0               25             24                  1                0
1               47             34                  13               1
2               139            71                  68               3
3               239            85                  154              9
The 2008/2009 Wikipedia Selection for schools describes 118 chemical elements (http://schools-wikipedia.org/wp/l/List_of_elements_by_name.htm). However, there are 157 articles that reference the Chemical element article or are referenced by it, so that simply collecting the referenced or referencing articles does not yield a good extensional representation of this concept. Searching for references to the Chemical element article thus results not only in articles describing chemical elements but also in other articles, including Albert Einstein, Extraterrestrial Life, and Universe. Furthermore, there are 17 chemical elements (e.g. palladium) that do not reference the Chemical element article. In order to better filter articles that are about a chemical element, one may try to extend the query with the titles of articles that are frequently co-referenced with the Chemical element article, but are more specific than a reference to/from this article alone.

In order to find such co-references, we apply our SODIM algorithm. To do so, we converted each article into a transaction, such that each referenced article is an item in the transaction of the referring article. This resulted in a transaction database with 3,621 transactions. We then ran our SODIM algorithm with a minimum item set size of 5 and a minimum support (all items contained) of 25 in order to find the co-references. 29 of the 81 found item sets contain the item Chemical element. From the 29 item sets we chose the following item set for the subsequent experiments: {Oxygen, Electron, Hydrogen, Melting point, Chemical element}. The first column of Table 1 gives the different settings for the allowed number of missing items. The second column contains the number of matching transactions. Columns three and four contain the number of discovered chemical elements and the number of other articles. The last column contains the number of discovered chemical elements that do not reference the Chemical element article. By allowing some missing items per transaction the algorithm was able to find considerably more chemical elements than the classical version.
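The conversion of the hyperlink structure into transactions can be sketched as follows; the variable links, a dictionary mapping each article title to the set of titles it references, is a hypothetical input format chosen for illustration.

def articles_to_transactions(links):
    # one transaction per referring article; its items are the referenced articles
    items = sorted({ref for refs in links.values() for ref in refs})
    item_id = {title: i for i, title in enumerate(items)}
    transactions = [sorted(item_id[ref] for ref in refs) for refs in links.values()]
    # vertical representation (transaction id lists per item) for the miner
    lists = [[] for _ in items]
    for tid, trans in enumerate(transactions):
        for it in trans:
            lists[it].append(tid)
    return items, transactions, lists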
6 Conclusions and Future Work
In this paper we presented a new cost-based algorithm for mining fault-tolerant frequent item sets that exploits subset size occurrence distributions. The algorithm efficiently computes these distributions while traversing the search space in the usual depth-first manner. As evaluation measures we suggested a simple extended support, by which transactions containing only some of the items of a given set can still contribute to the support of this set, as well as an extension of
the generalized Jaccard index that is derived from the extended support. Since the algorithm records, in an intermediate array, for each transaction how many items of the currently considered set are contained, we could also add a simple and efficient check in order to cull pseudo and spurious item sets from the output. We demonstrated the usefulness of our algorithm by applying it, combined with filtering for maximal item sets, to the 2008/2009 Wikipedia Selection for schools, where it proved beneficial to detect the concept of a chemical element despite the only limited standardization of pages on such substances. We are currently trying to extend the method to incorporate item weights (weighted or uncertain transactional data, see Section 2), in order to obtain a method that can mine fault-tolerant item sets from uncertain or weighted data. A main problem of such an extension is that the item weights have to be combined over the items of a considered set (for instance, with the help of a t-norm). This naturally introduces a tendency that the weight of a transaction goes down even if the next added item is contained, simply because the added item is contained with a weight less than one. If we now follow the scheme of downweighting transactions that are missing an item with a user-specified factor, we have to make sure that a transaction that contains an item (though with a low weight) does not receive a lower weight than a transaction that does not contain the item (because the downweighting factor is relatively high). Acknowledgements. This work was supported by the European Commission under the 7th Framework Program FP7-ICT-2007-C FET-Open, contract no. BISON-211898.
References
1. Aggarwal, C.C., Lin, Y., Wang, J., Wang, J.: Frequent Pattern Mining with Uncertain Data. In: Proc. 15th ACM SIGMOD Int. Conf. on Knowledge Discovery and Data Mining (KDD 2009), Paris, France, pp. 29–38. ACM Press, New York (2009)
2. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules. In: Proc. 20th Int. Conf. on Very Large Databases (VLDB 1994), Santiago de Chile, pp. 487–499. Morgan Kaufmann, San Mateo (1994)
3. Besson, J., Robardet, C., Boulicaut, J.-F.: Mining a New Fault-Tolerant Pattern Type as an Alternative to Formal Concept Discovery. In: Proc. Int. Conference on Computational Science (ICCS 2006), Reading, United Kingdom, pp. 144–157. Springer, Berlin (2006)
4. Berger, D., Borgelt, C., Diesmann, M., Gerstein, G., Grün, S.: An Accretion based Data Mining Algorithm for Identification of Sets of Correlated Neurons. In: 18th Ann. Computational Neuroscience Meeting (CNS*2009), Berlin, Germany (2009)
5. Borgelt, C., Wang, X.: SaM: A Split and Merge Algorithm for Fuzzy Frequent Item Set Mining. In: Proc. 13th Int. Fuzzy Systems Association World Congress and 6th Conf. of the European Society for Fuzzy Logic and Technology (IFSA/EUSFLAT 2009), Lisbon, Portugal, pp. 968–973. IFSA/EUSFLAT Organization Committee, Lisbon (2009)
6. Boulicaut, J.-F., Bykowski, A., Rigotti, C.: Approximation of Frequency Queries by Means of Free-Sets. In: Zighed, D.A., Komorowski, J., Żytkow, J.M. (eds.) PKDD 2000. LNCS (LNAI), vol. 1910, pp. 75–85. Springer, Heidelberg (2000)
7. Calders, T., Garboni, C., Goethals, B.: Efficient Pattern Mining of Uncertain Data with Sampling. In: Zaki, M.J., Yu, J.X., Ravindran, B., Pudi, V. (eds.) PAKDD 2010. LNCS, vol. 6118, pp. 480–487. Springer, Heidelberg (2010)
8. Chui, C.-K., Kao, B., Hung, E.: Mining Frequent Itemsets from Uncertain Data. In: Zhou, Z.-H., Li, H., Yang, Q. (eds.) PAKDD 2007. LNCS (LNAI), vol. 4426, pp. 47–58. Springer, Heidelberg (2007)
9. Creighton, C., Hanash, S.: Mining Gene Expression Databases for Association Rules. Bioinformatics 19, 79–86 (2003)
10. Gionis, A., Mannila, H., Seppänen, J.K.: Geometric and Combinatorial Tiles in 0-1 Data. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) PKDD 2004. LNCS (LNAI), vol. 3202, pp. 173–184. Springer, Heidelberg (2004)
11. Grün, S., Rotter, S. (eds.): Analysis of Parallel Spike Trains. Springer, Berlin (2010)
12. Kohavi, R., Bradley, C.E., Frasca, B., Mason, L., Zheng, Z.: KDD-Cup 2000 Organizers' Report: Peeling the Onion. SIGKDD Exploration 2(2), 86–93 (2000)
13. Leung, C.K.-S., Carmichael, C.L., Hao, B.: Efficient Mining of Frequent Patterns from Uncertain Data. In: 7th IEEE Int. Conf. on Data Mining Workshops (ICDMW 2007), Omaha, NE, pp. 489–494. IEEE Press, Piscataway (2007)
14. Pensa, R.G., Robardet, C., Boulicaut, J.-F.: Supporting Bi-cluster Interpretation in 0/1 Data by Means of Local Patterns. In: Intelligent Data Analysis, vol. 10, pp. 457–472. IOS Press, Amsterdam (2006)
15. Pei, J., Tung, A.K.H., Han, J.: Fault-Tolerant Frequent Pattern Mining: Problems and Challenges. In: Proc. ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (DMK 2001), Santa Barbara, CA. ACM Press, New York (2001)
16. Segond, M., Borgelt, C.: Item Set Mining Based on Cover Similarity. In: Huang, J.Z., Cao, L., Srivastava, J. (eds.) PAKDD 2011, Part II. LNCS, vol. 6635, pp. 493–505. Springer, Heidelberg (2011)
17. Seppänen, J.K., Mannila, H.: Dense Itemsets. In: Proc. 10th ACM SIGMOD Int. Conf. on Knowledge Discovery and Data Mining (KDD 2004), Seattle, WA, pp. 683–688. ACM Press, New York (2004)
18. Wang, X., Borgelt, C., Kruse, R.: Mining Fuzzy Frequent Item Sets. In: Proc. 11th Int. Fuzzy Systems Association World Congress (IFSA 2005), Beijing, China, pp. 528–533. Tsinghua University Press and Springer-Verlag, Beijing and Heidelberg (2005)
19. Yang, C., Fayyad, U., Bradley, P.S.: Efficient Discovery of Error-tolerant Frequent Itemsets in High Dimensions. In: Proc. 7th ACM SIGMOD Int. Conf. on Knowledge Discovery and Data Mining (KDD 2001), San Francisco, CA, pp. 194–203. ACM Press, New York (2001)
20. Zaki, M.J., Parthasarathy, S., Ogihara, M., Li, W.: New Algorithms for Fast Discovery of Association Rules. In: Proc. 3rd Int. Conf. on Knowledge Discovery and Data Mining (KDD 1997), Newport Beach, CA, pp. 283–296. AAAI Press, Menlo Park (1997)
Finding Ensembles of Neurons in Spike Trains by Non-linear Mapping and Statistical Testing

Christian Braune 1,2, Christian Borgelt 1, and Sonja Grün 3,4,5

1 European Centre for Soft Computing, Calle Gonzalo Gutiérrez Quirós s/n, E-33600 Mieres (Asturias), Spain
2 Otto-von-Guericke-University of Magdeburg, Universitätsplatz 2, D-39106 Magdeburg, Germany
3 RIKEN Brain Science Institute, Wako-Shi, Japan
4 Institute of Neuroscience and Medicine (INM-6), Research Center Jülich, Germany
5 Theoretical Systems Neurobiology, RWTH Aachen University, Aachen, Germany
[email protected], [email protected], [email protected]

Abstract. Finding ensembles in neural spike trains has been a vital task in neurobiology ever since D.O. Hebb's work on synaptic plasticity [15]. However, with recent advancements in multi-electrode technology, which provides means to record 100 and more spike trains simultaneously, classical ensemble detection methods became infeasible due to a combinatorial explosion and a lack of reliable statistics. To overcome this problem we developed an approach that reorders the spike trains (neurons) based on pairwise distances and Sammon's mapping to one dimension. Thus, potential ensemble neurons are placed close to each other. As a consequence we can reduce the number of statistical tests considerably over enumeration-based approaches (like e.g. [1]), since linear traversals of the neurons suffice, and thus can achieve much lower rates of false positives. This approach is superior to classical frequent item set mining algorithms, especially if the data itself is imperfect, e.g. if only a fraction of the items in a considered set is part of a transaction.
1 Introduction and Motivation
With the help of electrodes it is possible to observe the electrical potential of a single neuron directly, while multi-electrode arrays (MEAs) allow for several neurons to be observed simultaneously and their electrical potential to be recorded in parallel. By analyzing the wave form of the electrical potential it is possible to detect increases in the potential (so-called spikes) as well as to separate signals from multiple neurons recorded by a single electrode [17]. After this step, which is called spike sorting, a single spike train consists of a list of the exact times at which spikes have been recorded. Due to the relatively high time resolution it is advisable to perform some time binning in order to cope with inherent jitter. That is, the continuous time indices are discretized by assigning a time index to each spike. A 10s record binned with 1ms time bins therefore leads to a list of up to 10000 entries with the time indices of the spiking events.
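The binning step, leading to the binary-vector view used below, can be written down in a few lines; the 10 s duration and 1 ms bin width are the values from the example above, and spike times are assumed to be given in seconds.

import numpy as np

def bin_spike_train(spike_times, duration=10.0, bin_width=0.001):
    # discretize a spike train into a binary vector: one entry per time bin,
    # 1 if at least one spike falls into the bin
    n_bins = int(round(duration / bin_width))
    binary = np.zeros(n_bins, dtype=np.uint8)
    idx = np.floor(np.asarray(spike_times) / bin_width).astype(int)
    binary[np.clip(idx, 0, n_bins - 1)] = 1      # clamp spikes at the very edge
    return binary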
In this form, a spike train can also be interpreted as a binary vector with 10000 dimensions, where each dimension represents one time index and the vector elements are either 0 (no spike) or 1 (spike). Figure 1 shows two sets of spike trains, each generated artificially for 100 neurons and 10,000 time bins (10 seconds) with a simple Poisson model. The top diagram shows random noise (independent neurons, no correlations), while the data in the middle diagram contains an ensemble of 20 neurons. Without any computer aid it is hardly possible to distinguish these two data sets, let alone decide with a high degree of certainty which plot belongs to which data set. Only if the correlated neurons are sorted together (bottom diagram), the assembly becomes discernible. This paper proposes an algorithm to find an ordering of neurons that reflects the fact that ensembles can be found as lines within the background noise if they are properly sorted. Section 5 shows that such an ordering can then be exploited to perform linear statistical testing to find neuronal ensembles. Thus it abolishes the need to perform all pairwise tests, while still being able to detect the inherent dependency structure to a very high degree, often even perfectly.
2 Related Work
Gerstein et al. (1978) proposed the accretion algorithm, which tries to find such ensembles by iteratively combining/merging neural spike trains [9]. Though it was not executable for massively parallel spike train data at the time it was published, today even standard desktop computers allow the necessary tests to be performed in acceptable time. However, there still remains the problem that after merging relevant spike trains, some subsets of neurons are tested more than once and thus the likelihood of receiving false positive results becomes fairly large because of the huge number of tests that are performed.

Attempts have been made to identify the structure of neuronal ensembles with the help of frequent item set mining (FIM) algorithms [1]. In this approach, neurons are mapped to items, time bins to transactions. Although FIM algorithms are able to detect groups of neurons in the background noise, they generally also report much smaller groups as significant ensembles. In most of these smaller groups all but one neuron belong to a real ensemble and the remaining one randomly produces one or two coincidences, which, however, are considered significant by standard statistical measures. In addition, in the domain of spike train analysis we face the problem that it cannot be reasonably expected that all neurons of an ensemble always participate in a synchronous spiking event. The synfire model [7] suggests that 50 to 80% of the ensemble neurons may participate, with a different selection on each instance. In order to cope with this problem, fault-tolerant (or approximate or fuzzy) frequent item set mining algorithms may be applied (for example, [23,2,3]). However, these approaches have the drawback that they enumerate all potential groups/ensembles and thus suffer severely from the false positives problem.

Considering the neural spike trains as binary vectors and the whole set of parallel spike trains as a matrix (see Section 3), an interesting approach was
Fig. 1. Dot-displays of two sets of parallel spike trains generated with a neurobiological spike train simulator (axes of all panels: neurons 1–100 versus time 0–10s). The top diagram shows independent neurons, while the bottom diagrams contain a group of 20 correlated neurons. In the middle diagram, these neurons are randomly interspersed with independent neurons, while in the bottom diagram they are sorted into the bottom rows (see also: [11,12,13]).
suggested in [10], which uses the Fiedler vector (i.e. the eigenvector corresponding to the smallest non-zero eigenvalue of the symmetric matrix $L_S = D_S - S$, where $S = (s_{ij})$ is a similarity matrix and $D_S = (d_{ii})$ is a diagonal matrix with $d_{ii} = \sum_j s_{ij}$). The entries/coordinates of the Fiedler vector are used to sort the rows of a given binary matrix such that large tiles (sub-matrices) become visible that contain a lot of 1s. The core argument underlying the approach is that the Fiedler vector minimizes the stress function $x^T L_S x = \sum_{i,j} s_{ij} (x_i - x_j)^2$. With the constraints $x^T e = 0$ and $x^T x = 1$ this can be easily shown. In [10] it is claimed that sorting the rows by the corresponding components of the Fiedler vector results in a good ordering by minimizing the inter-row stress, mapping similar objects next to each other. Within this optimization one has to consider $s_{ij}$ to be constant, as the similarity of the objects cannot change; only the components of the vector are subject to change. A problem with this criterion, however, is that even for small similarities a small difference between $x_i$ and $x_j$ would lower the overall stress, and equally an arbitrarily large similarity can still be compensated by choosing fitting values for $x_i$ and $x_j$. Although it can be considered correct that the Fiedler vector minimizes the overall stress over all suitable vectors, this stress function itself is not necessarily the one that really needs to be minimized to obtain more than a heuristic ordering of the rows. Experiments we conducted showed that for neural spike trains the Fiedler vector based sorting often yields highly unsatisfactory results. Since we actually want to place similar spike trains close to each other, where high similarity may also be represented as a small distance and vice versa, Sammon's Non-Linear Mapping [20] suggests itself as an alternative.
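For comparison, the Fiedler-vector ordering discussed above can be computed in a few lines; this is our own sketch of the idea in [10], with the similarity matrix S assumed to be given (e.g. one minus the Dice distance used for Figure 2).

import numpy as np

def fiedler_order(S):
    # order objects by the Fiedler vector of the Laplacian L_S = D_S - S,
    # i.e. the eigenvector of the second-smallest eigenvalue
    D = np.diag(S.sum(axis=1))
    L = D - S
    eigvals, eigvecs = np.linalg.eigh(L)         # symmetric matrix, so eigh suffices
    fiedler = eigvecs[:, np.argsort(eigvals)[1]]
    return np.argsort(fiedler)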
3 Presuppositions
With the interpretation of spike trains as binary vectors and parallel spike trains as a binary matrix, we can say that the synchronous spiking activity of neuronal ensembles leads to sub-matrices (or tiles) in which—under suitable row and column permutations—every entry is 1. This view presumes that every neuron that is part of an ensemble participates in every coincident spiking event (that is, the probability of copying from the coincidence process into the individual spike trains is 1.0 for all neurons). We start from this simplification, because it helps us to design the algorithm, even though it neither holds for natural processes nor will it be a necessary precondition for the final algorithm. (Although, of course, best results are achieved if the data is perfect.)

Our task consists in finding a suitable row permutation (re-ordering of the spike trains) and maybe also a column permutation (re-ordering of the time bins) without having to test every pair of vectors (spike trains), and subsequently also every triple, quadruple or even larger group of vectors by merging all vectors except one and testing the resulting vector against the remaining one (as the Accretion algorithm does [9]). Such an approach is disadvantageous, because group testing and incrementing the size of the groups to be tested leads to an (over-)exponential increase in the number of tests that have to be performed.
Fig. 2. Distance/similarity matrix of 100 neurons (spike trains) computed with the Dice measure with an injected ensemble of size 20 (the darker the gray, the lower the pairwise distance). The data set underlying this distance matrix is depicted as a dot display in the middle diagram of Figure 1: each row (as well as each column) of this diagram corresponds to a row of the diagram in Figure 1 and thus to a neuron.
Hence even a test as simple as the χ2 test will consume a lot of time—probably without producing significant results. As the low firing probabilities of neurons lead to a relatively low density of the vectors, and the even lower rate of coincidences leads to circumstances where the χ2 test returns excessively many false-positive results, the algorithmic need for larger groups to be tested increases even further. To prevent this, performing an exact test would be desirable, but the execution time of tests like Fisher's exact test renders this infeasible.
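To illustrate the point, the following snippet compares the two tests on a made-up, sparsely populated 2×2 contingency table (the counts are invented for the example):

from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 contingency table for two sparsely firing neurons:
# rows = neuron A (no spike / spike), columns = neuron B (no spike / spike).
table = [[9950, 20],
         [25,    5]]   # only 5 joint spikes in 10000 time bins

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table, alternative="greater")
print(p_chi2, p_fisher)  # with such small cell counts the two p-values can differ noticeably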
4 Sorting with Non-linear Mappings
To avoid having to test all subsets of neurons/spike trains and thus to allow the use of an exact test (like Fisher's), some pre-ordering of the neurons/spike trains is necessary, which allows us to carry out the tests on a simple linear traversal through the list of vectors. In this way only neighboring neurons/spike trains have to be tested against each other, reducing the total number of tests required. Figure 2 shows an example of a distance matrix for a set of spike trains (namely the spike trains shown as a dot display in the middle diagram of Figure 1). Even the untrained human eye can easily distinguish those elements that indicate a smaller distance between two spike trains than the majority of all matrix entries. To make an algorithm able to identify these elements without the aid of image recognition algorithms, it is desirable to sort all neurons/spike trains that are close to each other (according to the distance measure used) to either side (beginning or end) of the sorted set of neurons/spike trains. Several algorithms for such a (non-)linear mapping are already known, with the classical algorithm known as "Sammon's Non-Linear Mapping" suggesting itself immediately.
Algorithm 1. Sorting with Sammon's non-linear mapping (SSNLM)
Require: List of spike trains L = {n0, . . .}, distance measure d, statistic t, significance level α
Ensure: Set of ensembles A
 1: i := 0
 2: while (|L| > 1) do
 3:   n := |L|
 4:   Ai := ∅
 5:   Let dm be an n × n matrix
 6:   Let L be a list of all IDs
 7:   for all (a, b) ∈ L², a ≥ b do
 8:     dm[a, b] = dm[b, a] = d(a, b)
 9:   end for
10:   Perform Sammon mapping with dm, k = 1, init = PCA, store result in SV
11:   Sort L according to its corresponding entries in SV
12:   if (t(L[0], L[1]) > t(L[n − 2], L[n − 1])) then
13:     reverse L
14:   end if
15:   while (t(L[0], L[1]) < α) do
16:     Ai := Ai ∪ {nL[0], nL[1]}
17:     L := L − {nL[0]}
18:   end while
19:   i := i + 1
20: end while
21: return A = ∪_{j=0}^{i−1} {Aj}
Usually it is applied to project high-dimensional data onto a two- or three-dimensional space, but we will apply it here to project onto a single dimension. Sammon's algorithm aims at keeping the interpoint distances between pairs of vectors in the target space close to the corresponding distances in the original space by minimizing the total error between these distances with an iterative update scheme. The resulting value for each spike train (a one-dimensional mapping) is then used as a sorting criterion for the set of spike trains.
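A minimal sketch of such a one-dimensional Sammon projection is shown below; it uses plain gradient descent on Sammon's stress (the original algorithm uses a second-order update and, as in Algorithm 1, a PCA initialization) and assumes a precomputed distance matrix D with strictly positive off-diagonal entries.

import numpy as np

def sammon_1d(D, n_iter=500, lr=0.1, eps=1e-12, seed=0):
    """Project objects with pairwise distance matrix D onto one dimension
    by gradient descent on Sammon's stress (a simplified sketch)."""
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    rng = np.random.default_rng(seed)
    y = rng.normal(scale=1e-2, size=n)     # small random init (the paper uses PCA init)
    c = D[np.triu_indices(n, 1)].sum()     # normalisation constant of Sammon's stress
    for _ in range(n_iter):
        diff = y[:, None] - y[None, :]
        d = np.abs(diff) + eps             # distances in the one-dimensional projection
        np.fill_diagonal(d, 1.0)
        W = (D - d) / np.maximum(D * d, eps)
        np.fill_diagonal(W, 0.0)
        grad = (-2.0 / c) * (W * diff).sum(axis=1)
        y -= lr * grad
    return y

# Usage: order = np.argsort(sammon_1d(distance_matrix))  # sorting criterion for the spike trains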
5 Algorithm
The algorithm assumes that all data points are given as an unordered list and returns a list of lists, each of which contains a possible substructure. The algorithm works as follows (see Algorithm 1): While there are still neurons/spike trains that have not been assigned to an ensemble or have been rejected, the algorithm continues to look for ensembles (line 2). The first step is to calculate the distance matrix with the distance measure given as an argument (lines 5–9). After the non-linear mapping of this matrix has been performed, the list of neurons/spike trains is sorted according to the result vector (here: a list of real values, lines 10–11). As Sammon’s Non-Linear Mapping may map
a substructure to any side of the real-valued spectrum, we test for the end of the list that shows stronger evidence for rejecting the null hypothesis of independence and—if necessary—simply reverse the list (lines 12–14). As long as any of the consecutive tests produces p-values that are below the chosen level of significance, the corresponding neurons/spike trains are added to the list of the current ensemble (lines 15–18). Each time a test has led to a rejection of the null hypothesis, the first neuron/spike train is also permanently removed from the list of neurons/spike trains, so that it cannot be assigned to any other ensemble.
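The following Python sketch mirrors the loop of Algorithm 1 under stated assumptions: Fisher's exact test (SciPy) serves as the statistic t, the sammon_1d helper from the previous sketch provides the ordering, and, where the pseudocode leaves the case of a non-significant first pair open, the front spike train is simply discarded so that the loop makes progress.

import numpy as np
from scipy.stats import fisher_exact

def pair_pvalue(a, b):
    """Fisher's exact test on the 2x2 contingency table of two boolean spike trains."""
    n11 = int(np.sum(a & b)); n10 = int(np.sum(a & ~b))
    n01 = int(np.sum(~a & b)); n00 = int(np.sum(~a & ~b))
    return fisher_exact([[n00, n01], [n10, n11]], alternative="greater")[1]

def find_ensembles(trains, distance, alpha=0.05, min_size=3):
    """trains: dict id -> boolean array; distance: callable on two spike trains (e.g. a Dice distance)."""
    ids, ensembles = list(trains), []
    while len(ids) > 1:
        D = np.array([[distance(trains[a], trains[b]) for b in ids] for a in ids])
        order = np.argsort(sammon_1d(D))                 # 1-D Sammon projection as sorting key
        L = [ids[i] for i in order]
        if pair_pvalue(trains[L[0]], trains[L[1]]) > pair_pvalue(trains[L[-2]], trains[L[-1]]):
            L.reverse()                                  # stronger evidence should sit at the front
        group, removed = [], []
        while len(L) > 1 and pair_pvalue(trains[L[0]], trains[L[1]]) < alpha:
            group += [x for x in (L[0], L[1]) if x not in group]
            removed.append(L.pop(0))                     # only the first train is removed permanently
        if not removed:
            L.pop(0)                                     # assumption: reject the front train and retry
        if len(group) >= min_size:
            ensembles.append(group)
        ids = L
    return ensembles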
6 Distance Measures
Sammon’s algorithm works on a distance matrix, which we have to produce by computing a distance between pairs of binary vectors (namely the discretized spike trains). For this we can select from large variety of distance measures for binary vectors, which emphasize different aspects of dissimilarity (see, for instance, [4] for an overview). As it is to be expected that some of these measures are better suited to find a suitable reordering than others, we compared several of them, in particular those shown in Table 2. All of these measures are defined in terms of the entries of contingency tables as shown in Table 1, which state how often two neurons A and B spike both (n11 ), spike individually without the other (n01 and n10 ) and spike neither (n00 ), as well as the row and column sums of these numbers (n0. , n1. , n.0 and n.1 ) and the total number of time bins (n.. ). Although necessarily incomplete, Table 2 lists the most common measures. In order to assess how well the distance measures shown in Table 2 are able to distinguish between vectors that contain independent, random noise and those that contain possibly correlated data, we evaluated the different distance measures by an outlier detection method, assuming that all neurons/spike trains not belonging to an ensemble are outliers. Such a method has been proposed in [18]. This algorithm, which is inspired by so-called noise clustering [5], introduces a noise cloud, which has the same distance to each point in the data set. A data point is assigned to the noise cloud if and only if there is no other point in the data set that is closer to it than the noise cloud. The algorithm starts with a noise cloud distance of 0, and therefore at the beginning of the algorithm all points belong to the noise cloud. Then the distance to the noise cloud is slowly increased, causing more and more data points to fall out of the noise cloud and to be assigned to a non-noise cluster. Plotting the distance of the noise cloud against the number of points belonging to the non-noise cluster leads to a curve like those shown in Figure 3. Analyzing the steps in this curve allows us to make some assumptions about possible clusters within the data. Spike trains that have a lot of spikes in common have smaller distances for a properly chosen metric and therefore fall out of the noise cluster earlier than trains that only have a few spikes in common. We assume that neurons/spike trains of an ensemble have significantly smaller distances than neurons/spike trains that do not belong to any ensemble. The plateaus in each curve of Figure 3 indicate that this method is quite able to identify neurons belonging to an ensemble simply by comparing the
Table 1. A contingency table for two neurons A and B

                              neuron B
value               0 (no spike)   1 (spike)    sum
A   0 (no spike)        n00            n01      n0.
    1 (spike)           n10            n11      n1.
    sum                 n.0            n.1      n..
Table 2. Some distance measures used for comparing two item covers

Hamming [14]                  d_Hamming     = (n01 + n10) / n..
Jaccard [16] / Tanimoto [22]  d_Jaccard     = (n01 + n10) / (n01 + n10 + n11)
Dice [6] / Sørensen [21]      d_Dice        = (n01 + n10) / (n01 + n10 + 2 n11)
Rogers & Tanimoto [19]        d_R&T         = 2 (n01 + n10) / (n00 + 2 (n01 + n10) + n11)
Yule [24]                     d_Yule        = 2 n01 n10 / (n11 n00 + n01 n10)
χ2 [4]                        d_χ2          = 1 − (n11 n00 − n01 n10) / √(n0. n1. n.0 n.1)
Correlation [8]               d_Correlation = 1/2 − (n11 n.. − n.1 n1.) / (2 √(n0. n1. n.0 n.1))
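For illustration, a few of the Table 2 measures written out for two boolean spike-train vectors (our sketch; division-by-zero guards omitted for brevity):

import numpy as np

def contingency(a, b):
    n11 = int(np.sum(a & b)); n10 = int(np.sum(a & ~b))
    n01 = int(np.sum(~a & b)); n00 = int(np.sum(~a & ~b))
    return n00, n01, n10, n11

def d_hamming(a, b):
    n00, n01, n10, n11 = contingency(a, b)
    return (n01 + n10) / (n00 + n01 + n10 + n11)

def d_jaccard(a, b):
    n00, n01, n10, n11 = contingency(a, b)
    return (n01 + n10) / (n01 + n10 + n11)

def d_dice(a, b):
    n00, n01, n10, n11 = contingency(a, b)
    return (n01 + n10) / (n01 + n10 + 2 * n11)

def d_yule(a, b):
    n00, n01, n10, n11 = contingency(a, b)
    return 2 * n01 * n10 / (n11 * n00 + n01 * n10)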
Even if the copy probability (probability for a coincident spike to be copied to a time bin) drops to values as small as 0.5, the difference between neurons belonging to an ensemble and those not belonging to one remains visible. However, a problem arising from this approach is that even if all neurons are detected that most likely belong to a relevant ensemble, the structure itself remains unknown as neurons are only assigned to the noise or non-noise clusters. Figure 3 shows this quite well: the last diagram on the top row depicts a case where two ensembles of 20 neurons each (both with the same parameters), which were injected into a total set of 100 neurons, cannot be separated.
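A minimal sketch of the sweep that produces such curves, assuming a precomputed pairwise distance matrix D whose entries are bounded by 1:

import numpy as np

def noise_curve(D, steps=200):
    """A spike train leaves the noise cloud as soon as some other train is closer
    to it than the noise distance; return (noise distances, fraction outside noise)."""
    D = np.asarray(D, dtype=float).copy()
    np.fill_diagonal(D, np.inf)
    nn = D.min(axis=1)                      # nearest-neighbour distance of each train
    deltas = np.linspace(0.0, 1.0, steps)   # the Table 2 measures are bounded by 1
    frac = np.array([(nn < d).mean() for d in deltas])
    return deltas, frac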
7 Experiments
We conducted several experiments to test the quality of our algorithm w.r.t. different parameter sets. These tests include different numbers of ensembles as well as variations of the copy probability of the coincidence process. To generate the test data, a set of non-overlapping ensembles of neurons is chosen and parallel spike trains are simulated with the given parameters. We considered an ensemble to be (completely) found if and only if every neuron belonging to the ensemble has been successfully identified.
[Figure 3: panels labeled default, t=5000, p=0.01, r=0.8, r=0.66, r=0.50, and m=1-20,21-40, each showing curves on axes from 0 to 1, plus a legend listing the distance measures Hamming, Jaccard, Dice, Rogers & Tanimoto, Yule, Chi square, and Correlation. The caption follows.]
Fig. 3. Fraction of neurons not assigned to the noise cloud plotted over the distance from the noise cloud for different distance measures. The default parameters of the underlying data sets are: n = 100 neurons, t = 10000 time bins, p = 0.02 probability of an item occurring in a transaction, m = 1–20 group of neurons potentially spiking together, c = 0.005 probability of a coincident spiking event for the group(s) of neurons, r = 1 probability with which a spike is actually copied from the coincident spiking process. Deviations from these parameters are stated above the diagrams.
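A sketch of the kind of generator these parameters suggest (a simple Bernoulli model with an injected coincidence process; this is an illustration, not the neurobiological simulator used for Figure 1):

import numpy as np

def simulate(n=100, t=10000, p=0.02, group=range(20), c=0.005, r=1.0, seed=0):
    """Boolean spike matrix (neurons x time bins): independent background spikes
    plus a coincidence process copied into the group's trains with probability r."""
    rng = np.random.default_rng(seed)
    spikes = rng.random((n, t)) < p                  # independent background firing
    coincidences = rng.random(t) < c                 # time bins with a coincident event
    for i in group:                                  # copy each coincidence with probability r
        spikes[i] |= coincidences & (rng.random(t) < r)
    return spikes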
We considered an ensemble to be partially found if only a subset of the ensemble's neurons was detected by the algorithm, regardless of how small this subset may be. However, in order to prevent very small (and thus inconclusive) subsets from appearing in the result, the minimum ensemble size to be reported was set to three. The results, for which each configuration was tested several times in order to ensure reliability, are shown in Table 3. The algorithm never reported any neurons belonging to an ensemble not present in the list of actual ensembles. The suppression of false positives in this case most likely originates from the requirement that an ensemble must contain at least three neurons to be reported. In addition to the first test, where different parameters were compared, we performed a series of tests with (nearly) fixed parameters where only the copy probability for the coincidences was altered. While the first tests were performed with a copy probability of 1.0, the following tests were performed with a steadily decreasing copy probability. Table 4 shows the results of these tests. It is clearly visible that the detection rate for complete ensembles decreases with decreasing copy probability. This is not surprising, because with copy probabilities as low as 0.5, only about 50% of the neurons participate in a coincident event and every neuron belonging to an ensemble participates in only about 50% of all coincidences. This leaves plenty of space for coincidences simply disappearing in the background noise. On the other hand, the rate of partially detected ensembles stays stable for a larger range of copy probabilities, indicating that only a few neurons are not recognized by this method as they are not participating in enough coincidences.
Table 3. Experimental results for the assembly detection method based on reordering the items/neurons with the Sammon projection

                                   Test 1    Test 2     Test 3     Test 4    Test 5
# of runs/experiments                  50        50         50         50       500
# of neurons per run                  100       100        100        100       100
# of time bins per spike train      10000     10000      10000       5000      5000
firing probability                   0.02      0.02       0.02       0.03      0.02
coincidence probability            0.0075    0.0075     0.0075     0.0075     0.005
copy probability                      1.0       1.0        1.0        1.0       1.0
assemblies per run                    0-5       0-6        0-5        0-5       0-6
assembly size                          20      5-20         10         10        10
significance level for statistic     0.05      0.05       0.05       0.05      0.05
distance measure used               dDice     dDice   dJaccard      dDice     dYule
total # of assemblies                 108       132        128        118      1227
total # of assemblies found           105        97         86        114      1170
total # of partial ens. found           0         5          4          0        57
success rate                        97.2%     73.5%      67.2%      96.6%     95.4%
success rate (incl. partial finds)  97.2%     77.3%      70.3%      96.6%      100%
Table 4. Experimental results for the assembly detection method based on reordering the items/neurons with the Sammon projection, different copy probabilities

                                   Test 6    Test 7    Test 8    Test 9   Test 10
# of runs/experiments                 250       250       250        50       100
# of neurons per run                  100       100       100       100       100
# of time bins per spike train       5000      5000      5000     10000     10000
firing probability                   0.02      0.02      0.02      0.02      0.02
coincidence probability             0.005     0.005     0.005     0.005     0.005
copy probability                     0.85      0.75       0.6       0.6       0.4
assemblies per run                    0-6       0-6       0-6       0-6       0-6
assembly size                          10        10        10        10        10
significance level for statistic     0.05      0.05      0.05      0.05      0.05
distance measure used               dYule     dYule     dYule     dYule     dYule
total # of assemblies                 648       684       633       129       236
total # of assemblies found           612       633       417       119        30
total # of partial ens. found          35        39       188         4       158
success rate                        94.4%     92.5%     65.9%     92.3%     12.7%
success rate (incl. partial finds)  99.9%     98.3%     95.6%     95.4%     79.7%
Even if the copy probability drops to 0.4, the partial detection rate remains at almost 80%. This means that even with (on average) 60% of the data missing, still 80% of the ensembles were detected at least partially.
8 Conclusions and Future Work
Based on the results shown in the preceding section we can conclude that a testing procedure based on sorting the data and performing only certain
combinations of pair-wise tests based on the result of a non-linear mapping produces very good results for the detection of neural ensembles. This holds even if the quality of the data is not ideal—be it lossy data accumulation or missing spikes due to the underlying (biological) process. The tests performed show that even with 60% of the relevant data (i.e. the coincident spikes) lost, nearly 80% of the ensembles could be detected at least partially. Yet, there is still room to improve on the algorithm. At the moment the decision whether a neuron belongs to a relevant ensemble or not is based on rule-of-thumb values. No sophisticated methods are used to analyze the noise/non-noise line of the outlier detection method (see Figure 3). The difference between the two groups is currently found only by a constant value depending on the distance function used, rather than by using the inflection or the saddle point of the resulting curve. Here a far more analytical approach would be desirable to automate the detection process. Also, overlapping groups cannot be distinguished at the moment, as data points are assigned to at most one substructure. Acknowledgments. This work has been partially supported by the Helmholtz Alliance on Systems Biology.
References 1. Berger, D., Borgelt, C., Diesmann, M., Gerstein, G., Gr¨ un, S.: An Accretion based Data Mining Algorithm for Identification of Sets of Correlated Neurons. In: 18th Ann. Computational Neuroscience Meeting (CNS*2009), Berlin, Germany (2009); BMC Neuroscience, vol. 10(suppl. 1) 2. Besson, J., Robardet, C., Boulicaut, J.-F.: Mining a New Fault-Tolerant Pattern Type as an Alternative to Formal Concept Discovery. In: Proc. Int. Conference on Computational Science (ICCS 2006), Reading, United Kingdom, pp. 144–157. Springer, Berlin (2006) 3. Borgelt, C., Wang, X.: SaM: A Split and Merge Algorithm for Fuzzy Frequent Item Set Mining. In: Proc. 13th Int. Fuzzy Systems Association World Congress and 6th Conf. of the European Society for Fuzzy Logic and Technology (IFSA/EUSFLAT 2009), Lisbon, Portugal, pp. 968–973. IFSA/EUSFLAT Organization Committee, Lisbon (2009) 4. Choi, S.-S., Cha, S.-H., Tappert, C.C.: A Survey of Binary Similarity and Distance Measures. Journal of Systemics, Cybernetics and Informatics 8(1), 43–48 (2010); Int. Inst. of Informatics and Systemics, Caracas, Venezuela 5. Dav´e, R.N.: Characterization and Detection of Noise in Clustering. Pattern Recognition Letters 12, 657–664 (1991) 6. Dice, L.R.: Measures of the Amount of Ecologic Association between Species. Ecology 26, 297–302 (1945) 7. Diesmann, M., Gewaltig, M.-O., Aertsen, A.: Conditions for Stable Propagation of Synchronous Spiking in Cortical Neural Networks. Nature 402, 529–533 (1999) 8. Edwards, A.L.: The Correlation Coefficient. In: An Introduction to Linear Regression and Correlation, pp. 33–46. W.H. Freeman, San Francisco (1976) 9. Gerstein, G.L., Perkel, D.H., Subramanian, K.N.: Identification of Functionally Related Neural Assemblies. Brain Research 140(1), 43–62 (1978)
10. Gionis, A., Mannila, H., Sepp¨ anen, J.K.: Geometric and Combinatorial Tiles in 0-1 Data. In: Boulicaut, J.-F., Esposito, F., Giannotti, F., Pedreschi, D. (eds.) PKDD 2004. LNCS (LNAI), vol. 3202, pp. 173–184. Springer, Heidelberg (2004) 11. Gr¨ un, S., Diesmann, M., Aertsen, A.: ‘Unitary Events’ in Multiple Single-neuron Activity. I. Detection and Significance. Neural Computation 14(1), 43–80 (2002) 12. Gr¨ un, S., Abeles, M., Diesmann, M.: Impact of Higher-order Correlations on Coincidence Distributions of Massively Parallel Data. In: Marinaro, M., Scarpetta, S., Yamaguchi, Y. (eds.) Dynamic Brain - from Neural Spikes to Behaviors. LNCS, vol. 5286, pp. 96–114. Springer, Heidelberg (2008) 13. Berger, D., Borgelt, C., Louis, S., Morrison, A., Gr¨ un, S.: Efficient Identification of Assembly Neurons within Massively Parallel Spike Trains. In: Computational Intelligence and Neuroscience, article ID 439648. Hindawi Publishing Corp., New York (2009/2010) 14. Hamming, R.V.: Error Detecting and Error Correcting Codes. Bell Systems Tech. Journal 29, 147–160 (1950) 15. Hebb, D.O.: The Organization of Behavior. J. Wiley & Sons, New York (1949) ´ 16. Jaccard, P.: Etude comparative de la distribution florale dans une portion des Alpes et des Jura. Bulletin de la Soci´et´e Vaudoise des Sciences Naturelles 37, 547–579 (1901) 17. Lewicki, M.S.: A Review of Methods for Spike Sorting: The Detection and Classification of Neural Action Potentials. Network: Computation in Neural Systems 9, R53–R78 (1998) 18. Rehm, F., Klawonn, F., Kruse, R.: A Novel Approach to Noise Clustering for Outlier Detection. Soft Computing — A Fusion of Foundations, Methodologies and Applications 11(5), 489–494 (2007) 19. Rogers, D.J., Tanimoto, T.T.: A Computer Program for Classifying Plants. Science 132, 1115–1118 (1960) 20. Sammon, J.W.: A Nonlinear Mapping for Data Structure Analysis. IEEE Trans. Comput. 18(5), 401–409 (1969) 21. Sørensen, T.: A Method of Establishing Groups of Equal Amplitude in Plant Sociology based on Similarity of Species and its Application to Analyses of the Vegetation on Danish Commons. Biologiske Skrifter / Kongelige Danske Videnskabernes Selskab 5(4), 1–34 (1948) 22. Tanimoto, T.T.: IBM Internal Report (November 17, 1957) 23. Wang, X., Borgelt, C., Kruse, R.: Mining Fuzzy Frequent Item Sets. In: Proc. 11th Int. Fuzzy Systems Association World Congress (IFSA 2005), Beijing, China, pp. 528–533. Tsinghua University Press and Springer, Beijing, and Heidelberg (2005) 24. Yule, G.U.: On the Association of Attributes in Statistics. Philosophical Transactions of the Royal Society of London, Series A 194, 257–319 (1900)
Towards Automatic Pathway Generation from Biological Full-Text Publications

Ekaterina Buyko (1), Jörg Linde (2), Steffen Priebe (2), and Udo Hahn (1)

(1) Jena University Language & Information Engineering (JULIE) Lab, Friedrich-Schiller-Universität Jena, Germany
    {ekaterina.buyko,udo.hahn}@uni-jena.de
(2) Research Group Systems Biology / Bioinformatics, Leibniz Institute for Natural Product Research and Infection Biology, Hans Knöll Institute (HKI)
    {joerg.linde,steffen.priebe}@hki-jena.de
Abstract. We introduce an approach to the automatic generation of biological pathway diagrams from scientific literature. It is composed of the automatic extraction of single interaction relations which are typically found in the full text (rather than the abstract) of a scientific publication, and their subsequent integration into a complex pathway diagram. Our focus is here on relation extraction from full-text documents. We compare the performance of automatic full-text extraction procedures with a manually generated gold standard in order to validate the extracted data which serve as input for the pathway integration procedure. Keywords: relation extraction, biological text mining, automatic database generation, pathway generation.
1 Introduction
In the life sciences community, a plethora of data is continuously generated by wet lab biologists, running either in vivo or in vitro experiments, and by bioinformaticians conducting in silico investigations. To cope with such massive volumes of biological data suitable abstraction layers are needed. Systems biology holds the promise to transform mass data into abstract models of living systems, e.g., by integrating single interaction patterns between proteins and genes into complex interaction networks and by visualizing them in terms of pathway diagrams (cf., e.g., [9,16]). However, both the determination of single interaction patterns (either from biological databases or biological publications), and their integration into interaction pathways are basically manual procedures which require skilled expert biologists to produce such data abstractions in a time-consuming and labor-intensive process. A typical example of the outcome of this challenging work is the KEGG database.1 Despite all these valuable efforts, Baumgartner et al. [2] have shown that the exponential growth rate of publications already 1
http://www.genome.jp/kegg/pathway.html
outpaces human capabilities to keep pace with the speed of publication of documents relevant for database curation and pathway building. As a long-term goal, we aim at automating the process of pathway generation. This goal can be decomposed into several sub-goals:
– Relation Extraction. The basic building blocks of pathways, viz. single interaction relations between proteins and genes, have to be harvested automatically from the scientific literature, full-text documents in particular.
– Full Text Analytics. Human pathway builders read the full text of scientific publications in order to get a comprehensive and detailed picture of the experiments being reported. In contrast, almost all approaches to automatic relation extraction are based on document surrogates, the accompanying abstracts, only. Running text analytics on full texts rather than on abstracts faces a significant increase in linguistic complexity (e.g., a larger diversity of the types of anaphora). Hence, performance figures differ for abstracts and their associated full texts when such phenomena are accounted for [15,6].
– Relation Integration. Once single interaction relations are made available, they have to be organized in a comprehensive pathway. This requires identifying overlapping arguments, determining causal and temporal orderings among the relations, etc. [12] That procedure is not just intended to replicate human comprehension and reasoning capabilities but, much more, should lead to more complete and even novel pathways that could not have been generated due to the lack of human resources.
In this paper, our focus will be on the first two problems, while the third one is left for future work. Indeed, the success of text mining tools for automatic database generation, i.e., automatic relation extraction procedures complementing the work of human database curators, has already been demonstrated for RegulonDB,2 the world's largest manually curated reference database for the transcriptional regulation network of E. coli (e.g., [14,8,6]). We here focus on a reconstruction of a regulatory network for the human pathogenic microorganism C. albicans, which normally lives as a harmless commensal yeast within the body of healthy humans [13]. However, this fungus can change its benevolent behaviour and cause opportunistic superficial infections of the oral or genital epithelia. Given this pathogenic behaviour, knowledge about the underlying regulatory interactions might help to understand the onset and progression of infections. Despite their importance, the number of known regulatory interactions in C. albicans is still rather small. The (manually built) Transfac database3 collects regulatory interactions and transcription factor binding sites (TFBS) for a number of organisms. However, it includes information about only five transcription factors of C. albicans. The Candida Genome Database4 includes the most up-to-date manually curated gene annotations. Although regulatory interactions might be mentioned, they are not the main focus of the database and often rely on micro-array experiments only.
http://regulondb.ccg.unam.mx/ http://www.biobase-international.com/index.php?id=transfac http://www.candidagenome.org/
Currently, the HKI Institute in Jena is manually collecting transcription factor-target gene interactions for C. albicans. The workflow requires carefully reading research papers and critically interpreting the experimental techniques and results reported therein. Yet, the most time-consuming step is paper reading and understanding the interactions of a transcription factor or its target genes. In fact, it took this group more than two years to identify information about the 79 transcription factors (TFs) for C. albicans. To speed up this process, automatic text-mining algorithms were plugged in to identify regulatory interactions more rapidly and at a higher rate. Yet, such quantitative criteria (processing speed, number of identified relations, etc.) have to be carefully balanced with qualitative ones (reliability and validity of identified relations).
2 Related Work
Although relation extraction (RE) has become a common theme of natural language processing activities in the life sciences domain, it is mostly concerned with general, coarse-grained protein-protein interactions (PPIs). There are only few studies which deal with more fine-grained (and often also more complex) topics such as gene regulation. The Genic Interaction Extraction Challenge [11] marks an early attempt to determine the state-of-the-art performance of systems designed for the detection of gene regulation interactions. The best system achieved a performance of about 50% F-score. The results, however, have to be taken with care as the LLL corpus used in the challenge is severely limited in size. Šarić et al. [17] extracted gene regulatory networks and achieved in the RE task an accuracy of up to 90%. They disregarded, however, ambiguous instances, which may have led to the low recall plunging down to 20%. Rodriguez-Penagos et al. [14] used Šarić et al.'s approach for the first large-scale automatic reconstruction of RegulonDB. The best results scored at 45% recall and 77% precision. Still, this system was specifically tuned for the extraction of transcriptional regulation in E. coli. Buyko and Hahn [6], using a generic RE system [5], achieved 38% recall and 33% precision (F-score: 35%) under real-life conditions. Hahn et al. [8] also pursued the idea of reconstructing curated databases from a more methodological perspective, and compared general rule-based and machine learning (ML)-based system performance for the extraction of regulatory events. Given the same experimental settings, the ML-based system slightly outperformed the rule-based one, with the additional advantage that the ML approach is intrinsically more general and thus scalable.
3 Gene Regulation Events as an Information Extraction Task
3.1 Information Extraction as Relation Extraction
To illustrate the notion and inherent complexity of biological events, consider the following example. “p53 expression mediated by coactivators TAFII40 and
TAFII60.” contains the mention of two protein-protein interactions (PPIs), namely PPI(p53, TAFII40), and PPI(p53, TAFII60). More specifically, TAFII40 and TAFII60 regulate the transcription of p53. Hence, the interactions mentioned above can be described as Regulation of Gene Expression which is much more informative than just talking about PPI. The coarse-grained PPI event can thus further be decomposed into molecular subevents (in our example, Regulation and Gene Expression) or even cascades of events (formally expressed by nested relations). The increased level of detail is required for deep biological reasoning and comes already close to the requirements underlying pathway construction. Algorithmically speaking, event extraction is a complex task that can be subdivided into a number of subtasks depending on whether the focus is on the event itself or on the arguments involved: Event trigger identification deals with the large variety of alternative verbalizations of the same event type, e.g., whether the event is expressed in a verbal or in a nominalized form, in active or passive voice (for instance, “A is expressed ” vs. “the expression of A” both refer to the same event type, viz. Expression(A)). Since the same trigger may stand for more than one event type, event trigger ambiguity has to be resolved as well. Named Entity Recognition is a step typically preceding event extraction. Basically, it aims at the detection of all possible event arguments, usually, within a single sentence. Argument identification is concerned with selecting from all possible event arguments (the set of sentence-wise identified named entities) those that are the true participants in an event, i.e., the arguments of the relation. Argument ordering assigns each true participant its proper functional role within the event, mostly the acting Agent (and the acted-upon Patient/Theme). In the following, we are especially interested in Regulation of Gene Expression (so-called ROGE) events (which are always used synonymous with regulatory relations). For our example above, we first need to identify all possible gene/protein names (p53, TAFII40, TAFII60 ), as well as those event triggers that stand for Gene Expression and Regulation subevents (“expression” and “mediate”, respectively). Using this information, we extract relationships between genes/proteins. Given our example, the outcome are two ROGE events — p53 is assigned the Patient role, while TAFII40 and TAFII60 are assigned the Agent roles. In the next section, we will focus on ROGE events only. 3.2
Regulation of Gene Expression Events
The regulation of gene expression can be described as the process that modulates the frequency, rate or extent of gene expression, where gene expression is the process in which the coding sequence of a gene is converted into a mature gene product or products, namely proteins or RNA (taken from the definition of
the Gene Ontology class Regulation of Gene Expression, GO:0010468).5 Transcription factors, cofactors and regulators are proteins that play a central role in the regulation of gene expression. The analysis of gene regulatory processes is ongoing research work in molecular biology and affects a large number of research domains. In particular, the interpretation of gene expression profiles from micro-array analyses could benefit from exploiting our knowledge of ROGE events as described in the literature. Accordingly, we selected two document collections as training corpora for the automatic extraction of ROGE events. GeneReg Corpus. The GeneReg corpus consists of more than three hundred PubMed6 abstracts dealing with the regulation of gene expression in the model organism E. coli. GeneReg provides three types of semantic annotations: named entities involved in gene regulatory processes, e.g., transcription factors and genes, events involving regulators and regulated genes, and event triggers. Domain experts annotated ROGE events, together with genes and regulators affecting the expression of the genes. This annotation was based on the Gro ontology [3] class ROGE, with its two subclasses Positive ROGE and Negative ROGE. An event instance contains two arguments, viz. Agent, the entity that plays the role of modifying gene expression, and Patient, the entity whose expression is modified. Transcription factors (in core ROGE), or polymerases and chemicals (in auxiliary ROGE) can act as agents.7 The sentence “H-NS and StpA proteins stimulate expression of the maltose regulon in Escherichia coli.” contains two Positive ROGE instances with H-NS and StpA as regulators and maltose regulon as regulated gene group. GeneReg comes with 1,200 core events, plus 600 auxiliary ROGE events. BioNLP Shared Task Corpus. The BioNLP-ST corpus contains a sample of 950 Medline abstracts. The set of molecular event types being covered include, among others, Binding, Gene Expression, Transcription, and (positive, negative, unspecified) Regulation. Buyko et al. [4] showed that the regulation of gene expression can be expressed by means of BioNLP-ST Binding events. For example, the sentence “ModE binds to the napF promoter.” describes a Binding event between ModE and napF. As ModE is a transcription factor, this Binding event stands for the description of the regulation of gene expression and, thus, has to be interpreted as a ROGE event. We selected for our experiments all mentions of Binding events from the BioNLP-ST corpus.
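For illustration, a ROGE event instance can be thought of as a small record of this kind (the field names are ours, not the corpus format); the H-NS/StpA sentence above then yields two instances:

from dataclasses import dataclass

@dataclass
class RogeEvent:
    event_type: str      # "Positive_ROGE" or "Negative_ROGE"
    agent: str           # entity modifying gene expression (e.g., a transcription factor)
    patient: str         # entity whose expression is modified
    trigger: str         # lexical trigger in the sentence

events = [
    RogeEvent("Positive_ROGE", agent="H-NS", patient="maltose regulon", trigger="stimulate"),
    RogeEvent("Positive_ROGE", agent="StpA", patient="maltose regulon", trigger="stimulate"),
]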
4 The JReX Relation Extraction System
The following experiments were run with the event extraction system JReX (Jena Relation eXtractor). Generally speaking, JReX classifies pairs of genes in 5 6 7
http://www.geneontology.org/ http://www.pubmed.org Auxiliary regulatory relations (606 instances) extend the original GeneReg corpus.
sentences as interaction pairs using various syntactic and semantic decision criteria (cf. Buyko et al. [5] for a more detailed description). The event extraction pipeline of JReX consists of two major parts, the pre-processor and the dedicated event extractor. The JReX pre-processor uses a series of text analytics tools to annotate and thus assign structure to the plain text stream using linguistic meta data (see Subsection 4.1 for more details). The JReX event extractor incorporates manually curated dictionaries and ML technology to sort out associated event triggers and arguments on this structured input (see Subsection 4.2 for further details). Given that methodological framework, JReX (from the JulieLab team) scored on 2nd rank among 24 competing teams in the BioNLP’09 Shared Task on Event Extraction,8 with 45.8% precision, 47.5% recall and 46.7% F-score. After the competition, this system was further streamlined and achieved 57.6% precision, 45.7% recall and 51.0% F-score [5], thus considerably narrowing the gap to the winner of the BioNLP’09 Shared Task (Turku team, 51.95% F-score). JReX exploits two fundamental sources of knowledge to identify ROGE events. First, the syntactic structure of each sentence is taken into account. In the past years, dependency grammars and associated parsers have turned out to become the dominating approach for information extraction tasks to represent the syntactic structure of sentences in terms of dependency trees (cf. Figure 1 for an example). Basically, a dependency parse tree consists of (lexicalized) nodes, which are labelled by the lexical items a sentence is composed of, and edges linking pairs of lexicalized nodes, where the edges are labelled by dependency relations (such as SUBject-of, OBJect-of, etc.). These relations form a limited set of roughly 60 types and express fundamental syntactic relations among a lexical head and a lexical modifier. Once a dependency parse has been generated for each sentence, the tree, first, undergoes a syntactic simplification process in that (from the perspective of information extraction) semantically irrelevant lexical material is deleted from this tree. This leads to so-called “trimmed” dependency trees which are much more compact than the original dependency parse trees. For example, JReX prunes auxiliary and modal verbs which govern the main verb in syntactic structures such as passives, past or future tense. Accordingly, (cf. Figure 1), the verb “activate” is promoted to the Root in the dependency graph and governs all nodes that were originally governed by the modal “may”. Second, conceptual taxonomic knowledge of the biology domain is taken into consideration for semantic enrichment. Syntactically trimmed dependency trees still contain lexical items as node labels. These nodes are screened whether they are relevant for ROGE events and, if so, lexical labels are replaced by conceptual labels at increasing levels of semantic generality. For instance (cf. Figure 1), the lexical item “TNF-alpha” is turned into the conceptual label Gene. This abstraction avoids over-fitting of dependency structures for the machine learning mechanisms on which JReX is based. 8
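The following is an illustrative sketch (not JReX's actual code) of the two operations just described, on a toy dependency-graph representation {node_id: (token, head_id, relation)}; the verbal-complement label "VC" and the auxiliary word list are assumptions made for the example.

def trim_modal(graph, verb_rel="VC"):
    """Promote the main verb to the root when its head is a modal sitting at the root."""
    aux = {"may", "might", "can", "could", "will", "would", "shall", "should", "must"}
    roots = [n for n, (tok, head, rel) in graph.items() if head is None]
    for modal in roots:
        if graph[modal][0].lower() not in aux:
            continue
        verbs = [n for n, (tok, head, rel) in graph.items() if head == modal and rel == verb_rel]
        if not verbs:
            continue
        verb = verbs[0]
        tok, _, _ = graph[verb]
        graph[verb] = (tok, None, "ROOT")                 # the main verb becomes the new root
        for other, (t2, h2, r2) in list(graph.items()):
            if h2 == modal and other != verb:
                graph[other] = (t2, verb, r2)             # re-attach the modal's other dependents
        del graph[modal]
    return graph

def overlay_concepts(graph, entities):
    """Replace recognised entity tokens by their semantic class (e.g., 'TNF-alpha' -> 'Gene')."""
    for node, (tok, head, rel) in graph.items():
        if tok in entities:
            graph[node] = (entities[tok], head, rel)
    return graph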
http://www-tsujii.is.s.u-tokyo.ac.jp/GENIA/SharedTask/
Fig. 1. Trimming of dependency graphs
4.1 JReX Pre-processor
As far as pre-processing is concerned, JReX uses the JCore tool suite [7], e.g., the JulieLab sentence splitter and tokenizer. For shallow syntactic analysis, we apply the OpenNLP POS Tagger and Chunker,9 both re-trained on the Genia corpus.10 For dependency parsing, the MST parser [10] was retrained on the Genia Treebank and the parses subsequently converted to the CoNLL'07 format.11 The data is further processed for named entity recognition and normalization with the gene tagger GeNo [18] and a number of regex- and dictionary-based entity taggers (covering promoters, binding sites, and transcription factors).
4.2 JReX Event Extractor
The JReX event extractor accounts for three major subtasks – first, the detection of lexicalized event triggers, second, the trimming of dependency graphs which involves eliminating informationally irrelevant and enriching informationally relevant lexical material, and, third, the identification and ordering of arguments for the event under scrutiny. Event Trigger Detection & Trimming of Dependency Graphs. Event triggers cover the large variety of lexicalizations of an event. Unfortunately, these lexical signals often lack discriminative power relative to individual event types and thus carry an inherent potential for ambiguity.12 For event trigger detection, the JReX system exploits various trigger terminologies gathered from selected corpora (see Subsection 3.2), extended and curated by biology students. The 9 10 11
12
http://incubator.apache.org/opennlp/ http://www-tsujii.is.s.u-tokyo.ac.jp/~ genia/topics/Corpus/ We used Genia Treebank version 1.0, available from http://www-tsujii. is.s.u-tokyo.ac.jp. The conversion script is accessible via http://nlp.cs. lth.se/pennconverter/ Most of the triggers are neither specific for molecular event mentions, in general, nor for a particluar event type. “Induction”, e.g., occurs 417 times in the BioNLP-ST training corpus as putative trigger. In 162 of these cases it acts as a trigger for the event type Positive Regulation, 6 times as a trigger for Transcription, 8 instances trigger Gene Expression, while 241 occurrences do not trigger an event at all.
event trigger detection is performed with the Lingpipe Dictionary Chunker.13 In the next step, JReX performs the deletion and re-arrangement of syntactic dependency graph nodes by trimming dependency graphs. This process amounts to eliminating informationally irrelevant and to enriching informationally relevant lexical nodes by concept overlays as depicted in Figure 1 (for a detailed account and an evaluation of these simplification procedures, cf. [5]). Argument Identification and Ordering. The basic event extraction steps are argument detection and ordering. For argument extraction, JReX builds pairs of triggers and their putative arguments (named entities) or triggerless pairs of named entities only. For this task, JReX uses two classifiers: Feature-based Classifier. Three groups of features are distinguished: (1) lexical features (covering lexical items before, after and between the mentions of the event trigger and an associated argument); (2) chunking features (concerned with head words of the phrases between two mentions); (3) dependency parse features (considering both the selected dependency levels of the arguments, parents and least common subsumer, as well as the shortest dependency path structure between the arguments for walk features). For this feature-based approach, the Maximum Entropy (MaxEnt) classifier from Mallet is used.14 Graph Kernel Classifier. The graph kernel uses a converted form of dependency graphs in which each dependency node is represented by a set of labels associated with that node. The dependency edges are also represented as nodes in the new graph such that they are connected to the nodes adjacent in the dependency graph. Subgraphs which represent, e.g., the linear order of the words in the sentence can be added, if required. The entire graph is represented in terms of an adjacency matrix which is further processed to obtain the summed weights of paths connecting two nodes of the graph (cf. Airola et al. [1] for details). For the graph kernel approach, the LibSVM Support Vector Machine is used as classifier.15
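As a schematic illustration of the "summed weights of paths" idea (a simplification, not the exact kernel of Airola et al. [1], and assuming edge weights small enough for the series to converge):

import numpy as np

def summed_path_weights(A):
    """Sum of weights of all walks between node pairs, i.e., sum over k >= 1 of A^k.
    Requires the spectral radius of the weighted adjacency matrix A to be < 1."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - A) - np.eye(n)

# Toy example: a three-node chain a - b - c with edge weight 0.5.
A = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.0]])
W = summed_path_weights(A)
# W[0, 2] aggregates the weights of all walks from node a to node c; two such
# matrices (one per sentence graph) can then be compared over nodes sharing labels.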
5 Experiments
5.1 Evaluation Scenario and Experimental Settings
C. albicans Interactions as Gold Standard. In the following, the C. albicans Regulatory Network (RN) refers to the collection of regulatory interactions curated by the HKI in Jena. C. albicans RN includes the following information for each regulation event: regulatory gene (the Agent in such an event, a transcription factor (TF)), and the regulated gene (the Patient). Evaluation against C. albicans RN constitutes a challenging real-life scenario. 13 14 15
http://alias-i.com/lingpipe/ http://mallet.cs.umass.edu/index.php/Main_Page http://www.csie.ntu.edu.tw/~ cjlin/libsvm
We use two gold standard sets for our evaluation study, viz. C. albicans RN and C. albicans RN-small. C. albicans RN contains 114 interactions for 31 TFs. As this set does not contain any references from interactions to the full texts they were extracted from, there is no guarantee that the reference full texts are indeed contained in the document sets we used for this evaluation study. Therefore, we built the C. albicans RN-small set, the subset of the C. albicans RN, which is complemented with references to full texts (8 documents) and contains 40 interactions for 7 TFs. C. albicans RN-small serves here as a gold standard set directly linked to the literature for a more proper evaluation scenario of JReX. Processing and Evaluation Settings. For our experiments, we re-trained the JReX argument extraction component on the corpora presented in Section 3.2, i.e., on the GeneReg corpus and on Binding event annotations from the BioNLP-ST corpus. As Binding events do not represent directed relations, we stipulate here that the protein occurring first is assigned the Agent role.16 For argument detection, we used the graph kernel and MaxEnt models in an ensemble configuration, i.e., the union of positive instances was computed.17 To evaluate JReX against C. albicans RN we, first, processed various sets of input documents (see below), collected all unique gene regulation events extracted this way, and compared this set of events against the full set of known events in C. albicans RN. A true positive (TP) hit is obtained when an event found automatically corresponds to one in C. albicans RN, i.e., having the same agent and patient. The type of regulation is not considered. A false positive (FP) hit is counted if an event was found which does not occur in the same way in C. albicans RN, i.e., either patient or agent (or both) are wrong. False negatives (FN) are events covered by C. albicans RN but not found by the system automatically. By default, all events extracted by the system are considered in the "TF-filtered" mode, i.e., only events with an agent from the list of all known TFs for C. albicans from C. albicans RN are considered. From the hits we calculated standard precision, recall, and F-score values. Of course, the system performance largely depends on the size of the base corpus collection being processed. Thus, for all three document sets we got separate performance scores. Input Document Sets. Various C. albicans research paper sets were prepared for the evaluation against the C. albicans gold standards. We used a PubMed search for "Candida albicans" and downloaded 17,750 Medline abstracts and 6,000 freely available papers in addition to approximately 1,000 non-free full-text articles collected at the HKI. An overview of the data sets involved is given in Table 1. The CA set contains approx. 17,750 Medline abstracts, the CF holds more than 7,000 documents. The CF-small set is composed of 8 full
In particular transcription factors that bind to regulated genes are mentioned usually before the mention of regulated genes. That is, an SVM employing a graph kernel as well as a MaxEnt model were used to predict on the data. Both prediction results were merged by considering each example as a positive event that had been classified as a positive event by either the SVM or the MaxEnt model.
Table 1. Document sets collected for the evaluation study

Document Set                               Number of Documents
CA - C. albicans abstracts                      17,746
CF - C. albicans full texts                      7,024
CF-small - C. albicans RN-small texts                8
texts from the CF set that are used as references for C. albicans interactions from C. albicans RN-small.
5.2 Experimental Results
We ran JReX with the re-trained argument extractor on all document sets introduced above. As baseline we decided on simple sentence-wise co-occurrence of putative event arguments and event triggers, i.e., if two gene name mentions and at least one event trigger appear in a sentence, that pair of genes is considered to be part of a regulatory relation. As the regulatory relation (ROGE event) is a directed relation, we built two regulatory relations with interchanged Agent and Patient. The results of the baseline and JReX runs are presented in Table 2 for the C. albicans RN set and Table 3 for the C. albicans RN-small set. Using the baseline, the best recall could be achieved on full texts with 0.84 points on the C. albicans RN (see Table 2) and with 0.90 points on the C. albicans RN-small set (see Table 3). For the abstract sets we could achieve only low recall values of 0.27 points and 0.31 points on both sets, respectively. This outcome confirms the reasonable assumption that full text articles contain considerably more mentions of interaction events than their associated abstracts. Still, the precision of the baseline on full text articles is miserable (0.01 on the C. albicans RN set and 0.02 on the C. albicans RN-small set). This data indicates that a more sophisticated approach to event extraction such as the one underlying the JReX system is much needed. On the C. albicans RN set, with JReX we achieved 0.35 recall and 0.18 precision points on full texts (see Table 2). Given the lack of literature references from the C. albicans RN set, we cannot guarantee that our document collection contains all relevant documents. Therefore, we used the C. albicans RN-small set for a proper evaluation of JReX. The best JReX-based results could be achieved on full texts analyzing the 8 officially referenced full-text articles. JReX achieves here 0.52 points F-score with 0.54 points recall and 0.51 points precision (see Table 3).

Table 2. Event extraction evaluated on full data set C. albicans RN for known transcription factors in C. albicans. Recall/Precision/F-score values are given for each document set.

Text Mining       Full Texts        Abstracts         All Sets
                  R/P/F             R/P/F             R/P/F
Co-occurrence     0.84/0.01/0.03    0.27/0.09/0.14    0.84/0.01/0.03
JReX              0.35/0.18/0.24    0.13/0.29/0.18    0.35/0.18/0.23
Table 3. Event extraction evaluated on referenced data set C. albicans RN-small for known transcription factors in C. albicans. Recall/Precision/F-score values are given for each document set.

Text Mining       Full Texts        Abstracts         C. albicans RN-small texts   All Sets
                  R/P/F             R/P/F             R/P/F                        R/P/F
Co-occurrence     0.90/0.02/0.03    0.31/0.13/0.18    0.72/0.14/0.24               0.90/0.02/0.03
JReX              0.64/0.30/0.41    0.13/0.28/0.18    0.54/0.51/0.52               0.64/0.29/0.40
This level of performance matches fairly well with the results JReX obtained in the BioNLP 2009 Shared Task. When we use all the data sets for the evaluation on the C. albicans RN-small set, our recall peaks at 0.64 points with a reasonable precision of 0.29 and an F-score of 0.40 (see Table 3). As the recall figures of our baseline reveal that abstracts contain far fewer ROGE events than full texts, we have strong evidence that the biomedical NLP community needs to process full-text articles to effectively support database curation. As full-text documents are linguistically more complex and thus harder to process, the relative amount of errors is higher than on abstracts. The manual analysis of false negatives revealed that we miss, in particular, relations that are described in a cross-sentence manner using coreferential expressions. As JReX extracts relations between entities only within the same sentence, the next step should be to incorporate cross-sentence mentions of entities as well.
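A small sketch of the scoring protocol described in Section 5.1, with predicted and gold events represented as (agent, patient) pairs and the TF filter applied before counting (the helper names are ours):

def score(predicted, gold, known_tfs):
    """Precision, recall and F-score of predicted (agent, patient) pairs against a gold set."""
    predicted = {(a, p) for (a, p) in predicted if a in known_tfs}   # "TF-filtered" mode
    tp = len(predicted & gold)
    fp = len(predicted - gold)
    fn = len(gold - predicted)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f_score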
6 Conclusion
Pathway diagrams for the visualization of complex biological interaction processes between genes and proteins are an extremely valuable generalization of tons of experimental biological data. Yet, these diagrams are usually built by highly skilled humans. As an alternative, we here propose to automate the entire process of pathway generation. The focus of this work, however, is on extracting relations, the building blocks of pathways, from the original full-text publications rather than from document surrogates such as abstracts, the common approach. Our evaluation experiments reveal that indeed full texts have to be screened rather than informationally poorer abstracts. This comes at the price of dealing with more complex linguistic material—full texts exhibit linguistic structures (e.g., anaphora) typically not found (at that rate) in abstracts. After the integration of anaphora resolution tools, the next challenge will consist of integrating all harvested relations into a single, coherent pathway, thus reflecting causal, temporal, and functional ordering constraints among the single relations [12]. Acknowledgments. This work is partially funded by a grant from the German Ministry of Education and Research (BMBF) for the Jena Centre of Systems Biology of Ageing (JenAge) (grant no. 0315581D).
References 1. Airola, A., Pyysalo, S., Bj¨ orne, J., Pahikkala, T., Ginter, F., Salakoski, T.: A graph kernel for protein-protein interaction extraction. In: BioNLP 2008 – Proceedings of the ACL/HLT 2008 Workshop on Current Trends in Biomedical Natural Language Processing, Columbus, OH, USA, June 19, pp. 1–9 (2008) 2. Baumgartner Jr., W.A., Cohen, K.B., Fox, L.M., Acquaah-Mensah, G., Hunter, L.: Manual curation is not sufficient for annotation of genomic databases. Bioinformatics (ISMB/ECCB 2007 Supplement) 23(13), i41–i48 (2007) 3. Beisswanger, E., Lee, V., Kim, J.j., Rebholz-Schuhmann, D., Splendiani, A., Dameron, O., Schulz, S., Hahn, U.: Gene Regulation Ontology gro: Design principles and use cases. In: MIE 2008 – Proceedings of the 20th International Congress of the European Federation for Medical Informatics, G¨ oteborg, Sweden, May 26-28, pp. 9–14 (2008) 4. Buyko, E., Beisswanger, E., Hahn, U.: The GeneReg corpus for gene expression regulation events: An overview of the corpus and its in-domain and out-of-domain interoperability. In: LREC 2010 – Proceedings of the 7th International Conference on Language Resources and Evaluation, La Valletta, Malta, May 19-21, pp. 2662– 2666 (2010) 5. Buyko, E., Faessler, E., Wermter, J., Hahn, U.: Syntactic simplification and semantic enrichment: Trimming dependency graphs for event extraction. Computational Intelligence 27(4) (2011) 6. Buyko, E., Hahn, U.: Generating semantics for the life sciences via text analytics. In: ICSC 2011 – Proceedings of the 5th IEEE International Conference on Semantic Computing, Stanford University, CA, USA (September 19-21, 2011) 7. Hahn, U., Buyko, E., Landefeld, R., M¨ uhlhausen, M., Poprat, M., Tomanek, K., Wermter, J.: An overview of JCoRe, the Julie Lab Uima component repository. In: Proceedings of the LREC 2008 Workshop ‘Towards Enhanced Interoperability for Large HLT Systems: UIMA for NLP’, Marrakech, Morocco, May 31, pp. 1–7 (2008) 8. Hahn, U., Tomanek, K., Buyko, E., Kim, J.J., Rebholz-Schuhmann, D.: How feasible and robust is the automatic extraction of gene regulation events? A crossmethod evaluation under lab and real-life conditions. In: BioNLP 2009 – Proceedings of the NAACL/HLT BioNLP 2009 Workshop, Boulder, CO, USA, June 4-5, pp. 37–45 (2009) 9. Luciano, J.S., Stevens, R.D.: e-sience and biological pathway semantics. BMC Bioinformatics 8 (Suppl 3) (S3) (2007) 10. McDonald, R.T., Pereira, F., Kulick, S., Winters, R.S., Jin, Y., Pete, W.: Simple algorithms for complex relation extraction with applications to biomedical IE. In: ACL 2005 – Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, MI, USA, June 25-30, pp. 491–498 (2005) 11. N´edellec, C.: Learning Language in Logic: Genic interaction extraction challenge. In: Proceedings LLL 2005 – 4th Learning Language in Logic Workshop, Bonn, Germany, August 7, pp. 31–37 (2005) 12. Oda, K., Kim, J.D., Ohta, T., Okanohara, D., Matsuzaki, T., Tateisi, Y., Tsujii, J.: New challenges for text mining: Mapping between text and manually curated pathways. BMC Bioinformatics 9(suppl. 3) (S5) (2008) 13. Odds, F.C.: Candida and Candidosis, 2nd edn. Bailli`ere Tindall, London (1988)
14. Rodr´ıguez-Penagos, C., Salgado, H., Mart´ınez-Flores, I., Collado-Vides, J.: Automatic reconstruction of a bacterial regulatory network using natural language processing. BMC Bioinformatics 8(293) (2007) 15. Sanchez, O., Poesio, M., Kabadjov, M.A., Tesar, R.: What kind of problems do protein interactions raise for anaphora resolution? A preliminary analysis. In: SMBM 2006 – Proceedings of the 2nd International Symposium on Semantic Mining in Biomedicine, Jena, Germany, April 9-12, pp. 109–112 (2006) 16. Viswanathan, G.A., Seto, J., Patil, S., Nudelman, G., Sealfon, S.C.: Getting started in biological pathway construction and analysis. PLoS Computational Biology 4(2), e16 (2008) ˇ c, J., Jensen, L.J., Ouzounova, R., Rojas, I., Bork, P.: Extracting regulatory 17. Sari´ gene expression networks from PubMed. In: ACL 2004 – Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, Barcelona, Spain, July 21-26, pp. 191–198 (2004) 18. Wermter, J., Tomanek, K., Hahn, U.: High-performance gene name normalization with GeNo. Bioinformatics 25(6), 815–821 (2009)
Online Writing Data Representation: A Graph Theory Approach Gilles Caporossi1 and Christophe Leblay2 1
GERAD and HEC Montr´eal, Montr´eal, Canada ITEM, Paris, France and SOLKI, Jyv¨ askyl¨ a, Finland
[email protected],
[email protected] 2
Abstract. There are currently several systems to collect online writing data in keystroke logging. Each of these systems provides reliable and very precise data. Unfortunately, due to the large amount of data recorded, it is almost impossible to analyze except for very limited recordings. In this paper, we propose a representation technique based upon graph theory that provides a new viewpoint to understand the writing process. The current application is aimed at representing the data provided by ScriptLog although the concepts can be applied in other contexts.
1 Introduction
Recent approaches to the study of writing based upon online records contrast with those based upon paper versions. The latter are oriented towards the space of the page, while the former emphasize the temporal dimension. Models based upon online records have been developed since the 1980s, with the pioneering work of Matsuhashi [13]. Returning to a bipolar division, Matsuhashi suggests distinguishing between the conceptual level (semantics, grammar and spelling) and the sequential plan (planning and phrasing). The vast majority of the work that followed is tied to specific recording software. Thus, the studies by Ahlsén & Strömqvist [2] and Wengelin [19] are directly related to the ScriptLog application, those of Sullivan & Lindgren [16] to JEdit, those of Van Waes & Schellens [17] and Van Waes & Leijten [18] to the InputLog software, those of Jakobsen [9] to the Translog software, and finally those of Chesnet & Alamargot [6] to the Eye and Pen software. Without denying the value of work on revision seen as a product (the final text), these new approaches all stress that writing is primarily a temporal activity. The multitude of software tools developed for online recording of the writing activity shows a clear interest in the study of the writing process. Whether from the cognitive psychology or the didactics point of view, analyzing the writing activity as a process is very important for researchers. What all these studies share is a common concern: the record of writing must be associated with its representation. As the raw data collected by the software is very detailed, it is difficult to analyze without preprocessing and without a proper representation that can be used conveniently by the researcher.
In this paper, we propose a new representation technique that allows the visual identification of basic operations (such as insertion, deletion, etc.), but also of portions of the document according to the processing activity of the writer. Each time this new representation technique was presented to psychologists or linguists, it received very positive feedback. The paper is organized as follows: the next section describes the data involved and the third presents some visualization techniques. In the fourth and fifth sections, we describe the proposed methodology and some transpositions of classical linguistic transformations of texts. The concepts proposed in the paper are illustrated by examples from the recording of the writing of a short 15-minute essay under ScriptLog 1.4 (Mac version). This corpus, composed entirely of Finnish-speaking writers [10], includes novice writers (first year of college: Finnish, French, Core) and expert writers (university teachers).
2 Data
The elementary events recorded by software such as ScriptLog are keyboard keystrokes or mouse clicks, as shown in Figure 1. They represent the basic units that technically suffice to describe the whole writing process. Using them with no prior treatment may however not be a convenient way to look at the data. For instance, studying pauses, their lengths and locations, does not require the same level of information as studying the text revision process. In both cases, the same basic information is used, but the researcher needs to apply aggregation at a different level depending on the question at hand. Preprocessing of the data can therefore often not be avoided. Given the large amount of data produced by the system (the log file corresponding to 15 minutes of recording may contain up to 2000 lines), this preprocessing should, if possible, be automated to avoid errors.
Fig. 1. Excerpt from a log file (log) obtained using ScriptLog
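As an illustration of aggregation at the level of pauses, the following sketch (in Python) works on a simplified event format, a timestamp in milliseconds together with an event label and a cursor position; this format is an assumption made for the example and is not the actual ScriptLog log format.

# Hypothetical, simplified event records: (time in ms, event label, cursor position).
# This is NOT the real ScriptLog format; it only illustrates aggregation of raw events.
events = [
    (1200, "T", 0), (1350, "h", 1), (1480, "e", 2),
    (4980, "SPACE", 3), (5100, "c", 4), (5230, "a", 5), (5360, "t", 6),
]

def pauses(events, threshold_ms=2000):
    """Return (start time, duration) of inter-event pauses above a threshold."""
    result = []
    for (t0, _, _), (t1, _, _) in zip(events, events[1:]):
        if t1 - t0 >= threshold_ms:
            result.append((t0, t1 - t0))
    return result

def mean_interkey_interval(events):
    """Average delay between consecutive events, in milliseconds."""
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    return sum(gaps) / len(gaps) if gaps else 0.0

print(pauses(events))                  # [(1480, 3500)]
print(mean_interkey_interval(events))  # roughly 693 ms

The same raw events would be aggregated differently (for instance, into whole revision episodes) when the research question concerns revision rather than pausing behaviour.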
3 Visualization Techniques
Visualization is an important part of studying the writing process, but few visualization techniques actually exist. One of them is the so-called linear representation, in which every character written is displayed. Should a portion be deleted, it is crossed out instead of being removed, in order to show the text production as a process and not only as a final product. Cursor movements by arrows or mouse are also identified, so that it is possible to follow the text construction process. An example of linear representation is given in Figure 2. Such a representation has the advantage of displaying the text, but it remains difficult to understand in the case of a complex creation.
Fig. 2. Linear representation of a short 15-minute text
Another way to visualize the creation process is based upon a few values summarizing the text produced so far. Au fil de la plume in Genèse du texte [5] displays the position of the cursor as well as the total length of the text as a function of time. Such an approach indicates zones where the writer modifies already written text (see Figure 3). This type of representation is also referred to as the "GIS representation" and is used in various software packages such as InputLog [16]. One weakness of the GIS representation is that the position of visible text may no longer be correct as soon as an insertion or deletion occurs at an earlier position. Since the position of any character may be altered, it is difficult to figure out which part of the text is involved in any subsequent modification. Another weak point is the lack of reference to the text corresponding to the points on the graphic.
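A minimal sketch of how such curves can be derived from an event log is given below (in Python); the event encoding, insertions and deletions with an explicit position, is a simplification chosen for the example and not the format of any particular logging tool.

# Simplified events: (time, operation, position, argument); not a real logger format.
log = [
    (0, "ins", 0, "Hello"),
    (5, "ins", 5, " world"),
    (9, "del", 5, 6),        # delete 6 characters starting at position 5
    (12, "ins", 5, " there"),
]

def gis_series(log):
    """Return (time, cursor position, total text length) triples, the two
    quantities plotted in a GIS-style representation."""
    text, series = "", []
    for t, op, pos, arg in log:
        if op == "ins":
            text = text[:pos] + arg + text[pos:]
            cursor = pos + len(arg)
        else:  # "del": arg is the number of characters removed
            text = text[:pos] + text[pos + arg:]
            cursor = pos
        series.append((t, cursor, len(text)))
    return series

for point in gis_series(log):
    print(point)   # (0, 5, 5), (5, 11, 11), (9, 5, 5), (12, 11, 11)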
4 Method: Graph Representation
To address the problem of the moving position of written text after revision, we propose here a slightly different approach in which each character is not described by its absolute position when written. Instead, we use a relative position, which proves more suitable for representing the dynamic aspect of the writing activity. Sequences of keystrokes are merged together to form an entity involved in the conception of the text, which is represented by a node in the graph. Should two nodes interact, either by a chronological or a spatial relation, they are joined by an edge, or link, showing this relation.
Graphs are mathematical tools based on nodes, or vertices, that are possibly connected by links, or edges. Some application fields are more or less closely related to graph theory. Chemistry is closely related to graph theory, as some graph-theoretical results apply directly to chemistry [3][4]. Other applications rely on the algorithmic side of graph theory and networks, such as transportation, scheduling and communication. Since the 1990s, graphs have also been used for representation purposes in the human sciences by means of concept maps [14]. In the present paper, we propose to use graph representation to visualize a new kind of data.

Fig. 3. Au fil de la plume - Genèse du Texte

Examples of graph representations of the writing process are drawn in Figure 4, for a novice production, and Figure 5, for an expert one.
– The size of a vertex is related to the number of elementary operations it represents. In the case of the novice writer, there are a few large nodes, which indicates a higher frequency of errors or typos. The text corresponding to each node could be displayed within the node, which would provide a representation close to the linear representation, but we decided not to do so here in order to keep the representations as simple as possible.
– The structure of the graph is also very informative: the graph of the novice is almost linear, while a portion (in the middle) of the graph of the expert is much more complicated. This complex portion, between nodes 37 and 79, represents a part of the production that was rewritten and changed at a higher level, clearly not just from the lexical point of view.
Fig. 4. Graph visualization: an example of a novice writer
The key or mouse events found in the records are of three types: (i) additions or insertions of a character or a space, (ii) deletions of characters or spaces, and (iii) cursor moves by means of the arrow keys or the mouse. Spatially and temporally contiguous sequences are merged and represented by the nodes of the graph, as illustrated in the sketch below.
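One possible way to implement this merging step is sketched below (in Python, again with a hypothetical event encoding): consecutive events are grouped into one node as long as they are of the same type and contiguous both in time and in position.

# Hypothetical simplified events: (time, type, position), type in {"add", "del", "move"}.
def merge_into_nodes(events, max_gap_ms=2000):
    """Group events that are contiguous in time, space and type into nodes."""
    nodes = []
    for ev in events:
        t, kind, pos = ev
        if nodes:
            last = nodes[-1]
            same_kind = last["kind"] == kind
            in_time = t - last["end_time"] <= max_gap_ms
            in_space = abs(pos - last["end_pos"]) <= 1
            if same_kind and in_time and in_space:
                last["events"].append(ev)
                last["end_time"], last["end_pos"] = t, pos
                continue
        nodes.append({"kind": kind, "events": [ev],
                      "start_time": t, "end_time": t, "end_pos": pos})
    return nodes

events = [(0, "add", 0), (150, "add", 1), (300, "add", 2),
          (5000, "del", 2), (5150, "del", 1),
          (9000, "add", 1), (9200, "add", 2)]
print(len(merge_into_nodes(events)))   # 3 nodes: an addition, a deletion, an addition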
4.1 Nodes
The size and color of each node indicate, respectively, the number of elementary events it represents and their nature. An addition that has later been removed appears in yellow, an addition that remains until the final text is drawn in red, and deletions are displayed in blue. The final text thus appears in red, while modifications that do not appear in the final text are either yellow or blue depending on their nature. The nodes are numbered according to their creation sequence.
4.2 Links
The nodes are connected by links, or edges, representing a spatial or temporal relation. The shape and color of the edges indicate the nature of this relation. A solid line represents the chronological link (solid lines draw a path from node 0 to the last node through all nodes in chronological order). Other links between nodes necessarily correspond to spatial relations. The link between an addition node and its deletion counterpart is drawn in blue, and the spatial link between nodes that are part of the final text appears in red. Reading the content of nodes
along the red link will therefore display the whole text in its final version. Note that the path describing the final text is composed of red nodes that are linked by red links.

Fig. 5. Graph visualization: an example of an expert writer
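A sketch of how such a graph could be assembled is given below (in Python, using the networkx library). The node attributes and the toy values follow the description in Sections 4.1 and 4.2, but the concrete encoding is an illustration of ours, not the authors' implementation.

import networkx as nx

# Toy nodes produced by the merging step; colours follow Section 4.1:
# red = addition kept in the final text, yellow = addition later deleted, blue = deletion.
nodes = [
    (0, {"colour": "red",    "size": 12}),
    (1, {"colour": "yellow", "size": 4}),
    (2, {"colour": "blue",   "size": 4}),   # deletion of node 1
    (3, {"colour": "red",    "size": 20}),
]

g = nx.MultiGraph()            # parallel chronological and spatial links may coexist
g.add_nodes_from(nodes)

# Solid chronological path through all nodes, in creation order.
for a in range(len(nodes) - 1):
    g.add_edge(a, a + 1, relation="chronological")

# Spatial links: blue between an addition and its deletion, red along the final text.
g.add_edge(1, 2, relation="deletes", colour="blue")
g.add_edge(0, 3, relation="final_text", colour="red")

final_nodes = [n for n, d in g.nodes(data=True) if d["colour"] == "red"]
print(final_nodes)   # [0, 3]: reading these nodes along the red links gives the final text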
5 Analysis of Graphical Patterns
Different avenues are possible for the analysis of these graphs; we concentrate here on the most useful ones. We first identify patterns that correspond to some classical operations involved in the writing process. From a technical point of view, some operations correspond to special subgraphs which can easily be recognized. The identification of these subgraphs is useful for analyzing the graph as a representation of the writing process.
5.1 Additions and Insertions
Adding text can occur in three ways: (i) adding text at the end of the node that is being written, which is not represented by any special pattern; (ii) inserting text in the node currently being written, but not at its end, which causes this node to be split so that a triangle with a solid red line appears, as illustrated in Figure 6 (this solid red line is crossed in one way or the other depending on whether we follow the spatial or the chronological order); and (iii) inserting text in a node that is already written, which causes this node to split into the configuration shown in Figure 7. From the graphic and linguistic standpoints, insertion corresponds
to adding text inside already written text (ii and iii), while addition proper is the development of the text at its end (i).

Fig. 6. Insertion in the current node

Fig. 7. Insertion

5.2 Deletions
In the case of deletions, as for additions, different subgraphs are found depending on whether we are erasing the end of the last node, a part of the last node, or a part of a node that was already written. The case of an immediate suppression (e.g. after a typing error) is shown in Figure 8, where the text from node 4 is immediately removed by node 5; a deletion in the last node but not at its end is presented in Figure 9, where node 110 is removed by node 112. A delayed removal results in the subgraph shown in Figure 10.
Fig. 8. Immediate deletion
5.3 Substitutions
In addition to these simple operations, some more complex operations may be viewed as sequences of these simple operations, but they nevertheless correspond to special subgraphs that may easily be recognized. For instance, replacement
may be viewed as a deletion immediately followed by an insertion at the same place. Figure 11 represents the subgraph corresponding to a replacement in a node that was already written. The replacement of a string in the last node, but not at its end, is shown in Figure 12. The interpretation of a replacement at the end of the last node is more complex because, from a technical point of view, it is impossible to know whether the addition comes instead of the deleted portion or after it. The deletion is not bounded and the addition may extend beyond the replacement, which is difficult to identify. In this case, some other information must be used by the researcher to interpret the sequence in one way or the other.

Fig. 9. Deletion of a part of the last node

Fig. 10. Delayed elimination
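As an illustration, a substitution pattern of the kind shown in Figure 11 could be searched for mechanically as in the sketch below (in Python). It assumes the encoding of the earlier networkx sketch, with integer node identifiers in creation order, node colours as in Section 4.1 and a "deletes" link between an addition and its deletion; this is an assumption of the example, not the authors' code, and a fuller version would also check that the new text appears at the deleted position.

def find_substitutions(g):
    """Detect a deletion node immediately followed, in creation order, by a new
    addition node: the simple substitution pattern. Positions are not checked here."""
    matches = []
    for u, v, data in g.edges(data=True):
        if data.get("relation") != "deletes":
            continue
        deletion = v if g.nodes[v]["colour"] == "blue" else u
        for w in g.neighbors(deletion):
            if w == deletion + 1 and g.nodes[w]["colour"] in ("red", "yellow"):
                matches.append((deletion, w))
    return matches

# With the toy graph g from the previous sketch, find_substitutions(g) returns [(2, 3)].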
6 Summary and Future Research Directions
In this paper, we propose a new representation technique for written language production in which the problem of moving text positions is handled. This technique allows the researcher to easily identify the portion of the document the writer is modifying. According to the reactions of researchers in linguistics working on the writing process, this representation is easier to understand than those previously available. An important aspect is the capability to visualize modification patterns from a spatial and temporal point of view in the same representation. It also seems that intuition is more stimulated by a graph representation than it could be by linear or GIS representations. Some important aspects of the graph representation of writing need further investigation.
– Emphasis on the temporal aspect, by inserting nodes corresponding to long pauses (the minimum duration of a pause may be defined by the user), or by indicating the time and duration corresponding to each node.
– Distinguishing the various levels of text improvement as defined by Faigley and Witte [7], by separating surface modifications (correction of typos,
orthographical adjustments, etc.) from text-based modifications (reformulation, syntactic changes, etc.). A first step in this direction would be to differentiate nodes involving more than a single word, which may be identified by the presence of a space before and after the visible characters. A second step would be to improve the qualification of the nature of the transformation represented by a node; this last part requires tools from computational linguistics.
– Automating the graph drawing, which is currently done by hand: devising an algorithm that would automatically place vertices in such a way that (i) patterns are easy to recognize and (ii) the spatial aspect is preserved as much as possible, so that following the writing process remains easy.

Fig. 11. Delayed substitution

Fig. 12. Substitution in the last node
References 1. Alamargot, D., Chanquoy, L.: Through the Models of Writing. Studies in Writing. Kluwer Academic Pubishers, Dordrecht (2001) 2. Ahls´en, E., Str¨ omqvist, S.: ScriptLog: A tool for logging the writing process and its possible diagnostic use. In: Loncke, F., Clibbens, J., Arvidson, H., Lloyd, L. (eds.) Argumentative and Alternative Communication: New Directions in Research and Practice, pp. 144–149. Whurr Publishers, London (1999) 3. Caporossi, G., Cvetkovi´c, D., Gutman, I., Hansen, P.: Variable Neighborhood Search for Extremal Graphs. 2. Finding Graphs with Extremal Energy. J. Chem. Inf. Comput. Sci. 39, 984–996 (1999)
4. Caporossi, G., Gutman, I., Hansen, P.: Variable Neighborhood Search for Extremal Graphs. 4. Chemical Trees with Extremal Connectivity Index. Computers and Chemistry 23, 469–477 (1999) 5. Chenouf, Y., Foucambert, J., Violet, M.: Gen`ese du texte.Technical report 30802 - Institut National de Recherche P´edagogique (1996) 6. Chesnet, D., Alamargot, D.: Analyse en temps r´eel des activit´es oculaires et graphomotrices du scripteur: int´erˆets du dispositif Eye and Pen. L’ann´ee Psychologique 32, 477–520 (2005) 7. Faigley, L., Witte, S.: Analysing revision. College composition and communication 32, 400–414 (1981) 8. Jakobsen, A.L.: Logging target text production with Translog. In: Hansen, G. (ed.) Probing the Process in Translation. Methods and Results, Samfundslitteratur, Copenhagen, pp. 9–20 (1999) 9. Jakobsen, A.L.: Research Methods in Translation: Translog. In: Sullivan, K.P.H., Lindgren, E. (eds.) Computer Key-Stroke Logging and Writing: Methods and Applications, pp. 95–105. Elsevier, Amsterdam (2006) 10. Leblay, C.: Les invariants processuels. En de¸ca ` du bien et du mal ´ecrire. Pratiques 143/144, 153–167 (2009) 11. Leijten, M., Van Waes, L.: Writing with speech recognition: the adaptation process of professional writers. Interacting with Computers 17, 736–772 (2005) 12. Lindgren, E., Sullivan, K.P.H., Lindgren, U., Spelman Miller, K.: GIS for writing: applying geographic information system techniques to data-mine writing’s cognitive processes. In: Ri- Jlaarsdam, G. (series ed.), Torrance, M., Van Waes, L., Galbraith, D. (vol. eds.) Writing and Cognition: Research and Applications, pp. 83–96. Elsevier, Amsterdam (2007) 13. Matsuhashi, A.: Revising the plan and altering the text. In: Matsuhashi, A. (ed.) Writing in Real Time, pp. 197–223. Ablex Publishing Corporation, Norwood (1987) 14. Novak, J.D.: Concept maps and Vee diagrams: Two metacognitive tools for science and mathematics education. Instructional Science 19, 29–52 (1990) 15. Str¨ omqvist, S., Karlsson, H.: ScriptLog for Windows - User’s manual. Technical report - University of Lund: Department of Linguistic and University College of Stavanger: Centre for Reading Research (2002) 16. Sullivan, K.P.H., Lindgren, E. (eds.): Computer Keystroke Logging and Writing: Methods and Applications. Elsevier, Amsterdam (2006) 17. Van Waes, L., Schellens, P.J.: Writing profiles: The effect of the writing mode on pausing and revision patterns of experienced writers. Journal of Pragmatics 35(6), 829–853 (2003) 18. Van Waes, L., Leijten, M.: Inputlog: New Perspectives on the Logging of On-Line Writing Processes in a Windows Environment. In: Sullivan, K.P.H., Lindgren, E. (eds.) Computer Key-Stroke Logging and Writing: Methods and Applications, pp. 73–93. Elsevier, Amsterdam (2006) ¨ Examining pauses in writing: Theories, methods and empirical data. 19. Wengelin, A.: In: Sullivan, K.P.H., Lindgren, E. (eds.) Computer Key-Stroke Logging and Writing: Methods and Applications, pp. 107–130. Elsevier, Amsterdam (2006)
Online Evaluation of Email Streaming Classifiers Using GNUsmail

José M. Carmona-Cejudo¹, Manuel Baena-García¹, José del Campo-Ávila¹, Albert Bifet², João Gama³, and Rafael Morales-Bueno¹

¹ Universidad de Málaga, Spain
² University of Waikato, New Zealand
³ University of Porto, Portugal
[email protected]

Abstract. Real-time email classification is a challenging task because of its online nature, subject to concept-drift. Identifying spam, where only two labels exist, has received great attention in the literature. We are nevertheless interested in classification involving multiple folders, which is an additional source of complexity. Moreover, neither cross-validation nor other sampling procedures are suitable for data stream evaluation. Therefore, other metrics, like the prequential error, have been proposed. However, the prequential error poses some problems, which can be alleviated by using mechanisms such as fading factors. In this paper we present GNUsmail, an open-source extensible framework for email classification, and focus on its ability to perform online evaluation. GNUsmail's architecture supports incremental and online learning, and it can be used to compare different online mining methods, using state-of-the-art evaluation metrics. We show how GNUsmail can be used to compare different algorithms, including a tool for launching replicable experiments.

Keywords: Email Classification, Online Methods, Concept Drift, Text Mining.
1 Introduction

The use of email is growing every day as a result of new applications on mobile devices and Web 2.0 services, for both personal and professional purposes. When emails are read and processed by the user, they can be easily stored due to the low cost of data storage devices. It is common to file emails with a certain level of interest, group them into some hierarchical structure using category-based folders, or tag them with different labels. However, email classification can pose more difficulties than spam detection because of the potentially high number of possible folders. Traditionally, classification tasks had to be carried out manually by the user, which was very time-consuming. To help with these tedious tasks, most email clients support hand-crafted rules that can automate them according to different predefined characteristics (sender, subject, nature of content, etc.). But these rules are not only static and inflexible, but also frequently
Corresponding author.
considered too difficult to be defined by most users [10]. Therefore, it would be desirable to induce such rules automatically. Machine learning and data mining methodologies are suitable technologies to solve this problem. They can be used to induce models (in our case, rules) from a data source composed of classified emails. These models can then be used to predict the classification of new email messages. Email classification is a subfield of the more general text classification area. Its aim is to assign predefined labels, categories, or folders to emails. From a simplified perspective, the following activities have to be carried out to construct a machine-learning-based email classifier:
1. Corpus preprocessing: original emails are transformed into a format supported by machine learning algorithms.
2. Model construction: a classification model is induced by some algorithm.
3. Validation: the quality of the model is tested by classifying previously unseen emails.
Two different approaches have been defined, depending on how the messages are processed: batch learning, where the whole dataset is available before the beginning of the learning process, and online learning, where data are continually being received over time. Different solutions have been proposed for each approach. Irrespective of the type of approach, there is a lack of systems to compare and evaluate different machine learning models for email classification. GNUsmail [7] is a framework that allows different email classification algorithms to be compared. The aim of this paper is thus to introduce new improvements to GNUsmail and show its qualities as a platform for carrying out replicable experimentation in the field of data stream mining applied to email classification. We analyze email as a stream of data with great amounts of messages which arrive continuously and must be processed only once (incremental learning). In this online approach, we consider that the nature of emails within a folder may change over time. We review and implement different proposals to evaluate and compare data stream mining methods, including modern approaches such as sliding windows and fading factors [13] applied to the classical prequential error. Furthermore, we adapt GNUsmail to use recently proposed statistical tests to detect significant differences between the performance of online algorithms, and then employ the framework to evaluate different machine learning methods. The rest of this paper is organized as follows: in Section 2, we introduce related work, while in Section 3, our proposed preprocessing and learning framework is described. In Section 4, we present recently proposed methods for comparing the performance of stream mining algorithms. In Section 5, we describe our experimental setup. Then, Section 5.2 shows the most relevant results, and finally Section 6 offers our conclusions and mentions some possibilities for future work.
2 Related Work

Most of the systems that try to classify emails in an automatic way have been implemented using a batch approach [22], which, at best, can only be updated from time to time [17]. RIPPER [9] was one of the first algorithms used for email classification, using tf-idf weighting (term frequency - inverse document frequency) to produce if-then
rules. Another common algorithm is Naïve Bayes, used in ifile [22] for the purpose of general email classification, or SpamCop [21] for specific spam filtering. Another popular technique is SVM, used in MailClassifier [23] (an extension for Thunderbird). With regard to online learning, little effort has been devoted to applying stream mining tools [8] to email classification. Manco et al. [18] have proposed a framework for adaptive mail classification (AMCo), but they admit that their system does not actually take into account the updating of the model. Another system that incorporates incremental learning is the one proposed by Segal et al. Their system, SwiftFile [25], uses a modified version of the AIM algorithm [2] to support incremental learning. Although they have advanced in this line of research, they admit that there are limitations, such as the scarcity of incremental algorithms for text classification and the closed integration between email clients and classifiers. A more recent approach is represented by GMail Priority Inbox, which uses logistic regression models, combining global and per-user models in a highly scalable fashion [1]. This approach ranks the messages according to a given priority metric.
3 GNUsmail: Architecture and Characteristics

GNUsmail [7] is an open-source framework for online adaptive email classification, with an extensible text preprocessing module, based on the concept of filters that extract attributes from emails, and an equally extensible learning module into which new algorithms, methods and libraries can be easily integrated. GNUsmail contains configurable modules for reading email, preprocessing text and learning. In the learning process, the email messages are read as the model is built, because, for practical reasons, email messages are analyzed as an infinite flow of data. The source code is available at http://code.google.com/p/gnusmail/ and is licensed under the GPLv3 license. We now explain in more detail the different modules integrated in GNUsmail.

3.1 Reading Email and Text Preprocessing Module

The email reading module can obtain email messages from different data sources, such as a local filesystem or a remote IMAP server. This makes it possible to process datasets like ENRON or to process personal email messages from remote servers like GMail. The main feature of the text preprocessing module is a multi-layer filter structure, responsible for performing feature extraction tasks. The Inbox and Sent folders are skipped in the learning process because they can be thought of as 'non-specific' folders. Every mail belonging to any other folder (that is, to any topical folder) goes through a pipeline of linguistic operators which extract relevant features from it [24]. As the number of possible features is prohibitively large, only the most relevant ones are selected, so GNUsmail performs a feature selection process using different methods [12, 19]. Some ready-to-use filters are implemented as part of the GNUsmail core, and new ones can be incorporated, giving developers the chance to implement their own filters, as well as test and evaluate different techniques. We have implemented a linguistic filter to extract attributes based on relevant words. It is based on the ranking
provided by the folder-wise tf-idf function. We have implemented several filters to extract non-linguistic features such as CC, BCC, sender, number of receivers, domain of the sender, size, number of attachments or body/subject length. We have also implemented filters to extract metalinguistic features, such as the proportion of capital letters, or the language the message is written in.

3.2 Learning Module

The learning module allows the incorporation of a wide variety of stream-based algorithms, such as those included in the WEKA [15] or MOA [5, 6] frameworks. WEKA (Waikato Environment for Knowledge Analysis) methods are used mainly with small datasets in environments without time and memory restrictions. MOA (Massive Online Analysis) is a data mining framework that scales to more demanding problems, since it is designed for mining data streams. GNUsmail offers three updateable classifiers from the WEKA framework, although more can be easily added. The first one is Multinomial Naïve Bayes, a probabilistic classifier which is commonly used as a baseline for text classification. The other two classifiers belong to the family of lazy classifiers, which store all or a subset of the learning examples; when a new sample is given as input to the classifier, a subset of similar stored examples is used to provide the desired classification. IBk is one of these methods, a k-nearest neighbours algorithm, to be precise, that averages the k nearest neighbours to provide the final classification for a given input. NN-ge (Nearest Neighbour with Generalised Exemplars) [20] is another nearest-neighbours-like algorithm that groups together examples within the same class. This reduces the number of classification errors that result from inaccuracies of the distance function. In the streaming scenario, GNUsmail uses MOA by including its tools for evaluation, a collection of classifier algorithms for evolving data streams, some ensemble methods, and drift detection methods. HoeffdingTree is a classifier algorithm developed by Domingos et al. [11] to efficiently construct decision trees in stream-oriented environments. An appealing feature of Hoeffding trees is that they offer sound guarantees of performance. They can approach the accuracy of non-streaming decision trees with an unlimited number of training examples. An improved learner available in MOA is HoeffdingTreeNB, which is a Hoeffding Tree with Naïve Bayes classifiers at the leaves, and a more accurate one is HoeffdingTreeNBAdaptive, which monitors the error rate of majority-class and Naïve Bayes decisions in every leaf, and chooses to employ Naïve Bayes decisions only where they proved accurate in the past. Additionally, some ensemble methods from the MOA framework are included, such as OzaBag and OzaBoost, incremental online bagging and boosting methods respectively. For concept drift detection, we have included DDM (Drift Detection Method) [14], which detects changes in the error rate of the classifier by comparing window statistics.
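As an illustration of the folder-wise tf-idf ranking used by the linguistic filter of Subsection 3.1, the following sketch (in Python) treats each folder as a single document and ranks its words; the toy messages and the exact weighting are a simplification of ours, not the actual GNUsmail filter code.

import math
from collections import Counter

# Invented toy corpus: folder -> list of messages; not real GNUsmail or ENRON data.
folders = {
    "logistics": ["truck schedule for gas delivery", "pipeline delivery schedule"],
    "personal":  ["dinner on friday", "family dinner plans"],
}

def folderwise_tfidf(folders, top_k=3):
    """Rank words per folder by tf-idf, where each folder counts as one document."""
    docs = {f: Counter(" ".join(msgs).split()) for f, msgs in folders.items()}
    n_folders = len(docs)
    df = Counter()                                  # in how many folders a word appears
    for counts in docs.values():
        df.update(set(counts))
    ranking = {}
    for folder, counts in docs.items():
        total = sum(counts.values())
        scores = {w: (c / total) * math.log(n_folders / df[w]) for w, c in counts.items()}
        ranking[folder] = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return ranking

print(folderwise_tfidf(folders))
# e.g. {'logistics': ['schedule', 'delivery', ...], 'personal': ['dinner', ...]}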
4 Evaluation of Data Stream Mining Methods

In data stream contexts, neither cross-validation nor other sampling procedures are suitable for evaluation. Other methods, like the prequential measure, are more appropriate
[13]. By using prequential methods, a prediction is made for each new example (the $i$-th example) using the current model. Once the real class is known, we can compute the loss $L(y_i, \hat{y}_i)$, where $L$ is a loss function such as the 0-1 loss, $y_i$ is the real class and $\hat{y}_i$ is the prediction, and a cumulative loss function $S_i$ is updated: $S_i = \sum_{j=1}^{i} L(y_j, \hat{y}_j)$. After that, the model is updated using that example. Thus, we can evaluate the performance of the system (which is influenced by the increasing number of examples that are constantly arriving) and the possible evolution of the models (because of concept drift). Using the cumulative loss function $S_i$, we can estimate the mean loss at every moment $i$: $E_i = S_i / i$. Although the prequential approach is a pessimistic estimator when compared with the holdout approach (where an independent test set is reserved beforehand), it can perform similarly to holdout by using forgetting mechanisms, like sliding windows or fading factors, and overcome other problems that are present in the holdout approach [13]. The method based on a sliding window considers only the last examples. Therefore, not all the examples are used to evaluate the performance: the older ones are forgotten and the most recent ones are stored in a window of size $w$. Those $w$ examples in the window are the ones used to calculate the different performance measures. On the other hand, using fading factors, which is the preferred method according to [13], decreases the influence of the examples on the measure as they get older. For example, if we compute the loss function $L$ for every single example, the prequential error at moment $i$ using fading factors is defined as $E_i = S_i / B_i$, where $S_i = L(y_i, \hat{y}_i) + \alpha \cdot S_{i-1}$ and $B_i = 1 + \alpha \cdot B_{i-1}$ (with $\alpha < 1$). There exist different ways of comparing the performance of two classifiers, the McNemar test being one of the most widely used. This test needs to store and update two quantities: $a$ (the number of examples misclassified by the first classifier and not by the second one) and $b$ (the number of examples misclassified by the second classifier and not by the first one). The McNemar statistic $M$ rejects the null hypothesis (that the performances are the same) with a confidence level of 0.99 if its value is greater than 6.635, since it follows a $\chi^2$ distribution. It is calculated as $M = \mathrm{sign}(a - b) \times \frac{(a-b)^2}{a+b}$. In order to extend its usage to the prequential approach we have two options. If we consider a sliding window context, we only need to use the examples in the window to compute the statistic. If we instead consider the usage of fading factors, we should adapt the definition in the following way:

$M_i = \mathrm{sign}(a_i - b_i) \times \frac{(a_i - b_i)^2}{a_i + b_i}$

where $a_i = f_i + \alpha \cdot a_{i-1}$ and $b_i = g_i + \alpha \cdot b_{i-1}$, with $f_i = 1$ if and only if the $i$-th example is misclassified by the first classifier and not by the second one ($f_i = 0$ otherwise), and $g_i = 1$ if and only if the $i$-th example is misclassified by the second classifier and not by the first one ($g_i = 0$ otherwise).
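The estimators above can be transcribed directly into code; the sketch below (in Python) is an illustration of the definitions rather than the implementation used in MOA or GNUsmail.

from collections import deque

def prequential_fading(losses, alpha=0.995):
    """Prequential error E_i = S_i / B_i with a fading factor alpha."""
    s = b = 0.0
    out = []
    for loss in losses:                  # loss = L(y_i, yhat_i), e.g. the 0-1 loss
        s = loss + alpha * s
        b = 1.0 + alpha * b
        out.append(s / b)
    return out

def prequential_window(losses, w=100):
    """Prequential error computed over a sliding window of the last w examples."""
    window, out = deque(maxlen=w), []
    for loss in losses:
        window.append(loss)
        out.append(sum(window) / len(window))
    return out

def mcnemar_fading(wrong_first, wrong_second, alpha=0.995):
    """Fading-factor McNemar statistic M_i; |M_i| > 6.635 rejects the null
    hypothesis of equal performance at the 0.99 confidence level."""
    a = b = 0.0
    out = []
    for w1, w2 in zip(wrong_first, wrong_second):   # booleans: example misclassified?
        a = (1.0 if w1 and not w2 else 0.0) + alpha * a
        b = (1.0 if w2 and not w1 else 0.0) + alpha * b
        sign = 1 if a >= b else -1
        out.append(0.0 if a + b == 0 else sign * (a - b) ** 2 / (a + b))
    return out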
5 Replicable Experimentation

In this section, we describe how we have adapted GNUsmail to include the evaluation methods presented above. We also explain the experimental setup that we have used to test the
proposed adaptation, as well as its results. All experiments are replicable, since GNUsmail incorporates such functionality.

5.1 Experimental Setup

We have evaluated our framework for both incremental and online learning schemes, using the ENRON email dataset [16]. Following the approach of [3] and [4], we have used seven specific users (beck-s, farmer-d, kaminski-v, kitchen-l, lokay-m, sanders-r and williams-w3), based on their interest in terms of number of messages and folders. For each of these users, we have used only topic folders with more than two messages. The GNUsmail experimentation launcher first checks whether the ENRON dataset is already available, offering to download it if necessary. As classification attributes we have used the number of recipients, the sender (broken down into username and domain), the body length, the proportion of capital letters, the size of the email, the subject length and the most relevant words, as explained in Subsection 3.1. For both the incremental and the online approach, the messages are analyzed in chronological order, and they are given to the underlying learning algorithms one by one. In this way, the system simulates how the model would work when classifying incoming emails in the real world. The specific algorithms to be used and compared are configurable. For each author, we compare the online performance of the following algorithms:
– OzaBag over NN-ge, using DDM for concept drift detection
– NN-ge
– Hoeffding Trees
– Majority class
Instead of using the classical prequential error, the launcher can be configured to use sliding windows or fading factors. For each algorithm and author, GNUsmail plots the prequential-based metrics, to visually analyse the differences in performance. These plots also show the concept drifts when a concept drift detector is associated with an algorithm. We perform experiments with the DDM detector, and report the point where a concept drift warning is detected that finally leads to an actual concept drift. In addition, for the sake of performing statistical tests, GNUsmail also includes the McNemar test, because it can be easily adapted to the online setting [13]. Thus, graphics are produced for each of the algorithm pairs, and the critical points of the McNemar test for a significance of 99% are shown as horizontal lines, while the zone where no significant difference is found is shown on a gray background.

5.2 Results

In Table 1 we show the final MOA prequential accuracies with bagging of DDM and NN-ge for each folder. As can be seen in this table, the classification results are similar to those in [3] and [4]. The final results depend largely on each specific author and, more precisely, on their folders. Folders with a big amount of messages and dealing with
Table 1. Final folder-wise MOA prequential accuracies with bagging of DDM and NN-ge
Folder Correct/Total Percentage Folder Correct/Total Percentage beck-s (101 folders) 1071/1941 55.18% farmer-d (25 folders) 2743/3590 76.41% europe 131/162 80.86% logistics 1077/1177 91.58% tufco 588/608 96.71% calendar 104/123 84.55% wellhead 210/337 62.32% recruiting 89/114 78.07% personal 211/320 65.94% doorstep 49/86 56.97% kaminsky-v (41 folders) 1798/2699 66.62% kitchen-l (47 folders) 2254/3790 59.47% universities 298/365 81.64% esvl 640/712 89.89% hr 159/299 53.18% resumes 420/545 77.06% east power 160/253 63.24% personal 154/278 55.4% regulatory 210/242 86.78% conferences 163/221 73.76% lokay-m (11 folders) 1953/2479 78.78% sanders-r (30 folders) 887/1207 73.49% tw commercial group 1095/1156 94.72% iso pricecaps 404/420 96.19% nsm 109/119 91.6% corporate 345/400 86.25% senator dunn 43/83 51.81% articles 152/232 65.51% 86/176 48.86% px 49/68 72.06% enron t s williams-w3 (18 folders) 2653/2778 95.5% schedule crawler 1397/1398 99.91% bill williams iii 1000/1021 97.94% hr 74/86 86.05% symsees 74/81 91.36%
very specific subjects are more likely to be correctly learnt by the models. There are folders whose final prequential value goes beyond 90%, as in the cases of farmer-d, kaminski-v, kitchen-l and lokay-m (see table 1). The results for this author are illustrative, since it can be seen that the best-scored folders are precisely the ones that have a specially large amount of messages in them. When analysing these results, one should take into account that we are dealing with a massive multi-class domain. As a result, it is difficult to get good overall performance as some folders will be learnt better than others. In the case of beck-s, for instance, we have over 100 different values for the folder. Another difficulty is that the classes are extremely unbalanced: for williams-w3 there are folders with more than one thousand mails (bill williams iii), and folders with only a dozen mails (el paso). Furthermore, it is less than obvious what the semantics of the folders intended by the user is: both hr and human resources contain emails dealing with human resources which makes it difficult to determine how the messages should be classified. Another general problem is that some folders contain mails which have little in common (let us think about personal folders). In Figure 1 we show the evolution of the prequential error for two authors, beck-s and kitchen-l, plotting both the classical and the fading factors versions of the prequential error. Better results are achieved when using fading factors, since the effect of poor initial performance is reduced because of the limited number of available messages at the beginning. NN-ge outperforms other basic methods, whose relative performance varies for every author. The performance obtained by NN-ge is comparable with the best results obtained by [3] and [4], which use SVM-based approaches. As a consequence of using OzaBag and DDM over NN-ge, the results are further improved. Figure 2 depicts the McNemar tests for beck-s and kitchen-l, using fading factors with α = 0.995. OzaBag with DDM and NN-ge significantly outperforms the Majority Class and Hoeffding Trees algorithms during most of the execution. The comparison between simple NN-ge and NN-ge in combination with OzaBag and DDM
(a) Legend; (b) Prequential error for beck-s; (c) Fading factors preq. for beck-s (α = 0.995); (d) Prequential error for kitchen-l; (e) Fading factors preq. for kitchen-l (α = 0.995)
Fig. 1. Evolution of prequential accuracy (1 - prequential error) for beck-s (top) and kitchen-l (bottom) for four algorithms: OzaBag over NN-ge, using DDM for concept drift detection, NNge, Hoeffding Tree and Majority class. The classical prequential error (left) and fading factors, with α = 0.995 (right) are shown. We see that the classical prequential error shows more conservative behaviour since past information is never forgotten. The detection of concept drift by the DDM algorithm is marked with a vertical dashed line. It can be seen that, after concept drift detection, the performance of the OzaBag-based algorithm improves. As expected, Majority class is not a very good classifier. Hoeffding Tree is unable to undo the path (i.e. close an expanded node on top upper section of the tree), so its performance tends to deteriorate very rapidly when a wrong decision has been made.
shows that the difference is not significant during part of the execution time. Nevertheless, especially in the case of kitchen-l, OzaBag and DDM are able to significantly improve their performance, in part because of the detection and adaptation to concept drifts.
(a) OzaBag vs. NN-ge (beck-s); (b) OzaBag vs. NN-ge (kitchen-l); (c) OzaBag vs. Hoeffding tree (beck-s); (d) OzaBag vs. Hoeffding tree (kitchen-l); (e) OzaBag vs. Majority class (beck-s); (f) OzaBag vs. Majority class (kitchen-l)
Fig. 2. McNemar tests for beck-s and kitchen-l, using fading factors with α = 0.995. The gray area represents the zone where no significant differences have been detected. We observe that OzaBag is significantly better than Majority class and Hoeffding tree during almost all of the execution. On the other hand, OzaBag improves NN-ge after the detection of concept drifts.
The methods based on HoeffdingTree have lower accuracy than the methods based on NN-ge when directly applied to the domain of email classification. These streaming decision tree methods frequently need millions of examples to start showing good performance. Within the email classification domain, it is not unusual to be dealing with thousands of messages, but millions of messages would be something uncommon.
6 Conclusion

In this paper we have presented different methods to evaluate data stream mining algorithms in the domain of email classification. We have improved GNUsmail, an open-source extensible framework for email classification, which incorporates a flexible architecture into which new feature extraction, feature selection, learning and evaluation methods can be integrated. In this enhanced version of the framework we incorporate recently proposed evaluation methods for online learning with concept drift. Such evaluation methods improve the prequential error measure by using mechanisms that reduce the effect of past examples, such as sliding windows and fading factors. Our experiments show that such improved measures are also adequate in the domain of email classification. Moreover, the McNemar test is used as a tool to compare the online performance of two given algorithms. GNUsmail offers additional functionality to configure and replicate email-classification experiments. Current online learning algorithm implementations have an important limitation that affects the learning process: the learning attributes have to be fixed before the induction of the model begins. They need to know all the attributes, values and classes before the learning itself, since it is not possible to start using a new attribute in the middle of the lifetime of a learning model. Future methods should support the online addition of new features. Another possibility for future research would be to extend the GNUsmail framework to other text-based domains. The current architecture of the framework is designed with a low level of coupling, which makes it easy to integrate other modules for reading from different sources of information, such as blog entries or tweets.

Acknowledgments. This work has been partially supported by the SESAAME project (code TIN2008-06582-C03-03) of the MICINN, Spain. José M. Carmona-Cejudo is supported by the AP2009-1457 grant from the Spanish government.
References [1] Aberdeen, D., Pacovsky, O., Slater, A.: AIM: The learning behind gmail priority inbox. Tech. rep., Google Inc. (2010) [2] Barrett, R., Selker, T.: AIM: A new approach for meeting information needs. Tech. rep., IBM Almaden Research Center, Almaden, CA (1995) [3] Bekkerman, R., Mccallum, A., Huang, G.: Automatic categorization of email into folders: Benchmark experiments on Enron and SRI Corpora. Tech. rep., Center for Intelligent Information Retrieval (2004) [4] Bermejo, P., G´amez, J.A., Puerta, J.M., Uribe-Paredes, R.: Improving KNN-based e-mail classification into folders generating class-balanced datasets. In: Proceedings of the 12th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Sytems (IPMU 2008), pp. 529–536 (2008)
[5] Bifet, A., Holmes, G., Pfahringer, B., Kirkby, R., Gavald`a, R.: New ensemble methods for evolving data streams. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2009), pp. 139–148 (2009) [6] Bifet, A., Holmes, G., Pfahringer, B., Kranen, P., Kremer, H., Jansen, T., Seidl, T.: MOA: Massive Online Analysis, a Framework for Stream Classification and Clustering. Journal of Machine Learning Research - Proceedings Track 11, 44–50 (2010) ´ [7] Carmona-Cejudo, J.M., Baena-Garc´ıa, M., del Campo-Avila, J., Bueno, R.M., Bifet, A.: Gnusmail: Open framework for on-line email classification. In: ECAI, pp. 1141–1142 (2010) [8] Chaudhry, N., Shaw, K., Abdelguerfi, M. (eds.): Stream Data Management. Advances in Database Systems. Springer, Heidelberg (2005) [9] Cohen, W.: Learning rules that classify e-mail. In: Papers from the AAAI Spring Symposium on Machine Learning in Information Access, pp. 18–25 (1996), citeseer.ist.psu.edu/406441.html [10] Crawford, E., Kay, J., McCreath, E.: IEMS - the intelligent email sorter. In: Proceedings of the 19th International Conference on Machine Learning (ICML 2002), pp. 83–90 (2002) [11] Domingos, P., Hulten, G.: Mining high-speed data streams. In: Knowledge Discovery and Data Mining, pp. 71–80 (2000), citeseer.ist.psu.edu/article/domingos00mining.html [12] Forman, G.: An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research 3, 1289–1305 (2003) [13] Gama, J.: Knowledge Discovery from Data Streams. CRC Press, Boca Raton (2010) [14] Gama, J., Medas, P., Castillo, G., Rodrigues, P.: Learning with drift detection. In: Bazzan, A.L.C., Labidi, S. (eds.) SBIA 2004. LNCS (LNAI), vol. 3171, pp. 286–295. Springer, Heidelberg (2004) [15] Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., Witten, I.H.: The WEKA data mining software: An update. SIGKDD Explorations 11(1), 10–18 (2009) [16] Klimt, B., Yang, Y.: The enron corpus: A new dataset for email classification research. In: Proceedings of the 15th European Conference on Machine Learning, ECML 2004 (2004) [17] Maes, P.: Agents that reduce work and information overload. Communications of the ACM 37(7), 30–40 (1994) [18] Manco, G., Masciari, E., Tagarelli, A.: A framework for adaptive mail classification. In: Proceedings of the 14th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2002), pp. 387–392 (2002) [19] Manning, C.D., Sch¨utze, H.: Foundations of Statistical Natural Language Processing. The MIT Press, Cambridge (2003) [20] Martin, B.: Instance-Based Learning: Nearest Neighbour with Generalization. Master’s thesis, University of Waikato (1995) [21] Pantel, P., Lin, D.: SpamCop: A spam classification & organization program. In: Proceedings of the AAAI 1998 Workshop on Learning for Text Categorization, pp. 95–98 (1998) [22] Rennie, J.D.M.: ifile: An application of machine learning to e-mail filtering. In: Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2000) Text Mining Workshop (2000) [23] Sabellico, E., Repici, D.: http://mailclassifier.mozdev.org/, http://mailclassifier.mozdev.org/ [24] Sebastiani, F.: Machine learning in automated text categorization. ACM Computing Surveys 34, 1–47 (2002) [25] Segal, R.B., Kephart, J.O.: Incremental learning in SwiftFile. In: Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), pp. 863–870 (2000)
The Dynamic Stage Bayesian Network: Identifying and Modelling Key Stages in a Temporal Process

Stefano Ceccon¹, David Garway-Heath², David Crabb³, and Allan Tucker¹

¹ Department of Information Systems and Computing, Brunel University, Uxbridge UB8 3PH, London, UK
² NIHR Biomedical Research Centre for Ophthalmology, Moorfields Eye Hospital NHS, EC1V 2PD, London, UK
³ Department of Optometry and Visual Science, City University, EC1V 0HB, London, UK
[email protected]

Abstract. Data modeling using Bayesian Networks (BNs) has been investigated in depth for many years. More recently, Dynamic Bayesian Networks (DBNs) have been developed to deal with longitudinal datasets and exploit time-dependent relationships in data. Our approach makes a further step in this context, by integrating into the BN framework a dynamic on-line data-selection process. The aims are to efficiently remove noisy data points in order to identify and model the key stages in a temporal process, and to obtain better performance in classification. We tested our approach, called the Dynamic Stage Bayesian Network (DSBN), in the complex context of glaucoma functional tests, in which the available data is typically noisy and irregularly spaced. We compared the performance of the DSBN with a static BN and a standard DBN. We also explored the potential of the technique by testing it on another dataset from the Transport of London database. The results are promising and the potential of the technique is considerable.

Keywords: Dynamic Stage Bayesian Network, time warping, time series, classification, glaucoma.
1 Introduction

Glaucoma is a major cause of blindness worldwide (Resnikoff et al., 2004), but its underlying mechanisms are still not clear. However, early treatment has been shown to slow the progression of the disease, thus early diagnosis is desirable (Yanoff et al., 2003). Typical tests for glaucoma, such as functional and anatomical measurements, are not as sensitive as desired, due to the high variability of the test results and the essential subjectivity of clinical judgments (Artes and Chauhan, 2005). Nowadays, however, new medical instruments provide a large amount of data, which can be exploited further using statistical and Artificial Intelligence techniques. Due to the high variability of the measurements in different tests and metrics of progression, and to the high amount of noise in the data, this is not a simple task. In particular, determining changes in individual patients is still one of the most challenging aspects of
glaucoma management. For instance, functional measurements of both healthy and glaucomatous patients are subject to a large degree of fluctuation (e.g. with different results on different days), which makes it difficult to decide whether there is a random variation or a real progression between two consecutive examinations (Yanoff et al., 2003, Flammer et al., 1984, Hutchings et al., 2000). Further, there is a lack of standardized clinical metrics for defining glaucoma, i.e. there is no standard definition of conversion for glaucomatous patients, which leads to biases towards one test or another, or between different metrics. A simplified scheme of typical functional measurements over time can be seen in Figure 1. Glaucomatous patients (red) can suddenly worsen or can progress steadily over time, while control subjects (blue) may be more or less subject to a physiological decrease in functionality. In this complex context, no common threshold value can be used and absolute values are not highly informative. Notice that noise, such as short-term variation, is not included in the scheme.
Fig. 1. Scheme of typical functional test results for glaucomatous (red) and control (blue, broken line) subjects. Noise is not modeled in the graph.
In such a situation many questions arise: given that patients seem to progress at different speeds and to different sensitivity values, how can we define the conversion of glaucoma? Can we avoid the bias of clinical metrics? Can we find out at what stage of disease a patient is? How do we exploit time dependent relationships in the data without taking into account the noise and the physiological loss of functionality? Our approach to answer these questions is based upon the Bayesian Network (BN) framework, which is a well known modelling technique that graphically represents relationships between variables (Pearl, 1988). Static BNs have already been used for classification in glaucoma with promising results in (Tucker et al., 2005) and (Ceccon et al., 2010). In the latter, the model confirmed evidence from literature that the structural relations of the optic disc change with the progression of the disease, which naturally leads to the idea of modelling the disease over time. However, static BNs do not deal with time-dependent relationships, whereas our aim is to extend their performance to longitudinal data to exploit the nature of the dataset. Dynamic Bayesian Networks (DBN) (Dean and Kanazawa, 1989) have already been developed for dealing with longitudinal datasets. However, the presence of noise and higher order time-dependent relationships can be a drawback in their application in this context,
given that data is typically assumed to be generated by a stationary process (Murphy, 2002). In this paper we integrate a dynamic on-line data-selection process into the BN framework. The aims of this model, which we call the Dynamic Stage Bayesian Network (DSBN), are to remove noisy data points whilst simultaneously finding key stages in the progression of glaucoma. We also aim to reduce the size of the training dataset needed and to obtain better performance in classification in comparison with BNs and DBNs. We test our approach in the context of glaucoma functional tests, in which the available data is typically noisy and irregularly spaced. We compare the performances of DSBN with BN and DBN. We also explore the potential of the technique by applying the DSBN to a dataset obtained from the Transport of London data store (Transport of London Ltd, 2010) for clustering underground stations based upon pedestrian traffic-flow.
2 Methods

2.1 Datasets

In this study two independent datasets were used (Table 1). Dataset A is a longitudinal dataset of 19 control subjects and 43 patients from an Ocular Hypertensive Treatment group who developed glaucoma in the time span observed. Data consists of VF point sensitivities obtained with the Humphrey Field Analyzer II (Carl Zeiss Meditec, Inc., Dublin, CA). Sensitivity values were averaged for reasons of computational simplicity. Patients who eventually developed glaucoma, as assessed by clinicians, were considered to be “glaucomatous” from the first visit, i.e. no external conversion metric was used. Dataset B is a longitudinal dataset of London Underground passenger counts during a weekday in November 2009. Counts are the number of entries at 30 stations in 2-hour time spans over 24 hours, so each station presents 11 ordered values. Data was discretized into 20 states using a frequency-based approach. The discretization was performed not to ease calculation, but to make the values uniform and expose the qualitative patterns in the data. A discrete variable was added to each time series, representing the estimated group for each station.

Table 1. Characteristics of the datasets used in the study

            Control Subjects (values)   Converters (values)   Total Subjects (values)
Dataset A   19 (155)                    43 (474)              62 (629)
Dataset B   -                           -                     30 (331)
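As a rough illustration of this frequency-based discretization (the paper does not give its exact procedure), the sketch below maps a count series to equal-frequency integer states; the function name, the number of states used in the example and the example counts are invented purely for illustration.

import numpy as np

def equal_frequency_discretize(values, n_states=20):
    """Map continuous counts to integer states with roughly equal numbers
    of observations per state (a stand-in for the paper's frequency-based
    discretization; the exact procedure used by the authors may differ)."""
    values = np.asarray(values, dtype=float)
    # Interior quantile cut points; ties can collapse some states.
    edges = np.quantile(values, np.linspace(0, 1, n_states + 1)[1:-1])
    return np.searchsorted(edges, values, side="right")

# Example: one station's 11 ordered passenger counts over the day.
counts = [120, 340, 980, 400, 310, 290, 330, 410, 870, 760, 150]
print(equal_frequency_discretize(counts, n_states=5))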
2.2 Dynamic Stage Bayesian Networks

BNs are probabilistic directed graphical models in which each node represents a variable, and arcs between nodes represent conditional independence assumptions. Each variable is associated with a conditional probability distribution (CPD), so that given the set of all CPDs in the network it is possible to infer the value of any node. For classification, BNs are used by computing the most probable value of the class node
given the values observed for all the other nodes. To build a BN, the CPDs of all the variables need to be estimated from the data. The structure of the network can be hand-coded based upon expected relationships, or learned from data using several approaches. The DSBN is based upon the standard BN framework, but extended to longitudinal datasets and integrated with a noise-removal process. This technique focuses on the quality of the data and simultaneously facilitates the ability of BN models to exploit underlying relationships in the data for modeling and classification. Unlike the DBN, the structure and parameters of the DSBN can change throughout the temporal process, i.e. data is not assumed to be generated by a stationary process. Moreover, its noise-removal process allows the extraction of only the information useful for a particular task, such as classification or clustering. The algorithm starts with an empty BN in which the structure is a set of leaf nodes linked to a classification node (Figure 2). Each leaf node has a Normal distribution and corresponds to a final key stage in the temporal process. The number of leaf nodes is chosen arbitrarily; however, a structure search can be implemented to find the best number of nodes.
Fig. 2. Structure of our instance of the DSBN model. Stage nodes are modeled as Gaussian nodes, while the class node is a discrete node.
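A minimal sketch of such a structure is given below, assuming Gaussian stage nodes conditioned on a discrete class node; the class name SimpleDSBN, the parameter layout and the method names are illustrative assumptions, not the authors' implementation.

import numpy as np
from scipy.stats import norm

class SimpleDSBN:
    """Discrete class node plus N Gaussian 'stage' leaf nodes
    (a naive-Bayes-like layout, as in Figure 2)."""

    def __init__(self, n_stages, classes=(0, 1)):
        self.n_stages = n_stages
        self.classes = classes
        # Mean/std per (class, stage), plus class priors.
        self.mu = {c: np.zeros(n_stages) for c in classes}
        self.sigma = {c: np.ones(n_stages) for c in classes}
        self.prior = {c: 1.0 / len(classes) for c in classes}

    def fit_stage_params(self, aligned, labels):
        """aligned: (n_series, n_stages) array of stage-aligned values."""
        aligned, labels = np.asarray(aligned), np.asarray(labels)
        for c in self.classes:
            rows = aligned[labels == c]
            self.mu[c] = rows.mean(axis=0)
            self.sigma[c] = rows.std(axis=0) + 1e-6   # avoid zero variance
            self.prior[c] = len(rows) / len(aligned)

    def log_likelihood(self, stage_values, c):
        """Joint log-likelihood of one aligned series under class c."""
        return np.log(self.prior[c]) + norm.logpdf(
            stage_values, self.mu[c], self.sigma[c]).sum()

    def posterior(self, stage_values):
        """Posterior class probabilities for one aligned series."""
        ll = {c: self.log_likelihood(stage_values, c) for c in self.classes}
        m = max(ll.values())
        z = sum(np.exp(v - m) for v in ll.values())
        return {c: np.exp(v - m) / z for c, v in ll.items()}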
The learning algorithm for a model with N leaf nodes and K training time series T_k of size N, each obtained from a time series D_{M_k} of M_k data points, can be presented as follows:

Input: (T_{1..K})_init, DSBN_init
For each T_k: select a random ordered subset D_N^k from D_M^k
Score DSBN_init
Perform SA: for each iteration {
    T_k* = select a random ordered subset D_N^k from D_M^k for a random k
    Score DSBN(T_{1..K}*)
    If DSBN(T_{1..K}*) is accepted: update DSBN, T_k = T_k*
}
Output: DSBN_final, (T_{1..K})_final
The algorithm involves a random selection of a subset of N points D_N^k from each of the K complete time series used for training, in order to fill the N leaf nodes. The ordering of the data must always be preserved, i.e. the order of the data points in a time series is a constraint (as subsequent events cannot cross each other going backwards in time). The class value (label) of the time series is the other constraint in the model. A second step is a Simulated Annealing (SA) search over the space of the
possible alignments of the data. SA (Jacobs et al., 1991) is a well-known quasi-greedy algorithm that searches through the space of possible solutions to obtain the optimal alignment of the data. The acceptance of solutions is regulated by a “temperature factor” that evolves over time, allowing even non-optimal solutions to be explored. If the solution is accepted, then the parameters of the model are updated using the new aligned data and the iteration is complete. Another time series might be chosen at the next iteration, until convergence is reached or a fixed number of iterations is completed. The warping is implemented as a random selection of a subset of the data, in our case 5 data points, from a random time series. However, we adopted an exhaustive search in the testing process through all of the ordered combinations of node occupancies. Furthermore, during the testing phase we want to consider the fact that in clinical practice we do not have the whole time series available from the beginning. For instance, we want to investigate whether a patient has glaucoma at their first appointment, then at the second, and so on. Therefore we have to recompute the exhaustive search after adding one data point at a time. Figure 3 shows a typical example of how the testing algorithm works in practice.
Fig. 3. Scheme of the warping algorithm on a time-series (left) to the DSBN (right), in two steps (from top to bottom). Data points are represented with incrementing numbers indicating their cardinal position in the time series. Red/empty and blue/full circles represent glaucomatous and controls parameter distributions for each stage. Given 6 data points (top), 5 data points are fitted in the best set of parameter distributions of DSBN (right), in this case glaucomatous. Data point number 4 is left out from the model, representing noise. Notice that the order of subsequent visits is preserved.
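The sketch below illustrates one way the SA-based alignment described above could look in code; the scoring interface, the proposal move (resampling one series' full subset rather than a 5-point subset) and the geometric cooling schedule are simplifying assumptions made for illustration, not the published implementation.

import math
import random

def anneal_alignment(series_list, labels, score, n_stages=5,
                     n_iter=15000, t0=1.0, cooling=0.999):
    """series_list: list of 1-D time series (ordered visits, each with at
    least n_stages points). score(alignments, labels) -> higher is better
    (e.g. the model log-likelihood of the aligned data).
    Returns, per series, the sorted indices assigned to the stages."""
    align = [sorted(random.sample(range(len(s)), n_stages)) for s in series_list]
    current = score(align, labels)
    temp = t0
    for _ in range(n_iter):
        k = random.randrange(len(series_list))            # pick one time series
        proposal = list(align)
        proposal[k] = sorted(random.sample(range(len(series_list[k])), n_stages))
        cand = score(proposal, labels)
        # Accept improvements, and occasionally worse solutions (SA rule).
        if cand > current or random.random() < math.exp((cand - current) / temp):
            align, current = proposal, cand
        temp *= cooling                                    # cool down
    return align, current

Sorting the sampled indices keeps the ordering constraint (visits cannot cross each other going backwards in time); the class labels only enter through the score, which is what guides the alignment towards discriminative stages.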
We have also tested the DSBN on another dataset available online: the Transport of London Underground service passenger counts. Data from 30 London Underground stations were used to train a DSBN model with 5 leaf nodes. Stations were not previously grouped, but for obvious reasons there are underlying patterns in the data which can be easily observed along the time dimension. These patterns typically include peaks at the beginning of office hours, peaks in the evening, or stationary high or low values during the whole day. This data can be useful for many tasks, such
as planning new stations (together with geographical information) or containing costs, e.g. by choosing the number of staff manning stations at different times. Many fluctuations are present in each time series throughout the day. Given that there is no grouping for the stations in the dataset, we allow the algorithm to group the data itself. This process, called clustering (Jain et al., 1999), is a well-known technique for finding similarities in the data and grouping it into different clusters. BNs offer a specific parameter estimation algorithm, Expectation-Maximization (Dempster et al., 1977), that deals with these situations. However, the training algorithm for the DSBN is effective only with thousands of iterations, and since the EM algorithm slows down each iteration, this solution becomes unfeasible. Therefore, we treated the class node as a leaf node, i.e. we included it in the warping algorithm by letting the learning algorithm randomly change its value during the heuristic search. The learning algorithm was repeated with 5 random restarts, with 15000 iterations each time. The number of groups into which the data was clustered was set to 4. Notice that, since data is randomly picked initially for our model, this task can be seen both as a validation task for correctly aligning the patterns over time and as a task of clustering the stations into groups. The former can be evaluated by looking at the data along the time dimension, while the latter can be evaluated by considering geographical and social aspects of each station and its neighborhood.

2.3 Theoretical Considerations

For a fully observed model with n nodes, the log-likelihood of data D, which contains M cases, can be written as

    L = log ∏_{m=1..M} P(D_m | G) = Σ_{i=1..n} Σ_{m=1..M} log P(X_i | Pa(X_i), D_m)

where Pa(X_i) represents the parents of node X_i. Our goal is to find the parameters and the data D that maximize the likelihood. The log-likelihood decomposes according to the structure of the graph, so we can maximize the contribution of each node independently and obtain the maximum likelihood estimate of the parameters for a given set of data D. In our case we are also warping the data D, therefore we use a heuristic approach to find the maximum likelihood estimate for a given dataset D.
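For the Gaussian stage nodes used here, the per-node maximization has the usual closed form (not spelled out in the paper). Writing D_{i,c} for the set of aligned values assigned to stage i in class c, y_k for the label of series k, and K for the number of series, the maximum likelihood estimates for a fixed alignment are

\hat{\mu}_{i,c} = \frac{1}{|D_{i,c}|}\sum_{x \in D_{i,c}} x,
\qquad
\hat{\sigma}^2_{i,c} = \frac{1}{|D_{i,c}|}\sum_{x \in D_{i,c}} \left(x - \hat{\mu}_{i,c}\right)^2,
\qquad
\hat{P}(C=c) = \frac{|\{k : y_k = c\}|}{K}.

Under this (assumed) parameterization, each SA iteration only needs to re-estimate the means, standard deviations and class priors affected by the proposed re-alignment.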
3 Results

Here we present the results of the learning algorithm from all the VF time series available. SA was run 10 times with 3000 iterations, with random restarts, and the best-scoring solution was chosen. The estimated leaf node distributions are presented in Figure 4. These can be seen as key stages in the data; in particular, they are the key stages that best discriminate between glaucomatous and control subjects.
Fig. 4. Key stages identified using the DSBN for glaucoma (red) and control (blue, broken line) subjects: mean and standard deviation of sensitivity (dB) for each stage
A clear descending pattern in sensitivity values can be seen for glaucomatous patients, which shows their progressive loss of functionality. On the other hand, control subjects seem to maintain stationary high functionality, even though a fluctuation is present throughout the pattern. As observed in the literature, control subjects show high variability and a physiological loss of functionality of the eye. However, patients can suddenly worsen and lose their vision, albeit at different rates (Haas et al., 1986, Spry et al., 2002). The key stages identified confirm this possibility, as time is taken out of the process and the pure signal is left. The physiological deterioration of functionality is taken out of the model, as it is not a useful discriminator between controls and glaucomatous subjects. This explains why there is no reduction for control subjects, but mild fluctuation and a slight increase at stage 4. The standard deviation (SD) for the different stages is also of interest, being different for the control and glaucomatous patterns. For controls it reduces over the stages, while for glaucomatous subjects it increases. A higher variability for glaucomatous subjects is also confirmed in the literature (Flammer et al., 1984). A test was also performed on the whole dataset to assess the robustness of the training/testing algorithm. The ROC curve obtained is compared in Figure 5 with the static BN and a first-order DBN. The AUROC obtained was 0.80, against 0.77 and 0.73 for the BN and DBN respectively. Notice the higher sensitivity at high specificities. To fairly assess performance and robustness, the algorithm was tested using Cross Validation (CV). In particular, 2-, 3-, 5-, 10- and 15-fold CV tests were assessed. AUROC and accuracy values are shown in Figure 6. In the testing phase, for each node, the SD was set at the value obtained on the whole dataset (i.e. 25), in order to take into account the variability not included in the training dataset. The AUROC values obtained over the different CV tests with the standard BN seem generally higher than those obtained using the DBN; however, the difference is not statistically significant. The DSBN clearly outperforms the other models in all CV tests (p < 0.03). Accuracy results show the same pattern, though this time without statistical significance. Turning now to the underground traffic data, the CPDs of the leaf nodes obtained on dataset B are shown in Figure 7. Each of the 4 clusters is shown as a
Fig. 5. ROC curves for DSBN (black), BN (red, broken) and DBN (blue, dotted) on dataset A
Fig. 6. AUROC mean values (left) and mean accuracy (right) for DSBN (blue, full), DBN (green, square) and standard BN (red, empty) on different CV tests. The size of the training dataset was respectively 31, 41, 49, 55, and 56.
continuous line passing through the mean values estimated for each stage. Typical patterns, including peaks in office hours (morning, evening or both) and high values throughout the day, were captured from the data. For instance, underground stations which are near national train stations (e.g. Euston, assigned to cluster 2) present typical double-peak patterns because of commuters, while nightlife stations (e.g. Covent Garden, assigned to cluster 4) present a typical high peak at night time (before the closure of the station at midnight). Overcrowded or “all-day” busy stations, such as King’s Cross (which is one of the main train stations in London and is next to St. Pancras International rail station) or Green Park (which is near major tourist attractions) do not show clear peaks, remaining at a uniformly high level for most of the day. These stations were assigned to cluster 3. To assess the validity of the clusters, the silhouette plot of the clustered data was inspected (Rousseeuw, 1987).
Fig. 7. Mean and SD values for each cluster over the 5 key stages identified for dataset B
The silhouette value is a measure of how close each point in one cluster is to points in the other clusters. The mean silhouette value of the clustered data was found to be 0.67, and the silhouette plot showed well-separated clusters.
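As an illustration of this validity check (not the authors' code), the mean silhouette value can be computed with scikit-learn; treating each station's five aligned stage values as its feature vector, and the placeholder data below, are assumptions made only for the example.

import numpy as np
from sklearn.metrics import silhouette_score, silhouette_samples

# aligned: (n_stations, n_stages) matrix of stage-aligned counts,
# labels: cluster index assigned to each station.
rng = np.random.default_rng(0)
aligned = rng.normal(size=(30, 5))          # placeholder data
labels = rng.integers(0, 4, size=30)        # placeholder cluster labels

mean_sil = silhouette_score(aligned, labels)        # overall validity
per_station = silhouette_samples(aligned, labels)   # one value per station
print(round(float(mean_sil), 2), per_station.shape)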
4 Discussion

The results show that our algorithm can be a useful technique for finding key stages and for classification within a temporal process with irregular sampling and noise. It turns out to be effective on the glaucomatous dataset, as it obtains better classification performance than both the BN and the DBN. It must be noted that BN techniques are among the best classifiers on this type of data (Ceccon et al., 2010). Tested using different CV folds, the algorithm outperforms the others, showing good robustness with different training dataset sizes (Figure 6). In fact, given that the model uses a “compressed” subsample of the data, i.e. the most informative one, we only require a few subjects to effectively train the model. The process of aligning the time series over the leaf nodes transforms the temporal dimension of the data points into a “stages” dimension, as defined by the data itself. By looking at the stages as a discrete series, this approach unfolds the hidden pattern in a dataset by teasing out the noise and the shared variability between the classes. In other words, the pattern followed by the key stages is the one that best discriminates glaucoma patients from controls. Looking at the SD can also lead to interesting insights, as it reflects the nature of the data and also the modeling process itself: for instance, if the SD decreases over the stages, a subject that shows a high value at a later stage is classified as control with a higher probability (a smaller SD means a higher peak in the probability distribution) than the same value at an early stage. This is due to the fact that the model “learns” that towards a later stage it is easier to discriminate between the two, i.e. it exploits time-dependent information. At early stages, it is more difficult to say anything about the subjects, therefore their SDs, and of course the mean values, are more similar between the two groups. On the other hand, the glaucomatous distributions towards later stages show that there is higher variability in the glaucomatous data, i.e. the last stage alone has less weight compared
to control subjects. The nature of the class node is very important in the learning process, as it guides the alignment by leading to the selection of the set of data that gives the best discrimination between the classes. However, there is no dependence on clinical metrics, because the glaucoma onset point indicated by clinicians and clinical metrics was not used as an indicator for training. Rather, the patients who eventually lost their vision at some point were considered “glaucomatous” from the beginning. Therefore, there might be an overestimation of false negatives with respect to the indicated onset. However, this approach gives us an independent technique to characterize the whole disease process in contrast with the controls. Moreover, it allows glaucomatous patients and controls to be discriminated earlier than with the other approaches. Figure 8 shows an example of how the algorithm can be used to characterize different patients by looking at their disease process in terms of stages.
Fig. 8. Sensitivity values in dB over time in months (top) and probability of having glaucoma according to the DSBN over time in months (bottom) for two patients (left and right). Squares represent the selected data for each stage after the last visit; however, the probability values shown at each time point depend only on previous visits.
For instance (Figure 8, left), it is possible to observe that one patient’s functionality clearly decreases smoothly over time (top), and this corresponds to an increasing probability of having glaucoma (bottom). However, other patients (Figure 8, right) might remain in the same stage for a long time, and then quickly worsen and progress to the subsequent stages. It is also very interesting to look at the discarded data, i.e. the “noise” component of the time series. In Figure 8, at the 4th stage of the patient on the right (top), there is a peak towards a high sensitivity value. This is a clear noise signal, due for instance to short-term variability, and in fact the algorithm discards it and the corresponding glaucoma probability value (bottom) remains at 1.0. This differs from static BNs, where each visit is treated as independent of those that happened before. Another result that was explored was the assessment of the point-wise data values (i.e. before averaging for modelling) corresponding to the averaged data selected for each stage. This analysis showed very interesting insights which are only partially confirmed in the literature and can therefore offer much new information about the disease process. For instance, it was found that the first stages are characterized by an arcuate-localized decrease of visual field functionality, followed by a decrease only in the central area of the visual field. The results obtained
on dataset B confirmed the ability of the DSBN to extract patterns from longitudinal data as well as exploit the strengths of the BN framework. In this case, data was clustered into 4 groups which clearly show the typical patterns observed by looking at the data. However, they were mapped in terms of stages, i.e. what occurs between the peaks (the key stages) is not of interest and was discarded as “noise”. Indeed, being uninformative, it represents noise when the aim is to group the data effectively. Notice that, for these patterns to be identified correctly, the data must have been correctly aligned on the peak times, again demonstrating the effectiveness of the DSBN.
5 Conclusions

We tested the DSBN with a simple univariate model using functional glaucoma measurements. The algorithm successfully identified meaningful key stages in the disease process. In terms of classification performance, the DSBN outperformed the other BN techniques, proving to be an effective classification technique for noisy and irregular longitudinal datasets. The learning process could be expanded to allow structural changes over the network, obtaining a flexible model that allows one to find relationships between data whilst optimally aligning and simplifying it. This demonstrates the potential of the technique, as it would become possible to explore relationships between different variables at different stages and improve the classification performance. This technique is not limited to this particular data framework. In fact, the strengths of BN models can be exploited further, as we showed by successfully aligning and clustering a completely different dataset. The technique can potentially be used on many types of datasets: for example, on irregularly spaced time series with differing degrees of noise, which present underlying trends that can be used for classification, clustering or forecasting.
References 1. Artes, P.H., Chauhan, B.C.: Longitudinal changes in the visual field and optic disc in glaucoma. Progress in Retinal and Eye Research 24(3), 333–354 (2005) 2. Ceccon, S., Garway-Heath, D., Crabb, D., Tucker, A.: Investigations of Clinical Metrics and Anatomical Expertise with Bayesian Network Models for Classification in Early Glaucoma. In: IDAMAP 2010, Washington, United States (2010) 3. Dean, T., Kanazawa, K.: A model for reasoning about persistence and causation. Artificial Intelligence 93(1-2), 1–27 (1989) 4. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society 39(1), 1–38 (1977) 5. Flammer, J., Drance, S.M., Zulauf, M.: Differential light threshold. Short- and long-term fluctuation in patients with glaucoma, normal controls, and patients with suspected glaucoma. Arch Ophthalmol. 102, 704–706 (1984) 6. Haas, A., Flammer, J., Schneider, U.: Influence of age on the visual fields of normal subjects. American Journal of Ophthalmology 101(2), 199–203 (1986)
7. Hutchings, N., Wild, J.M., Hussey, M.K., Flanagan, J.G., Trope, G.E.: The Long-Term Fluctuation of the Visual Field in Stable Glaucoma. Invest. Ophthalmol. Vis. Sci. 41, 3429–3436 (2000) 8. Jacobs, R.A., Jordan, M.I., Nowlan, S.J., Hinton, G.E.: Adaptive mixtures of local experts. Neural Computation 3(1), 79–87 (1991) 9. Jain, A.K., Murty, M.N., Flynn, P.J.. Data clustering: a review. ACM Computing Surveys 31(3) (1999) 10. Murphy, K.P.: Dynamic Bayesian Networks: Representation, Inference and Learning. University of California, Berkeley (2002) 11. Pearl, J.: Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann, San Francisco (1988) 12. Resnikoff, S., Pascolini, D., Etya’ale, D., Kocur, I., Pararajasegaram, R., Pokharel, G.P., Mariotti, S.P.: Global data on visual impairment in the year 2002. Bulletin of the World Health Organization 82, 844–851 (2004) 13. Rousseeuw, P.J.: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J. of Computational and Applied Mathematics 20, 53–65 (1987) 14. Spry, P.G.D., Johnson, C.A.: Identification of Progressive Glaucomatous Visual Field Loss. Survey of Ophthalmology 47(2), 158–173 (2002) 15. Transport of London Ltd. London Underground passenger counts data. Greater London Authority, City of London, London (April 2010) 16. Tucker, A., Vinciotti, V., Liu, X., Garway-Heath, D.: A spatio-temporal Bayesian network classifier for understanding visual field deterioration. Artificial Intelligence in Medicine 34(2), 163–177 (2005) 17. Yanoff, M., Duker, J.S.: Ophthalmology, 2nd edn., Mosby, St. Louis (2003)
Mining Train Delays

Boris Cule 1, Bart Goethals 1, Sven Tassenoy 2, and Sabine Verboven 2,1

1 University of Antwerp, Department of Mathematics and Computer Science, Middelheimlaan 1, 2020 Antwerp, Belgium
2 INFRABEL - Network, Department of Innovation, Barastraat 110, 1070 Brussels, Belgium
Abstract. The Belgian railway network has a high traffic density with Brussels as its gravity center. The star-shape of the network implies heavily loaded bifurcations in which knock-on delays are likely to occur. Knock-on delays should be minimized to improve the total punctuality in the network. Based on experience, the most critical junctions in the traffic flow are known, but others might be hidden. To reveal the hidden patterns of trains passing delays to each other, we study, adapt and apply the state-of-the-art techniques for mining frequent episodes to this specific problem.
1
Introduction
The Belgian railway network, as shown in Figure 1, is very complex because of the numerous bifurcations and stations at relatively short distances. It belongs to the group of the most dense railway networks in the world. Moreover, its star-shaped structure creates a huge bottleneck in its center, Brussels, as approximately 40% of the daily trains pass through the Brussels North-South junction. During the past five years, the punctuality of the Belgian trains has gradually decreased towards a worrisome level. Meanwhile, the number of passengers, and therefore also the number of trains necessary to transport those passengers, has increased. Even though the infrastructure capacity is also slightly increasing by doubling the number of tracks on the main lines around Brussels, the punctuality is still decreasing. To solve the decreasing punctuality problem, its main causes should be discovered, but because of the complexity of the network, it is hard to trace their true origin. It may happen that a structural delay in a particular part of the network seems to be caused by busy traffic, although in reality it might be caused by a traffic operator in a seemingly unrelated place, who makes a bad decision every day, unaware of the consequences of that decision. We study the application of data mining techniques in order to discover related train delays in this data. In a related work, Flier et al. [2] try to discover dependencies in the underlying causes of the delays, whereas we search for patterns in the delays themselves. Mirabadi and Sharifian [6] use association mining to analyse the causes in accident data sets, while we consider frequent pattern mining methods. More specifically, we analyse a dataset consisting of a sequence of delayed trains using a recently developed frequent episode mining technique [8].
Fig. 1. The Belgian Railway Network
An episode in a sequence is usually considered to be a set of events that reoccurs in the sequence within a window of specified length [5]. The order in which the events occur is also considered important. The order restrictions of an episode are typically described by a directed acyclic graph. We use a database provided by Infrabel, the Belgian railway maintenance company, containing the times of trains passing through characteristic points in the railway network. In order to discover hidden patterns of trains passing delays to each other, our first goal is to find frequently occurring sets of train delays. More precisely, we try to find all delays that frequently occur within a certain time window, counted over several days or months of data, and interdependencies among them. For example, we consider episodes such as: Trains A, B, and C, with C departing before A and B, are often delayed at a specific location, approximately at the same time. Computing such episodes, however, is intractable in practice as the number of such potentially interesting episodes grows exponentially with the number of trains [4]. Fortunately, efficient episode mining techniques have recently been developed, making the discovery of such episodes possible. A remaining challenge is still to distinguish the interesting episodes from the irrelevant ones. Typically, a frequent episode mining algorithm will find an enormous amount of episodes amongst which many can be ruled out by irrelevance. For example, two local trains which have no common characteristic points in their route could, however, appear as a pattern if they are both frequently delayed, and their common occurrence can be explained already by statistical independence. In the next Section, we give a detailed description of the dataset, and in Section 3, we discuss various patterns and describe the method we used. In Section 4, we discuss how we preprocessed the collected data, give a concrete
case study at one particular geographical location, and report on preliminary experiments showing promising results. Finally, we conclude the paper with suggestions for future work in Section 5.
2
The Dataset
The Belgian railway network contains approximately 1800 characteristic geographic reference points — stations, bifurcations, unmanned stops, and country borders. At each of these points, timestamps of passing trains are being recorded. As such, a train trajectory can be reconstructed using these measurements along its route. In practice, however, the true timestamps are not taken at the actual characteristic points, but at enclosing signals. The timestamp t_{a,i} for arrival at characteristic point i is approximated using the times recorded at the origin S_{o,i} and the destination S_{d,i} of the train passing i as follows:

    t_{a,i} = t_{S_{o,i}} + d_{o,i} / v_i        (1)

where d_{o,i} is the distance from S_{o,i} to the characteristic point i and the velocity v_i is the maximal permitted speed at S_{d,i}. To calculate the timestamp t_{d,i} for departure at characteristic point i we use

    t_{d,i} = t_{S_{d,i}} - d_{d,i} / v_i        (2)

where d_{d,i} is the distance from S_{d,i} to the characteristic point i. Hence, based on these timestamps, delays can be computed by comparing t_{a,i} and t_{d,i} to the scheduled arrival and departure times. Table 1 gives a small fictional example of the relevant part of the provided data (irrelevant columns were omitted). We now describe this dataset column per column. The first two columns are self-explanatory and contain the date and the Train ID respectively. The third column, PTCar, contains the geographical measurement points (the characteristic points referred to above), and is followed by the arrival and departure times of the train at that point (computed using the approximation method described above). Finally, the sixth and seventh columns contain the arrival and departure delay respectively, computed by comparing the actual and scheduled arrival and departure times. The actual dataset we worked with contained the complete information about all departures and arrivals of all trains in characteristic points in Belgium in January 2010.
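A small sketch of these interpolation formulas is given below; the variable names, units and example numbers are invented purely for illustration.

def approx_arrival(t_signal_origin, dist_origin_to_point, v_max):
    """Equation (1): arrival time at characteristic point i,
    interpolated from the timestamp recorded at the preceding signal."""
    return t_signal_origin + dist_origin_to_point / v_max

def approx_departure(t_signal_dest, dist_dest_to_point, v_max):
    """Equation (2): departure time at characteristic point i,
    interpolated back from the timestamp recorded at the following signal."""
    return t_signal_dest - dist_dest_to_point / v_max

# Times in seconds since midnight, distances in metres, speed in m/s.
t_a = approx_arrival(t_signal_origin=22_500, dist_origin_to_point=800, v_max=25.0)
t_d = approx_departure(t_signal_dest=22_650, dist_dest_to_point=600, v_max=25.0)
delay_arr = t_a - 22_440   # compare with the scheduled arrival time
print(t_a, t_d, delay_arr)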
3
Frequent Episodes
Before we present our chosen method of mining frequent episodes in the Infrabel dataset, we start off with a short discussion of why we opted not to use simpler patterns, such as frequent itemsets or sequences.
Table 1. An excerpt from a fictional train delay database

Date        Train ID  PTCar  Arrival Time  Departure Time  Arr. Delay  Dep. Delay
...
15/02/2010  100       1255   -             06:07:23        -           23
15/02/2010  100       941    06:14:18      06:17:57        18          117
15/02/2010  100       169    06:37:25      06:38:30        205         210
...
15/02/2010  100       445    07:28:03      -               183         -
15/02/2010  123       114    -             06:11:58        -           -2
15/02/2010  123       621    06:24:22      06:26:10        82          70
15/02/2010  123       169    06:31:51      06:33:49        231         229
...
15/02/2010  123       541    07:10:37      -               97          -
...

3.1 Itemsets
The simplest possible patterns are itemsets [3]. Typically, we look for items (or events) that often occur together, where the user, by setting a frequency threshold, decides what is meant by ‘often’. In this setting, the data is usually organised in a transaction database, and we consider an event (or a set of events) frequent if the number of transactions in which it can be found (its support) is greater than or equal to a user-defined support threshold. In order to mine frequent itemsets in the traditional way, the Infrabel data would need to be transformed. In principle, a transaction database could be created, such that each transaction would consist of the train IDs of trains that were late within a given period of time. Each transaction would represent one such period. Mining frequent itemsets would then result in obtaining sets of train IDs that are often late ‘together’. Clearly, though, the frequent itemset method is neither intuitive nor suitable for tackling our problem. It would require a lot of preprocessing work in order to transform the data into the necessary format, and the resulting database would contain many empty or identical transactions. Assume we look at time windows of five minutes. As our time stamps are expressed in seconds, each second would represent a starting point of such a window. It is easy to see that many consecutive windows would contain exactly the same train IDs. Most importantly, though, frequent itemsets are simply too limited to extract all the useful information from the Infrabel database, as they contain no temporal information whatsoever.
3.2 Sequences
The Infrabel database is, by its nature, sequential, and it would be natural to try to generate patterns taking the order of events into account, too. Typically, when searching for sequential patterns, the database consists of a set of sequences, and the goal is to find frequent subsequences, i.e., sequences that can be found in many of the sequences in the database [9,1].
Again, in order to mine frequent sequences in the traditional way, the Infrabel data would need to be transformed. We could approach this in a way similar to the one described above for itemsets — namely, we could create a sequence for each time window of a certain length. Each such sequence would then contain the train IDs of trains that were late within that window, only this time the train IDs in each sequence would be ordered according to the time at which they were late (the actual, rather than the scheduled, time of arrival or departure). Now, instead of only finding which trains were late together, we can also identify the order in which these trains were late. This method, though clearly superior to the frequent itemset technique, still suffers from the same problems in terms of preprocessing and redundancy of the resulting dataset, and still does not allow us to generate patterns that we wish to find. In this setting, we are only capable of discovering patterns with a total order. If, for example, we have a situation in which trains A and B, when both delayed, cause train C to also be delayed, but the order in which A and B come into the station does not matter, this method will fail to discover this as a pattern (assuming that the supports of sequences ABC and BAC are below the desired threshold).
3.3 Episodes
One step further from both itemsets and sequences are episodes [5]. An episode is a temporal pattern that can be represented as a directed acyclic graph, or DAG. In such a graph, each node represents an event (an item, or a symbol), and each directed edge from event x to event y implies that x must take place before y. If such a graph contained cycles, this would be contradictory, and could never occur in a database. Note that both itemsets and sequences can be represented as DAGs. An itemset is simply a DAG with no edges (events can then occur in any order), and a sequence is a DAG where the events are fully ordered (for example, a sequence s1 s2 · · · sk corresponds to graph (s1 → s2 → · · · → sk)). However, episodes allow us to find more general patterns, such as the one given in Figure 2. The pattern depicted here tells us that a occurs before b and c, while b and c both occur before d, but the order in which b and c occur may vary. Formally, an event is a couple (si, ti) consisting of a symbol s from an alphabet Σ and a time stamp t, where t ∈ N. A sequence s is an ordered set of events, i.e., ti ≤ tj if i < j. An episode G is represented by a directed acyclic graph with labelled nodes, that is, G = (V, E, lab), where V = {v1, . . . , vK} is the set of nodes, E is the set of directed edges, and lab is the labelling function lab : V → Σ, mapping each node vi to its label.

Fig. 2. A general episode
Given a sequence s and an episode G we say that G occurs in s if there exists an injective map f mapping each node vi to a valid index such that the node vi in G and the corresponding sequence element (s_f(vi), t_f(vi)) have the same label, i.e., s_f(vi) = lab(vi), and that if there is an edge (vi, vj) in G, then we must have f(vi) < f(vj). In other words, the parents of vj must occur in s before vj. The database typically consists of one long sequence of events coupled with time stamps, and we want to judge how often an episode occurs within this sequence. We do this by sliding a time window (of chosen length t) over the sequence and counting in how many windows the episode occurs. Note that each such window represents a sequence — a subsequence of the original long sequence s. Given two time stamps, ti and tj, with ti < tj, we denote by s[ti, tj[ the subsequence of s found in the window [ti, tj[, i.e., those events in s that occur in the time period [ti, tj[. The support of a given episode G is defined as the number of windows in which G occurs, or

    sup(G) = |{(ti, tj) | ti ∈ [t1 − t, tn], tj = ti + t and G occurs in s[ti, tj[}|.

The Infrabel dataset corresponds almost exactly to this problem setting. For each late train, we have its train ID, and a time stamp at which the lateness was established. Therefore, if we convert the dataset to a sequence consisting of train IDs and time stamps, we can easily apply the above method.
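A minimal sketch of this occurrence test and window count is shown below; the brute-force search over index assignments, the integer timestamps and the data layout (list of (timestamp, train_id) pairs) are simplifying assumptions for illustration, not the ClosEpi algorithm itself.

from itertools import permutations

def occurs(episode_nodes, episode_edges, window):
    """episode_nodes: list of labels (train IDs); episode_edges: list of
    (i, j) index pairs meaning node i must occur before node j.
    window: list of (timestamp, label) events ordered by time.
    Returns True if the episode occurs in the window (brute force)."""
    for assign in permutations(range(len(window)), len(episode_nodes)):
        if all(window[assign[k]][1] == episode_nodes[k]
               for k in range(len(episode_nodes))) and \
           all(assign[i] < assign[j] for i, j in episode_edges):
            return True
    return False

def support(episode_nodes, episode_edges, sequence, window_len):
    """Count sliding windows [t, t + window_len[ in which the episode occurs."""
    if not sequence:
        return 0
    t_first, t_last = sequence[0][0], sequence[-1][0]
    count = 0
    for start in range(t_first - window_len + 1, t_last + 1):
        window = [e for e in sequence if start <= e[0] < start + window_len]
        if occurs(episode_nodes, episode_edges, window):
            count += 1
    return count

# Episode: train 8904 delayed before trains 8905 and 8963.
seq = [(10, "8904"), (14, "8905"), (16, "8963"), (40, "8905")]
print(support(["8904", "8905", "8963"], [(0, 1), (0, 2)], seq, window_len=30))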
3.4 Closed Episodes
Another problem that we have already touched upon is the size of the output. Often, much of the output can be left out, as a lot of patterns can be inferred from a certain smaller set of patterns. It is inherent in the nature of the support measure that for each discovered frequent pattern, we also know that all its subpatterns must be frequent. However, should we leave out all these subepisodes, the only thing we would know about them is that they are frequent, but we would be unable to tell how frequent. If we wish to rank episodes, and we do, we cannot remove any information about the frequency from the output. Another way to reduce the output is to generate only closed patterns [7]. In general, a pattern is considered closed, if it has no superpattern with the same support. This holds for episodes, too. Formally, we first have to define what we mean by a superepisode. We say that episode H is a superepisode of episode G if V (G) ⊆ V (H), E(G) ⊆ E(H) and labG (v) = labH (v) for all v ∈ G, where labG is the labelling function of G and labH is the labelling function of H. We say that G is a subepisode of H, and denote G ⊆ H. We say an episode is closed if there exists no episode H, such that G ⊆ H and sup(G) = sup(H). As an example, consider a sequence of delayed trains ABCXY ZABC. For simplicity, assume the time stamps to be consecutive minutes. Given a sliding window size of 3 minutes, and a support threshold of 2, we find that episode (A → B → C), meaning that train A is delayed before B, and B before C, has frequency 2, but so do all of its subepisodes of size 3, such as (A → B, C),
(A, B → C) or (A, B, C). These episodes can thus safely be left out of the output, without any loss of information. Thus, if episode (A → B) is in the output, and episode (A, B) is not, we can safely conclude that the support of episode (A, B) is equal to the support of episode (A → B). Furthermore, we can conclude that if these two trains are both late, then A will always depart/arrive first. If, however, episode (A, B) can be found in the output, and neither (A → B) nor (B → A) are frequent, we can conclude that these two trains are often late together, but not necessarily in any particular order. If both (A, B) and (A → B) are found in the output, and (B → A) is not, then the support of (A, B) must be higher than the support of (A → B), and we can conclude that the two trains are often late together, and A mostly arrives/departs earlier than B. In our experiments, we have used the latest implementation of the ClosEpi algorithm for generating closed episodes [8]. In this work, the authors actually mine only strict episodes, whereby they insist that an episode containing two nodes with the same label must have an edge between them, but this restriction is not relevant here, as we never find such episodes. For an episode to contain two nodes with the same label, it would need to contain the same train ID twice, and, in our dataset, no train ID is used twice on the same day.
4
Experiments
In this section we first describe the preprocessing steps that we had to take in order to transform the provided database into a valid input for the ClosEpi algorithm. We then present a case study consisting of a detailed analysis of patterns found in one particular train station in Belgium.
4.1 Data Preprocessing
The preprocessing of the Infrabel dataset consisted of two main steps. First, we noted that if we look at all data in the Infrabel database as one long sequence of late trains coupled with time stamps, we will find patterns consisting of trains that never even cross paths. To avoid this, we split the database into smaller datasets, each of which contained only information about delays in one particular spatial reference point. Further, to avoid mixing apples and pears, we split each spatial point into two datasets — one containing departure information, and the other arrival data. In this way, we could find episodes containing trains that are late at, for example, arrival, at approximately the same time, in the same place. Second, we needed to eliminate unnecessary columns and get the data into the desired input format for the ClosEpi algorithm. In other words, our dataset needed to consist only of a time stamp and a train ID. Clearly, we first needed to discard all rows from the database that contained trains that were not delayed at all (depending on the application, these were the trains that were delayed less than either 3 or 6 minutes). Once this was done, the actual delay became irrelevant, and could also be discarded. Obviously, in the arrival dataset, all information about departures
could also be removed, and vice versa, and as the spatial point was clear from the dataset we used, we could also remove it from the actual content of the dataset. Finally, we merged the date and time columns to create one single time stamp, and ordered each dataset on this new merged column. Starting from the fictional example given in Table 1, assuming we consider trains with a delay of 3 or more minutes at arrival as delayed, for characteristic spatial point 169 we would obtain the input dataset given in Table 2.

Table 2. The data from Table 1 after preprocessing for arrivals at spatial point 169
Time stamp             Train ID
...
06:31:51 15/02/2010    123
...
06:37:25 15/02/2010    100
...
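A rough pandas sketch of these preprocessing steps is shown below; the column names, file layout and 180-second threshold mirror the description above but are assumptions for illustration, not Infrabel's actual schema.

import pandas as pd

def preprocess_arrivals(df, ptcar, min_delay_s=180):
    """Keep arrivals at one characteristic point with delay >= min_delay_s,
    returning only a merged time stamp and the train ID, ordered by time."""
    sel = df[(df["PTCar"] == ptcar) & (df["ArrDelay"] >= min_delay_s)].copy()
    sel["timestamp"] = pd.to_datetime(
        sel["Date"] + " " + sel["ArrivalTime"], dayfirst=True)
    return sel[["timestamp", "TrainID"]].sort_values("timestamp")

# Example rows mirroring the fictional excerpt in Table 1.
df = pd.DataFrame({
    "Date": ["15/02/2010"] * 2,
    "TrainID": [123, 100],
    "PTCar": [169, 169],
    "ArrivalTime": ["06:31:51", "06:37:25"],
    "ArrDelay": [231, 205],
})
print(preprocess_arrivals(df, ptcar=169))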
4.2 Example
The Belgian railways recognise two types of delays, those of 3 or more minutes (the blocking time) and those of 6 or more minutes (the official threshold for a train to be considered delayed), so we ran experiments for both of these parameters. We have chosen a window size of 30 minutes (or 1800 seconds) — if two trains are delayed more than half an hour apart, they can hardly form a pattern. We tested the algorithm on data collected in Zottegem, a medium-sized station in the south-west of Belgium. Zottegem was chosen as it has an intelligible infrastructure, as shown in Figure 3. The total number of trains leaving Zottegem in the month of January 2010 was 4412. There were 696 trains with a departure delay at Zottegem of 3 minutes or more, of which 285 had a delay at departure larger than or equal to 6 minutes. The delays are mainly situated during peak hours. As the number of delayed trains passing through Zottegem is relatively small, the output can be manually evaluated. As can be seen in Figure 3, the two railway lines intersecting at Zottegem are line 89, connecting Denderleeuw with Kortrijk, and line 122, connecting Ghent-Sint-Pieters and Geraardsbergen. Line 89 is situated horizontally on the scheme and line 122 goes diagonally from the upper right corner to the lower left corner. This intersection creates potential conflict situations which adds to the station’s complexity. Moreover, the station must also handle many connections, which can also cause the transmission of delays. For example, Figure 4 shows the occupation of the tracks in the station in the peak period between 17:35 and 17:50 on a typical weekday. We will analyse some of the patterns discovered in this period later in this section. The trains passing through Zottegem are categorized as local trains (numbered as the 16-series and the 18-series), cityrail (22-series) going to and coming from Brussels, intercity connections (23-series) with fewer stops than a cityrail or a local train, and the peak hour trains (89-series).
Fig. 3. The schematic station layout of Zottegem
Fig. 4. Occupation of the tracks during evening peak hour at Zottegem
The output of the ClosEpi algorithm is a rough text file of closed episodes with a support larger than the predefined threshold. An episode is represented by a graph of size (n, k) where n is the number of nodes and k the number of edges. Note that a graph of size (n,0) is an itemset. We aimed to discover the top 20 episodes of size 1 and 2, and the top 5 episodes of size 3 and 4, so we varied the support threshold accordingly. All experiments lasted less than a minute. In Tables 3–6, some of the episodes detected among the top 20 most frequently appearing patterns are listed. For example, the local train no. 1867 from Zottegem to Kortrijk is discovered as being 3 or more minutes late at departure on 15 days, and 6 or more minutes late on 8 days, in the month of January 2010. A paired pattern can be a graph of size (2,0), meaning the trains appear together but without a specific order, or of size (2,1), where there is an order of appearance for the two trains. For example, train no. 1867 and train no. 8904 appear together as being 3 or more minutes late in 15079 windows in January 2010. The pattern in which trains no. 8904 and 1867 have a delay at departure of 3 or
Table 3. Episodes of size (1,0) representing the delay at departure in station Zottegem during evening peak hour (16h – 19h) for January 2010

Train ID  Route                                 Support (Delay ≥ 3')  Support (Delay ≥ 6')
1867      Zottegem – Kortrijk                   27000                 14400
8904      Schaarbeek – Oudenaarde               28800                 18000
8905      Schaarbeek – Kortrijk                 27000                 14400
8963      Ghent-Sint-Pieters – Geraardsbergen   25200                 12600
more minutes, and train no. 8904 leaves before no. 1867, appears in 13557 such windows. Among the top 20 patterns with pairs of trains (Table 4), it can be noticed that the pattern 1867 → 8963 was only discovered in the search for a delay at departure of 6 or more minutes. The pattern also appeared while searching for delays of 3 or more minutes, but its support was not high enough to appear in the top 20.

Table 4. Episodes of size (2,k) representing the delay at departure in station Zottegem during evening peak hour (16h – 19h) for January 2010

Episode         Support (Delay ≥ 3')  Support (Delay ≥ 6')
1867, 8904      15079                 -
1867 ← 8904     13557                 -
1867, 8905      18341                 -
1867 ← 8905     12995                 -
1867, 8963      18828                 8888
1867 → 8963     -                     5327
8904 → 8905     18608                 9506
8904, 8963      18410                 10391
8904 → 8963     16838                 8819
8905, 8963      20580                 10608
8905 → 8963     13325                 5078
8905 ← 8963     -                     5530
The most informative patterns are found among the episodes of size 3 and up, as can be seen in Tables 5 and 6. Some episodes shown here occurred quite often, but to discover the desired number of episodes of sizes (3,k) and (4,k) the threshold had to be lowered to 5500, which corresponds to a minimal appearance of the pattern on 4 days. The question remains whether these really are interesting patterns. Let us now return to the peak trains shown in Figure 4. In the example, the peak-hour train no. 8904 often departs from the station with a delay of at least 3 minutes, with a support of 28800, and a support of 18000 for a delay of at least 6 minutes (see Table 3). In real time, the peak-hour train no. 8905 follows train no. 8904 on the
Table 5. Episodes of size (3,k) representing the delay at departure in station Zottegem during evening peak hour (16h – 19h) for January 2010

Episode                        Support (Delay ≥ 3')  Support (Delay ≥ 6')
(8904 → 8905, 8904 → 8963)     14358                 7510
(8904 → 8905 → 8963)           11069                 -
(8904 → 8905, 8963)            15804                 8956
Table 6. Episode of size (4,k) representing the delay at departure in station Zottegem during evening peak hour (16h – 19h) for January 2010

Episode                                      Support (Delay ≥ 3')  Support (Delay ≥ 6')
(8904 → 1867, 8904 → 8905, 8904 → 8963)      10024                 6104
same trajectory, 4 minutes later. This can also be detected by looking at the occupation of the tracks in Figure 4. It is, therefore, obvious that whenever no. 8904 has a delay, the 8905 will also have a delay. It is therefore not surprising that this episode can be found in the output given in Table 4. Trains no. 1867 and no. 8963 both offer connections to trains no. 8904 and no. 8905. So, if train 8904 has a delay, it will be transmitted to trains 1867 and 8963. This is also stated in Table 6, which shows an episode of size four, found by the ClosEpi algorithm, where trains no. 8904, 1867, 8905, and 8963 are all late at departure, and 8904 departs before the other three trains.
5
Conclusion and Outlook
We have studied the possibility of applying state-of-the-art pattern mining techniques to discover knock-on train delays in the Belgian railway network, using an Infrabel database containing the times of trains passing through characteristic points in the network. Our experiments show that the ClosEpi algorithm is useful for detecting interesting patterns in the Infrabel data. There are still many opportunities for improvement, however. For example, a good visualization of the discovered episodes would certainly help in identifying the most interesting patterns in the data more easily. In order to avoid finding too many patterns consisting of trains that never even cross paths, we only considered trains passing through a single spatial reference point. As a result, we cannot discover knock-on delays over the whole network. In order to tackle this problem, the notion of a pattern needs to be redefined. Interestingness measures other than support, or other data preprocessing techniques, could also be investigated.
Acknowledgments. The authors would like to thank Nikolaj Tatti for providing us with his implementation of the ClosEpi algorithm.
References 1. Agrawal, R., Srikant, R.: Mining sequential patterns. In: Proc. of the 11th International Conference on Data Engineering, pp. 3–14 (1995) 2. Flier, H., Gelashvili, R., Graffagnino, T., Nunkesser, M.: Mining Railway Delay Dependencies in Large-Scale Real-World Delay Data. In: Ahuja, R.K., M¨ ohring, R.H., Zaroliagis, C.D. (eds.) Robust and Online Large-Scale Optimization. LNCS, vol. 5868, pp. 354–368. Springer, Heidelberg (2009) 3. Goethals, B.: Frequent Set Mining. In: The Data Mining and Knowledge Discovery Handbook, ch. 17, pp. 377–397. Springer, Heidelberg (2005) 4. Gunopulis, D., Khardon, R., Labbuka, H., Saluja, S., Toivonen, H., Sharma, R.S.: Discovering all most specific sentences. ACM Transactions on Database Systems 28(2), 140–174 (2003) 5. Mannila, H., Toivonen, H., Verkamo, A.I.: Discovery of Frequent Episodes in Event Sequences. Data Mining and Knowledge Discovery 1, 259–298 (1997) 6. Mirabadi, A., Sharifian, S.: Application of Association rules in Iranian Railways (RAI) accident data analysis. Safety Science 48, 1427–1435 (2010) 7. Tan, P.-N., Steinbach, M., Kumar, V.: Introduction to Data Mining. Pearson Addison Wesley (2006) 8. Tatti, N., Cule, B.: Mining Closed Strict Episodes. In: Proc. of the IEEE International Conference on Data Mining, pp. 501–510 (2010) 9. Wang, J.T.-L., Chirn, G.-W., Marr, T.G., Shapiro, B., Shasha, D., Zhang, K.: Combinatorial pattern discovery for scientific data: some preliminary results. ACM SIGMOD Record 23, 115–125 (1994)
Robustness of Change Detection Algorithms

Tamraparni Dasu 1, Shankar Krishnan 1, and Gina Maria Pomann 2

1 AT&T Labs - Research
2 North Carolina State University
Abstract. Stream mining is a challenging problem that has attracted considerable attention in the last decade. As a result there are numerous algorithms for mining data streams, from summarizing and analyzing, to change and anomaly detection. However, most research focuses on proposing, adapting or improving algorithms and studying their computational performance. For a practitioner of stream mining, there is very little guidance on choosing a technology suited for a particular task or application. In this paper, we address the practical aspect of choosing a suitable algorithm by drawing on the statistical properties of power and robustness. For the purpose of illustration, we focus on change detection algorithms (CDAs). We define an objective performance measure, streaming power, and use it to explore the robustness of three different algorithms. The measure is comparable for disparate algorithms, and provides a common framework for comparing and evaluating change detection algorithms on any data set in a meaningful fashion. We demonstrate on real world applications, and on synthetic data. In addition, we present a repository of data streams for the community to test change detection algorithms for streaming data.
1
Introduction
Data streams are increasingly prevalent in the world around us, in scientific, financial, industrial domains, in entertainment and communication and in corporate endeavors. There is a plethora of algorithms for summarizing data streams, for detecting changes and anomalies, and for clustering. However, the focus is typically on proposing, adapting, or improving a stream mining algorithm and comparing it to existing benchmark algorithms. Very little is understood about the behavior of the actual algorithm itself. Sometimes, the suitability of the algorithm is more critical than an incrementally improved efficiency or accuracy. In this paper, we outline a framework for addressing the behavioral properties of algorithms that will help practitioners choose the one most suited for their task. We focus on change detection algorithms for the purpose of illustration and propose a rigorous basis for understanding their behavior. Change detection algorithms (CDAs) are applied widely and of interest in many domains. The ability of an algorithm to detect change varies by type of change, and depends on the underlying decision making methodology. To understand this, consider the following three CDAs that will be utilized throughout the paper.
The first one is the rank-based method of Kifer et al. [11], which we call Rank. We study three tests defined in their paper: those based on the Kolmogorov-Smirnov (KS) statistic, the φ-statistic (Phi) and the Ξ-statistic (Xi). These tests are intended for one-dimensional data and use statistical tests that are based on rank ordering the data. The second method, proposed by Song et al. [13], which we call Density, uses kernel density estimation, and the resulting test statistic has an asymptotic Gaussian distribution. The third one is the information theoretic algorithm of Dasu et al. [5], which we call KL; it relies on bootstrapping to estimate the distribution of the Kullback-Leibler distance between two histograms. Given the radically different approaches to comparing distributions, each algorithm has its strengths and weaknesses, and a suitable choice depends on the task at hand. In this paper, we explore CDAs in the context of statistical power and robustness. Power measures the ability of a CDA to detect a change in distribution, while robustness refers to the effect of small perturbations in the data on the outcome of the algorithm.
1.1 Statistical Power
The power of a statistical test [4] is a fundamental concept in statistical hypothesis testing. It measures the ability of a test to distinguish between two hypotheses H0 and H1. A test statistic Tn(X) computed from an observed data sample X = {X1, X2, . . . , Xn} is used to decide whether a hypothesis H0 is likely to be true. Let C0, C1 be non-overlapping regions in the domain of Tn(X) such that the hypothesis is declared to be true if Tn(X) ∈ C0, and not true if Tn(X) ∈ C1. C1 is called the critical region, and is specified to meet the criterion

    P(T(X) ∈ C1 | H0) = α,    (1)

where α is the Type I error (false positive) probability. For a specified hypothesis H1 and given Type I error rate α,

Definition 1. The power of the test statistic T(X) against the specified alternative H1 is defined to be

    PT = P(T(X) ∈ C1 | H1).    (2)
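A minimal Monte Carlo sketch of how such power can be estimated empirically is given below; the two-sample Kolmogorov-Smirnov test, the Gaussian alternatives, the sample size and the number of trials are illustrative assumptions, not the exact setup behind the power surfaces discussed later.

import numpy as np
from scipy.stats import ks_2samp

def empirical_power(mu, sigma, n=200, alpha=0.05, n_trials=500, seed=0):
    """Fraction of trials in which a two-sample KS test at level alpha
    rejects when the reference sample comes from N(0,1) and the second
    sample from N(mu, sigma^2): a Monte Carlo estimate of power against H1."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_trials):
        ref = rng.normal(0.0, 1.0, size=n)      # data under H0
        alt = rng.normal(mu, sigma, size=n)     # data under the alternative H1
        if ks_2samp(ref, alt).pvalue < alpha:
            rejections += 1
    return rejections / n_trials

# One cell of a power surface: a shift in mean plus an inflated variance.
print(empirical_power(mu=0.3, sigma=1.2))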
The power in Equation 2 is estimated using Tn (X), the sample estimate of T (X ) based on the data. Figure 1(a) illustrates the power of the test T (X ) in distinguishing between the null hypothesis H0 and alternatives H1 and H2 . The red (light colored) curve represents the sampling distribution of the test statistic T (X ) when the null hypothesis H0 is true. The vertical line delineates the critical region C1 . The power against each alternative hypothesis is given by the area under the corresponding curve in the region C1 , as per Equation 2. The area under H0 in C1 is the Type I error α. When the alternate hypotheses (e.g. H1 or H2 ) depart significantly from the null hypothesis H0 , the sampling distribution of T (X ) under H0 differs significantly from that under H1 or H2 . The power increases,
Fig. 1. (a) Static power: Given null hypothesis H0 , power against alternative hypotheses H1 , H2 . (b) Streaming power computation.
reflecting an increasing ability of the test statistic T(X) to distinguish between H0 and the alternative distributions. In Figure 2, we plot the power of the three variants of the Rank test, Density and KL, for the two-parameter Gaussian N(0, 1). We call the resulting plots the power surfaces of the tests. Blue and green represent regions of low power, while red and brown represent high power. As the true distribution H1 departs from H0, due to the values of mean and variance deviating from the hypothesized values of 0 and 1 respectively, the algorithm is able to detect the change in distributions with greater probability or power. The surface is obtained empirically. We defer a discussion of the estimation of power to Section 3. Note that the three variants of the Rank CDA have very different power behavior. Rank-KS, based on the Kolmogorov-Smirnov (KS) statistic, has a very gradual increase in power as the distributions differ, and is less responsive to the change in variance as compared to the other two variants. This is because the KS statistic is based just on the maximum distance between the two cumulative distribution functions, which is influenced more by a location shift (mean change) than by a scale change (different variance). The second statistical concept that is important in determining the utility of an algorithm is robustness.
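As a minimal illustration of the static notion of power in Definition 1 (not the authors' estimation procedure, which is deferred to Section 3), the following sketch estimates the power of a two-sample Kolmogorov-Smirnov test against shifted and rescaled Gaussian alternatives by Monte Carlo simulation. The sample size, number of repetitions and use of scipy are our own assumptions.

```python
# Sketch: Monte Carlo estimate of the power of a two-sample KS test against
# Gaussian alternatives N(mu, sigma), one cell of an empirical "power surface".
import numpy as np
from scipy.stats import ks_2samp

def ks_power(mu, sigma, n=200, alpha=0.05, reps=500, rng=None):
    rng = rng or np.random.default_rng(0)
    rejections = 0
    for _ in range(reps):
        x0 = rng.normal(0.0, 1.0, n)          # sample from H0: N(0, 1)
        x1 = rng.normal(mu, sigma, n)         # sample from the alternative
        if ks_2samp(x0, x1).pvalue < alpha:   # test declares a difference
            rejections += 1
    return rejections / reps                  # empirical power

# power against the alternative N(0.3, 1.2)
print(ks_power(0.3, 1.2))
```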
1.2 Robustness
In order to study the stability or robustness of an algorithm to small perturbations in the data, we borrow the notion of the influence function (IF) from robust statistics. Let x be a data point from the sample space χ, and let F0 be the distribution from which the sample is assumed to be drawn. Further, let Δx be a Delta distribution that concentrates all probability mass at x. The IF measures the rate of change of T corresponding to an infinitesimal contamination of the data.
Fig. 2. Decision surface of the Rank algorithms while comparing a standard Gaussian N(0, 1) to the two-parameter family of Gaussian distributions as the mean and standard deviation depart from 0 and 1 respectively. Blue and green represent regions of low power, while red and brown represent high power. (a) Rank-KS, (b) Rank-Xi, (c) Rank-Phi, (d) Density, and (e) KL. Note that, while the general trends are similar in all methods, the iso-contours behave differently for each method. In (a), the contours are more sensitive to a location (mean) shift than to a scale change, whereas the behavior is reversed in (d).
Definition 2. The influence function of a test statistic T is given by

IF(x, T, F0) = lim_{ε→0} [T(ε Δx + (1 − ε) F0) − T(F0)] / ε.    (3)
For a statistic T to be robust, the IF should have the following properties:
– Small gross-error sensitivity: The IF should be bounded and preferably small, otherwise a small contamination in the sample can lead to large changes and unpredictable behavior of the statistic. max_x IF(x, T, F0) should be small.
– Finite rejection point: Beyond a certain point, outliers should have no effect on the statistic T. IF(x, T, F0) = 0, ∀x : |x| > r, for some reasonable r > 0.
– Small local-shift error: No neighborhood of any specific value of x should lead to large values of the influence function, because this would result in unexpected behavior in specific neighborhoods. max_{x,y, x≠y} [IF(y, T, F0) − IF(x, T, F0)] / (y − x) should be small.
A detailed discussion is outside the scope of this paper; see [8] for further reference. We use the notion of robustness of streaming power to propose a framework for exploring, evaluating and choosing a CDA.
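To make the influence function concrete, the sketch below computes an empirical IF for the sample mean by adding a small point-mass contamination at x, following Definition 2. The choice of statistic, the contamination size eps and the data are illustrative assumptions of ours, not part of the paper.

```python
# Sketch: empirical influence function obtained by contaminating the sample
# distribution with a point mass at x, using a small finite eps.
import numpy as np

def weighted_mean(values, weights):
    return np.sum(values * weights)

def empirical_if(stat, x, sample, eps=1e-3):
    # T((1 - eps) * F0 + eps * Delta_x), approximated by reweighting the sample
    contaminated = np.append(sample, x)
    weights = np.append(np.full(len(sample), (1 - eps) / len(sample)), eps)
    t_contam = stat(contaminated, weights)
    t_clean = stat(sample, np.full(len(sample), 1.0 / len(sample)))
    return (t_contam - t_clean) / eps

rng = np.random.default_rng(1)
sample = rng.normal(0, 1, 1000)
for x in (1.0, 10.0, 100.0):
    print(x, empirical_if(weighted_mean, x, sample))
# The influence of the mean grows without bound in x (unbounded gross-error
# sensitivity), exactly the behavior the criteria above warn against.
```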
2 Related Work
A variety of change detection schemes have been studied in the past, examining static datasets with specific structure [3] and time series data [2,10], and detecting “burstiness” in data [12,14]. The definition of change has typically involved fitting a model to the data and determining when the test data deviates from the built model [9,7]. Other work has used statistical outliers [14] and likelihood models [12]. The paper by Ganti et al. [7] uses a family of decision tree models to model data, and defines change in terms of the distance between model parameters that encode both topological and quantitative characteristics of the decision trees. They use bootstrapping to determine the statistical significance of their results. Kifer et al. [11] lay out a comprehensive nonparametric framework for change detection in streams. They exploit order statistics of the data, and define generalizations of the Wilcoxon and Kolmogorov-Smirnov (KS) tests in order to define distance between two distributions. They also define two other test statistics based on relativized discrepancy using the difference in sample proportions. The first test, φ, is standardized to be faithful to the Chernoff bound, while the second test, Ξ, is standardized to give more weight to the cases where the proportion is close to 0.5. Their method is effective on one-dimensional data streams; however, as they point out, their approach does not trivially extend to higher dimensions. Aggarwal [1] considers the change detection problem in higher dimensions based on kernel methods; however, his focus is on detecting the “trends” of the data movement, and his approach has a much higher computational cost. Given a baseline data set and a set of newly observed data, Song, Wu, Jermaine and Ranka [13] define a test statistic called the density test based on kernel estimation to decide if the observed data is sampled from the baseline distribution. The baseline distribution is estimated using a novel application of the Expectation Maximization (EM) algorithm. Their test statistic is based on the difference in log probability density between the reference and test dataset, which they show to exhibit a limiting normal distribution. Dasu et al. [5] present an algorithm for detecting distributional changes in multi-dimensional data streams using an information-theoretic approach. They use a space partitioning scheme called the kdq-tree to construct multi-dimensional histograms, which approximate the underlying distribution. The Kullback-Leibler (KL) distance between reference and test histograms, along with bootstrapping [6], is used to develop a method for deciding distributional shifts in the data stream.
3 Our Framework
In this section, we introduce a framework for evaluating CDAs using a mixture model that naturally captures the behavior of a data stream.
Consider a multi-dimensional data stream where a data point x = (x1, x2, . . . , xd) consists of d attributes, categorical or continuous. Assume that the change detection algorithm uses the sliding window framework, where a window refers to a contiguous segment of the data stream containing a specified number of data points n. The data stream distribution in each window Wi corresponds to some Fi ∈ F, the distribution space of the data stream. The data distribution Fi in Wi is compared to the data distribution F0 in a reference window W0, each window of width n. Suppose that the data stream's distribution changes over time from F0 to F1. Define the distribution to be tested, Fδ, as

Fδ = δ F1 + (1 − δ) F0.    (4)
This is a natural model for the way change occurs in the sliding window framework, as shown in the example explained in Figure 1(b). The stream's initial reference distribution is F0 (lighter shade), contained in windows W0 and W1. As the stream advances, its distribution starts changing to F1 (darker shade). Window Wi starts encountering the new distribution and contains a mixture of F0 and F1. The mixing proportion changes with time, with the contaminating distribution F1 becoming dominant by window Wk, culminating with δ = 1, when all the data points in the window are from F1. Once the algorithm detects the change, the reference window is reset to the current window. To compute power, we sample with replacement from each of the windows in the test pair (W0, Wi) and generate B sample test pairs {(W0, Wi)_j}, j = 1, . . . , B. We run the algorithm on each of the B pairs of windows and gather the set of B binary change outcomes {Ij}, j = 1, . . . , B. This set consists of B i.i.d. Bernoulli trials with probability pi (the subscript refers to the test window Wi) that Ij = 1. Since Ij = 1 ⇔ the algorithm detects change, pi represents the probability that the algorithm will detect a change between F0 and Fi, and is the streaming power of the algorithm A. We will define it formally in Section 3.1. We estimate pi using the B replications by computing the proportion of “Change” responses:

p̂i = (1/B) Σ_{j=1}^{B} Ij.    (5)

A high proportion of change represents a high ability to discriminate between the data distributions in the two windows of the test pair (W0, Wi), i.e. high streaming power. A graph of p̂i, the proportion of change responses in B Bernoulli trials, is shown as a curve at the bottom of Figure 1(b). When the data in the two windows W0 and Wi of the test pair come from the same distribution F0, we expect the streaming power to be low, shown by the first downward arrow in Figure 1(b). When the data stream distribution starts changing from F0 to F1, the windows reflect a mixture of the old and new distributions as seen in Wj. The comparison between W0 and Wj should yield a higher proportion of “Change” responses in the B resampled test pairs, i.e. a higher streaming power. This is reflected in the high value of the curve as shown by the second downward arrow in Figure 1(b). When
the power is consistently high, change is declared and the reference window is reset from W0 to Wk. Finally, since the windows Wk and Wi share the same distribution F1, the algorithm adjusts to the new distribution F1 and returns to a lower streaming power (third downward arrow).

3.1 Streaming Power
As discussed in the preceding section, streaming power measures the ability of an algorithm to discriminate between two distributions. Formally,

Definition 3. The streaming power of an algorithm A at time t is defined to be the probability that the algorithm will detect a change with respect to a reference distribution F0, given that there is a change in the data distribution of the stream,

S_A^{F0}(Ft) = P(IA(t) = 1 | Ft),  Ft ∈ F.    (6)
where IA(t) is the indicator function that is equal to 1 if the CDA detects a change and 0 otherwise. For convenience, we drop F0 from our notation for streaming power, and denote it as SA(Ft). From Equation 2, it is clear that IA(t) = 1 ⇔ T(X) ∈ C1, where T(X) and C1 are the decision function and critical region (with respect to F0) used by the algorithm to determine the binary response It. Therefore, streaming power can be thought of as a temporal version of power.
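As a sketch of how the streaming-power estimate of Equation 5 can be computed in practice, the following code resamples a window pair B times and records the proportion of “change” decisions. A plain two-sample KS test stands in for a generic CDA here; the actual algorithms studied in this paper ([11,13,5]) would be plugged in instead, and all constants are illustrative assumptions.

```python
# Sketch of Equation 5: bootstrap replicates of (W0, Wi) and the proportion
# of "change" responses as the streaming-power estimate.
import numpy as np
from scipy.stats import ks_2samp

def streaming_power(w0, wi, detect=None, B=200, alpha=0.05, rng=None):
    rng = rng or np.random.default_rng(0)
    if detect is None:
        detect = lambda a, b: ks_2samp(a, b).pvalue < alpha
    hits = 0
    for _ in range(B):
        a = rng.choice(w0, size=len(w0), replace=True)   # bootstrap replicate of W0
        b = rng.choice(wi, size=len(wi), replace=True)   # bootstrap replicate of Wi
        hits += int(detect(a, b))                        # I_j in Equation 5
    return hits / B                                      # \hat{p}_i

rng = np.random.default_rng(2)
w0 = rng.normal(0, 1, 500)                   # reference window from F0
delta = 0.4                                  # mixing proportion of Equation 4
mask = rng.random(500) < delta
wi = np.where(mask, rng.normal(1, 1, 500), rng.normal(0, 1, 500))  # delta-mixture
print(streaming_power(w0, wi))
```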
3.2 Robustness of CDAs
In order to study the robustness of a CDA, we define a function that is analogous to the influence function from Section 1.2.

Definition 4. The rate curve of an algorithm A as a function of δ (the mixing proportion parameter defined in Section 3) is the first derivative of the power curve, denoted by

μA(δ) = lim_{ε→0} [SA(δ + ε) − SA(δ)] / ε.    (7)

Note that the rate curve is analogous to the influence function from Section 1.2, and measures the rate of change of streaming power when the hypothesized distribution F0 has an infinitesimal contamination from a δ-mixture of F0 and Ft. A CDA should detect change with rapidly increasing power in some region [δ1, δ2] of the mixing proportion δ, and taper off to become constant. In order to be stable to outliers, δ1 > α, the significance level of the test, which represents the acceptable proportion of false positives. Moreover, δ2 should be considerably less than 1 so that the CDA detects the change with high probability before there is too much contamination. Beyond δ2, the power should be constant, and the rate curve should have value 0, analogous to the finite rejection point criterion from Section 1.2.
Fig. 3. (a) Power curves of Rank (3 variants) (lines with markers), KL (solid red) and Density (dashed blue) tests for the two parameter 1D-Gamma distribution. The X-axis represents the mixing proportion of f0 (Γ (0.5, 0.5)), the reference distribution, and f1 (Γ (0.5, 0.6)), the contaminating distribution. (b) Corresponding sensitivity curves. (c) Power curves of KL (solid red) and Density (dashed blue) tests where f0 = N 3 (0, 1) (the 3D standard Gaussian) is the reference distribution and f1 = (N 2 (0, 1), N (0.2, 1)), the contaminating distribution. (d) Corresponding sensitivity curves.
In addition, to measure how the streaming power of an algorithm increases in relation to its distance from the reference distribution (as measured by the mixing proportion δ), we define the sensitivity curve ηA(δ) to be ηA(δ) = μA(δ)/δ, and further define:

Definition 5. The sensitivity of an algorithm A is

η*_A = max_δ ηA(δ).    (8)
Sensitivity is akin to local-shift error from Section 1.2. In the following section, we explore and evaluate the three CDAs using these concepts.
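The rate and sensitivity curves of Definitions 4 and 5 can be approximated by finite differences from an empirical power curve sampled on a grid of δ values. The sketch below assumes power_of is any estimator of streaming power at a given mixing proportion (for instance the bootstrap estimator of Section 3); the grid and the synthetic sigmoid power curve in the example are our own choices.

```python
# Sketch: rate curve mu_A(delta) (Equation 7) and sensitivity eta*_A (Equation 8)
# by finite differences over an empirical power curve.
import numpy as np

def rate_and_sensitivity(power_of, deltas):
    power = np.array([power_of(d) for d in deltas])
    mu = np.gradient(power, deltas)          # mu_A(delta), Equation 7
    eta = mu / np.maximum(deltas, 1e-12)     # eta_A(delta) = mu_A(delta) / delta
    return mu, eta, float(eta.max())         # eta*_A, Equation 8

# example with a synthetic sigmoid-shaped power curve
deltas = np.linspace(0.01, 1.0, 50)
mu, eta, sensitivity = rate_and_sensitivity(
    lambda d: 1.0 / (1.0 + np.exp(-12 * (d - 0.4))), deltas)
print(sensitivity)
```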
4 Investigating the CDAs
In this section, we use experiments on real and simulated data to investigate the three CDAs, Rank, Density and KL. The rank ordering based algorithms of [11] are applicable mainly in the 1-D setting and hence included only in the 1D experiments. In the multi-dimensional setting, only Density [13] and KL [5]
algorithms are compared. We use the mixture model introduced in Section 3 to perform the study. We describe the results of applying the algorithms to real and "hybrid" data. The data sets are described in detail in Section 4.5. All three algorithms are run with the same α, window size and number of power replications. Hybrid streams are created by injecting the data segment with changes, and alternating the clean baseline segment (no change) with a changed segment, e.g. "Clean-Change1-Clean-Change2-Clean-Change3" and so on. The clean baseline segment is inserted between every change to ensure that a change is always with respect to the original baseline. Changes of a given type, e.g. a level shift, occur consecutively with increasing severity, and enable us to track the extent of change at which the algorithm can detect the given type of change. For example, an algorithm might detect a level shift only if it exceeds, say, 2% of the current mean.
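A hybrid stream of the kind described above can be generated as in the following sketch, which alternates a clean baseline with level-shifted segments of increasing severity. The segment lengths, the Gaussian baseline and the shift sizes are illustrative assumptions, not the parameters of the authors' data sets.

```python
# Sketch: hybrid stream = Clean-Change1-Clean-Change2-... with growing level shifts.
import numpy as np

def hybrid_stream(rng, seg_len=2000, shifts=(0.01, 0.02, 0.05, 0.1)):
    baseline = lambda: rng.normal(0.0, 1.0, seg_len)
    parts, labels = [baseline()], ["clean"]
    for s in shifts:                               # level shifts of increasing severity
        parts.append(rng.normal(s, 1.0, seg_len))  # changed segment
        labels.append("level+%.2f" % s)
        parts.append(baseline())                   # clean segment between changes
        labels.append("clean")
    return np.concatenate(parts), labels

stream, labels = hybrid_stream(np.random.default_rng(3))
```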
4.1 Experiments
We use univariate and multivariate distributions to investigate the behavior of the CDAs. One dimensional Gamma: Figures 3 (a) and (b) show the power and sensitivity curves for the Rank (KS, Phi, Xi), Density and KL algorithms in 1D for the two-parameter family of gamma distributions. F0 is Γ(0.5, 0.5), while F1 is Γ(0.6, 0.5). The X-axis shows δ, the mixing proportion of F0 and F1, as it ranges from 0 to 1. The Y-axis shows the streaming power and sensitivity of the algorithms, respectively, computed using the approach described in Section 3. The number of bootstrap samples is 1000, α = 0.01 and the number of test pair replications is 200. As δ increases, the power of all the algorithms increases. Rank and its variants have low power and do not show the ability to discriminate until δ exceeds 0.6. The rate curve never hits zero, so the finite rejection point criterion is not satisfied. This is true of Density and KL too, but to a much lesser extent. The KL and Density tests have higher sensitivity, 4.08 and 4.16 respectively, than the Rank algorithms (2.99, 2.87, 3.3), as seen from the maxima of the sensitivity curves, and are able to detect the change at a lower level of contamination. Multidimensional Gaussian: Figures 3 (c) and (d) show the power and sensitivity, respectively, of the Density and KL algorithms for a 3D family of standard Gaussian distributions. Rank and its variants are not applicable in higher dimensions. Here, F0 is N3(0, 1), while F1 is (N2(0, 1), N(0.2, 1)); note that the change is only in a single dimension. The Density algorithm shows a much higher sensitivity (20.36) than the KL test (2.46). This is surprising since Density and KL exhibit similar behavior on one dimensional data. Since the change in this example is restricted to one dimension, we expect their behavior to be similar. Therefore, we conducted experiments specifically to test the effect of dimensionality on the CDAs.
4.2 Effect of Dimensionality
In order to investigate the effect of dimensionality on change detection, we designed the following experiment. We created a controlled data set from the 5D IP Data. We confined the change to just one dimension. We ran the Density and KL algorithms on 1D, 2D, 3D, 4D and 5D versions, where in each case only the first dimension had changes. The results of our experiment are shown in Figure 4 (b). The figure has been modified for black and white printing; the color version of the paper is available at http://www.research.att.com/people/Krishnan_Shankar/ida2011_streamingpower_data.tgz. The top gray portion of the figure shows the raw attributes in the dataset, with changes present only in one dimension. The middle portion of the figure shows the power curves for the KL, Density and Rank tests, while the bottom part shows the change points for the various algorithms. Rank (and its variants) could be applied only in 1D. The blue dots, corresponding to change detection in 1D, are the real changes detected by the algorithm. KL detects fewer and fewer changes as the number of dimensions increases and the power degrades. This is to be expected, since the kdq-tree based multidimensional histograms accommodate the additional dimensions while losing accuracy in the marginal distribution of the first dimension that contains the change. However, the reverse is true for Density. The number of changes detected increases with dimension (black, cyan, brown) even though there are no changes in the additional dimensions, and conversely, even when the power remains high in 5D (magenta), no changes are declared. A similar plateau-like behavior is exhibited by Density in Figure 4 (a). This behavior is surprising and inconsistent since with such a high value of power, we would expect many more changes to be detected. One possible explanation is that Density is very sensitive to sampling variation and the resampling done for the power computation results in small perturbations in the data that are picked up by the Density algorithm as a change, even when there is no change.
4.3 Real World Applications
Figure 4 (a) illustrates the results of testing KL and Density on a real data stream. The individual attributes are shown in gray. Change points, as detected by a single application of the algorithm to the data stream are denoted by dots. Clustered changes that occur successively are shown in gray at a slightly lower level. The power curve is computed by resampling and bootstrapping as described in Section 3. Weather Phenomena: The data stream in panel Figure 4 (a) consists of a 6D weather data stream that is publicly available, described in Section 4.5. It is characterized by spikes as can be seen in the plot of the raw attributes shown in gray. The power curve of Density remains high almost all the time, sometimes even while no change is detected by the original algorithm as evidenced by the absence of a dot.
Fig. 4. (a) Weather Data-6D: Real data stream with bursty changes resulting in successive change points. Change points that seem to be related to the same big change are shown in gray. Density finds many changes but also exhibits the characteristic plateau shape of sustained high power while detecting no changes. KL finds frequent changes during periods of turbulence. (b) Power behavior of the three algorithms as the number of dimensions increases, where only the first dimension of the 5D data has change (hybrid data stream). Blue=1D, Brown=2D, Cyan=3D, Black=4D, Magenta=5D. As additional dimensions are added, no new changes should be detected, and power should either decrease or remain unchanged. Density detects additional changes as additional dimensions are added and its power increases even though there is no change in the additional dimensions. KL detects more or less the same changes and the power gets diluted as expected. Rank(Xi) is applicable only in 1-D, and detects most of the level and scale shifts but none of the mass shifts. This is true of the other Rank variants as well.
KL detects multiple changes in regions where the stream is undergoing a shift, and the change points are consistent with periods of high power. When the data stream is undergoing constant change, change detection algorithms generate continuous alarms. This behavior can be seen in Figure 4 (a), where changes are declared continuously during a turbulent phase of the data stream. In order to suppress multiple alarms related to the same root cause, we ignore change points that are within a window length of each other.

4.4 Findings
We describe below our findings from running extensive experiments, only a few of which are described in this paper due to space constraints. Rank offers a low Type I error but low power and is applicable only in one dimension. Density is not robust in higher dimensions, demonstrating a spuriously high value of power that corresponds presumably to sampling variation, and a high local-shift (sensitivity) error and gross-error sensitivity. KL is robust but its rejection point exceeds δ = 1 in some of our experiments, indicating that there is a low probability that it might miss egregious changes. The practitioner needs to decide which of these properties is most essential for the task at hand.

4.5 Data DNA: Building Blocks for Test Data Streams
A majority of the real and hybrid data segments that are used in this paper are available, along with data description, at http://www.research.att.com/people/Krishnan_Shankar/ida2011_streamingpower_data.tgz.
5 Conclusion
We have proposed a framework that uses the novel concept of streaming power in conjunction with robustness to systematically evaluate and compare the behavior of CDAs. Within this framework, we evaluated three change detection algorithms [11,13,5] by defining a rate curve and exploring its properties analogous to finite rejection point, gross-error sensitivity and local-shift error of the influence function in robust statistics. In addition, we have provided a mechanism for constructing data streams, along with a valuable repository of test streams.
References 1. Aggarwal, C.C.: A framework for diagnosing changes in evolving data streams. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 575–586 (2003) 2. Chakrabarti, S., Sarawagi, S., Dom, B.: Mining surprising patterns using temporal description length. In: Proceedings of 24rd International Conference on Very Large Databases, pp. 606–617 (1998)
3. Chawathe, S.S., Garcia-Molina, H.: Meaningful change detection in structured data. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, pp. 26–37 (1997) 4. Cox, D.R., Hinkley, D.V.: Theoretical Statistics. Wiley, New York (1974) 5. Dasu, T., Krishnan, S., Lin, D., Venkatasubramanian, S., Yi, K.: Change (Detection) you can believe in: Finding distributional shifts in data streams. In: Adams, N.M., Robardet, C., Siebes, A., Boulicaut, J.-F. (eds.) IDA 2009. LNCS, vol. 5772, pp. 21–34. Springer, Heidelberg (2009) 6. Efron, B., Tibshirani, R.J.: An Introduction to the Bootstrap. Chapman and Hall (1993) 7. Ganti, V., Gehrke, J., Ramakrishnan, R., Loh, W.-Y.: A framework for measuring differences in data characteristics, pp. 126–137 (1999) 8. Huber, P.J.: Robust Statistics. John Wiley, New York (1981) 9. Hulten, G., Spencer, L., Domingos, P.: Mining time-changing data streams. In: KDD, pp. 97–106 (2001) 10. Keogh, E., Lonardi, S., Chiu, B.Y.: Finding surprising patterns in a time series database in linear time and space. In: KDD, pp. 550–556 (2002) 11. Kifer, D., Ben-David, S., Gehrke, J.: Detecting changes in data streams. In: Proceedings of the 30th International Conference on Very Large Databases, pp. 180–191 (2004) 12. Kleinberg, J.: Bursty and hierarchical structure in streams. Data Mining and Knowledge Discovery 7(4), 373–397 (2003) 13. Song, X., Wu, M., Jermaine, C., Ranka, S.: Statistical change detection for multidimensional data. In: ACM SIGKDD 2007, pp. 667–676 (2007) 14. Zhu, Y., Shasha, D.: Efficient elastic burst detection in data streams. In: Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 336–345 (2003)
GaMuSo: Graph Base Music Recommendation in a Social Bookmarking Service Jeroen De Knijf1 , Anthony Liekens2 , and Bart Goethals1 1
Department of Mathematics and Computer Science, Antwerp University 2 VIB Department of Molecular Genetics, Antwerp University
Abstract. In this work we describe a recommendation system based upon user-generated descriptions (tags) of content. In particular, we describe an experimental system (GaMuSo) that consists of more than 140.000 user-defined tags for over 400.000 artists. From this data we constructed a bipartite graph, linking artists via tags to other artists. On the resulting graph we compute related artists for an initial artist of interest. In this work we describe and analyse our system and show that a straightforward recommendation approach leads to related concepts that are overly general, that is, concepts that are related to almost every other concept in the graph. Additionally, we describe a method to provide functional hypotheses for recommendations, giving the user insight into why concepts are related. GaMuSo is implemented as a webservice and is available at: music.biograph.be.
1 Introduction
Over the last decade, social bookmarking services have emerged as a valuable tool to collectively organize online content. Well known examples of such services are the popular photo-sharing site Flickr, del.icio.us, a webservice to share bookmarks of the WWW, CiteULike, which lets users store and tag scholarly articles, and last.fm, a webservice focused on describing bands and artists. The common feature that these diverse webservices share is that they provide users with the ability to tag resources. Tags are freely chosen keywords and provide a simple tool to organize, search and explore the content. From an intelligent data analysis point of view, these so-called social bookmarking resources are extremely precious for analysing and predicting complex real-life resources. For example, in case one wants to construct a video or audio recommender system, the major problem is obtaining a real-life dataset on which the recommendations can be based. Specifically, these data are often the property of commercial companies and, due to privacy regulations and strategic business interests, not freely available. For example, the well-known webservice Youtube stores user profiles containing videos that the user watched. Based upon this watching behavior, Youtube recommends videos of interest to the users [2]. Likewise, the recommendation systems of Amazon and iTunes are based on the purchasing behaviour of the customers on the respective webstore. This is in contrast with social bookmarking sites, where the user-defined tags are, in general, the property of the respective users. Moreover, the goal of
these sites is to provide users with tools to share and discover content of interest, thus ensuring that user-defined tags are freely accessible to their respective community. Despite the many benefits of collective tag based systems, collaborative tagging services also introduce serious challenges for analyzing these resources. That is, because users are free to choose tags, irrelevant, subjective, fraudulent and erroneous tags often occur in collective tag based systems [1]. For example, for the last.fm tagset, we found, among many others, the following tags for U2: I Like, britpop, american band, The Moonlight Experience. While the first tag is obviously irrelevant and the britpop tag might give rise to discussion, the 'american band' tag is unmistakably wrong. The last tag is most likely a fraudulent type of tag to boost the popularity of an unknown band (11 people listened to TME on last.fm). In this work we describe our system (GaMuSo) for discovering 'similar' concepts in a social collaborative tagging service and analyze it on the well known collaborative tagging service for artists and bands: last.fm. In particular, our analysis is concerned with the following question: given an artist/band, which artists/bands are most similar to the given one? Besides the recommendation, we also derive hypotheses about why the two concepts are related. As such, our system does not work as a black box, but provides the user with background information on the recommendation. In order to perform the recommendation, we transform the derived tags/artists into a directed bipartite weighted graph. On this graph artists are only linked with other artists via user-defined tags. The recommendation algorithm of GaMuSo is based upon random walks and random walks with restart. We argue theoretically and demonstrate in our experiments that the standard method in the data mining literature to derive related nodes in a graph (see for example [9]) is likely to derive overly general related nodes. We propose an algorithm to compensate for this flaw, effectively enforcing GaMuSo to derive more specific similar concepts. In the next section we describe the tagset used in more detail, and report on the data cleaning and graph construction from this dataset. In Section 3 we describe random walks, random walks with restart and our method to derive similar concepts. Moreover, in this section we describe the heuristic to derive functional hypotheses for the recommendation. In the following section, we extensively evaluate the different approaches and discuss the results. In Section 5 we describe related work. Finally, in Section 6 we draw conclusions and give directions for further research.
2 Graph Construction
From Last.fm we retrieved 443.816 names of artists and 127.516 tags describing these artists. For every artist, we have all the user-defined tags that are used to label this artist. Moreover, the tags for an artist are normalized, and given a weight relative to the most popular tag for the artist. These weights (between 1 and 100) correspond to the number of (distinct) users that assigned the tag to
the artist. In particular, the most popular tag for an artist is assigned the weight 100, and all other tags are weighted in accordance with their frequency relative to the most frequent tag. The same weighting holds for the tag-to-artist relation. Note, however, that the weight for the artist-to-tag relation is in general different from the weight for the same reversed relation. This is the case because in the former case weights are normalized per tag, while in the latter case weights are normalized per artist. The next step consists of data cleaning: tags or artists with the same label, but different in capital letters/lowercase letters, punctuation, spacing etc., are transformed into a uniform writing style. To do so, we removed all non-alphabetic and non-digit characters from the labels, transformed all capital characters into lowercase ones, replaced "&" by "and" and removed the definite article "the" from the beginning of the label. As a result, Beatles and The Beatles are the same, as well as post-modernism, postmodernism and post modernism. The resulting dataset consists of 109.345 tags and 407.036 artists. Finally, we map the cleaned data into a weighted directed graph. A graph G = {V, E, λ} is defined as a directed weighted graph, where V is a set of vertices, E ⊆ V × V a set of edges (ordered pairs of vertices) and λ a labeling function that assigns a weight from the interval [1, 100] to each edge, i.e. λ : E → [1, 100]. Besides the definition of a directed weighted graph, we also need the notion of a simple path in a directed graph. A simple path is a path containing no repeated vertices. More formally: a simple path in G is a sequence of nodes P = v1 v2 . . . vn such that for all 1 ≤ k < n : (vk, vk+1) ∈ E. Moreover, for all 1 ≤ j < k ≤ n it holds that vj ≠ vk. Every artist and every tag is mapped to a distinct node in the graph. The weighted directed edges in the graph correspond to the relations from tag-to-artist and from artist-to-tag. In our setting the weight of the edge is equal to the respective weight of the relation. Finally, as a last pre-processing step we removed all nodes that had fewer than two outlinks in the graph, effectively removing concepts that were badly connected. The resulting graph consists of 49.022 tags, describing 18.634 artists. In comparison with the original dataset, this is a huge reduction, especially in the number of artists. The main reason for this is that a lot of artists are simply not tagged.
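The label-cleaning step described above can be sketched as follows; the exact order of operations and the regular expression are our own assumptions, but the effect matches the examples given (e.g. The Beatles and Beatles map to the same key).

```python
# Sketch of label normalization: lowercase, replace "&" with "and", drop a
# leading definite article, and strip everything but letters and digits.
import re

def normalize(label):
    s = label.lower().replace("&", "and")
    if s.startswith("the "):
        s = s[4:]
    return re.sub(r"[^a-z0-9]", "", s)   # keep only letters and digits

for name in ("The Beatles", "Beatles", "post-modernism", "Post Modernism"):
    print(name, "->", normalize(name))
```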
3 Algorithm
Finding related nodes in a graph is, in the data mining literature, often solved by using random walks with restart [9,8]. In this section we argue that such an approach leads to related concepts that are overly general, and propose an adjustment to overcome this deficit. First, we describe random walks and random walks with restart on directed graphs. Then we propose our algorithm. Finally, we describe a heuristic to generate functional hypotheses for concepts that are assumed to be related.
3.1 Random Walks and Random Walk with Restarts
Informally, a random walk on a directed weighted graph G can be described as follows: consider a random particle currently at node v in G; at each iteration the particle moves to a neighbor, via an outgoing edge, with a probability that is proportional to its edge weight. The steady state probability of a node w, denoted as u(w), states the probability that the particle visits this node in the long term. Intuitively, the steady state probability of node w states the importance of w in the graph, i.e. u(w) is also known as the eigenvector centrality of w. In order for a random walk to converge to the steady state probability, its underlying transition graph must be ergodic. A standard method [6] of enforcing this property for random walks on directed graphs is to add a new set of complete outgoing transitions, with small transition probabilities, to all nodes, creating a complete (and thus an aperiodic and strongly connected) transition graph. Hence, instead of moving to a neighbor with a probability that is proportional to its edge weight, the particle also has a small probability of moving uniformly at random to any node in the graph. The steady state probability of a random walk can easily be computed by iteratively applying Equation 1; this method is also known as the power iteration method. In this equation MG equals the column normalized adjacency matrix of G, g the normalized restart vector where for each node of G its corresponding value in g is set to 1, and c the restart probability for the uniform restart vector (also known as the damping factor).

u^{k+1} = (1 − c) × MG × u^k + c × g    (1)
Given an initial set of nodes of interest R ⊆ V, in addition to a random walk, a random walk with restart has an additional probability d of jumping back to one of the nodes in R. The relevance score of node w with respect to the set of nodes R, uR(w), is then defined as the steady state probability of the random walk with restart to R. Note that, as in the previous case, a set of transition probabilities to all nodes in the graph is added. Hence, two sets of transition probabilities are added to the transition graph: one, to ensure convergence, with transition probability c/|V| to all the nodes of the graph, and one with transition probability d/|R| to all the nodes of the initial set of interest R. As in the previous case, the steady state probability of a random walk with restart can be computed by iteratively applying a slightly adjusted version of the power iteration method, shown in Equation 2:

uR^{k+1} = (1 − (c + d)) × MG × uR^k + (c × g) + (d × r)    (2)
With MG the column normalized adjacency matrix of G, c the restart probability to all nodes in the graph, g the restart vector to all nodes in the graph, d the restart probability to the interesting nodes in the graph (i.e. R) and r the restart vector for the nodes in R.
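A minimal sketch of the power iterations of Equations 1 and 2 follows, assuming a dense column-normalized adjacency matrix; the real tag/artist graph would require a sparse representation, and the iteration count is an arbitrary choice of ours.

```python
# Sketch of Equation 2: random walk with restart by power iteration.
import numpy as np

def rwr(M, R, c=0.05, d=0.25, iters=100):
    n = M.shape[0]
    g = np.full(n, 1.0 / n)                  # uniform restart vector
    r = np.zeros(n)
    r[list(R)] = 1.0 / len(R)                # restart vector for the nodes in R
    u = np.full(n, 1.0 / n)
    for _ in range(iters):
        u = (1 - c - d) * (M @ u) + c * g + d * r
    return u

# Plain random walk (Equation 1, eigenvector centrality): set d = 0
def stationary(M, c=0.05, iters=100):
    return rwr(M, R=[0], c=c, d=0.0, iters=iters)  # r is irrelevant when d = 0
```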
3.2 Finding Related Nodes
Most graph mining algorithms based upon random walks with restart (see for example [9,8]) solely use the steady state probability of the random walk with restart as the relevance score between nodes. However, consider the following example: given two nodes w and v, and a set of restart nodes R, suppose that uR(w) = 0.2 and uR(v) = 0.3. With this outcome, the most related node to the nodes of R is v. Now suppose that u(w) = 0.1 and u(v) = 0.6. Hence, the a priori relevance of node v is much higher than the relevance score of node v with respect to the set of nodes R. In fact the initial set R harms the importance of v, while node w becomes far more important due to the initial set R. But nevertheless, when only using the steady state probability of the random walk with restart, v is preferred above w. In GaMuSo we take the prior importance of a node into account to adjust the score of the random walk with restart. In particular, for an initial set R, the relevance score of a node v is determined by

uR(v) / u(v).    (3)
Intuitively, this adjustment lowers the probability of objects that are similar to most other objects, effectively enforcing the recommendation to be more specific. In the experimental setting we discuss the results obtained with and without the adjustment. Note that in our setting, the set of restart nodes always consists of one node. Moreover, we fixed the restart probability c to all nodes in the graph to 0.05 and the restart probability d for the set of nodes of interest, i.e. R, to 0.25. In general, different values for these probabilities result in minor changes in the recommendation, as long as d >> c.
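Building on the power-iteration sketch above, the adjusted ranking of Equation 3 divides the RWR score by the node's prior importance; excluding the seed nodes from the ranking is our own convention rather than something stated in the paper.

```python
# Sketch of Equation 3: rank nodes by u_R(v) / u(v), reusing rwr() from above.
import numpy as np

def gamuso_scores(M, R, c=0.05, d=0.25):
    u_R = rwr(M, R, c=c, d=d)              # relevance w.r.t. the seed set R
    u = rwr(M, R, c=c, d=0.0)              # prior importance (plain random walk)
    scores = u_R / np.maximum(u, 1e-15)    # Equation 3
    scores[list(R)] = 0.0                  # do not recommend the seeds themselves
    return np.argsort(-scores)             # nodes ranked by adjusted score
```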
3.3 Automatic Hypotheses Generation
The last part of GaMuSo consists of a method for automatic hypothesis generation. That is, between a source and a target node (e.g. the artist of interest and a related artist) a set of most likely simple paths is generated. Most likely in the sense that, from all simple paths between source and target, the aim of the greedy heuristic is to select those that have the highest probability of being traversed by the random walker on its way from source to target. Assume a source node s and a target node t in G. Let P = v1 v2 . . . vn be a simple path between source and target, with v1 = s and vn = t. Then, the probability Π(P_{s,t}) of a random walker traversing this path, provided it starts in s and ends in t, equals

Π(P_{s,t}) = ∏_{i=1}^{n−1} p(vi, vi+1),    (4)

where p(vi, vi+1) is the probability of moving from vi to vi+1, that is, the transition probability from vi to vi+1.
Likewise, for a path P starting in some intermediate node vi ≠ s and ending in t, we estimate the probability that the random walk from source to target traversed this path with

us(vi) × Π(P_{vi,t}),    (5)

where us(vi) is the steady state probability of the random walk with restart to the source node (Equation 2). In order to find the k most probable paths from source to target, we start from the target node and use the estimate of Equation 5 to find the most likely paths of length two. Consequently, we recursively extend these paths until we have found the k most likely paths connecting source to target. At each iteration we only keep the K, with K >> k, most likely partial paths (in accordance with the estimate derived in Equation 5), effectively reducing the search space to a workable amount. The pseudo code is given in Algorithm 1. In our experiments we set the number of paths to ten (i.e. k = 10) and pruned the search space when there were more than 1000 paths under consideration (K = 1000).
Algorithm 1. Path Generation
Input: graph G = {V, E, λ}, source node s, target node t, number of paths k
Output: list of k most likely paths between s and t
 1: S ← {{t}}
 2: repeat
 3:   S′ ← {}
 4:   for all paths P = {v1 . . . vn, t} ∈ S do
 5:     if v1 == s then
 6:       S′ ← S′ ∪ {P}
 7:     else
 8:       for all n : (n, v1) ∈ E and n ∉ {v1 . . . vn, t} do
 9:         S′ ← S′ ∪ {{n, v1 . . . vn, t}}
10:       end for
11:     end if
12:   end for
13:   S ← prune S′ to the K most likely paths
14: until there are at least k paths in S from s to t
15: return top k paths in S
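A runnable Python sketch of Algorithm 1 follows. The data structures (a dict mapping each node to its incoming neighbours with transition probabilities, and a dict of steady-state probabilities u_s) are our own choices; partial paths are scored with the estimate of Equation 5 and pruned to the K best, and a maximum path length is added as a safety bound that the pseudo code does not need.

```python
# Sketch: beam search over backward extensions, as in Algorithm 1.
def k_most_likely_paths(in_edges, u_s, s, t, k=10, K=1000, max_len=20):
    paths = [((t,), 1.0)]                          # partial paths ending in t
    for _ in range(max_len):                       # safety bound on path length
        done = [(p, w) for p, w in paths if p[0] == s]
        if len(done) >= k or not paths:
            break
        nxt = []
        for p, w in paths:
            if p[0] == s:
                nxt.append((p, w))                 # keep completed paths
                continue
            for n, prob in in_edges.get(p[0], {}).items():
                if n not in p:                     # simple paths only
                    nxt.append(((n,) + p, w * prob))
        # prune to the K most likely partial paths, scored by u_s(start) * Pi(path)
        nxt.sort(key=lambda pw: -(u_s.get(pw[0][0], 0.0) * pw[1]))
        paths = nxt[:K]
    done = [(p, w) for p, w in paths if p[0] == s]
    return sorted(done, key=lambda pw: -pw[1])[:k]
```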
4 Experiments
In this section we describe the experiments with our artist recommendation system on the last.fm dataset. Although determining the most related artists for a given artist is rather subjective, we evaluate the experiments by commenting on the results and by investigating the generated hypotheses. Moreover, we compare GaMuSo with a purely random walk with restart based approach.
Table 1. The 10 most important bands in the last.fm network

 1 Radiohead        6 Arcade Fire
 2 The Beatles      7 David Bowie
 3 Muse             8 Coldplay
 4 The Killers      9 The Red Hot Chili Peppers
 5 Björk           10 Pink Floyd
The first experiment consists of finding the most important artists in the graph, that is, the artists with the largest eigenvector centrality. This measure can be computed using Equation 1. The ten most important artists in the last.fm network are displayed in Table 1. Remarkable is that most artists can best be described by the genre “alternative rock”, and that today's most popular artists (according to the billboard charts, such as Justin Bieber, Lady Gaga, etc.) are missing. Moreover, completely different music styles like classical music or jazz are also absent in the top ten. Our next experiment consists of deriving the most related artists for The Stooges, a well known American punk band from the late sixties and early seventies. In this experiment we compare the results of the recommendation by GaMuSo with the results from a purely random walk with restart based recommendation, i.e. using only Equation 2. The results are shown in Table 2. The first remarkable observation is that the results for the RWR approach consist mainly of well known bands, while the results of our recommendation service consist, except for connoisseurs of this genre, of unknown bands. A related observation is that five out of ten recommendations from the RWR approach belong to the top ten most important nodes in the last.fm network. This observation supports our claim that a pure RWR based approach for finding related concepts can result in overly general related results. Another sanity check to judge the predictions can be achieved by examining the most likely paths between The Stooges and the most related prediction by GaMuSo: MC5. These ten most likely paths are shown in Figure 1.

Table 2. The 10 most related artists for The Stooges, according to a pure RWR based approach (left) and according to GaMuSo (right)

 The Beatles           MC5
 Radiohead             Modern Lovers
 Green Day             Iggy Pop & James Williamson
 David Bowie           Richard Hell and the Voidoids
 The Rolling Stones    Mink DeVille
 Muse                  Flamin' Groovies
 The Strokes           The Monks
 Kings of Leon         ? and the Mysterians
 The White Stripes     New York Dolls
 The Killers           The Sonics
Fig. 1. The Stooges connected to MC5. Rectangles represent artists while diamonds represent user-defined tags. Full, dashed and dotted lines represent relations with decreasing probabilities.
From these paths one can derive that the most influential tags for the prediction are proto-punk and garage rock, which makes perfect sense for The Stooges. Other influential tags are garage and detroit rock, while pre-punk is the least influential tag. Also some other top predicted bands are influential for the relation between the two concepts: The Monks and The Sonics, which both have common connections to the tags garage, proto-punk and garage rock. Our following experiment is in the classical music genre; we retrieved the ten most related artists for the classical composer Frédéric Chopin. The results are shown in Table 3, where the left column contains the results for the RWR approach and the right column contains the results of GaMuSo.

Table 3. The 10 most related artists for Frédéric Chopin, according to a pure RWR based approach (left) and according to GaMuSo (right)

 Ludwig van Beethoven       Felix Mendelssohn
 Ludovico Einaudi           Gabriel Fauré
 Philip Glass               Robert Schumann
 Radiohead                  Sir Edward Elgar
 Erik Satie                 Franz Schubert
 Pyotr Ilyich Tchaikovsky   Richard Wagner
 Claude Debussy             Ron Pope
 Yann Tiersen               Edvard Grieg
 Howard Shore               Giuseppe Verdi
 Antonín Dvořák             Johannes Brahms
Fig. 2. Frédéric Chopin connected to Felix Mendelssohn. Rectangles represent artists while diamonds represent user-defined tags. Full, dashed and dotted lines represent relations with decreasing probabilities.
The first noteworthy observation is that the most similar results obtained by GaMuSo are all but one known as composers of the Romantic music era. Note that Chopin is considered one of the most influential composers of Romantic music. The only composer that is not from the Romantic era is Ron Pope, who is a modern composer of classical piano music. Further noteworthy is that two of the related composers (Schumann and Brahms) were, according to Wikipedia [11], greatly influenced by Chopin. Also, when examining the functional hypotheses between Chopin and Mendelssohn, it is clear that the tag romantic plays an important role in the random walk from Chopin to Mendelssohn. Besides the romantic tag, the classic tag and its many variations are also of great influence. Further interesting is the relatively high impact of the tags composer and classical piano. Further investigation into the predictions posed by the RWR approach (Table 3) reveals that only three out of nine suggested composers are from the Romantic era. Three out of nine are composers from the 20th century (Philip Glass, Yann Tiersen, Howard Shore), who have, in comparison with earlier classical composers such as Beethoven and Mozart, a prior importance in the last.fm graph that is almost ten times higher. Apparently, these 20th century composers are far more popular with the last.fm audience. Another remarkable recommendation is the high similarity between Radiohead and Chopin. Radiohead is the most important artist in the last.fm network (see Table 1), and is a well known alternative rock/indie band. Examining the paths between Chopin
Fig. 3. Duran Duran connected to Visage. Rectangles represent artists while diamonds represent user-defined tags. Full, dashed and dotted lines represent relations with decreasing probabilities.
and Radiohead reveals that practically all paths go over the tag piano, which is an adequate tag for Chopin but seems less relevant for Radiohead. Finally, there are some good arguments to expect Beethoven to be one of the most related composers to Chopin. That is, Beethoven is considered the crucial figure in the transition of classical music into the Romantic era. However, Beethoven does not occur in the top ten recommendations of GaMuSo. As a last experiment we derived the most likely paths from a well known eighties band to one of its top predictions. The connection between Duran Duran and Visage is shown in Figure 3. Also in this case, it seems a sensible prediction. The most influential tags, new wave, new romantic and synth pop, form a good description of both Duran Duran and Visage. Moreover, the paths over the different 80s bands feed the hunch that this is a good prediction, because most of these bands are somewhat related to both Visage and Duran Duran.
5 Related Work
Most existing work on mining tag based social bookmarking services formulates its research question and applies its algorithm on the social bookmarking service for images: Flickr. Although there are many similarities between the
different social bookmarking services, a research question such as finding the most similar images to an image of, for example, the Eiffel tower seems to be pointless. That is, an image either contains the Eiffel tower or it does not. As such, different degrees of similarity, as in the case of social bookmarking sites for artists, scholarly papers and webpages, seem to apply to a far lesser extent to images. A popular research question is the one of tag recommendation: given a set of tags for a certain concept, which tags are further appropriate for it? Sigurbjörnsson et al. [7] propose a co-occurrence based method for tag recommendation. Krestel et al. [5] propose a tag recommendation system based upon a Latent Dirichlet Allocation model. Another commonly posed research question is that of finding groups, often as complete as possible, of similar concepts or tags. The work by Van Leeuwen et al. [10] tries to accomplish this task by mining frequent tagsets and selecting the most promising ones according to the MDL principle. Related work to our hypotheses generation algorithm is the work by Faloutsos et al. [3] and the work by Hintsanen et al. [4]. The purpose of both methods is to find the best connecting subgraph between a source and a target node. The main difference is that the aim of our method is to select the paths that have the highest influence on the random walk from source to target.
6 Discussion and Conclusion
In this paper we described the GaMuSo system, a recommendation system based upon tagged content created in social bookmarking services. Despite the many disadvantages of social bookmarking services, such content has great potential usefulness for the intelligent data analysis community. This is mainly due to the wide public availability of user created tags, and the different types of, often complex, content that is being tagged. One of the foremost disadvantages concerned with analysing tagsets of social bookmarking services is the inherent linguistic problems that come with user-generated tagsets. Our experiments revealed that, in spite of the data cleaning, many syntactical variations of the same tag exist. For example, the tags classic, classical, klassik and classical music all occurred in one experiment. It is our aspiration to develop advanced data cleaning techniques to tackle this problem in the GaMuSo system. One of the main features of GaMuSo consists of a recommendation engine that forces the related concepts to be more specific than the recommendation based upon the commonly used RWR approach. In the experiments, we showed that this adjustment often leads to considerably better results. However, as noticed in the experiment of finding related composers to Chopin, one can think of particular settings in which more general concepts are preferred over specific ones. For example, in a recommendation system for scholarly papers, it makes sense to take the experience of the user into account. That is, for beginning researchers, such as a Master's student or a starting PhD student, more general recommendations would be more appropriate. On the other hand, very specific recommendations
for experienced researchers are also desirable. But also when considering music recommendation, these different settings of a recommendation system make sense. For example, for people who are exploring a new music genre, general recommendations seem more useful than specific ones. This is an open issue in the GaMuSo system, for which further research is needed in order to take these different scenarios into account. Another salient property of GaMuSo is its ability to provide functional hypotheses about why two concepts are related. These hypotheses have proven to be useful in order to investigate why Chopin and Radiohead were related, but they were also valuable in other recommendations. In the experiments we focused on artist recommendation; however, in principle the GaMuSo system is also suited as a recommendation engine for different types of tagged content. An important requirement is that different degrees of similarity between the content exist. As previously argued, this is a suitable setting for social bookmarking sites of artists, scholarly papers and webpages, and seems to apply to a far lesser extent when the content consists of images.
References 1. Ames, M., Naaman, M.: Why we tag: motivations for annotation in mobile and online media. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI 2007, pp. 971–980 (2007) 2. Davidson, J., Liebald, B., Liu, J., Nandy, P., Van Vleet, T., Gargi, U., Gupta, S., He, Y., Lambert, M., Livingston, B., Sampath, D.: The youtube video recommendation system. In: Proceedings of the Fourth ACM Conference on Recommender Systems, RecSys 2010, pp. 293–296 (2010) 3. Faloutsos, C., McCurley, K., Tomkins, A.: Fast discovery of connection subgraphs. In: ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2004 (2004) 4. Hintsanen, P., Toivonen, H.: Finding reliable subgraphs from large probabilistic graphs. Data Min. Knowl. Discov. 17, 3–23 (2008) 5. Krestel, R., Fankhauser, P., Nejdl, W.: Latent dirichlet allocation for tag recommendation. In: ACM Conference on Recommender Systems, RecSys 2009 (2009) 6. Page, L., Brin, S., Motwani, R., Winograd, T.: The pagerank citation ranking: Bringing order to the web. Technical report, Stanford Digital Library Technologies Project (1998) 7. Sigurbj¨ ornsson, B., van Zwol, R.: Flickr tag recommendation based on collective knowledge. In: ACM Conference on World Wide Web, WWW 2008 (2008) 8. Sun, J., Qu, H., Chakrabarti, D., Faloutsos, C.: Neighborhood formation and anomaly detection in bipartite graphs. In: IEE International Conference on Data Mining, pp. 418–425 (2005) 9. Tong, H., Faloutsos, C., Pan, J.: Random walk with restart: fast solutions and applications. Knowl. Inf. Syst. 14(3), 327–346 (2008) 10. van Leeuwen, M., Bonchi, F., Sigurbj¨ ornsson, B., Siebes, A.: Compressing tags to find interesting media groups. In: ACM Conference on Information and Knowledge Management, CIKM 2009 (2009) 11. Wikipedia. Fr´ed´eric Chopin — Wikipedia, the free encyclopedia (2011) (online; accessed July 23, 2011)
Resampling-Based Change Point Estimation Jelena Fiosina and Maksims Fiosins Clausthal University of Technology, Institute of Informatics, Julius-Albert Str. 4, D-38678, Clausthal-Zellerfeld, Germany {Jelena.Fiosina,Maksims.Fiosins}@gmail.com
Abstract. The change point detection problem is an important task in data mining applications. Standard statistical procedures for change point detection, based on maximum likelihood estimators, are complex and require building parametric models of the data. Instead, the methods of computational statistics replace complex statistical procedures with a large amount of computation, so these methods are becoming more popular in practical applications. The paper deals with two resampling tests for change point detection. For the well-known bootstrap-based CUSUM test we derive formulas to estimate the efficiency of this procedure by taking the expectation and variance of the estimator into account. We also propose another simple pairwise resampling test and analyze its properties and efficiency. We illustrate our approach with a numerical example considering the problem of decision making of vehicles in city traffic. Keywords: Change point, resampling, estimation, expectation, variance, CUSUM test.
1 Introduction
Change point (CP) analysis is an important part of data mining, whose purpose is to determine if and when a change in a data set has occurred. Online detection of CPs is useful in modeling and prediction of data sequences in application areas such as finance, biometrics [9], robotics and traffic control [4]. CP analysis can be used: 1) to determine whether changes in the process control led to changes in an output, 2) to solve a class of problems such as control, forecasting etc., and 3) for trend change analysis [9]. The traditional statistical approach to the problem of CP detection is maximum likelihood estimation (MLE). In this approach, a model of the data is constructed, the likelihood function for the CP is written, and the estimator of the CP is obtained by optimization of the likelihood function. This approach requires knowledge of the exact data model and its parameters as well as complex analytical or numerical manipulations with the likelihood function [6].
We would like to thank the Lower Saxony Technical University project "Planning and Decision Making for Autonomous Actors in Traffic" and the European Commission FP7 Marie Curie IEF Career Development Grant for support.
In the case of small samples this approach does not allow us to choose the probability distributions correctly and to estimate their parameters properly. Alternatively, methods of computational statistics (CST) [10] are widely used, where complex statistical procedures are replaced with a large amount of computation. One technique for detecting if and when a CP (shift) has occurred is the cumulative sum chart (CUSUM chart). The form of a CUSUM chart makes it possible to see visually whether there is a CP. A confidence level may be assigned to each detected change. It can be constructed using the bootstrapping approach [5], which is one of the methods of computational statistics [11]. The paper first deals with the bootstrap-based CUSUM CP test, slightly modified and described in terms of the resampling approach [1], [2], [3], [7], which allows a more accurate analysis by estimating its theoretical properties. We derive analytical formulas to estimate the efficiency of this technique, taking the expectation and variance as efficiency criteria. Second, we propose another simple resampling test, based on pairwise comparisons of randomly selected data, and estimate its efficiency as well. We illustrate our approach with numerical examples considering the problem of decision making of vehicles in city traffic [8]. The paper is organized as follows. In Section 2, we formulate the CP problem formally and describe the standard CUSUM approach. In Section 3, we present the efficiency analysis of the CUSUM-based approach in resampling terms. Section 4 proposes an alternative simple pairwise resampling test and analyzes its efficiency. Section 5 demonstrates a case study with numerical examples in traffic applications. Section 6 contains final remarks and concludes the paper.
2 Problem Formulation
Let us formulate the CP detection problem. Let X = {X_1, X_2, ..., X_n} be a sequence of random variables. Let us divide the sample X as X = {X_B, X_A}. We say that there is a CP at a position k if the variables X_B = {X_1, X_2, ..., X_k} have a distribution F_B(x, Θ_B), but the variables X_A = {X_{k+1}, X_{k+2}, ..., X_n} have a distribution F_A(x, Θ_A), Θ_B ≠ Θ_A. The aim of CP analysis is to estimate the value of k (clearly, the case k = n corresponds to the absence of a CP). We are interested in the case when the distributions F_B(·) and F_A(·) differ in their mean values. CP analysis using CUSUM charts in combination with the bootstrap approach [9] works as follows: first, a CUSUM chart is constructed, which presents the difference between the sample data and their mean. If there is no CP in the mean of the sample data, the CUSUM chart will be relatively flat. In the case of CP existence, there will be an obvious minimum or maximum in the CUSUM chart. The cumulative sum S_i at each data point i is calculated as

S_i = \sum_{j=1}^{i} (X_j - \bar{X}), \quad i = 1, 2, ..., n,

where X_j is the current value and \bar{X} is the sample mean. A CUSUM chart starts at zero (S_0 = 0) and always ends at zero (S_n = 0). Increasing (decreasing) of the CUSUM chart means that the data X_j are permanently greater (smaller) than the sample mean. A change in the direction of the CUSUM chart suggests a CP in the mean.
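For concreteness, the following minimal sketch (not taken from the paper) computes such a CUSUM chart for a one-dimensional sample; the sample values and the shift location are made up for illustration.

```python
import numpy as np

def cusum_chart(x):
    """Return S_0, S_1, ..., S_n for the sample x (S_0 = S_n = 0)."""
    x = np.asarray(x, dtype=float)
    return np.concatenate(([0.0], np.cumsum(x - x.mean())))

# Toy example: a shift in the mean after the 10th observation produces a
# pronounced extremum in the chart near the change point.
rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(0.0, 1.0, 10), rng.normal(3.0, 1.0, 10)])
S = cusum_chart(sample)
print(int(np.argmin(S)), float(S.min()))
```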
Fig. 1. Sample data (left) and sample CUSUM chart (right)
Figure 1 presents an example of an initial sample (left) and the corresponding CUSUMs (right); the bold line represents the CUSUMs calculated on the initial sample, the dotted lines the CUSUMs on bootstrapped data. The initial CUSUM chart clearly detects a change point at k = 10. For each change it is possible to calculate a confidence level using bootstrapping of the initial data. For this purpose, the initial data are randomly permuted (bootstrapped). N bootstraps are produced from the sample data set. For each bootstrapped data set we construct CUSUMs S^*(r) and estimate their range, so for the r-th bootstrap iteration ΔS^*(r) = max_i{S_i^*(r)} − min_i{S_i^*(r)}, i = 1, 2, ..., n. The final step in determining the confidence level is to calculate the percentage of times that the range of the original CUSUM data, ΔS = max_i{S_i} − min_i{S_i}, exceeds the range of the bootstrapped CUSUM data ΔS^*(r), r = 1, 2, ..., N. For this purpose we build an empirical distribution function for the bootstrap ranges ΔS^*(r), r = 1, 2, ..., N, as

\hat{F}_{\Delta S^*}(x) = \frac{\#\{\Delta S^*(r) \le x\}}{N} = \frac{1}{N}\sum_{r=1}^{N} 1_{\{\Delta S^*(r) \le x\}}.   (1)
Let us consider a hypothesis H^0 about CP absence in the data against the alternative H^1 that there is a CP. It is appropriate to set a predetermined confidence level γ beyond which a change is considered significant. Typically, γ = 0.95 or γ = 0.99 is selected. Then, using the cdf \hat{F}_{\Delta S^*}(x) from (1), we construct a bootstrap approximation of a confidence interval for ΔS: [\hat{F}_{\Delta S^*}^{-1}(\frac{1-\gamma}{2});\ \hat{F}_{\Delta S^*}^{-1}(\frac{1+\gamma}{2})], where \hat{F}_{\Delta S^*}^{-1}(\gamma) is the quantile of the distribution \hat{F}_{\Delta S^*} at level γ. If the interval does not cover the value ΔS, then we can conclude that the initial and bootstrapped data differ significantly, and we reject H^0 (which means CP existence).
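The bootstrap procedure just described can be summarized in a few lines; the sketch below is an assumed implementation of it (function names, the number of bootstraps and the seed are placeholders, not part of the original paper).

```python
import numpy as np

def cusum_range(x):
    s = np.cumsum(x - x.mean())
    return s.max() - s.min()

def bootstrap_cusum_test(x, n_boot=1000, gamma=0.95, seed=0):
    rng = np.random.default_rng(seed)
    delta_s = cusum_range(x)
    boot_ranges = np.array([cusum_range(rng.permutation(x)) for _ in range(n_boot)])
    # empirical confidence interval for the bootstrapped ranges, cf. Eq. (1)
    lo, hi = np.quantile(boot_ranges, [(1 - gamma) / 2, (1 + gamma) / 2])
    reject_h0 = not (lo <= delta_s <= hi)          # CP detected if outside
    confidence = float(np.mean(boot_ranges < delta_s))  # share of bootstrap ranges exceeded
    return reject_h0, confidence
```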
3 Statistical Properties of CUSUM Test
The interpretation of the CUSUM test as a resampling test [3] allows us to derive some of its properties and to estimate its efficiency on the basis of the expectation and variance of the estimator. Let us consider a test for a CP at a point k under H^0. Here we do not deal with the ranges of the CUSUMs, but with the values themselves. We produce N iterations of the resampling procedure. At the r-th iteration, we extract, without replacement, k elements from the sample X, forming the resample {X_1^{*r}, X_2^{*r}, ..., X_k^{*r}}, and construct the CUSUM estimator for the point k:

S_k^*(r) = \sum_{i=1}^{k} (X_i^{*r} - \bar{X}) = \sum_{i=1}^{k} X_i^{*r} - k\bar{X},   (2)

where \bar{X} is the average over the sample X. After N such realizations we obtain a sequence S_k^*(1), S_k^*(2), ..., S_k^*(N). Now we calculate the resampling estimator F_k^*(x) of the distribution function of the bootstrapped CUSUMs: F_k^*(x) = \frac{1}{N}\sum_{r=1}^{N} 1_{\{S_k^*(r) \le x\}}. We are interested in the expectation and variance of the estimator F_k^*(x). The expectation of F_k^*(x) can be expressed as

E[F_k^*(x)] = E\Big[\frac{1}{N}\sum_{r=1}^{N} 1_{\{S_k^*(r) \le x\}}\Big] = P\{S_k^*(r) \le x\} = P\Big\{\sum_{i=1}^{k} X_i^{*r} - k\bar{X} \le x\Big\} = P\Big\{\sum_{i=1}^{k} X_i^{*r} \le k\bar{X} + x\Big\} = P\Big\{\sum_{i=1}^{k} X_i^{*r} \le x'\Big\} = P\{BA_k^*(r) \le x'\},   (3)

where x' = k\bar{X} + x.
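Before the analytical treatment of this expectation, the resampling estimator itself can be illustrated by a small Monte Carlo sketch (my own illustration, not the authors' code), which draws k elements without replacement and evaluates S_k^*(r) as in (2).

```python
import numpy as np

def resampling_cdf_estimator(x, k, n_iter=2000, seed=0):
    """Monte Carlo version of F_k^*(x): draw k elements without replacement,
    evaluate S_k^*(r) from Eq. (2), and return the empirical cdf of these values."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    x_bar = x.mean()
    s_k = np.array([rng.choice(x, size=k, replace=False).sum() - k * x_bar
                    for _ in range(n_iter)])
    return lambda t: float(np.mean(s_k <= t))

# usage: F = resampling_cdf_estimator(sample, k=10); F(0.0)
```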
In order to find the distribution of the sum BA_k^*(r), it is important to note that the resample {X_1^{*r}, X_2^{*r}, ..., X_k^{*r}} contains elements from the samples X_B and X_A with possibly different distributions F_B(x) and F_A(x). So let us split the sum BA_k^*(r) into two sums: the sum B_{y_r}^*(r) contains the elements selected from X_B and the sum A_{k-y_r}^*(r) contains the elements selected from X_A: BA_k^*(r) = B_{y_r}^*(r) + A_{k-y_r}^*(r), where y_r is the number of elements extracted from X_B at the r-th step. Now we can write a formula for the distribution of BA_k^*(r):

F_{BA}^{k}(x) = P\{BA_k^*(r) \le x\} = P\Big\{\sum_{i=1}^{k} X_i^{*r} \le x\Big\} = \sum_{y_r=0}^{k} q(y_r) \cdot P\{B_{y_r}^*(r) + A_{k-y_r}^*(r) \le x\} = \sum_{y_r=0}^{k} q(y_r) \cdot \int_{-\infty}^{\infty} F_B^{(y_r)}(x-u)\, dF_A^{(k-y_r)}(u),   (4)
where F (c) (x) is the c-th fold convolution of cdf F (x) with itself, F (0) (x) ≡ 1.
Note that the probability q(y_r) that the sum BA_k^*(r) contains a fixed number y_r of elements from X_B, with the remaining elements from X_A, is

q(y_r) = \binom{k}{y_r}\binom{n-k}{k-y_r} \Big/ \binom{n}{k}.   (5)

Taking into account the previous discussion (4)-(5), formula (3) can now be rewritten as E[F_k^*(x)] = P\{S_k^*(r) \le x\} = F_{BA}^{k}(x'). Now we derive an expression for the variance of the estimator (2):

Var[F_k^*(x)] = Var\Big[\frac{1}{N}\sum_{r=1}^{N} 1_{\{S_k^*(r)\le x\}}\Big] = \frac{1}{N} Var\big[1_{\{S_k^*(r)\le x\}}\big] + \frac{N-1}{N} Cov\big[1_{\{S_k^*(r)\le x\}},\ 1_{\{S_k^*(p)\le x\}}\big], \quad r \ne p.   (6)

The variance Var[1_{\{S_k^*(r)\le x\}}] does not depend on the resampling procedure:

Var\big[1_{\{S_k^*(r)\le x\}}\big] = E\big[(1_{\{S_k^*(r)\le x\}})^2\big] - \big(E\big[1_{\{S_k^*(r)\le x\}}\big]\big)^2 = F_{BA}^{k}(x') - (F_{BA}^{k}(x'))^2.   (7)

The term Cov[1_{\{S_k^*(r)\le x\}}, 1_{\{S_k^*(p)\le x\}}] depends on the resampling procedure:

Cov\big[1_{\{S_k^*(r)\le x\}},\ 1_{\{S_k^*(p)\le x\}}\big] = E\big[1_{\{S_k^*(r)\le x\}} \cdot 1_{\{S_k^*(p)\le x\}}\big] - E\big[1_{\{S_k^*(r)\le x\}}\big] E\big[1_{\{S_k^*(p)\le x\}}\big],   (8)

which can be expressed using the mixed moment μ_{11}:

\mu_{11} = E\big[1_{\{S_k^*(r)\le x\}} \cdot 1_{\{S_k^*(p)\le x\}}\big] = P\{BA_k^*(r) \le x',\ BA_k^*(p) \le x'\}.   (9)

For a fixed value of y = (y_r, y_p) we have

\mu_{11}(y) = P\{B_{y_r}^*(r) + A_{k-y_r}^*(r) \le x',\ B_{y_p}^*(p) + A_{k-y_p}^*(p) \le x'\}.   (10)
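Equation (5) is an ordinary hypergeometric probability; the following small sketch (illustrative only, with made-up sample sizes) evaluates it with integer binomial coefficients and checks that the probabilities sum to one.

```python
from math import comb

def q(y_r, n, k):
    """Eq. (5): probability that y_r of the k resampled elements come from X_B."""
    return comb(k, y_r) * comb(n - k, k - y_r) / comb(n, k)

n, k = 20, 8
assert abs(sum(q(y, n, k) for y in range(k + 1)) - 1.0) < 1e-12
```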
For the calculation of μ_{11}(y) we use the notion of an α-pair ([1], [2], [7]). Let α = (α_B, α_A), where α_B is the number of common elements in B_{y_r}^*(r) and B_{y_p}^*(p), and α_A is the number of common elements in A_{k-y_r}^*(r) and A_{k-y_p}^*(p). Denote by W_{y_r}^B(r) ⊂ X_B the set of elements which produce the sum B_{y_r}^*(r), by W_{k-y_r}^A(r) ⊂ X_A the set of elements which produce the sum A_{k-y_r}^*(r) given y_r, and let W_{k,y_r}(r) = {W_{y_r}^B, W_{k-y_r}^A}. Let M_k^B(y) = {max(0, y_r + y_p − k), ..., min(y_r, y_p)}, M_k^A(y) = {max(0, 3k − n − y_r − y_p), ..., min(k − y_r, k − y_p)}, and M_k(y) = M_k^B(y) × M_k^A(y). We say that W_{k,y_r}(r) and W_{k,y_p}(p) produce an α-pair, α ∈ M_k(y), if |W_{y_r}^B(r) ∩ W_{y_p}^B(p)| = α_B and |W_{k-y_r}^A(r) ∩ W_{k-y_p}^A(p)| = α_A. Let A_{r,p}(α, y) denote the event "W_{k,y_r}(r) and W_{k,y_p}(p) produce an α-pair" and P_{r,p}(α, y) = P{A_{r,p}(α, y)} its probability. All realizations r = 1, ..., N are statistically equivalent, so we can omit the lower indices r, p and write P(α, y).
The probability P(α, y) to obtain an α-pair given y, α ∈ M_k(y), is

P(α, y) = \binom{y_r}{\alpha_B}\binom{k - y_r}{y_p - \alpha_B} \Big/ \binom{k}{y_p} \cdot \binom{k - y_r}{\alpha_A}\binom{n - 2k + y_r}{k - y_p - \alpha_A} \Big/ \binom{n - k}{k - y_p}.   (11)

Let μ_{11}(α, y) be the conditional mixed moment μ_{11} given α and y. Then μ_{11} from (9) can be expressed in the following form:

\mu_{11} = \sum_{y} \sum_{\alpha \in M_k(y)} \mu_{11}(\alpha, y)\, P(\alpha, y)\, q(y_r)\, q(y_p).   (12)

To obtain an expression for μ_{11}(α, y), we consider the sums B^*_{y_r}(r) and B^*_{y_p}(p) at two different realizations r and p which produce an α-pair for a given y: they contain α_B common elements and y_r − α_B and y_p − α_B different elements, respectively. Let us split each of these sums into two parts, which contain the common and the different elements for the realizations r and p: B^*_{y_r}(r) = B^{com}_{\alpha_B}(r,p) + B^{dif}_{y_r-\alpha_B}(r,p), B^*_{y_p}(p) = B^{com}_{\alpha_B}(p,r) + B^{dif}_{y_p-\alpha_B}(p,r). Analogously, A^*_{k-y_r}(r) = A^{com}_{\alpha_A}(r,p) + A^{dif}_{k-y_r-\alpha_A}(r,p) and A^*_{k-y_p}(p) = A^{com}_{\alpha_A}(p,r) + A^{dif}_{k-y_p-\alpha_A}(p,r). Then the sums for the two realizations r and p can be written as follows:

BA^*_k(r) = B^{com}_{\alpha_B}(r,p) + B^{dif}_{y_r-\alpha_B}(r,p) + A^{com}_{\alpha_A}(r,p) + A^{dif}_{k-y_r-\alpha_A}(r,p),
BA^*_k(p) = B^{com}_{\alpha_B}(p,r) + B^{dif}_{y_p-\alpha_B}(p,r) + A^{com}_{\alpha_A}(p,r) + A^{dif}_{k-y_p-\alpha_A}(p,r).   (13)

Then

\mu_{11}(\alpha, y) = P\{BA^*_k(r) \le x',\ BA^*_k(p) \le x' \mid \alpha, y\}
= P\{B^{com}_{\alpha_B}(r,p) + A^{com}_{\alpha_A}(r,p) + B^{dif}_{y_r-\alpha_B}(r,p) + A^{dif}_{k-y_r-\alpha_A}(r,p) \le x',\ B^{com}_{\alpha_B}(p,r) + A^{com}_{\alpha_A}(p,r) + B^{dif}_{y_p-\alpha_B}(p,r) + A^{dif}_{k-y_p-\alpha_A}(p,r) \le x' \mid \alpha, y\}
= \int_{-\infty}^{\infty} P\{B^{dif}_{y_r-\alpha_B}(r,p) + A^{dif}_{k-y_r-\alpha_A}(r,p) \le x' - u \mid \alpha, y\} \cdot P\{B^{dif}_{y_p-\alpha_B}(p,r) + A^{dif}_{k-y_p-\alpha_A}(p,r) \le x' - u \mid \alpha, y\}\, dF^{\alpha_B,\alpha_A}_{BA}(u)
= \int_{-\infty}^{\infty} F^{y_r-\alpha_B,\, k-y_r-\alpha_A}_{BA}(x' - u) \cdot F^{y_p-\alpha_B,\, k-y_p-\alpha_A}_{BA}(x' - u)\, dF^{\alpha_B,\alpha_A}_{BA}(u),   (14)

where F^{b,a}_{BA}(x) is the cdf of the sum of b elements from X_B and a elements from X_A. For some distributions we can obtain explicit expressions.
Exponential case. First we need to find the distribution of the sum of a independent exponentially distributed r.v. {X_i} with parameter λ and b independent exponentially distributed r.v. {Y_i} with parameter ν. Following Afanasyeva [1], where the case of a difference of sums of random variables was discussed,

F^{b,a}_{BA}(x) = P\Big\{\sum_{i=1}^{a} X_i + \sum_{i=1}^{b} Y_i \le x\Big\} = \int_{-\infty}^{\infty} F^{(a)}(x - u)\, f^{(b)}(u)\, du
= F_{Er(\nu,a)}(x) - \frac{e^{-\lambda x}\,\nu^{a}}{(a-1)!} \sum_{i=0}^{b-1} \frac{\lambda^{i}}{i!} \sum_{p=0}^{i} (-1)^{i-p} \binom{i}{p}\, x^{p}\, \frac{(i-p+a-1)!}{(\nu-\lambda)^{a-p+i}}\, F_{Er(\nu-\lambda,\, i-p+a)}(x),
where F_{Er(\nu,a)}(x) is the cdf of the Erlang distribution with parameters ν and a. Now using (4)-(14) we obtain the properties of the resampling estimator.
Normal case. We consider the case of a sum of a independent normally distributed r.v. {X_i} with parameters β_X and σ_X and b independent normally distributed r.v. {Y_i} with parameters β_Y and σ_Y. As the sum of normally distributed r.v. is normally distributed,

F^{b,a}_{BA}(x) = P\Big\{\sum_{i=1}^{a} X_i + \sum_{i=1}^{b} Y_i \le x\Big\} = \Phi\left(\frac{x - (\beta_X \cdot a + \beta_Y \cdot b)}{\sqrt{a \cdot \sigma_X^2 + b \cdot \sigma_Y^2}}\right),   (15)

where Φ(x) is the cdf of the standard normal distribution N(0, 1). Now we can use all formulas from (4) to (14) to obtain the properties of the resampling estimator.
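A direct implementation of (15) only needs the cdf of the standard normal distribution; the helper below is an illustrative sketch (SciPy is assumed to be available, and the argument names are mine).

```python
from math import sqrt
from scipy.stats import norm

def F_BA_normal(x, a, b, beta_x, sigma_x, beta_y, sigma_y):
    """Eq. (15): cdf of the sum of a N(beta_x, sigma_x^2) and b N(beta_y, sigma_y^2) variables."""
    mean = beta_x * a + beta_y * b
    std = sqrt(a * sigma_x ** 2 + b * sigma_y ** 2)
    return norm.cdf((x - mean) / std)
```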
4 Pairwise Resampling CP Test and Its Efficiency
We propose an alternative resampling CP test. Let us test a point k. The idea behind this method is based on the consideration of the probability P{X ≤ Y}, where the r.v. X is taken randomly from the subsample X_B and the r.v. Y from the subsample X_A. If the samples X_B and X_A are from one distribution, this probability should be equal to 0.5. However, for our test we scale this value by multiplying by the difference y − x in the case x ≤ y, so our characteristic of interest is \Psi(x, y) = 1_{\{x \le y\}} \cdot (y - x).

(5) where ⟨·⟩ denotes the average over all n.
3 Prediction of Computer Performance Dynamics
The prediction results in this section are presented in the time domain. In each figure, we plot the comparison signal with an “x” and the predicted signal with an “o”. Error bars are provided to aid in visual comparison. Figures 1 and 4 show the true and predicted values of instructions per cycle (ipc) and cache misses (cache) of the column major program for both the SLMA and k-ball SLMA prediction algorithms. We explored a range of k values for the latter, finding that k = 3 minimized the RMSPE. RMSPE values for these predictions are tabulated in Table 1.

Table 1. Root Mean Squared Prediction Error (RMSPE) for all predictions

cache          SLMA RMSPE       k-Ball SLMA RMSPE
column major   15.2381          11.5660
row major      206.7005         59.8668

ipc            SLMA RMSPE       k-Ball SLMA RMSPE
column major   8.1577 × 10^-4   5.0020 × 10^-4
row major      0.0327           0.0208

As is clear from these plots and numbers, these predictions were quite accurate for the column major program, with k-ball SLMA delivering superior results on both traces. Note that RMSPE is in the same units as the data; the average value of the cache signal for column major,
Fig. 4. Prediction of column major cache-miss rate on an Intel Core2 Duo: RMSPE for SLMA and k-ball SLMA are 15.2381 and 11.5660, respectively
Fig. 5. Prediction of row major cache-miss rate on an Intel Core2 Duo: RMSPE for SLMA and k-ball SLMA are 206.7005 and 59.8668, respectively
for instance, is about 2.2 × 10^4, so RMSPE values of 15.2 and 11.6 are 0.07% and 0.05% of that value, respectively. These are impressive predictions. The results for row major were not quite as good, as is clearly visible from Figures 5 and 6 and the lower rows of Table 1, though k-ball SLMA works better than SLMA in this situation as well; k = 3 was the best value for this data set, too. In percentages, these RMSPE values are 14.8% and 4.3% for SLMA and k-ball SLMA on cache, a couple of orders of magnitude worse than for the column major cache results.
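As an illustration of how such error values can be obtained, the sketch below computes an RMSPE from a predicted and a comparison signal, assuming that RMSPE is the square root of the mean squared prediction error; this is an assumption consistent with the units discussion here, not code from the paper.

```python
import numpy as np

def rmspe(predicted, comparison):
    """Root mean squared prediction error between two equally long signals."""
    predicted = np.asarray(predicted, dtype=float)
    comparison = np.asarray(comparison, dtype=float)
    return float(np.sqrt(np.mean((predicted - comparison) ** 2)))

# e.g. an RMSPE of about 15.2 on a cache signal whose mean is about 2.2e4
# corresponds to roughly 0.07% relative error, matching the discussion above.
```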
Fig. 6. Prediction of row major processor usage on an Intel Core2 Duo: RMSPE for SLMA and k-ball SLMA are 0.0327 and 0.0208, respectively
This result surprised us, since row major was specifically designed to work well with modern cache design. We had conjectured that this would lead to simpler, easier to predict dynamics, but that turned out not to be the case. The reason for this is subtle, and related to speed and sampling. row major runs much faster and our fixed sample rate (100,000 instructions) was simply not fast enough to capture its dynamics accurately. Interrupting more frequently, however, visibly perturbed the dynamics [11], so we could not increase the sample rate without degrading the prediction.
4 Conclusions and Future Work
Predicting the performance of a computer is critical for a variety of important reasons, ranging from compiler and cache design to power reduction. The computer systems research community has devoted significant effort to this problem over the past few decades (e.g., [2]), but none of the resulting approaches use the methods of nonlinear dynamics. Given that computers are not linear, time-invariant systems, this is a serious limitation. The results presented here represent a proof of concept for a prediction strategy that relaxes that limitation by using mathematics that is truly up to the task of predicting the behavior of a running computer: a complex, nonlinear, but deterministic silicon and metal machine. These results are potentially transformational, as they could enable ‘on the fly’ adaptation of tasks to resources—and vice versa. An accurate prediction of processor or memory loads, for instance, could be used to power down unneeded resources—or to shift those loads to different resources (e.g. moving memory-bound computation threads to lower-speed processing units). A variety of issues remain to be addressed before this can be realized in practice: measurement, noise, dimensionality, and nonstationarity.
– Measurement: As is clear from the issues discussed at the end of the previous section, effective measurement infrastructure is critical to the success of this strategy. This is of course not unique to our approach, but the computer systems community has largely neglected this issue until very recently. Different profilers, for instance, deliver very different verdicts about the performance of the same piece of code—and some of them actually alter that performance [11].
– Noise: the k-ball adaptation of Section 2.3 is a preliminary solution to the unavoidable issue of noise in real-world measurements, but the choice of k is an issue—both its value and the fact that it remains fixed through the prediction. We are working on adaptive −δ algorithms that map collections of points and regions of embedded space back and forth in time in order to automatically choose and adapt k values.
– Dimensionality: delay-coordinate embedding requires significant post facto analysis—and expert human interpretation—in the choice of the parameters τ and m. This is obviously impractical in any kind of “on the fly” prediction strategy. However, there are results in dynamical systems theory that suggest that lower-dimensional projections of the full state-space dynamics are useful (even if not perfect) for the purpose of prediction. That is, full topological conjugacy may be an unnecessary luxury in a noisy, undersampled world. If this is the case in our application—something that we suspect is so—then we should be able to skip the full embedding process and work with a two-dimensional reconstruction of the dynamics (i.e., a plot of ipc at time t vs. ipc at time t + τ); a minimal sketch of such a reconstruction is given after this list.
– Nonstationarity: real computer programs do not look like row major and column major; rather, they bounce around between different loops of different lengths with complicated transients in between. Delay-coordinate embedding is effective in “learning” the structure of a single attractor—a state-space structure with fixed geometry to which the system trajectories converge. Real computer programs travel on and between many attractors, and a single delay-coordinate embedding of such a trajectory will be a topological stew of different dynamics—very unlikely to be useful for any prediction at all. There are two elements in the solution to this problem: separating the signals and adapting the prediction to each. We are working on a topology-based signal separation scheme that uses continuity to identify transitions between different dynamical regimes and machine-learning strategies for adapting the prediction algorithms on the fly when regime shifts are detected.
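The two-dimensional reconstruction mentioned in the Dimensionality item can be sketched as follows (an assumed helper, not the authors' code; the trace, τ and m are placeholders).

```python
import numpy as np

def delay_embed(series, dim, tau):
    """Return delay vectors [x(t), x(t + tau), ..., x(t + (dim - 1) * tau)] as rows."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

# Two-dimensional reconstruction of an ipc trace: points (ipc_t, ipc_{t + tau}).
# embedded = delay_embed(ipc_trace, dim=2, tau=1)
```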
References 1. Alexander, Z., Mytkowicz, T., Diwan, A., Bradley, E.: Measurement and dynamical analysis of computer performance data. In: Cohen, P.R., Adams, N.M., Berthold, M.R. (eds.) IDA 2010. LNCS, vol. 6065, pp. 18–29. Springer, Heidelberg (2010) 2. Armstrong, B., Eigenmann, R.: Performance forecasting: Towards a methodology for characterizing large computational applications. In: Proc. of the Int’l Conf. on Parallel Processing, pp. 518–525 (1998)
3. Bradley, E.: Analysis of time series. In: Berthold, M., Hand, D. (eds.) Intelligent Data Analysis: An Introduction, vol. 2, pp. 199–226. Springer, Heidelberg (2000) 4. Browne, S., Deane, C., Ho, G., Mucci, P.: PAPI: A portable interface to hardware performance counters. In: Proceedings of Department of Defense HPCMP Users Group Conference. Department of Defense (1999) 5. Fraser, A., Swinney, H.: Independent coordinates for strange attractors from mutual information. Phys. Rev. A 33(2), 1134–1140 (1986) 6. Hegger, R., Kantz, H., Schreiber, T.: Practical implementation of nonlinear time series methods: The TISEAN package. Chaos 9(2), 413–435 (1999) 7. Kantz, H., Schreiber, T.: Nonlinear Time Series Analysis, vol. 2. Cambridge University Press, Cambridge (2003) 8. Kennel, M., Brown, R., Abarbanel, H.: Determining embedding dimension for phase-space reconstruction using a geometrical construction. Phys. Rev. A 45(6), 3403–3411 (1992) 9. Lorenz, E.: Atmospheric predictability as revealed by naturally occurring analogues. Journal of the Atmospheric Sciences 26(4), 636–646 (1969) 10. Meiss, J.: Differential Dynamical Systems. Mathematical Modeling and Computation. Society for Industrial and Applied Mathematics, Philadelphia (2007) 11. Mytkowicz, T.: Supporting experiments in computer systems research. Ph.D. thesis, University of Colorado (November 2010) 12. Mytkowicz, T., Diwan, A., Bradley, E.: Computer systems are dynamical systems. Chaos 19(3), 033124–033124–14 (2009) 13. Sauer, T., Yorke, J., Casdagli, M.: Embedology. Journal of Statistical Physics 65, 579–616 (1991) 14. Takens, F.: Detecting strange attractors in turbulence. In: Rand, D., Young, L.S. (eds.) Dynamical Systems and Turbulence, Warwick 1980. Lecture Notes in Mathematics, vol. 898, pp. 366–381. Springer, Heidelberg (1981)
Prototype-Based Classification of Dissimilarity Data Barbara Hammer, Bassam Mokbel, Frank-Michael Schleif, and Xibin Zhu CITEC centre of excellence, Bielefeld University, 33615 Bielefeld - Germany {bhammer,bmokbel,fschleif,xzhu}@techfak.uni-bielefeld.de
Abstract. Unlike many black-box algorithms in machine learning, prototype-based models offer an intuitive interface to given data sets, since prototypes can directly be inspected by experts in the field. Most techniques rely on Euclidean vectors, such that their suitability for complex scenarios is limited. Recently, several unsupervised approaches have successfully been extended to general, possibly non-Euclidean data characterized by pairwise dissimilarities. In this paper, we briefly review a general approach to extend unsupervised prototype-based techniques to dissimilarities, and we transfer this approach to supervised prototype-based classification for general dissimilarity data. In particular, a new supervised prototype-based classification technique for dissimilarity data is proposed.
1 Introduction
While machine learning techniques have revolutionized the possibility to deal with large and complex electronic data sets, and highly accurate classification and clustering models can be inferred automatically from given data, many machine learning techniques have the drawback that they largely behave as black boxes. In consequence, applicants often have to simply ‘trust’ the output of such methods. It is in general hardly possible to see why an automatic classification method has taken a particular decision, nor is it possible to change the behavior or functionality of a given model from the outside due to the black box character. Hence, many machine learning techniques are not suited to inspect large data sets in a meaningful human-understandable way. Prototype-based methods represent their decisions in terms of typical representatives contained in the input space. Prototypes can directly be inspected by humans in the field in the same way as data points: for example, physicians can inspect prototypical medical cases, prototypical images can directly be displayed on the computer screen, prototypical action sequences of robots can be performed in a robotic simulation, etc. Since the decision in prototype-based techniques usually depends on the similarity of a given input to the prototypes stored in the model, a direct inspection of the taken decision in terms of the responsible prototype becomes possible. Similarly, k-nearest neighbor classifiers rely on the k most similar data points, given an input pattern, and thus, they
often constitute very intuitive default baseline classifiers. Note, however, that k-nearest neighbor classifiers depend on all given training data, and thus, they are not suited for large data sets and they do not offer a compact way to inspect the given classes represented by all members of the class. In contrast, prototype-based techniques compress the data in terms of a priorly fixed number of (usually few) prototypes, such that fast classification and inspection of clusters becomes possible. Many different algorithms have been proposed in the literature which derive prototype-based models from given data. Unsupervised techniques include popular clustering algorithms such as simple k-means or fuzzy-k-means clustering, topographic mapping such as neural gas or the self-organizing map, and statistical counterparts such as the generative topographic mapping [18,14,2]. Supervised techniques take into account a priorly given class labeling, and they try to find decision boundaries which accurately describe priorly known class labels. Popular methods include in particular different variants of learning vector quantization, some of which are derived from explicit cost functions or statistical models [14,24,27]. Besides different mathematical derivations of the models, these learning algorithms have in common that they arrive at sparse representations of data in terms of prototypical vectors, they form decisions based on the similarity of data to these prototypes, and their training is often very intuitive, based on Hebbian principles. In addition to their direct interpretability, prototype-based models provide excellent generalization ability due to their sparse representation of data, see e.g. the work [10,25] for explicit large margin generalization bounds for prototype-based techniques. Prototypes offer a compression and efficient representation of the important aspects of given data, which very naturally allows one to wrap the basic algorithms into an incremental life-long learning paradigm, treating the prototypes as a compact representation of all already seen data. This aspect has been used in diverse scenarios which deal with incremental settings or very large data sets, see e.g. [5,13,1]. One of the most severe restrictions of prototype-based methods is their dependency on the Euclidean distance and their restriction to Euclidean vector spaces only. This makes them unsuitable for complex or heterogeneous data sets: input features often have different relevance; further, high dimensionality easily disrupts the Euclidean norm due to accumulated noise in the data. This problem can partially be avoided by incorporating appropriate metric learning into the algorithms, as proposed e.g. in [25], or by looking at kernel versions of the techniques, see e.g. [23]. However, data in complex dynamical systems are often inherently non-Euclidean, such that an explicit or implicit representation in terms of Euclidean vectors is not possible at all. Rather, data have a complex structural form and dedicated dissimilarity measures should be used. Popular examples include dynamic time warping for time series, alignment for symbolic strings, graph or tree kernels for complex structures, the compression distance to compare sequences based on an information theoretic ground, and similar. These settings do not allow a vectorial representation of data at all; rather, data are given implicitly in terms of pairwise dissimilarities or relations. We refer to a
‘relational data representation’ in the following when addressing data sets which are represented implicitly by means of pairwise dissimilarities dij of data. D denotes the corresponding matrix of dissimilarities. Recently, popular prototype-based clustering algorithms have been extended to deal with relational data. Since no embedding vector space is given a priori in these settings, the adaptation of prototypes by means of vectorial operations is no longer possible. One simple way around this problem is to restrict prototype positions to data positions. For techniques derived from a cost function, an optimization in the restricted feasible set of data positions leads to concrete learning algorithms such as, for example, median clustering or affinity propagation [4,6]. One drawback of this procedure consists in the fact that prototypes are very restricted if parts of the data space are sampled only sparsely, such that optimization is often very complicated. Due to this reason, the obtained accuracy can be severely reduced as compared to representations by prototypes in a continuous vector space. In contrast, relational clustering implicitly embeds dissimilarity data in pseudo-Euclidean space, and, in this way, enables an implicit continuous update of prototypes for relational data which is equivalent to the standard setting in the Euclidean case, see e.g. [9]. An embedding in pseudo-Euclidean space is possible for every data set which is characterized by a symmetric matrix of pairwise dissimilarities [22], such that this approach covers a large number of relevant situations. A highly improved flexibility of the prototypes is achieved since they can be represented in a smooth way independent of the sampling frequency of the data space. This approach has been integrated into unsupervised topographic mapping provided by neural gas, the self-organizing map, and the generative topographic mapping [9,11,8]. In all cases, a very flexible prototypebased data inspection technique for complex data sets which are described by pairwise dissimilarities arises. Example applications include the mapping of symbolic music data, large text data sets, or complex biomedical data sets such as mass spectra [20,12,7]. So far, the models proposed in the literature widely deal with unsupervised batch algorithms only. Supervised prototype-based classification for relational data described by pairwise dissimilarities has not yet been considered. The task of supervised classification occurs in diverse complex applications such as the classification of mass spectra according to the biomedical decision problem, the classification of environmental time series according to related toxicity, or the classification of music according to underlying composers or epochs. Supervised prototype-based techniques for general dissimilarity data would offer one striking possibility to arrive at human understandable classifiers in such settings. In this contribution, we shortly review relational clustering algorithms for dissimilarity data and we propose a way to extend these techniques to supervised settings, arriving in particular at a relational extension of the popular supervised prototype-based learning vector quantization (LVQ) [14]. We derive an explicit algorithm based on a formalization of LVQ via a cost function [24,27], and we test the accuracy of the approach in comparison to unsupervised alternatives in
several benchmark scenarios. Based on the very promising accuracy achieved in these examples, we propose different extensions of the techniques to improve the sparsity, efficiency, and suitability to deal with large data sets.
2 Prototype-Based Clustering and Classification
Assume data x^i ∈ R^n, i = 1, ..., m, are given. Prototypes are elements w^j ∈ R^n, j = 1, ..., k, of the same space. They decompose the data into receptive fields R(w^j) = {x^i : ∀k d(x^i, w^j) ≤ d(x^i, w^k)} based on the squared Euclidean distance d(x^i, w^j) = ‖x^i − w^j‖². The goal of prototype-based machine learning techniques is to find prototypes which represent a given data set as accurately as possible. In the unsupervised setting, the accuracy is often measured in terms of the accumulated distances of prototypes and data points in their receptive fields. Learning techniques can be derived from cost functions related to this objective. We exemplarily consider neural gas (NG), which constitutes a high-level technology to infer a prototype-based topographic mapping [18,19]. NG is based on the objective

E_{NG} = \sum_{i,j} \exp(-rk(x^i, w^j)/\sigma^2) \cdot d(x^i, w^j)
where rk(x^i, w^j) denotes the rank of prototype w^j, i.e. the number of prototypes which are closer to x^i than w^j, measured according to the distance d. The parameter σ determines the degree of neighborhood cooperation. Batch optimization as introduced in [4] iteratively optimizes assignments and prototypes by means of the updates

compute k_{ij} := rk(x^i, w^j) for all i and j based on d(x^i, w^j)
set w^j := \sum_i \exp(-k_{ij}/\sigma^2) \cdot x^i \big/ \sum_i \exp(-k_{ij}/\sigma^2) for all j
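A minimal sketch of this batch scheme, assuming squared Euclidean distances and initialization of the prototypes at randomly chosen data points (the function name and parameters are placeholders, not the authors' implementation):

```python
import numpy as np

def batch_neural_gas(X, n_prototypes, sigma=1.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_prototypes, replace=False)].astype(float)
    for _ in range(n_iter):
        d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=-1)   # d(x_i, w_j)
        k = np.argsort(np.argsort(d, axis=1), axis=1)             # ranks k_ij
        h = np.exp(-k / sigma ** 2)                                # neighborhood weights
        W = (h.T @ X) / h.sum(axis=0)[:, None]                     # weighted means
    return W
```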
Starting from a random initialization, NG robustly determines prototype locations which represent data accurately as measured by the distances. In addition, the ranking of prototypes allows to infer the inherent data topology: prototypes are neighbored if and only if they are closest for at least one given data point [19]. In supervised settings, data xi are equipped with prior class labels c(xi ) ∈ {1, . . . , L} in a finite set of priorly known classes. An unsupervised prototypebased clustering gives rise to a classification by means of posterior labeling: a prototype w j is assigned the label c(wj ) which corresponds to the majority of the labels of data points observed in its receptive field R(wj ). While this often yields astonishingly accurate classifiers, unsupervised training algorithms do not take into account the priorly known classes such that decision boundaries are not optimal. Learning vector quantization (LVQ) tries to avoid this problem by
taking the labeling into account while positioning the prototypes [14]. Here we restrict to a variant of LVQ as proposed in [24], generalized LVQ (GLVQ), which has the benefit of a mathematical derivation from a cost function which can be related to the generalization ability of LVQ classifiers [25]. We assume every prototype is equipped with a label c(w^j) prior to training. The cost function of GLVQ is given as

E_{GLVQ} = \sum_i \Phi\left( \frac{d(x^i, w^+(x^i)) - d(x^i, w^-(x^i))}{d(x^i, w^+(x^i)) + d(x^i, w^-(x^i))} \right)

where Φ is a differentiable monotonic function such as the hyperbolic tangent, w^+(x^i) refers to the prototype closest to x^i with the same label as x^i, and w^-(x^i) refers to the closest prototype with a different label. This way, for every data point, its contribution to the cost function is small iff the distance to the closest prototype with a correct label is smaller than the distance to a wrongly labeled prototype, resulting in a correct classification of the point. A learning algorithm can be derived thereof by means of a stochastic gradient descent. After a random initialization of prototypes, data x^i are presented in random order. Adaptation of the closest correct and wrong prototype takes place by means of the update rules

\Delta w^+(x^i) \sim -\,\Phi'(\mu(x^i)) \cdot \mu^+(x^i) \cdot \nabla_{w^+(x^i)}\, d(x^i, w^+(x^i))
\Delta w^-(x^i) \sim \Phi'(\mu(x^i)) \cdot \mu^-(x^i) \cdot \nabla_{w^-(x^i)}\, d(x^i, w^-(x^i))
where

\mu(x^i) = \frac{d(x^i, w^+(x^i)) - d(x^i, w^-(x^i))}{d(x^i, w^+(x^i)) + d(x^i, w^-(x^i))}, \quad
\mu^+(x^i) = \frac{2 \cdot d(x^i, w^-(x^i))}{\big(d(x^i, w^+(x^i)) + d(x^i, w^-(x^i))\big)^2}, \quad\text{and}\quad
\mu^-(x^i) = \frac{2 \cdot d(x^i, w^+(x^i))}{\big(d(x^i, w^+(x^i)) + d(x^i, w^-(x^i))\big)^2}.
For the squared Euclidean norm, the derivative yields ∇wj d(xi , wj ) = −(xi − wj ), leading to Hebbian update rules of the prototypes which take into account the priorly known class information, i.e. they adapt the closest prototypes towards / away from a given data point depending on the correctness of the classification. GLVQ constitutes one particularly efficient method to adapt the prototypes according to a given labeled data set, alternatives such as techniques based on heuristics or algorithms derived from statistical models are possible [27,26].
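The following sketch shows a single GLVQ update for one labeled sample, assuming Φ = tanh and the squared Euclidean distance; the learning rate and function name are placeholders rather than the authors' implementation.

```python
import numpy as np

def glvq_step(x, y, W, c, eta=0.01):
    """One GLVQ update: x sample, y its label, W (k, n) prototypes, c (k,) labels."""
    d = ((W - x) ** 2).sum(axis=1)
    correct, wrong = np.where(c == y)[0], np.where(c != y)[0]
    jp = correct[np.argmin(d[correct])]      # closest correct prototype w^+
    jm = wrong[np.argmin(d[wrong])]          # closest wrong prototype w^-
    dp, dm = d[jp], d[jm]
    mu = (dp - dm) / (dp + dm)
    dphi = 1.0 - np.tanh(mu) ** 2            # Phi'(mu) for Phi = tanh
    mup = 2.0 * dm / (dp + dm) ** 2
    mum = 2.0 * dp / (dp + dm) ** 2
    # gradient of the squared Euclidean distance w.r.t. w is -2 (x - w)
    W[jp] += eta * dphi * mup * 2.0 * (x - W[jp])   # attract correct prototype
    W[jm] -= eta * dphi * mum * 2.0 * (x - W[jm])   # repel wrong prototype
    return W
```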
3 Dissimilarity Data
Prototype-based techniques as introduced above are restricted to Euclidean vector spaces such that their suitability to deal with complex non-Euclidean data sets is highly limited. Since data are becoming more and more complex in many application domains, e.g. due to improved sensor technology or dedicated data formats, the need to extend these techniques towards more general data has attracted some attention recently. In the following, we assume that data x^i are not given as vectors, rather pairwise dissimilarities d_{ij} = d(x^i, x^j) of data points numbered i and j are available. D refers to the corresponding dissimilarity matrix. Note that it is easily possible to transfer similarities to dissimilarities and vice versa, see [22]. We assume symmetry d_{ij} = d_{ji} and we assume d_{ii} = 0. However, we do not require that d refers to a Euclidean data space, i.e. D does not need to be embeddable in Euclidean space, nor does it need to fulfill the conditions of a metric. One very simple possibility to transfer prototype-based models to this general setting is offered by a restriction of prototype positions. If we restrict w^j ∈ X = {x^1, ..., x^m} for all j, dissimilarities d(x^i, w^j) are well defined and the cost functions of NG and GLVQ can be evaluated. Because of the discrete nature of the space of possible solutions, training can take place by means of an exhaustive search in X to find good prototype locations, in principle. This principle has been proposed to extend SOM and NG [4,15]. One drawback of this technique is given by the restriction of flexibility of the prototypes on the one hand, and the complexity due to the exhaustive search. Exhaustive search is quadratic in the worst case due to the fact that, for every prototype, a sum over a possibly unlimited fraction of the data is to be taken into account for all possible exemplars, on the one side, and all exemplars constitute possible prototypes, on the other side, see [4]. As an alternative, NG has been extended to so-called relational NG in [9]. Data described by pairwise dissimilarities can always be embedded in pseudo-Euclidean space, provided D is symmetric and has zero diagonal [22]. Thereby, the term ‘pseudo-Euclidean space’ refers to a vector space which is equipped with a symmetric bilinear form consisting of two parts: a Euclidean part (corresponding to the positive eigenvalues) and a correction (corresponding to the negative eigenvalues), see [22]. This symmetric bilinear form induces the pairwise dissimilarities of the given data. As demonstrated in [9], NG can be performed in the pseudo-Euclidean vector space using this bilinear form. Since the embedding is usually not given explicitly and computation of an explicit embedding takes cubic complexity, the prototypes are usually adapted only implicitly based on the following observation: assume prototypes are represented as linear combinations of data points,

w^j = \sum_i \alpha_{ji}\, x^i \quad\text{with}\quad \sum_i \alpha_{ji} = 1.
Then dissimilarities can be computed implicitly by means of the formula

d(x^i, w^j) = \|x^i - w^j\|^2 = [D \cdot \alpha_j]_i - \tfrac{1}{2} \cdot \alpha_j^t D \alpha_j,

where α_j = (α_{j1}, ..., α_{jn}) refers to the vector of coefficients describing the prototype w^j implicitly. This equation can easily be derived when using the representation of prototypes by means of linear combinations, and symmetry and bilinearity of the given form, see e.g. [9] for an explicit computation. For NG, the values α_{ij} are non-negative, albeit this does not constitute a necessary condition to formalize the approach. We assume non-negative values α_{ij} in the following, for simplicity. This way, batch adaptation of NG in pseudo-Euclidean space can be performed implicitly by means of the iterative adaptation:

compute k_{ij} := rk(x^i, w^j) based on d(x^i, w^j) = [D \cdot \alpha_j]_i - \tfrac{1}{2} \cdot \alpha_j^t D \alpha_j
set \alpha_{ji} := \exp(-k_{ij}/\sigma^2) \big/ \sum_i \exp(-k_{ij}/\sigma^2) for all j and i
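A compact sketch of this implicit distance computation, using only the dissimilarity matrix D and the coefficient vectors α_j (array layout and names are my own, not part of the paper):

```python
import numpy as np

def relational_distances(D, alpha):
    """D: (n, n) symmetric dissimilarities; alpha: (m, n) rows of prototype
    coefficients summing to one. Returns the (n, m) matrix of d(x_i, w_j)."""
    Dalpha = D @ alpha.T                                       # [D alpha_j]_i
    self_terms = 0.5 * np.einsum('jn,nk,jk->j', alpha, D, alpha)
    return Dalpha - self_terms[None, :]
```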
This way, prototypes are represented implicitly by means of their coefficient vectors, and adaptation refers to the known pairwise dissimilarities d_{ij} only. We refer to relational NG (RNG) in the following. Initialization takes place by setting the coefficients to random vectors which sum up to 1. Note that the assumption \sum_i \alpha_{ji} = 1 is automatically fulfilled for optima of NG. Even for general settings, this assumption is quite reasonable since we can expect that the prototypes lie in the vector space spanned by the data. For GLVQ, a kernelized version has been proposed in [23]. However, this refers to a kernel matrix only, i.e. it requires Euclidean similarities instead of general symmetric dissimilarities. In particular, it must be possible to embed data in a possibly high dimensional Euclidean feature space. Here we transfer the ideas of RNG to GLVQ, obtaining a valid algorithm for general symmetric dissimilarities. We assume that prototypes are given by linear combinations of data in pseudo-Euclidean space as before. Then, we can use the equivalent characterization of distances in the GLVQ cost function, leading to the costs of relational GLVQ (RGLVQ):

E_{RGLVQ} = \sum_i \Phi\left( \frac{[D\alpha^+]_i - \tfrac{1}{2}\cdot(\alpha^+)^t D \alpha^+ - [D\alpha^-]_i + \tfrac{1}{2}\cdot(\alpha^-)^t D \alpha^-}{[D\alpha^+]_i - \tfrac{1}{2}\cdot(\alpha^+)^t D \alpha^+ + [D\alpha^-]_i - \tfrac{1}{2}\cdot(\alpha^-)^t D \alpha^-} \right),
where, as before, the closest correct and wrong prototypes are addressed, indicated by the superscripts + and −, respectively. A stochastic gradient descent leads to adaptation rules for the coefficients α^+ and α^-: component k of these vectors is adapted by the rules

\Delta \alpha^+_k \sim -\,\Phi'(\mu(x^i)) \cdot \mu^+(x^i) \cdot \frac{\partial\big([D\alpha^+]_i - \tfrac{1}{2}\cdot(\alpha^+)^t D\alpha^+\big)}{\partial \alpha^+_k}
\Delta \alpha^-_k \sim \Phi'(\mu(x^i)) \cdot \mu^-(x^i) \cdot \frac{\partial\big([D\alpha^-]_i - \tfrac{1}{2}\cdot(\alpha^-)^t D\alpha^-\big)}{\partial \alpha^-_k}
where μ(x^i), μ^+(x^i), and μ^-(x^i) are as above. The partial derivative yields

\frac{\partial\big([D\alpha_j]_i - \tfrac{1}{2}\cdot\alpha_j^t D\alpha_j\big)}{\partial \alpha_{jk}} = d_{ik} - \sum_l d_{lk}\, \alpha_{jl}.
After every adaptation step, normalization takes place to guarantee \sum_i \alpha_{ji} = 1. This way, a learning algorithm which adapts prototypes in a supervised manner similar to GLVQ is given for general dissimilarity data, whereby prototypes are implicitly embedded in pseudo-Euclidean space. Note that, due to the complexity of the cost function, numeric optimization takes place and multiple local optima are possible. Since the constraints are very simple, a direct normalization is easily possible; as an alternative, the constraints could be integrated into the cost function by means of a penalty term. The prototypes are initialized as random vectors, i.e. we initialize α_{ij} with small random values such that the sum is one. It is possible to take class information into account by setting all α_{ij} to zero which do not correspond to the class of the prototype. The resulting classifier represents clusters in terms of prototypes for general dissimilarity data. Although these prototypes correspond to vector positions in pseudo-Euclidean space, they can usually not be inspected directly because the pseudo-Euclidean embedding is not computed directly. Therefore, we use an approximation of the prototypes after training, substituting a prototype by its K nearest data points as measured by the given dissimilarity. To achieve a fast computation of this approximation, we enforce α_{ij} ≥ 0 during the updates. Note that generalization of the classification to new data can be done in the same way as for RNG: given a novel data point x characterized by its pairwise dissimilarities D(x) to the data used for training, the dissimilarity to the prototypes is given by d(x, w^j) = D(x)^t \cdot \alpha_j - \tfrac{1}{2} \cdot \alpha_j^t D \alpha_j. For an approximation of prototypes by exemplars, obviously, only the dissimilarities to these exemplars have to be computed, i.e. a very sparse classifier results.
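Putting the pieces together, the sketch below performs one RGLVQ coefficient update for a training index i, assuming Φ = tanh; the non-negativity and normalization steps follow the description above, while the learning rate and names are placeholders rather than the authors' implementation.

```python
import numpy as np

def rglvq_step(D, i, y, alpha, c, eta=0.01):
    """One RGLVQ update for training index i with label y; alpha: (m, n)
    prototype coefficients, c: (m,) prototype labels, D: (n, n) dissimilarities."""
    d = D @ alpha.T - 0.5 * np.einsum('jn,nk,jk->j', alpha, D, alpha)[None, :]
    d_i = d[i]
    correct, wrong = np.where(c == y)[0], np.where(c != y)[0]
    jp = correct[np.argmin(d_i[correct])]      # closest correct prototype
    jm = wrong[np.argmin(d_i[wrong])]          # closest wrong prototype
    dp, dm = d_i[jp], d_i[jm]
    mu = (dp - dm) / (dp + dm)
    dphi = 1.0 - np.tanh(mu) ** 2              # Phi'(mu) for Phi = tanh
    mup, mum = 2 * dm / (dp + dm) ** 2, 2 * dp / (dp + dm) ** 2
    grad_p = D[i] - D @ alpha[jp]              # d_ik - sum_l d_lk alpha_jl
    grad_m = D[i] - D @ alpha[jm]
    alpha[jp] -= eta * dphi * mup * grad_p     # move correct prototype closer
    alpha[jm] += eta * dphi * mum * grad_m     # move wrong prototype away
    for j in (jp, jm):                         # enforce alpha >= 0 and sum to one
        alpha[j] = np.clip(alpha[j], 0.0, None)
        alpha[j] /= max(alpha[j].sum(), 1e-12)
    return alpha
```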
4 Experiments
To demonstrate the performance of the proposed classification method, we first evaluate the algorithm on three benchmark data sets, where data are characterized by pairwise dissimilarities. Afterwards, we show on an example, how the resulting prototypes can be interpreted according to the specific data. The following data sets were used for evaluation: 1. The Copenhagen chromosomes data set constitutes a benchmark from cytogenetics [16]. A set of 4,200 human chromosomes from 22 classes (the autosomal chromosomes) are represented by gray-valued images. These are transferred to strings measuring the thickness of their silhouettes. These strings are compared using edit distance with insertion/deletion costs 4.5 [21]. 2. The vibrio data set consists of 1, 100 samples of vibrio bacteria characterized by mass spectra. The spectra contain ≈ 42, 000 mass positions. The full data set consists of 49 classes of vibrio-sub-species. The objective is to identify
a bacteria species based on its spectrum signature, compared to a reference database. The mass spectra are preprocessed with a standard workflow using the BioTyper software [17]. In our case, problem-specific score-similarities are calculated using the measure provided by the BioTyper software [17], which votes the corresponding peaks between a given signature and a new test spectrum. The vibrio similarity matrix S has a maximum score of 3. The corresponding dissimilarity matrix is therefore obtained as D = 3 − S. Experts use the BioTyper framework to identify bacteria mainly on the quality of the score, with values of > 2 indicating a safe identification. Due to the growing number of references in the database, the identification time increases linearly; clustering methods raise hope to keep the query time rather constant. This is even more important in high-throughput settings with in-time identification demands. An interpretable prototype, subsuming a larger set of similar signatures, can be used to get at least a quick pre-identification, canceling out large sets of the remaining database. For this setting, the prototype must however be representable as a true spectrum to calculate the score and permit visual inspection. 3. The sonatas data set contains complex symbolic data, similar to [20]. It is comprised of pairwise dissimilarities between 1,068 sonatas from the First Viennese School period (by Beethoven, Mozart and Haydn) and the Baroque era (by Scarlatti and Bach). The musical pieces were given in the MIDI file format (http://www.midi.org), taken from the online collection Kunst der Fuge (http://www.kunstderfuge.com). Their mutual dissimilarities were measured with the normalized compression distance (NCD), see [3], using a specific preprocessing, which provides meaningful invariances for music information retrieval, such as invariance to pitch translation (transposition) and time scaling. This method uses a graph-based representation of the musical pieces to construct reasonable strings as input for the NCD, see [20]. The musical pieces are classified according to their composer. These three data sets constitute typical examples of non-Euclidean data which occur in complex systems. In all cases, dedicated preprocessing steps and dissimilarity measures for structures are used. The dissimilarity measures are inherently non-Euclidean and cannot be embedded isometrically in a Euclidean vector space. We report the results of RNG in comparison to RGLVQ for these data sets. The number of prototypes is picked according to the number of priorly known classes, i.e., k = 63 for the chromosomes data (the smallest classes are represented by only one prototype), k = 49 for the vibrio data set, and k = 5 for the sonatas data set. The prototypes are initialized randomly, and training is done for 5 epochs (chromosomes) or 10 epochs (vibrio, sonatas), respectively. The results are evaluated by the classification accuracy, i.e., the percentage of points on the test set which are classified correctly by their closest prototypes, which are labeled using posterior labeling on the training set, in a repeated stratified 10-fold cross-validation with two repeats (for chromosomes) or ten repeats
Table 1. Results of supervised and unsupervised prototype-based classification for dissimilarity data on three different data sets. Evaluation is done by the classification accuracy measured in a repeated cross-validation. The standard deviation is given in parentheses.

              Chromosomes     Vibrio          Sonatas
RNG           0.911 (0.004)   0.898 (0.005)   0.745 (0.002)
RNG (K = 7)   0.915 (0.004)   0.993 (0.004)   0.762 (0.006)
RNG (K = 5)   0.912 (0.003)   0.987 (0.004)   0.754 (0.008)
RNG (K = 3)   0.907 (0.003)   0.976 (0.004)   0.738 (0.006)
RNG (K = 1)   0.893 (0.002)   0.922 (0.006)   0.708 (0.004)
RGLVQ         0.927 (0.002)   1.000 (0.000)   0.839 (0.002)
RGLVQ (K = 7) 0.923 (0.005)   1.000 (0.000)   0.794 (0.005)
RGLVQ (K = 5) 0.917 (0.001)   1.000 (0.000)   0.788 (0.006)
RGLVQ (K = 3) 0.912 (0.002)   0.999 (0.000)   0.780 (0.009)
RGLVQ (K = 1) 0.902 (0.000)   0.999 (0.001)   0.760 (0.008)
(vibrio, sonatas), respectively. The results are reported in Tab. 1, choosing different values K to approximate the prototypes by their nearest K exemplars. In all cases, a good classification accuracy can be obtained using prototypebased relational data processing. Interestingly, the results obtained from trained RNG and RGLVQ classifiers using a K-approximation of the prototypes do not lead to much decrease of the classification accuracy, so it is possible to represent the classes in terms of a small number of data points only. This way, the resulting classifiers can be inspected by experts in the field, because every class can be represented by a small number of exemplary data points. In all cases, the incorporation of label information into the classifier leads to an increased classification accuracy of the resulting model, since priorly available information about class boundaries can better be taken into account in this setting. Thus, RGLVQ constitutes a very promising method to infer a highquality prototype-based classifier for general dissimilarity data sets, which offers the possibility to inspect the clustering by directly referring to the prototypes or their approximation in terms of exemplars, respectively. To demonstrate this possibility, we consider a small example regarding the vibrio data. Figure 1 shows a mass spectrum that corresponds to the prototype which represents the Vibrio Anguillarum class. It is calculated as the α-weighted combination of the 3 nearest data points used for K-approximation (K = 3). For comparison, the mass spectrum of an arbitrary data point from the considered class is shown. The problem-specific similarity score between the two spectra is given as > 2, which indicates that the prototypical spectrum closely resembles the characteristics of the chosen data. In this way, experts could use the prototype representations for a direct comparison with any newly acquired spectrum as a tool for quick class identification. This would offer a much faster alternative to their standard work flow.
[Figure 1 plot: the Vibrio Anguillarum prototype spectrum, an unknown test spectrum, and their difference spectrum; estimated score 2.089828; y-axis: intensity (arb. unit); x-axis: m/c in Dalton.]
Fig. 1. The prototype spectrum (straight line drawn on the positive ordinate) represents the class of the chosen test spectrum (dashed line drawn on the negative ordinate). Their spectral difference is shown by the dash-dotted line. The prototype has the class label Vibrio Anguillarum. It shows high symmetry to the test spectrum and the similarity of matched peaks (the zoomed-in view) highlights good agreement by bright gray shades, indicating the local error of the match. This is only meaningful if the identified prototype mimics the data characteristics of the represented class, and the signal shape of the prototype is biochemically plausible. It allows to judge the identification accuracy using the priorly mentioned bioinformatics score of the test spectrum. The score of > 2 indicates a good match. The prototype model allows direct identification and scoring of matched and unmatched peaks, which can be assigned to its mass to charge (m/c) positions, for further biochemical analysis.
5 Conclusions
In this contribution, we have proposed a high-quality supervised classification technique for general dissimilarity data which represents the decisions in the form of prototypes. Due to this representation, unlike many alternative blackbox techniques, it offers the possibility of a direct inspection of the classifier by humans. Further, unlike alternatives which are based on kernels such as the kernel GLVQ [23] or the relevance vector machine [28], the technique does not require that data are embeddable into Euclidean space, rather, a general symmetric dissimilarity matrix can be dealt with. Due to these properties, this technique seems very suited for interactive data inspection and modeling tasks, since it allows to deal with general dissimilarities (thus also very complex data or structural elements, or user-adapted dissimilarity values), and it allows an inspection by the user (including a direct change of the behavior by including or changing prototypes). We demonstrated the accuracy of the technique for three different nonEuclidean data sets along with a small example showing the interpretability of prototypes. More experimental evaluation and, in particular, an application to very large data sets are the subject of ongoing research. Especially, applications to spatiotemporal data as they occur in dynamic systems or robotics seem very interesting, where techniques such as temporal and spatial alignment offer
very popular and powerful dissimilarity measures. Note that the extension of supervised prototype-based classification to general dissimilarities by means of relational extensions is not restricted to GLVQ. Rather, alternative formulations based on cost functions such as soft robust LVQ, as introduced in [27], can be extended in a similar way. One central problem of relational classification has not yet been considered in this contribution: while we arrive at sparse solutions by using approximations of prototypes, the techniques inherently possess quadratic complexity because of their dependency on the full dissimilarity matrix. This can lead to memory problems to store the full matrix, besides a long training time for large data sets. In applications, probably the biggest bottleneck is given by the necessity to compute the full dissimilarity matrix. In [7], two different ways to approximate the computation by linear time techniques which refer to an only linear subpart of the full dissimilarity matrix have been proposed and compared in the unsupervised setting: the classical Nystr¨om approximation [29] to approximate the dissimilarity matrix by a low-rank matrix, on the one side, and patch processing to compress the data consecutively by means of a prototype-based representation, on the other side. The approximation of RGLVQ by similar linear time techniques is a matter of ongoing research. Acknowledgments. Financial support from the Cluster of Excellence 277 Cognitive Interaction Technology funded in the framework of the German Excellence Initiative and from the ”German Science Foundation (DFG)“ under grant number HA-2719/4-1 is gratefully acknowledged.
References
1. Alex, N., Hasenfuss, A., Hammer, B.: Patch clustering for massive data sets. Neurocomputing 72(7-9), 1455–1469 (2009)
2. Bishop, C., Svensen, M., Williams, C.: The generative topographic mapping. Neural Computation 10(1), 215–234 (1998)
3. Cilibrasi, R., Vitanyi, M.B.: Clustering by compression. IEEE Transactions on Information Theory 51(4), 1523–1545 (2005)
4. Cottrell, M., Hammer, B., Hasenfuss, A., Villmann, T.: Batch and median neural gas. Neural Networks 19, 762–771 (2006)
5. Denecke, A., Wersing, H., Steil, J.J., Koerner, E.: Online Figure-Ground Segmentation with Adaptive Metrics in Generalized LVQ. Neurocomputing 72(7-9), 1470–1482 (2009)
6. Frey, B.J., Dueck, D.: Clustering by passing messages between data points. Science 315, 972–976 (2007)
7. Gisbrecht, A., Hammer, B., Schleif, F.-M., Zhu, X.: Accelerating dissimilarity clustering for biomedical data analysis. In: Proceedings of SSCI (2011)
8. Gisbrecht, A., Mokbel, B., Hammer, B.: Relational generative topographic mapping. Neurocomputing 74(9), 1359–1371 (2011)
9. Hammer, B., Hasenfuss, A.: Topographic Mapping of Large Dissimilarity Data Sets. Neural Computation 22(9), 2229–2284 (2010)
10. Hammer, B., Villmann, T.: Generalized relevance learning vector quantization. Neural Networks 15(8-9), 1059–1068 (2002)
11. Hasenfuss, A., Hammer, B.: Relational Topographic Maps. In: Berthold, M.R., Shawe-Taylor, J., Lavrač, N. (eds.) IDA 2007. LNCS, vol. 4723, pp. 93–105. Springer, Heidelberg (2007)
12. Hasenfuss, A., Boerger, W., Hammer, B.: Topographic processing of very large text datasets. In: Dagli, C.H., et al. (eds.) Smart Systems Engineering: Computational Intelligence in Architecting Systems (ANNIE 2008), pp. 525–532. ASME Press (2008)
13. Kietzmann, T., Lange, S., Riedmiller, M.: Incremental GRLVQ: Learning Relevant Features for 3D Object Recognition. Neurocomputing 71(13-15), 2868–2879 (2008)
14. Kohonen, T. (ed.): Self-Organizing Maps, 3rd edn. Springer-Verlag New York, Inc., New York (2001)
15. Kohonen, T., Somervuo, P.: How to make large self-organizing maps for nonvectorial data. Neural Networks 15(8-9), 945–952 (2002)
16. Lundsteen, C., Phillip, J., Granum, E.: Quantitative analysis of 6985 digitized trypsin g-banded human metaphase chromosomes. Clinical Genetics 18, 355–370 (1980)
17. Maier, T., Klebel, S., Renner, U., Kostrzewa, M.: Fast and reliable MALDI-TOF MS-based microorganism identification. Nature Methods (3) (2006)
18. Martinetz, T.M., Berkovich, S.G., Schulten, K.J.: 'Neural-gas' network for vector quantization and its application to time-series prediction. IEEE Trans. on Neural Networks 4(4), 558–569 (1993)
19. Martinetz, T., Schulten, K.: Topology representing networks. Neural Networks 7(3) (1994)
20. Mokbel, B., Hasenfuss, A., Hammer, B.: Graph-Based Representation of Symbolic Musical Data. In: Torsello, A., Escolano, F., Brun, L. (eds.) GbRPR 2009. LNCS, vol. 5534, pp. 42–51. Springer, Heidelberg (2009)
21. Neuhaus, M., Bunke, H.: Edit distance based kernel functions for structural pattern classification. Pattern Recognition 39(10), 1852–1863 (2006)
22. Pekalska, E., Duin, R.P.W.: The Dissimilarity Representation for Pattern Recognition. Foundations and Applications. World Scientific, Singapore (2005)
23. Qin, A.K., Suganthan, P.N.: A novel kernel prototype-based learning algorithm. In: Proc. of ICPR 2004, pp. 621–624 (2004)
24. Sato, A., Yamada, K.: Generalized learning vector quantization. In: Mozer, M.C., Touretzky, D.S., Hasselmo, M.E. (eds.) Proceedings of the 1995 Conference, Advances in Neural Information Processing Systems, vol. 8, pp. 423–429. MIT Press, Cambridge (1996)
25. Schneider, P., Biehl, M., Hammer, B.: Adaptive relevance matrices in learning vector quantization. Neural Computation 21(12), 3532–3561 (2009)
26. Schneider, P., Biehl, M., Hammer, B.: Distance Learning in Discriminative Vector Quantization. Neural Computation 21(10), 2942–2969 (2009)
27. Seo, S., Obermayer, K.: Soft learning vector quantization. Neural Computation 15(7), 1589–1604 (2003)
28. Tipping, M.E.: Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, 211–244 (2001)
29. Williams, C., Seeger, M.: Using the Nyström method to speed up kernel machines. In: Advances in Neural Information Processing Systems, vol. 13, pp. 682–688. MIT Press, Cambridge (2001)
Automatic Layout Design Solution
Fadratul Hafinaz Hassan1,2 and Allan Tucker1
1 School of Information Systems, Computing and Mathematics, Brunel University, UK
2 School of Computer Science, University Science of Malaysia, Penang, Malaysia
{fadratul.hassan,allan.tucker}@brunel.ac.uk
Abstract. A well-designed space must not only consider ease of movement, but also ensure safety in a panic situation, such as an emergency evacuation in a stadium or hospital. The movement statistics connected with pedestrian flow can be very helpful in feasible layout design. In this study, we examined four-way pedestrian movement statistics generated from heuristic search techniques to find feasible classroom layout solutions. The aim of the experiment is to compare how fast and effectively the algorithms can generate automatic solutions to the layout. Experiments from our preliminary study have shown promising results for simulated annealing and genetic algorithm operator approaches in pedestrian simulation modelling. Our experimental results are compared with current classroom layouts. We find feasible layouts with a staggered shape and wider lanes, where objects shift aside to the walls creating bigger lanes in the middle of the layout, or where escape routes are created around the clustered objects. Keywords: Genetic Algorithm, Simulated Annealing, Cellular Automata, Pedestrian statistics, Classroom layout.
1 Introduction
Pedestrian simulations on egress plans provide in-depth analysis of pedestrian movement during evacuation processes but do not offer any consideration of the distribution of objects or obstacles inside the floor layout itself. A smooth flow of pedestrians is produced by a feasible allocation of obstacles in a layout. Thus, in architectural planning, it is important to carefully consider the position of obstacles in a layout as well as the location of exits. However, many pedestrian simulation studies on evacuation plans focus on pedestrian behaviour and do not assess the interaction with obstacles inside the floor plan. If we could analyse the relationship of different pedestrians with the obstacles, then it would allow us to configure a better distribution of obstacles in a layout. For example, pedestrian statistics generated from the simulations could be investigated further and show us where the collision points are in a full grid layout. That way, we can identify 'bad' regions and predict feasible layouts. If we could also study different types of pedestrian flow at a microscopic level of the simulation, then it would give us greater control over different sets of pedestrians moving in and out of the layout. An example is an evacuation process in a hospital, which requires the allocation of some critical resources, namely medical and non-medical
staff, to the various units of a hospital at the time of an emergency. This way, we have the choice to first evacuate certain sets of patients as fast as possible and save more lives.
In this paper, we implement simulated annealing (SA) and hill climbing (HC) with updates that involve simple genetic algorithm (GA) operators in order to find solutions to a classroom layout. Previously, we used these algorithms to simulate a 10-by-10 grid with two types of pedestrian moving inside the space: one moving to the left and one moving to the right. In those experiments, left and right pedestrians kept moving in the same direction until the end of the simulation, and there was no distinctive exit in the layout [2], [3]. In this work, we improved the layout by adding static objects such as walls, distributed on every side of the room with two openings on each side, and randomly distributed non-static objects. The size of the room was also increased from a 10-by-10 to a 20-by-20 grid. We added two more types of pedestrian moving inside the space: one moving up-to-down and another moving down-to-up, based upon the cellular automata model of Yue et al. [12]. The existing fitness function was updated to reflect movement from all four types of pedestrian. In order to create a realistic simulation, we enhanced the pedestrian behaviour by allowing each type of pedestrian to randomly change direction during the simulation. The aim of the experiments was to compare and understand how fast and effectively the algorithms can generate automatic solutions to the spatial layout problem by using statistics generated from cellular automata pedestrian simulations.
This paper is organised as follows. Section 2.1 introduces Hill Climbing (HC), Simulated Annealing (SA) and the extended SA using a Genetic Algorithm-style operator (SA-GAO). Sections 2.2 and 2.3 describe the move operator and the fitness function. Section 3 presents experimental results on the classroom layout. Finally, we present our conclusions in Section 4.
2 Methodology
The experiments involved applying SA, HC and SA-GAO to solve the spatial layout problem. It was not feasible to use a full GA implementation due to the very complex fitness function involving several pedestrian simulations.
2.1 Hill Climbing, Simulated Annealing and SA Genetic Algorithm Operators
HC is a comparatively simple local search algorithm that works to improve a single candidate solution, starting from a randomly selected point. From that position, the neighbouring search space is evaluated. If a fitter candidate solution is found, the search moves to that point. If no better solution is found in the vicinity, the algorithm terminates. The main disadvantage of HC is that it often gets stuck in local maxima during the search [8]. SA is an extension of HC that reduces the chance of converging to a local optimum by allowing moves to inferior solutions under the control of a temperature function. A better successor solution is always accepted; otherwise a worse state is accepted with a certain probability that decreases with the temperature. This is less extreme than randomised restarts of HC each time, but still retains the ability to escape from the
possible trap of a local maximum/minimum or plateau [8]. The pseudocode for our implementation of this approach is listed below.

Input: number of iterations, iteration; a random starting layout, startrep; starting temperature, temperature
oldrep = startrep
Apply 10 pedestrian simulations to generate statistics, stats
fit = fitness(stats)
bestfit = fit
for loop = 1:iteration
    rep = oldrep
    Apply move operator to rep
    Apply 10 pedestrian simulations to generate stats
    newfit = fitness(stats)
    dscore = newfit - fit
    if ((bestfit < newfit) OR (rand(0,1) < exp(dscore/temperature)))
        bestfit = newfit
        oldrep = rep
    else
        rep = oldrep
    end if
    temperature = temperature * 0.9
end for

In order to set this as HC, the starting temperature is set to zero. Note that the fitness here uses the statistics generated from 10 repeated runs of the CA pedestrian simulation. This is done to ensure that one simulation does not result in a 'lucky' fitness score for one layout based upon the starting positions of the pedestrians. We then extended our work by using GA-style operators [3]. A full GA implementation was not feasible due to the very complex fitness function involving several pedestrian simulations. The initial 'parents' were selected from the best solutions generated from a number of SA experiments (selections were made based on more consistent fitness values, with a fitness higher than the original fitness of 43.434). We experimented with different styles of combination for the two 'parents' that more or less acted like a uniform crossover.
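For readers who prefer executable code, the following minimal Python sketch closely mirrors the pseudocode above; it is an illustration only, and the functions run_simulations (the 10 CA pedestrian simulations), fitness and move_operator are assumed to be provided elsewhere.

import math
import random

def simulated_annealing(startrep, iterations, temperature,
                        run_simulations, fitness, move_operator):
    # Setting the starting temperature to zero turns this into hill climbing.
    oldrep = startrep
    fit = fitness(run_simulations(oldrep))   # statistics from 10 CA simulations
    bestfit = fit
    for _ in range(iterations):
        rep = move_operator(oldrep)          # move one object (Section 2.2)
        newfit = fitness(run_simulations(rep))
        dscore = newfit - fit
        # Accept an improvement, or a worse layout with probability
        # exp(dscore / temperature); min(...) guards against overflow.
        accept = bestfit < newfit or (temperature > 0 and
                 random.random() < math.exp(min(0.0, dscore) / temperature))
        if accept:
            # As in the pseudocode, bestfit tracks the fitness of the
            # currently accepted layout.
            bestfit = newfit
            oldrep = rep
        temperature *= 0.9
    return oldrep, bestfit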
2.2 Move Operator
The move operator is an enhancement of the one from our previous study [2] and takes into account the size of the simulation grid, randomly moving one object a fraction of this distance (determined by the parameter changedegree). The result of the move is then
checked to see whether the new coordinates are within the bounds of the grid and do not result in the object overlapping with others. In this update, we also checked that the result does not overlap with the static objects (walls). The operator is defined below, where unidrnd(min, max) is a uniform discrete random number generator with limits min and max. The pseudocode for the move operator is listed below.

Input: size of simulation grid, W; size of objects, sizobj; current x-coordinate, oldx; current y-coordinate, oldy
Set the degree of change to make based upon a fraction of the grid size:
changedegree = W/2
Choose a random object in the grid, i
[oldx, oldy] = current x and y coordinates of object i
xchange = unidrnd(-changedegree/2, changedegree/2)
ychange = unidrnd(-changedegree/2, changedegree/2)
if ((oldy + ychange) and (oldx + xchange) are within the grid boundary AND the new object position does not overlap another object, taking into account sizobj)
    newx = oldx + xchange
    newy = oldy + ychange
end if
Move object i to position [newx, newy]
Output: newx, newy

2.3 Fitness Function
The fitness function is calculated based on the statistics that are generated using the pedestrian simulation. The statistics take the form of a 3x3 matrix, leftstats, representing the sum of decisions for left-moving pedestrians, and similar decision matrices for right-moving pedestrians (rightstats), up-moving pedestrians (upstats) and down-moving pedestrians (downstats). The middle cell in each grid represents how many times the pedestrians decided to stay in the same cell as last time. As we wish to encourage free flow, we wish to increase the fitness for layouts that result in many cases of left-moving pedestrians moving left, right-moving pedestrians moving right, up-moving pedestrians moving up and down-moving pedestrians moving down, whilst penalising the fitness of any decisions where the left-moving pedestrians move right, up-moving pedestrians move down, and vice versa. For example, consider the four stats matrices for left, right, up and down pedestrians shown in Figure 1. It is clear that leftstats reflects a 'good' result, as the pedestrians have generally moved in the desired direction more often, whereas for rightstats, upstats and downstats this is not the case.
upstats      downstats
1 2 1        3 4 0
3 2 1        1 3 2
4 3 3        2 1 2

leftstats    rightstats
3 1 1        3 3 2
7 3 0        2 2 2
3 2 2        2 3 1

Fig. 1. Up, down, left and right statistics, each described as a 3x3 matrix
In general, we wish to maximise the first column in leftstats, the third column in rightstats, the first row in upstats and the third row in downstats whilst minimising the other statistics (shaded in the example in Figure 1). Therefore, we use the following fitness function:

rightfitness = sum(rightstats(3,1:3)) - sum(rightstats(1,1:3))   (1)
leftfitness = sum(leftstats(1,1:3)) - sum(leftstats(3,1:3))   (2)
upfitness = sum(upstats(1:3,1)) - sum(upstats(1:3,3))   (3)
downfitness = sum(downstats(1:3,3)) - sum(downstats(1:3,1))   (4)
fitness = rightfitness + leftfitness + upfitness + downfitness   (5)
Higher fitnesses should reflect simulations whereby pedestrians have moved in the direction that they wish more often.
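As an illustration, the following minimal Python sketch (not part of the original paper) computes the fitness of Eqs. (1)-(5) from the four 3x3 decision matrices, assuming each matrix is given as a nested list indexed [row][column], with the 1-based indexing of the equations shifted to 0-based indices.

def direction_fitness(upstats, downstats, leftstats, rightstats):
    # Eq. (1): third row minus first row of rightstats
    rightfitness = sum(rightstats[2]) - sum(rightstats[0])
    # Eq. (2): first row minus third row of leftstats
    leftfitness = sum(leftstats[0]) - sum(leftstats[2])
    # Eq. (3): first column minus third column of upstats
    upfitness = sum(row[0] for row in upstats) - sum(row[2] for row in upstats)
    # Eq. (4): third column minus first column of downstats
    downfitness = sum(row[2] for row in downstats) - sum(row[0] for row in downstats)
    # Eq. (5): overall fitness
    return rightfitness + leftfitness + upfitness + downfitness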
3 Results
The experiments involved running HC, SA and SA-GAO on the problem of trying to arrange 15 pre-defined objects in a 20x20 grid with 'left' pedestrians, 'right' pedestrians, 'up' pedestrians and 'down' pedestrians. Each type of pedestrian randomly changes his/her direction during the simulation. Each algorithm was run 10 times and the learning curves were inspected. The final fitnesses and the quality of the layouts were then investigated. Finally, some inspection of sample simulations on the final layouts was carried out in order to find interesting characteristics.
3.1 Summary Statistics
Table 1 shows the minimum, maximum and mean values and the standard deviation of the final fitness of each algorithm over 10 experiments, where SA0.5 represents SA with an initial temperature of 0.5, SA0.7 a temperature of 0.7 and SA0.9 a temperature of 0.9. The statistical values of SA0.5 show the robustness of its solutions, with a standard deviation of 3.687. This standard deviation is relatively low, which indicates that SA with an initial temperature of 0.5 is among the most consistent approaches in finding a good solution. However, SA0.9 has the maximum
value of the final fitness (its mean is also the highest). This indicates that at the highest temperature, SA can sometimes escape some of the local optima. It even achieves a higher fitness than the original layout (43.434). Note also that SA0.9 generates two fitnesses which are higher than the original value (45.478 and 44.256). None of the fitness values for HC, SA0.5 and SA0.7 is better than the original layout fitness.

Table 1. Summary statistics of final fitness

Method   Min.     Max.     Mean     Std. Dev.
HC       29.170   42.800   36.334   4.165
SA0.5    31.100   42.506   35.685   3.687
SA0.7    23.902   40.154   29.764   5.495
SA0.9    27.380   45.478   36.980   5.743
We then expanded our work to SA-GAO. Based on the results above, 'parents' from SA0.9 were selected. We ran 100 further simulations of SA0.9 to find more parents with a better fitness than the original layout fitness. Eight 'parents' with a fitness higher than the original fitness (43.434) were selected. We then experimented with 100 random combinations of mappings. Around 40 children had a better fitness than their 'parents'. The highest fitness among the children was 56.876, as shown in Table 2, generated from 'child2' with mapping '112121121222111'.

Table 2. Children's mappings with the highest fitness values from the parents

Child1            Fitness    Child2            Fitness
222211211112111   47.466     211122211122111   53.360
211212111121212   49.330     222211211112111   46.970
122122122122222   52.972     212212112111121   48.104
121112121112212   45.756     112212212121112   51.244
122121122112221   46.770     211222222221121   48.536
211112212211122   46.266     122111122111122   46.710
122221222122221   49.276     111221112221212   48.540
122111221121121   52.234     112112121221211   48.838
112112121221211   48.590     222222112211211   50.092
221112222212121   45.570     112121121222111   56.876
112121121222111   49.316     221122211222112   46.658
221122211222112   53.750     112221221121211   46.826
221121121221121   47.160     212211121112122   45.652
211211111222211   48.334     122111221222212   51.728
212211122122221   45.934     212212221211121   50.676
212211121112122   47.692     122212122121211   46.654
221111112211221   45.810     122112221221221   47.006
122121111222221   46.460     122222111212121   45.744
122222111212121   46.800
222121211121221   49.642

Note that the same mapping '112121121222111' also generates a better fitness for 'child1', with a fitness value of 49.316. The second highest fitness is from 'child1' with
mapping '221122211222112', and its fitness value was 53.750. Note that the same mapping generates a good fitness for 'child2'. Another four mappings breed both a 'child1' and a 'child2' with a fitness better than their 'parents'. This result shows that it may be necessary to run quite a large number of recombinations of parents to ensure improved fitness. It may also imply that further mutations are required to fine-tune these new solutions. The fitnesses of the 'children' scored better compared with the highest fitness value of the 'parents'. This implies that recombining the best of the SA solutions can indeed improve the overall layout without having to implement a full GA, which would not be feasible for such an expensive fitness function.
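As an illustration of the recombination step, the following Python sketch (not from the paper) interprets a mapping string such as '112121121222111' as selecting, for each of the 15 objects, whether its position is taken from the first or the second parent, in the spirit of a uniform crossover; the representation of a layout as a list of 15 object positions is an assumption made here for concreteness.

def combine_parents(parent1, parent2, mapping):
    # parent1 and parent2 are assumed to be lists of 15 (x, y) object positions;
    # digit '1' copies the position from parent1, digit '2' from parent2.
    return [p1 if digit == "1" else p2
            for digit, p1, p2 in zip(mapping, parent1, parent2)]

# Example (hypothetical layouts): child = combine_parents(parent_a, parent_b, "112121121222111")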
3.2 Exploring the Final Layouts with Their Final Pedestrian Statistics
We now explore some of the characteristics of the final layouts discovered by the algorithms. The red squares are the permanent walls, and the blue squares in Figures 2-4 represent the final positions of the random objects (500 iterations using the SA, HC and SA-GAO algorithms). The black and red arrows represent the left, right, up and down pedestrians moving in each direction.
Fig. 2. Original classroom layout with its final up, down, left and right statistics (upstats, downstats, leftstats and rightstats)
The layout in Figure 2 is based on the classical distribution of seats in classrooms [6]. This configuration produced clogging phenomena near the exit [7]. This can be seen in the layout above, where all pedestrians moved towards and clogged near the exit on the upper right of the room. The exits on the top and bottom walls are not separated far enough, and the average exit throughput becomes significantly lower due to a collective slowdown that emerges among pedestrians crossing each other's paths. This disruptive interference effect led the 'up' and 'down' pedestrians to share the same lane [11].
The exits on the left and right walls are separated by long walls, thus allowing the 'left' and 'right' pedestrians to choose different lanes. Notice that the 'right' pedestrians only move freely on the upper side of the layout; thus their final statistic, rightstats (7353), is the highest of all.
Fig. 3. HC and SA final layouts with the lowest final fitness and their final pedestrian statistics (upstats, downstats, leftstats and rightstats): (a) HC (fitness = 29.170), (b) SA0.5 (fitness = 31.100), (c) SA0.7 (fitness = 23.902), (d) SA0.9 (fitness = 27.380)
Fig. 4. SA-GAO final layouts and their final pedestrian statistics (upstats, downstats, leftstats and rightstats): (a) the highest fitness of 'child1', with a fitness of 53.750 and mapping '221122211222112', (b) 'child2' with the same mapping as (a) and a fitness of 46.658, (c) 'child1' with the same mapping as (d) and a fitness of 49.316, (d) 'child2' with a fitness of 56.876 and mapping '112121121222111', representing the highest fitness of all children
From Figure 3 (a)-(d), we can see clearly that the bad layouts are generated from the series of lowest fitnesses. Figure 3(a) is the 'bad' layout with the lowest fitness from the 10 runs of HC. Notice that the objects are scattered around the layout. The objects also create many enclosed, cul-de-sac areas which cause the pedestrians to be trapped inside. One of the eight exits was totally blocked by the objects, and no clear escape route is created. The left and right pedestrians bump into each other because of the narrow path which they have to share. Notice the jammed group of 'left' and 'right' pedestrians in the lower part of the layout, which is a high-density location: 'left' and 'right' have the lowest final statistics, and the largest middle value for leftstats is 216. In the second layout, taken from SA0.5 and shown in Figure 3(b), there is a jam pattern especially among the 'left' and 'right' pedestrians. Notice the bigger middle values for both leftstats and rightstats (221 and 106). The third layout, Figure 3(c), shows more than one cul-de-sac that encircled and trapped the pedestrians. This is no surprise, since it has the lowest fitness compared to the other three layouts. Notice that the 'right' pedestrians got stuck in both cul-de-sacs, with the result that their rightstats is the lowest compared to the others, at only 3773. Around three cul-de-sacs were created in the final layout, Figure 3(d), where the biggest cul-de-sac is in the middle of the layout. One exit on the bottom right of the layout is completely blocked. Clogging among pedestrians can be seen clearly in the high-density regions.
The higher the fitness, the better the layout produced, as in Figure 4 (a)-(d). All four layouts were generated from SA-GAO simulations. Most of the objects are shifted aside to the walls, creating bigger major lanes in the upper side of every layout. The first layout, Figure 4(a), shows the objects moving to form a group on one side, giving free spaces near the exits. Therefore, all the pedestrians can move easily without being blocked; this is shown by the 'right' pedestrians, whose rightstats middle value reflects it by having the smallest value of 2. Some of the layouts created escape routes that surround the clustered objects, as in Figure 4(b). It appears that the 'up' pedestrians leave a virtual trace (chemotaxis) for the pedestrians behind to follow. Figure 4(c) has a bigger lane in the upper part of the layout, allowing pedestrians to move left and right freely. Notice that the 'left' and 'right' pedestrians used the straight and bigger path in the upper side of the room, which brings out the highest leftstats value, at 7170, and a small middle value for rightstats, at 4 (meaning fewer pedestrians getting stuck). The fourth layout, Figure 4(d), with the highest fitness of all, creates more paths (some zigzag shaped) when compared to the other layouts. It seems that all the pedestrians move freely and have alternatives to choose any path they prefer. The final statistics for all pedestrians are spread fairly evenly.
4 Conclusions
In this paper we proposed pedestrian flow simulations combined with heuristic searches to assist in the automatic design of classroom layouts. Using pedestrian simulations, the activity of crowds can be used to study the consequences of different spatial layouts. Based on the results observed in this paper, we have demonstrated that simple heuristic searches appear to deal with the NP-hard spatial layout design problem to some degree, at least on the very much simplified problem addressed here. The solution is further improved when we pair 'parents' and apply
a GA-style operator using our method SA-GAO. Whilst it is not guaranteed that the optimal solution will be found, this does not mean that useful and unexpected designs cannot be discovered using these types of approaches. Exploring the final pedestrian statistics, we found a few characteristics of a feasible layout design. We found that a feasible layout with a higher fitness value has better characteristics such as a zigzag-shaped path, wider lanes and more alternative paths. A zigzag-shaped path can reduce the pressure in panicking crowds and ease pedestrian movement. A wider lane and several different escape paths help avoid long waiting lanes and the clogging effect. This shows in the final statistics, where a smooth pedestrian flow has a small middle value, meaning less waiting and fewer stuck pedestrians. We also found that where the exits are not separated widely enough, a disruptive interference effect can occur. This scenario results in pedestrians sharing the same path, and they have lower statistics when compared to pedestrians that do not share the same path. Indeed, the real positive outcome of the experiments here is that we found certain characteristics that may not have been immediately expected. Future work will be directed towards the gathering of real-world data sets and complex systems in order to validate the model equations. A fitness function based on time-to-escape is also a distinct possibility for future study.
References
1. Haklay, M., O'Sullivan, D., Thurstain-Goodwin, M., Schelhorn, T.: "So go downtown": simulating pedestrian movement in town centres. Environment and Planning B 28(3), 343–360 (2001)
2. Hassan, F.H., Tucker, A.: Using Cellular Automata Pedestrian Flow Statistics with Heuristic Search to Automatically Design Spatial Layout. In: 2010 22nd IEEE International Conference on Tools with Artificial Intelligence (ICTAI), p. 32. IEEE, Los Alamitos (2010)
3. Hassan, F.H., Tucker, A.: Using Uniform Crossover to Refine Simulated Annealing Solutions for Automatic Design of Spatial Layouts. In: International Joint Conference on Computational Intelligence (IJCCI), p. 373 (2010)
4. Helbing, D.: A fluid dynamic model for the movement of pedestrians. Arxiv preprint cond-mat/9805213 (1998)
5. Helbing, D., Buzna, L., Johansson, A., Werner, T.: Self-organized pedestrian crowd dynamics: Experiments, simulations, and design solutions. Transportation Science 39(1), 1 (2005)
6. Helbing, D., Isobe, M., Nagatani, T., Takimoto, K.: Lattice gas simulation of experimentally studied evacuation dynamics. Physical Review E 67(6), 67101 (2003)
7. Kirchner, A., Schadschneider, A.: Simulation of evacuation processes using a bionics-inspired cellular automaton model for pedestrian dynamics. Physica A: Statistical Mechanics and its Applications 312(1-2), 260–276 (2002)
8. Michalewicz, Z., Fogel, D.B.: How to solve it: modern heuristics. Springer, Berlin (2000)
9. Nishinari, K., Kirchner, A., Namazi, A., Schadschneider, A.: Extended floor field CA model for evacuation dynamics. Arxiv preprint cond-mat/0306262 (2003)
10. Pan, X., Han, C.S., Dauber, K., Law, K.H.: Human and social behavior in computational modeling and analysis of egress. Automation in Construction 15(4), 448–461 (2006)
11. Perez, G.J., Tapang, G., Lim, M., Saloma, C.: Streaming, disruptive interference and power-law behavior in the exit dynamics of confined pedestrians. Physica A: Statistical Mechanics and its Applications 312(3-4), 609–618 (2002)
12. Yue, H., Hao, H., Chen, X., Shao, C.: Simulation of pedestrian flow on square lattice based on cellular automata model. Physica A: Statistical Mechanics and its Applications 384(2), 567–588 (2007)
13. Yue, H., Guan, H., Zhang, J., Shao, C.: Study on bi-direction pedestrian flow using cellular automata simulation. Physica A: Statistical Mechanics and its Applications 389(3), 527–539 (2010)
An Alternative to ROC and AUC Analysis of Classifiers
Frank Klawonn1,2, Frank Höppner1, and Sigrun May3
1 Department of Computer Science, Ostfalia University of Applied Sciences, Salzdahlumer Str. 46/48, D-38302 Wolfenbuettel, Germany
2 Bioinformatics and Statistics, Helmholtz Centre for Infection Research, Inhoffenstr. 7, D-38124 Braunschweig, Germany
3 Biological Systems Analysis, Helmholtz Centre for Infection Research, Inhoffenstr. 7, D-38124 Braunschweig, Germany
Abstract. Performance evaluation of classifiers is a crucial step for selecting the best classifier or the best set of parameters for a classifier. The misclassification rate of a classifier is often too simple because it does not take into account that misclassification for different classes might have more or less serious consequences. On the other hand, it is often difficult to specify exactly the consequences or costs of misclassifications. ROC and AUC analysis try to overcome these problems, but have their own disadvantages and even inconsistencies. We propose a visualisation technique for classifier performance evaluation and comparison that avoids the problems of ROC and AUC analysis.
1 Introduction
Statistics, machine learning and data mining provide a variety of methods to solve classification problems: logistic regression, decision trees, Bayes classifiers, neural networks, nearest neighbour classifiers or support vector machines, to mention just a few of them. Most of these classifiers also have parameters that can be chosen or adapted. For a given classification problem, it is often difficult to judge which classifier is the best. Very often, simply the misclassification rate is used to judge the performance of a classifier. But this performance measure assumes that all misclassifications have equal consequences or "costs". This is, however, in many cases not true. Although it is often possible to specify which has more serious consequences – classifying an object from class 0 as class 1 or the reverse – it is usually difficult or even impossible to specify exactly how large the difference between these two types of misclassifications is. Therefore, receiver operating characteristic (ROC) and area under curve (AUC) analysis have become very popular for the evaluation and comparison of classifiers, at least for binary classification problems. But these concepts have certain disadvantages, and AUC in particular turns out to be inconsistent [1]. In this paper, we introduce an intuitive visualisation technique for classifier performance evaluation and comparison that tries to overcome the problems of ROC and AUC. In Section 2 we briefly review the concepts of ROC and AUC analysis. Section 3 explains
the general problem of classifier evaluation and the disadvantages of ROC and AUC. In Sections 4 and 5, we explain our new visualisation technique and an efficient algorithm to compute it, before providing experimental results in Section 6 and an outline of future work.
2 ROC and AUC Analysis
Supervised classification in data mining, often simply called classification, requires a data set in the form of a data table with predictor or independent attributes A1, . . . , Ap and a dependent nominal target attribute C. The predictor attributes can also be nominal, but do not have to be. Let dom(Ai) denote the domain, i.e. the set of possible values, of attribute Ai. The finitely many values in dom(C) are called classes. A classifier is a function f : dom(A1) × . . . × dom(Ap) → dom(C) that assigns a class f(a1, . . . , ap) to each object (a1, . . . , ap) ∈ dom(A1) × . . . × dom(Ap). Supervised classification tries to learn this classification function from a given data set. In this paper, we focus on binary classification problems with only two classes, i.e. |dom(C)| = 2. Without loss of generality, let us assume dom(C) = {0, 1}. In the ideal case, the classifier will always assign the correct class to an object (a1, . . . , ap). But in practical applications, it will seldom happen that the classifier is always correct. The evaluation of the performance of a classifier is based on a data table of the same structure as the one from which the classifier was derived. However, it is not advisable to use the same data for deriving and evaluating the classifier. This easily leads to overestimating the performance of a classifier and favours those classifiers that tend to overfit. The so-called resubstitution error is not a very good indicator of the performance of the classifier when it has to face new data that were not used for deriving the classifier. Therefore, it is common to split the data into disjoint training and test data. Training data are used for deriving the classifier, whereas the test data are used to evaluate the performance of the classifier. Instead of splitting a given data set only once into a training and a test data set, it is very common to apply k-fold cross-validation (see for instance [2]), where k = 10 is a standard choice. For k-fold cross-validation, the data set is partitioned into k subsets of (approximately) the same size. Then the first of the k subsets is removed from the data set and a classifier is derived from the remaining data. The performance of the classifier is evaluated based on the subset that had been removed before constructing the classifier. This procedure is repeated for each of the k subsets, so that one obtains k evaluations of the classifier. The overall evaluation of the classifier is then taken as the average of these k evaluations. How should a classifier be evaluated? A very simple approach is based on the misclassification rate, i.e. the proportion of objects in the (test) data set that are wrongly classified. But such an evaluation assumes that a misclassification from class 0 to class 1 has more or less the same effect as a misclassification from class 1 to class 0. This assumption is often not realistic. For instance, if a classifier is supposed to serve as a diagnostic tool for screening purposes in medicine for a specific disease, judging based on some measurement from a patient whether the patient is healthy (class 0) or has
the considered disease (class 1), the consequences of misclassifications might differ strongly. A false positive, i.e. a misclassification of a healthy patient (true class 0) as having the disease (assigned to class 1), might lead to further examinations of the patient, causing additional costs. A false negative – a patient suffering from the disease (true class 1) considered healthy (assigned to class 0) – might have more serious consequences and might, in the worst case, lead to the death of the patient when he is not treated for the corresponding disease. The classification decision of a classifier is often based on a threshold value. Especially in the case of the binary classification problems with two classes that we consider here, a classifier usually first provides a so-called score – a real value – and then the classification decision is based on the score and a threshold. All objects with a score below the threshold are assigned to class 0 and all objects with a score above or equal to the threshold are assigned to class 1. The score is either derived from the classifier or sometimes – as in medical diagnostics or for biomarker purposes [3,4] – taken directly from a single numerical attribute. Most classifiers provide a score value. Just to provide some examples:
– A (naïve) Bayesian classifier provides likelihoods for the two classes, from which we can derive the probability for class 0 and use this as a score.
– The score of a k-nearest neighbour classifier is the fraction of the k nearest neighbours of the object to be classified that belong to class 0.
– A neural network like a multilayer perceptron provides a value between 0 and 1 by default.
– The score rendered by a decision tree is the proportion of elements of class 0 in a leaf node.
For an overview of classifiers in general we refer to [5,6]. The choice of the optimal threshold for making the distinction between the two classes depends on the consequences of misclassifications. If a misclassification from class 0 to class 1 has the same consequences as a misclassification from class 1 to class 0, then the threshold should be chosen in such a way that the number of misclassifications is minimised. If, however, a misclassification in one direction has more serious consequences than a misclassification in the other direction, it might not be advisable to simply minimise the overall misclassification rate for both classes together. In order to reduce the number of misclassifications with serious consequences, one would accept more misclassifications to the class where less serious consequences are expected. One way to express such differences between misclassifications in different directions are cost matrices, which will be explained in Section 3. It is often difficult to specify the exact costs that are needed for such cost matrices. Therefore, receiver operating characteristic (ROC) and area under curve (AUC) analysis have been introduced. Instead of fixing a threshold value for the score, for each possible threshold value the rate of false positives is drawn against the rate of true positives (i.e. objects from class 1 correctly assigned to class 1) in a graph, as illustrated in Fig. 1. The diagonal line corresponds to a classifier that is not better than pure guessing. The upper curve shows a classifier with a good performance: the rate of true positives gets close to 100%, while the rate of false positives still remains close to zero. The curve in the middle corresponds to a classifier that is just slightly better than pure guessing.
Fig. 1. Examples of ROC curves (true positive rate plotted against false positive rate)
One can use the ROC curves to compare classifiers. If the ROC curve of classifier A is always above the ROC curve of classifier B – like the upper curve compared to the curve in the middle in Fig. 1 – then classifier A performs better than classifier B. However, for real data sets, it often happens that the ROC curves intersect, so that the simple criterion that one ROC curve must always be above the others is not helpful for selecting the "best" classifier. In the case study in [7], only for one out of ten data sets was there a ROC curve dominating all others. The common way out of this dilemma is to consider the area under (the ROC) curve (AUC). This value can range between 0.5 – for the pure random guessing classifier – and 1 for the perfect classifier. In principle, values less than 0.5 are also possible for a classifier that "systematically" makes the wrong guesses. In this case, one can simply exchange the predictions for classes 0 and 1 to obtain a ROC curve with an AUC value larger than 0.5. A variant of the AUC is the Gini coefficient G, defined as G = 2 · AUC − 1, in this way normalising the range of values between zero and one. Although at first sight the AUC or the Gini coefficient seem to be suitable methods to compare the performance of classifiers, it turns out that these measures are actually inconsistent, as will be discussed in Section 3. There are also extensions of the concept of ROC curves and AUC to classification problems with more than two classes [8,9,10], but they suffer from the same problems as ROC curves and AUC for two-class problems. Various alternatives to ROC and AUC analysis have been proposed. Adams and Hand [11] assume a fixed probability distribution for the costs and generate cost and comparison curves. Drummond and Holte [12,13] focus on a graphical representation of the difference of misclassification costs of two classifiers. Hernández-Orallo et al. [14] propose Brier curves that present the misclassification costs of a classifier against its operating condition using a different normalisation than the one we use in this paper. In contrast to other approaches, we also include information about how reliable the resulting cost curves are.
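For reference, the following Python sketch (not part of the paper) computes the ROC points and the AUC from a list of scores and true class labels, assuming higher scores indicate class 1, that both classes are present and, for simplicity, that the scores are distinct; the AUC is obtained by the trapezoidal rule.

def roc_and_auc(scores, labels):
    pos = sum(1 for y in labels if y == 1)   # number of class-1 objects
    neg = len(labels) - pos                  # number of class-0 objects
    # Sweep the threshold from high to low scores.
    tp = fp = 0
    points = [(0.0, 0.0)]                    # (false positive rate, true positive rate)
    for _, y in sorted(zip(scores, labels), reverse=True):
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    # Area under the piecewise-linear ROC curve (trapezoidal rule).
    auc = sum((x2 - x1) * (y1 + y2) / 2.0
              for (x1, y1), (x2, y2) in zip(points, points[1:]))
    return points, auc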
3 Problem Formalization
As already mentioned in the previous section, cost matrices are one way to express whether a misclassification of an object of class 0 to class 1 – a false positive – has more serious consequences than a misclassification of an object of class 1 to class 0 – a false negative.
Table 1. A cost matrix for a classification problem with two classes

                     predicted class
                     0        1
true class   0       0        c0,1
             1       c1,0     0

Table 2. A normalised cost matrix for a classification problem with two classes

                     predicted class
                     0        1
true class   0       0        1
             1       c        0
The structure of such a cost matrix for a classification problem with two classes is shown in Tab. 1. (Here we assume that the additional costs for correct classifications – also called test costs – are 0. Turney [15] provides a detailed discussion of settings where such test costs play an important role.) The misclassification costs for a false positive are c0,1, whereas the misclassification costs for a false negative are c1,0. For a given threshold t and a classification rule of the form "if score < t, then class 0, otherwise class 1", the average misclassification costs are

(1/n) · [ c1,0 · (# objects from class 1 with score < t) + c0,1 · (# objects from class 0 with score ≥ t) ].   (1)

The optimal threshold can now be determined by minimising Eq. (1). It should be noted that the optimal choice of the threshold does not depend on the absolute values of c0,1 and c1,0 (see also [1]), but only on their proportion c = c1,0/c0,1, since the choice of the optimal threshold does not change when we multiply both values c0,1 and c1,0 by the same constant. Therefore, by multiplying the misclassification costs by the factor 1/c0,1, we can assume without loss of generality that the costs c0,1 are equal to 1, and we only need to specify the relative costs c, so that the cost matrix in Tab. 1 can be simplified to the normalised cost matrix in Tab. 2. The normalised cost matrix requires only the specification of one parameter, the relative costs c. Here we can already see one disadvantage of the ROC curve representation. Even when we know the cost matrix or the relative costs, it is impossible to derive the optimal threshold from the ROC curve, since the ROC curve contains no information about the relative frequency of the two classes. In real applications, it is often difficult to specify a suitable cost matrix or at least the relative misclassification costs c. To relax the need of specifying an exact value for the relative costs, one can instead provide a probability distribution over the possible relative costs to express the uncertainty about the exact value of the costs. The comparison of classifiers can then be carried out on the basis of the expected misclassification costs, using the weighting of the possible misclassification costs given by the corresponding probability distribution. In [1], it is shown that AUC can be interpreted in terms of such a measure of expected misclassification costs. However, the probability distribution over the misclassification costs depends on the classifier itself. Therefore, the comparison of classifiers based on AUC imposes different assumptions about the misclassification costs for each classifier. D. Hand has proposed a new measure called AUCH to overcome this inconsistency of AUC [1]. Although being consistent, AUCH still reduces the
evaluation of a classifier to a single value, in this way losing a lot of information. We propose to directly visualise the misclassification costs of a classifier as a function of the relative misclassification costs c. The details will be explained in the following section.
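To make the role of the relative costs c concrete, the following Python sketch (an illustration, not code from the paper) evaluates the average misclassification costs of Eq. (1) under the normalised cost matrix of Tab. 2 and picks the cost-minimising threshold; only thresholds at observed score values (plus one that assigns everything to class 0) need to be considered.

def average_costs(scores, labels, t, c):
    # Normalised cost matrix (Tab. 2): a false positive costs 1, a false negative costs c.
    n = len(scores)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)    # class 1 predicted as 0
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)   # class 0 predicted as 1
    return (c * fn + 1.0 * fp) / n

def best_threshold(scores, labels, c):
    candidates = set(scores) | {float("inf")}   # inf = always predict class 0
    return min(candidates, key=lambda t: average_costs(scores, labels, t, c))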
4 Visualisation of the Misclassification Costs Behaviour
We cannot expect that the relative misclassification costs c in Tab. 2 can be specified exactly. We also do not require a probability distribution over the possible values of c. Since the performance of a classifier, i.e. its misclassification costs, depends only on the parameter c, we visualise the misclassification costs as a function of c. In this way, we can visualise the misclassification costs as functions of c for various classifiers in the same diagram and can see for which values of c a classifier dominates the others. In contrast to ROC curves, one can directly see the behaviour of a classifier with respect to the (relative) misclassification costs. Therefore, if we estimate that a misclassification of an object from class 0 to class 1 is three to five times as bad as a misclassification of an object from class 1 to class 0, we can look at the corresponding region in the plot of the misclassification costs and see whether there is a dominating classifier in this region. We improve this simple visualisation by various modifications. First of all, instead of the relative misclassification costs, we use the logarithm of the relative misclassification costs c on the x-axis. In this way, ln(c) = 0 corresponds to the same misclassification costs for misclassifications from class 0 to 1 and vice versa. The positive range of ln(c) means that a misclassification of an object of class 1 is more serious than a misclassification of an object from class 0. The negative range of ln(c) is just the reverse. If we did not use the logarithm on the x-axis, we would have an artificial asymmetry between the two ranges. Misclassification costs that are higher for class 1 would lie in the infinite interval (1, ∞), whereas misclassification costs that are higher for class 0 would be found in the bounded interval (0, 1) on a non-logarithmic scale. Another problem of this visualisation is the extremely high cost values for large values of c and for values of c close to zero, i.e. when the misclassification costs for one class are extremely high compared to the other one. Therefore, we use a normalised value of the misclassification costs on the y-axis. We divide the misclassification costs of the corresponding classifier by the misclassification costs of a naïve classifier that, for given relative misclassification costs, always predicts the same class, ignoring the score and relying only on the frequencies of the two classes. Assume we have a data set with n objects, k of them coming from class 0 and (n − k) from class 1. Consider a fixed value of the relative misclassification costs c. If the naïve classifier always predicts class 1, its misclassification costs are simply k. If the naïve classifier always predicts class 0, its misclassification costs are simply (n − k) · c. The naïve classifier will therefore predict the class which yields the smaller value, so that the misclassification costs of the naïve classifier are

min{k, (n − k) · c}.   (2)

The misclassification costs of any classifier we consider will always be divided by this value in our visualisation. This means that the value 1 on the y-axis in our visualisation
is the performance of the naïve classifier, and a classifier exploiting the score – assuming that the score contains some information about the classes – should always have a value lower than 1. The performance of a classifier will be evaluated based on 10-fold cross-validation. There is a difference in the treatment of real classifiers and classifiers for biomarkers or diagnostic purposes where the score is already directly available. Let us first consider a classifier that must first be constructed from data and then yields score values. The data set is partitioned randomly into 10 disjoint subsets of approximately the same size. Then one of the subsets is removed and the classifier is constructed with the remaining 90% of the data. The scores are computed for the 90% of the data that were used for training and also for the subset that had been left out. The optimal threshold for given relative misclassification costs c is determined based on the scores of the 90% of the data that were used for training. The misclassification costs for the classifier are then calculated using the scores from the smaller subset that was excluded from the construction of the classifier. This procedure is repeated 10 times – once for each of the subsets – so that we obtain 10 estimations of the misclassification costs for each value of c for a classifier. For the visualisation we use the mean of these values. We also draw the range of the standard deviation of these 10 estimations as a semi-transparent area. In the scenario where classifiers in terms of biomarkers or medical diagnosis based on a single numerical attribute are considered, there is no need to construct a classifier; only a suitable threshold is needed. The values of the attribute are directly interpreted as the score. In this case, cross-validation is carried out directly on these score values. The set of score values is randomly partitioned into 10 disjoint subsets of approximately the same size. One of the subsets is removed and the optimal threshold for given relative misclassification costs c is determined based on the scores of the remaining 90% of the data. The misclassification costs for the classifier are then calculated using the scores from the subset that had been removed. This is repeated 10 times, also yielding 10 estimations of the misclassification costs.
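The normalisation described above can be sketched in Python as follows (again an illustration, not the authors' code); for simplicity it determines the optimal threshold and evaluates the costs on the same data instead of using the 10-fold cross-validation described in the text, and it assumes that both classes occur in the data.

import math

def normalised_cost_curve(scores, labels, ln_c_values):
    n = len(labels)
    k = sum(1 for y in labels if y == 0)        # number of class-0 objects
    curve = []
    for ln_c in ln_c_values:
        c = math.exp(ln_c)
        thresholds = set(scores) | {float("inf")}
        # minimal misclassification costs of the score-based classifier
        best = min(c * sum(1 for s, y in zip(scores, labels) if y == 1 and s < t)
                   + sum(1 for s, y in zip(scores, labels) if y == 0 and s >= t)
                   for t in thresholds)
        naive = min(k, (n - k) * c)             # costs of the naive classifier, Eq. (2)
        curve.append(best / naive)
    return curve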
Fig. 2. The visualisation of the performance of a classifier for a data set with random assignment of the classes to the scores, where each class has the same probability
Figure 2 shows our visualisation of the classifier performance for a data set with a random assignment of the classes to the scores, where each class has the same probability. The performance of the classifier fluctuates around the value one, which is what we would expect, since the score does not carry information about the classes and one cannot be better than the naïve classifier that ignores the score and relies only on the frequencies of the two classes and the relative misclassification costs. It can be seen that the standard deviation of the performance of the (non-naïve) classifier is large when the misclassification costs are extremely biased towards one class – i.e. c → 0 or c → ∞ – and when the misclassification costs are roughly the same for the two classes (c = 1, ln(c) = 0). The non-naïve classifier will always try to exploit the score. To explain this, assume that c is very large, meaning that misclassifications from true class 1 to class 0 must be avoided. The naïve classifier will simply always predict class 1. However, if just by chance the two objects with the lowest scores, say 0.1 and 0.2, in the training set belong to class 0, then the non-naïve classifier will consider the rule "if score ≤ 0.2 then class 0, else class 1" better than always predicting class 1 like the naïve classifier. But in the test set, this rule might lead to a very costly misclassification of class 1 to class 0. Of course, all this happens by chance and therefore the standard deviation becomes high. For equal misclassification costs, the classifier will try to find a split in the training set that gives – by chance – the most unbalanced distribution of the two classes and formulate a corresponding rule. Of course, the distribution of the classes in the test set will probably be more balanced in these ranges, which leads to more misclassifications and higher costs. We will discuss more examples in Section 6. So far, we have not explained how we actually calculate the optimal threshold to minimise the misclassification costs for a given c, and how we choose the sampling rate and the range for c to draw the graph. In the next section, we develop an algorithm that minimises the computational effort to find the optimal threshold and the sampling intervals for c.
5 An Efficient Algorithm for Computing the Visualisation
Consider the example in Tab. 3: we have ordered all cases in our dataset by the achieved score, such that a given threshold t divides all cases into two groups. At or above t we predict class 1, below t we predict class 0. (We do not consider the case that a high score indicates class 0, but this is easily accommodated by simply negating the score.) The remaining columns of Tab. 3 list the elements of the confusion matrix for the respective threshold t in the first column. Once a classifier has computed all score values and the cases have been sorted by the achieved score, the elements of the respective confusion matrices may be calculated in a single pass through the table. The smaller the threshold, the larger the number of 1-predictions and the smaller the number of 0-predictions. With decreasing threshold t, the number of false positives (FP) is monotonically increasing and the number of false negatives (FN) is monotonically decreasing. The threshold with the smallest (mis)classification costs is easily identified by finding the minimum of c · FN + 1 · FP (utilizing the cost matrix from Tab. 2). Given n cases, the sorting and scanning can be accomplished in O(n log n). However, the optimal threshold t depends on the cost matrix, that is, on the cost coefficient c. But we are interested in the minimal costs (or the optimal t) for any possible value of the relative costs c.
Table 3. Left: Table with cases, sorted by the achieved score value. Right: Candidate elements on the Pareto front for minimal total misclassification costs. For the thresholds t we have used the score of the respective row; taking the mean with the next row yields a more robust choice.

Left:
score  true class  TP  TN  FP  FN  Pareto  FP+FN
 15        1        1   5   0   4    *1      4
 14        0        1   4   1   4            5
 13        1        2   4   1   3            4
 12        1        3   4   1   2    *2      3
 10        0        3   3   2   2            4
  8        1        4   3   2   1    *3      3
  7        0        4   2   3   1            4
  6        0        4   1   4   1            5
  4        1        5   1   4   0    *4      4
  2        0        5   0   5   0            5

Right:
Pareto   t   FP  FN  optimal for c ≤ ...
  *1    15    0   4   0.5
  *2    12    1   2   1.0
  *3     8    2   1   2.0
  *4     4    4   0
A naive implementation may sample the range of possible c-values and determine the minimal (mis)classification costs for each of them, but we present a more efficient way to calculate the critical values of c where the optimal threshold t changes. Regardless of the specific value of c, we seek to optimize two competing criteria: the minimization of false positives (which are monotonically increasing) and the minimization of false negatives (which are monotonically decreasing). The so-called Pareto front of this two-criteria optimization problem consists of cases which are not strictly dominated by other cases. A case, represented by the pair (x, y) of false positives x and false negatives y, strictly dominates another case (x′, y′) if x′ ≥ x and y′ ≥ y, where at least one of these two inequalities must be strict (> instead of ≥). In this case, regardless of the actual choice of c, we have c · x + 1 · y < c · x′ + 1 · y′. The fact that the false positives are monotonically increasing and the false negatives are monotonically decreasing makes it very easy to identify the Pareto front in a sorted score table such as Tab. 3: an element of the Pareto front is either the last row in a block of rows with an identical count of false positives or the first row in a block of rows with an identical count of false negatives. Due to the monotonicity of the FP and FN columns, this corresponds to the rows where FP + FN becomes locally minimal. Regardless of the actual choice of c, the minimal (mis)classification costs are given by a threshold associated with one of these lines (marked by stars in the column labelled 'Pareto' in Tab. 3). For which value of c do these (local) minima actually represent the global minimum of the cost function? Let us reduce the dataset to the k elements of the Pareto front; by construction, the FP and FN columns are now strictly increasing and strictly decreasing, respectively, when going down the table, i.e. with decreasing threshold t (cf. Tab. 3, right). The sum of a strictly increasing and a strictly decreasing function (scaled by c) has a unique minimum, which moves towards smaller values of t as c increases. Let FP_i and FN_i be the FP and FN values of the row marked by *i. At c = 0 the total costs c · FN + 1 · FP are identical to the FP column, and thus the highest
[Figure: plot titled "Comparison naive bayes/ neural network"; x-axis: false positive rate, y-axis: true positive rate]
Fig. 3. The visualisation of the performance of a neural network and a naïve Bayes classifier
threshold at *1 yields the minimal total costs. As c increases, suppose we have currently reached a cost level of c_i for which the global minimum is achieved by threshold t_i in row *i. If we increase c further, for which value of c will the global minimum be observed for threshold t_j, j > i, rather than t_i? Switching to t_j changes the total costs by (FP_j − FP_i) − c · (FN_i − FN_j), so the total costs decrease if this term becomes negative. This may happen for any of the following rows *k, k > i, of the Pareto front at a cost value c_{i,k} = (FP_k − FP_i)/(FN_i − FN_k). The smallest cost value that affects the global minimum is thus given by c_j := c_{i,j} with j = argmin_{k>i} c_{i,k}. For costs c ∈ [c_i, c_j] the current threshold t_i is optimal, whereas for c > c_j the threshold t_j leads to the minimum of the cost function. This calculation is iterated to find a sequence of increasing cost values with associated thresholds that partitions the domain of c. For each interval the associated threshold t_i yields the minimal (mis)classification costs c · FN_i + FP_i. In our example we obtain the cost partition {[0, 0.5], ]0.5, 1], ]1, 2], ]2, ∞[} with associated thresholds 15, 12, 8, 4. If we change the true class of the third case from class 1 to 0, we obtain the partition {[0, 1.5], ]1.5, 2], ]2, ∞[} with thresholds 15, 8, 4.
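The two steps just described (identifying the Pareto front and computing the critical cost values) are compact enough to sketch in code. The following Python snippet is our own illustration, not the authors' implementation; the function and variable names are hypothetical. Applied to the data of Tab. 3 it reproduces the cost partition stated above.

```python
# Minimal sketch: Pareto front and critical cost values for the example of Tab. 3.
def cost_partition(scores, labels):
    """scores: descending score values; labels: true classes (1 = positive)."""
    pos = sum(labels)                      # total number of class-1 cases
    rows = []                              # (threshold, FP, FN) for every cut
    tp = fp = 0
    for s, y in zip(scores, labels):       # threshold t = s: predict 1 iff score >= t
        if y == 1:
            tp += 1
        else:
            fp += 1
        rows.append((s, fp, pos - tp))     # FN = positives below the threshold
    # Pareto front: rows not strictly dominated in (FP, FN); with monotone FP and FN
    # these are exactly the rows where FP + FN is locally minimal.
    pareto = [r for r in rows
              if not any(r2[1] <= r[1] and r2[2] <= r[2] and r2[1:] != r[1:]
                         for r2 in rows)]
    # Along the front, FP strictly increases and FN strictly decreases; switch from
    # row i to row j once c exceeds (FP_j - FP_i) / (FN_i - FN_j).
    partition, i = [], 0
    while i < len(pareto) - 1:
        cands = [((pareto[k][1] - pareto[i][1]) / (pareto[i][2] - pareto[k][2]), k)
                 for k in range(i + 1, len(pareto))]
        c_crit, j = min(cands)
        partition.append((pareto[i][0], c_crit))   # threshold optimal for c up to c_crit
        i = j
    partition.append((pareto[i][0], float("inf")))
    return partition

scores = [15, 14, 13, 12, 10, 8, 7, 6, 4, 2]
labels = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
print(cost_partition(scores, labels))
# [(15, 0.5), (12, 1.0), (8, 2.0), (4, inf)] -- matches the partition in the text
```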
6 Experimental Results

As an example, we consider the Wisconsin Breast Cancer Data Set from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/), providing a classification problem with two classes (malign, benign) that should be predicted based on
Fig. 4. The visualisation of the performance of a neural network and a naïve Bayes classifier
9 attributes. The two classes do not have the same frequency: only roughly one third of the instances belong to the class malign. We have applied two classifiers to solve this classification problem: a neural network in the form of a multilayer perceptron and a naïve Bayes classifier. Figure 3 shows the resulting ROC curves, which intersect in more than one position, so it is not clear for which costs which classifier performs better. Figure 4 shows the result of our visualisation method. Of course, the curves will also intersect. But it can be clearly seen that when it is considered more harmful to classify malign as benign, the naïve Bayes classifier performs better than the neural network. If the other type of misclassification is considered more harmful, then sometimes the neural network and sometimes the naïve Bayes classifier performs better. Taking a look at the standard deviation, it can also be seen that the two classifiers do not differ significantly in their performance.
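A rough way to reproduce an experiment of this kind in Python is sketched below. It is not the authors' setup: scikit-learn's built-in breast-cancer data serve as a stand-in (the 30-feature diagnostic variant, not the 9-attribute set used here), and the classifier settings are arbitrary placeholders. The snippet only produces the (score, true class) pairs that the threshold analysis above operates on.

```python
# Rough stand-in experiment: train a multilayer perceptron and a naive Bayes
# classifier and collect class-1 scores on held-out data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

nb = GaussianNB().fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

scores_nb = nb.predict_proba(X_te)[:, 1]    # class-1 scores of the naive Bayes model
scores_mlp = mlp.predict_proba(X_te)[:, 1]  # class-1 scores of the neural network
# The (score, true class) pairs feed the cost-based threshold analysis above.
```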
7 Conclusions

We have introduced a visualisation technique for the evaluation of classifiers that provides an alternative to ROC analysis and the inconsistent concept of AUC. We explicitly use a range of relative misclassification costs, which is very helpful if one has a rough idea of how serious false positives are in comparison to false negatives.
So far, our approach is – like ROC and AUC – limited to two-class problems. Future work will include extensions to multi-class problems that are much more difficult to visualise, since the number of parameters increases quadratically with the number of classes.
The Algorithm APT to Classify in Concurrence of Latency and Drift

Georg Krempl

University of Graz, Department of Statistics and Operations Research, Universitätsstraße 15/E3, 8010 Graz, Austria
[email protected]

Abstract. Population drift is a challenging problem in classification and denotes changes in probability distributions over time. Known drift-adaptive classification methods such as incremental learning rely on current, labelled data for classification model updates, assuming that such labelled data are available without verification latency. However, verification latency is a relevant problem in some application domains, where predictions have to be made far into the future. This concurrence of drift and latency requires new approaches in machine learning. We propose a two-stage learning strategy: first, the nature of drift in temporal data needs to be identified. This requires the formulation of explicit drift models for the underlying data-generating process. In a second step, these models are used to substitute scarce labelled data for updating classification models. This paper contributes an explicit drift model, which characterises a mixture of independently evolving sub-populations. In this model, the joint distribution is a mixture of arbitrarily distributed sub-populations drifting over time. An arbitrary sub-population tracker algorithm is presented, which can track and predict the distributions by the use of unlabelled data. Experimental evaluation shows that the presented APT algorithm is capable of tracking and predicting changes in the posterior distribution of class labels accurately.

Keywords: Population Drift, Verification Latency, Unsupervised Classifier Update, Cluster Tracking, Nonparametric Representation.
1 Introduction
This paper addresses the problem of change in distributions over time. In the literature, this phenomenon is known as population drift (see e.g. [10] and references therein) or concept drift (see for example [21] and [18]). Drifting distributions may require an update of classification models. Most existing approaches (see [21] and [18] and references therein) require actual training data for such updates. More precisely, labelled training data are required; thus the data must comprise the true class labels of the new observations. A straightforward approach here entails the sole use of new training data. However, this has one important limitation: the required amount of training data may exceed the data available.
In such a case, more sophisticated approaches such as incremental learning strategies are commonly used. They take advantage of the fact that changes often occur gradually or locally, and some historical data may thus retain its validity. However, as the change in distributions increases, an increasing amount of new, labelled training data are also required. This poses a problem if actual, labelled data are scarce or only available with a great lag. Such a lag between the moment at which classification is done and the moment at which verification of prior classification becomes possible was denoted by Marrs et al. [15] as verification latency. However, they discussed this issue in the more general context of an exemplar's life-cycle in learning and thus did not provide a solution to this problem. The concurrence of drift and latency challenges existing approaches in supervised machine learning: drift requires current, labelled data for model updates. However, even if new observations are available, they cannot be used until their labels are known, and by the time their labels are available, these observations have become outdated. Thus one has to use the information that is available at the moment of classification: this comprises old, possibly outdated labelled data, and new but as yet unlabelled data. The question is, how can new but unlabelled data be used to ensure the model is up-to-date? This task is somewhat similar to the objective of semi-supervised learning (see for example [20]), although there is one important difference: in semi-supervised learning, the standard assumption is that labelled and unlabelled data are drawn from the same sample, i.e. that the distributions do not change. Thus, the approaches discussed in the literature on semi-supervised learning cannot be applied as such when choosing between the use of numerous but outdated data, or new, but scarce data. Nevertheless, there is a solution to this dilemma: if one can assume systematic drift in distributions, these systematics can be exploited to track and predict the changes of distributions over time. This paper assumes a systematic drift model where the underlying population comprises several, differently but gradually evolving sub-populations over time. One could restrict the feature distribution of the sub-populations to parametric distributions such as Gaussians. However, in the present paper arbitrarily distributed sub-populations are studied, requiring a non-parametric characterisation of density distributions. The arbitrary sub-populations tracker algorithm presented later is capable of tracking such arbitrarily distributed sub-populations, with the particularity of providing an up-to-date representation of the underlying distributions at any one time, while solely requiring unlabelled data for model updates.

The rest of this paper is organised as follows: in the next section, the systematic drift model that is assumed in the rest of this paper is outlined. Furthermore, a corresponding tracking and classification algorithm named arbitrary sub-populations tracker (abbreviated as APT) is introduced and related literature is reviewed. The third section is dedicated to the experimental evaluation of the APT algorithm. Finally, the paper concludes with a discussion on capabilities, limitations and extensions of this approach and possible further research directions.
2 Tracking the Evolution of Arbitrarily Distributed Sub-populations
2.1 A Systematic Drift Model: Evolving Sub-populations
It was pointed out in the introduction that – in order to use unlabelled data for classifier updates – assumptions on the systematics behind changes in distributions are required. The legitimation for these assumptions lies in the origin of the data: it is plausible that the data generating process is subject to limitations in the way it can change over time, such that some characteristics of the underlying distributions are likely to be stable or to change only gradually over time. In this paper, we will discuss an evolving sub-populations drift model. This model assumes that the underlying population consists of several sub-populations, which might evolve differently over time. The corresponding data generating process is a mixture model, whose components are subject to change over time. We denote the feature distribution by P(X), the component prior distribution (or mixing proportions) by P(Z), and the distribution of class labels by P(Y). We do not assume the conditional feature distributions of the components P(X|Z) to be Gaussian distributions or any parametric distribution. In fact we rather allow arbitrary distributions of P(X|Z). Their density distributions are modelled in a non-parametric way using kernels, as explained below. Evolution can affect the conditional feature distributions. Thus a component's location in feature space as well as its shape might change over time. However, we assume static conditional posterior distributions P(Y|Z), i.e. a component's class label does not change over time. In addition, the posterior distribution depends only on the (latent) component membership, i.e. given the component memberships the posterior distribution is independent of the feature distribution: P(Y|Z) = P(Y|Z,X). Furthermore, we assume the prior distribution of components P(Z) to be static or changing only gradually over time.

Non-Parametric Representation of Density Distributions. The standard non-parametric approach to modelling the density distribution f(x) underlying a sample $X = \{x_1, x_2, \dots, x_M\}$ of M realisations entails using a kernel estimator [17]:
Fig. 1. Drifting sub-populations (three groups with positive/negative class labels; distribution of old, labelled observations and distribution of new, unlabelled observations)
\[
\hat{f}(x) = \frac{1}{M} \sum_{m=1}^{M} K_X(x - x_m) \qquad (1)
\]

Here, $K_X(x)$ is a kernel function satisfying $\int K_X(x)\,dx = 1$, $\int x K_X(x)\,dx = 0$, and $\int x^2 K_X(x)\,dx \neq 0$. Given a D-dimensional, real-valued feature space $X \subseteq \mathbb{R}^D$, a common choice for $K_X$ is the use of Gaussian kernels, i.e.

\[
K_X(x - x_m) = (2\pi)^{-\frac{D}{2}}\, |\Sigma^{-1}|^{\frac{1}{2}} \exp\!\left(-\frac{1}{2}(x - x_m)^T \Sigma^{-1} (x - x_m)\right) \qquad (2)
\]

where $\Sigma$ is the covariance matrix or bandwidth matrix. In the context of a drifting sub-populations model, the changes of the density distribution over time must also be considered. Let E be a sample of exemplars, i.e. historic observations that are exemplary for the underlying mixture distribution. These exemplars are observed at positions $X_E = \{x_1, x_2, \dots, x_M\}$, at times $T_E = \{t_1, t_2, \dots, t_M\}$, and with labels $Y_E = \{y_1, y_2, \dots, y_M\}$. Furthermore, let $Z_E = \{z_1, z_2, \dots, z_M\}$ be a clustering that partitions the sample into K disjoint subsets. In the simplest case, we can assume that $Z_E = Y_E$, i.e. each class corresponds to one cluster and vice versa. In this case, the initial clustering is obtained from the labels of the observations. Otherwise, the initial clustering must be obtained using an appropriate clustering algorithm. Such mixture decomposition has already been studied exhaustively, thus we refer to the literature (e.g. [11], [9]) for the choice of algorithms to estimate $Z_E$. Of special interest are approaches that are capable of detecting non-spherical clusters in data, such as geometric clustering approaches (see [14], [8], and [7]). In accordance with the drift model, we assume that each component is subject to an individual movement, which results in a gradual drift of the density distribution in feature space, corresponding to a fixed offset $\mu^{\Delta}_k$ per time step for component k. Given the observed exemplars $X_E$, $T_E$, and $Z_E$, we can now estimate the density distribution at time t and a point x in feature space as

\[
\hat{f}(x, t) = \frac{1}{M} \sum_{m=1}^{M} g_m(x, t) \qquad (3)
\]

where

\[
g_m(x, t) = (2\pi)^{-\frac{D}{2}}\, |\Sigma_{z_m}^{-1}|^{\frac{1}{2}} \exp\!\left(-\frac{1}{2}\, d_m^T \Sigma_{z_m}^{-1} d_m\right) \qquad (4)
\]

is the drift-adjusted Gaussian kernel function representing the m-th observation in D-dimensional feature space. In the first formula, it was assumed that the bandwidth was the same for all kernels. However, in this current formulation, we now allow a different bandwidth matrix $\Sigma_\kappa$ for each component, by letting $z_m$ denote the component the m-th exemplar belongs to. Here, $d_m = x - \tilde{x}_m(t)$ is the difference between the position x at which the density is evaluated at time t, and the estimated position of the m-th exemplar at that time. The latter can be estimated as

\[
\tilde{x}_m(t) = x_m + (t - t_m) \cdot \mu^{\Delta}_{z_m} \qquad (5)
\]
given that the exemplar belongs to component $z_m$ and was originally observed at time $t_m$ and position $x_m$. $\mu^0_k$ defines the position of the cluster centre at time zero, $\mu^{\Delta}_k$ defines the cluster movement vector, and $\delta_m = x_m - \mu^0_{z_m} - t_m \cdot \mu^{\Delta}_{z_m}$ is the offset of exemplar m with regards to its cluster centre at time zero.
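A minimal sketch of the drift-adjusted density estimate of Eqs. (3)-(5) is given below (Python with NumPy; the function name and the array layout are our assumptions, not part of the paper).

```python
# Minimal sketch: drift-adjusted kernel density estimate with one Gaussian kernel
# per exemplar and a per-component drift vector mu_delta[k] shifting each
# exemplar to the query time t.
import numpy as np

def drift_adjusted_density(x, t, X_E, T_E, Z_E, mu_delta, Sigma):
    """x: (D,) query point; t: query time; X_E: (M, D) exemplar positions;
    T_E: (M,) observation times; Z_E: (M,) component indices;
    mu_delta[k]: (D,) drift per time step; Sigma[k]: (D, D) bandwidth matrix."""
    M, D = X_E.shape
    density = 0.0
    for m in range(M):
        k = Z_E[m]
        x_tilde = X_E[m] + (t - T_E[m]) * mu_delta[k]      # Eq. (5): drifted exemplar
        d = x - x_tilde                                     # offset d_m of Eq. (4)
        inv = np.linalg.inv(Sigma[k])
        norm = (2 * np.pi) ** (-D / 2) * np.linalg.det(inv) ** 0.5
        density += norm * np.exp(-0.5 * d @ inv @ d)        # Gaussian kernel, Eq. (4)
    return density / M                                      # Eq. (3)
```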
2.2 An Adaptive Learning Algorithm: Arbitrary Population Tracker APT
Likelihood Maximisation Problem. The objective is to estimate the feature distributions conditioned on components for a later moment in time, given a set of M known, clustered exemplars, which represent kernel density estimates of these distributions at a previous point in time. One approach is to determine a trend of cluster movements in the set of exemplars first, and to subsequently extrapolate this trend to later points in time. However, this requires that the set of exemplars comprises observations from different points in time. In this case, the trend in the movements of components can be determined using weighted regression as explained in 2.2 below. However, this means that one has to assume a polynomial drift path of the component centres over time. As this assumption seems fairly strong, we suggest relaxing it and assuming that the drift path can be approximated for sufficiently small time intervals by linear segments. This requires, however, that the drift parameters are recalculated periodically. This can be done using new observations, although then the clustering is not known a priori. This results in the following problem: given a set of M exemplars representing kernels, we want to know how they evolved over time to generate a set of N new instances. Assuming that the two sets are of the same cardinality, i.e. M = N, we want to know which new instance corresponds to which known, old exemplar. Knowing this then allows us to infer the values of the latent cluster variable for the new instances from their old counterparts and, in analogy, to infer the class labels of the new instances. The corresponding likelihood maximisation problem given a set of N new observations collected at positions $X = \{x_1, x_2, \dots, x_N\}$ and at times $T = \{t_1, t_2, \dots, t_N\}$ is then

\[
L(\Theta; X, T) = \prod_{n=1}^{N} \prod_{m=1}^{M} g_m(x_n, t_n)^{z_{nm}} \qquad (6)
\]
where $z_{nm}$ is the latent instance-to-exemplar correspondence:

\[
z_{nm} = \begin{cases} 1 & \text{if instance } n \text{ corresponds to exemplar } m\\ 0 & \text{otherwise} \end{cases} \qquad (7)
\]
Given that the instance-to-exemplar correspondence is known, the instance-to-cluster assignment can be achieved by passing the cluster assignment from exemplars to their assigned instances. In a similar way, the class of an exemplar can be passed on to its assigned instance.
$\Theta = \{\mu^0_1, \dots, \mu^0_K, \mu^{\Delta}_1, \dots, \mu^{\Delta}_K\}$ is the set of parameters, while $\delta = \{\delta_1, \delta_2, \dots, \delta_M\}$ and $\Sigma = \{\Sigma_1, \Sigma_2, \dots, \Sigma_K\}$ are predetermined based on the set of exemplars and assumed to be constant. Selecting the correct values for the bandwidth matrices $\Sigma_k$ is an important issue, for which various approaches are discussed in the literature. Among the common first- and second-generation approaches is smoothed cross-validation; for a discussion of this approach we refer to Wand and Jones [19] for the univariate case and to Duong and Hazelton [6] for the multivariate case.
Iterative Optimisation. This results in the following log-likelihood maximisation problem:

\[
l(\Theta; X, T) = \sum_{n=1}^{N} \sum_{m=1}^{M} z_{nm}\left(-\tfrac{1}{2}\, d_{nm}^{T} \Sigma_{z_m}^{-1} d_{nm}\right) \rightarrow \max \qquad (8)
\]

subject to

\[
\sum_{n=1}^{N} z_{nm} = 1 \quad \forall m \in \{1, 2, \dots, M\} \qquad (9)
\]
\[
\sum_{m=1}^{M} z_{nm} = 1 \quad \forall n \in \{1, 2, \dots, N\} \qquad (10)
\]

We further assume that

\[
M = N \qquad (11)
\]
\[
z_{nm} \in \{0, 1\} \qquad (12)
\]
The expectation-maximisation algorithm [5] is the standard approach to solving this type of optimisation problem. In the expectation step the instance-to-exemplar assignment problem is solved, while the (locally) optimal parameter set is determined in the maximisation step. However, in this context, the two optimisation steps take particularly interesting forms:

Expectation Step: Linear Sum Assignment Problem. In the expectation step, we assume the parameter set to be constant, thus the term $\tfrac{1}{2}\, d_{nm}^{T} \Sigma_{z_m}^{-1} d_{nm}$ itself is constant and corresponds to a distance measure between the estimated position of known exemplar m and new instance n. Thus the optimisation problem corresponds to the well-known linear sum assignment problem [4]. This problem is equivalent to its continuous relaxation, where the integer constraint on $z_{nm}$ is relaxed to $z_{i,j} \geq 0$ (see [4], chapter 4.1.1 and references therein). This means that even a solution to the continuous relaxation, which can be obtained using a SIMPLEX approach, will result in a one-to-one assignment between instances and exemplars. However, more problem-specific and computationally more efficient approaches exist, such as the Hungarian method [12] proposed by
Kuhn and its variations suggested by Munkres [16] and by Lawler [13], which solve the problem in $O(N^4)$ or $O(N^3)$, respectively. Nevertheless, there are two shortcomings of these algorithms: first, they require that M = N, i.e. that the number of exemplars and instances is identical. One solution could be obtained by introducing a scaling factor and formulating the problem as a linear program, thus solving for its continuous relaxation. However, this aggravates the second issue, which is the high computational cost of determining an assignment. While algorithms for the linear sum assignment problem are still polynomial, their cost might be too high to be computed for each iteration of the EM algorithm. To address both issues at the same time, we propose the following heuristic, which is based on random sub-sampling of exemplars: given that M observations have been added to a pool of exemplars, we suggest choosing a subset size Q such that Q ≤ M and Q ≤ N. The set of new instances is then randomly split into $\lceil N/Q \rceil$ batches, such that $N_i \leq Q$ for all $i \in \{1, 2, \dots, \lceil N/Q \rceil\}$ and $\sum_i N_i = N$. Subsequently, the linear sum assignment problem is solved for each batch i separately, by drawing a random sample of $N_i$ exemplars without replacement and assigning them to the new instances in the current batch. Computational complexity can thus be reduced to $O(Q^3 N)$. However, one cannot make Q arbitrarily small, as its smallest reasonable value depends on the number of clusters and on the dimensionality D of the data. Robustness could be improved further by repeating the assignments with different random samples in a bootstrap-like approach.

Maximisation Step: Weighted Regression Problem. The instance-to-exemplar assignments obtained in the previous expectation step serve as weights in solving the weighted regression problem for calculating the likelihood-maximising set of parameters $\Theta^* = \{\mu^0_1, \dots, \mu^0_K, \mu^{\Delta}_1, \dots, \mu^{\Delta}_K\}$. These parameters are calculated separately for each cluster by selecting those assignments that comprise an exemplar of the selected cluster. Without loss of generality one can assume that the exemplars are ordered by their cluster membership. This allows us to sum over the exemplars corresponding to a given cluster κ. These exemplars are $m_\kappa, m_\kappa + 1, \dots, m_{\kappa+1} - 1$, where $m_k$ is the first exemplar of the k-th cluster, and $m_{K+1} = M$. For each cluster κ, the following weighted regression problem is obtained:
\[
h(d) = \sum_{m=m_\kappa}^{m_{\kappa+1}-1} \sum_{n=1}^{N} z_{nm}\left(-d_{nm}^{T} \Sigma_\kappa d_{nm}\right) \rightarrow \max \qquad (13)
\]

Here,

\[
d_{nm} = x_n - \delta_m - \left(\mu^0_\kappa + t_n \cdot \mu^{\Delta}_\kappa\right) \qquad (14)
\]

with constants $z_{nm}$, $\Sigma_\kappa$, $\delta_m$, regressand $x_n$, regressors $t_n^0$ and $t_n^1$, and regression coefficients $\mu^0_\kappa$ and $\mu^{\Delta}_\kappa$.
As there exists exactly one n for each m such that $z_{nm} = 1$, one can reorder the instances such that n = m and define $d_m = d_{nm}$, yielding the simplification:

\[
h(d) = \sum_{m=m_\kappa}^{m_{\kappa+1}-1} \left(-d_{m}^{T} \Sigma_\kappa d_{m}\right) \rightarrow \max \qquad (15)
\]
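The expectation step described above can be sketched as follows. This is an illustration under the assumption M = N, using SciPy's linear sum assignment solver in place of the Hungarian/Munkres implementations cited above; the function name, argument layout and the use of precomputed precision matrices are our assumptions.

```python
# Minimal sketch of one expectation step: assign each new instance to exactly one
# drifted exemplar by minimising the total Mahalanobis-type distance.
import numpy as np
from scipy.optimize import linear_sum_assignment

def e_step(X_new, t_new, X_E, T_E, Z_E, mu_delta, Sigma_inv):
    """Returns, for each new instance n, the index of its assigned exemplar."""
    N, M = len(X_new), len(X_E)                     # N == M is assumed here
    cost = np.empty((N, M))
    for n in range(N):
        for m in range(M):
            k = Z_E[m]
            x_tilde = X_E[m] + (t_new[n] - T_E[m]) * mu_delta[k]
            d = X_new[n] - x_tilde
            cost[n, m] = d @ Sigma_inv[k] @ d       # distance term of Eq. (8)
    rows, cols = linear_sum_assignment(cost)        # one-to-one assignment
    assignment = np.empty(N, dtype=int)
    assignment[rows] = cols
    return assignment   # cluster/label of instance n is that of exemplar assignment[n]
```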
2.3 Related Work
The problem of verification latency was pointed out in a work by Marrs et al. [15], but no solution has yet been suggested. The problem of identifying the type of drift in data is related to the paradigm of change mining, a term introduced by Böttcher, Höppner and Spiliopoulou (see [3] as well as references therein). They define the objective of change mining as "understanding the changes themselves" and review the current state of the art. Nevertheless, population drift has not been addressed by means of change mining so far. Thus the APT algorithm presented here solves a problem which has not yet been addressed in the literature: how to track changes in a non-parametric density distribution. More precisely, it makes it possible to trace changes in the unconditional density distribution of features back to changes in the corresponding density distributions conditioned on components (or classes). Several approaches are available which enable the tracking and – to a certain extent – the prediction of changes in density distributions over time given a sequence of samples. One example is the work of Aggarwal ([1], [2] and references therein), which allows for the computation of temporal and spatial velocity profiles of density distributions. There is, however, one important difference: such approaches do not allow us to make any inferences regarding the density distributions conditioned on classes, unless the class labels are at hand. Thus, they are not applicable in the context of verification latency. In contrast, the APT algorithm presented above does have this ability. There is a relation between nearest neighbour as well as Parzen classifiers and the principle underlying the presented classification approach: in nearest neighbour classification, each instance is assigned to its closest exemplar (or to a set of closest exemplars). This results in a one-to-many relation between exemplars and instances. The APT algorithm, however, allows only one-to-one relations between exemplars and instances. This difference means that the APT algorithm respects cluster priors and preserves the shapes of density distributions, whereas the nearest neighbour algorithm does not. Given static density distributions, the results are expected to be comparable. However, in the presence of drift, this distinction becomes important: if the decision boundary is static, the nearest neighbour algorithm will provide correct results, while the performance of the APT algorithm depends on whether the changes in the feature distributions indicate a change of the decision boundary. Nevertheless, in a drifting sub-populations scenario, where the decision boundary changes, the nearest neighbour algorithm will fail, while the APT algorithm can adapt. This means that investigations are required, prior to the application of an adaptive algorithm, to verify that assumptions on the type of drift model are in fact met.
3 Experimental Evaluation
In this experimental evaluation the capabilities of the algorithm are studied, given that the distributions of the data follow the assumed drift model. This is one reason for the use of synthetic data. Another reason is the fact that synthetic data enables comparisons with the true Bayes error rate, thus providing an upper bound for the best possible classifier performance. As there exist, to the best of our knowledge, to date no other algorithms which can use unlabelled data for updating a classification model to address verification latency, it is important to have these bounds for comparison. Seventeen synthetic data sets were used in the experimental evaluation, simulating drifting populations in two-dimensional feature space.
3.1 Data Generator
The data generator simulates a mixture of drifting components. The centre of each component is subject to linear drift over time. Furthermore, the mixing proportions also change, following a linear function of time. Each component itself corresponds again to a mixture of Gaussians with a varying number of sub-components. Each sub-component has a fixed offset from the cluster centre, thus the sub-components of each component move coherently over time. The 17 data sets used in the simulations consist of two main components, each corresponding to one class. The paths of the main component centres, the offsets of the sub-components, their covariance matrices as well as the coefficients for the linear function determining the mixing proportions were all chosen randomly. While the first 1000 observations were used as a training sample, the subsequent 2500 instances were split into five chronologically ordered batches which were used as validation samples. Each batch corresponds to a disjoint time interval, and thus to a different extent of verification latency.
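A much simplified generator of this kind could look as follows (Python sketch with assumed parameter values; each class is reduced to a single Gaussian component rather than a mixture of sub-components as in the actual generator, and the drift and mixing coefficients are placeholders).

```python
# Simplified sketch: a stream from two class components whose centres drift
# linearly and whose mixing proportion changes linearly over time.
import numpy as np

def generate_stream(n, rng=np.random.default_rng(0)):
    centres0 = np.array([[0.0, 0.0], [4.0, 0.0]])       # component centres at t = 0
    drift = np.array([[0.02, 0.01], [-0.01, 0.02]])     # linear drift per time step
    X, y = np.empty((n, 2)), np.empty(n, dtype=int)
    for t in range(n):
        p1 = np.clip(0.5 + 0.0002 * t, 0.0, 1.0)        # mixing proportion of class 1
        k = int(rng.random() < p1)                      # choose the component
        centre = centres0[k] + t * drift[k]
        X[t] = rng.multivariate_normal(centre, 0.5 * np.eye(2))
        y[t] = k
    return X, y
```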
3.2 Measures and Algorithms
The APT -algorithm is compared to the true Bayes error rate. First, the performance of an adaptive, informed Bayes classifier, who has knowledge of the actual distribution parameters at any time, is calculated and denoted as AIB. Since the true parameter settings are used, no bias can result from insufficient or inappropriate parameter tuning. Furthermore, the performance of a static Bayes classifier SIB, who has knowledge of the actual distribution parameters at the first period, is evaluated. This serves as an upper bound for the performance of a classifier who is not updated. Its error rate is likely to increase with time, as it corresponds to an increasing verification latency in the data set. Thus it is also used to illustrate the effect of verification latency on a classifiers’ performance. Furthermore, we compare the performance of our non-parametric APT algorithm to the performance of an adaptive, but parametric EM algorithm, which assumes (linearly) drifting Gaussian components in the mixture. This is legitimate, as the data in the data sets was generated by Gaussians. This adaptive
but parametric approach identifies and tracks clusters using incremental EM. It is similar to the APT algorithm in that unlabelled data can be used for model updates. However, in contrast to the APT algorithm, the tracked components need to be Gaussians, which restricts the applicability of this algorithm. As performance measures, the area under the curve, denoted as AUC, as well as the accuracy, denoted as ACC and defined as the percentage of correctly classified instances, were calculated.

Fig. 2. AUC over Time

Time  AIB            SIB            APT          EM
1     0.92 ± 0.09    0.89 ± 0.07    0.88 ± 0.12  0.87 ± 0.11
2     0.94* ± 0.09   0.80** ± 0.11  0.87 ± 0.12  0.88 ± 0.13
3     0.95 ± 0.08    0.74** ± 0.16  0.91 ± 0.10  0.86 ± 0.13
4     0.94 ± 0.11    0.73** ± 0.18  0.91 ± 0.09  0.85* ± 0.13
5     0.93 ± 0.12    0.70** ± 0.22  0.91 ± 0.08  0.84* ± 0.13

Fig. 3. ACC over Time

Time  AIB            SIB            APT          EM
1     0.86* ± 0.12   0.80 ± 0.08    0.83 ± 0.13  0.80 ± 0.13
2     0.89** ± 0.11  0.69** ± 0.10  0.83 ± 0.12  0.80 ± 0.14
3     0.91** ± 0.10  0.64** ± 0.14  0.86 ± 0.10  0.80** ± 0.15
4     0.92** ± 0.11  0.64** ± 0.15  0.86 ± 0.11  0.78** ± 0.16
5     0.92* ± 0.12   0.63** ± 0.16  0.86 ± 0.10  0.77** ± 0.16

* p-value ≤ .05, ** p-value ≤ .01; APT compared to the other methods using the Wilcoxon signed rank test
3.3 Results
In terms of accuracy (ACC), in all but the first time interval the APT classifier performed better than both the parametric EM and the static, informed Bayes classifier. In terms of the area under the curve (AUC), the overall result is the same, with an exception in the second time interval, where APT is better than the static, informed Bayes classifier, but slightly worse than the parametric EM approach. Compared to the static classifier, the difference in performance in the time intervals 2 to 5 is highly significant (p ≤ 0.01). Compared to the parametric EM, the difference in ACC is highly significant in the time intervals 3 to 5, whereas the difference in AUC is only significant (p ≤ 0.05) in the last two time intervals. Compared to the informed, adaptive Bayes classifier, the APT classifier performs worse. However, this difference is, in all but the second interval, only significant in terms of accuracy. Furthermore, the performance of the APT classifier is stable (or even increases slightly) over time, as the accuracy as well as the area under the curve increases over time. It should be remarked that the degree of classification difficulty stays more or less the same over time, as the performance of the informed, adaptive Bayes classifier in terms of area under the curve is, after an initial slight increase, decreasing slightly over time. Therefore the improvement of the APT classifier over time indicates that the algorithm
could profit from further data in terms of identifying the model parameters more precisely. Finally, the effect of an increasing verification latency on a classifier which cannot update its model without labelled data is made evident by the performance of the static, informed Bayes classifier: its performance decreases steadily as verification latency increases.
4 Conclusion
The APT algorithm presented here is capable of tracking changes in a conditional feature distribution over time, as arising in the context of drifting sub-populations. An important capability of the algorithm is that it does not depend on current, labelled data but rather uses information on changes in the unconditional feature distribution for its updates. The experimental evaluation has shown that the algorithm yields results which are in many cases comparable to the true Bayesian error rate. Furthermore, the results confirm the impact of verification latency on classifier performance. Further research will focus on statistical tests to assess the goodness-of-fit of hypothetical explicit drift models. This is a requirement for further experimental evaluation of the approach on real-world data sets, as the algorithm should only be applied if its assumed drift model has been confirmed. However, with the algorithm presented here, the most likely distribution parameters for this model can be identified, which is a prerequisite for any subsequent goodness-of-fit test. Given that the assumption on the drift model holds, the experiments performed so far promise a viable strategy to address the problems of drift and verification latency.

Acknowledgements. For their constructive comments on this paper in particular and drift modelling in general, I owe special thanks to Vera Hofer and Bernhard Nessler, as well as to the referees of this paper. I would also like to thank the development teams of GNU Octave and of the R language and environment, as well as Yi Cao of Cranfield University for sharing his fast implementation of the Munkres algorithm.
References

1. Aggarwal, C.C.: On change diagnosis in evolving data streams. IEEE Transactions on Knowledge and Data Engineering 17(5), 587–600 (2005)
2. Aggarwal, C.C., Han, J., Wang, J., Yu, P.S.: A framework for clustering evolving data streams. In: Proceedings of the VLDB Conference (2003)
3. Böttcher, M., Höppner, F., Spiliopoulou, M.: On exploiting the power of time in data mining. ACM SIGKDD Explorations Newsletter 10(2), 3–11 (2008)
4. Burkard, R.E., Dell'Amico, M., Martello, S.: Assignment Problems. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2009)
5. Dempster, A.P., Laird, N.M., Rubin, D.: Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B 39, 1–38 (1977)
6. Duong, T., Hazelton, M.L.: Cross-validation bandwidth matrices for multivariate kernel density estimation. Scandinavian Journal of Statistics 32, 485–506 (2005)
7. Eldershaw, C., Hegland, M.: Cluster analysis using triangulation. In: Noye, B.J., Teubner, M.D., Gill, A.W. (eds.) Proceedings of the Computational Techniques and Applications Conference (1997)
8. Estivill-Castro, V., Lee, I.: Autoclust: Automatic clustering via boundary extraction for mining massive point-data sets. In: Proceedings of the 5th International Conference on Geocomputation, pp. 23–25 (2000)
9. Frey, B.J., Dueck, D.: Clustering by passing messages between data points. Science 315(5814), 972–976 (2007)
10. Hand, D.J., Mannila, H., Smyth, P.: Principles of Data Mining. In: Adaptive Computation and Machine Learning. The MIT Press, Cambridge (2001)
11. Kogan, J.: Introduction to Clustering Large and High-Dimensional Data. Cambridge University Press, Cambridge (2007)
12. Kuhn, H.W.: The Hungarian method for the assignment problem. Naval Research Logistics Quarterly 2, 83–97 (1955)
13. Lawler, E.: Combinatorial Optimization: Networks and Matroids. Dover Publications, New York (1976)
14. Liu, D., Nosovskiy, G.V., Sourina, O.: Effective clustering and boundary detection algorithm based on Delaunay triangulation. Pattern Recognition Letters 29, 1261–1273 (2008)
15. Marrs, G., Hickey, R., Black, M.: The impact of latency on online classification learning with concept drift. In: Bi, Y., Williams, M.-A. (eds.) KSEM 2010. LNCS, vol. 6291, pp. 459–469. Springer, Heidelberg (2010)
16. Munkres, J.: Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics (SIAM) 5, 32–38 (1957)
17. Parzen, E.: On estimation of a probability density function and mode. Annals of Mathematical Statistics 33, 1065–1076 (1962)
18. Tsymbal, A.: The problem of concept drift: definitions and related work. Technical report, Department of Computer Science, Trinity College Dublin (2004)
19. Wand, M.P., Jones, M.C.: Kernel Smoothing. Chapman and Hall, Boca Raton (1995)
20. Zhu, X.: Semi-supervised learning literature survey. Technical Report 1530, University of Wisconsin (2005)
21. Žliobaitė, I.: Learning under concept drift: an overview. Technical report, Vilnius University (2009)
Identification of Nuclear Magnetic Resonance Signals via Gaussian Mixture Decomposition

Martin Krone¹,², Frank Klawonn¹,², Thorsten Lührs², and Christiane Ritter²

¹ Department of Computer Science, Ostfalia University of Applied Sciences, Salzdahlumer Str. 46/48, D-38302 Wolfenbuettel, Germany
{ma.krone,f.klawonn}@ostfalia.de
² Division of Structural Biology, Helmholtz-Centre for Infection Research, Inhoffenstr. 7, D-38124 Braunschweig, Germany
{thorsten.luehrs,christiane.ritter}@helmholtz-hzi.de
Abstract. Nuclear Magnetic Resonance spectroscopy is a powerful technique for the determination of protein structures and has been supported by computers for decades. One important step during this process is the identification of resonances in the data. However, due to noise, overlapping effects and artifacts occurring during the measurements, many algorithms fail to identify resonances correctly. In this paper, we present a novel interpretation of the data as a sample drawn from a mixture of bivariate Gaussian distributions. Therefore, the identification of resonances can be reduced to a Gaussian mixture decomposition problem which is solved with the help of the Expectation-Maximization algorithm. A program in the Java programming language that exploits an implementation of this algorithm is described and tested on experimental data. Our results indicate that this approach offers valuable information such as an objective measure of the likelihood of the identified resonances.

Keywords: Nuclear Magnetic Resonance spectroscopy, Gaussian mixture decomposition, Expectation-Maximization algorithm.
1 Introduction
Nuclear Magnetic Resonance spectroscopy (NMR) is a commonly used technique for determining the 3D structure of proteins [1]. Proteins carry out multiple cellular functions, like cell-metabolism and cell-signaling. The majority of current pharmacological compounds, including antibiotics, interfere with specific protein functions, where the specificity of drugs depends on the unique 3D structure of the targeted proteins. To be able to solve protein 3D structures efficiently is consequently a central requirement of modern biological research. The process to solve protein 3D structures by NMR is usually divided into (i) data collection and data processing, (ii) identification of resonances (so-called peaks), (iii) resonance assignment, (iv) constraint generation, and finally (v) structure calculation. During the second and the fourth step, peaks have to be identified from
the spectra as intensities in a two-dimensional data matrix. The shape of individual peaks is usually approximated by a mixture of Gaussian and Lorentzian functions in both dimensions where the relative contribution of these functions to the peak-shapes is dependent on the routines applied for data processing. While a manual analysis of the data by an NMR spectroscopist is a slow process, an automated identification of resonances remains a challenging task owing to non-ideal and non-uniform peak-shapes, spectral artifacts and noisy data. However, as the reliable identification of overlapping peak positions is usually the most complex problem for both humans and algorithms, confident and complete peak picking can still be considered [2] a major bottleneck towards the fully automated determination of protein structures. One commonly used method for automatic peak picking was proposed by Koradi et al. and is implemented in the AUTOPSY program [3]. Their algorithm identifies peaks in overlapping regions using lineshapes that were previously extracted from well-separated peaks in combination with symmetry considerations. Alipanahi et al. [4] presented a singular value decomposition (SVD) approach to split each set of points above the noise into a set of lineshapes that can be searched for local extrema followed by a multi-stage refinement procedure. Carrara et al. used neural networks trained with backpropagation [5] for peak picking and their results are comparable to those of an experienced spectroscopist. In this paper, a novel method for peak identification and peak separation is presented. Under the assumption that the Lorentzian contribution to the peakshapes can be neglected due to the way the data are processed, the spectrum is interpreted as a sample drawn from a mixture of bivariate Gaussian distributions. The decomposition of this mixture by the Expectation-Maximization (EM) algorithm [6,7] allows to determine the exact locations of the peak-centers with the help of the properties of the individual distributions. This idea is used in a program in the Java programming language that exploits an implementation of the EM algorithm provided by the MCLUST package [8] of the R framework [9]. Unlike other methods, our approach provides an objective measure of how well a model can be fitted to the data in the spectrum and allows for an easy comparison of these models. Furthermore, this approach offers clear advantages if the spectra contain peaks with non-uniform line-shapes, which is typically the case for NMR spectra of folded proteins. A related strategy was previously proposed by Hautaniemi et al. who fitted data to single Gaussian distributions in the context of microarray quality control [10]. The remainder of this paper is organized as follows. In Sect. 2, we review Gaussian mixture models as well as the problem of their decomposition. In Sect. 3, we show how the decomposition of mixture models can be applied to the identification and separation of peaks in NMR spectra and present an outline of our strategy. In Sect. 4, we describe our implementation in the Java programming language which exploits several routines of the R framework and of the MCLUST package. In Sect. 5, this implementation is tested on an experimental NMR spectrum. Finally, the conclusions are presented in Sect. 6.
2 Gaussian Mixture Models
The probability density function of a bivariate Gaussian distribution can be written in the form

\[
f(\tilde{x} \mid \mu, \Sigma) = \frac{1}{2\pi|\Sigma|^{\frac{1}{2}}} \exp\!\left(-\frac{1}{2}(\tilde{x} - \mu)^T \Sigma^{-1} (\tilde{x} - \mu)\right), \qquad (1)
\]

where $\mu = (\mu_1, \mu_2)^T$ describes the mean and $\Sigma$ is the positive definite and invertible 2 × 2 covariance matrix. A mixture of Gaussians is a set of c (overlapping) Gaussian distributions (also referred to as components) with overall probability density function

\[
g(\tilde{x} \mid \theta) = \sum_{i=1}^{c} p_i f(\tilde{x} \mid \theta_i), \qquad (2)
\]

where $\theta = (\theta_1, \theta_2, \dots, \theta_c)$, $\theta_k = (\mu_k, \Sigma_k)$, $p_i \in [0, 1]$ for all $i \in \{1, 2, \dots, c\}$ and $\sum_{i=1}^{c} p_i = 1$. When drawing a sample from g, the mixture weight $p_j$ can be interpreted as the probability that the sample will be drawn from a bivariate Gaussian distribution with parameters $(\mu_j, \Sigma_j)$. The probability density function of a sample mixture of three bivariate Gaussian distributions is shown in Fig. 1.
Fig. 1. Overall probability density function of a mixture of three bivariate Gaussian distributions
The form of the individual components in (2) depends on their covariance matrices. In [11], it was proposed to parameterize $\Sigma_k$ with the help of its eigenvalue decomposition

\[
\Sigma_k = \lambda_k D_k A_k D_k^T, \qquad (3)
\]

where the scalar $\lambda_k$ controls the volume of the k-th component, $D_k$ its orientation and $A_k$ its shape. Restricting some of these parameters will result in the components sharing the same corresponding properties. For n independent bivariate samples $(\tilde{y}_1, \dots, \tilde{y}_n)$, the likelihood L that the data was generated by a Gaussian mixture model with d components, mixture weights $(p_1, \dots, p_d)$ and parameters $(\theta_1, \dots, \theta_d)$ is given by
\[
L = P(\tilde{y}_1, \dots, \tilde{y}_n \mid p_1, \dots, p_d, \theta_1, \dots, \theta_d) = \prod_{i=1}^{n} \sum_{j=1}^{d} p_j f(\tilde{y}_i \mid \theta_j). \qquad (4)
\]
Given an observation, mixture decomposition algorithms aim at finding the model for which the observed data is most likely, i.e. they seek the number and values of the parameters that maximize the likelihood L. In contrast to other methods that have been proposed for finding the maximum likelihood estimate, for example the Newton-Raphson approach or Fisher's scoring algorithm, the EM algorithm is both numerically stable and easy to implement, which makes it suitable for this problem. This iterative algorithm consists of two steps, a so-called Expectation or E-step and a Maximization or M-step, that are invoked alternatingly until convergence is obtained. However, like other prototype-based methods such as k-means, the EM algorithm is not able to determine the number of components automatically. Consequently, this number either needs to be passed to the algorithm or has to be estimated from the dataset. While several approaches have been proposed to determine the number of components automatically, for example using a repeated split and merge technique [12] or via a minimum description length approach [13], we propose a simpler strategy suitable for peak identification in Sect. 4.
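As a small illustration of such a decomposition, the following Python snippet fits bivariate Gaussian mixtures with EM to a synthetic sample and selects the number of components by the Bayesian information criterion. It uses scikit-learn rather than the R/MCLUST routines employed later in this paper; note that scikit-learn's BIC is defined so that lower values are preferred, and the data here are purely synthetic.

```python
# Illustration only: EM-based mixture decomposition with BIC model selection.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
sample = np.vstack([rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], 300),
                    rng.multivariate_normal([5, 3], [[2, 0], [0, 1]], 200)])

fits = [GaussianMixture(n_components=c, covariance_type="full",
                        random_state=0).fit(sample) for c in range(1, 6)]
best = min(fits, key=lambda g: g.bic(sample))   # lower BIC = preferred model here
print(best.n_components, best.means_)           # estimated component means
```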
3 New Approach to Peak Picking

3.1 Peak Picking via Gaussian Mixture Decomposition
Our key idea is to interpret the spectrum as a sample drawn from a mixture of bivariate Gaussian distributions with an unknown number of components and unknown parameters. The decomposition of this mixture based on the sample will allow us to determine the number and position of the peaks with the help of the parameters of the individual distributions. Let $I_{x,y}$ denote the intensity value in the spectrum at the point (x, y). As a consequence of our interpretation of the data, this value (or its integer-rounded value, respectively, as $I_{x,y}$ is usually not an integer) equals the absolute frequency with which $(x, y)^T$ has been drawn from the mixture of bivariate Gaussian distributions. Thus, the sample that is used for the decomposition into the individual distributions will contain $(x, y)^T$ exactly $I_{x,y}$ times. However, depending on the size of the spectrum and the number of peaks, this dataset can easily contain thousands of elements, resulting in high running times for the EM algorithm. Instead, we propose to scale these numbers in a linear way as follows:

1. Determine the maximum intensity $I_{max}$ present in the spectrum and choose a real number k.
2. For each point (x, y) in the spectrum, add $(x, y)^T$ to the sample exactly

\[
F_{x,y} = k \cdot \frac{I_{x,y}}{I_{max}} \qquad (5)
\]

times (rounded to an integer).

If a noise level was estimated for the spectrum, it needs to be taken into account in (5). However, this is beyond the scope of this paper.
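The scaling step above can be sketched as follows (Python; the array layout and the choice of rounding to the nearest integer are our assumptions, as is the default value of k).

```python
# Minimal sketch: turn a 2-D grid of intensities into the sample used for the
# mixture decomposition by replicating each grid point (x, y) approximately
# k * I[x, y] / I_max times, as in Eq. (5).
import numpy as np

def spectrum_to_sample(I, k=1000.0):
    I = np.asarray(I, dtype=float)
    counts = np.rint(k * I / I.max()).astype(int)        # F_{x,y}, rounded
    xs, ys = np.nonzero(counts)
    reps = counts[xs, ys]
    return np.repeat(np.column_stack([xs, ys]), reps, axis=0)
```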
Once the dataset has been generated, the EM algorithm can be used for the mixture decomposition in order to determine the number of components as well as their parameters. The estimated mean of each component can be interpreted as the center of the corresponding peak and will be used during the subsequent steps of the analysis. Due to the way the NMR experiments are recorded, the orientation of the peaks is a priori known to be parallel to the coordinate axes, while the volume and shape can vary among the peaks. Consequently, for our field of application, the eigenvalue decomposition of $\Sigma_k$ in (3) can be simplified to

\[
\Sigma_k = \lambda_k D_k A_k D_k^T = \lambda_k I A_k I^T = \lambda_k A_k, \qquad (6)
\]
where I denotes the identity matrix.

3.2 Outline of the Strategy
Interpreting the spectrum as a sample drawn from a mixture of bivariate Gaussian distributions will allow the use of certain density-based algorithms, such as the DBScan algorithm [14] for segmenting the spectrum into clusters above the noise. However, our approach also requires some preprocessing-steps prior to the actual identification and separation of peaks as well as experience of the user, for example by deciding whether or not the results of the peak picking require further refinement. An overview of our strategy is shown in Fig. 2.
Fig. 2. Overview of our strategy for the peak identification and peak separation
4 Implementation

4.1 Overview
The R framework is considered one of the most powerful tools for statistical computing and is used in many projects. Due to its modular design, new features can easily be added with the help of packages. The MCLUST package, for example, provides routines for estimating the parameters of multivariate mixture models via the EM algorithm [15,16]. In order to carry out the peak picking and peak separation via Gaussian mixture decomposition, a program in the Java programming language was created that reads spectra in the XEASY format [17] and visualizes them on screen. Other formats can be added with little effort as the internal representation of spectra in the program is independent of their storage on disk. As it was already pointed out in the introduction, the program itself does not contain an implementation of the EM algorithm. Instead, several routines of the MCLUST package are exploited. Clearly, this approach requires an interface that is capable of running R commands from within the Java program. While there are several interfaces available to the public, most of them are either limited to programs in the C programming language or lack features that are required by our approach, for example the capability of loading separate R packages, and were consequently not considered further. The Rserve implementation [18], however, offers several useful features such as remote connections based on the TCP/IP protocol as well as the ability of handling multiple clients at the same time and was consequently used in the Java program. Figure 3 provides an overview of the software architecture.
[Diagram components: Java application, TCP/IP, Rserve, R framework, MCLUST, EM algorithm, DBScan]
Fig. 3. Visualization of the software architecture
As a consequence of using the TCP/IP protocol for the communication between Rserve and the Java application, very little effort is required to separate the peak picking from the visualization on screen. A central node could be used for all computations and is the only computer that requires the R framework and the MCLUST package to be installed while the application can be run on many client computers at the same time. This separation would result in less maintenance work when updating the R framework or the MCLUST package.
4.2 Peak Picking Implementation
Once a spectrum in the XEASY format has been loaded, the following preprocessing steps (see Fig. 2) are performed:

1. At first, either the noise in the spectrum is estimated and removed from the data, or a threshold below which all intensity values are set to zero is defined. Only the second option is currently implemented, but a new method for the noise level estimation is in development.
2. In the second step, the entire spectrum is segmented into clusters of intensity values that have not been removed in the first step. These clusters are determined with the help of the technique described in Sect. 3.1 and the DBScan implementation of the R framework. This algorithm uses two parameters: the maximum distance ε between points and m, the minimum number of points in a cluster. While the application uses ε = 2 and m = 7 by default, the user is able to adjust these values to the specific properties of the given spectrum. Each cluster C is then represented by the set of points (x, y) that belong to C. Finally, all clusters are visualized on screen by rendering their convex hull, which is calculated with the help of the Graham scan method [19].

After these preprocessing steps, the peak picking and peak separation is done as follows. For each cluster C, the intensity values of all points in C are analyzed and the number of local maxima for a chosen point tolerance Δ is determined. A point (x, y) ∈ C is denoted a local maximum of C with respect to Δ if

\[
I_{x,y} = \max_{(x', y') \in C} \{\, I_{x',y'} \mid x - \Delta \le x' \le x + \Delta \;\wedge\; y - \Delta \le y' \le y + \Delta \,\} \qquad (7)
\]
holds. Note that the number of local maxima is determined with the help of the original intensity values and not using the number of times the coordinates are added to the sample (5), as the latter approach is likely to be error-prone. Now, the minimum number of peaks, l, is estimated by the number of local maxima with Δ = 4, while the upper limit of peaks, u, is determined with Δ = 1. This estimation serves to reduce the effect of noise that is still present in the data after the first preprocessing step. If the number of peaks were simply estimated with Δ = 1, an overestimation would likely occur. Once the lower and upper limits on the number of peaks have been determined for the cluster that is currently being analyzed, the sample for the mixture decomposition is created from the points in C with the help of the technique described in Sect. 3.1, where the maximum intensity of all points in C, rather than that of the entire spectrum, is taken as reference. The EM algorithm is then called from the routines provided by the MCLUST package with the help of the Rserve interface, and the sample, the values of l and u and the form of the eigenvalue decomposition of the covariance matrices (6) are passed as parameters. The routines determine the most likely mixture model with at least l and at most u components with respect to the Bayesian information criterion [20], which adds a penalty to complex models to avoid overfitting. Finally, the most likely model is returned to the Java program and the means of the components are parsed from the output. Once all clusters have been processed, the means are visualized as the peak-centers on screen.
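The following sketch summarizes this per-cluster procedure. It is an illustration only: it uses scikit-learn's GaussianMixture in place of the Rserve/MCLUST pipeline, and the data structures (point list, intensity map, replication counts) are assumptions. Note that scikit-learn reports a BIC that is to be minimized, whereas MCLUST reports one that is to be maximized.

```python
# A minimal sketch of the per-cluster peak picking, assuming scikit-learn
# rather than the authors' Java/Rserve/MCLUST stack.
import numpy as np
from sklearn.mixture import GaussianMixture

def local_maxima(points, intensity, delta):
    """Count points that are maximal within a (2*delta+1)^2 neighbourhood of the cluster."""
    count = 0
    for (x, y) in points:
        neigh = [intensity[(a, b)] for (a, b) in points
                 if abs(a - x) <= delta and abs(b - y) <= delta]
        if intensity[(x, y)] >= max(neigh):
            count += 1
    return count

def pick_peaks(points, intensity, n_copies):
    """points: list of (x, y); intensity: dict (x, y) -> I; n_copies: dict (x, y) -> replication count."""
    l = local_maxima(points, intensity, delta=4)   # lower bound on the number of peaks
    u = local_maxima(points, intensity, delta=1)   # upper bound on the number of peaks

    # Build the sample: each coordinate is repeated according to its cluster-relative intensity.
    sample = np.array([p for p in points for _ in range(n_copies[p])], dtype=float)

    # Fit one model per candidate component count; "full" corresponds to an
    # unconstrained eigenvalue decomposition of the covariance matrices.
    best, best_bic = None, np.inf
    for n in range(l, u + 1):
        gmm = GaussianMixture(n_components=n, covariance_type="full",
                              n_init=3, random_state=0).fit(sample)
        bic = gmm.bic(sample)   # scikit-learn's BIC is "lower is better",
        if bic < best_bic:      # whereas MCLUST reports a BIC to be maximised
            best, best_bic = gmm, bic
    return best.means_          # estimated peak centres
```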
In contrast to performing the peak picking and peak separation on the entire spectrum in a single step, segmenting the spectrum into clusters that are handled individually provides the following advantages:
– Estimating the number of peaks in a single cluster is easier than estimating the number of peaks in the entire spectrum.
– The peak picking process can easily be repeated on a single cluster if the number of peaks was estimated incorrectly.
– The running time of the EM algorithm is decreased as the number of elements in the sample does not depend on the maximum intensity value of the entire spectrum, but on the highest value within each cluster.
If the spectroscopist does not endorse the results of the automatic peak picking for a certain cluster, this is most likely due to an incorrect estimation of the number of peaks in this cluster. Consequently, an option was added to the Java program that allows the spectroscopist to specify the lower and upper bound on the number of peaks manually and repeat the mixture decomposition for this cluster until no further refinement is considered necessary.
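The segmentation step of the preprocessing (thresholding, DBScan with ε = 2 and m = 7, convex hulls) can be sketched as follows. This is an illustration that uses scikit-learn's DBSCAN and SciPy's Qhull-based convex hull rather than the R implementation and the Graham scan; the file name and the threshold value are placeholders.

```python
# A sketch of the thresholding and segmentation step under the stated
# assumptions; not the R/DBScan pipeline actually used by the program.
import numpy as np
from scipy.spatial import ConvexHull
from sklearn.cluster import DBSCAN

spectrum = np.load("spectrum.npy")          # assumed 2D matrix of intensity values
points = np.argwhere(spectrum > 400)        # keep only coordinates above the threshold

labels = DBSCAN(eps=2, min_samples=7).fit_predict(points)

clusters = {}
for lab in set(labels) - {-1}:              # -1 marks DBSCAN noise points
    pts = points[labels == lab]
    clusters[lab] = pts
    hull = ConvexHull(pts)                  # Qhull here; the paper uses the Graham scan
    print(f"cluster {lab}: {len(pts)} points, hull with {len(hull.vertices)} vertices")
```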
5 Experimental Results
To evaluate our strategy, the Java program was used to perform the peak picking and peak separation step on a ¹H-¹⁵N correlation NMR spectrum of the prion protein HET-s from the fungal wheat pathogen Fusarium graminearum [21]. The protein had been subjected to hydrogen/deuterium exchange for 24 hours, resulting in a broad distribution of peak intensities including very weak and fully absent peaks (corresponding to a complete loss of ¹H). The spectrum was recorded on a 600 MHz Bruker Avance III spectrometer equipped with a cryoprobe unit. The raw data were processed with the program PROSA [22], resulting in a data matrix of 1024 points in the ¹H dimension and 256 points in the ¹⁵N dimension. In the first step, a manually chosen threshold was used to remove most of the noise from the spectrum. Then, the DBScan algorithm was invoked with the default parameter values and segmented the entire spectrum into a total of 20 clusters. The result of this segmentation can be seen in Fig. 4, where the coloring relates to the intensity value in the data at the corresponding position. While most of the clusters contain only very few peaks and can easily be analyzed, the automatic identification and separation of peaks within the two highlighted clusters in Fig. 4 proved to be more difficult and will now be described in detail. With the help of the technique described in Sect. 4.2, the number of peaks in the largest cluster (indicated by the right arrow) was estimated to be seven or eight. To verify the accuracy of this estimation, the likelihood that the data in this cluster can best be described by a mixture model with n components was evaluated with respect to the Bayesian information criterion for 3 ≤ n ≤ 12. It can be seen in Fig. 5(a) that the maximum value of the BIC is obtained by a mixture model with eight components. During the automatic analysis of this spectrum, the routines of the MCLUST package were invoked with l = 7 and u = 8
Fig. 4. Segmentation of the spectrum after a manually chosen threshold (selected as 400) was applied
Fig. 5. (a) Values of the Bayesian information criterion for models with 3 to 12 components for the data in the largest cluster. (b) Visualization of the most likely mixture model for the largest cluster. The estimated peak-centers are marked with white crosses.
and returned the most likely mixture model with eight components. This model is visualized in Fig. 5(b), where the peak-centers are marked with white crosses. In contrast, both the lower and upper limits on the number of peaks in the second largest cluster (highlighted by the left arrow in Fig. 4) were estimated to be five. However, the visualization of the peak-centers of the most likely model with five components by white squares in Fig. 6(b) indicates that this estimation is probably incorrect. This impression is supported by the plot of the BIC-values for the most likely mixture models with at least 2 and at most 10 components in Fig. 6(a). In this plot, the maximum BIC-value is obtained by a model with six components. With the help of the refinement capability of the software, the peak picking for this cluster was manually restarted with both l and u being set to 6, and the most likely mixture model with six components is shown in Fig. 6(b), where the peak-centers are marked with white crosses. Clearly, it may happen that it is very difficult for the user to estimate the number of peaks in a cluster. In those cases, the objective measure provided by the Bayesian information criterion can be used as a starting point for the number of components in models that are fitted to the data in the cluster.
Fig. 6. (a) Plot of the BIC-values for the most likely models with 2 to 10 components for the data in the second largest cluster. (b) Visualization of the most likely mixture models with five components (white squares) and six components (white crosses).
6 Conclusions
The correct identification and separation of peaks in NMR spectra can be a difficult task and may result in severe problems during the automatic protein structure determination. Under the assumption that the Lorentzian contribution to the peak-shapes can be neglected due to the way data are processed, we have presented a novel interpretation of the spectrum as a sample drawn from a mixture of bivariate Gaussian distributions. This interpretation allows peak identification and peak separation to be reduced to a Gaussian mixture decomposition problem that is solved by the Expectation-Maximization algorithm. We described a program in the Java programming language that exploits routines of the R framework and an implementation of the EM algorithm provided by the MCLUST package to select the most likely mixture model from a set of models with a varying number of components. This selection is based on the Bayesian information criterion in order to avoid overfitting. The objective likelihood measure provided by this criterion may be considered advantageous in comparison to manual peak-picking strategies. Currently, our implementation is still at an early stage where several improvements remain to be added. These include:
– Routines for a better estimation of the noise level in the spectrum.
– A superior strategy for determining the number of peaks in a given cluster, for example with the help of the minimum description length principle.
– Currently, points in the spectrum can only be added to the sample for the mixture decomposition if their coordinates are integers. However, as the peak centers are most likely not integers, it might be helpful to increase the resolution of the data by means of interpolation techniques.
– Peak integration with the help of the parameters found by the EM algorithm to support the remaining steps of the protein structure determination.
– An investigation of whether a mixture of Gaussian and Lorentzian functions for the peak-shapes can be considered for the mixture decomposition by the EM algorithm.
References
1. Wüthrich, K.: NMR of Proteins and Nucleic Acids. John Wiley, New York (1986)
2. Williamson, M.P., Craven, C.J.: Automated protein structure calculation from NMR data. J. Biomol. NMR 43, 131–143 (2009)
3. Koradi, R., Billeter, M., Engeli, M., Güntert, P., Wüthrich, K.: Automated Peak Picking and Peak Integration in Macromolecular NMR Spectra Using AUTOPSY. J. Magn. Reson. 135, 288–297 (1998)
4. Alipanahi, B., Gao, X., Karakoc, E., Donaldson, L., Li, M.: PICKY: a novel SVD-based NMR spectra peak picking method. Bioinformatics 25, i268–i275 (2009)
5. Carrara, E.A., Pagliari, F., Nicolini, C.: Neural Networks for the Peak-Picking of Nuclear Magnetic Resonance Spectra. Neural Networks 6, 1023–1032 (1993)
6. Dempster, A.P., Laird, N.M., Rubin, D.B.: Maximum Likelihood from Incomplete Data via the EM Algorithm. J. Roy. Stat. Soc. B. Met. 39, 1–38 (1977)
7. McLachlan, G.J., Krishnan, T.: The EM Algorithm and Extensions. John Wiley & Sons, Chichester (1997)
8. Fraley, C., Raftery, A.E.: MCLUST Version 3 for R: Normal Mixture Modeling and Model-based Clustering. Technical report, University of Washington (2009)
9. R Development Core Team: R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2010) ISBN 3-900051-07-0
10. Hautaniemi, S., Edgren, H., Vesanen, P., Wolf, M., Järvinen, A.-K., Yli-Harja, O., Astola, J., Kallioniemi, O., Monni, O.: A novel strategy for microarray quality control using Bayesian networks. Bioinformatics 19, 2031–2038 (2003)
11. Banfield, J.D., Raftery, A.E.: Model-Based Gaussian and Non-Gaussian Clustering. Biometrics 49, 803–821 (1993)
12. Wang, H.X., Luo, B., Zhang, Q.B., Wei, S.: Estimation for the number of components in a mixture model using stepwise split-and-merge EM algorithm. Pattern Recogn. Lett. 25, 1799–1809 (2004)
13. Pernkopf, F., Bouchaffra, D.: Genetic-Based EM Algorithm for Learning Gaussian Mixture Models. IEEE T. Pattern Anal. 27, 1344–1348 (2005)
14. Ester, M., Kriegel, H.-P., Sander, J., Xu, X.: A Density-Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise. In: Simoudis, E., Han, J., Fayyad, U.M. (eds.) Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD 1996), pp. 226–231. AAAI Press, Menlo Park (1996)
15. Fraley, C., Raftery, A.E.: Model-based clustering, discriminant analysis, and density estimation. J. Am. Stat. Assoc. 97, 611–631 (2002)
16. McLachlan, G.J., Basford, K.E.: Mixture models: Inference and applications to clustering. Dekker, New York (1988)
17. Bartels, C., Xia, T.-H., Billeter, M., Güntert, P., Wüthrich, K.: The program XEASY for computer-supported NMR spectral analysis of biological macromolecules. J. Biomol. NMR 6, 1–10 (1995)
18. Urbanek, S.: Rserve – A Fast Way to Provide R Functionality to Applications. In: Hornik, K., Leisch, F., Zeileis, A. (eds.) Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), Vienna, Austria (2003)
19. Graham, R.L.: An efficient algorithm for determining the convex hull of a planar set. Inform. Process. Lett. 1, 132–133 (1972)
20. Schwarz, G.: Estimating the Dimension of a Model. Ann. Stat. 6, 461–464 (1978)
21. Wasmer, C., Zimmer, A., Sabaté, R., Soragni, A., Saupe, S.J., Ritter, C., Meier, B.H.: Structural similarity between the prion domain of HET-s and a homologue can explain amyloid cross-seeding in spite of limited sequence identity. J. Mol. Biol. 402, 311–325 (2010)
22. Güntert, P., Dötsch, V., Wider, G., Wüthrich, K.: Processing of multi-dimensional NMR data with the new software PROSA. J. Biomol. NMR 2, 619–629 (1992)
Graphical Feature Selection for Multilabel Classification Tasks
Gerardo Lastra, Oscar Luaces, Jose R. Quevedo, and Antonio Bahamonde
Artificial Intelligence Center, University of Oviedo at Gijón, Asturias, Spain
www.aic.uniovi.es
Abstract. Multilabel classification was introduced as an extension of multi-class classification to cope with complex learning tasks in different application fields such as text categorization, video or music tagging, and bio-medical labeling of gene functions or diseases. The aim is to predict a set of classes (called labels in this context) instead of a single one. In this paper we deal with the problem of feature selection in multilabel classification. We use a graphical model to represent the relationships among labels and features. The topology of the graph can be characterized in terms of relevance in the sense used in feature selection tasks. In this framework, we compare two strategies implemented with different multilabel learners. The strategy that considers simultaneously the set of all labels outperforms the method that considers each label separately.
1 Introduction
Many complex classification tasks share that each instance can be assigned more than one class or label instead of a single one. These tasks are called multilabel to emphasize the multiplicity of labels. This is the case of text categorization where items have to be tagged for future retrieval; frequently, news or other kinds of documents should be annotated with more than one label according to different points of view. Other application fields include semantic annotation of images and video, functional genomics, music categorization into emotions and directed marketing. Tsoumakas et al. in [11,12] have made a detailed presentation of multilabel classification and its applications. From a computational perspective, the aim of multilabel classification is to obtain simultaneously a collection of binary classifications; the positive classes are referred to as labels, the so-called relevant labels of the instances. A number of strategies to tackle multilabel classification tasks have been published. Basically, they can be divided into two groups [11,12]. Strategies in the first group try to transform the learning tasks into a set of single-label (binary or multiclass) classification tasks. Binary Relevance (BR) is the simplest, but very effective, transformation strategy. Each label is classified as relevant or irrelevant without any relation with the other labels. On the other hand, proper multilabel strategies try to take advantage of correlation or interdependence between labels. The presence or absence of a label
in the set assigned to an instance may be conditioned not only by the feature values of the instance, but also by the values of the remaining labels. Feature selection is an important issue in machine learning in general. In multilabel learning, according to [12], most feature selection tasks have been addressed by extending the techniques available for single-label classification using the bridge provided by multilabel transformations. Thus, when the BR strategy is used, it is straightforward to employ a feature subset selection on each binary classification task, and then to combine somehow the results [16]. In [10], the authors present a feature selection strategy based on the transformation called label powerset. Kong et al. [6] presented a multilabel selection method for the special case where instances are graphs, so that the selection has to find subgraphs. Finally, in [18] feature selection is performed using a combination of principal component analysis with a genetic algorithm. We propose to extend a well-known filter devised for multiclass classification tasks, FCBF (Fast Correlation-Based Filter) [17]. This filter computes the relation between features and the target class using a non-linear correlation measure, the Symmetrical Uncertainty (SU). For this reason we have to assume that all feature values are discrete. The core idea of the method proposed here is to represent the relationships between the variables involved (features and labels) in a multilabel classification task by means of a graph computed in two stages. First, we build the matrix of SU scores for all pairs of variables. Then, we compute the spanning tree of the complete undirected graph where the nodes are the variables and the edges are weighted by SU scores. Clearly, the inspiration underlying this approach is rooted in seminal papers such as [2]; in that case a spanning tree was used to factorize a probability distribution from a Bayesian point of view. In [14], this kind of graph was used for multi-dimensional classifications. Our approach, however, is not based on Bayesian networks. We prove that the spanning tree links can be characterized in terms of relevance (in the sense of feature selection) and redundancy of features and labels. The paper is organized as follows. In the next section we present the formal framework for multilabel classification including the definition of scores and loss functions devised to measure the performance of classifiers. Then we present the graphical model that relates features and labels. The fourth section is devoted to reporting and discussing a number of experiments carried out to evaluate the proposals of the paper. The last section summarizes some conclusions about the work presented here.
2 Formal Framework for Multilabel Classification
A formal presentation of a multilabel classification learning task can be given as follows. Let L be a finite and non-empty set of labels {l_1, . . . , l_{|L|}}, and let X be an input space. A multilabel classification task can be represented by a dataset
$$D = \{(x_1, Y_1), \ldots, (x_{|D|}, Y_{|D|})\} \qquad (1)$$
of pairs of instances x_i ∈ X and subsets of labels Y_i ⊂ L. The goal is to induce from D a hypothesis defined as follows.
Definition 1. A multilabel hypothesis is a function h from the input space to the set of subsets (power set) of labels P(L); in symbols,
$$h : X \longrightarrow P(L) = \{0, 1\}^{L}. \qquad (2)$$
Given a multilabel classification task D, there is a straightforward approach to induce a multilabel hypothesis from a dataset D, the so-called Binary Relevance strategy. For each l ∈ L, this approach induces a binary hypothesis
$$h_l : X \longrightarrow \{0, 1\}, \qquad (3)$$
and then its predictions are defined as h(x) = {l : h_l(x) = 1}. In any case, the prediction h(x) of a multilabel hypothesis can be understood as the set of relevant labels retrieved for a query x. Thus, multilabel classification can be seen as a kind of Information Retrieval task for each instance; in this case the labels play the role of documents. Performance in Information Retrieval is compared using different measures in order to consider different perspectives. The most frequently used measures are Recall (proportion of all relevant documents (labels) that are found by a search) and Precision (proportion of retrieved documents (labels) that are relevant). The harmonic average of the two amounts is used to capture the goodness of a hypothesis in a single measure. In the weighted case, the measure is called Fβ. The idea is to measure a tradeoff between Recall and Precision. For further reference, let us recall the formal definitions of these measures. Thus, for a prediction of a multilabel hypothesis h(x), and a subset of truly relevant labels Y ⊂ L, we can compute the following contingency matrix,

             h(x)    L \ h(x)
  Y            a        c
  L \ Y        b        d
                                 (4)
in which each entry (a, b, c, d) is the number of labels in the intersection of the corresponding sets of the row and column. Notice, for instance, that a is the number of relevant labels in Y predicted by h for x. According to the matrix (Eq. 4), we thus have the following definitions.
Definition 2. The Recall in a query (i.e. an instance x) is defined as the proportion of relevant labels Y included in h(x):
$$R(h(x), Y) = \frac{a}{a+c} = \frac{|h(x) \cap Y|}{|Y|}. \qquad (5)$$
Definition 3. The Precision is defined as the proportion of retrieved labels in h(x) that are relevant to Y:
$$P(h(x), Y) = \frac{a}{a+b} = \frac{|h(x) \cap Y|}{|h(x)|}. \qquad (6)$$
Finally, the tradeoff is formalized by
Definition 4. The Fβ is defined, in general, by
$$F_\beta(h(x), Y) = \frac{(1+\beta^2)\,P R}{\beta^2 P + R} = \frac{(1+\beta^2)\,a}{(1+\beta^2)\,a + b + \beta^2 c}. \qquad (7)$$
The most frequently used F-measure is F1. For ease of reference, let us state the formula of F1 for a multilabel classifier h and a pair (x, Y):
$$F_1(h(x), Y) = \frac{2\,|h(x) \cap Y|}{|Y| + |h(x)|}. \qquad (8)$$
These measures are not proper loss functions in the sense that high scores mean good performance. Thus, for instance, to obtain a loss function from Fβ scores it is necessary to compute the complementary (1 − Fβ). In any case, when we try to optimize Fβ, we mean to improve the performance according to this measure; that is, to maximize Fβ or to minimize 1 − Fβ. So far, we have presented functions able to evaluate the performance of a hypothesis on one instance x. To extend these functions to a test set, we shall use the so-called microaverage extension of these score functions. For further reference, let D′ = {(x_i, Y_i) : i = 1, . . . , |D′|} be a multilabel dataset used for testing. Moreover, for ease of reading, we have expressed the microaverage of F1 as a percentage in the experiments reported at the end of the paper. Thus, the formula for a hypothesis h is the following:
$$F_1(h, D') = \frac{100}{|D'|} \sum_{i=1}^{|D'|} \frac{2\,|h(x_i) \cap Y_i|}{|Y_i| + |h(x_i)|}. \qquad (9)$$
Additionally, to avoid cumbersome notation, we have overloaded the meaning of the symbol F1 for the microaverage extensions.
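As a small illustration of Eq. (9), the following sketch computes the microaverage (percentage-scaled) F1 for a test set whose predictions and true label sets are represented as Python sets; the data is made up.

```python
# A minimal sketch of Eq. (9); the example label sets are invented.
def micro_f1(predictions, truths):
    """predictions, truths: lists of sets of labels, one pair per test instance."""
    total = 0.0
    for h_x, y in zip(predictions, truths):
        total += 2 * len(h_x & y) / (len(y) + len(h_x))
    return 100.0 * total / len(truths)   # reported as a percentage, as in the paper

preds = [{"l1", "l2"}, {"l3"}, {"l1"}]
truth = [{"l1"}, {"l3", "l4"}, {"l2"}]
print(micro_f1(preds, truth))            # (100/3) * (2/3 + 2/3 + 0) ≈ 44.4
```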
3 Modeling the Relationships of Labels and Attributes
In this section we introduce a graphical representation of the relevance relationship between labels and the attributes or features used to describe input instances; in this paper we shall use attribute and feature as synonyms. To make a formal presentation, throughout this section, let D be a multilabel classification task (Eq. 1) with instances x ∈ X , and labels in L.
If X can be represented by vectors of dimension |X|, D can be seen as a matrix M given by
$$M = [X \; L] \qquad (10)$$
where X and L are matrices with |D| rows (one for each training example), and |X| and |L| columns respectively. The first matrix, X, collects the input instance descriptions; its columns represent attributes (or features). As we said in the Introduction, we assume that the entries of matrix X are discrete values. On the other hand, the matrix L has Boolean values: L[i, j] = 1 if and only if the i-th example of D has the label l_j ∈ L. In this paper we extend the filter FCBF (Fast Correlation-Based Filter) introduced in [17] to multilabel classification tasks. Since this filter was devised for dealing with multiclass classification tasks, we need to involve the whole set of labels. From a formal point of view, FCBF deals with a matrix X and just one column of matrix L. Thus, we are going to review the selection method of FCBF using the matrix M that collects all labels at the same time. Given a single class and a collection of predictive attributes or features, the filter FCBF proceeds in two steps: relevance and redundancy analysis, in this order. For both steps the filter uses the so-called symmetrical uncertainty, a normalized version of the mutual information. Let us now rewrite the formulation of this measure applied to the columns of the matrix M. It is based on a nonlinear correlation, the entropy, a measure of uncertainty that is defined for a column m_j of the matrix as follows:
$$H(m_j) = -\sum_{i=1}^{|D|} \Pr(m_{ij}) \log_2(\Pr(m_{ij})). \qquad (11)$$
Additionally, the entropy of a column m_j after observing the values of another column m_k is defined as
$$H(m_j \mid m_k) = -\sum_{r=1}^{|D|} \Pr(m_{rk}) \sum_{s=1}^{|D|} \Pr(m_{sj} \mid m_{rk}) \log_2(\Pr(m_{sj} \mid m_{rk})), \qquad (12)$$
where Pr(m_{rk}) denotes the prior probabilities for all possible values of column m_k, and Pr(m_{sj} | m_{rk}) denotes the posterior probabilities of m_j. In a similar way, it is possible to define H(m_j, m_k) using in (Eq. 11) the joint probability distribution. The information gain (IG) of m_j given m_k, also known as the Kullback-Leibler divergence, is defined as the difference between the prior and posterior entropy to the observed values of m_j. In symbols,
$$IG(m_j \mid m_k) = H(m_j) - H(m_j \mid m_k) = H(m_j) + H(m_k) - H(m_j, m_k). \qquad (13)$$
The information gain is a symmetrical measure. To ensure a range of values in [0, 1], FCBF uses a normalized version, the symmetrical uncertainty (SU), defined as follows:
$$SU(m_j, m_k) = 2\,\frac{IG(m_j \mid m_k)}{H(m_j) + H(m_k)} = 2\left(1 - \frac{H(m_j, m_k)}{H(m_j) + H(m_k)}\right). \qquad (14)$$
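For illustration, the symmetrical uncertainty of Eq. (14) can be computed from empirical frequencies as in the following sketch; the columns are assumed to be already discretized, as FCBF requires, and the helper names are our own.

```python
# A sketch of Eq. (14) for two discrete columns, using empirical frequencies.
import numpy as np

def entropy(values):
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def joint_entropy(a, b):
    pairs = np.array([f"{x}|{y}" for x, y in zip(a, b)])
    return entropy(pairs)

def symmetrical_uncertainty(a, b):
    h_a, h_b, h_ab = entropy(a), entropy(b), joint_entropy(a, b)
    ig = h_a + h_b - h_ab                # information gain, Eq. (13)
    return 2.0 * ig / (h_a + h_b)        # Eq. (14), a value in [0, 1]

a = np.array([0, 0, 1, 1, 2, 2])
b = np.array([0, 0, 1, 1, 1, 1])
print(symmetrical_uncertainty(a, b))     # roughly 0.73 for this toy pair
```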
To return the list of relevant variables for a single variable, FCBF first removes those attributes whose SU is lower than or equal to a given threshold. Then, FCBF orders the remaining attributes in descending order of their SU with the class, and applies an iterative process to eliminate redundancy. This process is based on approximate Markov blankets; in the multilabel context, this concept can be formulated as follows.
Definition 5. (Approximate Markov Blanket) Given three different columns m, m_i and m_j in M, m_j forms an approximate Markov blanket for m_i if and only if
$$SU(m_j, m) \ge SU(m_i, m) \;\wedge\; SU(m_i, m_j) \ge SU(m_i, m). \qquad (15)$$
Notice that the aim of this definition in [17] is to mark the feature m_i as redundant with m_j when the goal is to predict the values of m. To avoid tie situations that would require random choices, we exclude the equalities of (Eq. 15). In other words, we assume that all SU values are different. Hence, for further reference, we make the following definition.
Definition 6. (Redundancy) The column m_i is redundant with m_j for predicting m if and only if
$$SU(m_j, m) > SU(m_i, m) \;\wedge\; SU(m_i, m_j) > SU(m_i, m). \qquad (16)$$
Once we have reviewed the core of FCBF, to extend it to multilabel classification tasks, we start computing the Symmetrical Uncertainty (SU) for all pairs of columns of matrix M.
Definition 7. (Symmetrical Uncertainty Matrix) Given a multilabel classification task D, with labels in L, the SU matrix is formed by the symmetrical uncertainty of all pairs of columns of M (Eq. 10), SU = [SU(m, m′) : m, m′ ∈ columns].
This matrix represents a weighted undirected graph in which the set of vertices is the set of columns; that is, the set of attributes of X and labels in L. To order this graph, we now compute the spanning tree with maximum SU values.
Definition 8. (Maximum Spanning Tree) MST is the maximum spanning tree of the SU matrix.
Figure 1 shows one MST for a hypothetical dataset. Our aim now is to explain the meaning of this tree in terms of relevance of the attributes and labels. The general idea is to compare the topology of the MST with the results of applying the filter FCBF considering each column as the category and the others as predictors. To compute the MST we may use, for instance, Kruskal's algorithm [7]. The edges are ordered from the highest to the lowest SU values. Then, starting from an empty MST, the algorithm iteratively adds one edge to the MST at each step, provided that it does not form a cycle in the tree. We shall see that this basic building step can be interpreted in terms of redundancy. First, however, we state some propositions to establish the ideas presented here.
Fig. 1. Maximum Spanning Tree of a hypothetical multilabel task, see Definition 8. Nodes marked with l stand for labels. Nodes with an Xi represent attributes: X1 are the attributes at distance 1 from labels, and X2 are attributes at distance 2.
Proposition 1. If m and m′ are two adjacent nodes in the MST defined in (Def. 8), then m′ is relevant for m using the filter FCBF.
Proof. If we assume that there is another label m′′ that removes m′ from the list of relevant nodes for m, then m′ would be redundant with m′′ for m. In symbols,
$$SU(m'', m) > SU(m', m) \;\wedge\; SU(m', m'') > SU(m', m). \qquad (17)$$
In this case, however, the link between m and m′ could not be included in the MST. Therefore, there cannot exist such a node m′′, and so m′ is relevant for m according to the filter FCBF.
Proposition 2. If m′′ is adjacent to m′, and m′ is adjacent to m (m′′ ≠ m) in the MST, then m′′ is redundant with m′ for m.
Proof. Consider the triangle of vertices m, m′, m′′ in the complete graph represented by the SU matrix (Def. 7). Since the edge m − m′′ is not included in the MST, we have that SU(m′, m) > SU(m′′, m) ∧ SU(m′′, m′) > SU(m′′, m). Thus, m′′ is redundant with m′ for m according to (Def. 6).
The conclusion is that, given a column m in M, its adjacent nodes in the MST are relevant for it. Moreover, if m, m′, m′′ is a path in the MST, m′′ is relevant for m′, which is in turn relevant for m. Hence, m′′ is redundant with m′ for m. However, sometimes redundant information helps classifiers to increase their performance, thus we may heuristically select some redundant items in order to achieve better performance. In our case, this heuristic is implemented by fixing in the graph the distance from predictors to targets; this distance will be called the level of proximity.
3.1 Multilabel Ranker
Taking into account the previous results, in order to select a subset of features to predict the labels, given a dataset D, we can fix a level of proximity k, and then our proposal is the following (see the sketch after this list):
– Compute the Symmetrical Uncertainty Matrix SU of attributes and labels.
– Compute the Maximum Spanning Tree (MST).
– Select the attribute nodes whose distance to any label is smaller than or equal to k, see Figure 1.
Notice that this method produces a ranking of features; thus, we call it Multilabel Feature Ranker (MLfR). In fact, by increasing the level of proximity (k) we obtain a sequel of features in decreasing order of usefulness for predicting a set of labels. However, the features are returned in chunks (in quanta) instead of one by one. In general, there is more than one feature at a given distance from the set of labels.
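The sketch below illustrates these three steps. It is an assumption-laden illustration rather than the authors' MatLab implementation: it reuses the symmetrical_uncertainty helper sketched in the previous section, obtains the maximum spanning tree from SciPy's minimum-spanning-tree routine by negating the SU weights, and ignores ties and SU values that are exactly zero.

```python
# A sketch of MLfR under the simplifying assumptions stated above.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def mlfr_select(X, L, k):
    """X: (n_examples, n_features) discrete matrix; L: (n_examples, n_labels) 0/1 matrix."""
    M = np.hstack([X, L])
    n_feat, n_cols = X.shape[1], M.shape[1]

    # Symmetrical Uncertainty matrix over all columns (features and labels).
    su = np.zeros((n_cols, n_cols))
    for i in range(n_cols):
        for j in range(i + 1, n_cols):
            su[i, j] = su[j, i] = symmetrical_uncertainty(M[:, i], M[:, j])

    # Maximum spanning tree of the SU graph = minimum spanning tree of -SU.
    mst = minimum_spanning_tree(-su).toarray()
    adjacency = ((mst != 0) | (mst.T != 0)).astype(float)   # undirected tree

    # Hop distance along the tree between every pair of nodes.
    dist = shortest_path(adjacency, unweighted=True)

    # Distance of each feature node to its nearest label node.
    feat_to_label = dist[:n_feat, n_feat:].min(axis=1)
    return np.where(feat_to_label <= k)[0]    # indices of the selected features
```

Calling mlfr_select(X, L, 1) returns the first chunk of the ranking; increasing k adds the subsequent chunks.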
4 Experimental Results
In this section we report a number of experiments conducted to test the multilabel feature ranker MLfR in two complementary dimensions: the classification performance and the quality of the ranking. First we check the capacity of MLfR to optimize a performance score like F1 (Eq. 9). For this purpose, with each training set we built the MST (Def. 8) and then we selected the best k (Section 3.1) using an internal (in the training set) 2-fold cross-validation repeated 5 times [4]. The range of k values included {1, 2, . . . , 20} and k_50, k_75 and k_100, where k_t is the smallest k value that ensures that t% of all features are selected. Since the aim was to compare strategies for feature selection, the number of discretization bins was constant in all cases. To obtain a multilabel learner with this selection scheme we used two state-of-the-art multilabel base learners. The first one is IBLR-ML [1]. We used the implementation provided by the authors in the library Mulan [13,12], which is built on top of Weka [15]. We wrote an interface with MatLab. The second base learner used was the Ensemble of Classifier Chains (ECC) [9] in the version described in [3]; for this reason we called it ECC*. The implementation was made in MatLab using the BR built with LibLinear [8,5] with the default parameters: a logistic regression learner with regularization parameter C = 1. On the other hand, to test the quality of the ranking of features produced by MLfR, we computed the number of features selected by the first chunk (k = 1) and the F1 achieved with those features. To summarize in one number the quality of the first chunk of the ranking, we computed the contribution of each feature to the F1 as follows:
$$contribution = \frac{F_1(k = 1)}{\#features(k = 1)}. \qquad (18)$$
To compare the results obtained by MLfR, we computed the F1 scores achieved by the base learners without performing any selection at all. Additionally, to compare the quality of the rankings we wanted to contrast the multilabel ranking with a purely binary ranker; that is, a ranker that considers labels one by one. To make a fair comparison we implemented a binary relevance version of MLfR as follows. For each label l ∈ L, we computed MLfR considering only that label. The sets of features at distance k from l obtained in this way, MLfR_l(k), were joined together for all labels to get a chunk of level k in the so-called Binary Relevance Feature Ranker (BRfR):
$$BRfR(k) = \bigcup_{l \in L} MLfR_l(k). \qquad (19)$$
The comparison presented here was carried out using 8 datasets previously used in experiments reported in other papers about multilabel classification. Table 1 shows a summarized description of these datasets including references to their sources. Attributes with continuous values have been discretized in 10 bins using a same-frequency procedure. The comparison was performed using a simple hold-out method. We used the split of datasets in training and testing sets provided by the sources of the data, when available. The sizes of the splits are also shown in Table 1. Other details about the datasets, including preprocessing, can be found following the references provided in the table.

Table 1. The datasets used in the experiments, associated statistics, and references to the sources of the data

             #Instances                  #fea.   |L|   Cardinality   Source
             train    test    total
  enron       1123     579     1702     1001    53    3.38          [13]
  genbase      463     199      662     1185    27    1.25          [13]
  medical      333     645      978     1449    45    1.25          [13]
  slashdot    2500    1282     3782     1079    22    1.18          [9]
  emotions     391     202      593       72     6    1.87          [13]
  reuters     5000    2119     7119      243     7    1.24          [1,19,20]
  scene       1211    1196     2407      294     6    1.07          [13]
  yeast       1500     917     2417      103    14    4.24          [13]
The scores achieved in F1 by the classifiers compared are shown in Table 2. To make statistical comparisons we considered together the scores obtained with all base learners, since the objective was to compare different selection strategies and not base learner scores. Thus, we observe that although the scores obtained by the selectors are higher on average than those achieved without any selection, the differences are not significant using a paired, two-sided Wilcoxon signed rank test. Also, there are no significant differences between the scores obtained with the two selection approaches. On the other hand, both selectors reduce considerably the number
Table 2. Number of features and F1 scores achieved in test data when the aim in grid search (for selectors) was to optimize F1

                     Base              MLfR              BRfR
  dataset        #fea.    F1       #fea.    F1       #fea.    F1
  IBLR-ML
  enron           1001   41.78        26   49.79        49   48.25
  genbase         1186   99.00        46   99.15        40   98.31
  medical         1449   47.33        88   68.24        85   70.41
  slashdot        1079   15.80        46   26.04        48   26.27
  emotions          72   64.41        72   64.41        72   64.41
  reuters          243   74.38       189   74.78       151   76.78
  scene            294   70.29       294   70.29       294   70.29
  yeast            103   61.72       103   61.72       103   61.72
  ECC*
  enron           1001   53.49       920   52.82      1001   53.49
  genbase         1186   99.41        68   98.31        65   98.31
  medical         1449   61.62        88   69.81        85   69.14
  slashdot        1079   37.54      1079   37.54      1079   37.54
  emotions          72   60.59        38   60.10        37   59.37
  reuters          243   76.87       207   78.02       204   78.07
  scene            294   56.29       294   56.29       294   56.29
  yeast            103   59.51        30   58.02        38   59.03
Table 3. F1 and number of features selected in the first chunk (k = 1) for MLfR and BRfR. The scores achieved when no selection is performed are included for comparison. Additionally, for each ranker we computed the contribution of each feature to the F1 score (see Eq. 18).

                     Base                  MLfR                        BRfR
  dataset        #fea.    F1       #fea.    F1    contri.     #fea.    F1    contri.
  IBLR-ML
  enron           1001   41.78        26   49.79    1.92         49   48.25    0.98
  genbase         1186   99.00        46   99.41    2.16         40   99.15    2.48
  medical         1449   47.33        88   68.24    0.78         85   70.41    0.83
  slashdot        1079   15.80        46   26.04    0.57         48   26.27    0.55
  emotions          72   64.41         1   33.04   33.04          5   52.44   10.49
  reuters          243   74.38        44   74.13    1.68         45   73.93    1.64
  scene            294   70.29         3   24.58    8.19          6   38.35    6.39
  yeast            103   61.72         2   51.58   25.79          9   54.65    6.07
  ECC*
  enron           1001   53.49        26   42.58    1.64         49   51.77    1.06
  genbase         1186   99.41        46   97.81    2.13         40   98.31    2.46
  medical         1449   61.62        88   69.81    0.79         85   69.14    0.81
  slashdot        1079   37.54        46   25.72    0.56         48   25.40    0.53
  emotions          72   60.59         1   36.42   36.42          5   49.97    9.99
  reuters          243   76.87        44   74.95    1.70         45   74.91    1.66
  scene            294   56.29         3   13.85    4.62          6   25.98    4.33
  yeast            103   59.51         2   54.33   27.16          9   56.57    6.29
of features used for classification. The differences between the selectors are not significant. One could expect important reductions in the number of features and in the error rates for high-dimensional problems such as Enron or Slashdot. But the scores shown in Table 2 for ECC* report null or insignificant reductions; the reason can be found in the poor quality of the classifiers: both datasets yield the smallest F1 scores for this learner. The statistically significant differences appear when we check the quality of the first chunk. Thus, the contribution of each of the features obtained with MLfR for k = 1 is significantly higher than that of the features returned by BRfR in the same conditions. In this sense we conclude that the ranking learned from a multilabel point of view is better than the ranking obtained considering each label separately.
5 Conclusions
We have presented an algorithm to learn a ranking of the features involved in a multilabel classification task. It is an extension of the FCBF (Fast Correlation-Based Filter) [17], and it uses a graphical representation of features and labels. The method so obtained, MLfR (Multilabel Feature Ranker), was compared with a version that considers each label separately, in the same way as BR (Binary Relevance) learns a multilabel classifier. We experimentally verified that the multilabel version achieves significantly better results than the BR release when testing the quality of the rankings. Moreover, the graph built by MLfR provides a valuable representation of the correlation and interdependence between labels and features. We proved formally that the topology of the graph can be read in terms of relevance and redundancy of the features and labels.
Acknowledgements. The research reported here is supported in part under grant TIN2008-06247 from the MICINN (Ministerio de Ciencia e Innovación, Spain). We would also like to acknowledge all those people who generously shared the datasets and software used in this paper.
References
1. Cheng, W., Hüllermeier, E.: Combining Instance-Based Learning and Logistic Regression for Multilabel Classification. Machine Learning 76(2), 211–225 (2009)
2. Chow, C., Liu, C.: Approximating Discrete Probability Distributions with Dependence Trees. IEEE Transactions on Information Theory 14(3), 462–467 (1968)
3. Dembczyński, K., Cheng, W., Hüllermeier, E.: Bayes optimal multilabel classification via probabilistic classifier chains. In: Proceedings of the 27th International Conference on Machine Learning (ICML) (2010)
4. Dietterich, T.: Approximate statistical tests for comparing supervised classification learning algorithms. Neural Computation 10(7), 1895–1923 (1998)
5. Fan, R., Chang, K., Hsieh, C., Wang, X., Lin, C.: LIBLINEAR: A Library for Large Linear Classification. Journal of Machine Learning Research 9, 1871–1874 (2008)
6. Kong, X., Yu, P.: Multi-label feature selection for graph classification. In: 2010 IEEE International Conference on Data Mining (ICDM 2010), pp. 274–283. IEEE, Los Alamitos (2010)
7. Kruskal Jr., J.: On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem. Proceedings of the American Mathematical Society 7(1), 48–50 (1956)
8. Lin, C.J., Weng, R.C., Keerthi, S.S.: Trust Region Newton Method for Logistic Regression. Journal of Machine Learning Research 9, 627–650 (2008)
9. Read, J., Pfahringer, B., Holmes, G., Frank, E.: Classifier Chains for Multi-label Classification. In: Buntine, W., Grobelnik, M., Mladenić, D., Shawe-Taylor, J. (eds.) ECML PKDD 2009. LNCS, vol. 5782, pp. 254–269. Springer, Heidelberg (2009)
10. Trohidis, K., Tsoumakas, G., Kalliris, G., Vlahavas, I.: Multilabel classification of music into emotions. In: Proc. 9th International Conference on Music Information Retrieval (ISMIR 2008), Philadelphia, PA, USA (2008)
11. Tsoumakas, G., Katakis, I.: Multi Label Classification: An Overview. International Journal of Data Warehousing and Mining 3(3), 1–13 (2007)
12. Tsoumakas, G., Katakis, I., Vlahavas, I.: Mining Multilabel Data. In: Maimon, O., Rokach, L. (eds.) Data Mining and Knowledge Discovery Handbook. Springer, Heidelberg (2010)
13. Tsoumakas, G., Vilcek, J., Spyromitros, L.: Mulan: A Java Library for Multi-Label Learning, http://mulan.sourceforge.net/
14. Van Der Gaag, L., De Waal, P.: Multi-dimensional Bayesian Network Classifiers. In: Proceedings of the Third European Workshop in Probabilistic Graphical Models, Prague, pp. 107–114 (2006)
15. Witten, I., Frank, E.: Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann Pub., San Francisco (2005)
16. Yang, Y., Pedersen, J.: A comparative study on feature selection in text categorization. In: Proceedings of the International Conference on Machine Learning (ICML 1997), pp. 412–420. Citeseer (1997)
17. Yu, L., Liu, H.: Efficient Feature Selection via Analysis of Relevance and Redundancy. Journal of Machine Learning Research 5, 1205–1224 (2004)
18. Zhang, M.L., Peña, J.M., Robles, V.: Feature selection for multi-label naive bayes classification. Inf. Sci. 179, 3218–3229 (2009)
19. Zhang, M.L., Zhou, Z.: M3MIML: A Maximum Margin Method for Multi-instance Multi-label Learning. In: Eighth IEEE International Conference on Data Mining, ICDM 2008, pp. 688–697 (2008)
20. Zhou, Z.: Learning And Mining from DatA (LAMDA), http://lamda.nju.edu.cn/data.ashx
A Web2.0 Strategy for the Collaborative Analysis of Complex Bioimages
Christian Loyek¹, Jan Kölling¹, Daniel Langenkämper¹, Karsten Niehaus², and Tim W. Nattkemper¹
¹ Biodata Mining Group, Faculty of Technology, Bielefeld University, Germany
² Genome Research and Systems Biology, Proteome and Metabolome Research, Faculty of Biology, Bielefeld University, Germany
Abstract. Life science research aims at understanding the relationships in genomics, proteomics and metabolomics on all levels of biological self organization, dealing with data of increasing dimension and complexity. Bioimages represent a new data domain in this context, gaining growing attention since it closes important gaps left by the established molecular techniques. We present a new, web-based strategy that allows a new way of collaborative bioimage interpretation through knowledge integration. We show how this can be supported by combining data mining algorithms running on powerful compute servers and a next generation rich internet application (RIA) front-end offering database/project management and high-level tools for exploratory data analysis and annotation. We demonstrate our system BioIMAX using a bioimage dataset from High-Content Screening experiments to study bacterial infection in cell cultures. Keywords: Life Science, Bioimage Informatics, Data Mining, Exploratory Data Analysis, Information Visualization, High-content screening, Web2.0, Rich Internet Application, Semantic Annotation.
1 Introduction
One field of research which is of growing importance regarding the development and application of intelligent data analysis is life science research, combining a multitude of fields such as molecular biology (genomics, proteomics, metabolomics), biophysics, biotechnology, biochemistry, systems biology, biomedicine, etc. The aim is to understand and model the building blocks of dynamic living systems, which are built by entities from different scales (proteins, chemical compounds, cells) and relationships of different kinds and abstraction levels (interacts-with, inhibition/excitation, co-localizes-with, ...). While most of the molecular data has been extracted for homogenized samples, i.e. without any spatial information for the molecular entities, spatial information has been identified recently as one of the last remaining open gaps in systems biology and life sciences, which has to be closed if one wants to render a comprehensive picture of living systems on all levels of biological self-organization [1]. As
a consequence, new bioimaging techniques have been developed and proposed to close this gap, like MALDI imaging or High Content Screening [1]. This new data promises to close many of the aforementioned gaps, but also triggers a demand for new technologies to analyze this data. For instance, image data produced by high-content screenings (HCS) is increasingly getting richer and more complex, since a growing number of variables is associated to each spatial element (i.e. pixel) of the sample. While this is an enormous gain in information (e.g. in pharmaceutical screenings each of the n variables encodes a protein of interest or a cell compartment), it is impossible to access, quantify and extract all relevant image information in one session by one researcher. In fact, the images need to be evaluated by researchers from different fields (biophysics, cell biology, chemistry, computer science, statistics, ...) regarding different aspects (image quality/noise, semantics, cell/function classification, staining specificity, statistical significance, ...), and the results of their studies need to be integrated much earlier in research than is done now in many projects, where researchers from different institutes in different countries meet maybe once a year. To foster integration of results and views from different aspects of bioimage analysis a new approach is needed, one that covers a large variety of bioimage analytics, ranging from manual annotation based on direct visual inspection to fully automatic data mining using unsupervised machine learning. Due to the recent developments of web technology, allowing rapid dynamic integration of user generated content into new user-shaped knowledge data bases (such trends are sometimes referred to as Web2.0 or even Science 2.0 [2,3]), we started the development of a purely web-based bioimage analysis platform which allows the user to apply different analyses to the data and to share data and results with other researchers without a complicated and time-consuming act of data modeling. So the aim is not to design a web-based LIMS (laboratory information management system), but to provide a web-based work bench to interpret bioimages within a web-organized project together with a chosen group of other researchers, independent of their whereabouts, requiring only an internet connection. Our fully web-based software approach to intelligent data analysis of bioimage data is called BioIMAX (BioImage Mining, Analysis and eXploration) [4], developed to augment both an easy initial exploratory access to complex high-content image data and the collaboration of geographically distributed scientists. BioIMAX was developed as a rich internet application (RIA), i.e. a web application whose performance and look-and-feel is comparable to a standard desktop application, but will mostly be executed in a web browser allowing for platform independency and avoiding additional installation costs. With BioIMAX, several types of high-content image data can be uploaded and organized in personalized projects through a simple web-based interface. This allows a rapid data search and retrieval of own datasets and easily supports sharing of data with other collaborating researchers by inviting them to own projects. With the BioIMAX Labeler, a graphical and textual annotation tool, the users have the possibility to annotate, discuss and comment on specific image regions, e.g. by linking chat-like discussions to image coordinates. In order to initially explore high-content image
data, the BioIMAX VisToolBox provides general methods to get an initial visual access to the n-dimensional signal domain of high-content images. Higher level data mining applications (such as dimension reduction or clustering), which are computationally more expensive, are triggered and evaluated in BioIMAX, but are computed on powerful external compute servers using specialized C/C++ machine learning libraries. In recent years, several different toolboxes for bioimage informatics have been proposed and we review them briefly here. General imaging analysis tools like ImageJ [5,6] or ITK [7,8] aim at providing a large variety of image processing methods for tasks such as registration, filtering, thresholding or segmentation. In contrast, single purpose tools such as CellProfiler [9,10] focus on special biological or biomedical problems as well as on data from specific imaging techniques. Another group of approaches are meant as general technological platforms to store and organize large amounts of image data in a central repository on a remote server architecture. In addition to the data management, analysis platforms can include selected methods for data visualization, annotation and analysis. One of the first tools published in this context is OME (Open Microscopy Environment) [11]. Bisque [12] is a recently introduced powerful tool, which provides a platform with automatic 3D nuclei detection and microtubule tracking. Although tools such as CellProfiler, OME or Bisque represent great steps towards improvements in bioimage data analysis, most of them are focused on particular, well-defined biological problems and provide specially adapted analysis methods to solve these problems. However, in many cases the analysis goal is vague and little a priori knowledge is available about the underlying data. Thus, it is not clear in advance which analysis strategy should be applied. This is a general problem in the analysis of high-content bioimage data, which usually needs to be discussed by collaborating researchers from different disciplines. In the context of HCS analysis, especially pharmaceutical HCS, analysis-related decisions increasingly take place associated with a particular region of interest (ROI). Thus, discussion needs to be linked to particular (x,y)-coordinates, which leads to less trivial design issues in database and graphical user interface development. In addition to this, a successful (cross-domain) collaboration is often impeded, since the involved researchers are usually distributed across several research institutes. In the future, we expect web technology developments to have a growing impact on bioimage analysis. Especially the fact that the web is getting more collaborative and user-shaped (effects referred to as Web2.0) and offers more and more powerful graphics applications will stimulate new developments such as ours. As an example, we demonstrate several aspects of BioIMAX, which could support the study of bacterial infection of cells with Listeria monocytogenes. BioIMAX can be accessed at http://ani.cebitec.uni-bielefeld.de/BioIMAX with the username "tuser" and the password "test1" for testing purposes.
2 Materials
Listeria monocytogenes is an intracellular pathogenic bacterium that causes a food-borne disease called Listeriosis in both humans and animals. Listeriosis is a rare but serious disease with a high overall mortality rate of 30%, most common in pregnant women or immunocompromised individuals [13]. The bacterium is an important model organism for infection, intracellular proliferation and host-pathogen interactions. Those intracellular bacteria are protected against the host immune system and are poorly accessible for treatment with antibiotics. Therefore, the invasion of the host cells is an important and crucial step in Listeria pathogenesis and virulence [14]. In order to study the grade of host cell invasion with L. monocytogenes, a high-content screen has been set up using automated microscopy and L. monocytogenes expressing the green fluorescent protein (GFP). Figure 1 shows an example high-content image, obtained with the ScanR screening station (Olympus).
Fig. 1. Example high-content fluorescence image showing infected cells: (a) Cell channel: cytoplasm, (b) Nuclei channel, (c) Listeria channel: GFP stained Listeria and (d) RGB composition of the three channels (a)-(c)
3 Architecture
As previously mentioned, BioIMAX software was designed as a rich internet application. The usage of RIAs has several advantages, which meet the necessary requirements for the development of a system like BioIMAX. In contrast to conventional thin-client web applications, RIAs provide a richer and more complex graphical interface, resembling desktop applications’ interface interactivity
and computation power. The RIA technology improves the efficiency of web applications by moving part of the computation, interaction and presentation to the client, thereby lowering the amount and frequency of client-server traffic considerably and permitting asynchronous client-server communication. As a result, the usability of web applications will be improved, annoying installation routines will be avoided and the software will be accessible from any location. The BioIMAX client side was developed with Adobe Flex [15], which is an open-source framework for building expressive web applications. RIAs developed with Adobe Flex deploy consistently on all major browsers and operating systems by leveraging the Adobe Flash Player. In order to efficiently and consistently manage the data collected, MySQL [16] is used as a relational database management system. The communication between the Flex client and the server-side database is realized by using AMFPHP [17], which is one of the fastest client-server communication protocols available to Flash Player developers. Once a user has been authenticated by a username and password login procedure, she/he is presented with the BioIMAX start page. The start page is designed in the style of a social media platform, creating a personalized environment, which provides, e.g., access to the system-internal mail box or a navigation panel for general data handling such as image upload, project management or access to the BioIMAX data browser. With the data browser the user can search and browse the BioIMAX database. In addition to visualizing and managing the search results, the data browser serves as a starting point for all data exploration tasks. In the following, we give a detailed description of the BioIMAX VisToolBox for visual data exploration tasks and the BioIMAX Labeler for semantic image annotation.
4 Visual Data Exploration
The BioIMAX VisToolBox (illustrated in figure 2) provides a set of methods to explore and analyze the signal domain of high-content images. The graphical display of the VisToolBox is divided into two panels. One panel contains an image viewer, especially designed for high-content or multivariate images (see figure 2(a)). The image viewer includes basic functions such as zooming or panning for image navigation purposes and allows scrolling through a stack of high-content images, image by image. The other panel (see figure 2(b)) comprises several methods from the fields of visualization, co-location analysis and exploratory data analysis (EDA), chosen by tabs, which will be described in more detail in the following.
Image comparison: This tool provides two different methods to compare up to three single image channels of a high-content image simultaneously on a structural/morphological level. The first method is called Alpha blending and aims at comparing three images while superimposing them as layers and manually adjusting the opacity value of the respective layers by moving the mouse cursor over the opacity triangle (see figure 2(c)).
Fig. 2. Screenshot of the VisToolBox. This tool provides several methods to explore and analyze the signal domain of high-content images. It consists of an image viewer (a) and a panel (b) including methods from the fields of visualization, co-location analysis and exploratory data analysis (EDA), separated by tabs. In (c), three selected images can be visualized and compared simultaneously by adjusting the opacity of the respective images. By moving the mouse cursor over the red triangle, the opacity value of each single image is adapted in real-time depending on the distance to the corners of the triangle, which represent the three selected images. Images are selected consecutively by drag-and-drop from the image list (d) to one of the boxes displaying the letters A, B or C (e). The small figures below show further example exploration displays: (f) histogram and (g) scatter plot. In (h) a co-location study of two images with statistical measurements (Manders' score and Pearson correlation coefficient) is displayed in a bar chart.
evaluating analysis methods, such as segmentation methods, regarding their accuracy. Thus, the user can detect structural differences or similarities between the selected images. The purpose of the second method (RGB pseudo coloring) is to generate a pseudo color fusion image from three selected images by interpreting each image as one color channel of an RGB image. Image Manipulation: This tab includes two histogram dialogs, which display information about the statistical distribution of grey values in the currently selected image. Both histograms are interactive, i.e. the user can manipulate the distribution and the visualization on the left is adapted in real-time. In the histogram the user can filter out irrelevant or erroneous signals or study various thresholds needed for analysis tasks.
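The two operations just described, RGB pseudo-coloring and histogram-based filtering, can be summarised in a few lines of array code. The sketch below is not BioIMAX code (the system itself is implemented in Adobe Flex); it is a minimal NumPy illustration under the assumption of three single-channel grey-value images of equal size and hypothetical threshold values.

import numpy as np

def rgb_fusion(ch_a, ch_b, ch_c):
    """Interpret three grey-value channels as the R, G and B planes of one fusion image."""
    stack = np.stack([ch_a, ch_b, ch_c], axis=-1).astype(np.float32)
    maxima = stack.reshape(-1, 3).max(axis=0)   # per-channel maxima used for normalisation
    maxima[maxima == 0] = 1.0                   # avoid division by zero for empty channels
    return stack / maxima                       # values scaled to [0, 1] per channel

def threshold_filter(channel, t_low, t_high):
    """Keep only grey values inside [t_low, t_high]; everything else is set to 0."""
    mask = (channel >= t_low) & (channel <= t_high)
    return np.where(mask, channel, 0)

# hypothetical usage with random images standing in for three microscopy channels
a, b, c = (np.random.randint(0, 4096, (512, 512)) for _ in range(3))
fusion = rgb_fusion(a, b, c)
filtered = threshold_filter(a, t_low=200, t_high=3500)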
Co-Fluorescence analysis: Here, the user can compare two selected images on a statistical level by calculating (i) the Pearson correlation coefficient or (ii) the Manders' score, which is a frequently used index for co-location studies in fluorescence microscopy [18]. The results are displayed in a bar chart (see figure 2(h)). Gating, Link and Brush for in-depth visual exploration: The last part of the VisToolBox allows a more detailed exploration of specific image regions. Here, the user can focus the study of L. monocytogenes invasion on a single cell level, e.g. to examine cell invasion in the nucleus (see figure 3). For this purpose, the user first has to select a region of interest (ROI) by drawing a rectangle on the displayed image in the image viewer. In a next step the user chooses one of the three visualization techniques at the top of the Visualization window, which opens a new plot dialog. Depending on the chosen dialog, the user will be asked to drop one or more images from the image list onto the dialog. After that, all pixels within the ROI will be displayed in the respective plot, i.e. a histogram, scatter plot or parallel coordinates. Selection of points in one plot triggers highlighting of the corresponding pixels in the image on the left (for a detailed description see figure 3). This process can also be referred to as “gating” or “link-and-brush” [19]. Clustering and dimension reduction: With a growing number of variables (i.e. grey values), a visual inspection using the above techniques gives only a limited view of the image data and of the complex high-dimensional manifold spanned by its n-variate features, i.e. pixel values. As a consequence, one is interested in using methods from unsupervised learning to reduce the complexity of the data so it can be visualized, such as clustering to reduce the number of patterns to be assigned to graphical parameters (such as colour) or dimension reduction to reduce the number of variables directly. In the design of BioIMAX we created an interface that allows the integration of such algorithms so that these can run on remote compute servers and write the results into the BioIMAX database. For each algorithm at least one individual tool is integrated as well, so the user can start the computationally expensive methods from the web interface, wait for the results and inspect them again through the web inside BioIMAX. In figure 5 we show an example result obtained with the clustering tool TICAL (Toolbox for Image Clustering ALgorithms, currently in alpha release stage) for one of our images.
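To make the gating step concrete, the following sketch shows the selection criterion as a boolean mask over the pixels of a rectangular ROI, given brushed value ranges for two channels. This is an illustrative NumPy version of the idea, not BioIMAX code; the channel names, the ROI and the thresholds are assumptions.

import numpy as np

def gate_roi(nucleus, bacteria, roi, t_n, t_b):
    """Return a boolean mask of the ROI pixels whose values fall inside both brushed ranges.

    nucleus, bacteria: 2-D grey-value images of the two selected channels
    roi:               (row0, row1, col0, col1) rectangle drawn in the image viewer
    t_n, t_b:          (min, max) selection ranges brushed in the scatter plot
    """
    r0, r1, c0, c1 = roi
    n = nucleus[r0:r1, c0:c1]
    b = bacteria[r0:r1, c0:c1]
    # True pixels correspond to the selected set and would be highlighted in the image viewer
    return (n >= t_n[0]) & (n <= t_n[1]) & (b >= t_b[0]) & (b <= t_b[1])

# hypothetical example: brush nucleus values 500-1500 and bacteria values 800-4095
nuc = np.random.randint(0, 4096, (512, 512))
bac = np.random.randint(0, 4096, (512, 512))
selected = gate_roi(nuc, bac, roi=(100, 200, 100, 200), t_n=(500, 1500), t_b=(800, 4095))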
5 Semantic Image Annotation
The BioIMAX Labeler tool allows one to graphically annotate image regions in single image channels. The interface provides the image viewer described before on the left and an options toolbar on the right (see figure 4(a,b)). In the toolbar the user can adjust several label properties such as geometry, color or size before labeling (see figure 4(c)). Furthermore, the user can select specific semantic label types, which will be textually associated with the label. The user can choose one
Fig. 3. Interactive exploration of bivariate data from a selected region of interest (ROI) in the image viewer (a). For the ROI, the user selects two image channels (here the nucleus channel (Bisbenzimid) and the GFP-marked L. monocytogenes channel) and one tool, e.g. a scatter plot (b) from the visualization tab (see figure 2). The pixel values corresponding to the same location within the ROI are displayed as points in the scatter plot. Selection of points x in the plot (c) triggers highlighting of the corresponding pixels in the image (displayed as red regions superimposed on the original image) with respect to the following criterion: Γ = {x | t_b^min ≤ b(x) ≤ t_b^max ∧ t_l^min ≤ l(x) ≤ t_l^max}, with Γ describing the selection of points x in the scatter plot, t_b^min and t_b^max defining the minimum and maximum of the selection range regarding Bisbenzimid values, and b(x) being the Bisbenzimid value of point x. The same applies accordingly to the L. monocytogenes values (t_l^min, t_l^max and l(x)). This process is often referred to as “gating” or “link-and-brush”.
of the label types from the predefined semantic categories, e.g. Cell or Cellular compartment, or she/he can create her/his own label types or categories (see figure 4(d,e)). By clicking in the currently selected image, the annotations are placed as graphical objects on an invisible layer belonging to each single image, allowing for easy modification of existing labels, e.g. reforming, recoloring or resizing. A set of labels can be stored in the database by saving all parameters for each single label, i.e. location, type, size, color and form, and will be linked to the respective image channel. In order to get and modify detailed information about single labels, the user can open the annotation/info window by selecting the toggle button showing the callout icon. With the Labeler tool we are aiming at two goals. First, users shall be enabled to label interesting image regions, which can be important in quantification and evaluation tasks. In this study, the experts have to assign cells in a large number of images to different semantic categories. With the Labeler the experts can define and insert new label types representing different infection grades (see figure 4(d)) and can start labeling cells, e.g. using circles with different colors, each color representing a specific infection grade. Using the Labeler, the process of establishing a gold standard
Fig. 4. Screenshot of the Labeler. It consists of the image viewer on the left (a), on which the user can place a number of graphical objects (called labels) to annotate specific image regions. On the right, the options toolbar (b) provides options for adjusting several label properties such as geometry, color or size (c) and the possibility to link specific semantic label types to single annotations, which can be predefined types selected by (d) or newly generated types (e), depending on the current scientific context. In order to initiate a discussion about a specific label on a higher semantic level, the user can invoke an annotation/info window by selecting the toggle button (f) at the top of the toolbar. The annotation/info window provides an option to start or open an existing chat-like discussion about a label (g).
from several experts is sped up and simplified, e.g. there is no need to transfer multiple copies of images to the experts. The users can simply log in to the BioIMAX system and immediately start labeling from any location, and all label results are stored centrally in the database and can be inspected by all collaborating researchers at any time. Second, we want to attach chat-like discussions to image regions in order to link high-level semantics to morphological features. Therefore, the Labeler provides a chat window (see figure 4(g)), which can be accessed from the annotation/info window. Here, several users can communicate about the selected label and the conversation will additionally be stored together with the label. This facilitates Web2.0 style collaborative work on one image, while the stored states of the communication content are directly linked to image coordinates/ROIs. While developing new analysis strategies for high-content image data, researchers have to discuss aspects of the original data, e.g. the trustworthiness of signals, and of the analysis methods, e.g. the quality of intermediate results such as registration or segmentation. Figure 6 illustrates both scenarios.
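As a rough illustration of the data model behind this workflow, the sketch below shows one possible way to represent a stored label (location, type, size, color and form, linked to an image channel) together with the chat messages attached to it. This is a hypothetical Python sketch, not the actual BioIMAX database schema.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class ChatMessage:
    author: str
    text: str
    created: datetime

@dataclass
class Label:
    image_channel_id: int            # channel of the high-content image the label belongs to
    x: float                         # position of the graphical object in pixel coordinates
    y: float
    shape: str                       # form of the object, e.g. "circle" or "rectangle"
    size: float
    color: str                       # e.g. a hex value encoding an infection grade
    label_type: str                  # semantic type, e.g. "infected cell, grade 3"
    author: str
    discussion: List[ChatMessage] = field(default_factory=list)

# hypothetical usage: an expert marks a strongly infected cell and a colleague comments on it
label = Label(42, 118.5, 240.0, "circle", 12.0, "#ff0000", "infected cell, grade 3", "expert_a")
label.discussion.append(ChatMessage("expert_b", "Signal looks saturated here, possibly an artefact?", datetime.now()))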
Fig. 5. A clustering based visualization of one three-dimensional HCS bioimage. Combining clustering with dimension reduction can be done by a) using a self-organizing map or b) combining other vector quantization algorithms (k-means, neural gas) with dimension reduction techniques (PCA, LLE, t-SNE). Both approaches allow the mapping of cluster prototypes to colors, which is used to colorize each pixel by applying the best matching criterion to the pixel and all cluster prototypes. In the middle of the display, one can see the resulting pseudo color image. On the bottom, a small number of clusters has been chosen in the overview (right panel) and displayed. The lengths of the horizontal boxes display the average signal intensity in the respective cluster.
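The colorization step summarised in Fig. 5, assigning each pixel the color of its best matching cluster prototype, can be sketched as follows. This is an illustrative NumPy re-implementation of the general idea, not TICAL code; the prototypes and the color table are assumed to come from a previously run vector quantization.

import numpy as np

def colorize_by_prototype(image, prototypes, colors):
    """Map every n-variate pixel to the color of its nearest cluster prototype.

    image:      (H, W, n) array of n-variate pixel values
    prototypes: (k, n) cluster prototypes, e.g. from k-means, neural gas or a SOM
    colors:     (k, 3) RGB color assigned to each prototype
    """
    h, w, n = image.shape
    pixels = image.reshape(-1, n).astype(np.float32)
    # squared Euclidean distance of every pixel to every prototype (best matching criterion)
    d = ((pixels[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    best = d.argmin(axis=1)
    return colors[best].reshape(h, w, 3)   # pseudo-color result image

# hypothetical usage with a 3-variate image and 6 prototypes
img = np.random.rand(128, 128, 3)
protos = np.random.rand(6, 3)
palette = np.random.randint(0, 256, (6, 3))
pseudo = colorize_by_prototype(img, protos, palette)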
6 Discussion
In this paper we proposed a Web2.0 approach for the collaborative exploration of high-content screening bioimages in the life sciences and demonstrated its application with an example dataset from Listeria monocytogenes cell invasion analysis. Due to the complexity of high-content image data, the extraction and quantification of all image information and the generation of analysis strategies is a difficult task for researchers, and different aspects need to be discussed by collaborating researchers from different disciplines. Thus, we presented fully web-based tools, which support both exploratory analysis of high-content image data and important collaborative aspects in Listeria monocytogenes infection analysis. The BioIMAX VisToolBox provides methods from the field of exploratory data analysis, in order to gain initial insights into the structural characteristics of the underlying data, following Ben Shneiderman's information visualization mantra: overview first, zoom and filter, details on demand. The concept of the
Fig. 6. Illustration of a chat-like discussion about image regions with the BioIMAX Labeler. The figure demonstrates two possible discussion scenarios based on the same image data: discussion about the raw image and discussion about analysis methods or results.
VisToolBox does not include predefined analysis pipelines for a specific biological question in the form of a black-box model, which takes an image as input and presents the user with a finalized result. With the VisToolBox, the user is directly involved in the knowledge discovery process, exploring the data space her/himself with specific information visualization techniques. This is an important strategy in the field of visual data mining and exploration [20]. Using the clustering tool TICAL, even higher dimensional image data can be visually explored without the need to install a machine learning toolbox on one's desktop, since BioIMAX allows the application of clustering independently of the user's whereabouts, provided an internet connection is available. The BioIMAX Labeler provides tools to communicate and discuss about specific image regions, which is of great value, since analysis-related decisions are increasingly associated with particular regions of interest. The Labeler allows the user to annotate image regions with graphical objects and to link chat-like discussions representing high-level semantics to morphological features. Since BioIMAX is designed as a rich internet application, one of its key features is that a user only needs a login and a password to get access to the BioIMAX platform, provided an internet connection is available. Except for the installation of the Flash Player, which is available for most browsers, no additional software packages or libraries have to be installed. The fact that all collaboration and exploration tasks are performed within one web-based platform is of great value, since it simplifies and speeds up several aspects of the analysis process, e.g. avoiding the transfer of data between researchers, since all researchers work on the same copy centrally stored in the BioIMAX database. We believe that, in the age of the ongoing development of web technologies, our Web2.0 approach is an important step forward in supporting complex analysis tasks on high-content data. Such an approach is of particular benefit
to scientific projects in which several scientists from different institutes at different locations are involved and have to collaborate.
References
1. Megason, S.G., Fraser, S.E.: Imaging in Systems Biology. Cell 130(5), 784–795 (2007)
2. Shneiderman, B.: Science 2.0. Science 319, 1349–1350 (2008)
3. Waldrop, M.M.: Science 2.0 - Great new tool, or great risk? Scientific American (January 9, 2008)
4. Loyek, C., et al.: BioIMAX: A Web 2.0 approach for easy exploratory and collaborative access to multivariate bioimage data. BMC Bioinformatics 12, 297 (2011)
5. Image Processing and Analysis in Java, http://rsbweb.nih.gov/ij/
6. Abramoff, M.D., Magelhaes, P.J., Ram, S.J.: Image Processing with ImageJ. Biophoto. Int. 11(7), 36–42 (2004)
7. Insight Segmentation and Registration Toolkit (ITK), http://www.itk.org
8. Yoo, T.S., et al.: Engineering and Algorithm Design for an Image Processing API: A Technical Report on ITK - the Insight Toolkit. In: Westwood, J. (ed.) Proceedings of Medicine Meets Virtual Reality, pp. 586–592. IOS Press, Amsterdam (2002)
9. Carpenter, A.E., et al.: CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 7(10), R100 (2006)
10. Lamprecht, M.R., Sabatini, D.M., Carpenter, A.E.: CellProfiler: free, versatile software for automated biological image analysis. Biotechniques 42, 71–75 (2007)
11. Swedlow, J.R., et al.: Informatics and Quantitative Analysis in Biological Imaging. Science 300, 100–102 (2003)
12. Kvilekval, K., et al.: Bisque: a platform for bioimage analysis and management. Bioinformatics 26(4), 544–552 (2010)
13. Ramaswamy, V., et al.: Listeria - review of epidemiology and pathogenesis. J. Microbiol. Immunol. Infect. 40(1), 4–13 (2007)
14. Ireton, K.: Entry of the bacterial pathogen Listeria monocytogenes into mammalian cells. Cell Microbiol. 9(6), 1365–1375 (2007)
15. Adobe Flex, http://www.adobe.com/products/flex/
16. MySQL, http://www.mysql.com/
17. AMFPHP - Action Message Format PHP, http://amfphp.sourceforge.net/
18. Manders, E., et al.: Dynamics of three-dimensional replication patterns during the S-phase, analysed by double labelling of DNA and confocal microscopy. J. Cell Science 103, 857–862 (1992)
19. Ware, C.: Information Visualization - Perception for Design. Morgan Kaufmann Publishers Inc., San Francisco (2004)
20. Keim, D.A.: Information Visualization and Visual Data Mining. IEEE Transactions on Visualization and Computer Graphics 7(1), 100–107 (2002)
Data Quality through Model Checking Techniques

Mario Mezzanzanica1, Roberto Boselli1, Mirko Cesarini1, and Fabio Mercorio2

1 Department of Statistics, C.R.I.S.P. research center, University of Milan Bicocca, Italy
{firstname.lastname}@unimib.it
2 Department of Computer Science, University of L'Aquila, Italy
[email protected]

Abstract. The paper introduces the Robust Data Quality Analysis which exploits formal methods to support Data Quality Improvement Processes. The proposed methodology can be applied to data sources containing sequences of events that can be modelled by Finite State Systems. Consistency rules (derived from domain business rules) can be expressed by formal methods and can be automatically verified on data, both before and after the execution of cleansing activities. The assessment results can provide useful information to improve the data quality processes. The paper outlines the preliminary results of the methodology applied to a real case scenario: the cleansing of a very low quality database containing the work careers of the inhabitants of an Italian province. The methodology has proved successful, giving insights into the data quality levels and providing suggestions on how to ameliorate the overall data quality process. Keywords: Data Quality, Model Checking, ETL Certification, Data Analysis.
1 Introduction

Although a lot of research effort has been spent and many techniques and tools for improving data quality are available, their application to real-life problems is still a challenging issue [15]. When alternative and trusted data sources are not available, the only solution is to implement cleansing activities relying on business rules, but this is a very complex, resource-consuming, and error-prone task. Developing cleansing procedures requires strong domain and ICT knowledge. Diverse actor types are required (e.g. ICT and Business) who should collaborate, but knowledge sharing is hindered by their different cultural backgrounds and interpretation frameworks. For this reason, several cleansing tools have been introduced into the market, focusing on user-friendly interfaces to make them usable by a broad audience. In the data quality domain there is a call for methodologies and techniques to improve the design and implementation of data quality processes. In this paper the authors propose to leverage formal methods to effectively support and simplify the development process of data cleansing systems. Formal methods do not replace the procedural approach typically used to develop data cleansing procedures, but they can help in formalising (and sharing) the domain knowledge and in automatically checking the consistency of the cleansed data. The authors have identified in this paper a specific class of data quality problems where formal methods can provide useful results.
The paper is organised as follows: in the remainder of this section we provide a brief survey of related work; in Sec. 2 a framework for data quality based on model checking is presented; in Sec. 3 we (compactly) describe model checking techniques on Finite State Systems; then, in Sec. 4 we present our industrial case study, describing how our framework was successfully applied to data analysis, and showing some experimental results in Sec. 5; finally, some concluding remarks and future work are outlined in Sec. 6.

1.1 Related Work

From the research perspective, data quality has been addressed in different contexts, including statistics, management and computer science [19]. This paper focuses on improving database instance-level consistency. In such a context, research has mostly focused on business rules, error correction (known as both data edits and data imputation in statistics [19]), record linkage (known as object identification, record matching, and the merge-purge problem), and profiling [12]. A description of several data cleansing tools can be found in [15,2,4,17]. Even the adoption of cleansing techniques based on statistical algorithms or on machine learning requires substantial human intervention for assessment activities. Errors that involve relationships between one or more fields are often very difficult to uncover with existing methods. These types of errors require deeper inspection and analysis [15]. Similar considerations apply to data profiling tools. Data profiling is a blurred expression that can refer to a set of activities including database and data warehouse reverse engineering, data quality assessment, and data issue identification. According to [11], the principal barrier to more generic solutions to information quality is the difficulty of defining what is meant by high or poor quality in real domains, in a sufficiently precise form that it can be assessed in an efficient manner. As stated in Par. 1.2, this paper contributes to addressing this issue. Many cleansing tools and database systems exploit integrity analysis (including relational integrity) to identify errors. While data integrity analysis can uncover a number of possible errors in a data set, it does not address complex errors [15]. Some research activities (e.g. [12]) focus on expanding integrity constraint paradigms to deal with a broader set of errors. Along this line, the approach adopted in this paper contributes to managing a broader set of consistency errors with respect to the integrity constraint tools and techniques currently available. The application of automata theory for inference purposes was deeply investigated in [20,14] for the database domain. The approach presented in [1] deals with the problem of checking (and repairing) several integrity constraint types. Unfortunately, most of the adopted approaches can lead to hard computational problems. Only in the last decade have formal verification techniques been applied to databases; e.g., model checking was used in the context of database verification [6] to formally prove the termination of triggers. Model checking has been used to perform data retrieval and, more recently, the same authors extended their technique to deal with CTL (Computation Tree Logic) in order to solve queries on semistructured data [10].
1.2 Contribution

The work described in this paper is driven by the idea that formal methods can be helpful (in some specific data quality scenarios, e.g. see Par. 1.3) to express business rules and automate data consistency verification, to make the overall data quality process more robust, and to improve domain understanding, since formal methods can facilitate knowledge sharing between technicians and domain experts. It is worth noting that evaluating cleansed data accuracy against real data is often either unfeasible or very expensive (e.g. lack of alternative data sources, cost of collecting the real data); consistency-based methods may therefore contribute to reducing the accuracy evaluation effort. Hence, the main contributions of the paper are: (1) the definition of a methodology, namely the Robust Data Quality Analysis (RDQA), which uses formal methods to formalise consistency rules; (2) the automatic verification of these rules on huge datasets through model checking techniques; furthermore, (3) the RDQA has been successfully exploited on a real industrial data quality case study. In the end, to the best of our knowledge, no contribution in the literature has exploited formal methods for analysing the quality of (real-life) database contents. Indeed, formal methods contribute to managing a broader set of consistency errors with respect to the integrity constraint tools and techniques currently available.

1.3 Finite State Events Database

Several database contents can be modelled as sequences of events (and related parameters), where the possible event types form a finite set. For example, the registry of (university) students' scores, civil registries, the retirement contribution registry, several public administration archives, and financial transaction records may be classified in such a category. The event sequences that populate such databases might be modelled by Finite State Systems (FSS, see [13]), which in turn opens several possibilities with respect to consistency checking and data quality improvement. FSS can be used to model the domain business rules so that the latter can be automatically checked against database contents by making use of formal methods, e.g. Model Checking. Furthermore, FSS representations (e.g. Finite State Automata) can be easily understood by domain experts and by ICT actors involved in Data Quality improvement activities. Model Checking tools can be used to evaluate the consistency (expressed by means of FSS formalisms like Finite State Automata) of databases both before and after the application of data cleansing activities. By comparing the consistency check results of the two database instances (before and after the cleansing process), it is possible to obtain useful insight into the implementation of the cleansing procedures. This evaluation helps improve the data cleansing development process since feedback is obtained on the consistency of the results. Developing a cleansing procedure for a large domain may be a very complex task which may require stating several business rules; furthermore, their maintenance can be an onerous task, since the introduction of new rules may invalidate some of the existing ones. The possibility to model a correct behaviour using FSS formalisms and to check the results of data cleansing can effectively reduce the effort of designing and maintaining cleansing procedures. Then, we define “Finite State Event Dataset” (FSED) and “Finite State Event Database” (FSEDB) as follows.
Definition 1 (Finite State Event Dataset). Let ε = e1, . . . , en be a finite sequence of events; we define a Finite State Event Dataset (FSED) as a dataset S whose content is a sequence of events S = {ε} that can be modelled by a Finite State System. Definition 2 (Finite State Event Database). Let Si be a FSED; we define a Finite State Event Database (FSEDB) as a database DB whose content is DB = S1 ∪ . . . ∪ Sk where k ≥ 1. We introduced the set of sequences in the FSEDB definition since many database contents can be easily modelled by splitting their content into several subsets (each being a sequence of events) and then modelling each sequence with a single (or a limited set of) FSS. Although the whole content could be modelled by a single FSS, splitting into subsets can reduce the complexity of the FSS(s) used to model the sequences. Many Public Administration archives can be classified as Finite State Event Databases, and the possibility to use FSS formalisms to improve cleansing activities is extremely valuable.
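To make the splitting idea concrete, the following sketch (ours, not taken from the paper) groups a flat event table into one date-ordered Finite State Event Dataset per subject, e.g. one event sequence per worker in the job registry described later. The field names are assumptions.

from collections import defaultdict
from operator import itemgetter

def split_into_fseds(events):
    """Group a flat list of event records into one date-ordered sequence per subject.

    events:  iterable of dicts with at least 'subject_id', 'date' and 'event_type' keys
    returns: dict mapping subject_id to its event sequence (one FSED S_i per subject)
    """
    fseds = defaultdict(list)
    for e in events:
        fseds[e["subject_id"]].append(e)
    for seq in fseds.values():
        seq.sort(key=itemgetter("date"))    # each S_i becomes a temporal sequence of events
    return dict(fseds)

# hypothetical usage: the union of all S_i is the Finite State Event Database DB
db = [
    {"subject_id": 1, "date": "2009-01-10", "event_type": "start"},
    {"subject_id": 1, "date": "2010-03-02", "event_type": "cessation"},
    {"subject_id": 2, "date": "2009-06-01", "event_type": "start"},
]
sequences = split_into_fseds(db)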
2 Robust Data Quality Analysis

In the following, we describe our Robust Data Quality Analysis (RDQA). Roughly speaking, assume clr to be a function able to clean a source (and dirty) dataset into a cleansed one according to some defined cleansing rules (or business rules). In this regard, we borrow the definition given in [3], where consistency refers to “the violation of semantic rules defined over a set of data items. With reference to the relational theory, integrity constraints are a type of such semantic rules. In the statistical field, data edits are typical semantic rules that allow for consistency checks”. In this setting, several questions arise: “what is the degree of consistency achieved through clr? Can we improve the consistency of the cleansed dataset? Can we be sure that function clr does not introduce any error in the cleansed dataset?”. The set DBS represents a dirty database whilst DBC is the cleansed instance of DBS computed by the function clr working iteratively on each subset Si ⊆ DBS where Ci = clr(Si) and Ci ⊆ DBC. Since many consistency properties are defined or scoped on portions of the original database, the cleansing activity is not carried out on the whole dataset DBS but on several subsets Si of the original one. The clr function applied to Si may produce a Ci that is unchanged with respect to Si (in case Si had good quality), or it may produce a changed Ci (in case some quality actions have been triggered). Since the semantics of changed/unchanged are domain dependent, an equals function which looks for equality between Si and Ci is required. Moreover, since the function clr might not effectively cleanse the data, an evaluation of its behaviour is carried out using a further function ccheck which is based on formal methods. ccheck is used to verify the consistency of both Si and Ci. Several outcomes of the cleansing routines can be identified in this way, e.g., a dirty Si may have been cleansed into a consistent Ci, or a dirty Si may have been turned into an inconsistent Ci, or a clean Si may have been modified into an inconsistent Ci. Nevertheless, even if ccheck is based on formal methods, not enough guarantees are given about the correctness of ccheck (i.e., we cannot use ccheck as an oracle). Instead, the compared results given by the functions ccheck, equals, and clr allow one to obtain
useful insights into the consistency of the clr function and, at the same time, help to evaluate the ccheck and equals functions. This procedure will be further detailed in the following paragraph by means of examples. For the sake of clarity, we formally describe the RDQA process by defining the following functions: Function 1 (clr). Let S be a dataset according to Definition 1, then clr : S → C is a total function where C represents the cleaned instance of S. Function 2 (rep). Let X be a dataset according to Definition 1, then rep : X → e is a total function which returns a representative element e ∈ X. Function 3 (ccheck). Let K be a dataset according to Definition 1, then ccheck : K → {0, 1} where ccheck(K) returns 1 if there exists a sequence ε ∈ K such that ε contains an error, 0 otherwise. Clearly, the function ccheck can be realised by using any formal method. In this context, we use Model Checking techniques, as described in Section 3. Function 4 (equals). Let S and C be datasets according to Definition 1; we define equals : S × C → {0, 1}, which returns 0 if no differences between S and C are found, 1 otherwise. The RDQA procedure is applied iteratively, refining the functions clr and ccheck at each step until a desired consistency level is reached. Fig. 1(a) shows a graphical representation of an RDQA iteration, whilst Tab. 1(b) outlines the semantics of the FS±, FC±, and D± sets, which are used in Tab. 1(a) and Fig. 1(a). Each iteration computes the Double Check Matrix (DCM), e.g. Tab. 1(a), where the information just introduced is summarised in order to analyse the consistency level reached. Row 1 of Tab. 1(a) gives the number of items for which no error was found by ccheck applied both on Si and Ci and no difference between the original instance and the cleansed one was found by equals. In this case both ccheck and clr agreed that the original data was clean and no intervention was needed. Differently, row 4 shows the number of items for which no error was found by ccheck(Si) whilst equals(Si,Ci) states that a cleansing intervention took place, producing a wrong result recognised as dirty by ccheck(Ci) = 1. The case identified by row 4 is very important since it discloses bugs either in the cleansing procedure, in the clr function, or in the ccheck function (or a combination thereof). Row 8 shows another interesting case: it reports the number of items that were originally dirty (ccheck(Si) = 1) and for which an intervention took place (equals(Si,Ci) = 1) that was not effective, since ccheck(Ci) = 1. Due to lack of space, we do not comment on the cases related to rows 2, 3, 5, 6 and 7, from which useful information can be derived too. They will be extensively commented on in Sec. 4 using a real example. It is worth noting that, thanks to the comparisons outlined, the DCM can be used as a bug hunter to start an improvement process which leads to a better understanding of the domain rules, and to refine the implementation of the cleansing activities. The RDQA approach does not guarantee the correctness of the data cleansing process; nevertheless, it helps make the process more robust with respect to data consistency.
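A compact way to read one RDQA iteration is as a loop over the subsets Si that fills the eight cells of the Double Check Matrix from the outcomes of ccheck and equals. The following Python sketch is our illustration of this bookkeeping, assuming that domain-specific clr, ccheck and equals functions are supplied; it is not the authors' implementation.

from collections import Counter

def double_check_matrix(subsets, clr, ccheck, equals):
    """Count, for every subset S_i, the triple (ccheck(S_i), equals(S_i, C_i), ccheck(C_i)).

    subsets: iterable of event sequences S_i (Finite State Event Datasets)
    clr:     S_i -> C_i            (cleansing function)
    ccheck:  dataset -> 0 or 1     (1 iff an inconsistency is found)
    equals:  (S_i, C_i) -> 0 or 1  (1 iff the cleansing changed the data)
    returns: Counter mapping each (0/1, 0/1, 0/1) triple to its cardinality
    """
    dcm = Counter()
    for s in subsets:
        c = clr(s)
        dcm[(ccheck(s), equals(s, c), ccheck(c))] += 1
    return dcm

# e.g. dcm[(0, 1, 1)] corresponds to row 4 of Tab. 1(a): originally clean data made dirty by clr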
Table 1. (a) The Double Check Matrix. (b) The definition of the sets resulting from the ccheck and equals functions.

(a)
  ccheck(Si)  equals(Si,Ci)  ccheck(Ci)   Result Cardinality
      0             0             0       |FS− ∩ D− ∩ FC−|
      0             0             1       |FS− ∩ D− ∩ FC+|
      0             1             0       |FS− ∩ D+ ∩ FC−|
      0             1             1       |FS− ∩ D+ ∩ FC+|
      1             0             0       |FS+ ∩ D− ∩ FC−|
      1             0             1       |FS+ ∩ D− ∩ FC+|
      1             1             0       |FS+ ∩ D+ ∩ FC−|
      1             1             1       |FS+ ∩ D+ ∩ FC+|

(b)
  FS+ = (rep(Si) | ccheck(Si) = 1)      FS− = (rep(Si) | ccheck(Si) = 0)
  FC+ = (rep(Ci) | ccheck(Ci) = 1)      FC− = (rep(Ci) | ccheck(Ci) = 0)
  D+  = (rep(Si) | equals(Si,Ci) = 1)   D−  = (rep(Si) | equals(Si,Ci) = 0)
3 The Role of Model Checking in Data Analysis

We do not introduce the model checking technique in depth due to lack of space; for a detailed description the reader can refer to [6,5]. Informally speaking, model checking is a technique that allows one to analyse all the possible configurations (states) in which a system can be during its execution (reachability analysis). In this context, a model checker verifies whether all system states satisfy a given set of properties (invariants). In the following, model checking based on Finite State Systems will be exploited.

3.1 Model Checking on Finite State Systems

Definition 3 (Finite State System). A Finite State System (FSS) S is a 4-tuple (S,I,A,F), where: S is a finite set of states, I ⊆ S is a finite set of initial states, A is a finite set of actions and F : S × A → S is the transition function, i.e. F(s, a) = s′ iff the system from state s can reach state s′ via action a. In order to define the model checking problem for such a system, we assume that a set of invariant conditions φ = {ϕ1, . . . , ϕn} has been specified. A state e ∈ S is an error state if the invariant formula (ϕ1 ∧ . . . ∧ ϕn) is not satisfied. Then, we can define the set of error states E ⊆ S as the union of such states. In order to have a Finite State System we require that each error sequence reaches an error e ∈ E in at most T actions. Note that this restriction, although theoretically quite relevant, has a limited practical impact in our contexts (e.g. see Section 4). Now we are in a position to state the model checking problem on Finite State Systems. Definition 4 (Model Checking Problem on FSS). Let S = (S, I, A, F) be an FSS. Then, a model checking problem (MCP in the following) is a triple P = (S, φ, T) where φ = {ϕ1, . . . , ϕn} is the set of the invariant conditions and T is the finite horizon. Intuitively, a solution to an MCP is a set of sequences for any initial state, that is a set of paths in the system transition graph, starting from each initial state i ∈ I and ending in an error state e ∈ E. More formally, we have the following.
Definition 5 (Trajectory). A trajectory in the FSS S = (S, I, A, F) is a sequence π = (<s0, a0> <s1, a1> <s2, a2> . . . <sn−1, an−1> <sn>) where, ∀i ∈ [0, n − 1], si ∈ S is a state, ai ∈ A is an action and F(si, ai) = si+1. Definition 6 (Feasible Solution). Let S = (S, I, A, F) be an FSS and let P = (S, φ, T) be an MCP. Then a feasible solution for P is a trajectory π in S s.t.: ∀sI ∈ I, |π| = k, k ≤ T, π(1) = sI and π(k) ∈ E.
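For intuition, the sketch below shows the essence of such a bounded check for a single event sequence: starting from an initial state, it replays up to T actions through the transition function and returns the trajectory as soon as the conjunction of invariants is violated. It is a deliberately simplified Python illustration of Definitions 3-6, not the algorithm used by CMurphi.

def check_sequence(initial_state, actions, transition, invariants, horizon):
    """Replay up to `horizon` actions and return the trajectory ending in an error state, if any.

    transition: (state, action) -> next state    (the function F of Definition 3)
    invariants: list of predicates state -> bool (the conditions phi_1, ..., phi_n)
    horizon:    maximum number of actions to consider (the finite horizon T)
    """
    state, trajectory = initial_state, [initial_state]
    for action in actions[:horizon]:
        state = transition(state, action)
        trajectory.append(state)
        if not all(phi(state) for phi in invariants):
            return trajectory          # a path from an initial state to an error state
    return None                        # no invariant violated within the horizon

# toy usage: states are integers and the single invariant forbids values above 2
trace = check_sequence(0, ["inc", "inc", "inc"], lambda s, a: s + 1,
                       [lambda s: s <= 2], horizon=10)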
3.2 The CMurphi Model Checker

CMurphi [7] is an explicit-state model checker which allows one to model a system as a Finite State System (according to Definition 3). The system dynamics and properties can be described through the CMurphi description language, the former as a collection of transition rules and the latter as boolean expressions called invariants (according to Definition 4). The output of the CMurphi verification process is an error trace which describes the sequence of actions (according to Definition 5) that leads to an error state. CMurphi has proved to be very effective thanks to its high-level programming language for FSS, which has the capability to use external C/C++ functions [9] by embedding their definitions in the model. This feature makes the model easily extensible with respect to the interaction with other tools or libraries (e.g., ODBC to handle database connections). Moreover, CMurphi implements many state space reduction techniques which are useful to deal with (possibly) huge state spaces (see [18] for details).
4 An Industrial Application

The RDQA approach has been tested on a real case scenario. The C.R.I.S.P research center [8] exploits the content of a Public Administration database to study labour market dynamics at territorial level [16]. A lot of errors and inconsistencies (missing information, incorrect data, etc.) have been detected in the database; therefore a cleansing process is executed, and the RDQA approach has been used to improve this process.

4.1 Domain Description

According to Italian law, every time an employer hires or dismisses an employee, or a contract of employment is modified (e.g. from part-time to full-time, or from a fixed-term contract to an unlimited-term one), a communication (Mandatory Communication hereafter) is sent to a registry (job registry hereafter) by the employer. The registry is managed at provincial level, so every Italian province has its own job registry recording the working history of its inhabitants. An Italian province is an administrative division which encompasses a set of geographically close cities and towns. In this scenario, the database of a province is used by the C.R.I.S.P to extract longitudinal data upon which further analyses are carried out. Every mandatory notification (event hereafter) contains several data, among which the most important for the purpose of this paper are: event id (a numeric id identifying the communication), employee id (an id identifying the employee), event date
(the communication date), event type (whether it is the start, the cessation, the extension or the conversion of a working contract), full time flag (a flag stating whether the event is related to a full-time or a part-time contract), employer id (an identifier of the employer), contract type (e.g. fixed-term contract, unlimited-term contract, apprenticeship, etc.).

4.2 Career (Simplified) Model

For the sake of simplicity, we have modelled a set of events which maps onto the mandatory communications data. The events are: Start: the worker has signed a contract and has started working for an employer. Further information describing the event: the date, the employee id, the employer id, the contract type, the full time flag. Cessation: the worker has stopped working and the contract is terminated. Further information describing the event: the date, the employee id, the employer id. Extension: a fixed-term contract has been extended to a new date. Further information describing the event: the current date, the employee id, the employer id, the new termination date. Conversion: a contract type has changed, e.g. from a fixed-term to an unlimited-term contract. Further information describing the event: the date, the employee id, the employer id, the new contract type. Some business rules can be inferred from the Italian Labour Law, which states, for example, that an employee can have only one full-time contract active at a time, or alternatively no more than two part-time contracts. According to the law, the career of a person showing two start events of a full-time contract without a cessation in between is to be considered invalid. Such errors may happen when Mandatory Notifications are not recorded or are recorded twice. In our context a job career is a temporal sequence of events describing the evolution of a worker's state, starting from the beginning of her/his working history. It is worth noting that a person can have two contracts at the same time only if they are part-time and they have been signed with two different organisations.

4.3 Graph Representation

Figure 1(b) shows a simplified representation of the evolution of a job career where nodes represent the state of a worker at a given time (i.e., the number of active part-time/full-time contracts) whilst edges model how an event can modify a state. To give an example, a valid career can evolve by signing two distinct part-time contracts, then closing one of them, and then converting the remaining part-time contract into a full-time contract (i.e., unemp, emp1, emp2, emp1, emp4). For the sake of clarity, Figure 1(b) focuses only on nodes/edges describing a correct evolution of a career (i.e., the white nodes) whilst all other nodes/edges are omitted (e.g., careers having events related to unsubscribed contracts). Nevertheless, Figure 1(b) contains two filled nodes (i.e. emp5, emp6) which are helpful to describe some invalid careers. In this regard, a career can become invalid by subscribing to three or more part-time contracts (i.e., unemp, emp1, emp2, emp6) or by activating both part-time and full-time contracts (i.e., unemp, emp1, emp2, emp5).
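To illustrate how the business rules of Sec. 4.2 can be turned into a consistency check over such event sequences, the sketch below implements a simplified version of the career automaton of Figure 1(b) in Python. It is our illustration only (the paper's actual check is the CMurphi model described in Sec. 4.4); it merely tracks the number of active part-time and full-time contracts and flags the invalid states emp5 and emp6.

def career_is_consistent(events):
    """Check a worker's event sequence against the simplified career rules of Figure 1(b).

    events: date-ordered list of (event_type, full_time) pairs, where event_type is one of
            "start", "cessation", "conversion" or "extension" and full_time is a boolean
            describing the (new) contract type involved in the event.
    """
    part_time, full_time = 0, 0
    for event_type, is_full_time in events:
        if event_type == "start":
            full_time += is_full_time
            part_time += not is_full_time
        elif event_type == "cessation":
            if is_full_time:
                full_time -= 1
            else:
                part_time -= 1
        elif event_type == "conversion":       # the contract switches type
            if is_full_time:                   # e.g. a part-time contract becomes full-time
                part_time -= 1
                full_time += 1
            else:
                full_time -= 1
                part_time += 1
        # an "extension" only changes the end date, not the number of active contracts
        if part_time < 0 or full_time < 0:     # cessation/conversion without a matching start
            return False
        if part_time > 2 or (full_time >= 1 and part_time + full_time > 1):
            return False                       # the invalid states emp6 and emp5 of Figure 1(b)
    return True

# e.g. the trajectory unemp, emp1, emp2, emp5: two part-time starts followed by a conversion
assert career_is_consistent([("start", False), ("start", False), ("conversion", True)]) is False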
Fig. 1. (a) A schematic view of the Robust Data Quality Analysis iteration, relating DBS, clr, DBC, ccheck and equals to the resulting FS+, FS−, D+, D−, FC+ and FC− sets. (b) An abstract representation of the dynamics of a job career, with states unemp, emp1 (|PT| = 1), emp2 (|PT| = 2), emp4 (|FT| = 1), emp5 (|FT| + |PT| > 1) and emp6 (|PT| > 2), where st = start, cs = cessation, cn = conversion, and ex = extension.
4.4 The CMurphi Model

In this section we describe the principal parts of the model used to represent the domain consistency rules. Obviously, depending on the properties we want to verify, the model can be modified and/or extended. We can see our model as composed of three main parts: Static Part. It is composed of declarative statements which are used to model the employment status of a worker, as well as declarations of constants, datatypes, and external C/C++ functions through the CMurphi language. Behavioural Part. It describes the dynamics of the system through the definition of start states and transition rules, as shown in Figure 2. The ruleset keyword is used to define the initial states of each career (the unemp state of Figure 1(b)). Then, the transition rule next_event allows CMurphi to compute the evolution of a career by reading events from the corresponding dataset. Invariant Part. In this part, properties which must be satisfied along a career are defined using an external C++ function safe_transition. More precisely, CMurphi verifies whether each invariant clause is always satisfied along any career evolution (safety properties). Then, CMurphi returns the set of careers violating at least one invariant clause (i.e. an error trajectory as in Definition 5). Note that the output can be easily used for the RDQA methodology, as described in Section 2. With reference to the example of Figure 1(b), the trajectory (<unemp, st> <emp1, st> <emp2, cn> <emp5>) violates both the Part-time and Full-time invariants of Figure 2.
5 Robust Data Analysis: Experimental Results

In this section we show some experimental results obtained on an administrative database DBS having 1,248,752 events (i.e. |DBS|) and 213,566 careers (i.e., all
The behavioural part:

-- one start state per worker p: each career begins in the unemp state
ruleset p: min_worker..max_worker do
  startstate "start"
  BEGIN
    no_errors := startstate_call();
    worker := get_worker(p);
    event_iterator := get_index(worker);
    no_errors := set_path(event_iterator, p);
  END;
END;

-- the career evolves by consuming the next event from the dataset
rule "next_event"
  exists_an_event(worker) ==>
  BEGIN
    event_iterator := event_iterator + 1;
  END;

The invariant part:

invariant "Valid Start Event"
  safe_transition(event_iterator, "st");
invariant "Valid Extension Event"
  safe_transition(event_iterator, "ex");
invariant "Valid Conversion Event"
  safe_transition(event_iterator, "cn");
invariant "Valid Cessation Event"
  safe_transition(event_iterator, "cs");
invariant "Part-time"
  max_PT = false;
invariant "Full-time"
  max_FT = false;
invariant "Part-time and Full-time"
  double_FT_PT = false;
Fig. 2. The behavioural and invariant part of the CMurphi model
distinct subsets Si where i ∈ [1, . . . , 213,566]). Note that the results refer to the first iteration of the RDQA process described in Section 2, in order to highlight how the RDQA process was useful for identifying inconsistencies in real data cleansing operations. The application of the function clr (defined according to Function 1) on DBS generated a new dataset DBC with |DBC| = 1,089,895. Then, the function ccheck has been realised according to Definition 3, using the CMurphi model checker as detailed in Section 4. The summarised DCM shown in Table 2 was crucial in order to refine the clr function. The RDQA was performed on a 32-bit 2.2 GHz CPU in about 20 minutes using 100 MB of RAM. The results are briefly commented on in the following list: Case 1: represents careers already clean that have been left untouched, which are about 45% of the total. Case 2: refers to careers considered (by ccheck) valid before but not after cleansing, although they have not been touched by clr. As expected this subset is empty. Case 3: describes valid careers that have been improperly changed by clr. Note that, although such careers remain clean after the intervention of clr, the behaviour of clr has been investigated to prevent the changes introduced by clr from turning into errors in the future. Case 4: represents careers originally valid that clr has made invalid. These careers have proven to be very useful to identify and correct bugs in the clr implementation. Case 5: refers to careers considered (by ccheck) not valid before but valid after cleansing, although they have not been touched by clr. Though the number of such careers is negligible, this result was useful to identify and repair a bug in ccheck. Case 6: describes invalid careers that clr was able neither to detect nor to correct, and which were consequently left untouched. Case 7: describes the number of (originally) invalid careers which ccheck recognises as properly cleansed by clr at the end. Case 8: represents careers originally invalid which have not been properly cleansed since, despite an intervention of clr, the function ccheck still identifies them as invalid.
Table 2. The Double Check Matrix on an administrative database

  Case   ccheck(Si)   equals(Si,Ci)   ccheck(Ci)   Result Cardinality
   1         0              0              0            96,353
   2         0              0              1                 0
   3         0              1              0            32,789
   4         0              1              1             1,399
   5         1              0              0                 3
   6         1              0              1                40
   7         1              1              0            74,904
   8         1              1              1             8,078
The DCM shows that the original database had a very low quality of data (only 45% of the original careers were not affected by consistency issues), thus justifying the need for data cleansing. Furthermore, considering the single DCM entries, cases 3, 4, 6, and 8 provided useful information for improving clr, while case 5 provided information for improving ccheck. In summary, ccheck was used to check clr and clr was used to check ccheck (hence the name double check). Since neither the ccheck nor the clr implementation can be guaranteed to be error free, there is no assurance that all possible errors can be found through the DCM; nevertheless, it has helped to improve the cleansing routines in the industrial example just introduced.
6 Conclusions and Future Work

The paper has introduced the RDQA (Robust Data Quality Analysis) methodology, which proved to be effective in a specific industrial data quality case, namely cleansing a database containing the work careers of the inhabitants of an Italian province. The cleansed data is used to study labour market dynamics at territorial level. Moreover, the authors are working on applying the RDQA to other real-life domains; indeed, the RDQA approach can be used for a general class of data contents which has been identified within the paper. This class comprises all databases (or, broadly speaking, all data sources) containing sequences of events that can be modelled by Finite State Systems. The business rules holding for such domains can be turned into consistency rules expressed by means of formal methods, which can be automatically verified against data (before and after the cleansing). This approach can exploit tools and techniques developed and validated in the formal methods domain (e.g. model checking). As future work the authors would like to explore temporal logic (or graphical formalisms based on temporal logic) as a way to express consistency rules in a more user-friendly way, to facilitate the validation activities of domain experts (who may have little knowledge of formal topics). Currently, the authors' research goes in the direction of exploiting formal methods to carry out sensitivity analysis on dirty data, i.e. to identify to what extent the dirty data, as well as the cleansing routines, may affect the values of statistical indicators that are computed upon the cleansed data.
References
1. Afrati, F.N., Kolaitis, P.G.: Repair checking in inconsistent Databases: Algorithms and Complexity. In: Proceedings of the 12th International Conference on Database Theory, ICDT 2009, pp. 31–41. ACM, New York (2009)
2. Barateiro, J., Galhardas, H.: A Survey of Data Quality Tools. Datenbank-Spektrum 14, 15–21 (2005)
3. Batini, C., Cappiello, C., Francalanci, C., Maurino, A.: Methodologies for Data Quality Assessment and Improvement. ACM Comput. Surv. 41, 16:1–16:52 (2009)
4. Batini, C., Scannapieco, M.: Data Quality: Concepts, Methodologies and Techniques. In: Data-Centric Systems and Applications. Springer, Heidelberg (2006)
5. Burch, J.R., Clarke, E.M., McMillan, K.L., Dill, D.L., Hwang, L.J.: Symbolic Model Checking: 10^20 States and Beyond. Inf. Comput. 98(2), 142–170 (1992)
6. Clarke, E.M., Grumberg, O., Peled, D.A.: Model Checking. The MIT Press, Cambridge (1999)
7. CMurphi web page, http://www.dsi.uniroma1.it/~tronci/cached.murphi.html
8. CRISP Research Center web page, http://www.crisp-org.it
9. Della Penna, G., Intrigila, B., Melatti, I., Minichino, M., Ciancamerla, E., Parisse, A., Tronci, E., Venturini Zilli, M.: Automatic Verification of a Turbogas Control System with the Murϕ Verifier. In: Maler, O., Pnueli, A. (eds.) HSCC 2003. LNCS, vol. 2623, pp. 141–155. Springer, Heidelberg (2003)
10. Dovier, A., Quintarelli, E.: Applying Model-checking to solve Queries on semistructured Data. Computer Languages, Systems & Structures 35(2), 143–172 (2009)
11. Embury, S.M., Missier, P., Sampaio, S., Greenwood, R.M., Preece, A.D.: Incorporating Domain-Specific Information Quality Constraints into Database Queries. J. Data and Information Quality 1, 11:1–11:31 (2009)
12. Fan, W., Geerts, F., Jia, X.: A Revival of Integrity Constraints for Data Cleaning. Proc. VLDB Endow. 1, 1522–1523 (2008)
13. Gill, A.: Introduction to the Theory of Finite-state Machines. McGraw-Hill, New York (1962)
14. Khoussainov, B., Nerode, A.: Automata Theory and Its Applications. Birkhauser, Boston (2001)
15. Maletic, J., Marcus, A.: Data cleansing: beyond Integrity Analysis. In: Proceedings of the Conference on Information Quality, pp. 200–209 (2000)
16. Martini, M., Mezzanzanica, M.: The Federal Observatory of the Labour Market in Lombardy: Models and Methods for the Construction of a Statistical Information System for Data Analysis. In: Larsen, C., Mevius, M., Kipper, J., Schmid, A. (eds.) Information Systems for Regional Labour Market Monitoring - State of the Art and Prospectives. Rainer Hampp Verlag (2009)
17. Müller, H., Freytag, J.C.: Problems, Methods and Challenges in Comprehensive Data Cleansing. Technical Report HUB-IB-164, Humboldt-Universität zu Berlin, Institut für Informatik (2003)
18. Murphi web page, http://sprout.stanford.edu/dill/murphi.html
19. Scannapieco, M., Missier, P., Batini, C.: Data Quality at a Glance. Datenbank-Spektrum 14, 6–14 (2005)
20. Vardi, M.Y.: Automata Theory for Database Theoreticians. Theoretical Studies in Computer Science, pp. 153–180. Academic Press Professional, Inc., London (1992)
Generating Automated News to Explain the Meaning of Sensor Data

Martin Molina1, Amanda Stent2, and Enrique Parodi1

1 Department of Artificial Intelligence, Technical University of Madrid, Spain
[email protected],
[email protected]
2 AT&T Labs - Research, Florham Park, NJ, USA
[email protected]

Abstract. An important competence of human data analysts is to interpret and explain the meaning of the results of data analysis to end-users. However, existing automatic solutions for intelligent data analysis provide limited help to interpret and communicate information to non-expert users. In this paper we present a general approach to generating explanatory descriptions about the meaning of quantitative sensor data. We propose a type of web application: a virtual newspaper with automatically generated news stories that describe the meaning of sensor data. This solution integrates a variety of techniques from intelligent data analysis into a web-based multimedia presentation system. We validated our approach on a real-world problem and demonstrated its generality using data sets from several domains. Our experience shows that this solution can facilitate the use of sensor data by general users and, therefore, can increase the utility of sensor network infrastructures. Keywords: Interactive data analysis, intelligent multimedia presentation system, virtual newspaper, data-to-text system.
1 Introduction

Collections of large quantitative datasets from distributed sensors are now becoming widely available online. To use these datasets, human analysts usually examine the data, choose analytical methods (statistical analyses, clustering methods, etc.), decide how to present the results (e.g. by choosing and constructing relevant graphics), and explain what the results mean. However, human analysts are expensive, and since the meaning of quantitative data is not always explicit, understanding of these large datasets is too often restricted to domain experts. Many analytical methods are implemented in statistics and data mining packages such as R [13] and Weka [20], or visualization packages such as Google Charts. This has led to the development of automatic solutions for intelligent data analysis. These systems can examine data, select and apply analytical methods, and visualize the results. However, existing automatic solutions for intelligent data analysis provide limited help to explain the meaning of the data to non-expert end-users [16].
On-line solutions that automatically construct textual or multimedia explanations of the meaning of sensor data using non-technical language can facilitate access by non-expert users with any level of knowledge and, therefore, can increase the utility of sensor network infrastructures. For example, consider a web server in the hydrological domain that collects real-time data about water levels, water flows and other hydrological information recorded by a sensor network. In contrast to a specialized web application that only presents results of analyses to expert hydrologists graphically, a web application with text explanation capabilities is potentially more useful to a wider range of users (e.g., municipalities, civil protection, engineering consultants, educators, etc.). Other examples of public web servers with large quantitative datasets potentially useful for different communities of users are ship traffic [18], crime statistics [17] and wildlife tracking data [12]. In this paper, we describe our research on systems that can automatically generate explanatory descriptions about the meaning of quantitative sensor data for general users. We propose a type of web application, a virtual newspaper with automatically generated news stories, as a way to communicate this information to general users. To develop this type of application it is necessary to find an efficient combination of methods from different fields such as automated data analysis, multimedia presentation systems (discourse planning, natural language generation, graphics generation), and hypermedia-based representations. In the paper we focus on our automatic method for selecting data analyses and planning discourses that explain the results. In addition, we describe the type of presentation our system generates, and a general architecture that supports interaction with these presentations. We also present the results of an evaluation of our solution in a real-world hydrological task, and a comparison with related work.
2 The Problem of Generating Explanatory Descriptions
In order to characterize the problem of generating explanatory descriptions, we present some general desiderata. Our first set of desiderata is related to the data input to our system. We assume that the input to our system is a quantitative dataset corresponding to measures from a set of geographically distributed sensors. Each measure includes temporal references (date/time) and spatial references (latitude/longitude) as well as, potentially, other information sampled periodically by the sensor over a certain interval of time (for example, a number of days, months or years). We assume that the input to our system normally includes a large number of measures (thousands or millions of measures). An example of this type of dataset is the information measured by the sensor network of the national information system SAIH in Spain (SAIH is the Spanish acronym for Automatic Information System in Hydrology). This system collects hydrologic data about water flows and water levels, and meteorological data (e.g., rainfall), using thousands of sensors located at river channels, reservoirs and other selected locations. We used this system as an example of sensor network data for our task. In this case, data is measured using static sensors located in fixed geographic locations. We also used datasets from moving sensors. For example, we used data from a large set of environmental information collected by ships over the last ten years for
the international VOSclim project [18]. This data set includes tens of measures for each of hundreds of thousands of events for hundreds of ships. This publicly available data set is a good representative example of a sensor data set with moving sensors. Other examples of data from moving sensors come from Obis-Seamap [12], a web site that includes wildlife tracking datasets (including whales, turtles, seals, and birds). Our second set of desiderata is related to the content and form of the generated presentations. Our goal is to generate explanatory descriptions of the meaning of the input sensor data. According to the context of our research work, we assume that generated descriptions should:
• Be informative and persuasive. The objective is to inform the user about the content of the sensor data. The descriptions should summarize relevant information and include persuasive explanations and evidence.
• Be useful for non-expert users. In order to be understood by non-expert users, the content of the description should include a minimum number of specialized terms and should assume minimal background knowledge about the domain.
• Have a uniform style. The style of the presentation should be uniform and easy to read. The style of the presentation should be domain independent to facilitate applicability of this solution to different problems.
Finally, we have desiderata relating to the user-system interaction. We assume that the system should produce interactive presentations. The user should be able to easily request additional explanations of subsets of the data. This is important to let the user find evidence supporting presented information and/or find additional relevant information not found by the system. The user-system interaction should be based on well-known standards and intuitive communication metaphors to be easy to use by general users (for example, using hypermedia solutions). In the following sections we describe our system, focusing on two main tasks: (1) generating the content of the news (what information to communicate) and (2) generating the presentation (how to present the information).
3 Generating the Content of the News
To automatically generate the content of our ‘sensor news’ stories we need to construct descriptions that summarize relevant information and provide explanations and evidence. This task requires finding and abstracting relevant information from sensor data, and constructing descriptions with appropriate explanatory discourses. We studied how humans write descriptions that summarize this type of information. For static sensors, we analyzed sensor data and descriptions in the hydrological domain corresponding to the SAIH system. We studied how expert hydrologists generate descriptions related to water management and alerts related to floods. For this purpose, we used as main sources: interviews with experts, specialized web sites where human experts write summaries, and news stories from newspapers related to hydrologic problems (such as floods). For moving sensors, we studied how humans describe geographical movements. We created a corpus of human-authored descriptions of geographic movements of wild animals from
scientific papers on wildlife tracking [3]. We also created a separate corpus for geographic movements by crowdsourcing using Amazon Mechanical Turk (these datasets are available to the research community; contact the authors). In general, the analyzed descriptions include the following main characteristics: (1) relevant facts, the texts include descriptions that summarize the main characteristics of the data, at an appropriate level of abstraction, including discovered patterns, trends and general properties; (2) explanatory descriptions, the texts elaborate the presentation of relevant facts using discourse patterns such as causal explanations, chronological descriptions, comparison of values, enumerations and examples that provide supporting evidence; and (3) graphical information, the descriptions usually combine natural language text and graphics (maps, charts, etc.). To generate the content of this type of description, we use a combination of a data analyzer and a discourse planner. The role of the data analyzer is to discover new facts and abstract information. The discourse planner uses the data analyzer to automatically construct explanatory descriptions.
3.1 The Discourse Planner
Our discourse planner is conceived as a knowledge-based hierarchical planner. The knowledge base includes partial discourse patterns at different levels of abstraction, together with conditions for selecting each discourse pattern. Given an input presentation goal (e.g. to summarize a subset of the data), the planner constructs the output presentation plan by iterative refinement of the goal using the discourse patterns. The planner itself is a modified version of HTN (Hierarchical Task Network) planning [4]. The generated discourse plan includes information about the rhetorical structure of the discourse (with rhetorical relations [7]) and information about propositions in the discourse, represented as references to subsets of the input data.

GOAL: elaborate_movement(x)
CONDITION: (path(x) ∧ aggregated_stops_chronologically(x, 7, y))
SUBGOALS: {goal(describe_beginning(x), d1), goal(describe_stops_chronologically(y), d2)}
DISCOURSE: relation(temporal_sequence, nucleus({d1,d2}), satellite({}))
Fig. 1. Example discourse pattern
We collected and manually authored discourse patterns for each specific domain. For example, we found that it was possible to generate discourses about flood risks in the hydrologic domain with 245 discourse patterns. We were also able to generate summaries of basic geographic movements of moving sensors with 56 discourse patterns. Figure 1 shows an example of one of these patterns. This pattern means: if it is possible to make a spatio-temporal clustering of a path x with 7 or fewer stops, then (1) generate a description for the beginning of the movement (description d1), (2) generate a chronological description for the sequence of these stops (description d2), and (3) link d1 and d2 with the rhetorical relation temporal-sequence. We currently use 14 standard rhetorical relations such as elaboration-general-specific, exemplify, list, and contrast.
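As an illustration of how such patterns can drive hierarchical refinement, the sketch below encodes a pattern in the spirit of Fig. 1 and expands a presentation goal by recursively applying the first applicable pattern. The Pattern and DiscoursePlanner classes are a hypothetical simplification for exposition; they are not the authors' knowledge representation, and the condition is reduced to a test on a precomputed analysis context.

    # Minimal sketch of pattern-driven discourse planning (hypothetical encoding).
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Pattern:
        goal: str                          # goal name this pattern can refine
        condition: Callable[[dict], bool]  # applicability test on the analysis context
        subgoals: List[str]                # subgoals produced by the refinement
        relation: str                      # rhetorical relation linking the subgoals

    @dataclass
    class PlanNode:
        goal: str
        relation: str = ""
        children: List["PlanNode"] = field(default_factory=list)

    class DiscoursePlanner:
        def __init__(self, patterns: List[Pattern]):
            self.patterns = patterns

        def refine(self, goal: str, context: dict) -> PlanNode:
            """Expand a goal with the first applicable pattern; unmatched goals stay as leaves."""
            node = PlanNode(goal)
            for p in self.patterns:
                if p.goal == goal and p.condition(context):
                    node.relation = p.relation
                    node.children = [self.refine(g, context) for g in p.subgoals]
                    break
            return node

    # Example pattern mirroring Fig. 1 (condition simplified to flags in the context).
    patterns = [Pattern(
        goal="elaborate_movement",
        condition=lambda ctx: ctx.get("is_path") and ctx.get("n_stops", 99) <= 7,
        subgoals=["describe_beginning", "describe_stops_chronologically"],
        relation="temporal_sequence")]

    plan = DiscoursePlanner(patterns).refine("elaborate_movement",
                                             {"is_path": True, "n_stops": 5})

In this toy version the leaves of the resulting tree correspond to primitive description goals and the internal nodes carry the rhetorical relations, mirroring the structure of the generated discourse plan described above.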
Our discourse planner can be adapted to new domains by modifying the existing discourse strategies and including new ones. In addition, many discourse patterns (for example, patterns for spatio-temporal descriptions and certain general patterns for causal descriptions) are reusable in different domains.
3.2 The Data Analyzer
The goal of the data analyzer is to find relevant information, patterns and abstractions from the sensor data. The data analyzer can cluster the data, analyze trends in the data, or describe the data in terms of most common feature values, example instances, and exceptions. This process creates subsets of the data, from which propositions for the discourse are produced by the discourse planner.
[Fig. 2 is a diagram of the layered data representation: observed events (e1, e2, ...) recorded by the sensors at the bottom; augmented events (e1′, e2′, ...) above them; optional paths (p1, p2, ...) linking events; and aggregates (a1, ..., a5) at the top. Example attribute sets shown in the figure include {InitialTime(2010-06-01T18:30), FinalTime(2010-07-01T11:30), Geometry(Region(…)), Ocean('Pacific Ocean'), …} for an aggregate, {…, City('Valparaiso'), Surface(Water), …} for an augmented event, and {InitialTime(2010-06-15T10:30), Geometry(Point(-33.0346, -71.617)), Agent_type('Cruise ship'), …} for an observed event.]
Fig. 2. Data representation for data analysis
We designed a general data representation able to cope with large sensor datasets; it has the following basic elements (see Fig. 2):
• Events. Input sensor data is represented as a set of events E = {e1, e2, …}. Each event is characterized by a set of attribute values. We use several spatio-temporal attributes: date/time and geographic attributes (spatial point, geographical point, spatial area, etc.). In addition, each particular domain uses additional quantitative or qualitative attributes corresponding to the specific observable properties of the dynamic system being measured by the sensors (e.g., temperature, pressure, etc.).
• Paths. Each path pi is a sequence of events (e1, e2, …). Paths are used, for example, to represent the geographical movement of a vehicle. Paths are represented by the set P = {p1, p2, …}.
• Aggregates. Aggregates are represented by the set A = {a1, a2, …}. Each element ai aggregates a set of events, a set of paths, or other aggregates. Each aggregate is described with a set of attributes.
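A minimal sketch of how these three elements could be held in memory; the attribute names follow the examples of Fig. 2, but the classes themselves are an assumed simplification rather than the system's actual schema.

    # Hypothetical in-memory form of the events/paths/aggregates representation.
    from dataclasses import dataclass, field
    from typing import Any, Dict, List, Union

    @dataclass
    class Event:
        attrs: Dict[str, Any]      # e.g. {"InitialTime": "2010-06-15T10:30", "Agent_type": "Cruise ship"}

    @dataclass
    class Path:
        events: List[Event]        # ordered sequence, e.g. the positions reported by one ship

    @dataclass
    class Aggregate:
        members: List[Union["Event", "Path", "Aggregate"]]     # events, paths or nested aggregates
        attrs: Dict[str, Any] = field(default_factory=dict)    # e.g. {"Ocean": "Pacific Ocean"}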
In our data analyzer we distinguish between two main steps: (a) attribute extension and (b) event aggregation. The first step finds the values of additional attributes for each event and the second step creates aggregate events. Figure 2 summarizes this process and our data representation. At the bottom, there are the events recorded by the sensors, each with a set of attribute values. Above them are the extended events, i.e. the recorded events augmented with new attribute values corresponding to the attribute extension step. Events can optionally be linked into paths, if they indicate movement of individuals in space/time. Finally, events are grouped into aggregates created by the data analysis process. The new attribute values of an event are obtained from other attribute values of the event (for example, by qualitative interpretation, aggregation or generalization).
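As a concrete illustration of the attribute-extension step, the snippet below derives a duration and coarse temporal attributes from an event's timestamps; the spatial extensions (city, state, country, toponym) would require a gazetteer or domain knowledge base and are only stubbed out. Function and attribute names are ours.

    # Sketch of attribute extension: derive new attribute values from existing ones.
    from datetime import datetime

    def extend_event(attrs: dict) -> dict:
        ext = dict(attrs)
        if "InitialTime" in attrs and "FinalTime" in attrs:
            t0 = datetime.fromisoformat(attrs["InitialTime"])
            t1 = datetime.fromisoformat(attrs["FinalTime"])
            ext["DurationHours"] = (t1 - t0).total_seconds() / 3600.0
        if "InitialTime" in attrs:
            t0 = datetime.fromisoformat(attrs["InitialTime"])
            ext.update(Year=t0.year, Month=t0.month, Week=t0.isocalendar()[1], Hour=t0.hour)
        # Spatial extension (city, state, country, toponym) would query a
        # gazetteer or domain knowledge base; omitted in this sketch.
        return ext

    extended = extend_event({"InitialTime": "2010-06-01T18:30", "FinalTime": "2010-07-01T11:30"})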
Abstraction goal: aggregated_stops(x, y, z)
Description: Given a trip x and a maximum number of places y, this predicate generates a set z of n places which includes all the places visited. The level of abstraction for each place is selected to satisfy n ≤ y. Abstraction method: Hierarchical agglomerative clustering on extended attributes of locations, including country, state/county, city, address and toponym.

Abstraction goal: aggregated_stops_chronologically(x, y, z)
Description: Given a trip x and a maximum number of places y, this predicate generates an ordered set z of n places which includes all the places visited in chronological order. The level of abstraction for each place is selected to satisfy n ≤ y. Abstraction method: Hierarchical agglomerative clustering based on locations and durations between stops.

Abstraction goal: aggregated_stops_by_duration(x, y, z)
Description: Given a trip x and a maximum number of places y, this predicate generates an ordered set z of n places which includes all the places visited sorted by duration. The level of abstraction for each place is selected to satisfy n ≤ y. Abstraction method: Hierarchical agglomerative clustering based on extended attributes of durations, including year, month, week, day, hour, etc.

Abstraction goal: pattern_periodicity(x, y, z, u, v)
Description: Given a trip x and a place y that was visited several times, this predicate finds the periodicity of the visits, described by z (integer) number of times, u the time unit (D for day, M for month, etc.), and v the periodicity represented as a duration (ISO 8601). Abstraction method: Regular expression matching over the input durations.

Abstraction goal: pattern_common_date(x, y, z)
Description: Given a trip x and a place y that could be visited several times, this predicate generates the value z that represents the common date of the visits. Abstraction method: Hierarchical agglomerative clustering based on extended attributes of dates, including year, month, week, day, hour, etc.

Fig. 3. Example abstraction goals and their corresponding abstraction methods
For this purpose, we use knowledge bases with abstractions about time and space, as well as other domain-specific representations. For example, for interval events we add a duration attribute; for locations we add city, state, country, and toponym. Event aggregation is done using clustering procedures and trend analysis. Instead of having general clustering methods to find any kind of aggregation, we use a fixed set of abstraction methods (each one based on different subsets of attributes) defined by named abstraction goals (Fig. 3). This is important to keep adequate control over the discourse planning. Most of the clustering and trend analysis methods we use are implemented in R. For example, we currently use methods for spatial clustering, temporal clustering, spatio-temporal clustering, clustering based on the output of a statistical classifier, clustering based on event actors, and temporal trend analysis. For each domain, we can add new abstraction methods.
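One possible way to organize the fixed set of abstraction methods is a registry keyed by goal name, with each entry wrapping a clustering routine. The sketch below uses SciPy's agglomerative clustering for the aggregated_stops goal of Fig. 3; this is an assumed stand-in, since the paper states that most clustering and trend analysis methods are implemented in R.

    # Hypothetical dispatch from named abstraction goals to clustering routines.
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    def aggregated_stops(locations: np.ndarray, max_places: int) -> np.ndarray:
        """Group stop locations (rows of lat/lon) into at most max_places clusters."""
        if len(locations) <= max_places:
            return np.arange(1, len(locations) + 1)
        z = linkage(locations, method="average")       # hierarchical agglomerative clustering
        return fcluster(z, t=max_places, criterion="maxclust")

    ABSTRACTION_GOALS = {
        "aggregated_stops": aggregated_stops,
        # further goals (aggregated_stops_chronologically, pattern_periodicity, ...)
        # would be registered here per domain
    }

    labels = ABSTRACTION_GOALS["aggregated_stops"](
        np.array([[40.4, -3.7], [40.5, -3.6], [41.4, 2.2]]), max_places=2)

Keeping the goals in an explicit registry mirrors the design choice described above: the discourse planner can only request a known, named abstraction, which keeps control of the planning process.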
4 Generating the Sensor News Presentations
The goal of presentation generation is to determine how to present information to the user. Our solution to this is an on-line virtual newspaper following a journalistic style [10]. The journalistic layout and presentation style is understandable by a wide audience. It uses texts organized in headlines, summaries and progressive descriptions using natural language, together with appropriate graphic illustrations.
Fig. 4. Example interactive presentation following a journalistic style
Our generated presentations follow a presentation style based on existing styles of newspapers (e.g., the New York Times). Presentation elements include titles, subtitles, tables, text elements, hyperlinks, and figures of various types (Fig. 4). The layout always includes text (headline and text body) and, optionally, one or several figures. Our solution uses a set of prefixed layouts; the best layout is automatically selected based on the number of figures and the amount of text in the presentation to be generated. The user interacts with the presentation by clicking on hyperlinks in the text, by manipulating interactive coordinated graphics, or by selecting data points in the graphical components of a presentation. For example, Figure 4 illustrates some forms of text and graphics coordination. In the figure, some text is blue and underlined because the user has put the cursor on this text and it is a within-presentation link. When the user clicks this link, the 2D chart (a temporal series) is displayed (if it is not yet displayed) and highlighted. The user can manipulate the chart to consult values or to select time intervals. When the user changes the time interval, the map automatically shows the boat's route in that interval. When the user clicks on a specific time point in the chart, the map shows the corresponding geographic location. In order to support this type of interaction with the required efficiency, we designed a general software architecture (Figure 5) with the following main components: (1) components for content planning (the discourse planner and the data analyzer), (2) components for presentation planning integrating specialized components for natural language generation and graphics generation, and (3) the database with raw sensor data and the results of data analyses; all of these produce (4) an automatically generated hypermedia virtual newspaper (in HTML and JavaScript) accessible by remote web browsers through the Internet. An original characteristic of this architecture is that we separate the content of the presentation into two parts: (1) the discourse plan and (2) the database with sensor data and abstractions. It is easier to efficiently manage large datasets with a database system (for example, a relational database) and general tools for data analysis (for example, R) and, separately, to use rich knowledge representations for the discourse plan (for example, logic predicates to represent RST relations and propositions).
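As a concrete (and entirely hypothetical) illustration of the prefixed-layout selection mentioned above, a rule of the following kind could map the number of figures and the amount of text to one of the available templates; the template names and thresholds are invented for exposition.

    # Hypothetical rule for choosing among prefixed presentation layouts.
    def select_layout(n_figures: int, n_words: int) -> str:
        if n_figures == 0:
            return "text_only"
        if n_figures == 1:
            return "headline_text_single_figure"
        if n_words < 150:
            return "figure_grid_short_text"
        return "two_column_text_with_figures"

    layout = select_layout(n_figures=2, n_words=320)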
[Fig. 5 is a block diagram: the user's web browser sends a user query to the content planner (data analyzer and discourse planner); the discourse planner issues analysis goals to the data analyzer, and the resulting discourse plan drives the presentation generation components (natural language generator and graphic generator); both planning and generation read from the database of sensor data and abstractions, and the output is the hypermedia presentation returned to the web browser.]
Fig. 5. The general architecture for our automatic description generation system
5 Evaluation
We have evaluated our general design in different domains. For example, we developed a real-world application called VSAIH in the hydrology domain following the journalistic approach described in this paper. Figure 6 shows an example generated presentation with a headline, a body text with hyperlinks, and two graphics: an animated illustration showing the movement of a storm using pictures from meteorological radars, and an interactive map with the location of relevant sensors.
Flow above normal in the Ebro river at Ascó - The Ebro River at Ascó has recorded a flow of 362 m3/s which represents an increase of 10.0 m3/s compared to the previous hour. The normal flow at this point of the river is 308 m3/s. With respect to this, the following hydrological behavior can be highlighted. There have been changes in volume in 3 reservoirs in the Ebro River over the past 24 hours. The maximum decrease in volume has occurred in the Ribarroja reservoir with a decrease of 4.18 Hm3 over the past 24 hours.
Fig. 6. Example of web application in hydrology, with English translation underneath
VSAIH has operated continuously for more than one year. Every hour VSAIH generates a virtual newspaper of 20-30 pages of news summarizing 44,736 events corresponding to sensor data from the national hydrologic information system SAIH (see Section 2). Each page is generated in less than 5 seconds. To evaluate its practical utility, we compared VSAIH with existing web applications that present hydrological sensor data. We compared the time taken by general users to search and analyze data for given tasks relating to water management and flood alerts. Users took
up to about 5 hours (4 hours, 46 minutes) to synthesize information using other existing web applications. This represents the amount of time that users could save by using our application, which is significant, especially in the presence of emergencies. The development of VSAIH showed the feasibility and the practical utility of the type of application described in this paper. VSAIH was developed for a specific domain with static sensors (more details about this application domain can be found in [9]). In order to design a more general solution, we used other datasets from different domains, for example: wildlife tracking (1,438 measures) [12], ship traffic (8,307,156 measures, from the VOSclim project) [18], crime statistics (27,699,840 measures) [17], Twitter data (7,589,964 measures), network traffic data (52,232 measures) and general geographic movements [11]. We found that the general design presented in this paper was appropriate for all these domains. The data representation (database design and file formats) and algorithms were reused in all cases. However, we found it necessary to perform the following main additional tasks to adapt our architecture for each domain: (a) create specific domain knowledge bases with new domain attributes and new abstraction functions for the data analysis task; and (b) extend/adapt the discourse patterns for the discourse planner.
6 Related Work
Our system is related to intelligent multimedia presentation systems (such as the system prototypes WIP [19] and COMET [8]). Our solution uses the architecture typically adopted in multimedia presentation systems [1] (e.g., content planning, graphic generation, etc.) with specific knowledge and data representations designed for sensor data. In contrast to other multimedia presentation systems, our system also uses a special presentation style (a virtual newspaper with text explanations and interactive graphics) suitable for our task (generating explanatory descriptions to help non-expert users understand sensor data). AutoBrief [5] is an experimental prototype related to our system. AutoBrief generates presentations in the domain of transportation scheduling. Like our system, it is interactive and combines generated text and graphics. AutoBrief was validated in one specific domain and generates basic presentations (with two or three lines of text and bar graphs). In contrast, our system was constructed to generate more complex presentations (with larger text segments and more complex graphics) based on the idea of a virtual newspaper. It has been validated in several domains (with millions of data points). In addition, our system uses a different representation that includes rhetorical relations to relate graphics and text (which may provide more portability). In the field of natural language generation, our solution shares some general components with data-to-text systems (for weather forecasting [15] or medicine [6]). A significant problem of these systems is that they still do not generate appropriate narratives (normally they mainly list the events in natural language) [14]. Our system is able to generate complex narratives (with rhetorical relations such as contrast, exemplify, cause, elaboration, etc.) thanks to the use of a data analyzer with a prefixed set of abstraction functions working in combination with a discourse planner with discourse patterns. Also, compared to data-to-text systems, our system is able to generate presentations including text and graphics.
7 Conclusions
In this paper, we have identified a general problem: how to automatically generate explanations about the meaning of sensor data for non-expert users. To our knowledge, there is no previous research that identifies this problem as a whole and addresses it with a general solution. We argue that a solution to this problem helps to increase the utility of sensor networks. We propose a solution to this problem based on the idea of a virtual newspaper with automatically generated news. We designed an architecture that integrates a variety of techniques from automated data analysis, multimedia presentation systems (discourse planning, natural language generation, graphics generation), and web applications (hypermedia representations). Specific contributions of our proposal are:
• We designed a novel type of user interface with a presentation style that uses a journalistic metaphor (a headline and a body text, complemented by graphics) which is familiar to general users. For this presentation style, we identified certain types of typical narratives to explain the meaning of sensor data, and we implemented the corresponding computational models to automatically generate such explanations. We permit users to interact with the system through hyperlinks and graphics manipulation. Our solution is web-based, allowing remote operation through the Internet.
• In our architecture we pay special attention to efficiency and reusability of components. For example, we designed a data representation able to handle large data sets from diverse domains as well as abstractions over those data sets; we designed an efficient discourse planner (adapted from work on hierarchical task networks); and we reused existing tools for data mining.
• Our solution has been evaluated in a complex real-world domain showing efficiency (answer times in seconds) and practical utility (potential time savings of about 5 hours for certain tasks). We have also demonstrated the generality of our approach by applying our solution to different domains.
Our future plans related to this research work include the extension and further evaluation of specific components (e.g., natural language generation, and specific methods for data analysis) and generalizing knowledge bases, together with other solutions to help developers construct domain models. For example, we plan to explore the applicability of representation standards related to sensor knowledge, as considered in the context of the Semantic Web (e.g., [2]).
Acknowledgements. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement PIOF-GA-2009-253331 (Project INTERACTIVEX). This work was also partially supported by the Ministry of Science and Innovation of Spain within the VIOMATICA project (TIN2008-05837). The authors thank Javier Sánchez-Soriano and Alberto Cámara for the software development of the system prototypes.
References
1. André, E.: The generation of multimedia presentations. In: Dale, R., Moisl, H., Somers, H. (eds.) A Handbook of Natural Language Processing: Techniques and Applications for the Processing of Language as Text, pp. 305–327. Marcel Dekker Inc., New York (2000)
2. Compton, M., Henson, C., Lefort, L., Neuhaus, H., Sheth, A.: A Survey of the Semantic Specification of Sensors. In: 2nd International Workshop on Semantic Sensor Networks, at 8th International Semantic Web Conference (2009)
3. Coyne, M.S., Godley, B.J.: Satellite tracking and analysis tool (STAT): an integrated system for archiving, analyzing and mapping animal tracking data. Marine Ecology Progress Series (2005)
4. Ghallab, M., Nau, D., Traverso, P.: Automated planning: Theory and practice. Morgan Kaufmann, San Francisco (2004)
5. Green, N., Carenini, G., Kerpedjiev, S., Mattis, J., Moore, J., Roth, S.: AutoBrief: an Experimental System for the Automatic Generation of Briefings in Integrated Text and Information Graphics. International Journal of Human-Computer Studies 61(1), 32–70 (2004)
6. Hunter, J., Gatt, A., Portet, F., Reiter, E., Sripada, S.: Using natural language generation technology to improve information flows in intensive care units. In: 5th Conf. on Prestigious Applications of Intelligent Systems (2008)
7. Mann, W., Thompson, S.: Rhetorical Structure Theory: Toward a functional theory of text organization. Text 8(3), 243–281 (1988)
8. McKeown, K.R., Feiner, S.K.: Interactive multimedia explanation for equipment maintenance and repair. In: DARPA Speech and Language Workshop (1990)
9. Molina, M., Flores, V.: A presentation model for multimedia summaries of behavior. In: 13th International Conference on Intelligent User Interfaces (2008)
10. Molina, M., Parodi, E., Stent, A.: Using the Journalistic Metaphor to Design User Interfaces That Explain Sensor Data. In: Campos, P., Graham, N., Jorge, J., Nunes, N., Palanque, P., Winckler, M. (eds.) INTERACT 2011, Part III. LNCS, vol. 6948, pp. 636–643. Springer, Heidelberg (2011)
11. Molina, M., Stent, A.: A knowledge-based method for generating summaries of spatial movement in geographic areas. International Journal on Artificial Intelligence Tools 19(4), 393–415 (2010)
12. Obis-Seamap, http://seamap.env.duke.edu/
13. The R Development Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria (2009), http://www.R-project.org
14. Reiter, E., Gatt, A., Portet, F., van der Meulen, M.: The importance of narrative and other lessons from an evaluation of an NLG system that summarises clinical data. In: Proceedings of the 5th International Conference on Natural Language Generation (INLG 2008), Salt Fork, OH (2008)
15. Reiter, E., Sripada, S., Hunter, J., Yu, J., Davy, I.: Choosing words in computer-generated weather forecasts. Artificial Intelligence 67(1-2), 137–169 (2005)
16. Serban, F., Kietz, J.-U., Bernstein, A.: An overview of intelligent data assistants for data analysis. In: Planning to Learn Workshop (PlanLearn 2010) at ECAI (2010)
17. UCR, Uniform Crime Reports, http://www.fbi.gov/ucr/ucr.htm
18. VOSclim, http://www.ncdc.noaa.gov/oa/climate/vosclim/vosclim.html
19. Wahlster, W., André, E., Finkler, W., Profitlich, H.J., Rist, T.: Plan-based integration of natural language and graphics generation. Artificial Intelligence 63, 387–427 (1993)
20. Witten, H., Frank, E.: Data Mining: Practical machine learning tools and techniques, 2nd edn. Morgan Kaufmann, San Francisco (2005)
Binding Statistical and Machine Learning Models for Short-Term Forecasting of Global Solar Radiation
Llanos Mora-López¹, Ildefonso Martínez-Marchena¹, Michel Piliougine², and Mariano Sidrach-de-Cardona²
¹ Dpto. Lenguajes y C. Computación, ETSI Informática
² Dpto. Física Aplicada II, E. Politécnica Superior
University of Malaga, Campus de Teatinos, 29071 Málaga, Spain
{llanos,ilde}@lcc.uma.es, {michel,msidrach}@ctima.uma.es
Abstract. A model for short-term forecasting of continuous time series has been developed. This model binds the use of both statistical and machine learning methods for short-term forecasting of continuous time series of solar radiation. The prediction of this variable is needed for the integration of photovoltaic systems in conventional power grids. The proposed model allows us to manage not only the information in the time series, but also other important information supplied by experts. In a first stage, we propose the use of statistical models to identify the significant information in a continuous time series; we then use this information, together with machine learning models, statistical models and expert knowledge, for short-term forecasting of continuous time series. The results obtained when the model is used for solar radiation series show its usefulness.
Keywords: time series, machine learning, solar radiation, short-term forecasting.
1 Introduction
The energy production of solar plants depends on the solar energy they receive, besides depending on design parameters. Short-term forecasting of energy production in solar plants has become a requirement in competitive electricity markets. In the short term, expected produced energy can help producers to achieve optimal management and can also help to use efficient operation strategies by deciding the best way of interacting with the conventional grid. The estimation of energy generated by solar plants is difficult mainly because of its dependence on meteorological variables, such as solar radiation and temperature. In fact, the photovoltaic production prediction is mainly based on the prediction of global solar irradiation. The behavior of this variable can change quite dramatically from one day to another, even on consecutive days. As solar radiation is the energy source of solar systems, it will be very useful to have models that allow accurately forecasting the values of this variable in the short term in order to be able to forecast
the energy generated by these systems. The global radiation values have been recorded systematically by weather stations for years. Therefore, we have used these historical records as input data for predicting the production of photovoltaic plants. In this paper, we analyze the time series recorded for this parameter.
Short-term forecasting with continuous time sequences has been performed using both statistical models and machine learning models. In general, all the methods used for short-term forecasting with continuous time series aim to find models which are able both to reproduce the statistical and sequential characteristics of the sequence and to forecast short- and long-term values. On the one hand, statistical approaches follow the well-known time series methods that are based on the assumption that data have an internal structure; this structure can be identified by using simple and partial autocorrelations [1], [2]; using these functions, a preliminary selection of the models is performed and their parameters are estimated. It is usually assumed that the relationship between the parameters of the models is constant over time. Time series forecasting methods are then used for detecting and exploring such a structure. On the other hand, methods and techniques from the machine learning area have also been used for process and time series forecasting; for instance, neural networks have been used in [3], fuzzy time series have been proposed in [4], [5], dynamic Bayesian networks have been used in [6] and switching state-space models are presented in [7]. Statistical models can deal with continuous values, but some machine learning models are restricted to data described by discrete numerical or nominal attributes (features); thus, when attributes are continuous, a discretization algorithm which transforms them into discrete ones must be applied, as has been pointed out, for instance, in [8], [9]. For this task, many different methods have been proposed, see e.g. [9] and [10].
We propose a model that binds the use of both statistical and machine learning models for short-term forecasting of continuous time series of solar radiation. This model allows us to manage not only the information in the time series, but also other important information supplied by experts. In a first stage, we propose the use of statistical models to identify the significant information in a continuous time series; we then use this information, together with some machine learning models, statistical models and expert knowledge, for short-term forecasting of continuous time series. The proposed model is capable of detecting and learning the important information in the time series and of incorporating other types of information. This model is described in the second section. In that section, a procedure to add expert knowledge and to select what must be learned (and how) is presented. We also describe the proposed iterative procedure to discretize the values of the continuous variables (this is necessary for using some machine learning models). The discretization procedure uses both the information of the statistical analysis and the feedback from the machine learning models in order to decide the best discretization for each continuous variable. In the third section, an example of the use of the model for a real continuous time series of global radiation is presented.
2 Short-Term Forecasting Model
2.1 Data Exploration and Foundations of the Procedure
This paper seeks to propose a model for short-term forecasting of continuous time series of global solar radiation using both statistical and machine learning models. This model is built in three stages using a set of independent variables, which may include lags of the dependent variable. As mentioned in the previous section, there are many different machine learning models that can be used for short-term forecasting of time series. We have included in our framework two of the most widespread approaches for forecasting global radiation series: a special type of probabilistic finite automata and a type of neural network. Some other machine learning models could have been included in our framework, but our aim is not to analyze all possible models, but to propose a framework which is capable of mixing different types of approaches to get a model which yields better forecasts than a single model.
In the first stage, statistical techniques are used to determine the most significant information to predict the dependent variable. For solar global radiation series, this most significant information typically comes from the past values of the dependent variable; see, for instance, [11], which proves that only the one-day autocorrelation is significant. In the developed model, we propose to include other sources of information, such as expert knowledge. The information that is not from time series can be either continuous or discrete. In the first case, the variables can be directly used in the multivariate regression analysis. In the case of discrete information (such as season, type of day, and so on) dummy variables are used. Among the independent variables, the one that is globally most significant (typically, the first lag of the dependent variable) plays a key role, since it is used to classify the observations into groups. The independent variables that are significant for each group are then selected using multivariate regression analysis, see [12].
In the second stage, several models have been checked in order to select for each group the best model for short-term forecasting. Forecasting a time series is usually performed with a single model (statistical model or machine-learning model), using the recent past values of the series. It would sometimes be very useful if the model could be different for different observations, taking into account that the influence of previous values can be different for different situations. For this reason, we have proposed to classify the observations into different groups and to analyze which variables are significant for each group and which model is the best one for each group. It is important to group the observations because our aim is to apply this procedure to situations where the relationship between the dependent variable and the independent ones may differ substantially depending on the value taken on by the most significant independent variable. Data sets with this type of behaviour can be found in Finance (interest rates, stock prices, etc.) and Climatology (values of available energy at the earth's surface, clearness indices, etc.), among many other disciplines. For instance, in Climatology a high recent value in a sequence of clearness index (e.g. no clouds) is very
likely to be followed by another high value. Consequently, in this case only a few independent variables (one or two lags of the dependent variable) contain all the relevant information for prediction. However, a low recent value (e.g. patchy cloud) is usually related to higher volatility, and an accurate prediction typically requires the use of many independent variables (not only lags of the dependent variable, but also some other type of information, such as season, temperature, wind speed, and so on). In the third stage, short-term forecasting of the time series is done by using the current value of the time series and the data stored for the group this value belongs to, that is, the significant variables and the appropriate model for this group.
2.2 Description of the Procedure
Assume that observations $\{(Y_t, X_t)\}_{t=1}^{T}$ are available, where $Y_t$ is a univariate random variable, $X_t$ is a q-dimensional random vector and t denotes time. We are interested in building a model for the prediction of $Y_{T+1}$ when $X_{T+1}$ is known. We propose a three-stage procedure. In the first stage, regression analyses based on the observations $\{(Y_t, X_t)\}_{t=1}^{T}$ are used to determine the information that may be relevant for the prediction, and then the observations are classified into G different groups, where G is chosen by the researcher in advance. In the second stage, a model is selected for each group of observations taking into account the information obtained in the first stage. In the third stage, the information stored after the second stage, together with the value $X_{T+1}$, is used to propose a prediction of $Y_{T+1}$. The first stage makes extensive use of the ordinary least squares (OLS) estimation of a regression model. This stage consists of the following three steps:
– First step: Estimate by OLS the linear regression model

$$Y_t = \beta_0 + \beta_1 X_{1t} + ... + \beta_q X_{qt} + \text{Error}, \qquad (1)$$

using the T observations. Hereafter, we assume that the components of the random vector $X_t$ are sorted in such a way that the first one, i.e. $X_{1t}$, is the one with the greatest t-statistic, in absolute value, in this OLS regression.
– Second step: The observations are split into G groups, according to the value of $X_{1t}$. Specifically, given real numbers $c_1 < ... < c_{G-1}$ (these numbers define the frontiers of each group), the sample is split as follows: the t-th observation is in group 1 if $X_{1t} < c_1$, in group 2 if $X_{1t} \in [c_1, c_2)$, ..., in group G−1 if $X_{1t} \in [c_{G-2}, c_{G-1})$, and in group G if $X_{1t} \geq c_{G-1}$. Hereafter, we use $T_g$ to denote the number of observations in group g; obviously $\sum_{g=1}^{G} T_g = T$. Once the sample is split into groups, we assume that observations are sorted by group, i.e. the first $T_1$ observations are those in Group 1, and so on. Thus, given $g \in \{1, ..., G\}$, the observations in group g are those observations t such that $t \in \{T_*^{(g)}, ..., T_{**}^{(g)}\}$, where we denote $T_*^{(g)} \equiv 1 + \sum_{i=0}^{g-1} T_i$, $T_{**}^{(g)} \equiv \sum_{i=1}^{g} T_i$ and $T_0 \equiv 0$.
– Third step: For each $g \in \{1, ..., G\}$, estimate by OLS the linear regression model (1) using only the $T_g$ observations in group g, and then for $j \in \{1, ..., q\}$ define $D_j^{(g)} = 1$ if the t-statistic of $X_{jt}$ in this OLS regression is, in absolute value, greater than $M_g$, or 0 otherwise, where $M_1, ..., M_G$ are values that have been fixed in advance.
Note that in the third step of the first stage the researcher decides which explanatory variables will be used for forecasting in the gth group: the jth component of $X_t$ is used if $D_j^{(g)} = 1$; also note that this decision is determined by the values $M_1, ..., M_G$.
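As an illustration of this first stage, the sketch below estimates the global OLS regression, picks the regressor with the largest absolute t-statistic, splits the sample at the thresholds $c_1 < ... < c_{G-1}$, and flags the significant variables per group. It is only a sketch under our own assumptions: the paper does not prescribe an implementation, the t-statistics are computed from the basic OLS formulas, and a single threshold M is used instead of group-specific values $M_1, ..., M_G$.

    # Sketch of the first stage: global OLS, split by the most significant regressor,
    # then per-group OLS with a t-statistic threshold to flag significant variables.
    import numpy as np

    def ols_t_stats(y, X):
        """OLS coefficients and t-statistics; X should include a column of ones."""
        XtX_inv = np.linalg.inv(X.T @ X)
        beta = XtX_inv @ X.T @ y
        resid = y - X @ beta
        dof = X.shape[0] - X.shape[1]
        sigma2 = resid @ resid / dof
        se = np.sqrt(np.diag(XtX_inv) * sigma2)
        return beta, beta / se

    def first_stage(y, X, cuts, M=1.96):
        ones = np.ones((len(y), 1))
        _, t = ols_t_stats(y, np.hstack([ones, X]))
        j_star = int(np.argmax(np.abs(t[1:])))      # most significant regressor (skip intercept)
        groups = np.digitize(X[:, j_star], cuts)    # group index 0..G-1 per observation
        D = []
        for g in range(len(cuts) + 1):
            idx = groups == g
            _, tg = ols_t_stats(y[idx], np.hstack([ones[idx], X[idx]]))
            D.append(np.abs(tg[1:]) > M)            # significant variables in group g
        return j_star, groups, np.array(D)

np.digitize reproduces the grouping rule of the second step: values below $c_1$ go to group 0, values in $[c_1, c_2)$ to group 1, and so on.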
The aim of the second stage is to select, for each group of observations, the most appropriate model to perform short-term forecasting among the three following candidates: multivariate regression analysis; a multilayer neural network; a special type of probabilistic finite automata (PFA). In each group, the selected model is the one with the lowest in-sample mean relative square prediction error (MRSPE); for group g and model m (where $g \in \{1, ..., G\}$ and $m \in \{1, 2, 3\}$) this quantity is defined as:

$$\mathrm{MRSPE}(g, m) = \frac{1}{T_g} \sum_{t=T_*^{(g)}}^{T_{**}^{(g)}} \frac{(Y_{t,m} - Y_t)^2}{Y_t^2}, \qquad (2)$$

where $Y_{t,m}$ is the prediction of $Y_t$ derived from model m.
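In code, eq. (2) is a one-liner; the following transcription assumes the true values $Y_t$ are never zero (which holds for the clearness-index data used in Section 3):

    # Mean relative square prediction error (eq. 2) for one group and one model.
    import numpy as np

    def mrspe(y_true, y_pred):
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        return float(np.mean((y_pred - y_true) ** 2 / y_true ** 2))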
For an observation $Y_t$ in group g, the predicted value $Y_{t,m}$ is obtained applying the following procedure:
– Model 1: Estimate by OLS a linear regression model using as dependent variable $Y_t$, as independent variables only those components $X_{jt}$ of $X_t$ for which $D_j^{(g)} = 1$ (an intercept is also included), and only the observations in group g. $Y_{t,1}$ is the predicted value of $Y_t$ in this regression model.
– Model 2: Build an artificial neural network with one hidden layer, using the same dependent and independent variables as in Model 1, and only the observations in group g. The backpropagation learning algorithm is implemented using the Levenberg-Marquardt method, and the arctan function as the transference function. $Y_{t,2}$ is the predicted value of $Y_t$ in this neural network model. A detailed description of this kind of model can be found, for instance, in [13], [14] and [15].
– Model 3: First, each component of $X_t$ is discretized for all the observations in group g, i.e. for $j \in \{1, ..., q\}$ and given real numbers $d_1^{(g,j)} < ... < d_{I(g,j)}^{(g,j)}$, we consider the random variable

$$X_{jt}^{(g)*} := \begin{cases}
d_1^{(g,j)} & \text{if } X_{jt} < d_1^{(g,j)} \\
(d_1^{(g,j)} + d_2^{(g,j)})/2 & \text{if } X_{jt} \in [d_1^{(g,j)}, d_2^{(g,j)}) \\
\;\;\vdots & \;\;\vdots \\
(d_{I(g,j)-1}^{(g,j)} + d_{I(g,j)}^{(g,j)})/2 & \text{if } X_{jt} \in [d_{I(g,j)-1}^{(g,j)}, d_{I(g,j)}^{(g,j)}) \\
d_{I(g,j)}^{(g,j)} & \text{if } X_{jt} \geq d_{I(g,j)}^{(g,j)}
\end{cases} \qquad (3)$$
Once the discretization process has been performed, the prediction of $Y_t$ is derived as follows: i) a probabilistic finite automata (PFA) with the observations of the gth group is used in order to find the subscripts s in $\{T_*^{(g)}, ..., T_{**}^{(g)}\}$ that satisfy $X_{js}^{(g)*} D_j^{(g)} = X_{jt}^{(g)*} D_j^{(g)}$ for all $j \in \{1, ..., q\}$ (see the procedure described in [16]); the set of subscripts that satisfy this condition will be denoted by $S_t$; ii) the predicted value $Y_{t,3}$ is the median of $\{Y_s^{(g)*}\}_{s \in S_t}$ (if a prediction interval is preferred, then the prediction interval for $Y_t$ is defined as the interval $[d_k^{(g,0)}, d_{k+1}^{(g,0)})$ which contains $Y_{t,3}$). Note that the mean or the mode of $\{Y_s^{(g)*}\}_{s \in S_t}$ could also be used instead of the median, depending on the kind of error function that one wants to minimize. Also note that the PFA that we propose to use is a special type of automata that allows some past values of the series to be learnt and others to be forgotten; it is based on the automata proposed in [22], but it allows the incorporation of information of different types (this PFA can only be applied with discrete data or nominal attributes; hence, it is necessary to discretize the continuous significant variables).
In the third stage, once $X_{T+1}$ is observed, the procedure to predict $Y_{T+1}$ is the same as that described in stage 2 for predicting $Y_t$. Hence, the model that is used for this prediction is the one which yields the lowest MRSPE for the group to which $X_{T+1}$ belongs. The proposed model has been used for short-term forecasting of a real continuous time series. The obtained results are presented in the next section.
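The following sketch illustrates the Model 3 prediction step in a simplified form: each significant variable is mapped to the representative value of its interval as in eq. (3), and the prediction is the median of the past outcomes whose discretized significant variables coincide with those of the new observation. The dictionary-style matching and the fallback to the group median are our simplifications; the actual system uses the PFA of [22], which also handles variable-length memory and node pruning.

    # Simplified sketch of the discretize-and-match prediction used in Model 3.
    import numpy as np

    def discretize(x, thresholds):
        """Map a continuous value to the representative of its interval (eq. 3)."""
        d = np.asarray(thresholds, dtype=float)
        if x < d[0]:
            return d[0]
        if x >= d[-1]:
            return d[-1]
        k = np.searchsorted(d, x, side="right")        # x lies in [d[k-1], d[k])
        return (d[k - 1] + d[k]) / 2.0

    def predict_median(x_new, X_group, y_group, thresholds_per_var, used):
        """Median of past y whose discretized significant variables match x_new.

        used: indices j with D_j^(g) = 1.  Falling back to the group median when
        there is no exact match is an assumption of this sketch, not of the paper.
        """
        key_new = tuple(discretize(x_new[j], thresholds_per_var[j]) for j in used)
        matches = [y for x, y in zip(X_group, y_group)
                   if tuple(discretize(x[j], thresholds_per_var[j]) for j in used) == key_new]
        return float(np.median(matches)) if matches else float(np.median(y_group))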
2.3 Parameter Selection
In practice, the proposed procedure requires fixing various input data, which should be chosen by the researcher in advance. Specifically, this is the list of input data that are required, together with hints about how they should be selected.
1. The values $c_1, ..., c_{G-1}$ determine the G initial groups taking into account the most significant variable.
2. The values $M_1, ..., M_G$ determine whether a variable is considered significant or not in each one of the G regressions. In principle, all of them could be fixed to 1.96, which amounts to saying that a 0.05 significance level is used to determine whether a variable is significant or not, see [12].
3. In the PFA, the values $d_1^{(g,j)}, ..., d_{I(g,j)}^{(g,j)}$ describe the discretization of the jth explanatory variable in the gth group. The problem of how to choose the thresholds that must be used to discretize a continuous variable has been widely studied in contexts similar to this one (see e.g. [9]), since many algorithms used in supervised machine learning are restricted to discrete data. Additionally, it is possible to include a maximum admissible error ε in the PFA; the goal of this inclusion is to force the repetition of the discretization process (selecting different thresholds, and possibly a higher number of them) if the MRSPE is above that maximum admissible error.
4. The parameters for building the PFA, according to the procedure proposed in [22], i.e. the probability threshold and the order (length or memory) of the PFA; the former is used to decide when a node is added to the PFA, depending on the minimum number of observations that each node will have. It should depend on the number of observations and it is used to avoid the problem of overfitting (in our empirical application, the criterion proposed in [16] is used). In any case, experience and knowledge about the behaviour of the dependent variable play arguably the most important role when selecting these input data. It is also important to emphasize that, once these input data are decided, the procedure works in an automatic way.
3 Using the Model for Global Solar Radiation Continuous Time Series
The proposed model has been used to forecast the next value of an hourly sequence of global solar radiation. Each value in the sequence corresponds to the total solar radiation received for an hour. In order to remove the yearly trend observed in these sequences, we have used a variable obtained from this parameter, the clearness index $k_t$, where t means hour. This variable is very useful to establish the performance of all the systems that use solar radiation as an energy source, such as photovoltaic systems, see for instance [17], [18], [19]. The sequences of hourly clearness index are obtained from the hourly global solar radiation sequences $G_t$ that are recorded at weather stations and the values of hourly extraterrestrial solar radiation, $G_{0,t}$, which are calculated from the well-known astronomical sun-earth relations, [20]. The expression is the following:

$$k_t = \frac{G_t}{G_{0,t}} \qquad (4)$$
Monthly sequences of hourly global solar radiation from 10 Spanish locations are the data set used. The hourly sequences of clearness index have been built using 6 values for each day (from 9:00 to 15:00 hours). The hourly exposure series of global radiation have been constructed in an artificial way, as proposed in [21]: data from different days were linked together, so the last observation used for each day is followed by the first observation of the following day. The statistical results reported in that paper allow us to consider that these series are homogeneous. Moreover, additional information has been used: the values of daily clearness index for the three previous days for each observation, and the season in which each observation is taken. This last information has been incorporated using three dummy variables (a dummy variable is a binary variable that takes the value 0 or 1 to indicate the absence or presence of some categorical effect; in this case, it indicates whether or not the observation falls in a given season). The values of daily clearness index have been obtained from the daily series of global solar radiation using eq. (4), where t means day.
obtained from the daily series of global solar radiation using the eq.4 where t means day. Therefore, the dependent variable, Yt in section 2.2, is the hourly clearness index, denoted by kt . The independent variables, Xj,t where j = 1, ..., q in section 2.2, are the following: 1. X1,t = kt−1 , X2,t = kt−2 , X3,t = kt−3 , that correspond to the hourly clearness index values for the three previous hours in the sequence, 2. X4,t = kt−6 , X5,t = kt−12 , X6,t = kt−18 , that correspond to the hourly clearness index values for the three previous days at the same hour t (they are used 6 hours for each day), 3. and X7,t = S1,t , X8,t = S2,t , X9,t = S3,t , that are the seasonal dummy variables corresponding to the dependent variable kt -only three dummy variables are used as it has been explained before. Then, the intercept, β0 , and nine independent variables have been used (q = 9). In order to check the accuracy of the procedure, the analysis is performed by dividing all the available observations in two sets: the training set, which is used to apply the model such as it is described in the previous section, and the validation (test) set that has been used for checking the out-of-sample predictive performance of the procedure. We have divided the observations in a random way, selecting the 90 percent of the observations for the training set and the 10 percent for the validation set. 3.1
3.1 Results
First stage. In the first step, the linear regression model (eq. 1) has been estimated by OLS using the aforementioned independent variables. As expected, the most significant variable in this empirical application proves to be $k_{t-1}$, that is, the clearness index for the previous hour. Using this variable, in the second step, the sample has been split into G = 9 different groups depending on the value of this variable. The values of the real numbers $c_i$ used are the following:

$$c_i := \begin{cases} i/10 & \text{for } i = 1, ..., 8 \\ 1.0 & \text{for } i = 9 \end{cases} \qquad (5)$$

In the third step, the regression model (eq. 1) has been estimated by OLS for each group of observations. In Table 1 we report the independent variables that are significant in each group, using 0.05 as the significance level. Taking into account the results in Table 1, in groups 1 and 2 we have decided to use for forecasting only $k_{t-1}$ and $k_{t-3}$, i.e. for g = 1, 2, $D_1^{(g)} = D_3^{(g)} = 1$ and $D_j^{(g)} = 0$ for $j \notin \{1, 3\}$, whereas in the remaining groups we have decided to use for forecasting only the three explanatory variables with the greatest t-statistics, i.e. the first three variables in the second column of the corresponding row in Table 1. In these groups we have decided to use only three explanatory variables because the sample size is not very large; using more explanatory variables would arguably lead to an overfitting of the models.
Table 1. Significant independent variables for each group of observations (significance level = 0.05)

Interval   Selected variables
0-0.1      k_{t-1}, k_{t-3}
0.1-0.2    k_{t-1}, k_{t-3}
0.2-0.3    k_{t-1}, k_{t-3}, k_{t-18}
0.3-0.4    k_{t-1}, k_{t-3}, k_{t-6}, k_{t-18}
0.4-0.5    k_{t-1}, k_{t-3}, k_{t-6}, k_{t-18}
0.5-0.6    k_{t-1}, k_{t-6}, k_{t-3}, k_{t-18}
0.6-0.7    k_{t-1}, k_{t-6}, k_{t-12}, k_{t-18}, S_3, k_{t-3}, S_1, S_2
0.7-0.8    k_{t-1}, k_{t-6}, k_{t-2}, k_{t-12}, S_3, S_2, k_{t-18}, S_1, k_{t-3}
0.8-1      k_{t-2}, k_{t-6}, k_{t-18}, k_{t-12}
Second stage. In the second stage, the three proposed models have been evaluated for each group of observations in order to determine which one is more appropriate for each group. For the third model, the special type of probabilistic finite automata has been used with the following input data:
– The probability threshold used is 2, and the length of the PFA is 3 (the same as the number of explanatory variables used for forecasting in most groups).
– The maximum admissible error for the prediction model is 5%.
– The values $d_1^{(g,j)}, ..., d_{I(g,j)}^{(g,j)}$ that are used for discretizing the continuous variables are selected in an iterative way. Three alternative sets of discretization intervals have been considered for each variable:

First values (iter_j = 1):   $d_i^{(g,j)} = 0.1 \cdot i$,   where i = 1, ..., 10
Second values (iter_j = 2):  $d_i^{(g,j)} = 0.05 \cdot i$,  where i = 1, ..., 20
Third values (iter_j = 3):   $d_i^{(g,j)} = 0.02 \cdot i$,  where i = 1, ..., 50        (6)
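The three candidate threshold sets of eq. (6) can be generated as follows (a direct transcription; the function name is ours):

    # Candidate discretization thresholds of eq. (6) for one variable.
    def candidate_thresholds(iteration):
        step = {1: 0.1, 2: 0.05, 3: 0.02}[iteration]
        n = int(round(1.0 / step))            # 10, 20 or 50 thresholds covering (0, 1]
        return [round(step * i, 2) for i in range(1, n + 1)]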
The procedure that is used to discretize the continuous explanatory variables used in group g is as follows (in this procedure nvar_g = 2 for g = 1, 2 and nvar_g = 3 for g > 2):

    for (j = 1; j <= nvar_g; ++j) iter_j = 1;
    do {
        /* discretize each variable j with the thresholds selected by iter_j (eq. 6),
           build the PFA and compute the MRSPE of the group */
        iter_{nvar_g} = iter_{nvar_g} + 1;
        for (h = nvar_g; h > 1; --h)
            if (iter_h > niter_h) {
                iter_h = 1;
                iter_{h-1} = iter_{h-1} + 1;
            }
    } while (MRSPE > ε) AND (iter_1 < niter_1 + 1)

Note that with this procedure various discretization intervals are considered for each variable. The expert decides how many intervals are considered for each variable, and this decision may depend on how significant the variable is. Also note that not all possible discretizations are checked in every group, since if the MRSPE is less than ε then the process ends.
Table 2 shows the number of observations in each interval and the MRSPE obtained for the short-term forecasting of the clearness index for each interval and each model, in both the training set and the validation set. This table also includes the percentage of solar radiation received in each interval in order to evaluate the importance of the mean relative square prediction error. As shown, though this error is large in terms of solar radiation (energy) for the first intervals, it is not important because it represents only about 10 percent of the total. Moreover, in order to compare our results with those of classical dynamic regression, we have also computed the MRSPE that is obtained estimating the regression equation (eq. 2) with all the observations in the training set; this value proves to be 0.249, which is remarkably higher than the MRSPE that is obtained with our procedure. As shown in the table, the model with the lowest MRSPE for each interval is not always the same. The integration of different models for different intervals of observations allows better predictions than when the same model is used for all the data. Prediction can be improved by up to 50% for some intervals.
    Interval      Training set                     Validation set                  Energy (%)
                  N      PFA    MR     NN          N     PFA    MR     NN          TS      VS
    1             3089   0.524  1.044  0.709       329   0.552  1.137  0.760       0.63    0.66
    2             7603   0.431  0.570  0.531       811   0.444  0.581  0.505       2.25    2.07
    3             8974   0.410  0.444  0.455       989   0.380  0.416  0.468       3.61    3.66
    4             9132   0.376  0.383  0.396       1023  0.379  0.382  0.419       4.60    4.66
    5             9500   0.367  0.330  0.341       1109  0.382  0.352  0.324       5.67    5.90
    6             14265  0.265  0.233  0.231       1622  0.239  0.207  0.224       9.60    9.90
    7             36481  0.117  0.109  0.105       4019  0.113  0.108  0.113       30.37   30.21
    8             26439  0.068  0.069  0.066       2904  0.075  0.075  0.064       27.09   26.91
    9             14329  0.079  0.083  0.081       1582  0.095  0.086  0.083       16.18   16.01
    mean                 0.208  0.224  0.213             0.206  0.223  0.215
    BM-TS                MRSPE: 0.197                    MRSPE: 0.193
That is, the behaviour of the analyzed parameter (the clearness index) can be better modelled using different techniques depending on the current value; this means that short-term forecasting will be more accurate if the best model can be used for each situation. In the training set, the average MRSPE decreases by between 5 and 12 percent when the best model per interval is used instead of the PFA, MR or NN model alone; in the validation set, the MRSPE decreases by between 6 and 13 percent, depending on the model. Improvements are also observed in the relative error when the results of the integrated method are compared with those of any of the three individual methods. Finally, note that the results obtained with our procedure are remarkably better than those obtained with a classical dynamic regression approach.
4 Conclusions
This paper proposes a procedure for short-term forecasting of continuous time series that binds regression techniques and machine learning models. These methods are integrated in a general model that may be particularly useful for short-term forecasting in situations where the number of significant independent variables, and the type of relationship between the dependent variable and the independent variables, crucially depend on the interval to which the observation belongs, that is, on the current value of the dependent variable. A multivariate regression is used to identify the different relationships observed in the time series. This regression allows us to select the most significant variables in the series and to divide the observations into several intervals. For each of these intervals, we checked three different models for short-term forecasting: a multivariate regression, a special type of probabilistic finite automata and an artificial neural network. The proposed procedure is flexible enough to perform well in a wide variety of situations. The best short-term forecasting method for each interval is integrated into the developed model. Once the model is built, it is possible to predict the next value of the clearness index using only the significant independent variables identified for the time series. We present an application with solar global radiation data. Our empirical results show that our procedure leads to a remarkable improvement with respect to classical dynamic regression. This can be explained by the fact that the proposed procedure selects only the information that is important for the prediction and forgets (does not consider) the unimportant information, in contrast to the dynamic regression model, which uses all the information in a less selective way.

Acknowledgments. This work has been partially supported by the projects TIN2008-06582-C03-03, ECO2008-05721/ECON and ENE07-67248 of the Spanish Ministry of Science and Innovation (MICINN).
Bisociative Discovery of Interesting Relations between Domains

Uwe Nagel, Kilian Thiel, Tobias Kötter, Dawid Piątek, and Michael R. Berthold

Nycomed-Chair for Bioinformatics and Information Mining, Dept. of Computer and Information Science, University of Konstanz
[email protected]

Abstract. The discovery of surprising relations in large, heterogeneous information repositories is gaining increasing importance in real-world data analysis. If these repositories come from diverse origins, forming different domains, domain-bridging associations between otherwise weakly connected domains can provide insights into the data that cannot be obtained otherwise. In this paper, we propose a first formalization for the detection of such potentially interesting, domain-crossing relations based purely on structural properties of a relational knowledge description.
1 Motivation
Classical data mining approaches propose two major alternatives to make sense of knowledge-representing data collections. One is to formulate specific, semantic queries on the given data. However, this is not always useful since users often do not know ahead of time what exactly they are searching for. Alternatively, Explorative (or Visual) Data Mining attempts to overcome this problem by creating a more abstract overview of the entire data together with subsequent drill-down operations. Thereby it additionally enables the search for interesting patterns on a structural level, detached from the represented semantic information. However, such overviews still leave the entire search for interesting patterns to the user and therefore often fail to actually point to interesting and truly novel details. In this paper we propose an approach to explore integrated data by finding unexpected and potentially interesting connections that hopefully trigger the user's interest, ultimately supporting creativity and outside-the-box thinking. The approach we propose attempts to find such unexpected relations between seemingly unrelated domains. As pointed out by Henri Poincaré [11]: "Among chosen combinations the most fertile will often be those formed of elements drawn from domains which are far apart. . . Most combinations so formed would be entirely sterile; but certain among them, very rare, are the most fruitful of all." Consequently, instead of only fusing different domains and sources to gain
a large knowledge base, we try to identify (possibly hidden) domains and search for rare instead of frequent patterns, i.e. exclusive, domain crossing connections. In this paper we assume a knowledge representation fulfilling very few conditions and address two subproblems: the identification of domains and the assessment of the potential interestingness of connections between these domains.
2 Networks, Domains and Bisociations
In this section, we transfer the theoretical concept of domain-crossing associations, called bisociations [8] (to emphasize the difference from associations within a single domain), to a setting where a relational description of knowledge is given. We will explain the model that incorporates our knowledge base, narrow down the basic theoretical concepts underlying domains and bisociations, and identify corresponding measures of interestingness.

2.1 Knowledge Modeling
As a preliminary, we assume that the available knowledge is integrated into a unifying data model. We model this as an undirected, unweighted graph structure with nodes representing units of information and edges representing their relations. Examples of information units are terms, documents, genes or experiments. Relations could arise from references, co-occurrences or explicitly encoded expert knowledge. A graph is described as G = (V, E) with node set V, edge set E (a set of unordered node pairs {u, v} ⊆ V), and n = |V| the number of nodes. The degree of a node, i.e. the number of incident edges, is denoted as d(v), and we access the structure of G via its adjacency matrix A, with (A)_uv = 1 if {u, v} ∈ E and 0 otherwise. An important aspect of the model is the semantics of the employed links. We consider two different types of links, which express either similarity or another semantic relation. Consider for example a knowledge network with scientific articles as information units. Links in a derived network could either encode similarities between articles or the fact that one article references another.
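As an illustration only (this snippet and its node names are ours, not part of the paper), the data model above can be held in a few lines of Python:

```python
import numpy as np

# Toy undirected, unweighted knowledge graph: information units and their relations.
nodes = ["article_A", "article_B", "gene_X", "experiment_1"]            # placeholder units
edges = [("article_A", "article_B"), ("article_B", "gene_X"),
         ("gene_X", "experiment_1")]                                     # placeholder relations

idx = {v: i for i, v in enumerate(nodes)}
n = len(nodes)
A = np.zeros((n, n), dtype=int)      # adjacency matrix: (A)_uv = 1 iff {u, v} is an edge
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1
degree = A.sum(axis=1)               # d(v): number of incident edges
```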
2.2 Domains
In this context, a domain is a set of information units from the same field or area of knowledge. Domains exist with different granularity and thus can be (partially) ordered in a hierarchical way from specific to general. An example is provided by the domains of quantum physics, physics in general, and science. Consequently, the granularity of a domain depends on a specific point of view, which can be a very local one. Due to their hierarchical nature, information units can belong to several domains which are not necessarily related. E.g. the eagle belongs to the animal domain and in addition to the unrelated coat of arms domain. Intuitively, a set of highly interconnected nodes indicates an intense interrelation that should be interpreted as a common domain. While this is a sound
assumption when connections express similarities between the involved concepts, it is not true when links express other semantic relations. Consider for example scientific articles approaching a common problem. The similarity of these articles is not necessarily reflected by mutual references, especially if they were written at the same time. However, they will very likely share a number of references. Consequently, we derive domains from common neighborhoods instead of relying on direct connections between information units. This allows domains to be identified when the connections express either references or similarities, since densely connected nodes also have similar neighborhoods.

Domain Recovery. Two information units that share all (or, more realistically, almost all) their connections to other information units should therefore belong to a common domain. Since they are in this respect indistinguishable, and their relations form the sole basis for our reasoning about them, all possibly identifiable domains have to contain either both or none of them. We will present a node similarity that expresses this property and relaxes this condition. Recursive merging of nodes based on this similarity leads to a merge tree as produced by hierarchical clustering. Consequently, we consider the inner nodes of this merge tree as candidates for domains. Note that this clustering process is distinguished from classical graph clustering by the employed node similarity. The resulting domains form a hierarchy on the information units which is similar to an ontology, i.e. considering two arbitrary domains, either one domain is completely contained in the other, or they are disjoint. Clearly, a number of domains could remain unidentified, since the set of domains is not restricted to hierarchies but could also contain partially overlapping domains. We consider this an unavoidable approximation for now, posing the extraction of domains as a separate problem.

2.3 Bisociations
A connection - usually indirect - between information units from multiple, otherwise unrelated domains is called bisociation in contrast to associations that connect information units within the same domain. The term was introduced by Koestler [7] in a theory to describe the creative act in humor, science and art. An example of a creative discovery triggered by a bisociation is the theory of electromagnetism by Maxwell [9] that connects electricity and magnetism. Up to now, three different patterns of bisociation have been described in this context: bridging concepts, bridging graphs and structural similarity [8]. Here we focus on the discovery of bridging graphs, i.e. a collection of information units and connections providing a “bisociative” relation between diverse domains. Among the arbitrary bisociations one might find, not all are going to be interesting. To assess their interestingness, we follow Boden [2] defining a creative idea in general as new, surprising, and valuable. All three criteria depend on a specific reference point: A connection between two domains might be long known to some specialists but new, surprising, and hopefully valuable to a specific observer, who is not as familiar with the topic. To account for this, Boden [2]
defines two types of creativity, namely H-creativity and P-creativity. While H-creativity describes globally (historically) new ideas, P-creativity (psychological) limits the demand for novelty to a specific observer. Our findings are most likely to be P-creative, since the found connections have to be indicated by the analyzed data in advance. However, a novel combination of information sources could even lead to H-creative bisociations. Analogous to novelty, the value of identified bisociations is a semantically determined property and strongly depends on the viewer's perspective. Since both novelty and value cannot be judged automatically, we leave their evaluation to the observer. In contrast, the potential surprise of a bisociation can be interpreted as the unlikeliness of a connection between the corresponding domains. We will express this intuition in more formal terms and use it as a guideline for an initial evaluation of possible bisociations.

Identifying Bisociations. Based on these considerations, we now characterize the cases where a connection between two domains forms a bisociation. In the graph representation, two domains are connected either directly by edges between their nodes or, more generally, by nodes that are connected to both domains, the bridging nodes. These connecting nodes or edges bridge the two domains, and together with the connected domains they form a bisociation candidate:

Definition 1 (Bisociation Candidate). A bisociation candidate is a set of two domains and their connection within the network.

Since it is impossible to precisely define what a surprising bisociation is, we rather define properties that distinguish promising bisociation candidates: exclusiveness, size, and balance. These can be seen as technical demands derived from a more information-scientific view as expressed e.g. in [5]: in Ford's view, the creativity of a connection between two domains is related to (i) the dissimilarity of the connected domains and (ii) the level of abstraction on which the connection is established. In the following we try to transport these notions into graph-theoretic terms by capturing them in technical definitions. Therein we interpret the dissimilarity of two domains as their mutual reachability, restricted to short connections: either directly by edges linking nodes of the different domains or indirectly by nodes connected to both domains. Thus dissimilarity relates to the exclusiveness of the bisociation candidate: maximal dissimilarity is obviously rendered by two completely unconnected domains, closely followed by "minimally connected" domains. While the former case obviously does not yield a bridging-graph-based bisociation (i.e. the connection itself is missing), the latter is captured by exclusiveness. Exclusiveness states that a bisociation is a rare connection between the two domains, reflecting the fact that bisociations are surprising connections between dissimilar domains. At the same time it excludes local exclusivity caused by nodes of high degree, which connect almost everything, even unrelated domains, without providing meaningful connections.

Definition 2 (Exclusiveness). A bisociation candidate is exclusive iff its domains are bridged by a graph that is small in relation to the domains and which provides only few connections that are focused on the two domains.
This can additionally be related to connection probabilities: consider the probability that two nodes from diverse domains are related by a direct link or an intermediate node. If only a few of these relations exist between the domains, the probability that such a pair of randomly chosen information units is connected is low and thus the surprise or unlikeliness is high. Directly entangled with this argument is the demand for size: a connection consisting of only a few nodes and links becomes less probable with growing domain sizes. In addition, a relation between two very small domains is hard to judge. It could be an expression of their close relation being exclusive only due to the small size of the connected domains. In that case the larger domains containing these two would show even more relations. It could also be an exclusive link due to domain dissimilarity. However, this situation would in turn be revealed when considering the larger domains, since these would also be exclusively connected. In essence, the exclusiveness of such a connection is pointless if the connected domains are very small, while it is amplified by domains of larger size. We formalize this in the following definition: Definition 3 (Size). The size of a bisociation candidate is the number of nodes in the connected domains. In terms of [5] the demand for size relates to the level of abstraction. Obviously a domain is more abstract than its subdomains and thus an exclusive link between larger (i.e. more abstract) domains is a more promising bisociation than a link between smaller domains. Finally, the balance property assures that we avoid the situation of a very small domain attached to a large one: Definition 4 (Balance). A bisociation candidate is balanced iff the connected domains are of similar size. In addition, domains of similar size tend to be of similar granularity and are thus likely to be on comparable levels of abstraction. Thereby the demand for balance avoids exclusive links to small subdomains that are actually part of a broader connection between larger ones. Summarizing, a bisociation candidate is promising if it is exclusive, of reasonable size, and balanced.
3 Finding and Assessing Bisociations
In this section, we translate the demands described in Section 2 into an algorithm for the extraction and rating of bisociations. Therein we follow the previously indicated division of tasks: (i) domain extraction and (ii) scoring of bisociation candidates.

3.1 Domain Extraction
As described in Section 2, domain affiliation of nodes is reflected by similar direct and indirect neighborhoods in the graph. Thus comparing and grouping nodes
based on their neighborhoods yields domains. In the following, we establish the close relation of a node similarity measure called activation similarity [12] to the demands described above. Based on this similarity, we show in a second part how domains can be found using hierarchical clustering.

Activation similarity. The employed node similarity is based on spreading activation processes in which initially one node is activated. The activation spreads iteratively from the activated node, along incident edges, to adjacent nodes and activates them to a certain degree as well. Given that the graph is connected and not bipartite, the process converges after sufficiently many iterations. The final activation states are determined by the principal eigenvector of the adjacency matrix of the underlying graph, as shown in [1]. Activation states of all nodes at a certain time k are represented by the activation vector a^(k) ∈ R^n defined by a^(k) = A^k a^(0) / ||A^k a^(0)||_2, where the value a_v^(k) (a^(k) at index v) is the activation level of node v ∈ V. Then a_v^(k)(u) represents the activation of node v at time k, induced by a spreading activation process started at node u, i.e. with a_u^(0) = 1 and a_v^(0) = 0 for v ≠ u. This reflects the relative (due to normalization) reachability of node v from node u via walks of length k. More precisely, it represents the weighted fraction of weighted walks of length k from u to v among all walks of length k started at u. In order to consider more than just walks of a certain length, the activation vectors are normalized and accumulated with an additional decay α ∈ [0, 1) to decrease the impact of longer walks. The accumulated activation vector of node u is then defined by

    â*(u) = D^{-1/2} Σ_{k=1}^{kmax} α^k a^(k)(u),

with D = diag(d(v_1), ..., d(v_n)) being the degree matrix and kmax the number of spreading iterations. The degree normalization is useful to account for nodes of a very high degree; these are more likely to be reached and would thus distort similarities if not taken care of. The value â*_v(u) represents the (normalized) sum of weighted walks of lengths 1 ≤ k ≤ kmax from u to v, proportional to all weighted walks of these lengths starting at u, and thus the relative reachability from u to v. In essence, the vector â*(v) describes the reachability of other nodes from v and thereby its generalized neighborhood. On this basis, we use the activation similarity σ_act(u, v) = cos(â*(u), â*(v)) of nodes u and v to compare their neighborhoods. In case of identical neighborhoods, activation spreads identically, resulting in a similarity of 1. If the same nodes can be reached similarly from u and v, the similarity between them is high, which corresponds with our assumption about the properties of domains. As usual, we use the corresponding distance 1 − σ_act(u, v) for hierarchical clustering.

Domain identification. We employ hierarchical clustering for domain identification using Ward's linkage method [13], which minimizes the sum of squared distances within a cluster. This tends to produce compact clusters and to merge clusters of similar size, and thus corresponds well with the notion of a domain. First of all, we would expect a certain amount of similarity between arbitrary information units within a domain and thus a compact shape. Further, clusters of
similar size are likely to represent domains on the same level of granularity and thus merging those corresponds to building upper-level domains. The resulting merge tree is defined as follows:

Definition 5 (Merge tree). A merge tree T = (V_T, E_T) for a graph G = (V, E) is a tree produced by a hierarchical clustering with node set V_T = V ∪ Λ, where Λ is the set of clusters obtained by merging two nodes, a node and a cluster, or two clusters. E_T describes the merging structure: {{u, λ}, {v, λ}} ⊆ E_T iff the nodes or clusters u and v are merged into cluster λ ∈ Λ.

However, not all clusters in the hierarchy are good domain candidates. If a cluster is merged with a single node, the result is unlikely to be an upper-level domain. Most likely, it is just an expansion of an already identified domain resulting from agglomerative clustering. These considerations lead to the domain definition:

Definition 6 (Domain). A cluster δ1 is a domain iff in the corresponding merge tree it is merged with another cluster: δ1 ∈ Λ is a domain ⇔ ∃ δ2, κ ∈ Λ such that {{δ1, κ}, {δ2, κ}} ⊆ E_T.

I.e. a cluster is a domain, if it is merged with another cluster.
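A compact sketch of these two steps is given below; it is our own illustration, assuming an adjacency matrix A as introduced in Section 2.1 and using SciPy's standard Ward linkage as a stand-in for the clustering step, with α and kmax as free parameters:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

def accumulated_activation(A, alpha=0.3, kmax=10):
    """Row u holds the accumulated activation vector of a process started at node u."""
    n = A.shape[0]
    deg = np.maximum(A.sum(axis=1), 1)                # guard against isolated nodes
    acc = np.zeros((n, n))
    a = np.eye(n)                                     # a^(0): unit activation per start node
    for k in range(1, kmax + 1):
        a = a @ A                                     # spread activation one step further
        norms = np.linalg.norm(a, axis=1, keepdims=True)
        a = a / np.where(norms == 0, 1.0, norms)      # normalize each a^(k)
        acc += (alpha ** k) * a                       # accumulate with decay alpha^k
    return acc / np.sqrt(deg)                         # degree normalization D^(-1/2)

def activation_distance(A, alpha=0.3, kmax=10):
    """Pairwise distances 1 - cos(activation similarity)."""
    X = accumulated_activation(A, alpha, kmax)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    return 1.0 - X @ X.T

# Merge tree over the nodes; its inner nodes are the domain candidates.
Z = linkage(squareform(activation_distance(A), checks=False), method="ward")
```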
3.2 Scoring Bisociation Candidates
In the next step, we iterate over all pairs of disjoint domains and construct a bisociation candidate for each pair by identifying their bridging nodes:

Definition 7 (Bridging nodes). Let δ1 and δ2 be two domains derived from the merge tree of the graph G = (V, E). A set of bridging nodes bn(δ1, δ2) is a set of nodes that are connected to both domains: bn(δ1, δ2) = {v ∈ V : ∃ {v, u1}, {v, u2} ∈ E with u1 ∈ δ1, u2 ∈ δ2}.

Note that this definition includes nodes belonging to one of the two domains, thus allowing direct connections between nodes of these domains. We now define the b-score, expressing the combination of exclusiveness, size, and balance as defined in Section 2. We therefore consider each property separately and combine them into an index at the end. Exclusiveness could be directly expressed by the number of nodes in bn(δ1, δ2). However, this is not a sufficient condition. Nodes of high degree are likely to connect different domains, maybe even some of them exclusively. Nevertheless, such nodes are unlikely to form good bisociations since they are not very specific. On the other hand, bridging nodes providing only a few connections at all (and thus a large fraction of them within δ1 and δ2) tend to express a very specific connection. Since we are only interested in the latter case, the natural way of measuring exclusiveness is by using the inverse of the sum of the bridging nodes' degrees: 2 / Σ_{v ∈ bn(δ1,δ2)} d(v). The 2 in the numerator ensures that this quantity is bound to the interval [0, 1], with 1 being the best possible value. The balance property is accounted for by relating the domain sizes in a fraction: min{|δ1|, |δ2|} / max{|δ1|, |δ2|}, again
bound to [0, 1], with 1 expressing perfect balance. Finally, the size property is integrated as the sum of the domain sizes. As described above, a combination of all three properties is a necessary prerequisite for an interesting bisociation. Therefore, our bisociation score is a product of the individual quantities. Only in the case of bn(δ1, δ2) = ∅ is our measure undefined. However, this situation is only possible if the domains are unconnected, so we define the score to be 0 in this case. For all non-trivial cases the score has strictly positive values and is defined as follows:

Definition 8 (b-score). Let δ1 and δ2 be two domains; then the b-score of the corresponding bisociation candidate is

    b-score(δ1, δ2) = ( 2 / Σ_{v ∈ bn(δ1,δ2)} d(v) ) · ( min{|δ1|, |δ2|} / max{|δ1|, |δ2|} ) · ( |δ1| + |δ2| ).
The above definition has two important properties. Firstly, it has an intuitive interpretation: In our opinion, an ideal bisociation is represented by two equally sized domains connected directly by a single edge or indirectly by a node connected to both domains. This optimizes the b-score, leaving the sum of the domain sizes as the only criterion for the assessment of this candidate. Further, every deviation from this ideal situation results in a deterioration of the b-score. Secondly, the calculation of the b-score only involves information about the two domains and their neighborhoods and not the whole graph, which is important when the underlying graph is very large.
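For illustration, Definition 8 translates almost literally into code; the sketch below is ours and assumes the graph is given as a dictionary mapping each node to its set of neighbors, and each domain as a set of nodes:

```python
def bridging_nodes(adj, d1, d2):
    """Nodes with at least one neighbor in d1 and one in d2 (Definition 7)."""
    return {v for v, nbrs in adj.items() if nbrs & d1 and nbrs & d2}

def b_score(adj, d1, d2):
    """Exclusiveness * balance * size, as in Definition 8; 0 for unconnected domains."""
    bn = bridging_nodes(adj, d1, d2)
    if not bn:
        return 0.0
    exclusiveness = 2.0 / sum(len(adj[v]) for v in bn)        # inverse summed degree
    balance = min(len(d1), len(d2)) / max(len(d1), len(d2))   # similar domain sizes
    return exclusiveness * balance * (len(d1) + len(d2))      # size: |d1| + |d2|
```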
3.3 Complexity and Scalability
To compute the pairwise activation similarities, the accumulated activation vectors for all nodes need to be determined. This process is dominated by matrix-vector multiplications, yielding a complexity of O(n³). Note, however, that exploiting the sparsity of the network and the quick convergence of the power iteration leads to a much more efficient calculation. The complexity of the overall process is dominated by the evaluation of bisociation candidates. Here, we propose to prune the set of candidates by removing small domains and filtering out highly unbalanced candidates. E.g., in the example of Section 4, roughly 75% of all bisociation candidates involved domains with fewer than 4 nodes.
4 Preliminary Evaluation
To demonstrate our approach, we applied our method to the Schools-Wikipedia (2008/09) dataset (see [12] for a detailed description). Following the described method, we evaluated every pair of disjoint domains and manually explored the top-rated bisociation candidates to verify the outcome of our method.
The dataset consists of a subset of the English Wikipedia with about 5500 articles. For our experiment, we consider each article as a separate unit of information and model it as a node. We interpret cross-references as relations and introduce an undirected edge whenever one article references another. The resulting graph is connected except for two isolated nodes, which we removed beforehand. For the remaining nodes we extracted the domains as described. To focus on the local neighborhood of nodes we used the decay value α = 0.3. Due to this decay and the graph structure, the activation processes converged quickly, allowing a restriction to kmax = 10 iterations for each process. This choice seems arbitrary, but we ensured that additional iterations do not contribute significantly to the distances. First of all, the values of the following iterations tend to vanish due to the exponentially decreasing scaling factor, e.g. 0.3^10 in the last iteration. In addition, the order of distances between node pairs is unchanged by the additional iterations. Altogether we extracted 4,154 nested domains, resulting in 8,578,977 bisociation candidates. A part of a dendrogram involving birds is shown in Figure 1 to illustrate that our clustering yields conceptually well-defined domains. In the example, birds of prey such as hawk, falcon, eagle etc. end up in the same cluster with carnivorous birds such as e.g. the vulture, and are finally combined with non-carnivorous birds into a larger cluster. This example further illustrates that the nodes of a good domain are not necessarily connected, as there are few connections within the sets of birds, and yet they share a number of external references. Since the b-scores of the best bisociation candidates (Figure 2) decrease quickly, we considered only the top-rated pairs. The bisociation candidate with the best b-score is shown in Figure 3a. One of its domains contains composers while the other incorporates operating systems. These two seemingly unrelated domains are connected by Jet Set Willy, a computer game with title music adapted from the first movement of Beethoven's Moonlight Sonata and a level editor for Microsoft Windows. Except for the small domain sizes, Jet Set Willy meets all formulated demands. Following three variants of the Jet Set Willy bisociation, the next best candidate is shown in Figure 3b. Nine Million Bicycles is a song connecting a south-east Asia domain with an astronomy domain. While Beijing is mentioned within the song itself, the corresponding article discusses lyrical errors concerning its statements about astronomical facts. To us these relations were new and surprising, though one might argue their value. An example of a bisociation with more than one bridging node is shown in Figure 3c. The substantially lower b-score of bisociations with more than one bridging node is a result of their lower exclusiveness. An example of a poor bisociation can be seen in Figure 3d. Clearly, this is neither balanced nor exclusive (countries have a very high degree in Schools-Wikipedia), while its size is comparable to the other described candidates. The above examples illustrate that our index discriminates well with respect to exclusiveness and balance. A detailed examination showed in addition that size
Fig. 1. Sub-dendrogram of articles about birds

Fig. 2. Distribution of the b-score for the 200 top-rated bisociation candidates
Fig. 3. Example bisociations and their b-scores (see text for details): (a) b-score = 2.59, (b) b-score = 2.00, (c) b-score = 0.35, (d) b-score = 0.003
is negatively correlated with both other index components. This and the limited size of the dataset could explain the small sizes of the best rated candidates. Our preliminary evaluation indicates the potential of the presented method to detect bisociations based on the analysis of the graph structure. Even though Schools-Wikipedia is a reasonable dataset for evaluation purposes, one cannot expect to find valuable or even truly surprising bisociations therein since it is limited to handpicked, carefully administrated common knowledge, suitable for children. We opted to manually evaluate the results since the value of a bisociation is a semantic property and highly subjective, inhibiting an automatic evaluation - although an evaluation on a dataset with manually tagged bisociations would be possible, if such a dataset were available. An evaluation using synthetic data is complicated by the difficulty of realistic simulation and could in addition introduce an unwanted bias on certain types of networks, distorting the results.
5 Related Work
Although a wealth of techniques solving different graph mining problems already exists (see e.g. [4] for an overview), we found none to be suitable for the problem addressed here. Most of them focus on finding frequent subgraphs, which is not of concern here. Closely related to our problem are clustering and the identification of dense substructures, since they identify structurally described parts of the graph. Yet bisociations are more complicated structures due to a different motivation and therefore require a different approach to be detected. The exclusiveness of a connection between different groups is also of concern in the analysis of social networks. Especially structural holes and the notion of betweenness seem to address similar problems at first glance. Burt [3] regards the exclusiveness of connections in a network of business contacts as part of the capital a player brings to the competitive arena. He terms such a situation a structural hole that is bridged by the player. However, in his index only the very local view of the player is integrated, ignoring the structure of the connected domains. Further, his index would implicitly render domains a product of only direct connections between concepts, whereas we showed earlier that a more specific concept of similarity is advisable. A global measure for the amount of control over connections between other players is provided by betweenness [6]. Analogous to structural holes, this concept captures one important aspect while missing the rest and thus fails to capture the overall concept. Serendipitous discoveries strongly overlap with the bisociation concept, since the involved fortuitousness is often caused by the connection of dissimilar domains of knowledge. Different approaches (e.g. [10]) exist to integrate this concept in recommender systems. They differ from bisociation detection in that they concentrate on users' interests and not domains in general, and are thus designed for a different setting and a different notion of optimality. However, none of the mentioned approaches provides a coherent, formal setting applicable to bisociation detection.
6 Conclusion
We presented an approach for the discovery of potentially interesting, domain-crossing associations, so-called bisociations. For this purpose we developed a formal framework to describe potentially interesting bisociations and corresponding methods to identify domains and rank bisociations according to interestingness. Our evaluation on a well-understood benchmark data set has shown promising first results. We expect that the ability to point the user to potentially interesting, truly novel insights in data collections will play an increasingly important role in modern data analysis.

Acknowledgements. This research was supported by the DFG under grant GRK 1042 (Research Training Group "Explorative Analysis and Visualization of Large Information Spaces") and the European Commission in the 7th Framework Programme (FP7-ICT-2007-C FET-Open, contract no. BISON-211898).
References
1. Berthold, M.R., Brandes, U., Kötter, T., Mader, M., Nagel, U., Thiel, K.: Pure spreading activation is pointless. In: Proceedings of CIKM, the 18th Conference on Information and Knowledge Management, pp. 1915–1919 (2009)
2. Boden, M.A.: Précis of the creative mind: Myths and mechanisms. Behavioral and Brain Sciences 17(3), 519–531 (1994)
3. Burt, R.S.: Structural holes: the social structure of competition. Harvard University Press, Cambridge (1992)
4. Cook, D.J., Holder, L.B.: Mining graph data. Wiley Interscience, Hoboken (2007)
5. Ford, N.: Information retrieval and creativity: Towards support for the original thinker. Journal of Documentation 55(5), 528–542 (1999)
6. Freeman, L.C.: A set of measures of centrality based upon betweenness. Sociometry 40, 35–41 (1977)
7. Koestler, A.: The Act of Creation. Macmillan, Basingstoke (1964)
8. Kötter, T., Thiel, K., Berthold, M.R.: Domain bridging associations support creativity. In: Proceedings of the International Conference on Computational Creativity, Lisbon, pp. 200–204 (2010)
9. Maxwell, J.C.: A treatise on electricity and magnetism. Nature 7, 478–480 (1873)
10. Onuma, K., Tong, H., Faloutsos, C.: Tangent: a novel, 'surprise me', recommendation algorithm. In: Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2009, pp. 657–666 (2009)
11. Poincaré, H.: Mathematical creation. Resonance 5(2), 85–94 (2000)
12. Thiel, K., Berthold, M.R.: Node similarities from spreading activation. In: Proceedings of the IEEE International Conference on Data Mining, pp. 1085–1090 (2010)
13. Ward Jr., J.H.: Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association 58(301), 236–244 (1963)
Collaboration-Based Function Prediction in Protein-Protein Interaction Networks

Hossein Rahmani¹, Hendrik Blockeel¹,², and Andreas Bender³

¹ Leiden Institute of Advanced Computer Science, Universiteit Leiden
² Department of Computer Science, Katholieke Universiteit Leuven
³ Unilever Centre for Molecular Science Informatics, Department of Chemistry, University of Cambridge
Abstract. The cellular metabolism of a living organism is among the most complex systems that man is currently trying to understand. Part of it is described by so-called protein-protein interaction (PPI) networks, and much effort is spent on analyzing these networks. In particular, there has been much interest in predicting certain properties of nodes in the network (in this case, proteins) from the other information in the network. In this paper, we are concerned with predicting a protein’s functions. Many approaches to this problem exist. Among the approaches that predict a protein’s functions purely from its environment in the network, many are based on the assumption that neighboring proteins tend to have the same functions. In this work we generalize this assumption: we assume that certain neighboring proteins tend to have “collaborative”, but not necessarily the same, functions. We propose a few methods that work under this new assumption. These methods yield better results than those previously considered, with improvements in F-measure ranging from 3% to 17%. This shows that the commonly made assumption of homophily in the network (or “guilt by association”), while useful, is not necessarily the best one can make. The assumption of collaborativeness is a useful generalization of it; it is operational (one can easily define methods that rely on it) and can lead to better results.
1 Introduction
In recent years, much effort has been invested in the construction of protein-protein interaction (PPI) networks [11]. Much can be learned from the analysis of such networks with respect to the metabolic and signalling processes present in an organism, and the knowledge gained can also be prospectively employed, e.g. to predict which proteins are suitable drug targets according to an analysis of the resulting network [7]. One particular machine learning task that has been considered is predicting the functions of proteins in the network. A variety of methods have been proposed for predicting the functions of proteins. A large class of them relies on the assumption that interacting proteins tend to have the same functions (this is sometimes called "guilt by association"; it is also related to the notion of homophily, often used in other areas). In this
paper we investigate a generalized version of this notion. We rely on the fact that topologically close proteins tend to have collaborative functions, not necessarily the same functions. We define collaborative functions as pairs of functions that frequently interface with each other in different interacting proteins. In this way, the assumption becomes somewhat tautological (this definition of collaborative functions implies that the assumption cannot be wrong), but the question remains whether one can, through analysis of PPI networks, correctly identify collaborative functions, and how much gain in predictive accuracy can be obtained by this. We propose two methods that predict protein functions based on function collaboration. The first method calculates the collaboration value of two functions using an iterative reinforcement strategy; the second method adopts an artificial neural network for this purpose. We perform a comprehensive set of experiments that reveal a significant improvement in F-measure values compared to existing methods. The rest of the paper is organized as follows. Section 2 briefly reviews approaches that have been proposed before to solve this problem. We present the proposed collaboration-based methods in Section 3, and evaluate them in Section 4. Section 5 contains our conclusions.
2 Related Work
Various approaches have been proposed for determining the protein functions in PPI networks. A first category contains what we could call structure-based methods. These rely on the local or global structure of the PPI network. For instance, Milenkovic et al. [8] describe the local structure around a node by listing for a fixed set of small graph structures (“graphlets”) whether the node is part of such a graphlet or not. Rahmani et al. [9] describe nodes by indicating their position in the network relative to specific important proteins in the network, thus introducing information about the global graph structure. The above methods do not use information about the functions of other nodes to predict the functions of a particular protein. Methods that do use such information form a second category. A prototypical example is the Majority Rule approach [10]. This method simply assigns to a protein the k functions that occur most frequently among its neighbors (with k a parameter). One problem with this approach is that it only considers neighbors of which the function is already known, ignoring all others. This has been alleviated by introducing global optimization-based methods; these try to find global function assignments such that the number of interacting pairs of nodes without any function in common is minimal [13,12]. Another improvement over the original Majority Rule method consists of taking a wider neighborhood into account [2]. Level k interaction between two proteins means that there is a path of length k between them in the network. Proteins that have both a direct interaction and shared level-2 interaction partners have been found more likely to share functions [2]. Taking this further, one can make the assumption that in dense regions (subgraphs with
many edges, relative to the number of nodes) most nodes have similar functions. This has led to Functional Clustering approaches, which cluster the network (with clusters corresponding to dense regions), and subsequently predict the functions of unclassified proteins based on the cluster they belong to [5,1]. A common drawback of the second category of approaches is that they rely solely on the assumption that neighboring proteins tend to have the same functions. It is not unreasonable to assume that proteins with one particular function tend to interact with proteins with specific other functions. We call such functions “collaborative” functions. Pertinent questions are: can we discover such collaborative functions, and once we know which functions tend to collaborate, can we use this information to obtain better predictions? The methods we propose next, try to do exactly this.
3 Two Collaboration-Based Methods
We propose two different methods. Each of them relies on the assumption that interacting proteins tend to have collaborative functions. They try to estimate from the network which functions often collaborate and, next, try to predict unknown functions of proteins using this information. In the first method, we first extract the collaborative function pairs from the whole network. Then, in order to make a prediction for an unclassified protein, we extract the candidate functions based on the position of the protein in the network. Finally, we calculate the score of each candidate function. High-score candidate functions are those which collaborate more with the neighborhood of the unclassified protein. The second method adopts a neural network for modeling the function collaboration in PPI networks. We use the following notation and terminology. The PPI network is represented by a protein set P and an interaction set E. Each e_pq ∈ E shows an interaction between two proteins p ∈ P and q ∈ P. Let F be the set of all the functions that occur in the PPI network. Each classified protein p ∈ P is annotated with an |F|-dimensional vector FS_p that indicates the functions of this protein: FS_p(f_i) is 1 if f_i ∈ F is a function of protein p, and 0 otherwise. FS_p can also be seen as the set of all functions f_i for which FS_p(f_i) = 1. Similarly, the |F|-dimensional vector NB_p describes how often each function occurs in the neighborhood of protein p. NB_p(f_i) = n means that among all the proteins in the neighborhood of p, n have function f_i. The neighborhood of p is defined as all proteins that interact with p.
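For concreteness, the FS_p and NB_p vectors can be assembled from an edge list as sketched below; this is our own illustration, with dictionaries standing in for the |F|-dimensional vectors:

```python
from collections import defaultdict

def build_vectors(interactions, functions_of, all_functions):
    """FS[p]: 0/1 indicator per function; NB[p]: function counts in p's neighborhood."""
    neighbors = defaultdict(set)
    for p, q in interactions:                    # each e_pq is an undirected interaction
        neighbors[p].add(q)
        neighbors[q].add(p)
    FS = {p: {f: int(f in funcs) for f in all_functions}
          for p, funcs in functions_of.items()}  # only classified proteins have FS_p
    NB = {p: {f: sum(f in functions_of.get(q, set()) for q in nbrs)
              for f in all_functions}
          for p, nbrs in neighbors.items()}
    return FS, NB, neighbors
```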
3.1 A Reinforcement Based Function Predictor
Consider the Majority Rule method. This method considers as candidate functions (functions that might be assigned to a protein) all the functions that occur in its neighborhood, and ranks them according to their frequency in that neighborhood (the most frequent ones will eventually be assigned).
Our method differs in two ways. First, we consider extensions of Majority Rule's candidate function strategy. Instead of only considering functions in the direct neighborhood as candidates, we can also consider functions that occur at a distance of at most k from the protein. We consider k = 1, 2, 3, 4 and call these strategies First-FL (first function level; this is Majority Rule's original candidate strategy), Second-FL, Third-FL and Fourth-FL. Finally, the All-FL strategy considers all functions as candidate functions. The second difference is that our method ranks functions according to a "function collaboration strength" value, which is computed through iterative reinforcement, as follows. Let FuncColVal(f_i, f_j) denote the strength of collaboration between functions f_i and f_j. We consider each classified protein p ∈ P in turn. If function f_j occurs in the neighborhood of protein p (i.e., NB_p(f_j) > 0), then we increase the collaboration value between f_j and all the functions in FS_p:

    ∀f_i ∈ FS_p:  FuncColVal(f_i, f_j) += (NB_p(f_j) · R) / support(f_j)

If NB_p(f_j) = 0, we decrease the collaboration value between function f_j and all the functions belonging to FS_p:

    ∀f_i ∈ FS_p:  FuncColVal(f_i, f_j) −= P / support(f_j)

support(f_j) is the total number of times that function f_j appears on the side of an edge e_pq in the network. R and P are "Reward" and "Punish" coefficients determined by the user. Formula (1) assigns a collaboration score to each candidate function f_c:

    Score(f_c) = Σ_{f_j ∈ F} NB_p(f_j) · FuncColVal(f_j, f_c)        (1)
High-score candidate functions collaborate better with the functions observed in the neighborhood of p and are more likely to be predicted as p's functions. We call the above method "Reinforcement Based Collaborative Function Prediction" (RBCFP), as it is based on reinforcing collaboration values between functions as they are observed.
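The following sketch (ours, using the data structures from the earlier snippet) shows both stages: the reinforcement of FuncColVal over all classified proteins and the scoring of a candidate function according to Formula (1); R and P are the reward and punish coefficients:

```python
from collections import defaultdict

def learn_collaboration(proteins, FS, NB, all_functions, support, R=1.0, P=2.0):
    """Iteratively reinforce FuncColVal(f_i, f_j) from every classified protein."""
    col = defaultdict(float)
    for p in proteins:
        for fj in all_functions:
            if NB[p][fj] > 0:                    # f_j seen in p's neighborhood: reward
                for fi, has in FS[p].items():
                    if has:
                        col[(fi, fj)] += NB[p][fj] * R / support[fj]
            else:                                # f_j absent from the neighborhood: punish
                for fi, has in FS[p].items():
                    if has:
                        col[(fi, fj)] -= P / support[fj]
    return col

def score(candidate, NB_p, col, all_functions):
    """Formula (1): collaboration of a candidate function with p's neighborhood."""
    return sum(NB_p[fj] * col.get((fj, candidate), 0.0) for fj in all_functions)
```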
3.2 SOM Based Function Predictor
The second approach presented in this work employs an artificial neural network, and is inspired by self-organizing maps (SOMs). From the PPI network, a SOM is constructed as follows. We make a one-layered network with as many inputs as there are functions in the PPI network, and equally many output neurons. Each input is connected to each output. The network is trained as follows. All weights are initialized to zero. Next, the training procedure iterates multiple times over all proteins in the PPI network. Given a protein p with function vector FS_p and neighborhood vector NB_p, the network's input vector is set to NB_p, and for
each j for which FS_p(f_j) = 1, the weights of the j'th output neuron are adapted as follows:

    W_{ij,new} = W_{ij,current} + LR · (NB_p(i) − W_{ij,current})        (2)
where W_ij is the weight of the connection from input i to output j, and LR (learning rate) is a parameter. Intuitively, this update rule makes the weight vector W_{·j} of output j gradually converge to a vector that is representative of the NB vectors of all proteins that have f_j as one of their functions. Once the network has been trained, predictions are made by comparing the NB vector of a new protein q to the weight vectors of the outputs corresponding to candidate functions, and predicting the k functions for which the weight vector is closest to NB_q (using Euclidean distance), with k a parameter determined by the user. Normally, in a SOM, the weights of the winner neurons (the output neurons whose weights are closest to the input) and of neurons close to them in the SOM lattice are adjusted towards the input vector. The difference with our method is that our learning method is supervised: we consider as "winner neurons" all output neurons corresponding to the functions of the protein. As usual in SOMs, the magnitude of the weight update decreases with time and with distance from the winner neuron. Here, we take some additional parameters into consideration: LearningRate (LR), DecreasingLearningRate (DecLR) and TerminateCriteria (TC). LR determines how strongly the weights are pulled toward the input vector, and DecLR determines how much LR decreases with each iteration. TC determines when the training phase of the SOM terminates: it indicates the minimum amount of change required in one iteration; when there is less change, the training procedure stops. Algorithm 3.1 summarizes the training phase of the SOM method.
Algorithm 3.1: SOM Training Phase (LR, DecLR, TC)

    procedure SOM-Training(LR, DecLR, TC)
        repeat
            maxChangeInNetworkWeights ← 0
            for each classified protein p ∈ P do
                build NB_p
                for each f_i ∈ F do
                    inputNeuron(i) ← NB_p(f_i)
                for each f_j ∈ FS_p do
                    apply Formula (2)
                    update maxChangeInNetworkWeights
            LR ← LR · DecLR
        until (maxChangeInNetworkWeights < TC)
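A minimal NumPy version of the training loop and the prediction step is sketched below; it is our own reading of the method, in which the update of Formula (2) is applied component-wise over the inputs so that the weight vector of output j moves toward NB_p, and the candidate handling is simplified:

```python
import numpy as np

def train_som(NB, FS, lr=1.0, dec_lr=0.9, tc=10.0):
    """NB, FS: arrays of shape (n_proteins, n_functions); returns the weight matrix W."""
    n_funcs = NB.shape[1]
    W = np.zeros((n_funcs, n_funcs))            # W[i, j]: weight from input i to output j
    while True:
        max_change = 0.0
        for nb, fs in zip(NB, FS):
            for j in np.flatnonzero(fs):        # winner neurons: the functions of p
                delta = lr * (nb - W[:, j])     # pull W[., j] toward the NB vector
                W[:, j] += delta
                max_change = max(max_change, np.abs(delta).max())
        lr *= dec_lr                            # decrease the learning rate each pass
        if max_change < tc:                     # terminate when weights barely change
            return W

def predict(W, nb_q, candidate_functions, k=3):
    """The k candidate functions whose weight vector is closest (Euclidean) to NB_q."""
    dists = {j: np.linalg.norm(W[:, j] - nb_q) for j in candidate_functions}
    return sorted(dists, key=dists.get)[:k]
```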
4 Experiments

4.1 Dataset and Annotation Data
We apply our method to three S. cerevisiae PPI networks: DIP-Core [3], VonMering [14] and Krogan [6], which contain 4400, 22000 and 14246 interactions among 2388, 1401 and 2708 proteins, respectively. The protein function annotations for the S. cerevisiae PPI networks were obtained from the Yeast Genome Repository [4]. Functions can be described at different levels of detail. For example, two functions 11.02.01 (rRNA synthesis) and 11.02.03 (mRNA synthesis) are considered the same up to the second function level (i.e., 11.02 = RNA synthesis), but not on deeper levels. The function hierarchy we use contains five different levels, which we will refer to as F-L-i. Thus, for each dataset, five different versions can be produced, one for each function level.

4.2 Parameter Tuning
Our methods have parameters for which good values need to be found. Parameters can be tuned by trying out different values and keeping the one that performed best. Obviously, such tuning carries a risk of overestimating the performance of the tuned method when it is evaluated on the same dataset for which it was tuned. To counter this effect, we tuned our methods on the Krogan dataset labeled with F-L-1 functions (the most general level of functions in the function hierarchy), and evaluated them with the same parameter values on the other datasets; results for DIP-Core and VonMering are therefore unbiased. Conversely, for the Krogan dataset, we used parameter settings tuned on DIP-Core. This way, all the results are unbiased. We tuned the parameters manually, using the following simple and non-exhaustive strategy. Parameters were tuned one at a time. After finding the best value for one parameter, it was fixed and other parameters were tuned using that value. For parameters not yet fixed when tuning a parameter p, we tried multiple settings and chose a value for p that appeared to work well on average. With this approach, the order in which the parameters are tuned can in principle influence the outcome, but we found this influence to be very small in practice. Fig. 1 shows the effects of the consecutive tuning of the different parameters. The best value for the "Candidate function strategy" parameter is Second-FL; using this value we found a best DecLR value of 0.9, and using these two values we found an optimum for TC at 10. For LR the default setting of 1 was used. "Candidate function strategy" = Second-FL, R = 1 and P = 2 turns out to be the best parameter setting for the RBCFP method.

4.3 Comparison to Previous Methods
In this section, we compare our collaboration-based methods (RBCFP and SOM) with similarity-based methods (Majority Rule and Functional Clustering) on the Krogan, VonMering and DIP-Core datasets, using average F-measure as the evaluation criterion. We perform a leave-one-out cross-validation, leaving out one protein at a time and predicting its functions from the remaining data. For each protein, we predict a fixed number of functions, namely three; this is exactly what was done in the Majority Rule approach we compare to, so our results are maximally comparable. In the proposed methods, we use the parameter values tuned in the previous section. Majority Rule (MR) selects the three most frequently occurring functions in the neighborhood of the protein in the network. Functional clustering (FC) methods differ mainly in their cluster detection technique. Once a cluster is obtained, simple methods are usually used for function prediction within the cluster. In our evaluation, we use the clusters from [4] (which were manually constructed by human experts). Fig. 2 compares collaboration-based and similarity-based methods on the Krogan, DIP-Core and VonMering datasets, respectively. F-L-i refers to the i'th function level in the function hierarchy. We compare the methods on five different function levels.

Fig. 1. Effects of tuning the "Candidate function strategy" parameter, the "Decreasing Learning Rate", and the "Termination Criteria" of the SOM network on the Krogan dataset. Second-FL with DecLR=0.9 and TC=10 produces the best result on Krogan.

Fig. 2. F-measures obtained by the new collaboration-based methods (SOM and RBCFP), compared to existing similarity-based methods (MR and FC) at five different function levels, for the Krogan, DIP-Core, and VonMering datasets. On all levels, collaboration-based methods predict functions more accurately than similarity-based methods.
Fig. 3. Difference in F-measure between the best collaboration-based and the best similarity-based method, averaged over three datasets, for five function levels. The difference increases as we consider more detailed function levels.
In all three datasets, collaboration-based methods predict functions more accurately than similarity-based methods. As we consider more detailed function levels, the difference between their performances increases. In order to get a general idea of the performance of the two method types at different function levels, we take the average F-measure difference between the collaboration-based and the similarity-based methods over the three datasets. Fig. 3 shows this average F-measure difference. For general function descriptions (first and second function levels), collaboration-based methods outperform the similarity-based methods by some 4 percent. For more specific function descriptions, for example function level 5, the performance difference between the two method types increases to up to 17 percent.

4.4 Extending Majority Rule
We identified the notion of collaboration-based prediction (as opposed to similarity-based prediction) as the main difference between our new methods and the ones we compare with. However, in the comparison with Majority Rule, there is another difference: while Majority Rule assigns only functions from the direct neighborhood to a protein, we found that using candidate functions from a wider neighborhood (including neighbors of neighbors) was advantageous. This raises the question whether Majority Rule can also be improved by making it consider a wider neighborhood. We tested this by extending Majority Rule so that it considers not only direct neighbors, but also neighbors at distance 2 or 3. We refer to these versions as MR(NB-Li). Fig. 4 shows the effect of considering a wider neighborhood in Majority Rule on the three datasets Krogan, VonMering and DIP-Core. There is no improvement in the Krogan and VonMering datasets, and only a small improvement (1%) in DIP-Core, for MR(NB-L2). This confirms that the improved predictions of our methods are due to using the new collaboration-based scores, and not simply to considering functions from a wider neighborhood.
Fig. 4. Extending Majority Rule by considering other function neighborhood levels. NB-Li represents the 1-, 2- or 3-neighborhood of the protein.
5 Conclusion
To our knowledge, this is the first study that considers function collaboration for the task of function prediction in PPI networks. The underlying assumption behind our approach is that a biological process is a complex aggregation of many individual protein functions, in which topologically close proteins have collaborative, but not necessarily the same, functions. We define collaborative functions as pairs of functions that frequently interface with each other in different interacting proteins. We have proposed two methods based on this assumption. The first method rewards the collaboration value of two functions if they interface with each other on the two sides of one interaction, and punishes the collaboration value if just one of the functions occurs on either side of an interaction. At prediction time, this method ranks candidate functions based on how well they collaborate with the neighborhood of the unclassified protein. The second method uses a neural-network-based method for the task of function prediction. The network takes as input the functions occurring in a protein's neighborhood, and outputs information about the protein's functions. We selected two methods, Majority Rule and Functional Clustering, as representatives of the similarity-based approaches. We compared our collaboration-based methods with them on three interaction datasets: Krogan, DIP-Core and VonMering. We examined up to five different function levels and found that classification performance according to F-measure indeed improved, sometimes by up to 17 percent, over the benchmark methods employed. Regarding the relative performance of the proposed methods, their classification performances are similar at the more general function levels, but the RBCFP method outperforms the SOM method at the more detailed function levels. Our results confirm that the notion of collaborativeness of functions, rather than similarity, is useful for the task of predicting the functions of proteins. The information about which functions collaborate can be extracted easily from a PPI network, and using that information leads to improved predictive accuracy.
These results may well apply in other domains, outside PPI networks. The notion of homophily is well-known in network analysis; it states that similar nodes are more likely to be linked together. The notion of collaborativeness, in this context, could also be described as “selective heterophily”. It remains to be seen to what extent this notion may lead to better predictive results in other types of networks. Acknowledgements. This research is funded by the Dutch Science Foundation (NWO) through a VIDI grant. At the time this research was performed, Andreas Bender was funded by the Dutch Top Institute Pharma, project number: D1-105.
References

1. Brun, C., Herrmann, C., Guénoche, A.: Clustering proteins from interaction networks for the prediction of cellular functions. BMC Bioinformatics 5, 95 (2004)
2. Chua, H.N., Sung, W., Wong, L.: Exploiting indirect neighbours and topological weight to predict protein function from protein-protein interactions. Bioinformatics 22(13), 1623-1630 (2006)
3. Deane, C.M., Salwiński, L., Xenarios, I., Eisenberg, D.: Protein interactions: two methods for assessment of the reliability of high throughput observations. Molecular & Cellular Proteomics: MCP 1(5), 349-356 (2002)
4. Guldener, U., Munsterkotter, M., Kastenmuller, G., Strack, N., van Helden, J., Lemer, C., et al.: CYGD: the comprehensive yeast genome database. Nucleic Acids Research 33(suppl. 1), D364+ (January 2005)
5. King, A.D., Przulj, N., Jurisica, I.: Protein complex prediction via cost-based clustering. Bioinformatics 20(17), 3013-3020 (2004)
6. Krogan, N.J., Cagney, G., Yu, H., Zhong, G., Guo, X., Ignatchenko, A., et al.: Global landscape of protein complexes in the yeast Saccharomyces cerevisiae. Nature 440(7084), 637-643 (2006)
7. Ma'ayan, A., Jenkins, S.L., Goldfarb, J., Iyengar, R.: Network analysis of FDA approved drugs and their targets. The Mount Sinai Journal of Medicine 74(1), 27-32 (2007)
8. Milenkovic, T., Przulj, N.: Uncovering biological network function via graphlet degree signatures. Cancer Informatics 6, 257-273 (2008)
9. Rahmani, H., Blockeel, H., Bender, A.: Predicting the functions of proteins in PPI networks from global information. JMLR Proceedings 8, 82-97 (2010)
10. Schwikowski, B., Uetz, P., Fields, S.: A network of protein-protein interactions in yeast. Nat. Biotechnol. 18(12), 1257-1261 (2000)
11. Stelzl, U., Worm, U., Lalowski, M., Haenig, C., Brembeck, F.H., Goehler, H., et al.: A human protein-protein interaction network: a resource for annotating the proteome. Cell 122(6), 957-968 (2005)
12. Sun, S., Zhao, Y., Jiao, Y., Yin, Y., Cai, L., Zhang, Y., et al.: Faster and more accurate global protein function assignment from protein interaction networks using the MFGO algorithm. FEBS Lett. 580(7), 1891-1896 (2006)
13. Vazquez, A., Flammini, A., Maritan, A., Vespignani, A.: Global protein function prediction from protein-protein interaction networks. Nat. Biotechnol. 21(6), 697-700 (2003)
14. von Mering, C., Krause, R., Snel, B., Cornell, M., Oliver, S.G., Fields, S., et al.: Comparative assessment of large-scale data sets of protein-protein interactions. Nature 417(6887), 399-403 (2002)
Mining Sentiments from Songs Using Latent Dirichlet Allocation

Govind Sharma and M. Narasimha Murty

Department of Computer Science and Automation, Indian Institute of Science, Bangalore, 560012, Karnataka, India
{govindjsk,mnm}@csa.iisc.ernet.in
Abstract. Song-selection and mood are interdependent. If we capture a song's sentiment, we can determine the mood of the listener, which can serve as a basis for recommendation systems. Songs are generally classified according to genres, which don't entirely reflect sentiments. Thus, we require an unsupervised scheme to mine them. Sentiments are classified into either two (positive/negative) or multiple (happy/angry/sad/...) classes, depending on the application. We are interested in analyzing the feelings invoked by a song, involving multi-class sentiments. To mine the hidden sentimental structure behind a song, in terms of "topics", we consider its lyrics and use Latent Dirichlet Allocation (LDA). Each song is a mixture of moods. Topics mined by LDA can represent moods. Thus we get a scheme of collecting similar-mood songs. For validation, we use a dataset of songs containing 6 moods annotated by users of a particular website.

Keywords: Latent Dirichlet Allocation, music analysis, sentiment mining, variational inference.
1 Introduction
With the swift increase in digital data, technology for data mining and analysis must keep pace. Effective mining techniques need to be set up in all fields, be it education, entertainment, or science. Data in the entertainment industry is mostly in the form of multimedia (songs, movies, videos, etc.). Applications such as recommender systems are being developed for such data, which suggest new songs (or movies) to the user based on previously accessed ones. Listening to songs has a strong relation with the mood of the listener. A particular mood can drive us to select some song; and a song can invoke some sentiments in us, which can change our mood. Thus, song-selection and mood are interdependent features. Generally, songs are classified into genres, which do not reflect the sentiment behind them and thus cannot exactly estimate the mood of the listener. Thus there is a need to build an unsupervised system for mood estimation which can further help in recommending songs. Subjective analysis of multimedia data is cumbersome in general. When it comes to songs, we have two parts, viz. melody and lyrics. The mood of a listener depends on
both the music and the lyrics. But for simplicity and other reasons mentioned in Sec. 1.2, we use the lyrics for text analysis. To come up with a technique to find sentiments behind songs based on their lyrics, we propose to use Latent Dirichlet Allocation (LDA), a probabilistic graphical model which mines hidden semantics from documents. It is a "topical" model that represents documents as bags of words and looks for semantic dependencies between words. It posits that documents in a collection come from k "topics", where a topic is a distribution over words. Intuitively, a topic brings together semantically related words. Finally, each document is a mixture of these k topics. Our goal is to use LDA over song collections, so as to get topics, which probably correspond to moods, and give a sentiment structure to a song, which is a mixture of moods.

1.1 Sentiment Mining
Sentiment mining is a field wherein we are interested not in the content of a document, but in its impact on a reader (in the case of text documents). We need to predict the sentiment behind it. The state of the art [1], [2] suggests that a considerable amount of work has been done in mining sentiments for purposes such as tracking the popularity of a product, analysing movie reviews, etc. Also, commercial enterprises use these techniques to get an idea of the public opinion about their products, so that they can improve in terms of user satisfaction. This approach boils sentiments down into two classes, viz., positive and negative. Another direction of work in sentiment mining is based on associating multiple sentiments with a document, which gives a more precise division into multiple classes (e.g., happy, sad, angry, disgusting, etc.). The work by Rada Mihalcea [3], [4] in sentiment analysis is based on multiple sentiments.

1.2 Music or Lyrics?
A song is composed of both melody and lyrics. Both of them signify its emotions and subjectivity. Work has been done on melody as well as on lyrics for mining sentiments from songs [5]. For instance, iPlayr [6] is an emotionally aware music player, which suggests songs based on the mood of the listener. Melody can, in general, invoke the emotions behind a song, but the actual content is reflected by its lyrics. As a line from the movie Music and Lyrics (2007) goes, "A melody is like seeing someone for the first time ... (Then) as you get to know the person, that's the lyrics ... Who they are underneath ...". Lyrics are important as they signify feelings more directly than the melody of a song does. Songs can be treated as mere text documents without losing much of their sentimental content. Also, a song is generally more coherent than most other kinds of documents, in that the emotions mostly do not vary within a song. Thus, it is logical to demand a crisp association of a song with a particular emotion. But the emotion classes themselves are highly subjective, which makes the concept of emotions vary from person to person. In the present work, we assume that everyone has the same notion of an emotion.
1.3 Latent Dirichlet Allocation [7]
Generally, no relation between the words within a document is assumed, making them conditionally independent of each other given the class (in classification). This assumption can also be seen in some of the earliest works on information retrieval by Maron [8] and Borko [9]. But in real-world documents this is not the case: to a large extent, words are related to each other in terms of synonymy, hypernymy, hyponymy, etc. Also, their co-occurrence in a document reflects these relations. They are the key to the hidden semantics in documents. To uncover these semantics, various techniques, both probabilistic and non-probabilistic, have been used, a few of which include Latent Semantic Indexing (LSI) [10], probabilistic LSI [11], and Latent Dirichlet Allocation (LDA). Among them, LDA is the most recently developed and widely used technique and has worked well in capturing these semantics. It is a probabilistic graphical model that is used to find hidden semantics in documents. It is based on projecting words (basic units of representation) to topics (groups of correlated words). Being a generative model, it tries to find the probabilities of features (words) generating data points (documents). In other words, it finds a topical structure in a set of documents, so that each document may be viewed as a mixture of various topics. Fig. 1 shows the plate representation of LDA. The boxes are plates representing replicates. The outer plate represents documents, while the inner plate represents the repeated choice of topics and words within a document. There are three levels to the LDA representation. The parameters α and β are corpus-level parameters, the variable θ is a document-level variable, and z, w are word-level variables. According to the generative procedure, we choose the number of words N as a Poisson with parameter ζ; θ is a Dirichlet with parameter vector α. The topics z_n are drawn from a multinomial distribution with θ as parameter. The Dirichlet distribution is the conjugate prior of the multinomial, which is the reason why it is chosen to represent documents.
Fig. 1. The Latent Dirichlet Allocation model
Each word is then chosen from p(w_n | z_n; β). The number of topics, k, is taken to be known and fixed. We need to estimate β, a k × v matrix, v being the vocabulary size. Each entry in this matrix gives the probability of a word representing a topic, β_{ij} = p(w^j = 1 | z^i = 1). From the graphical structure of LDA, we have (1), which is the joint distribution of θ, z and w given the parameters α and β:

p(\theta, z, w \mid \alpha, \beta) = p(\theta \mid \alpha) \prod_{n=1}^{N} p(z_n \mid \theta)\, p(w_n \mid z_n, \beta) .   (1)
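For illustration, here is a minimal Python sketch of this generative process. It assumes β is given as a k × v array whose rows sum to one; the function name and default values are hypothetical.

import numpy as np

def generate_document(alpha, beta, zeta=50, seed=0):
    # alpha: Dirichlet parameter vector of length k
    # beta:  k x v topic-word probability matrix (each row sums to 1)
    rng = np.random.default_rng(seed)
    k, v = beta.shape
    n_words = max(1, rng.poisson(zeta))   # N ~ Poisson(zeta)
    theta = rng.dirichlet(alpha)          # theta ~ Dir(alpha): the document's topic mixture
    words = []
    for _ in range(n_words):
        z = rng.choice(k, p=theta)        # z_n ~ Multinomial(theta)
        w = rng.choice(v, p=beta[z])      # w_n ~ p(w | z_n, beta)
        words.append(w)
    return words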
To solve the inference problem, we need to compute the posterior distribution of the hidden variables given a document. Its normalizing constant, the marginal likelihood of the document obtained by marginalizing over the hidden variables, is intractable to compute exactly. Equation (2) below shows this intractability, which is due to the coupling between θ and β:

p(w \mid \alpha, \beta) = \frac{\Gamma\!\left(\sum_i \alpha_i\right)}{\prod_i \Gamma(\alpha_i)} \int \left( \prod_{i=1}^{k} \theta_i^{\alpha_i - 1} \right) \left( \prod_{n=1}^{N} \sum_{i=1}^{k} \prod_{j=1}^{v} (\theta_i \beta_{ij})^{w_n^j} \right) d\theta .   (2)

For this reason, we move towards approximate inference procedures.

1.4 Variational Inference [12]
Inference means determining the distribution of the hidden variables in a graphical model, given the evidence. There are two types of inference procedures: exact and approximate. For exact inference, the junction tree algorithm has been proposed, which can be applied to directed as well as undirected graphical models. But in complex scenarios such as LDA, exact inference procedures do not work, as the triangulation of the graph required to apply the junction tree algorithm becomes difficult. This leads to using approximate methods for inference. One of the proposed approximate inference techniques is variational inference. The terminology is derived from the field of calculus of variations, where we try to find bounds on a given function, or on a transformed form of the function (e.g., its logarithm), whichever is concave/convex. A similar procedure is carried out for the probability calculations. When applied to LDA, variational inference simplifies the model by relaxing the dependencies between θ and z, and between z and w, just for the inference task. The simplified model is shown in Fig. 2, which can be compared with the original LDA model (Fig. 1).
2 The Overall System
Our goal is to first acquire lyrics, process them for further analysis and dimensionality reduction, and then find topics using LDA. We hypothesize that some of these topics correspond to moods. The overall system is shown in Fig. 3 and explained in the subsequent sections.
Fig. 2. Approximation of the LDA model for variational inference
Fig. 3. Block diagram showing the overall system
2.1 Data Acquisition
There are a number of websites on the internet that allow the submission of lyrics. People become members of these websites and submit lyrics for songs. To collect our dataset, one such website was first downloaded using wget, a Linux tool. Then each file was crawled and the lyrics were collected as separate text files. Let F = {f_1, f_2, · · ·, f_l} be the set of l such files, which are in raw format (see Fig. 4a) and need to be processed for further analysis.
Fig. 4. A snippet of the lyrics file for the song Childhood by Michael Jackson: (a) raw text, (b) processed text.
2.2 Preprocessing
For each song, fi in F , we carry out the following steps:
Tokenization. Here, we extract tokens from the songs, separated by whitespaces, punctuation marks, etc. We retain only whole words and remove all other noise (punctuation, parentheses, etc.). Also, the frequency of each token is calculated and stored along with f_i. For example, for the snippet in Fig. 4a, the tokens would be {have, you, seen, my, childhood, i, m, searching, · · ·, pardon, me}.

Stop-word Removal. Stop-words are words that appear very frequently and are of very little importance for the discrimination of documents in general. They can be specific to a dataset. For example, for a dataset of computer-related documents, the word computer would be a stop-word. In this step, standard stop-words such as a, the, for, etc. and words with high frequency (which are not in the standard list) are removed. For the data in Fig. 4a, the following are stop-words: {have, you, my, i, been, for, the, that, from, in, and, no, me, they, it, as, such, a, but}.

Morphological Analysis. The retained words are then analysed in order to map different words having the same root to a single one. For example, the words laugh, laughed, laughter and laughing should be combined into a single word, laugh. This is logical because all such words convey (loosely) the same meaning. Also, it reduces the dimensionality of the resulting dataset. For this, some standard morphological rules were applied (removing plurals, -ing rules, etc.) and exceptions (such as went - go, radii - radius, etc.) are dealt with using the exception lists available from WordNet [13]. For Fig. 4a, after removal of stop-words, the following words get modified in this step: {seen - see, childhood - child, searching - search, looking - look, lost - lose, found - find, understands - understand, eccentricities - eccentricity, kidding - kid}.

Dictionary Matching and Frequency Analysis. The final set of words is checked against the WordNet index files and non-matching words are removed. This filters each song completely and keeps only relevant words. Then, rare words (with low frequency) are also removed, as they too do not contribute much to the discrimination of document classes. For the Fig. 4a data, the following words are filtered out by dictionary matching: {m, ve}. This leaves us with a corpus of songs which is noise-free (see Fig. 4b) and can be analysed for sentiments. Let this set be S = {s_1, s_2, · · ·, s_l}. Each element in this set is a song, which contains words along with their frequencies in that song. For example, each entry in the song s_i is an ordered pair (w_j, n_{ji}), signifying a word w_j and the number of times n_{ji} it occurs in s_i. Along with this, we also have a set of v unique words, W = {w_1, w_2, · · ·, w_v} (the vocabulary), where w_i represents a word.
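For illustration, a rough Python sketch of such a preprocessing pipeline using NLTK and WordNet follows. It only approximates the rule-based morphological analysis and exception lists described above (NLTK's WordNet lemmatizer is used instead), and the function name, stop-word handling and frequency threshold are assumptions.

import re
from collections import Counter
from nltk.corpus import stopwords, wordnet        # requires the NLTK 'stopwords' and 'wordnet' data
from nltk.stem import WordNetLemmatizer

def preprocess_lyrics(raw_text, min_count=1, extra_stopwords=()):
    lemmatizer = WordNetLemmatizer()
    stops = set(stopwords.words('english')) | set(extra_stopwords)
    # tokenization: keep whole words only, drop punctuation and other noise
    tokens = re.findall(r"[a-z]+", raw_text.lower())
    # stop-word removal
    tokens = [t for t in tokens if t not in stops]
    # morphological analysis: map inflected forms to a common root
    tokens = [lemmatizer.lemmatize(lemmatizer.lemmatize(t), pos='v') for t in tokens]
    # dictionary matching: keep only words found in WordNet
    tokens = [t for t in tokens if wordnet.synsets(t)]
    # frequency analysis: drop rare words
    counts = Counter(tokens)
    return {w: n for w, n in counts.items() if n >= min_count}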
2.3 Topic Distillation
We now fix the number of topics as k and apply Variational EM for parameter estimation over S, to find α and β. We get the topical priors in the form of the
k × v matrix β. Each row in β represents a distinct topic and each column a distinct word. The values represent the probability of each word being a representative of the topic; in other words, β_{ij} is the probability of the j-th word w_j representing the i-th topic. Once β is in place, we solve the posterior inference problem. This gives us the association of songs to topics. We calculate the approximate values in the form of the Dirichlet parameter γ, established in Sec. 1.4. It is an l × k matrix, wherein the rows represent songs and the columns topics. Each value in γ is proportional (not equal) to the probability of association of a song with a topic. Thus, γ_{ij} is proportional to the probability of the i-th song s_i being associated with the j-th topic. Now we can use β to represent topics in terms of words, and γ to assign topics to songs. Instead of associating all topics with all songs, we need a more precise association that can form soft clusters of songs, each cluster being a topic. To achieve this, we impose a relaxation in terms of the probability mass we want to cover for each song. Let π be the probability mass we wish to cover for each song (this will be clearer in Sec. 4). First of all, normalize each row of γ to get exact probabilities and let the normalized matrix be γ̃. Then, sort each row of γ̃ in decreasing order of probability. Now, for each song s_i, starting from the first column (j = 1), add the probabilities γ̃_{ij} and stop when the sum just exceeds π. Let that be the r_i-th column. All the topics covered up to the r_i-th column are associated with the song s_i, with probability proportional to the corresponding entry in γ̃. So, for each song s_i, we get a set of topics T_i = {(t_{i1}, m_{i1}), (t_{i2}, m_{i2}), · · ·, (t_{ir_i}, m_{ir_i})}, where m_{ij} = γ̃_{ij} gives the (unnormalized) membership of song s_i to topic t_{ij}. For each song s_i, we re-normalize the membership entries in T_i and let the re-normalized set be T̃_i = {(t_{i1}, μ_{i1}), (t_{i2}, μ_{i2}), · · ·, (t_{ir_i}, μ_{ir_i})}, where

\mu_{ij} = \frac{m_{ij}}{\sum_{n=1}^{r_i} m_{in}} .

Thus, for each song s_i, we have a set of topics T̃_i, which also contains the membership of the song to each one of the topics present in T̃_i.
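A minimal sketch of this topic-distillation step is given below, assuming γ is available as an l × k NumPy array. Whether the topic at which the cumulative mass first exceeds π is included follows the reading above and is an interpretation; names are illustrative.

import numpy as np

def distill_topics(gamma, pi=0.6):
    gamma_norm = gamma / gamma.sum(axis=1, keepdims=True)    # row-normalise to probabilities
    soft_clusters = []
    for row in gamma_norm:
        order = np.argsort(row)[::-1]                        # topics in decreasing probability
        mass, chosen = 0.0, []
        for t in order:
            chosen.append((int(t), float(row[t])))
            mass += row[t]
            if mass > pi:                                    # stop once pi of the mass is covered
                break
        total = sum(m for _, m in chosen)
        soft_clusters.append([(t, m / total) for t, m in chosen])   # re-normalised memberships
    return soft_clusters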
3 Validation of Topics
Documents associated with the same topic must be similar, and thus each topic is nothing but a cluster. We have refined the clusters by filtering out the topics which had less association with the songs. This results in k soft clusters of the l songs. Let Ω = {ω_1, ω_2, · · ·, ω_k} be the set of k clusters, ω_i being a cluster containing n_i songs s_{i1}, s_{i2}, · · ·, s_{in_i}, with memberships u_{i1}, u_{i2}, · · ·, u_{in_i}, respectively. These memberships can be calculated using the sets {T̃_i}_{i=1}^{l}, which were formed at the end of Sec. 2.3. For validation of the clusters in Ω, let us have an annotated dataset which divides N different songs into J different classes. Let S̃ (⊂ S) be the set of songs that are covered by this dataset. Let C = {c_1, c_2, · · ·, c_J} be the representation of these classes, where c_i is the set of songs associated with class i. Let Ω̃ = {ω̃_1, ω̃_2, · · ·, ω̃_k} be the corresponding clusters containing only songs from S̃. To validate Ω, it is sufficient to validate Ω̃ against C.
We use two evaluation techniques [14, Chap. 16] for Ω̃, viz. Purity and Normalized Mutual Information, which are described in (3)-(6) below. In these equations, |· ∩̃ ·| is the weighted cardinality (based on membership) of the intersection between two sets.

\mathrm{purity}(\tilde{\Omega}, C) = \frac{1}{N} \sum_{i=1}^{k} \max_{j=1,\dots,J} |\tilde{\omega}_i \,\tilde{\cap}\, c_j| ,   (3)

\mathrm{NMI}(\tilde{\Omega}, C) = \frac{I(\tilde{\Omega}; C)}{[H(\tilde{\Omega}) + H(C)]/2} ,   (4)

where I is the mutual information, given by

I(\tilde{\Omega}; C) = \sum_{i=1}^{k} \sum_{j=1}^{J} \frac{|\tilde{\omega}_i \,\tilde{\cap}\, c_j|}{N} \log \frac{N \cdot |\tilde{\omega}_i \,\tilde{\cap}\, c_j|}{|\tilde{\omega}_i| \, |c_j|} ,   (5)

and H is the entropy, given by

H(\tilde{\Omega}) = - \sum_{i=1}^{k} \frac{|\tilde{\omega}_i|}{N} \log \frac{|\tilde{\omega}_i|}{N} , \qquad H(C) = - \sum_{j=1}^{J} \frac{|c_j|}{N} \log \frac{|c_j|}{N} .   (6)
Please note that these measures highly depend on the gold-standard we use for validation.
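As a sketch of how these measures could be computed for the soft clusters, the snippet below builds a membership-weighted contingency table as the weighted intersection and evaluates (3)-(6); the input format and function name are illustrative, not part of the paper.

import numpy as np

def soft_purity_nmi(memberships, labels, k, J):
    # memberships: per song, a list of (cluster, weight) pairs whose weights sum to one
    # labels:      class index (0..J-1) of each song in the annotated set
    N = len(labels)
    table = np.zeros((k, J))                        # weighted intersection cardinalities
    for pairs, c in zip(memberships, labels):
        for cluster, w in pairs:
            table[cluster, c] += w
    purity = table.max(axis=1).sum() / N            # Eq. (3)
    with np.errstate(divide='ignore', invalid='ignore'):
        p_wc = table / N
        p_w = table.sum(axis=1) / N                 # cluster masses
        p_c = table.sum(axis=0) / N                 # class sizes
        mi = np.nansum(p_wc * np.log(p_wc / np.outer(p_w, p_c)))   # Eq. (5)
        h_w = -np.nansum(p_w * np.log(p_w))         # entropies, Eq. (6)
        h_c = -np.nansum(p_c * np.log(p_c))
    return purity, mi / ((h_w + h_c) / 2)           # Eq. (4)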
4 Experimental Results
As mentioned earlier, we collected the dataset from LyricsTrax [15], a lyrics website that provided us with the lyrics of (l =) 66159 songs. After preprocessing, we got a total of (v =) 19508 distinct words, which make up the vocabulary. For different values of k, ranging from 5 to 50, we applied LDA over the dataset. The application of LDA fetches topics from the corpus in the form of the matrices β and γ and, as hypothesised, differentiates songs based on sentiments. It also tries to capture the theme behind a song. For example, for k = 50, we get 50 topics, some of which are shown in Table 1, represented by the top 10 words for each topic according to the β matrix. Topic 1 contains the words sun, blue, beautiful, etc. and signifies a happy/soothing song. Likewise, Topic 50 reflects a sad mood, as it contains the words kill, blood, fight, death, etc. Changing the number of topics gives a similar structure, and as k is increased from 5 to 50, the topics split and lose their crisp nature. In addition to these meaningful results, we also get some noisy topics, which have nothing to do with sentiment (we cannot, in general, expect to get only sentiment-based topics). This was the analysis based on β, which associates words with topics. From the posterior inference, we get γ, which assigns every topic to a document with some probability. We normalize γ by dividing each row by the sum of the elements of that row, and get the normalized matrix γ̃.
Table 1. Some of the 50 topics given by LDA. Each topic is represented by the top 10 words (based on their probability to represent topics).

Topic #  Words
1        sun, moon, blue, beautiful, shine, sea, angel, amor, summer, sin
2        away, heart, night, eyes, day, hold, fall, dream, break, wait
3        time, way, feel, think, try, go, leave, mind, things, lose
5        good, look, well, run, going, talk, stop, walk, people, crazy
6        man, little, boy, work, woman, bite, pretty, hand, hang, trouble
8        ride, burn, road, wind, town, light, red, city, train, line
9        god, child, lord, heaven, black, save, pray, white, thank, mother
11       sweet, ooh, music, happy, lady, morn, john, words, day, queen
12       sing, hear, song, roll, sound, listen, radio, blues, dig, bye
50       kill, blood, fight, hate, death, hell, war, pain, fear, bleed
We then sort each row of γ̃ in decreasing order of probability. Assume π = 0.6, i.e., let us cover 60% of the probability mass for each document. Moving in that order, we add the elements until the sum reaches π. As described in Sec. 2.3, we then find T_i for each song s_i and normalize it over the membership values to get T̃_i. To obtain Ω (clusters) from {T̃_i}_{i=1}^{l}, we need to analyse T̃_i for all songs s_i and associate songs to each topic appropriately. For validation, we crawled ExperienceProject [16] and downloaded a dataset of songs classified into 6 classes, viz., Happy, Sad, Angry, Tired, Love and Funny. From these songs, we pick the songs common to our original dataset S. This gives us S̃, which contains 625 songs, each associated with one of the 6 classes by the set C = {c_1, c_2, · · ·, c_6}. Each class c_j contains some songs from among the 625 songs in S̃. Intersecting Ω with S̃ gives Ω̃, consisting of only those songs that are in the validation dataset. Now we are ready to validate the clustering Ω̃ against the annotated set C using each of the measures mentioned in Sec. 3. We ran LDA for different values of k, ranging from 5 to 50. For each of these values, we computed the two evaluation measures, purity and NMI. These are summarised in Fig. 5a and Fig. 5b. It can be seen that the purity of the clustering increases monotonically with the number of topics. If we had one separate cluster for each song, the purity would be 1. Thus the variation in purity is as expected. On the other hand, NMI, being a normalized measure, can better be used to compare clusterings with different values of k. It first increases then decreases, having a maximum at k = 25, but not much variation is observed. The measures calculated above only partly evaluate the clusters. Having an actual look at the clusters provides better semantic information as to why certain songs were clustered together. It can be said that LDA captures the subjectivity to some extent, but it also lacks the ability to disambiguate confusing songs which contain similar words (e.g., the words heart, break, night, eyes, etc. could be part of happy as well as sad songs).
[Plots omitted; x-axis: k, y-axes: purity and NMI.]

Fig. 5. Validation results: (a) Purity, (b) Normalized Mutual Information.
5 Conclusion and Future Work
Topic models work on the basis of word co-occurrence in documents: if two words co-occur in a document, they should be related. LDA assumes a hidden semantic structure in the corpus in the form of topics. Variational inference is an approximate inference method that works by finding a simpler structure for (here) LDA. As far as songs are concerned, they reflect almost coherent sentiments and thus form a good corpus for sentiment mining. Sentiments are a semantic part of documents, which are captured statistically by LDA, with positive results. As we can intuitively hypothesize, a particular kind of song will have a particular kind of lyrical structure that decides its sentiment. We have captured that structure using LDA. Then we evaluated our results based on a dataset annotated by users of the website [16], which gives 6 sentiments to 625 songs. Compared to the 66159 songs
in our original dataset, this is a very small number. We need annotations for a larger number of songs to get better validation results. LDA follows the bag-of-words approach, wherein, given a song, the sequence of words is ignored. In the future, we can look at sequential models such as HMMs to capture more semantics. Negation can also be handled using similar models. Also, (sentimentally) ambiguous words such as heart, night, eyes, etc., whose happy or sad character depends on the context of the song, can be disambiguated using the results themselves; in other words, if a word appears in a "happy" topic, it is a "happy" word. SentiWordNet [17], which associates positive and negative sentiment scores with each word, can be used to find a binary-class classification of the songs. But that too requires extensive word sense disambiguation. Using the topics, we can improve SentiWordNet itself, by giving sentiment scores to words that have not been assigned one, based on their co-occurrence with those that have one. In fact, a new knowledge base can be created using the results obtained. Acknowledgements. We would like to thank Dr. Indrajit Bhattacharya, Indian Institute of Science, Bangalore for his magnanimous support in shaping up this paper.
References

1. Pang, B., Lee, L., Vaithyanathan, S.: Thumbs up? Sentiment Classification using Machine Learning Techniques. In: Proceedings of the ACL 2002 Conference on Empirical Methods in Natural Language Processing, pp. 79-86. ACL, Stroudsburg (2002)
2. Liu, B.: Opinion Mining. In: Web Data Mining. Springer, Heidelberg (2007)
3. Mihalcea, R.: A Corpus-based Approach to Finding Happiness. In: AAAI 2006 Symposium on Computational Approaches to Analysing Weblogs, pp. 139-144. AAAI Press, Menlo Park (2006)
4. Strapparava, C., Mihalcea, R.: Learning to Identify Emotions in Text. In: Proceedings of the 2008 ACM Symposium on Applied Computing, pp. 1556-1560. ACM, New York (2008)
5. Chu, W.R., Tsai, R.T., Wu, Y.S., Wu, H.H., Chen, H.Y., Hsu, J.Y.J.: LAMP, A Lyrics and Audio MandoPop Dataset for Music Mood Estimation: Dataset Compilation, System Construction, and Testing. In: Int. Conf. on Technologies and Applications of Artificial Intelligence, pp. 53-59 (2010)
6. Hsu, D.C.: iPlayr - an Emotion-aware Music Platform. Master's thesis, National Taiwan University (2007)
7. Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet Allocation. J. Mach. Learn. Res. 3, 993-1022 (2003)
8. Maron, M.E.: Automatic Indexing: An Experimental Inquiry. J. ACM 8(3), 404-417 (1961)
9. Borko, H., Bernick, M.: Automatic Document Classification. J. ACM 10(2), 151-162 (1963)
10. Deerwester, S., Dumais, S.T., Furnas, G.W., Landauer, T.K., Harshman, R.: Indexing by Latent Semantic Analysis. Journal of the American Society for Information Science 41(6), 391-407 (1990)
11. Hofmann, T.: Probabilistic Latent Semantic Indexing. In: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 50-57 (1999)
12. Jordan, M.I., Ghahramani, Z., Jaakkola, T.S., Saul, L.K.: An Introduction to Variational Methods for Graphical Models. Mach. Learn. 37(2), 183-233 (1999)
13. Miller, G.A., Beckwith, R., Fellbaum, C., Gross, D., Miller, K.: WordNet: An Online Lexical Database. Int. J. Lexicography 3, 235-244 (1990)
14. Manning, C.D., Raghavan, P., Schütze, H.: Introduction to Information Retrieval. Cambridge University Press, Cambridge (2008)
15. LyricsTrax, http://www.lyricstrax.com/
16. ExperienceProject, http://www.experienceproject.com/music_search.php
17. Esuli, A., Sebastiani, F.: SENTIWORDNET: A Publicly Available Lexical Resource for Opinion Mining. In: Proceedings of the 5th Conference on Language Resources and Evaluation, pp. 417-422 (2006)
Analyzing Parliamentary Elections Based on Voting Advice Application Data

Jaakko Talonen and Mika Sulkava

Aalto University School of Science, Department of Information and Computer Science, P.O. Box 15400, FI-00076 Aalto, Finland
[email protected],
[email protected] http://www.cis.hut.fi/talonen
Abstract. The main goal of this paper is to model the values of Finnish citizens and the members of the parliament. To achieve this goal, two databases are combined: voting advice application data and the results of the parliamentary elections in 2011. First, the data is converted to a high-dimensional space. Then, it is projected onto two principal components. The projection allows us to visualize the main differences between the parties. The value grids are produced with a kernel density estimation method without explicitly using the questions of the voting advice application. However, we find meaningful interpretations for the axes in the visualizations with the analyzed data. Subsequently, all candidate value grids are weighted by the results of the parliamentary elections. The result can be interpreted as a distribution grid for Finnish voters' values.

Keywords: Parliamentary Elections, Visualizations, Principal Component Analysis, Kernel Density Estimation, Missing Value Imputation.
1 Introduction
The parliamentary elections in Finland were held on 17 April 2011. In the elections, 200 members of parliament (MPs) were elected for a four-year term. Every Finnish citizen who has reached the age of 18 no later than on the day of the elections is entitled to vote. The number of people entitled to vote was 4 387 701 and the voting turnout was 70.5% [1]. Helsingin Sanomat (HS), the biggest newspaper in Finland, published its voting advice application (VAA) [2] about a month before the elections. Questions on different topics were available for the candidates to answer in advance. The VAA provides a channel for candidates to state their opinions on various topical issues. Voters express their views on the same issues. The output of the VAA is a ranked list of candidates. On 6 April 2011 HS released the data of the candidates' answers [3]. The aim of this research is to model the opinions, ideals and values (later just values in this paper) of Finnish citizens and the MPs. Two databases were combined: the VAA data provided by HS [3] and the results of the parliamentary
elections 2011 [1]. These data are explained in more detail in Section 2. The methodology is presented in Section 3. In Section 4 the data is preprocessed and converted to a high-dimensional space. Then it is reduced to two dimensions by Principal Component Analysis (PCA). Next, the value grids are weighted by the results of the parliamentary elections 2011 [1]. The value grid is produced with a kernel density estimation method without explicitly using the questions of the VAA. Thus, the visualization results presented in this paper do not depend on what was asked and which answer options were available. As a result, a distribution grid for Finnish voters' values is presented. Finally, more visualizations and possibilities for use are introduced. A discussion in Section 5 concludes this paper.
2 Voting Advice Application Data and Parties in Finland
Questions with a large amount of factual information were selected for the VAA [2]. These questions were divided into subgroups according to topic: (general) questions 1-5, pensions 6-8, economy 9-11, taxes 12-15, defense 16-17, foreign countries 18-21, domestic 22-26, and localities 27-30. In question 31, the candidate was asked to select three parties for the desired government. Question 21 asked which three countries should be Finland's friends if Finland were on Facebook. So a total of 35 answers were expected from each candidate. The list of registered parties is presented in Table 1. Long and precise questions enhance the reliability of the answers. It can be assumed that candidates spend more time on the questionnaire than, e.g., citizens do in opinion polls. For example, one of the shortest questions was "Differences in income have increased very fast in Finland after the mid-1990s. How should this be approached?". The answer options were:
– The differences in the incomes should be reduced.
– The differences in the incomes have to be reduced slightly.
– The disparity of incomes is now at an appropriate level.
– Income inequalities can increase moderately.
– Income inequalities should grow freely.
In the VAA it was also possible to give weights to the questions depending on how important the candidate considered them. The options were small, medium and great importance. A typical way to convert qualitative data to quantitative data is to give a numerical value to each answer in each question [4]. Surely, some results can be obtained by analysing this type of data, but the question is whether we can really trust the results. Should "The differences in the incomes should be reduced." be quantized as one and "The differences in the incomes have to be reduced slightly." as two, or maybe three? In traditional analysis, the question importances and multiple-choice questions are usually omitted. In this paper, a solution for proper VAA data preprocessing is introduced.
Table 1. The list of registered parties and an approximation of the party type *(left/right) [5,6]. Parties are listed by the results of the parliamentary elections 2011 (seats won). Names in English are unofficial translations [1]. **a new member of parliament from the Åland Coalition is added to RKP.

Abbreviation  Party                                      Seats-07  Seats-11  left/right*
KOK           National Coalition Party                   50        44        7.2
SDP           The Finnish Social Democratic Party        45        42        3.5
PS            True Finns                                 5         39        6.4
KESK          Centre Party of Finland                    51        35        5.8
VAS           Left-Wing Alliance                         17        14        2.4
VIHR          Green League                               15        10        3.7
RKP           Swedish People's Party in Finland          10**      10**      6.3
KD            Christian Democrats in Finland             7         6         7.2
SKP           Communist Party of Finland
SEN           Finnish Seniors Party
KTP           Communist Workers' Party
STP           Finnish Labour Party
IPU           Independence Party
KA            For the Poor
PIR           Pirate Party of Finland
M2011         Change 2011
VP            Liberty Party - Future of Finland
2.1 Data Preprocessing
For our experiments, the qualitative data was first converted to a matrix X,

X = \begin{pmatrix} x_{1,1} & x_{1,2} & \cdots & x_{1,k} \\ x_{2,1} & x_{2,2} & \cdots & x_{2,k} \\ \vdots & \vdots & & \vdots \\ x_{n,1} & x_{n,2} & \cdots & x_{n,k} \end{pmatrix} ,   (1)

where n is the number of candidates and k is defined as the sum of all answer possibilities over all questions Q_s, k = \sum_{s=1}^{n_q} \#\mathrm{options}_s, where n_q is the number of questions. A matrix element x_{i,j} is zero or N_s. All answer options are equally weighted if N_s = 1 ∀s. In the experiments, different definitions for the variable N_s were tested.
The candidate's selection of the importance of each answer was stored in a weight matrix W (n × k), where a matrix element w_{i,j} is 1 − a with a ∈ [0, 1] (the candidate considers his/her answer not so important), 1 (default), or 1 + b with b ≥ 0 (important). With a larger b, radical candidates are projected far from the center by the questions which really divide opinions between the candidates. For example, if a = b = 0.1, candidates with strong opinions are not clearly separated from the other candidates; in practice this means that we lose important information about the importance of the questions. A candidate answer matrix for further analysis is defined by the elementwise product

X_b = X \cdot W .   (2)
It is possible to find out which questions are important in general by selecting proper values for a and b. Finally, the matrix X_b is mean-centered before further analysis. More details about the selection of the variable N_s and the parameters a and b in this study are given in Section 4.
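A minimal sketch of this conversion is given below. It assumes the answers are available as (question, option) index pairs per candidate and uses the N_s = 1/√(#options_s) weighting eventually chosen in Section 4.1; the input format and names are illustrative.

import numpy as np

def build_answer_matrix(answers, importance, option_counts, a=0.5, b=0.5):
    # answers:       per candidate, a list of (question, option) index pairs
    # importance:    (candidate, question) -> 'low' | 'default' | 'high'
    # option_counts: number of answer options per question
    offsets = np.concatenate(([0], np.cumsum(option_counts)))   # column offset per question
    n, k = len(answers), int(offsets[-1])
    X = np.zeros((n, k))
    W = np.ones((n, k))
    scale = {'low': 1 - a, 'default': 1.0, 'high': 1 + b}
    for i, cand in enumerate(answers):
        for q, opt in cand:
            col = int(offsets[q]) + opt
            X[i, col] = 1.0 / np.sqrt(option_counts[q])          # N_s weighting
            W[i, col] = scale[importance.get((i, q), 'default')]
    Xb = X * W                                                   # elementwise product, Eq. (2)
    return Xb - Xb.mean(axis=0)                                  # mean-centred for PCA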
3 Methods

3.1 Principal Component Analysis
There is often redundancy in high-dimensional data. However, none of the measurements is completely useless; each of them delivers some information. The solution for this is dimensionality reduction [7]. Feature extraction transforms the data from the high-dimensional space to a space of fewer dimensions. The data transformation may be linear, as in principal component analysis (PCA), but many nonlinear dimensionality reduction techniques also exist [8]. PCA is a useful tool for finding relevant variables for the system and model. It is a linear transformation to a new lower-dimensional coordinate system that retains as much as possible of the variation. It is selected as the main method in this paper because its results are used in the subsequent analysis. In addition, orthogonal axes make the visualizations easier to read. The PCA scores are range scaled [9] from −100 to 100 as

score_i' = \frac{score_i - \min(score_i)}{\max(score_i) - \min(score_i)} \cdot 200 - 100 ,   (3)

where i is the component number. Scaling the score values makes the value grid creation in Section 3.2 easier.
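A short sketch of the projection and range scaling, assuming the mean-centred candidate answer matrix X_b from Eq. (2); an SVD-based PCA is used here rather than any particular library routine.

import numpy as np

def scaled_pca_scores(Xb, n_components=2):
    # PCA of the (already mean-centred) data matrix via SVD
    _, _, Vt = np.linalg.svd(Xb, full_matrices=False)
    scores = Xb @ Vt[:n_components].T            # projections on the first components
    # range-scale each component to [-100, 100], cf. Eq. (3)
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    return (scores - lo) / (hi - lo) * 200 - 100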
3.2 Kernel Density Estimation and Missing Value Imputation
Kernel density estimation is a fundamental non-parametric data smoothing technique in which inferences about the population are made based on a finite data sample [10]. In our experiments, several matrices (grids) were defined for the densities. A value grid for each candidate is defined as
B_c = \begin{pmatrix} h_{1,1} & h_{1,2} & \cdots & h_{1,s} \\ h_{2,1} & h_{2,2} & \cdots & h_{2,s} \\ \vdots & \vdots & & \vdots \\ h_{s,1} & h_{s,2} & \cdots & h_{s,s} \end{pmatrix} ,   (4)

where h_{i,j} corresponds to the candidate's probability for a certain opinion (i, j), and s is selected to get decent accuracy for the analysis. In this paper the PCA scores were scaled, see Eq. (3). Therefore the best choice for the size of each cell in the grid is 1 × 1. Then the grid size is defined as

s = \max(score_{\cdot,1}) - \min(score_{\cdot,2}) + 1 .   (5)

A similar matrix B_p for each party, or for an individual candidate without a party, is defined in a similar way as in Eq. (4). A value grid for each candidate is constructed from the PCA score results by a two-dimensional Gaussian function as

B_c(i,j) = A \exp\!\left( - \left( \frac{(i - i_0)^2}{2\sigma_i^2} + \frac{(j - j_0)^2}{2\sigma_j^2} \right) \right) ,   (6)

where the coefficient A is the amplitude and (i_0, j_0) is the center (in our case score_1 and score_2). The distribution volume for each candidate is scaled to one. In practice this means that if the center is near the border of the value grid B_c, the coefficient A is larger. After all available data is used, the missing data m is approximated by the distribution of the party value grids as

B_{c=m} = v \cdot B_p ,   (7)

where v = 1 / \sum_{i=1}^{s} \sum_{j=1}^{s} B_{p=m_{party}}(i,j). If a missing candidate is not a member of any party, the sum of the distribution matrices,

B_{tot} = \sum_{i=1}^{\#parties} B_{p_i} = \sum_{i=1}^{n} B_{c_i} ,   (8)

is used as B_p in Eq. (7). So, all candidates have some kind of distribution in the value grids B_c (Gaussian, party distribution or total distribution) with a volume of one.
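As a sketch, the value grid of a single candidate and the imputation for a candidate without answers could be built as follows, for a 201 × 201 grid and, e.g., the σ² = 64 used later in the experiments (Sec. 4.2); the names are illustrative.

import numpy as np

def candidate_grid(score1, score2, sigma2=64, size=201):
    # 2D Gaussian centred at the candidate's scaled PCA scores, volume normalised to one (Eq. (6))
    coords = np.arange(size) - (size - 1) // 2            # grid coordinates -100..100
    gi, gj = np.meshgrid(coords, coords, indexing='ij')
    g = np.exp(-((gi - score1) ** 2 + (gj - score2) ** 2) / (2.0 * sigma2))
    return g / g.sum()

def impute_missing(reference_grid):
    # candidate without answers: use the party grid B_p (or B_tot if the candidate
    # has no party), re-normalised to volume one (Eq. (7))
    return reference_grid / reference_grid.sum()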
4 Experiments and Results
The data mining process varies depending on the problem at hand. Many decisions, for example in parameter selection, are based on intermediate results. In this particular context the flow of the experiments is presented in Fig. 1. Different candidate classifications and visualizations by histograms, scatter plots, factor analysis, PCA, Self-Organizing Maps (SOM) and Multidimensional Scaling (MDS) were published after the data was released [11,12]. In those analyses a lot of information is simply dropped. In addition, the results are not combined with voting application data. In this paper, new visualizations and guidelines for future research are presented.
Fig. 1. The six stages of parliamentary data analysis
4.1 Preprocessing of Voting Advice Application Data
HS sent an invitation to all candidates, and 469 persons did not answer the VAA [2]. In two of the questions a candidate had to choose three options, so a total of 29 · 1 + 2 · 3 = 35 answers were expected from each candidate. Although the instructions were clear, 39 candidates gave more answers than were asked for. 1682 candidates gave 35 answers. 85 candidates did not answer questions 33-34. 26 candidates gave 21-32 answers. 14 candidates gave only 1-20 answers. For several parameters, some assumptions had to be made. One question is how many answers are needed to include a candidate in the further analysis. If many candidates without data are taken into the analysis, the PCA results (other candidates' scores, latents, loadings) are biased: these candidates with missing data would have approximately the same scores even if they were from different parties. In our experiments, the candidates who gave more than 20 answers were selected. This selection is based on the distribution of the number of answers given by the candidates. Only 14 of the 1834 candidates were omitted, and the selected candidates have more than 60% of the answer information. In the 31 questions there were in total k = 173 answer options, see Eq. (1). The matrix X in the new higher-dimensional space has n = 1820 rows and k = 173 columns. The next question is how to select a and b for the matrix W in Eq. (2) and what "high" or "low" importance mean. For example, when the parameters were selected as a = 0.5 and b = 2, the PS, VAS and VIHR parties were separated from the other
parties. This example result can be explained by the PCA loadings. In the first component, VAS and VIHR were separated because the candidates in these parties have the following strong opinions (high absolute loadings): a negative attitude to nuclear power, the differences in incomes should be reduced, and development co-operation should be improved. PS was separated from the other parties because their candidates are against giving adoption rights to gay couples and think that the role of the Swedish language in Finland should be reduced. The grades of membership functions are closely associated with linguistic terms such as very low, low, medium (0.5), high and very high; the ranges are over the interval [0, 1] [13]. In our case the elements of the matrix W are over the interval [0, ∞[. Therefore, in our experiments, a good selection for the parameters is a = b = 0.5. Experiments indicated that the party positions were placed relatively similarly even when a and b were varied. Moderate values for a and b are justified because our goal is to visualize a distribution of the values, not to separate radical candidates from the others. The number of answer options varies between the questions and therefore different weights for the answers were tested. If weighting is not used, N_s = 1 ∀s in Eq. (1). Questions with two options then get less weight in the PCA projection. Next, the answers of each question were weighted with the number of options in the question, i.e., N_s is the inverse of the number of options in each question in Eq. (1). It is evident that two variables are not enough to describe the variation measured by the 31 questions creating 173 different answers. The data was centered before PCA and after each weighting experiment. The coefficient of determination for the first principal component was 16.5% and for the second 9.48%. The problem is that questions with many options were underweighted, because there are many ways to approach them. Therefore the loadings on the first and the second component mainly depended on the following questions:

– In 2009, Parliament approved a law that allows adoption for registered gay and lesbian couples within the family. Should gay and lesbian couples have the right to adopt children outside the family?
– In the spring of 2010, the government granted two nuclear power plant licenses. The third candidate, Fortum, was left without, but hopes the next government will authorize the Loviisa power plant to replace two old reactors. Should the new government grant a license?
– In elementary school, there are two compulsory foreign languages, one of which must be the second national language. Should the study of the second national language be optional?

The next suitable experiment on the selection of the variable N_s should lie between the inverse of the number of options in each question and one. So, the inverse of the square root of the number of options in each question is justified (Eq. (1)). The coefficient of determination for the first principal component was 12.7% and for the second 7.51%. Answers to the three questions above still had meaningful loadings (although not as large as with N_s = 1/#answers_s). These questions Q_s have
often been discussed in the media and they divide some parties into different camps. Although the coefficients of determination were smaller, there were more meaningful PCA loadings in the model. Therefore, N_s = 1/\sqrt{\#\mathrm{answers}_s} was used in the subsequent experiments.

4.2 Values of Candidates and Parties
A value grid B_p for each party was computed. The cell size in the grid was 1 × 1, so the grid size is 201 × 201 (see Eq. (5)). In total 1820 PCA scores were inserted into the party value grids using a two-dimensional Gaussian curve as follows. A single candidate was inserted into the grid with mean μ(x) = score_1, μ(y) = score_2. In our experiments a variance σ² = σ_i² = σ_j² was used. The value of the variance σ² was selected so that it would produce clear visualizations. In practice this means that the candidates of the same party were mainly joined together in the value grid B_p. With σ² = 0 the visualization looks like the original PCA score plot. With higher values of σ², a wider group of citizens is assumed to share the candidate's values. With σ² = ∞ a flat surface is produced. With the selected scale and grid size, clear results were achieved with 4² < σ² < 12², so we used σ² = 64 in our experiments. Missing data was approximated by the distribution of the party value grid B_p. The head of The Finnish Social Democratic Party (SDP), Jutta Urpilainen, for example, did not answer the VAA. In this study, it is supposed that her opinions and values have the same distribution as her party, B_{c=Urpilainen} = v · B_{p=SDP}, where v = 1/ΣΣ B_{p=SDP}. Most probably her values are the same as the mode of B_{p=SDP}. The minimum and maximum values in the principal components were −1.96 and 1.91 (first component) and −1.74 and 1.68 (second component). These axes were scaled, see Eq. (3). This means that the second axis is stretched by 12.7% compared to the first one. Therefore, all contour plots are scaled to 1:1.127. The sum of value grids B_tot (see Eq. (8)) is visualized as a contour plot in Fig. 2. All value distributions were multiplied by 2πσ² to enhance the readability of the labels in the contour plots. In the visualizations and in the analysis of the principal component coefficients it seems that the first axis depicts the economic left-wing vs. right-wing classification (cf. Table 1). The second axis seems to reflect, at least partially, liberal vs. conservative opinions. The same interpretations of the axes were obtained in the previous candidate classifications [6,11,12]. Some remarks can be made; for example, the candidates of SKP, VAS, KTP and STP had similar opinions. The candidates of KOK had the most different opinions compared to these candidates. An interesting remark is that they got the most seats, see Table 1. This conclusion is based on Fig. 2 and the first and second coefficients of determination of the principal components, without committing to what is left/right or liberal/conservative. The bimodal candidate distribution has two modes, at (40, 0) and (−22, 23). In this paper, all votes were combined with the candidate value grids B_c and completely new visualizations are created.
Fig. 2. The values of the candidates. The visualization can be used to get an estimate of how the median values of the candidates in each party (labels) are related to each other. The numbers in the contour lines describe the density ρ of candidates. V = 2πσ² ΣB_tot = 2315.
4.3 Values of Finnish Voters
It is assumed that a citizen gave his/her vote based on his/her values, by selecting a candidate with the same opinions and values. The values of the Finnish voters are defined as a sum of the candidate value distributions weighted by votes, B_Finn = Σ_{c=1}^{2315} votes_c · B_c. The sum of these distributions is an estimate of the Finnish voters' values, illustrated in Fig. 3. The modes of the party supporters can be classified into three groups; see the party labels situated near the modes of B_Finn. Finnish citizens who voted for VAS, VIHR and SDP were classified as left-wing, see Table 1. PS, KESK, KD, SEN and M2011 supporters are more conservative than other citizens, based on the testing of coefficients a and b in Section 4.1. Those who voted for KOK and RKP are more liberal and right-wing than PS and KESK voters. Finnish citizens who have the same opinions and values as KD more probably voted for either PS or KESK than for KD. KESK suffered the heaviest defeat in the elections, so it can be speculated that some citizens who are not very religious but are conservative voted for PS rather than KD.
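Continuing the sketch above, the voter-value estimate is just a votes-weighted sum of the candidate grids; candidate_grids and votes are illustrative names, not from the original study.

```python
import numpy as np

def voters_grid(candidate_grids, votes):
    """B_Finn = sum over candidates of votes_c * B_c."""
    B_finn = np.zeros_like(candidate_grids[0], dtype=float)
    for B_c, v in zip(candidate_grids, votes):
        B_finn += v * B_c
    return B_finn
```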
Fig. 3. The values of the Finnish voters. This visualization can be used to get an estimate of how the opinions of one party's supporters are related to those of the supporters of the other parties. Each party label is placed at the mode of its supporters; labels are shown in a smaller font if the density of the party's supporters at the mode is lower than that of some other party. The numbers in the contour lines describe the density of citizens in thousands.
4.4 The Values of the Parliament
The 200 MPs' values and the value distribution of each party in the Parliament are shown in Fig. 4. The distributions are not sums as in the earlier contour plots. The increase in the number of seats for the True Finns (PS) from five to 39 is a change of historic proportions. It is expected that PS picked up votes from the voters of other parties with the same opinions, such as KESK. KESK has supporters with rather different values. It seems to have lost liberal right-wing supporters to KOK, liberal left-wing supporters to SDP and conservative supporters to PS. KESK suffered the heaviest defeat and lost 16 seats, see Table 1. The form of the new government was an open question at the beginning of May 2011. Speculations about the next government have been published in the local and international media [14]. Mostly majority governments KOK + SDP + PS, KOK + SDP + PS + RKP and KOK + SDP + VIHR + RKP + KD have been mentioned. Visualization of the values of the Parliament can be used to support the formation of the new coalition government.
Fig. 4. MPs' values. Each party label corresponds to one politician. The president of each party is visualized with a larger font. The numbers in the contour lines describe the density of politicians in each party. MPs whose values are approximated (labels in parentheses) are placed close to the point of max(B_p).
5 Discussion
A huge number of new possibilities for data mining in the political space was opened by combining the two databases analyzed in this study. Using the analysis methodology, it is possible to answer, e.g., the following questions: How well do the values of the citizens fit the MPs' values? Would they fit better with some other electoral system? Is there a difference in the value distribution between young and old candidates? Areal information about the value distribution can be obtained by limiting the voting advice application data by electoral district. The results for each candidate are available by city or even by polling station, so citizens' values can, for example, be visualized on a map. Differences in the value distributions between polling stations, such as different parts of a town, can be used to explain other measures, such as housing prices, criminal statistics, unemployment, etc. We are working on an important research area concerning everyone in democratic countries. In this paper, the dynamics (candidates → results → parliament → government) of only one election were investigated. Political data, election
results, votes in the parliament, etc. are produced regularly. By combining them with the results presented in this paper, much more complex analyses become possible. A fundamental way to argue against our approach could be the quote "The best argument against democracy is a five-minute conversation with the average voter." – Winston Churchill. True or false, in this article we presented preprocessing and analysis methods for VAA data and visualizations of the value distribution of the political field that seemed to be relatively insensitive to small changes in the data. A number of new possibilities for future experiments in political data mining were also presented.
References
1. The elections website of the Ministry of Justice, http://www.vaalit.fi/ (retrieved April 2011)
2. HS Voting Advice Application (in Finnish), http://www.vaalikone.fi/ (retrieved April 2011)
3. HS blog (in Finnish), http://blogit.hs.fi/hsnext/hsn-vaalikone-on-nyt-avointa-tietoa (retrieved April 2011)
4. xls-dataset - A typical way to convert qualitative data to quantitative, http://www.loitto.com/tilastot/hsvaalikone11/HS-vaalikone2011_num2.xls (retrieved May 2011)
5. Manow, P., Döring, H.: Electoral and mechanical causes of divided government in the European Union. Comparative Political Studies 41(10), 1349 (2008)
6. National election in Finland (March 18, 2007), http://www.parlgov.org/stable/data/fin/election-parliament/2007-03-18.html (retrieved May 2011)
7. Belsley, D., Kuh, E., Welsch, R.: Regression Diagnostics: Identifying Influential Data and Sources of Collinearity, vol. 546. Wiley-Interscience, Hoboken (2004)
8. Hair, J., Anderson, R., Tatham, R., Black, W.: Multivariate Data Analysis, 5th edn. Prentice-Hall, Englewood Cliffs (1998)
9. Pyle, D.: Data Preparation for Data Mining. Morgan Kaufmann, San Francisco (1999)
10. Scott, D.W.: Multivariate Density Estimation. Wiley Online Library (1992)
11. Data Rangers example (in Finnish), http://extra.datarangers.fi/vaalit/ (retrieved May 2011)
12. Candidate Map (in Finnish), http://nypon.fi/Vaalit2011/ (retrieved May 2011)
13. Zadeh, L.: The concept of a linguistic variable and its application to approximate reasoning–I. Information Sciences 8(3), 199–249 (1975)
14. Daley, S., Kanter, J.: Finland's Turn to Right Sends Shivers Through Euro Zone. New York Times (April 21, 2011)
Integrating Marine Species Biomass Data by Modelling Functional Knowledge

Allan Tucker¹ and Daniel Duplisea²

¹ School of Information Systems, Computing and Maths, Brunel University, Uxbridge, Middlesex, UK, UB8 3PH
[email protected]
² Fisheries and Oceans Canada, Institut Maurice Lamontagne, Mont Joli, Quebec, Canada, G5H 3Z4
[email protected]

Abstract. Ecosystems and their underlying foodwebs are complex. There are many hypothesised functions that play key roles in the delicate balance of these systems. In this paper, we explore methods for identifying species that exhibit similar functional relationships between them using fish survey data from oceans in three different geographical regions. We also exploit these functionally equivalent species to integrate the datasets into a single functional model and show that the quality of prediction is improved and the identified species make ecological sense. Of course, the approach is not only limited to fish survey data. In fact, it can be applied to any domain where multiple studies are recorded of comparable systems that can exhibit similar functional relationships.
1 Introduction

The behaviour of ecosystems within the world's oceans is massively complex. The delicate balance of species relies on a vast interconnected set of interactions. What is more, these interactions may be played out by different species depending upon the geographical region. In marine ecology, there are many hypothesised functions that play key roles in this balance of ecosystems [5]. For example, one such function is known as the 'wasp waist'. Here, there are certain species at the bottom of the foodweb (known as 'producers') and other species at the top ('consumers'). The wasp waist is a set of species that exists at a critical junction in the foodweb, somewhere between the producers and the consumers [3]. We can view the wasp waist species as rendering the consumers conditionally independent of the producers. It is likely that in different geographical regions the wasp waist species will vary but the general functional relationship will be the same, as shown in the two foodwebs in Figure 1a-b. Figure 1c shows the abstracted functional model. In this paper, we explore methods for identifying variables in different datasets that capture a similar functional relationship between them. We demonstrate how this sort of analysis can be used to identify marine species that play similar roles within their complex foodwebs in different geographical regions. We also
Fig. 1. The functional relationship "Wasp Waist" as found in populations in the Northwest Atlantic (a) and the Northeast Atlantic (b). A number of nodes represent the larger species in the Northwest, which determine the population of the wasp waist species, capelin, which in turn determines the populations of the smaller species. The same functional structure is applicable to different species for the Northeast Atlantic wasp waist, sandeel. (c) The abstracted set of functional relationships for the Wasp Waist.
demonstrate how identifying similar functions enables different datasets to be integrated to build more robust functional models. In other words, once we have identified groups of variables in different datasets with similar functions, we can explore the potential of combining these data to construct models of the function itself (e.g. the wasp waist) rather than of the specific set of variables (e.g. the species specific to one region). Of course the idea of modelling functional relationships does not only apply to ecological domains; it is applicable to any domain where certain known or hypothesised functions are available along with multiple datasets likely to contain that relationship. For example, the molecular biological function of regulatory genes may be similar within different biological systems, whilst the precise genes are still unknown. The process of identifying variables with similar functional relationships is similar to the feature selection [10] problem, but rather than selecting features based upon a decision boundary, they are selected based upon a predefined function (either proposed by an expert or learnt from another dataset). In [15], predefined functions are explored by investigating weights in hidden nodes of neural network models. Our work differs from this in that it automatically identifies the members of these groups and uses them to integrate multiple datasets. In addition, we use Bayesian network models [9], as these facilitate transparency in the model, naturally handle uncertainty, and allow the exploitation of prior expertise. Therefore, we learn models like that in Figure 1c where nodes represent abstracted concepts over several datasets. There is some research into using Bayesian networks for ecological modelling [13] and in particular for modelling fish populations (e.g. [8]). There is also considerable literature on integrating heterogeneous data within the data-warehousing community, including for
environmental data [1], but no exploration of integrating or comparing different variables under a single function as we do here with species. We report results from simulated data and real surveyed fish abundance data collected from the North West Atlantic and the North Sea. In the next section we describe the algorithm for identifying sets of variables with similar functional relationships, as well as the algorithm for integrating different datasets through the use of a functional model. Section 3 documents the results of an empirical analysis on synthetic data and on real-world data from three marine surveys in different oceans.
2 Methods

2.1 Functional Variable Search
The functional model search algorithm (which is fully documented in Algorithm 1) uses a simulated annealing approach [12] to search for an optimal combination of variables that fit the given function. Here we demonstrate the approach using a Bayesian network model, where the given function is in the form of a given Bayesian network structure, BN1, and set of variables, vars1, that is parameterised from a dataset, data1. The Bayesian network exploits the factorisation P(X) = ∏_{i=1}^{N} P(X_i | Par(X_i)), where Par(X_i) denotes the parents of node i in the graph structure. This model is then used to search for the variables in another dataset, data2, that best fit it. The algorithm outputs the set of variables that best fit the given model. Of course, BN1 does not have to be parameterised from a dataset and can be supplied as a prior based upon expertise.
Algorithm 1. The Functional Model Search Algorithm
1: Input: t_start, iterations, data1, data2, vars1, BN1
2: Parameterise Bayesian network BN1 from data1
3: Generate randomly selected variables in data2: vars2
4: Use vars2 to score the fit with the selected model BN1: score
5: Set bestscore = score
6: Set initial temperature: t = t_start
7: for i = 1 to iterations do
8:   Randomly switch one selected variable in data2 and rescore: rescore
9:   dscore = rescore − bestscore
10:  if dscore ≥ 0 OR UnifRand(0, 1) < exp(dscore/t) then
11:    bestscore = rescore
12:  else
13:    Undo variable switch in vars2
14:  end if
15:  Update temperature: t = t × 0.9
16: end for
17: Output: vars2
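A minimal Python sketch of Algorithm 1 is given below, assuming a helper score_fit(model, data, vars_) that returns the log-likelihood of the chosen columns of data under the fixed, pre-parameterised Bayesian network; that helper and the parameter values are illustrative assumptions, not part of any particular library.

```python
import math
import random

def functional_variable_search(model, data2, all_vars, k,
                               t_start=1.0, iterations=500, cooling=0.9):
    """Simulated-annealing search for the k columns of data2 that best fit
    the fixed functional model (a sketch of Algorithm 1)."""
    vars2 = random.sample(list(all_vars), k)        # random initial selection
    score = score_fit(model, data2, vars2)          # score_fit: assumed helper
    t = t_start
    for _ in range(iterations):
        candidate = list(vars2)
        i = random.randrange(k)                     # switch one selected variable
        candidate[i] = random.choice([v for v in all_vars if v not in candidate])
        rescore = score_fit(model, data2, candidate)
        dscore = rescore - score
        # Accept improvements, and worse moves with probability exp(dscore / t)
        if dscore >= 0 or random.random() < math.exp(dscore / t):
            vars2, score = candidate, rescore
        t *= cooling                                # cool the temperature
    return vars2
```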
2.2 Data Integration for Functional Model Prediction
Not only do we want to be able to identify variables of interest from another dataset, we also want to be able to use this new data to improve our functional models. In order to do that, we carry out another set of experiments. Algorithm 2 documents how the discovered variables, vars2, from data2 can be used in conjunction with those supplied from data1 in order to parameterise a functional model (or FnBN, as we refer to it for the remainder of the paper since we make use of BN models). The algorithm uses a k-fold cross-validation approach to test the predictive accuracy of the model parameterised with the newly associated data (in other words, by using the concatenation of the associated variables from each dataset). In this paper we compare this approach to standard cross-validation where data2 is not integrated. For all experiments, we predict the discrete state of each node given the states of the other nodes by applying junction-tree inference [11] on the discovered models. We always ensure that the test data is unseen and has not been used either to select the variables from data2 or to parameterise the model.

Algorithm 2. Using the discovered functional variables to build models
1: Input: data1, vars1, data2, vars2
2: Apply k-fold cross validation to build k training and testing sets on data1
3: For each training phase incorporate the data from data2 using vars2 identified in Algorithm 1
4: Test on the test sets from data1
5: Output: Predictive Accuracy
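A sketch of Algorithm 2 in the same spirit, assuming pandas data frames and stand-in helpers fit_bn and predict_accuracy for the Bayesian-network learning and junction-tree prediction used in the paper; these helpers and parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold

def integrated_cross_validation(data1, vars1, data2, vars2, k=10):
    """k-fold CV on data1; each training fold is augmented with the rows of
    data2 restricted to the functionally matched columns vars2."""
    X1 = data1[vars1].to_numpy()
    X2 = data2[vars2].to_numpy()                   # matched columns, ordered as vars1
    accuracies = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True).split(X1):
        train = np.vstack([X1[train_idx], X2])     # integrate the second dataset
        model = fit_bn(train, vars1)               # assumed helper
        accuracies.append(predict_accuracy(model, X1[test_idx]))  # assumed helper
    return float(np.mean(accuracies))
```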
2.3 The Experiments
The synthetic data experiments involve two simulation approaches. Firstly, a relatively simple Bayesian network was defined by hand, and data was sampled from it to produce two different datasets. Secondly, data was generated from the Alarm network [4]. We used these datasets to explore the effect that both function complexity and sample size have on the success of the functional integration approach, in terms of identifying related variables and improving prediction. We compare the effect of function complexity by using a simple single-link function between two variables (a single parent, a → b, based upon the factorisation P(a, b) = P(a)P(b|a)) as well as a collider function (two parents, a → b ← c, based upon the factorisation P(a, b, c) = P(a)P(c)P(b|a, c)). These are selected from both the hand-coded data and the Alarm network. We apply leave-one-out cross validation on the first dataset by learning a BN model, and compare the predictive results with a model that is learnt from the combined datasets, using the functional model learning process in Algorithm 1 to identify the functionally similar variables and Algorithm 2 to combine the datasets and generate predictions.
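As an illustration of the first, hand-coded simulation, the following sketch samples two datasets from the same collider structure a → b ← c; the conditional probabilities are made up for the example and are not those of the original hand-coded network.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_collider(n):
    """Sample n cases of the collider a -> b <- c with illustrative CPTs."""
    a = rng.integers(0, 2, size=n)                           # P(a)
    c = rng.integers(0, 2, size=n)                           # P(c)
    p_b = np.where(a & c, 0.9, np.where(a | c, 0.6, 0.1))    # P(b = 1 | a, c)
    b = (rng.random(n) < p_b).astype(int)
    return np.column_stack([a, b, c])

data1 = sample_collider(80)   # used to parameterise BN1
data2 = sample_collider(80)   # searched for functionally similar variables
```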
We also explore the method on three real-world ecological datasets that document the biomass of numerous species of fish at different times and locations in three ocean regions: the North Sea (NS), George's Bank (GB) and the East Scotian Shelf (ESS). See Figure 2 for a map of the ocean locations. For all datasets, the biomass was determined from research vessel fish trawling surveys using strict protocols assuring consistent sampling from year to year, thus providing a useful relative index. In each survey, all individuals in an individual trawl set are counted, weighed and classified into species, for between about 80 and 500 tows each year. The average over tows for each year in each system reflects the overall species composition and relative abundance for that year. Usually upward of 100 species were caught each year, but some were caught infrequently. The community considered for each system was determined using a species filter such that only those species which were caught relatively frequently and well by the fishing gear were considered, thus removing the potential for spurious relationships that could arise simply because of poor sampling. This resulted in 44 species for Georges Bank (1963-2008), 44 species for the North Sea (1967-2010) and 42 species for the eastern Scotian Shelf (1970-2010). The sources of these datasets are documented fully in the acknowledgements. We discretise the data into 3 states (High, Medium and Low) using a quantile approach, so that each state contains an equal number of observations for each species. We wanted to find out whether we could identify species in the North Sea and the East Scotian Shelf that have a similar functional relationship to those that are known in George's Bank, and, if so, whether we can use the discovered species to improve the prediction of our models in the North Sea and the East Scotian Shelf. Specifically, we select a number of species that are associated with cod and haddock (two species of interest to ecologists) by using foodwebs of the George's Bank region. These are then used to build the initial functional models for the experiments (Leave One Out cross validation is used on the Georges Bank data to see how good the fit is for these models). We also explore the use of bootstrapping [6] to rank the identified species in the North Sea data as being functionally similar to those selected from the foodwebs of George's Bank. We validate these rankings using stomach surveys of the species, which indicate which species prey upon or compete with others.
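The quantile discretisation described above can be sketched as follows, assuming the biomass indices are held in a pandas DataFrame with one column per species and one row per survey year (an illustrative layout, not the authors' implementation).

```python
import pandas as pd

def discretise(biomass: pd.DataFrame) -> pd.DataFrame:
    """Cut each species' biomass series into three equally populated states."""
    return biomass.apply(
        lambda col: pd.qcut(col, q=3, labels=["Low", "Medium", "High"])
    )
```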
3 Results

3.1 Synthetic Data
First we look at the behaviour of the BN functional modelling approach (FnBN) with a focus on the collider and single parent models from the hand-coded BN data. Recall that the models are essentially: a → b ← c and a → b and that we learn the model from one dataset and use the other to best fit the variables. We compare standard Leave One Out (LOO) cross validation on the first dataset with the combined data using the FnBN models. Figure 3 shows the errors when predicting the discrete states. A prediction error of zero would be recorded if all states are correctly predicted over all variables and experiments, and the error increases as more incorrect predictions are made. These errors are displayed as
Fig. 2. Map of the locations of the three datasets. George’s Bank (GB), The East Scotian Shelf (ESS) and the North Sea (NS).
box whisker plots of the distribution of Sum Squared Errors over 200 repeated runs. The results are shown for increasing sample sizes of data. It is clear that the LOO errors are considerably higher than those that combine the two datasets using the functional model approach, and this difference appears to be more pronounced for the smaller sample sizes, where data combination is likely to be of more value. The LOO results have zero standard deviation, as the prediction will be the same for each experiment, whilst the standard deviation varies for the functional model approach (based upon which variables have been identified). In general the difference is far less significant once the sample size is 80 or larger. This is the case for both the single link function and the collider. Over all sample sizes, we found that the correct links were found for the collider model in 52% of networks and for the single link model in 56% of networks. This is quite low considering the network is not noisy, and is likely to be due to the heavily correlated links, which is why the predictions still showed improvement. In other words, the identified variables may not have been the correct ones, but they were still predictive. Very similar results were found on the Alarm data, shown in Figure 4, where the improvements in prediction were again evident, but this time also for larger sample sizes on the single parent function. What was most surprising is that correct links were found for the collider in only 6% of networks and for the single link in only 16% of networks. This is very low considering the improvement in accuracy, but highlights that for more realistic data the interpretation of the identified variables must be explored with caution, as they could easily be correlated variables.
3.2 Fish Biomass Data
The first set of experiments on the fish biomass data involved applying LOO cross validation on the George’s Bank data and comparing the prediction errors
Fig. 3. Prediction error pairs for LOO and FnBNs with data of increasing sample size for a collider relationship (top) and a simple parent-child link (bottom) on hand-coded BN data
to the functional approach that integrates the identified North Sea variables. The species identified from the stomach data in George’s Bank were haddock (competitor), thorny skate (predator), silver hake (prey), and witch flounder (prey) for cod; and cod (competitor), winter skate (predator), goosefish (predator), American lobster (prey), white hake (prey) and pollock (prey) for haddock. Figure 5 shows the results of LOO compared to the Functional Model. Again, the
Fig. 4. Prediction error pairs for LOO and FnBNs with data of increasing sample size for collider relationships (top) and simple parent-child links (bottom) on Alarm data
functional model approach consistently improves the errors for both the cod and the haddock models when combined with the North Sea data and the East Scotian Shelf data. Figure 5 also shows the confusion matrices for the predictions for cod and haddock for both methods. It is clear from these that the functional models that are trained on the original George’s Bank data and combined with the identified species from the North Sea generate better predictions for the selected species. Having found an improvement in predictive accuracy in the functional
Fig. 5. Prediction error and confusion matrices on Fish data trained on Georges Bank data and tested on North Sea cod (a), North Sea Haddock (b), East Scotian Shelf Cod (c) and East Scotian Shelf Haddock (d)
models, we now explore some of the identified species in the North Sea and East Scotian Shelf to see if they fit the expected hypotheses of ecologists. Finally, we explore the rankings generated from applying the bootstrap to identify functionally equivalent species from the North Sea. These should have similar functions to those identified in George's Bank (which were chosen based upon
Fig. 6. Ranked Species using Bootstrapping with the Variable Search for Fish data trained on Georges Bank data and tested on cod in the North Sea (a) and East Scotian Shelf (b)
their relationship to cod). Figure 6 displays these rankings. Perhaps the most striking feature of the functional equivalence algorithm applied to the East Scotian Shelf is the presence of many deepwater species such as argentine (Argentina sphyraena), grenadier (Nezumia bairdi) and hakes (Merluccius bilinearis). The inclusion of grey seals is expected, as they were implicated in the decline and lack of recovery of many groundfish stocks, such as cod, on the East Scotian Shelf. The presence of coldwater-seeking deepwater species on the East Scotian Shelf could be an indication of the cooling of the water that occurred there in the late 1980s and early 1990s, which also led to increased coldwater shrimp and snow crabs. Furthermore, though grey seals increased in abundance at the same time, grey seals are not deep divers, and if the deepwater species remained in the shelf basins and slope water, they would be less susceptible to grey seal predation than cod would be. In the North Sea, most of the selected species are commercially desirable and some experienced large declines in biomass in this period, though the nature of the species is not dissimilar to Georges Bank, in contrast to the East Scotian Shelf, where some qualitatively very different species appeared. Catch of haddock and cod appeared to be important in the North Sea, while commercial fish catch seemed less important on the East Scotian Shelf. These factors combined might suggest that catch is one of the most important factors driving change in the North Sea, while on the East Scotian Shelf it may be that other factors led to fundamental changes in the fish community composition. To summarise, our functional approach identified many expected species that play key functional roles with respect to cod in the North Sea and East Scotian
Shelf and that fit a general trend of catch being influential in the North Sea but not in the East Scotian Shelf, where cod stocks appear irrecoverable.
4 Conclusions and Future Work
In this paper, a novel approach for integrating datasets with differing attributes has been explored. It works by exploiting prior expertise about the data in the form of functional knowledge (which can either be learnt from other datasets or supplied by experts in the form of a functional model). The method has been tested using Bayesian network models on synthetic data from a well-documented simulated dataset as well as on real surveyed fish abundance data collected from two North West Atlantic regions and the North Sea, in conjunction with associated foodwebs. The results have consistently shown that, whilst the direct functional causal factors are not always identified, closely correlated variables are discovered which result in improved prediction. The possibility of using this kind of approach for identifying important functions, and the species which fill them in each community, may warrant particular care for an ecosystem approach to management (which is all the rage in environmental management). It could also help identify when a system may be teetering on the edge of changing to an alternative state because of loss of function (e.g. eastern Newfoundland after the cod crash is now dominated by invertebrates). There is a necessity to restore function before restoring ecosystem state, and this may be much more difficult than just stopping fishing as there is hysteresis in the system. It is not just an interesting exercise in analysis or esoteric ecology, but has application in helping fishery managers manage the risk of destroying, and the prospects of recovering, function that has knock-on effects throughout the marine ecosystem. This can provide a means to operationalise fuzzy and often intangible ecological concepts which can be brought in to aid better management. Fish abundance data are time-series, and the next natural progression is to extend the functional modelling approach to dynamic models, which we are beginning to undertake through the use of dynamic Bayesian networks [7] and variants of hidden Markov models [14], [2]. As was hinted in the introduction, this approach is not only limited to fish survey data; it can be applied to any domain where multiple studies are recorded of comparable systems that can exhibit similar functional relationships. Some alterations to the algorithm could offer improvements, for example by exploiting expertise about the function to influence the networks through prior structures and parameters.

Acknowledgements. We would like to thank Jerry Black (DFO-BIO Halifax) for assistance with the ESS survey data, Alida Bundy (DFO-BIO Halifax) for the ESS foodweb, the ICES DATRAS database for the North Sea IBTS data, Bill Kramer (NOAA-NMFS Woods Hole) for providing the Georges Bank survey, Jason Link (NOAA-NMFS) for the Georges Bank foodweb, Jon Hare (NOAA-NMFS) for the NE USA plankton data, SAHFOS for the North Sea plankton data, and Mike Hammill for the ESS seal data.
References
1. SEIS 2008. European Commission: Towards a shared environmental information system (2008)
2. Aliferis, C.F., Tsamardinos, I., Statnikov, A.: HITON, a novel Markov blanket algorithm for optimal variable selection. In: Proceedings of the 2003 American Medical Informatics Association, pp. 21–25 (2003)
3. Bakun, A.: Wasp-waist populations and marine ecosystem dynamics: navigating the "predator pit" topographies. Progress in Oceanography 68, 271–288 (2006)
4. Beinlich, I., Suermondt, G., Chavez, R., Cooper, G.: The ALARM monitoring system: A case study with two probabilistic inference techniques for belief networks. In: Proc. 2nd European Conf. on AI and Medicine (1989)
5. Duplisea, D.E., Blanchard, F.: Relating species and community dynamics in a heavily exploited marine fish community. Ecosystems 8, 899–910 (2005)
6. Friedman, N., Goldszmidt, M., Wyner, A.: Data analysis with Bayesian networks: A bootstrap approach. In: Proceedings of the 15th Annual Conference on Uncertainty in Artificial Intelligence
7. Friedman, N., Murphy, K.P., Russell, S.J.: Learning the structure of dynamic probabilistic networks. In: Proceedings of the 14th Annual Conference on Uncertainty in AI, pp. 139–147 (1998)
8. Hammond, T.R., O'Brien, C.M.: An application of the Bayesian approach to stock assessment model uncertainty. ICES Journal of Marine Science 58, 648–656 (2001)
9. Heckerman, D., Geiger, D., Chickering, D.: Learning Bayesian networks: The combination of knowledge and statistical data. In: KDD Workshop, pp. 85–96 (1994)
10. Inza, I., Larranaga, P., Etxeberria, R., Sierra, B.: Feature subset selection by Bayesian network-based optimization. Artificial Intelligence 123(1-2), 157–184 (2000)
11. Jensen, F.V.: Bayesian Networks and Decision Graphs. Springer, Heidelberg (2001)
12. Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220, 671–680 (1983)
13. Marcot, B.G., Steventon, J.D., Sutherland, G.D., McCann, R.K.: Guidelines for developing and updating Bayesian belief networks applied to ecological modeling and conservation. Canadian Journal of Forest Research 36, 3063–3074 (2006)
14. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 257–286 (1989)
15. Thrush, S., Giovani, C., Hewitt, J.E.: Complex positive connections between functional groups are revealed by neural network analysis of ecological time-series. The American Naturalist 171(5) (2008)
A Stylometric Study and Assessment of Machine Translators

V. Suresh, Avanthi Krishnamurthy, Rama Badrinath, and C.E. Veni Madhavan

Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India
{vsuresh,rama,cevm}@csa.iisc.ernet.in, [email protected]

Abstract. In this work we present a statistical approach inspired by stylometry – the measurement of author style – to study the characteristics of machine translators. Our approach quantifies the style of a translator in terms of the properties derived from the distribution of stopwords in its output – a standard approach in modern stylometry. Our study enables us to match translated text to the source machine translator that generated it. Also, the stylometric closeness of human generated text to that generated by machine translators provides handles to assess the quality of machine translators.

Keywords: Machine Translation, Stylometry, Stopwords, Topic Models, LDA.

1 Introduction
Modern machine translation approaches [2, 3] are often statistical in nature and can be traced back to the pioneering work of Weaver [9]. Assessment of a machine translator's output is important for establishing benchmarks for translation quality. An obvious way to assess the quality of machine translation is through the perception of human subjects. Though highly reliable, this approach is not scalable, as the volume of output to be assessed could be huge. Hence mechanisms have been devised to automate the assessment process. In principle, such assessment methods are essentially a study of correlations between human translation and machine translation. In this work, we present a scalable approach to assess the quality of machine translation that borrows features from the study of writing styles, popularly known as stylometry. Stylometric techniques are useful in ascertaining the authorship of disputed documents, as illustrated by the case of authorship attribution of the disputed documents in the Federalist Papers [10]. Counterintuitively, these methods [11] often rely on studying the distribution of stopwords – words that do not necessarily convey semantic content – rather than contentwords, which carry such emphasis. One of the reasons is that stopwords such as articles, prepositions, popular verbs, pronouns, etc. occur in such great profusion that, even though the number of stopwords is small compared to the size of the lexicon, they are
typically around 30% of written text (in terms of word usage), and hence provide richer statistics for analysis when compared to contentwords. We first show that our stylometric approach correctly maps a given translated output to the machine translator that generated it. This is akin to author identification: measuring the style of the given text and identifying the author whose style is closest to it. As an extension of this idea, we use our approach to categorize the efficacy of machine translators based on how close, in terms of style, human generated text is to the text generated by the machine translators. Typically, a machine translator's quality is assessed by comparing it against a human generated reference text on a document by document basis. Non-availability of human reference documents – blogs, for example – poses a limiting context for such assessment schemes. However, assessment of translators viewed in the context of stylometry is a problem of identifying whether or not the text possesses qualities that are usually observed in text generated by humans; this can be done even in the absence of standard per-document human references. Thus our approach offers a scalable way to benchmark the quality of translators even in the absence of standard references. Our experimental results for the popular web based translators are in accordance with subjective opinion regarding the quality of the translated text. The rest of the paper is organized as follows. In the next section we give a description of the conventional measures that are used for the purpose of evaluating the quality of machine translation. Section 3 explains our approach; this is followed by Section 4, which details our experiments and results. In Section 5 we summarize the findings of this work and present our concluding remarks.
2 Evaluation Measures
Measures to evaluate the quality of translators have largely depended on comparing a translator's output with a human generated translation as the reference. The quality of a translator is ascertained based on overlaps in properties, like the number of words in each sentence, the preservation of word orderings, the editing operations required to morph one into the other, occurrences of n-grams, etc., between the human and the translator generated texts. These have evolved into some important benchmarks in the field of machine translation evaluation. In the following we outline a few important evaluation measures developed over the past years.
2.1 Edit Distance Based Metrics
Here the minimum number of atomic operations, like insertions, deletions, etc., that must be made on the machine translation in order to transform it into the reference translation is considered for evaluation. We consider some standard metrics that come under this category.

Word Error Rate (WER). WER is defined as the ratio of the Levenshtein distance¹ between the machine translation and the reference translation, calculated at the
¹ http://en.wikipedia.org/wiki/Levenshtein_distance
word level, to the number of words in the reference translation. The Levenshtein distance is defined as the minimum number of word insertions, substitutions and deletions necessary to transform one sentence into the other.

Position-Independent Word Error Rate (PI-WER). The major problem with WER is that it depends heavily on the word order of the two sentences, but the word order of the machine translation might be different from that of the reference translation. To address this problem PI-WER [15] was introduced. This metric is similar to WER but it neglects word order completely.

Translation Edit Rate (TER). TER is defined as the number of edits needed to change a machine translation so that it exactly matches one of the references, normalized by the average length of the references [16]. Possession of multiple references is assumed here. The allowed edit operations are insertion, deletion and substitution of single words or word sequences. This measures the amount of human work that would be required to post-edit the machine translation to convert it into one of the references. A related metric called HTER (Human-targeted TER) is also used. TER usually scores a candidate translation against an existing reference translation; HTER scores a candidate translation against a post-edited version of itself.
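For concreteness, the following is a minimal sketch of the word-level WER computation described above: a plain dynamic-programming Levenshtein distance over words, normalised by the reference length.

```python
def word_error_rate(hypothesis: str, reference: str) -> float:
    """WER: word-level Levenshtein distance divided by the reference length."""
    hyp, ref = hypothesis.split(), reference.split()
    d = [[0] * (len(ref) + 1) for _ in range(len(hyp) + 1)]
    for i in range(len(hyp) + 1):
        d[i][0] = i                       # deletions only
    for j in range(len(ref) + 1):
        d[0][j] = j                       # insertions only
    for i in range(1, len(hyp) + 1):
        for j in range(1, len(ref) + 1):
            cost = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(hyp)][len(ref)] / max(len(ref), 1)
```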
2.2 N-Gram Based Metrics
These metrics evaluate the translator's quality based on the co-occurrence of n-grams between the machine and reference translations.

BLEU (BiLingual Evaluation Understudy). BLEU [4] is a precision based approach known to exhibit correlations with human judgement on translation quality. It measures how well a machine translation overlaps with multiple human translations using n-gram co-occurrence statistics. The final score is a weighted geometric average of the n-gram scores. In addition to this, to prevent very short sentences from getting high precision scores, a brevity penalty is added. Higher order n-grams account for fluency and grammaticality – the quality of being grammatically well-formed text. Typically an n-gram length of 4 is used.

NIST (from the National Institute of Standards and Technology). NIST [8] improves the assessment approach of BLEU by giving weights to the matching n-grams such that rarer words get more weight. For example, matches of stopwords – words like a, an, and, the, etc. – are given less weight than matches of words that convey more information. In addition, some other corrections were made: not being case insensitive, a less punitive brevity penalty measure, etc.

METEOR (Metric for Evaluation of Translation with Explicit ORdering). METEOR [7] takes into account the precision and recall of unigrams of the human reference text and the translator's output. Precision is computed as the ratio of the number of unigrams in the machine translator's output that are also found in the reference translation, to the number of unigrams in the candidate
translation. For recall, the numerator is the same as in precision, but the denominator is the number of unigrams in the human translation. Precision and recall are combined using a harmonic mean – with more weight on recall – in order to compute a correlation between the human and the machine translator's output. In addition, this approach also matches synonyms and uses a stemmer to lemmatise the words before computing the measure.

ROUGE (Recall Oriented Understudy for Gisting Evaluation). ROUGE [14], which is a recall oriented metric, was developed with the same idea as BLEU. It is a widely used technique for the evaluation of summaries. It has been shown in [13] that the ROUGE-L, ROUGE-W and ROUGE-S statistics correlate with human judgements and hence can be used to assess the quality of machine translation.

However, the main disadvantages of the above mentioned methods are:

– Many important characteristics, like stopword distributions, which start to appear at the document level are lost when evaluation is done at the sentence level.
– Evaluation cannot be done in the absence of a human translation acting as a reference document.

In addition to this, Callison-Burch et al. [1] have shown that BLEU-like metrics are unreliable at the level of individual sentences due to data sparsity. To address these issues, we present a novel approach to assess the quality of machine translation that takes into account only the stopwords present in the text. Also, in our approach the evaluation is done at the document level, hence we capture the translator's behaviour over the entire document rather than over individual sentences. The overall essence of our approach is to identify whether or not a machine translation has the characteristics of human generated text. For this purpose an exact reference translation is not required; all that is required is a corpus that contains a collection of standard human generated translations. Thus our approach is free from the restrictive requirement of possessing human reference translations. Stopwords have previously been used for the purpose of stylometry or author attribution [10, 11]. As mentioned earlier, stopwords are words that do not usually convey semantic content but occur in significant quantities in English language usage. Our approach uses 571 stopwords from a standard list². In our approach, we study two popular web based translators, namely the Google Translator³ and Microsoft's Bing Translator⁴, and assess the quality of their translation by finding how close their outputs are to human generated text. Our subjective impression – after going through the translations of the French novels The Count of Monte Cristo, The Three Musketeers and Les Miserables – suggests that Google Translator's output is of a better quality than Bing's. Results based on this heuristic are in line with this observation.
² http://www.ai.mit.edu/projects/jmlr/papers/volume5/lewis04a/lyrl2004_rcv1v2_README.htm
³ http://translate.google.com
⁴ http://www.microsofttranslator.com
3 Evaluation Heuristic
Identification of the source translator that generated a given document is based on the stylometric closeness of the given document to the translators. Likewise, our assessment of the quality of a machine translator's output is based on its overall stylometric closeness to text generated by human translation. The text mentioned here comprises only stopwords: all non-stopwords (contentwords) are removed from the text prior to applying our heuristics. First, features are extracted from the machine translators' outputs and the reference human translations. Then, the decision as to which translator produces text that is stylometrically closer to human generated text is made based on how close the features corresponding to the human translation are to the features of the machine translators. The features we extract are derived from a topic modelling mechanism, namely Latent Dirichlet Allocation (LDA) [5]. LDA is used to extract common topics from text corpora with the view of finding hidden or latent topic structures in them. These latent or hidden structures are popularly called topics. Typically a topic is a distribution over the words in the dictionary; every word is associated with every topic, but with different strengths based on probabilities. With topics defined this way, documents are viewed as mixtures of these topics. Each document is then viewed as a vector with as many components as the number of topics, wherein each component is indicative of the presence of the corresponding topic in that document. The components of these vectors, viewed as n-tuples (n being the number of topics), are used as features for our approach. Upon generating the topic vectors, stylometric closeness is measured as follows. The vectors from the documents are given labels to denote the translator from which they were derived (Google or Bing in this case) and classified using a Support Vector Machine (SVM) [6] classifier. Given a translated document, its topic vector (unlabelled) is computed and presented to the classifier; the predicted label identifies the source translator. The SVM's label prediction is in terms of how close the unlabelled vector is to the sets of labelled vectors belonging to the different classes of translators. Likewise, a similar approach is used for the assessment of translation quality. An SVM classifier is built based on the labelled topic vectors from the different translators. The topic vectors derived from the human translated documents (using LDA) are presented to the classifier, and the output of the classifier – the translator to which they are assigned – is recorded for each human document. The translator to which a majority of human translated documents are assigned is deemed qualitatively better. The experiments and results of applying this heuristic to French to English translation are described in the following section.
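A rough sketch of this pipeline using scikit-learn in place of GibbsLDA++ and LIBSVM is given below; docs/labels are the machine translated chapters and their source translator, human_docs are the human translations, and stopwords is the standard stopword list mentioned earlier. This is only an illustration of the heuristic, not the authors' implementation.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC

def assess(docs, labels, human_docs, stopwords, n_topics=20):
    # Keep only stopwords by fixing the vocabulary to the stopword list
    vec = CountVectorizer(vocabulary=sorted(set(stopwords)))
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    topic_vectors = lda.fit_transform(vec.fit_transform(docs))
    clf = SVC().fit(topic_vectors, labels)          # translator classifier
    # Each human chapter is assigned to the stylometrically closest translator
    human_topics = lda.transform(vec.transform(human_docs))
    return clf.predict(human_topics)
```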
4 Experiments and Results
We considered the original French texts: The Count of Monte Cristo, The Three Musketeers, Les Miserables and their English translations (human generated)
freely available from Project Gutenberg (www.gutenberg.org). Hereafter, these novels will be referred to as CM, TM and LM respectively. The original texts were then translated into English using the Google and Bing translators. Samples of these translations are shown in Table 1. We mentioned earlier that our approach identifies the better translator. This statement must be refined, in view of the sample output seen in the table, to "our approach identifies the translator that is less worse", given that both these translators are woefully inadequate when it comes to preserving grammatical correctness and the context of the narrative.

Table 1. Human, Google and Bing translations of the opening passage from The Count of Monte Cristo. It is evident that both translators fall short even in constructing grammatically correct sentences. Note that both machine translators refer to the ship as a building! Subjectively, however, over many such passages one feels that Google is better compared to Bing. (To put it better, Google is less worse among the two.)
Human: On the 24th of February, 1815, the look-out at Notre-Dame de la Garde signalled the three-master, the Pharaon from Smyrna, Trieste, and Naples. As usual, a pilot put off immediately, and rounding the Chateau d'If, got on board the vessel between Cape Morgion and Rion island. Immediately, and according to custom, the ramparts of Fort Saint-Jean were covered with spectators; it is always an event at Marseilles for a ship to come into port, especially when this ship, like the Pharaon, has been built, rigged, and laden at the old Phocee docks, and belongs to an owner of the city.

Google: On February 24, 1815, the lookout of Notre-Dame de la Garde reported the three-masted Pharaon, from Smyrna, Trieste and Naples. As usual, a coast pilot immediately set the port, shaved Chateau d'If, and went to approach the vessel between Cape Morgion and the island Rion. Immediately, as usual again, the platform of the Fort St. John was covered with spectators, for it is always a great affair Marseille that the arrival of a building, especially when this ship, as the Pharaon was built, rigged, stowed on the sites Phocaea old, and belongs to an owner of the city.

Bing: 24 February 1815, the surveillance of our Lady of the guard himself the unidentified Pharaon, coming from Smyrna, Trieste, and Naples. As usual, a coastal driver immediately sailed from the port, rasa If Castle, and alla approach the ship between the Morgion Cape and island of Rio. Immediately, as usual again, platform of fort Saint-Jean was covered with curious; because it is always a great case in Marseille than the arrival of a building, especially when this building, like the Pharaon, was built, rigged, stowed on Phocaea, old sites and belongs on a shipowner to the city.
4.1 Comparison with Standard Metrics
Before presenting the results based on our approach, we show that the standard existing approaches give similar results. As the original and translated novels had different numbers of sentences, we could not use BLEU and METEOR for evaluation. However, the results based on ROUGE (which does not depend on the number of sentences) suggest that Google performs better than Bing. We show the ROUGE results using the following three metrics:

– ROUGE-L: Longest Common Subsequence
– ROUGE-W-1.2: Weighted Longest Common Subsequence with weight factor 1.2
– ROUGE-S4: Skip-bigram match allowing 4 gaps

Table 2 shows the ROUGE-L, ROUGE-W-1.2, and ROUGE-S4 results for Google and Bing applied to the three novels. These values were computed by taking a chunk of words from each chapter of these novels, applying ROUGE to the chunks individually, and then averaging to obtain the scores shown in the table. The number of words in a text chunk was varied from 100 to 500 words in steps of 100. In all these experiments, it was evident that Google performs better than Bing.

Table 2. ROUGE scores for CM, TM and LM. G-x and B-x correspond to the length of text considered from the Google and Bing translators respectively; x ranges from 100 to 500 words in steps of 100.

Length   ROUGE-L                ROUGE-W-1.2            ROUGE-S4
         CM     TM     LM       CM     TM     LM       CM     TM     LM
G-100    0.550  0.647  0.613    0.178  0.218  0.209    0.296  0.413  0.373
B-100    0.449  0.516  0.532    0.142  0.166  0.175    0.186  0.240  0.255
G-200    0.580  0.672  0.638    0.167  0.202  0.193    0.299  0.417  0.384
B-200    0.486  0.558  0.553    0.136  0.160  0.161    0.197  0.257  0.261
G-300    0.560  0.690  0.650    0.161  0.195  0.183    0.304  0.426  0.389
B-300    0.514  0.579  0.567    0.134  0.157  0.154    0.208  0.265  0.266
G-400    0.613  0.699  0.659    0.158  0.189  0.177    0.308  0.428  0.391
B-400    0.531  0.592  0.577    0.132  0.152  0.149    0.214  0.271  0.268
G-500    0.626  0.707  0.667    0.156  0.184  0.173    0.314  0.432  0.395
B-500    0.546  0.560  0.585    0.132  0.149  0.146    0.221  0.274  0.272

In these experiments, we made use of the fact that we were in possession of the translated as well as the reference documents. In the following we describe our experiments based on our approach, wherein we show that our results do not depend on the availability of exact references.
4.2 Our LDA Based Approach
Each chapter of these novels was considered as a separate document and was presented to a Latent Dirichlet Allocation mechanism to compute the topic vectors. We used GibbsLDA++⁶ with default settings. Prior to this, as mentioned earlier, only the stopwords were retained. The number of topics to be extracted is subjective and can be varied; for the present experiments this number was
⁶ GibbsLDA++: http://gibbslda.sourceforge.net
varied from 10 to 25 in steps of 5. These topic vectors were then labelled correspondingly as Google/Bing and used to generate a classifier using a Support Vector Machine, as mentioned earlier. For the present work, we used the public domain tool LIBSVM [6] with default settings.
4.3 Source Identification
First, we check if the approach is capable of identifying the source of the machine translation – Google or Bing in the present case. Towards this, topic vectors from a set of selected documents from the Google and Bing outputs are used to build an SVM classifier; the remaining documents are used to test whether or not they are assigned the correct class. The results are shown in Table 3 for different numbers of topics. We see that our approach correctly identifies the translator in most of the cases with high accuracy over the topic range 10 to 20 (the aberration in the second row corresponding to topic number 15 needs further investigation). It can be seen from the table that the training and testing sets are not necessarily from the same novels, or even from the same author. This shows that the machine translators have unique quantifiable styles that are independent of the style of the source authors.

Table 3. Test to identify the correct translator. Column one refers to the number of documents translated with both Google and Bing, column two represents the number of the particular machine translated documents used for testing. The last 4 columns give the percentage of correct classification for different numbers of topics.
Training set   Testing set        Accuracy (%)
                                  10      15      20      25
CM, 117        TM, 67 - Google    83.58   100.00  67.16   97.02
CM, 117        TM, 67 - Bing      77.61   37.31   59.70   98.51
LM, 220        TM, 67 - Google    64.18   98.51   92.54   53.73
LM, 220        TM, 67 - Bing      76.12   98.51   98.51   76.12
LM, 220        CM, 117 - Google   66.67   98.29   82.91   41.88
LM, 220        CM, 117 - Bing     75.21   75.21   94.87   42.74
4.4 Qualitative Assessment
The quality of a translator is judged by the closeness of the topic vectors resulting from the human translation to those computed from the machine translated documents. This is based on the label assigned by the SVM classifier (trained with the machine translated documents as mentioned earlier) to the topic vectors from the human documents. The more human documents assigned to a translator, the better that translator is in terms of its closeness to human style. We had previously mentioned that Google's translation appears to be subjectively better than Bing's. Our approach identifies a majority of human generated documents as closer to Google, and hence is in accordance with the subjective evaluation of the translations. This is shown in Table 4. Note that the training
Table 4. Google vs. Bing translators. Column one refers to the number of documents translated with both Google and Bing, column two represents the number of human-translated documents used for testing. The last four columns give the percentage of human-translated documents identified as Google for different numbers of topics.
Training set   Testing set       Classified as Google (%)
                                 10       15       20       25
TM, 67         TM, 67         100.00    98.51    92.54    98.51
CM, 117        CM, 117         76.92    66.67    86.33    83.76
LM, 220        LM, 220         61.36    76.82    82.27    50.91
CM, 117        TM, 67          83.58    83.58    79.11    95.52
LM, 220        CM, 117         67.52    93.16    69.23    52.14
LM, 220        TM, 67          68.66    95.52    97.02    47.76
vectors mentioned in the table are from the outputs of the Google and Bing translators, while the testing vectors are the ones computed from the human translations. It can also be seen that the results are almost invariant to the number of topics chosen. In some cases the entire human corpus is identified as closer to Google, resulting in an accuracy of 100%. This is to be viewed in the following sense: in reality, the output of the classifier depends on which cluster the human document is closest to. Ideally, closeness should be defined in terms of a margin, and if both translators lie within the margin the case should be deemed a tie. We have not employed such a margin-based tie resolution mechanism, and hence these very high values are in favour of Google. However, our method was not biased in favour of either translator beforehand, so the absence of a classification margin could equally well benefit Bing. It is nevertheless evident that the Google translator is consistently better than Bing (or, more accurately, less bad). We now turn our attention to the lower half (fourth row downwards) of Table 4: here the machine-translated documents used for training the classifier and the set of human documents used for testing belong to different novels and authors. For example, the machine translation of The Count of Monte Cristo is compared with the human translation of The Three Musketeers. We observe that our approach still identifies a majority of the human-translated documents as closer to the Google translator. Thus our results are fairly independent of the specific human translation used, which suggests that our approach captures the overall closeness of the machine translation to human-generated text. This establishes that our approach does not require an exact human translation of the original text for assessing the quality of the machine translation, which is useful in situations where such translations do not exist. Since LDA is usually employed to cluster meaningful words into their natural topics (in fact, stopwords are usually removed from the documents prior to computing topic groupings, as it is assumed that they would distribute uniformly over topics), one may wonder how the stopwords themselves are grouped into topics. Table 5 lists the top five stopwords for five topics from
Table 5. A partial listing of topics and the top five words (w) in each, with probabilities (p), for the Human, Google and Bing corpora, out of 20 topics

Topic      Human              Google             Bing
           w        p         w        p         w        p
I          said     0.080     said     0.087     and      0.085
           and      0.076     and      0.081     said     0.076
           will     0.049     will     0.047     it       0.050
           be       0.042     go       0.042     is       0.050
           is       0.040     is       0.040     will     0.048
II         the      0.260     the      0.263     the      0.257
           and      0.112     and      0.122     and      0.125
           of       0.091     of       0.089     of       0.095
           which    0.046     which    0.040     which    0.040
           had      0.036     had      0.035     had      0.0357
III        the      0.157     the      0.158     the      0.153
           with     0.076     with     0.068     with     0.075
           for      0.060     was      0.058     for      0.058
           was      0.054     for      0.057     was      0.057
           would    0.049     would    0.054     would    0.055
IV         a        0.158     a        0.165     a        0.187
           of       0.099     of       0.096     of       0.101
           in       0.076     in       0.074     in       0.077
           his      0.065     his      0.066     his      0.065
           which    0.065     which    0.056     which    0.053
V          you      0.134     you      0.145     you      0.150
           to       0.078     to       0.068     me       0.080
           me       0.067     me       0.066     to       0.063
           i        0.056     i        0.058     i        0.052
           have     0.055     have     0.046     have     0.046
an experiment with 20 topics. Note that the topic ordering and the associated probabilities for the Human, Google and Bing corpora are so similar that it is visually difficult to judge which one is closest to the Human corpus in terms of these properties. Note that, unlike the previous approaches, the judgement is made in terms of closeness to clusters in the feature space formed by the outputs of the machine translators rather than closeness to the actual translation. In essence, this can be construed as comparing the human-generated document with the entire set of documents as represented in the feature space of the translators. Hence this approach is more stringent in terms of assessing the translator's quality. These experiments suggest that our heuristic is capable of evaluating translation quality in accordance with human perception. In addition, unlike an approach such as METEOR [7], our approach does not consider content words; this is very useful in reducing the effort required for evaluation – matching synonyms, considering inflections, etc. Though very small in number,
stopwords constitute around 30% of English text and hence seem to possess statistical properties of higher quality than those resulting from the content words. In addition, this makes the approach independent of the synonyms chosen by one translator over another, thereby avoiding the synonym matching performed by typical evaluation approaches. Our results in this section are based on applying our approach to standard documents such as books; however, there should be no difficulty in applying it to other web documents such as blog posts and reviews. One might also argue that the present assessment mechanism could be gamed – translators could produce text in such a way that the stopword distributions resemble those of human-generated text. Such a criticism is not without merit, hence we make the following observations. Firstly, we assess standard translators from reputed sources – Google and Microsoft in this case – for which earning the trust of online users is given more weight than manipulating the output to score well in assessment tests. Secondly, it is not clear how easy it would be to manipulate a translator to generate output whose stopword distribution resembles that of human texts (randomly displaying a human-generated text is one naive possibility). Lastly, and more importantly, using stopwords for author identification is an accepted approach in stylometry, and ours is presented as a bridge between stylometry and translator assessment mechanisms, with the hope that improvements in one area can be quickly passed on to the other.
5 Conclusions
We have presented a stylometry-based approach for assessing the quality of machine translation in a scalable manner. The present approach borrows from stylometry the practice of studying the properties of stopwords in texts rather than the whole text. More importantly, our approach does not require the exact human translation of the original text – all that is required is the human translation of any standard text. This benefits the assessment of translation quality in situations where reference documents are not readily available.
References
1. Callison-Burch, C., Osborne, M., Koehn, P.: Re-evaluating the Role of BLEU in Machine Translation Research. In: EACL, pp. 249–256 (2006)
2. Brown, P.F., Pietra, V.J.D., Pietra, S.A.D., Mercer, R.L.: The Mathematics of Statistical Machine Translation: Parameter Estimation. Comput. Linguist. 19(2), 263–311 (1993)
3. Hutchins, W.J., Somers, H.L.: An Introduction to Machine Translation. Academic Press, London (1992)
4. Papineni, K., Roukos, S., Ward, T., Zhu, W.-J.: BLEU: A Method for Automatic Evaluation of Machine Translation. In: ACL 2002: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311–318 (2002)
5. Blei, D., Ng, A.Y., Jordan, M.I.: Latent Dirichlet Allocation. Journal of Machine Learning Research 3 (2001)
6. Chang, C.-C., Lin, C.-J.: LIBSVM: A Library for Support Vector Machines (2001)
7. Lavie, A., Agarwal, A.: METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments. In: StatMT 2007: Proceedings of the Second Workshop on Statistical Machine Translation, pp. 228–231 (2007)
8. Doddington, G.: Automatic Evaluation of Machine Translation Quality Using N-gram Co-occurrence Statistics. In: International Conference on Human Language Technology Research, pp. 138–145 (2002)
9. Weaver, W.: Translation. In: Locke, W.N., Booth, A.D. (eds.) Machine Translation of Languages: Fourteen Essays, pp. 15–23. MIT Press, Cambridge (1949)
10. Mosteller, F., Wallace, D.L.: Applied Bayesian and Classical Inference: The Case of the Federalist Papers. Springer, Heidelberg (1964)
11. Arun, R., Suresh, V., Veni Madhavan, C.E.: Stopword Graphs and Authorship Attribution in Text Corpora. In: Third IEEE International Conference on Semantic Computing, Berkeley, CA, USA, pp. 192–196 (September 2009)
12. Arun, R., Suresh, V., Saradha, R., Narasimha Murty, M., Veni Madhavan, C.E.: Stopwords and Stylometry: A Latent Dirichlet Allocation Approach. In: NIPS Workshop on Applications of Topic Models and Beyond, Vancouver, Canada (December 2009)
13. Lin, C.-Y., Och, F.J.: Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics. Association for Computational Linguistics (2004)
14. Lin, C.-Y.: ROUGE: A Package for Automatic Evaluation of Summaries. In: ACL Workshop on Text Summarization Branches Out (2004)
15. Tillmann, C., Vogel, S., Ney, H., Zubiaga, A., Sawaf, H.: Accelerated DP-based Search for Statistical Translation. In: European Conference on Speech Communication and Technology, pp. 2667–2670 (1997)
16. Snover, M., Dorr, B., Schwartz, R., Micciulla, L., Makhoul, J.: A Study of Translation Edit Rate with Targeted Human Annotation. In: Association for Machine Translation in the Americas, pp. 223–231 (2006)
Traffic Events Modeling for Structural Health Monitoring Ugo Vespier1 , Arno Knobbe1 , Joaquin Vanschoren1, Shengfa Miao1 , Arne Koopman1 , Bas Obladen2 , and Carlos Bosma2 1
LIACS, Leiden University, The Netherlands 2 Strukton Civiel, The Netherlands
Abstract. Since 2008, a sensor network deployed on a major Dutch highway bridge has been monitoring various structural and environmental parameters, including strain, vibration and climate, at different locations along the infrastructure. The aim of the InfraWatch project is to model the structural health of the bridge by analyzing the large quantities of data that the sensors produce. This paper focuses on the identification of traffic events (passing cars/trucks, congestion, etc.). We approach the problem as a time series subsequence clustering problem. As it is known that such a clustering method can be problematic on certain types of time series, we verified the known problems on the InfraWatch data. Indeed, some of the undesired phenomena occurred in our case, but to a lesser extent than previously suggested. We introduce a new distance measure that discourages this observed behavior and allows us to identify traffic events reliably, even on large quantities of data.
1 Introduction
In this paper, we investigate how to build a model of traffic activity events, such as passing vehicles or traffic jams, from measurement data collected by a sensor network installed on a major Dutch highway bridge [5], as part of its Structural Health Monitoring (SHM) system. The SHM of infrastructural assets such as bridges, tunnels and railways is indeed an interesting problem from a data mining perspective and is proving to be a challenging scenario for intelligent data analysis [5]. A typical SHM implementation requires the infrastructure to be equipped with a network of sensors, continuously measuring and collecting various structural and climate features such as vibration, strain and weather conditions. This continuous measuring process generates a massive amount of streaming data, which can be further analyzed in order to deduce knowledge about the asset's lifetime and maintenance demand. This work is based on real-world data collected in the context of the InfraWatch project (www.infrawatch.com), which is concerned with the monitoring of a large highway bridge in the Netherlands, the Hollandse Brug. The bridge is equipped with a
network of 145 sensors measuring vibrations, strain and temperature at various locations along the infrastructure. Moreover, a camera produces continuous video data overviewing the actual traffic situation on the bridge. The final aim of the project is to build a system capable of assessing the structural health of the bridge over time, providing an efficient way to schedule maintenance work or inspections. It has been shown that the structural stress caused by heavy loads is one of the main causes of bridge deterioration. Because of this, we focus here on modeling traffic activity events in the strain measurements, such as passing vehicles or traffic jams. The produced model can then be employed for real-time event classification or the detection of anomalous responses from the bridge. Furthermore, automatic labeling of the video data can be achieved without relying on more expensive image processing techniques. A single moving vehicle is represented in the strain measurements as a bump-shaped peak (see Figure 1 (right)) with an intensity proportional to the vehicle's weight and a duration in the order of seconds. Events like traffic jams, on the other hand, span significantly larger time scales and cause an overall increase in the average strain level, due to the presence of many slow moving vehicles on the bridge. Because we are dealing with events of varying nature, straightforward algorithms based on peak detection will not suffice. In order to model all the different kinds of traffic events represented in the strain data, we investigate the effectiveness of time series subsequence clustering [2,4,1,3], which essentially employs a sliding window technique to split the data stream into individual subsequences (observations), which can then be clustered. However, the naive implementation of subsequence clustering (SSC) using a sliding window and k-Means is controversial, as it is prone to producing undesirable and unpredictable results, as was previously demonstrated and analyzed in several publications, e.g. [4,1,3]. Indeed, within our strain data application, we notice some of the mentioned phenomena, although not all. We provide an analysis of how the different phenomena can be explained, and why some of them are not present in the data we consider. Finally, we introduce a novel Snapping distance measure which, employed in SSC based on k-Means, removes the artifacts and produces a correct clustering of the traffic events. We believe that the proposed distance measure can lead to a rehabilitation of SSC methods for finding characteristic subsequences in time series.
2 InfraWatch and the Strain Sensor Data
In this section we briefly introduce the research context of our work, the InfraWatch project, and we describe what the strain data looks like and how the different types of traffic activities are represented in the strain measurements, in order to motivate the technical solutions employed in Section 3. The bridge we focus on has been equipped by Strukton Civiel with a sensor network in August 2008, during the maintenance works needed to make it operational and safe again, after some 40 years of service. The network comprises
Fig. 1. Detailed plots of strain, showing a traffic jam during rush hour (left) and individual vehicles (right)
145 sensors that measure different aspects of the condition of the bridge, at several locations along it. These sensors include strain gauges (measuring horizontal strain at various locations), vibration sensors, and thermometers (to measure both the air and structure temperature). For more details, see [5]. As mentioned, we focus on modeling traffic events, such as vehicles passing over the bridge or traffic jams, as represented in the strain measurements. The data is sampled at 100 Hz, which amounts to approximately 8.6 · 10^6 measurements per sensor per day. As the sensor network is highly redundant, and the different strain sensors are fairly correlated or similar in behavior, we selected one sensor that is reliable and low in measurement noise (less than 1.0 µm/m). The strain gauge considered is placed at the bottom of one of the girders in the middle of a 50 meter span near one end of the bridge. The strain data is thus related to this portion of the infrastructure. Every load situated on this span will have a positive effect, with loads in the middle of the span contributing more to the strain than loads near the supports of the span. Figure 2 shows an overall plot of the measurements for a single (week)day. At the time scale of Figure 2, it is not possible to identify short-term changes in the strain level (except for notable peaks), such as individual vehicles passing over the span. However, long-term changes are clearly visible. For instance, there is a slightly curved trend of the strain baseline which slowly develops during a full day, which is due to changes in temperature, slightly affecting both the concrete and gauge properties. The sudden rise of the average strain level between 9am and 10am is caused by a traffic jam on the bridge (as verified by manual inspection of the video signal). A traffic jam involves many slowly moving vehicles, which causes high vehicle densities. This in turn produces a heavy combined load on the span, and the strain measurements record this fact accordingly. Figure 1 (left) shows a detailed plot of the traffic jam event. Short-term changes, on the other hand, can be identified when considering a narrower time window, in the order of seconds. A passing vehicle is represented
Fig. 2. One full week day of strain measurements. All y-axis units in this paper are in µm/m (µ-strain).
in the data by a bump-shaped peak, reflecting the load displacement as the car moves along the bridge’s span. Figure 1 (right) shows a time window of 22 seconds where the big peak represents a truck while the smaller ones are caused by lighter vehicles such as cars. The examples above show how different traffic events, though all interesting from a monitoring point of view, occur with different durations and features in the strain data. Our aim is to characterize the different types of traffic the bridge is subjected to by analyzing short fragments of the strain signal, in the order of several seconds. The remainder of this paper is dedicated to the clustering of such subsequences obtained by a sliding window.
3 Subsequence Clustering for Traffic Events Modeling
In this section we provide some basic definitions of the data model and introduce the rationale behind the subsequence clustering technique. We review the known pitfalls of SSC in view of the characteristics of the strain data and show how its naive application produces results affected by artifacts. We finally propose a novel distance measure for SSC designed to remove these artifacts.
3.1 Time Series and Subsequence Clustering
The data produced by a sensor of the network is a time series of uniformly sampled values. In this work, we assume there are no missing values in the stream produced by the sensors. Below, we give some basic definitions.

Definition 1 (Time Series). A time series is a sequence of values X = x1, ..., xm such that xi ∈ R and m > 0.

Definition 2 (Subsequence). A subsequence Sp,w of a time series X = x1, ..., xm is the sequence of values xp, ..., xp+w−1 such that 1 ≤ p ≤ m − w + 1 and the window length w < m.
Fig. 3. Two plots of the same data, showing the original data as a function of time (left), and a projection on two selected dimensions in w-space and the four prototypes generated by k-Means (red circles). Clearly, the sliding window technique creates a trajectory in w-space, where each loop corresponds to a bump in the original signal.
Definition 3 (Subsequences Set). The subsequences set D(X, w) = {Si,w | 1 ≤ i ≤ m − w + 1} is the set of all the subsequences extracted by sliding a window of length w over the time series X.
The subsequences set D(X, w) contains all possible subsequences of length w of a time series X. The aim of subsequence clustering is to discover groups of similar subsequences in D(X, w). The intuition is that, if there are repeated similar subsequences in X, they will be grouped in a cluster and eventually become associated with an actual event of the application domain.
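The sliding-window extraction of D(X, w) can be sketched as follows. This is a minimal illustration; the NumPy representation and the file name in the usage comment are assumptions.

import numpy as np

def subsequence_set(x, w):
    # D(X, w): all length-w subsequences S_{p,w} of the series x
    x = np.asarray(x, dtype=float)
    m = len(x)
    return np.array([x[p:p + w] for p in range(m - w + 1)])

# e.g. 4-second windows of 100 Hz strain data:
# strain = np.loadtxt("strain_sensor.txt")   # hypothetical file
# D = subsequence_set(strain, w=400)         # shape: (m - 399, 400)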
3.2 Subsequence Clustering Equals Event Detection?
Subsequence clustering is an obvious and intuitive choice for finding characteristic subsequences in time series. However, in a recent paper by Keogh et al. [4], it was shown that despite the intuitive match, SSC is prone to a number of undesirable behaviors that make it, in the view of the authors, unsuitable for the task at hand. Since then, a number of papers (e.g. [3] and [1]) have further investigated the observed phenomena, and provided theoretical explanations for some of these, leading to a serious decline in popularity of the technique. In short, the problematic behavior was related to the lack of resemblance between the resulting cluster prototypes and any subsequence of the original data. Prototype shapes that were observed were collections of smooth functions, most notably sinusoids, even when the original data was extremely noisy and angular. More specifically, when the time series were constructed from several classes of shorter time series, the resulting prototypes did not represent individual classes, but rather were virtually identical copies of the same shape, but out of phase. Finally, it was observed that the outcome of the algorithm was not repeatable, with different random initializations leading to completely different results. The unintuitive behavior of SSC can be understood by considering the nature of the subsequence set D(X, w) that is the outcome of the initial sliding window step. Each member of D(X, w) forms a point in a Euclidean w-dimensional
Fig. 4. Multiple representation of events. The left plot shows the prototypes computed by the classic k-Means. The right plot shows, in black, the portion of the data assigned to the two bump-shaped prototypes.
space, which we will refer to as w-space, illustrated in Figure 3. As each subsequence is fairly similar to its successor, the associated points in w-space will be quite close, and the members of D(X, w) form a trajectory in w-space. Figure 3 shows an example of a (smoothed) fragment of strain data, and its associated trajectory in w-space (only two dimensions shown). Individual prototypes correspond to points in w-space, and the task of SSC is to find k representative points in w-space to succinctly describe the set of subsequences, in other words, the trajectory. Figure 3 (right) also shows an example of a run of k-Means on this data. As the example demonstrates, the prototypes do not necessarily lie along the trajectory, as they often represent a (averaged) curved segment of it. So how does SSC by k-Means fare on the strain data from the Hollandse Brug? Experiments reported in Section 4 will show that not all the problematic phenomena are present in clustering results on the strain data. In general, cluster prototypes do resemble individual subsequences, although some smoothing of the signal as a result of averaging does occur, which is only logical. The relatively good behaviour can be attributed to some crucial differences between the nature of the data at hand, and that used in the experiments of for example [4,3]. Whereas those datasets typically were constructed by concatenating rather short time series of similar width and amplitude, the strain data consists of one single long series, with peaks occurring at random positions. Furthermore, the strain data shows considerable differences in amplitude, for example when heavy vehicles or traffic jams are concerned. There remains however one phenomenon that makes the regular SSC technique unsuitable for traffic event modeling: the clustering tends to show multiple representations of what is intuitively one single event (see Figure 4 for an example). Indeed, each of the two bump-shaped prototypes resembles a considerable fraction of the subsequences, while at the same time having a large mutual Euclidean distance. In other words, our notion of traffic event does not coincide with the Euclidean distance, which assigns a large distance to essentially quite similar subsequences. In the next section, we
introduce an alternative distance measure, which is designed to solve this problem of misalignment.
3.3 A Context-Aware Distance Measure for SSC
As shown in the previous section, applying SSC to the strain data with classic k-Means leads to undesirable multiple representations of the same logical event. The problem is that comparing two subsequences with the Euclidean distance does not consider the similarity of their local contexts in the time series. Below we introduce a novel distance measure which finds the best match between the two compared subsequences in their local neighborhood. Given a time series X and two subsequences S_{p,w} ∈ X and S_fixed of length w, we consider not only the Euclidean distance between S_fixed and S_{p,w}, but also between S_fixed and the neighboring subsequences, to the left and to the right, of S_{p,w}. The minimum Euclidean distance encountered is taken as the final distance value between S_{p,w} and S_fixed. Formally, given a shift factor f and a number of shift steps s, we define the neighbor subsequence indexes of S_{p,w} as:

NS = { p + (f · w · i) / s | −s ≤ i ≤ s }

The extent of data analyzed to the left and to the right of S_{p,w} is determined by the shift factor, while the number of subsequences considered in the interval is limited by the shift steps parameter. The Snapping distance is defined as:

Snapping(S_{p,w}, S_fixed) = min { Euclidean(S_{i,w}, S_fixed) | i ∈ NS }     (1)

We want to employ the Snapping distance in an SSC scheme based on k-Means. k-Means is a well-known clustering/quantization method that, given a set of vectors D = {x1, ..., xn}, aims to find a partition P = {C1, ..., Ck} and a set of centroids C = {c1, ..., ck} such that the sum of the squared distances between each xi and its associated centroid cj is minimized. The classic k-Means heuristic implementation looks for a local minimum by iteratively refining an initial random partition. The algorithm involves four steps:
1. (initialization) Randomly choose k initial cluster prototypes c1, ..., ck in D.
2. (assignment) Assign every vector xi ∈ D to its nearest prototype cj according to a distance measure. The classic k-Means uses the Euclidean distance.
3. (recalculation) Recalculate the new prototypes c1, ..., ck by computing the means of all the assigned vectors.
4. Stop if the prototypes did not change more than a predefined threshold or when a maximum number of iterations has been reached; otherwise go back to step 2.
In our SSC scheme, the set of vectors D to be clustered is the subsequences set D(X, w), where X is a time series and w the sliding window's length. In the assignment step, we employ the Snapping distance defined in Equation 1.
Fig. 5. A subsequence Sp,w is compared against the centroid Ck . The minimum Euclidean distance between Ck and the neighbor subsequences of Sp,w , including itself, is taken as a distance. Here, the best match is outlined in gray at the right of Sp,w .
Moreover, we force the initialization step to choose the random subsequences such that they do not overlap in the original time series. Figure 5 illustrates the intuition behind the Snapping distance measure in the context of k-Means clustering. In the next section we evaluate this SSC scheme on the InfraWatch strain data.
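The Snapping distance and the modified assignment step can be sketched as follows. This is a minimal illustration under assumptions: shifted start indexes are rounded to integers, windows that would fall outside the series are skipped, and the function names are invented; the recalculation step remains the usual mean of the assigned (unshifted) subsequences.

import numpy as np

def snapping_distance(x, p, w, centroid, f=0.5, s=10):
    # minimum Euclidean distance between `centroid` and the subsequences of x
    # starting at the neighbor indexes NS = { p + (f*w*i)/s | -s <= i <= s }
    x = np.asarray(x, dtype=float)
    best = np.inf
    for i in range(-s, s + 1):
        q = p + int(round(f * w * i / s))
        if 0 <= q <= len(x) - w:
            best = min(best, np.linalg.norm(x[q:q + w] - centroid))
    return best

def assign_to_centroids(x, w, centroids, f=0.5, s=10):
    # assignment step of the snapping k-Means SSC: every start position p is
    # assigned to the centroid with the smallest snapping distance
    x = np.asarray(x, dtype=float)
    labels = np.empty(len(x) - w + 1, dtype=int)
    for p in range(len(x) - w + 1):
        labels[p] = int(np.argmin([snapping_distance(x, p, w, c, f, s)
                                   for c in centroids]))
    return labels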
4 Experimental Evaluation
In this section we introduce the experimental setting and discuss the results of applying the SSC scheme defined in Section 3.3 to the strain data. We considered the following strain time series. 100Seconds was collected during the night, in a period of low traffic activity across the Hollandse Brug, and consists of 1 minute and 40 seconds of strain data sampled at 100 Hz. The series contains clear traffic events and does not present relevant drift in the strain level due to the short time span. A more substantial series, FullWeekDay, consists of 24 hours of strain measurements sampled at 100 Hz, corresponding to approximately 9 million values. The data was collected on Monday the 1st of December 2008, a day on which the Hollandse Brug was fully operational. All the traffic events expected in a typical weekday, ranging from periods of low activity to congestion due to traffic jams, are present in the data. The temperature throughout the chosen day varied between 4.9 and 7.7 degrees. Figure 2 shows an overall plot of the data. In order to run the defined k-Means SSC scheme, we need to fix a number of parameters. The window length w was chosen to take into account the structural configuration of the bridge and the sensor network. Considering that the span in question is 50 meters long, and assuming a maximum speed of 100 km/h, a typical vehicle takes in the order of 2.5 seconds to cross the span. In order to capture such events, and include some data before and after the actual event, the window length was set to 400, which corresponds to 4 seconds. The number of clusters k
Fig. 6. Improved results using the Snapping distance (see Figure 4)
directly affects how the resulting prototypes capture the variability in the data. For the 100Seconds data we found k = 3 a reasonable choice because, given its short duration, the time series does not present drift in the strain baseline and the variability in the data can be approximated by assuming three kinds of events: no traffic activity (baseline), and light and heavy passing vehicles. The FullWeekDay data, on the other hand, presents much more variability, mostly due to the drift in the measurements, which vertically translates all the events to different levels depending on the external temperature. Moreover, traffic jams cause additional underlying variability in the data. For the FullWeekDay data, we found k = 10 to be large enough to account for most of the variations in the time series that are interesting from an SHM point of view, though we will also show the result with k = 4 for comparison. The f parameter affects the size of the neighborhood of subsequences considered by the Snapping distance. As the neighborhood gets smaller, the Snapping distance converges to the Euclidean distance. A big neighborhood, on the other hand, could include subsequences pertaining to other events. We experimented with f = 0.25, f = 0.5 and f = 0.75, yielding comparable outcomes. The presented results were all computed using f = 0.5. The shift steps parameter imposes a limitation on the number of Euclidean distances to compute for each comparison of a subsequence with a centroid; we fix it to s = 10.
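Under these parameter choices, one k-Means SSC run could be driven as in the following sketch, which reuses the subsequence_set and assign_to_centroids helpers sketched in Section 3; the data file name, the random seed and the fixed iteration count are placeholders.

import numpy as np

w, k, f, s = 400, 10, 0.5, 10                  # window, clusters, shift factor, shift steps
strain = np.loadtxt("fullweekday_strain.txt")  # hypothetical 100 Hz strain series

D = subsequence_set(strain, w)
rng = np.random.default_rng(0)
# non-overlapping random initial prototypes, as required by the scheme
starts = rng.choice(np.arange(0, len(strain) - w, w), size=k, replace=False)
centroids = np.array([strain[p:p + w] for p in starts])

for _ in range(10):                            # fixed number of iterations
    labels = assign_to_centroids(strain, w, centroids, f, s)
    centroids = np.array([D[labels == j].mean(axis=0) if np.any(labels == j)
                          else centroids[j] for j in range(k)])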
4.1 Results
Given the chosen parameters, we ran both the classic k-Means SSC and the Snapping distance variant on the 100Seconds and FullWeekDay data. Figure 6 depicts the results obtained by applying the k-Means SSC based on the Snapping distance to the 100Seconds data. Compared with the results obtained using the Euclidean distance on the same data (Figure 4), the big bump-shaped peak, caused by a heavy passing vehicle, is now represented by a single prototype, while the remaining prototypes model lighter passing vehicles and the strain baseline (whose assignments are not shown in the picture). Figure 7 shows the resulting prototypes obtained from the FullWeekDay data for k = 4 (left) and k = 10 (right). The prototypes computed for k = 4 by
Fig. 7. Prototypes produced by applying k-Means respectively with Euclidean and Snapping distance on the FullWeekDay data, for both k = 4 (left) and k = 10 (right)
Fig. 8. Two examples of events represented by individual prototypes. The central point of an associated subsequence is drawn in black.
both the classic and revised k-Means SSC are really similar. Setting k = 4 does not account for all the variability in the FullWeekDay data and the resulting prototypes try to represent the different strain levels more than the actual events. In this case, the effect of considering the neighborhood of each subsequence, as done by the Snapping distance, is dominated by the presence of large differences in the strain values. The prototypes for k = 10 better describe the variability in the data and represent both the different strain levels as well as the individual events (peaks). In this case, the classic k-Means SSC introduces double representations of the same logical events. This is avoided in our revised solution, thus better representing the variability in the data: every prototype now models a different strain level or event, as shown in Figure 7 (right). Although Figure 7 gives an idea of the differences between the prototypes produced by the classic k-Means SSC and the Snapping version, it does not show how the data is subdivided across them. Figure 8 shows two examples, at different time scales, of events associated to a single prototype. The plot on the left shows a heavy passing vehicle (in black), while the plot on the right shows all the subsequences considered part of a traffic jam event.
Fig. 9. MapReduce implementation of our clustering method. Every map or reduce task can be run on any available computing core.
4.2 A Scalable Implementation
Given the amount of data generated by the sensor network, it is important to have a very scalable implementation of our clustering method. Therefore, we have developed a parallelized version based on the MapReduce framework using Hadoop [6]. Indeed, the main bottleneck in clustering lies in calculating the (snapping) distances between every subsequence and the cluster centers, which need to be read from disk. With MapReduce, we can distribute the data reads over a cluster of machines. An overview of the resulting system is shown in Figure 9. In the first stage, we ‘massage’ the data to prepare it for the clustering phase. Since the computing nodes work independently, they need to be passed complete subsequences, including the lead-in and lead-out, in single records. First, we read the measurements of a single sensor for every timestamp, and its value is mapped to the initial timestamp ts of every subsequence in which it occurs. Then, all measurements for a specific ts are reduced to a complete subsequence. In the clustering phase, we first select k random centroids. Then, each subsequence is mapped to the nearest centroid, using the snapping distance, together with the combined points mapped by the same mapper. The reducer receives all points mapped to a certain cluster and calculates the new cluster centroid. This is repeated n times or until the clusters converge. The k-Means implementation is an adapted version of k-Means found in the Mahout library.2 We evaluated this implementation on a relatively small cluster of 5 quad-core computing nodes. Compared to a sequential implementation that loaded all the FullWeekDay data in memory, it yielded a 6-fold speedup in spite of the extra I/O overhead. Moreover, it scales linearly, even slightly sub-linearly, to time series of several months. For example, one month of data was clustered (using 10 iterations) in less than 14 hours, and can be sped up further by simply adding more nodes. 2
Mahout - Scalable Machine Learning Library. http://mahout.apache.org
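The per-iteration map and reduce logic can be sketched in plain Python as follows. This only emulates the Hadoop/Mahout implementation described above: the record layout produced by the data-massage stage, the approximation of the snapping search by trying every offset inside a record, and the function names are assumptions.

import numpy as np
from collections import defaultdict

def map_assign(record, centroids, w):
    # Map task of one clustering iteration. `record` is (start_ts, values),
    # where `values` holds the window plus its lead-in/lead-out.
    start_ts, values = record
    values = np.asarray(values, dtype=float)
    offsets = range(len(values) - w + 1)
    dists = [min(np.linalg.norm(values[q:q + w] - c) for q in offsets)
             for c in centroids]
    mid = (len(values) - w) // 2
    return int(np.argmin(dists)), values[mid:mid + w]

def reduce_centroids(mapped_pairs, k, old_centroids):
    # Reduce task: average all windows assigned to each cluster; keep the
    # old centroid if a cluster received no points.
    groups = defaultdict(list)
    for cluster_id, window in mapped_pairs:
        groups[cluster_id].append(window)
    return np.array([np.mean(groups[j], axis=0) if groups[j] else old_centroids[j]
                     for j in range(k)])

# One iteration: mapped = [map_assign(r, centroids, w) for r in records]
#                centroids = reduce_centroids(mapped, k, centroids)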
5 Conclusion
In this paper we have focused on the problem of identifying traffic activity events in strain measurements, produced by a sensor network deployed on a highway bridge. Characterizing the bridge’s response to various traffic events represents an important step in the design of a complete SHM solution, as it will permit implementations of real-time classification or anomaly discovery techniques. The proposed solution is based on subsequence clustering, a technique shown to be prone to undesired behaviors and whose outcome is strongly dependent on the kind of data it is applied to. In view of this, we studied SSC in relation to the features of the strain data, showing that only some of the documented pitfalls (i.e., multiple representations) occur in our case. To solve this, we introduced a context-aware distance measure between subsequences, which also takes the local neighborhood of a subsequence into account. Employing this Snapping distance measure, we showed that SSC by k-Means returns a correct modeling of the traffic events. Acknowledgements. The InfraWatch project is funded by the Dutch funding agency STW, under project number 10970. We thank SARA, the Dutch HPC Center, for providing support in the use of their experimental Hadoop service.
References
1. Fujimaki, R., Hirose, S., Nakata, T.: Theoretical Analysis of Subsequence Time-Series Clustering from a Frequency-Analysis Viewpoint. In: Proceedings of SDM 2008, pp. 506–517 (2008)
2. Höppner, F.: Time Series Abstraction Methods – A Survey. In: Informatik bewegt: Informatik 2002 – 32. Jahrestagung der Gesellschaft für Informatik e.V. (GI), pp. 777–786. GI (2002)
3. Idé, T.: Why Does Subsequence Time-Series Clustering Produce Sine Waves? In: Fürnkranz, J., Scheffer, T., Spiliopoulou, M. (eds.) PKDD 2006. LNCS (LNAI), vol. 4213, pp. 211–222. Springer, Heidelberg (2006)
4. Keogh, E., Lin, J.: Clustering of Time-Series Subsequences is Meaningless: Implications for Previous and Future Research. Knowledge and Information Systems 8(2), 154–177 (2005)
5. Knobbe, A., Blockeel, H., Koopman, A., Calders, T., Obladen, B., Bosma, C., Galenkamp, H., Koenders, E., Kok, J.: InfraWatch: Data Management of Large Systems for Monitoring Infrastructural Performance. In: Cohen, P.R., Adams, N.M., Berthold, M.R. (eds.) IDA 2010. LNCS, vol. 6065, pp. 91–102. Springer, Heidelberg (2010)
6. White, T.: Hadoop: The Definitive Guide. O'Reilly, Sebastopol (2009)
Supervised Learning in Parallel Universes Using Neighborgrams Bernd Wiswedel and Michael R. Berthold Department of Computer and Information Science, University of Konstanz, Germany
Abstract. We present a supervised method for Learning in Parallel Universes, i.e. problems given in multiple descriptor spaces. The goal is the construction of local models in individual universes and their fusion to a superior global model that comprises all the available information from the given universes. We employ a predictive clustering approach using Neighborgrams, a one-dimensional data structure for the neighborhood of a single object in a universe. We also present an intuitive visualization, which allows for interactive model construction and visual comparison of cluster neighborhoods across universes.
1 Introduction
Computer-driven learning techniques are based on a suitable data-representation of the objects being analyzed. The classical concepts and techniques that are typically applied are all based on the assumption of one appropriate unique representation. This representation, typically a vector of numeric or nominal attributes, is assumed to sufficiently describe the underlying objects. In many application domains, however, various different descriptions for an object are available. These different descriptor spaces for the same object domain typically reflect different characteristics of the underlying objects and as such often even have their own, unique semantics and can and should therefore not be merged into one descriptor. As most classical learning techniques are restricted to learning in exactly one descriptor space, the learning in the presence of several object representations is in practice often solved by either reducing the analysis to one descriptor, ignoring the others; by constructing a joint descriptor space (which is often impossible); or by performing independent analyses on each space. All these strategies have limitations because they either ignore or obscure the multiple facets of the objects (given by the descriptor spaces) or do not respect overlaps. This often makes them inappropriate for practical problems. As a result Learning in Parallel Universes [13] has emerged as a novel learning scheme that encompasses the simultaneous analysis of all given descriptor spaces, i.e. universes. It deals with the concurrent generation of local models for each universe, whereby these models cover local structures that are unique for individual universes and at the same time can also cover other structures that span multiple universes. The resulting global model outperforms the above mentioned schemes and often also provides new insights with regard to overlapping as well as universe-specific structures. Although the learning task itself can be both supervised (e.g. building a classifier) and
unsupervised (e.g. clustering), we concentrate in the following on supervised problem scenarios. In this paper we discuss an extension to the Neighborgram algorithm [4,5] for learning in parallel universes. It is a supervised learning method that has been designed to model small or medium sized data sets or to model a (set of) minority class(es). The latter is commonly encountered in the context of activity prediction for drug discovery. The key idea is to represent an object’s neighborhoods, which are given by the various similarity measures in the different universes, by using neighborhood diagrams (so-called neighborgrams). The learning algorithm derives cluster candidates from each neighborgram and ranks these based on their qualities (e.g. coverage). The model construction is carried out in a sequential covering-like manner, i.e. starting with all neighborgrams and their cluster candidates, taking the numerically best one, adding it as cluster and proceeding with the remaining neighborgrams while ignoring the already covered objects. We will present and discuss extensions to this algorithm that reward clusters which group in different universes simultaneously, and thus respect overlaps. As we will see this can significantly improve the classification accuracy over just considering the best cluster at a time. Another interesting usage scenario of the neighborgram data structure is the possibility of displaying them and thus involving the user in the learning process. Using the visualization techniques described in [4] in a grid view, which aligns the different universes column by column, the user can inspect the different neighborhoods across the available universes and assess possible structural overlaps.
2 Related Work
Subspace Clustering. Subspace clustering methods [11] operate on a single, typically very high-dimensional input space. The goal is to identify regions of the input space (a set of features) which exhibit a high similarity on a subset of the objects, or, more precisely, the objects' data representations. Subspace clustering methods are commonly categorized into bottom-up and top-down approaches. Bottom-up approaches start by considering density on individual features and then subsequently merge features/subspaces into subspaces of higher dimensionality which still retain a high density. The most prominent example is CLIQUE [2], which partitions the input space using a static grid and then merges grid elements if they meet a given density constraint. Top-down techniques initially consider the entire feature space and remove irrelevant features from clusters to compact the resulting subspace clusters. One example in this category is COSA [10], a general framework for this type of subspace clustering; it uses weights to encode the importance of features to subspace clusters. These weights influence the distance calculation and are optimized as part of the clustering procedure. Subspace clustering methods share with learning in parallel universes that they try to respect locality also in terms of the feature space, although they do it on a different scale. They refer to individual features, whereas we consider universes, i.e. semantically meaningful descriptor spaces, which are given a priori.
Multi-View Learning. In multi-view learning [12] one assumes a similar setup as in learning in parallel universes, i.e. the availability of multiple descriptor spaces
(universes or views). This learning concept has a different learning scope since it expects all universes/views to share the same structure. Therefore each individual universe would suffice for learning if enough training data were available. One of the first works in the multi-view area was done by Blum and Mitchell [6], who concentrate on the classification of web pages. They introduce co-training in a semi-supervised setting, i.e. they have a relatively small set of labeled and a rather large set of unlabeled instances. The web pages are described using two different views, one based on the actual content of the web site (bag of words), the other one based on anchor texts of hyperlinks pointing to that site – both views are assumed to be conditionally independent. Co-training creates a model on each view, whereby the training set is augmented with data that was labeled with high confidence by the model of the respectively other view. This way the two classifiers bootstrap each other. There is also a theoretical foundation for the co-training setup: It was shown that the disagreement rate between two independent views is an upper bound for the error rate of either hypothesis [9]. This principle already highlights the identical structure property of all views as the base assumption of multi-view learners, which is contrary to the setup of learning in parallel universes, where some information may only be available in a subset of universes. There exist a number of other similar learning setups. These include ensemble learning, multi-instance-learning and sensor fusion. We do not discuss these methods in detail but all have either a different learning input (no a-priori defined universes) or a distinct learning focus (no partially overlapping structures across universes while retaining universe semantics).
3 Learning in Parallel Universes
Learning in parallel universes refers to the problem setting of multiple descriptor spaces, whereby single descriptors are not sufficient for learning. Instead we assume that the information is distributed among the available universes, i.e. individual universes may only explain a part of the data. We propose a learning concept, which overcomes the limitations outlined above by a simultaneous analysis of all available universes. The learning objective is twofold. First, we aim to identify structures that occur in only one or few universes (for instance groups of objects that cluster well in one universe but not in others). Secondly we want to detect and facilitate overlapping structures between multiple, not necessarily all, universes (for instance clusters that group in multiple universes). The first aim addresses the fact that a universe is not a complete representation of the underlying objects, that is, it does not suffice for learning. The task of descriptor generation is in many application domains a science by itself, whereby single descriptors (i.e. universes) carry a semantic and mirror only certain properties of the objects. The second aim describes cases, in which structures overlap across different universes. For clustering tasks, for instance, this would translate to the identification of groups of objects that are self-similar in a subset of the universes. Note that in order to detect these overlaps and to support their formation it is necessary for information to be exchanged between the individual models during their construction. This is a major characteristic of learning in parallel universes, which cannot be realized with any of the other schemes outlined in the previous section.
3.1 Application Scenarios
The challenge of learning in parallel universes can be found in almost all applications that deal with the analysis of complex objects. We outline a few of these below.
Molecular data. A typical example is the activity prediction of drug candidates, typically small molecules. Molecules can be described in various ways, which potentially focus on different aspects [3]. A description can be as simple as a vector of numerical properties such as molecular weight or number of rotatable bonds. Other descriptors concentrate on more specific properties such as charge distribution, 3D conformation or structural information. Apart from such semantic differences, descriptors can also be of different types, including vectors of scalars, chemical fingerprints (bit vectors) or graph representations. These diverse representations make it impossible to simply join the descriptors and construct a joint feature space.
3D object data. Another interesting application is the mining of 3D objects [8]. The literature lists three main categories of descriptors: (1) image-based (features describing projections of the object, e.g. silhouettes and contours), (2) shape-based (like curvature measures) and (3) volume-based (partitioning the space into small volume elements and then considering elements that are completely occupied by the object or contain parts of its surface area, for instance). Which of these descriptions is appropriate for learning mostly depends on the class of object being considered. For instance, image- or volume-based descriptors fail to model a class of humans taking different poses, since their projections or volumes differ, whereas shape-based descriptors have proven to cover this class nicely.
Image data. There are also manifold techniques available to describe images. Descriptors can be based on properties of the color or gray value distribution; they can encode texture information or properties of the edges. Other universes can reflect user annotations such as titles that are associated with the image. Also in this domain, whether two images are similar or not sometimes depends on the chosen descriptor.
4 Neighborgrams in Parallel Universes
In the following we describe the neighborgram data structure [4], discuss the fully automated clustering algorithm and highlight the advantages of the visualization. We consider a set of objects i, each object being described in U parallel universes, 1 ≤ u ≤ U. The object representation for object i in universe u is x_{i,u}. Each object is assigned a class c(i). We further assume appropriate definitions of distance functions d^(u)(x_{i,u}, x_{j,u}) for each universe.
4.1 Neighborgram Data Structure
We define a neighborgram as the list of the R nearest neighbors of an object i in a universe u:

NG_i^(u) = ⟨ x_{l_{i,1}^(u), u}, . . . , x_{l_{i,R}^(u), u} ⟩
The subscript l_{i,r}^(u) reflects the ordering of the neighbors according to their distance to the centroid. For the sake of simplicity we abbreviate l_{i,r}^(u) with l_r and also omit the second subscript u, always accounting for the fact that the list contains objects referring to a centroid object i in a specific universe u. The ordering implies for any neighborgram l_{i,1}^(u) = l_1 = i, since the centroid is closest to itself. This list is not necessarily a unique representation of the neighborhood, as objects may have an equal distance to the centroid. However, this is not a limitation of the algorithm. Neighborgrams are constructed for all objects of interest, i.e. either for all objects of a small- or medium-sized data set or for the objects belonging to one or more target classes, in all universes. As an illustrative example consider the data set in Figure 1, which is given in a single universe. This universe is 2-dimensional (shown on the left) with two different classes (empty and filled circles). The neighborgrams for the three numbered empty objects are shown on the right. We use the Manhattan norm as distance function, i.e. the distance between two objects is the number of steps on the grid (for instance, the distance between 1 and 3 is 3: two steps in the horizontal and one in the vertical direction). The corresponding list representations are then NG_1 = ⟨1, •, 2, 3, . . .⟩, NG_2 = ⟨2, 3, . . . , 1, . . .⟩ and NG_3 = ⟨3, 2, . . . , 1, . . .⟩, where • denotes an object of the filled class and the elided positions are occupied by further filled-class objects. We will use these neighborgrams in the following to introduce basic measures that help us to derive cluster candidates and to determine numeric quality measures.
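A minimal sketch of how such neighborgrams could be built is given below. The representation (a dictionary from object ids to universe-specific vectors, one callable distance function per universe) and the function names are assumptions, not the authors' implementation.

def neighborgram(i, objects, dist, R):
    # ordered list of the R nearest neighbors of object i in one universe;
    # `objects` maps object ids to their representation in that universe and
    # `dist` is the universe's distance function; the centroid i comes first
    order = sorted(objects, key=lambda j: dist(objects[i], objects[j]))
    return order[:R]

def all_neighborgrams(objects_per_universe, labels, dists, target_classes, R):
    # one neighborgram per (target-class object, universe) pair
    ngs = {}
    for u, objects in enumerate(objects_per_universe):
        for i in objects:
            if labels[i] in target_classes:
                ngs[(i, u)] = neighborgram(i, objects, dists[u], R)
    return ngs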
Fig. 1. Sample input space with neighborgrams for the objects 1, 2 and 3
4.2 Neighborgram Clustering Algorithm
The basic idea of the learning algorithm is to consider each neighborgram in each universe as a potential (labeled) cluster candidate. These candidates are derived from the list representation, are based on the class distribution around the centroid, and are assigned the class of their respective centroids. Good clusters will have a high coverage, i.e. many objects of the same class as the centroid in the close vicinity, whereby cluster boundaries are represented by the distance of the farthest neighbor satisfying some purity constraint. The basic clustering algorithm will iteratively add the cluster candidate with the highest coverage to the set of accepted clusters, remove the covered objects
from consideration, and start over by re-computing the coverage of the remaining candidates. The interesting point here is that this basic algorithm is not restricted to learning in a single universe. In fact, this basic algorithm is free to choose the best cluster candidate from all universes, thereby directly building a cluster set with clusters of different origins, and hence a model for parallel universes. Before we provide the algorithm in pseudo-code, let us define some measures that are used to define a cluster candidate and its numerical quality. Each of the following values is assigned to a single neighborgram (but computed for all):
– Coverage Γ_i^(u)(r): The coverage describes how many objects of the same class as the centroid c(i) are within a certain length r in the neighborgram for object i in universe u. These objects are covered by the neighborgram:

  Γ_i^(u)(r) = | { x_{l_{r'}} ∈ NG_i^(u) | 1 ≤ r' ≤ r ∧ c(l_{r'}) = c(i) } |

  For example, the coverages for NG_1 in Figure 1 are Γ_1(1) = 1, Γ_1(2) = 1, Γ_1(3) = 2 and Γ_1(4) = 3.
– Purity Π_i^(u)(r): The purity denotes the ratio of objects of the correct class (i.e. the same class as the centroid) to all objects that are contained within a certain neighborhood of length r:

  Π_i^(u)(r) = | { x_{l_{r'}} ∈ NG_i^(u) | 1 ≤ r' ≤ r ∧ c(l_{r'}) = c(i) } | / | { x_{l_{r'}} ∈ NG_i^(u) | 1 ≤ r' ≤ r } |

  For instance, object 1 in Figure 1 has purities Π_1(1) = 1, Π_1(2) = 1/2, Π_1(3) = 2/3 and Π_1(5) = 3/5.
– Optimal Length Λ_i^{(u)}: The optimal length is the maximum length for which a given purity threshold p_min is still met. In practical applications it has proven reasonable to constrain the optimal length to be at least some minimum length value r_min in order to avoid clusters that cover only few objects:

Λ_i^{(u)}(p_min, r_min) = max{ r | r_min ≤ r ≤ |NG_i^{(u)}| ∧ Π_i^{(u)}(r) ≥ p_min }.

Note that this term may be undefined, and in practice for noisy data it often is. In the example in Figure 1 the optimal length values for a minimum length r_min = 1 (unconstrained) are Λ_2(1, 1) = 2 and Λ_2(2/3, 1) = 3.

Apart from the specification of a minimum purity p_min and a minimum cluster size r_min, the user has to specify a minimum overall coverage ψ_min. If the sum of the coverage values of all accepted clusters is greater than this value, the algorithm terminates. The basic algorithm is outlined in Algorithm 1. The initialization starts with the construction of neighborgrams for all objects of certain target classes (minority class(es) or all classes) in line 1. Prior to the learning process itself, it determines a cluster candidate for each
Algorithm 1. Basic Neighborgram Clustering Algorithm
1: ∀ i, u: c(i) is target class ⇒ construct NG_i^{(u)}
2: ∀ NG_i^{(u)}: compute Λ_i^{(u)} = Λ_i^{(u)}(p_min, r_min)
3: s ← 0                              /* accumulated coverage */
4: N G ← ∅                            /* (empty) result set */
5: î ← undefined                      /* current best neighborgram */
6: repeat
7:   ∀ NG_i^{(u)}: compute Γ_i^{(u)}(Λ_i^{(u)})
8:   (î, û) ← arg max_{(i,u)} Γ_i^{(u)}(Λ_i^{(u)})
9:   if î is defined then
10:    ω̂ ← Λ_î^{(û)}                  /* optimal length of the best neighborgram */
11:    M_covered ← { l_{î,r} | 1 ≤ r ≤ ω̂ ∧ c(l_{î,r}) = c(î) }
12:    ∀ NG_i^{(u)}: NG_i^{(u)} ← NG_i^{(u)} \ { x_{j,u} | j ∈ M_covered }
13:    s ← s + Γ_î^{(û)}(ω̂)
14:    N G ← N G ∪ { (NG_î^{(û)}, d̂) }
15:  end if
16: until (s ≥ ψ_min) ∨ (î is undefined)
17: return N G
neighborgram, which is based on user-defined values for the minimum purity and the minimum size (line 2). The learning phase is shown in lines 6 to 16: it iteratively adds the cluster candidate with the highest (remaining) coverage (line 8) to the cluster set N G, while removing the covered objects from all remaining neighborgrams. The algorithm terminates if either no more cluster candidates satisfy the search constraints (i.e. they miss the minimum size constraint) or the accumulated coverage is larger than the required coverage ψ_min. This basic algorithm learns a set of clusters distributed over different universes, whereby each cluster is associated with a class (the class of its centroid). The prediction of unlabeled data is carried out by a best-matching approach, i.e. by identifying the cluster that covers the query object best. In case more than one cluster covers the query and these clusters are assigned different classes, the final class can be determined using a majority vote of the clusters.

Universe Interaction. The basic algorithm inherently enables simultaneous learning in different universes by considering all neighborgrams as potential cluster candidates and removing covered objects in all universes. However, this is a rather weak interaction, as it does not respect overlaps between universes. These are often of special interest for mainly two reasons: firstly, they give new insights regarding recurrent structures in different universes and help the expert to better understand relations between universes; secondly, they may help to build a more robust model. The basic algorithm described above penalizes such overlapping structures since it removes covered objects from all remaining neighborgrams. If two clusters group equally in two universes, the algorithm would choose any one of the two clusters and remove the objects. The non-accepted
cluster will never be considered as a good cluster candidate again, because all its supporting objects have already been removed. In order to take these overlaps into account, we modify the algorithm to detect overlapping clusters during the clustering process. More formally, we define the relative overlap υ : NG × NG → [0, 1] of two neighborgrams as

υ(NG_i^{(u)}, NG_j^{(y)}) = |M_i^{(u)} ∩ M_j^{(y)}| / |M_i^{(u)} ∪ M_j^{(y)}|,
whereby M_i^{(u)} denotes the set of objects of the same class as the centroid that are covered by the cluster candidate derived from the neighborgram NG_i^{(u)} according to the user settings of minimum purity and size. This overlap is the cardinality of the intersection of these two sets divided by the size of their union. Two neighborgrams in different universes representing the same cluster will therefore have a high overlap. Using the above formula we can numerically express the overlap between cluster candidates. We modify the basic algorithm to also add to the cluster set those candidates that have the highest overlap with the currently best cluster candidate in each universe, unless their overlap is below a given threshold υ_min. That is, in each iteration (refer to lines 6–16 in Algorithm 1) we determine the cluster with the highest remaining coverage in all universes. Before removing the covered objects from further consideration we identify the cluster candidates in all remaining (i.e. non-winner) universes with the highest overlap with respect to the current winner. For each of these overlapping cluster candidates we test whether the overlap is at least υ_min and, if so, also accept it as a cluster. This strategy of course increases the number of clusters in the final model, but it also improves the prediction quality on test data considerably, as we will see later.

Weighted Coverage. Another beneficial modification to the algorithm is the usage of a different cluster representation by means of gradual degrees of coverage. In the previous sections we used sharp boundaries to describe a cluster: an object is either covered by a cluster or not, independent of its distance to the cluster centroid. By using a weighted coverage scheme that assigns high coverage (→ 1) to the objects in the cluster center and low values (→ 0) to those at the cluster boundaries, we obtain a much more natural cluster representation [5]. As weighting scheme we use in the following a linear function; the degree of coverage μ_i^{(u)}(j) of an object j with respect to the cluster candidate for a neighborgram NG_i^{(u)} with centroid x_{i,u} in universe u is:

μ_i^{(u)}(j) = (d̂ − d^{(u)}(x_{i,u}, x_{j,u})) / d̂   if d^{(u)}(x_{i,u}, x_{j,u}) ≤ d̂,   and 0 otherwise,

whereby d̂ represents the distance of the farthest object covered by the neighborgram according to the optimal length Λ_i^{(u)}. Note that due to the gradual coverage the algorithm needs to be slightly adapted: the coverage of a cluster is no longer a simple count of the contained objects but an accumulation of their weighted coverage. Also the removal of objects from consideration needs to be changed in order to reflect the
partial coverage of objects. These changes are straightforward and therefore we omit a repeated listing of the modified algorithm here.
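To illustrate how these quantities fit together, here is a small Python sketch (our own illustration, not the authors' implementation) of coverage, purity, optimal length, the relative overlap of two cluster candidates, and the linear weighted-coverage degree; a neighborgram ng is assumed to be a distance-ordered list of (object id, class, distance) triples as in the earlier sketch.

def coverage(ng, r):
    # number of objects of the centroid's class among the r nearest neighbors
    centroid_class = ng[0][1]
    return sum(1 for (_, c, _) in ng[:r] if c == centroid_class)

def purity(ng, r):
    return coverage(ng, r) / r

def optimal_length(ng, p_min, r_min):
    # largest depth r >= r_min whose purity still reaches p_min; None if no such r
    best = None
    for r in range(r_min, len(ng) + 1):
        if purity(ng, r) >= p_min:
            best = r
    return best

def covered_set(ng, p_min, r_min):
    # objects of the centroid's class covered by the derived cluster candidate
    r = optimal_length(ng, p_min, r_min)
    if r is None:
        return set()
    centroid_class = ng[0][1]
    return {j for (j, c, _) in ng[:r] if c == centroid_class}

def overlap(set_i, set_j):
    # relative overlap of two cluster candidates (intersection over union)
    union = set_i | set_j
    return len(set_i & set_j) / len(union) if union else 0.0

def weighted_coverage_degree(dist, d_hat):
    # linear membership: 1 at the centroid, 0 at the cluster boundary d_hat
    return (d_hat - dist) / d_hat if dist <= d_hat else 0.0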
5 A k-Nearest Neighbor Classifier for Parallel Universes

Before we present our results, we shall briefly discuss an extension of the k-nearest neighbor approach to parallel universes, which we will use to compare our results against. It is based on the idea presented in [7], whose authors use a modified distance measure to perform 3D object retrieval. Similar to learning in parallel universes, they are given data sets in multiple descriptor spaces. The distance between a query object and an object in the training set is composed of their distances in the different universes. The individual distance values are weighted by the κ-entropy impurity, which is the class entropy of the κ nearest neighbors in the (labeled) training set in a universe around the query object, i.e. the object to be classified. The entropy impurity will be 0 if all κ neighbors in the training set have an equal class attribute, and accordingly larger if the neighborhood contains objects of different classes. If imp^{(u)}(q, κ) denotes the κ-entropy impurity in universe u for a query object q, then the accumulated overall distance δ(q, i) between q and an object i is [7]:
δ(q, i) = Σ_{u=1}^{U} [ 1 / (1 + imp^{(u)}(q, κ)) ] · d^{(u)}(x_{q,u}, x_{i,u}) / dmax_q^{(u)} .
Note the term dmax_q^{(u)} in the denominator of the distance coefficient: it is the maximum distance between q and the objects in the training set. It is used to overcome normalization problems that arise when accumulating distances from different domains/universes – a problem that the neighborgram approach fortunately does not suffer from, as it does not compare distances across universes. We use the above distance function in a k-nearest neighbor algorithm in the following section, as it has proven to be appropriate for the 3D data set being analyzed [7]. However, its general applicability to parallel universe problems is questionable for mainly three reasons: (1) it does not scale to larger problem sizes (specifically because of the query-based nearest neighbor searches in all universes), (2) it has the above-mentioned normalization problem (the normalization factor depends heavily on the data set), and (3) it does not produce an interpretable model in the form of rules or labeled clusters (as the neighborgram approach does).
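For reference, a minimal Python sketch of this aggregated, impurity-weighted distance (our reading of the formula above; the data-structure layout and variable names are our own assumptions):

import math
from collections import Counter

def entropy_impurity(labels):
    # class entropy (in bits) of the kappa nearest training neighbors
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def aggregated_distance(q, i, universes, kappa):
    # universes: one dict per universe with a distance function 'dist', the
    # query objects 'query', the training objects 'train' and their 'label'
    total = 0.0
    for u in universes:
        dists = {j: u["dist"](u["query"][q], x) for j, x in u["train"].items()}
        d_max = max(dists.values())
        nearest = sorted(dists, key=dists.get)[:kappa]
        imp = entropy_impurity([u["label"][j] for j in nearest])
        total += (1.0 / (1.0 + imp)) * dists[i] / d_max
    return total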
6 Results

We use a data set of 3D objects to demonstrate the practical usefulness of the presented neighborgram approach. The data set contains descriptions of 3D objects given in different universes, which cover image-, volume- and shape-based properties [1,7]. There are a total of 292 objects, which were manually classified into 17 different classes such as airplanes, humans, swords, etc. The objects are described by means of 16 different universes, whose dimensionality varies from 31 to more than 500. The number of
Fig. 2. Misclassification counts for sharp (0/1 coverage) vs. soft cluster boundaries (weighted coverage), plotted over the minimum purity p_min (υ_min = 0.8)
objects per class ranges from 9 to 56. The descriptors mainly fall into three categories. The image-based descriptors extract properties from an object's projections (typically after normalizing and aligning the object); they typically encode information regarding an object's silhouette or depth. There are 6 universes in this category. Volume-based descriptors reflect volumetric properties of an object, for instance the distribution of voxels (small volume elements) of a unit cube that are occupied by the object. There are 5 universes of this type. Finally, shape-based descriptors encode surface and curvature properties, e.g. based on the center points of the objects' faces (the centers of the different polygons describing an object). The remaining 5 universes are of this type. Similar to the results presented in [7], we use the Manhattan norm on the unnormalized attributes in each universe and perform 2-fold cross-validation to determine error rates (we list absolute error counts for the 292 objects to be classified below). We first ran a k-nearest neighbor approach to determine a reference value for the achievable error rate. We used the aggregated distance measure presented in Section 5 and tested different settings. The smallest error we could achieve was 40 misclassifications for κ = 3 (to calculate a universe's κ-entropy impurity) and k = 2 (nearest neighbor parameter), which matches the findings of [7]. In a first experiment with the neighborgram clustering approach we tested the effect of the weighted coverage approach and compared it to the error rates obtained when using a sharp cluster representation. We varied the minimum purity parameter p_min from 0.7 to 1.0 and set an overlap threshold υ_min = 0.8. A minimum size r_min for a cluster was not set, in order to respect underrepresented classes. Figure 2 shows the results. The 0/1 coverage curve indicates the error rates when using a sharp cluster representation, i.e. even objects at the cluster boundaries are fully covered. The weighted coverage approach improves the prediction quality considerably and yields error rates comparable to the k-nearest neighbor method. Note that using the weighted coverage approach also increases the total number of clusters in the final model. If there are no further constraints regarding the termination criterion or minimum cluster size, the cluster count can reach up to 600 clusters (from a total of 16 · 292 = 4,672 cluster candidates). In another experiment we evaluated the impact of identifying overlapping clusters across universes and adding those to the cluster set. This experiment compares the
Fig. 3. Misclassification rates and cluster counts using the Parallel Universe extension compared to models built on individual universes (curves: SIL, DBF, ParUni (basic), ParUni (overlap); x-axis: minimum purity p_min; υ_min = 0.6)
basic algorithm shown in Algorithm 1 with the extension discussed in Section 4.2 (using weighted coverage in either case). The results are summarized in Figure 3 along with the results of the two best single-universe classifiers. These were built in the universes DBF (depth buffer) and SIL (silhouette), both of which are image-based descriptors. The basic algorithm in parallel universes already outperforms the single-universe classifiers in terms of both classification accuracy and the number of clusters used. When additionally taking overlapping clusters into account (represented by the curve labeled "ParUni (overlap)"), the error rate drops considerably, showing the advantage of having some redundancy across universes in the cluster set. However, these re-occurring structures come at the price of an increased number of clusters, which is on the order of 500–600 (omitted in the graph). This suggests that the basic algorithm is suitable if the focus is on building an understandable model with possibly only a few clusters, whereas the enhanced algorithm with overlap detection may be appropriate when the focus lies on building a classifier with good prediction performance.
Neighborgram Visualization. Another advantage of the neighborgram data structure is the ability to visualize neighborgrams and thus allow the user to visually assess a cluster's quality and potential overlaps across universes. Figure 4 shows an example. A row represents the neighborgrams for one object in four different universes (two image-based (DBF & SIL), one shape-based (SSD) and one volume-based (VOX)). We use the same visualization technique as in the illustrative example in Figure 1, i.e. a distance plot, whereby the vertical stacking is only used to allow an individual selection of points. The plots show the 100 nearest neighbors; objects of the same class as the centroid are shown in dark grey and objects of all other classes in light grey. Objects of a (semi-automatically) selected cluster are additionally highlighted in order to see their occurrence also in the respective other neighborgrams/universes. 2D depictions of these objects are also shown at the bottom of the figure, whereby we manually expanded the cluster to also cover some conflicting objects (shown at the bottom right). Figure 4 shows the largest cluster in the DBF universe, which covers the class "swords". Note that this cluster also groups in the SIL universe, which is also image-based, though there seems to be no grouping in the shape- and volume-based descriptors SSD and VOX. In contrast, clusters of the class "humans" form nicely in the shape-based universe SSD. There are more interesting clusters in this data set, for instance larger groups of cars, which cluster well in the DBF and VOX universes, or a group of weeds, which only clusters in the volume-based descriptor space. We do not show these clusters in separate figures due to space constraints. However, this demonstrates nicely that, depending on the type of object, certain descriptors, i.e. universes, are better suited to group them. These results emphasize that learning in parallel universes is an important and well-suited concept when dealing with such types of information.
Fig. 4. The largest cluster in universe DBF covers objects of class “sword” (highlighted by border). 2D depictions of the covered objects are shown at the bottom. The cluster was manually expanded to cover also two conflict objects of class “spoon” (shown at bottom right).
7 Summary

We presented a supervised method for learning in parallel universes, i.e. for the simultaneous analysis of multiple descriptor spaces, using neighborgrams. We showed that by using an overlap criterion for clusters between universes we can considerably improve the prediction accuracy. Apart from using neighborgrams as an underlying basis to learn a classification model, they can also be used as a visualization technique, either to involve the user in the clustering process or to allow for a visual comparison of different universes.
References
1. Konstanz 3D model search engine (2008), http://merkur01.inf.uni-konstanz.de/CCCC/ (last access March 20, 2009)
2. Agrawal, R., Gehrke, J., Gunopulos, D., Raghavan, P.: Automatic subspace clustering of high dimensional data for data mining applications. In: Proc. ACM SIGMOD International Conference on Management of Data, pp. 94–105. ACM Press, New York (1998)
3. Bender, A., Glen, R.C.: Molecular similarity: a key technique in molecular informatics. Organic and Biomolecular Chemistry 2(22), 3204–3218 (2004)
4. Berthold, M.R., Wiswedel, B., Patterson, D.E.: Neighborgram clustering: Interactive exploration of cluster neighborhoods. In: IEEE Data Mining, pp. 581–584. IEEE Press, Los Alamitos (2002)
5. Berthold, M.R., Wiswedel, B., Patterson, D.E.: Interactive exploration of fuzzy clusters using neighborgrams. Fuzzy Sets and Systems 149(1), 21–37 (2005)
6. Blum, A., Mitchell, T.: Combining labeled and unlabeled data with co-training. In: Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT 1998), pp. 92–100. ACM Press, New York (1998)
7. Bustos, B., Keim, D.A., Saupe, D., Schreck, T., Vranić, D.V.: Using entropy impurity for improved 3D object similarity search. In: Proceedings of IEEE International Conference on Multimedia and Expo (ICME 2004), pp. 1303–1306 (2004)
8. Bustos, B., Keim, D.A., Saupe, D., Schreck, T., Vranić, D.V.: An experimental effectiveness comparison of methods for 3D similarity search. International Journal on Digital Libraries, Special issue on Multimedia Contents and Management in Digital Libraries 6(1), 39–54 (2006)
9. Dasgupta, S., Littman, M.L., McAllester, D.A.: PAC generalization bounds for co-training. In: NIPS, pp. 375–382 (2001)
10. Friedman, J.H., Meulman, J.J.: Clustering objects on subsets of attributes. Journal of the Royal Statistical Society 66(4) (2004)
11. Parsons, L., Haque, E., Liu, H.: Subspace clustering for high dimensional data: a review. SIGKDD Explor. Newsl. 6(1), 90–105 (2004)
12. Rüping, S., Scheffer, T. (eds.): Proceedings of the ICML 2005 Workshop on Learning With Multiple Views (2005)
13. Wiswedel, B., Berthold, M.R.: Fuzzy clustering in parallel universes. International Journal of Approximate Reasoning 45(3), 439–454 (2007)
iMMPC: A Local Search Approach for Incremental Bayesian Network Structure Learning Amanullah Yasin and Philippe Leray Knowledge and Decision Team, Laboratoire d'Informatique de Nantes Atlantique (LINA) UMR 6241, Ecole Polytechnique de l'Université de Nantes, France {amanullah.yasin,philippe.leray}@univ-nantes.fr
Abstract. The dynamic nature of data streams leads to a number of computational and mining challenges. In such environments, learning a Bayesian network structure incrementally, by revising an existing structure, can be an efficient way to cope with time and memory constraints. Local search methods for structure learning are well suited to deal with high-dimensional domains. The major task in local search methods is to identify the local structure around the target variable, i.e. its parents and children (PC). In this paper we transform the local structure identification part of the MMHC algorithm into an incremental fashion by using heuristics that reduce the search space. We apply incremental hill-climbing to learn a set of candidate parent-children (CPC) for a target variable. Experimental results and a theoretical justification that demonstrate the feasibility of our approach are presented.
1 Introduction
Many sources produce data continuously, such as customer click streams, telephone call records, large sets of web pages, multimedia, scientific data, and sets of retail chain transactions. Such data are called data streams: real-time, continuous, ordered sequences of items that are neither feasible to store nor possible to control in their order of arrival. Data stream mining means extracting useful information or knowledge structures from continuous data. It has become a key technique for analyzing and understanding the nature of the incoming data. Typical data mining tasks, including association mining, classification, and clustering, help to find interesting patterns, regularities, and anomalies in the data. However, traditional data mining techniques cannot be applied directly to data streams. This is because most of them require multiple scans of the data to extract the information, which is unrealistic for stream data. More importantly, the characteristics of a data stream can change over time and the evolving pattern needs to be captured [10]. Furthermore, we also need to consider the problem of resource allocation in mining data streams. Due to the large volume and the high speed of streaming data, mining
algorithms must cope with the effects of system overload. Thus, how to achieve optimal results under various resource constraints becomes a challenging task. BN structure learning is an NP-hard problem [6], which has motivated the use of heuristic search methods to solve it. There are three common approaches to Bayesian network (BN) structure learning:
– Score-and-search based methods search over the space of all possible Bayesian networks in an attempt to find the network with the maximum score. Unfortunately, in such methods the search space is super-exponential in the number of random variables. It is very hard to compare all the structures, especially in high-dimensional domains [7]. Therefore score-based methods are theoretically intractable, despite the quality of the search heuristic in use.
– Constraint-based methods exploit the independence semantics of the graph. They construct the graphical structure, called a "pattern", using statistical tests or information-theoretic measures [16]. Later, using different assumptions, they direct the edges to obtain a directed acyclic graph (DAG). Their performance is limited to small conditioning sets, and they are criticized for producing complex structures.
– Local search methods first search for the conditional independence relationships among the variables in a dataset and construct a local structure around a target variable, i.e. its parents and children (PC) or its Markov blanket (MB), using heuristics like MMPC [17] or MBOR [14]; they then use another heuristic to learn the full BN structure. For instance, MMHC [18] combines MMPC and a greedy search.
Applying BN structure learning in high-dimensional domains, e.g. biological or social networks, faces the problem of high dimensionality. These domains produce data sets with tens or hundreds of thousands of variables. The recent Max-Min Hill-Climbing algorithm (MMHC) [18] has been proposed to address this problem and performs well on a wide range of network structures. MMHC combines both the local search and the score-and-search based approaches. In the first phase it learns a possible skeleton of the network using the local discovery Max-Min Parent-Children (MMPC) algorithm. In the second phase, it orients the determined edges using greedy hill-climbing search. Algorithms using local discovery for skeleton identification perform better than other leading non-hybrid structure learning algorithms in high-dimensional domains [18]. Here we focus on local search methods, which are the most scalable. In this paper we present an incremental local learning algorithm (iMMPC) to identify the set of candidate parent-children (CPC) of a target variable in any BN that faithfully represents the distribution of the data. We apply an incremental hill-climbing method to find a set of CPCs for a target variable and observe that it saves a considerable amount of computation. This paper is organized as follows: first we discuss previous work in the field of incremental learning and data stream mining in Section 2. In Section 3 we recall the basics of the heuristics used in our proposed method. In Section 4 we present our incremental local search approach iMMPC. In Section 5 we explain
our method with an example. In Section 6 we present the experimental results of our proposed iMMPC algorithm. Finally, we conclude in Section 7 with some proposals for future research.
2 Related Work
In the field of incremental Bayesian network structure learning, several works have proposed to iteratively revise the structure of a Bayesian network. They can be classified into two categories: approaches that deal with stationary domains and approaches that deal with non-stationary domains. Algorithms for stationary domains consider that the data are drawn from a single underlying distribution, which does not change with the passage of time; the models are also not very different when they are evaluated with respect to similar data sets [1]. On the other hand, algorithms for non-stationary domains consider that the underlying distribution of the data may change with time, so drift or shift changes may occur. Furthermore, we can divide these domains with respect to the BN structure learning method used. Buntine [3] proposes a batch algorithm that uses the score-and-search based Bayesian approach and later gives some guidelines for converting it into an incremental or online algorithm. He considers two conditions: if there is not enough time to update the parent structure, the algorithm only updates the posterior probabilities of the parent lattices; otherwise both structure and parameters are updated. Lam and Bacchus's approach [12,11] is also an extension of their batch algorithm and is based on the Minimal Description Length (MDL) principle. The idea is to first learn a partial network structure from the new data and the existing network using the MDL learning method, and then to locally modify the old global structure using the newly discovered partial structure. Friedman and Goldszmidt [9] propose three different approaches: a first, naive approach, which stores all previously seen data and repeatedly invokes a batch learning procedure on each new example; a second approach based on Maximum A Posteriori probability (MAP); and a third approach, called incremental, which maintains a set of network candidates that they call the frontier of the search process. As each new data example arrives, the procedure updates the information stored in memory and invokes the search process to check whether one of the networks in the frontier is deemed more suitable than the current model. Roure [1] rebuilds the network structure from the branch which is found to be invalidated. He proposed two heuristics to change a batch hill-climbing search into an incremental one; Section 3.2 describes these heuristics in more detail. He discussed incremental BN structure learning methods in [2]. All the above approaches deal with stationary domains and use scoring methods to learn the BN structure incrementally. Recently, Shi and Tan [15] adopted a hybrid approach to learn the BN structure incrementally. A recent work in this field by Nielsen and Nielsen [13] considers non-stationary domains where concept shift may occur. This approach consists of two mechanisms: first, monitoring and detecting when and where the model should be changed, and second, relearning, using a local search strategy integrating the
parts of the model that conflict with the observations. Castillo and Gama [5] also proposed an adaptive learning algorithm for changing environments. In conclusion, we see that most of the algorithms use scoring methods and treat the incremental process in different ways for stationary domains. Score-based methods are not scalable for data stream mining; the judicious choice for handling a great number of variables is a local search approach. This is why we take advantage of local search approaches and propose the iMMPC algorithm, a scalable incremental method for stationary domains with high dimensionality, in Section 4.
3 Background

3.1 MMPC(T): Local Search Approach for BN Structure Learning
The MMPC(T) algorithm discovers the set CPC (candidate parents and children, without distinguishing between the two) of a target variable T. It is a combination of the \overline{MMPC}(T) algorithm and an additional correction for a symmetry test. The \overline{MMPC}(T) algorithm (cf. Algo. 1) has two phases: in the forward phase it adds variables to the CPC of the target variable T, and in the backward phase it removes the false positives. The pivotal part of the MMPC algorithm is the forward phase of \overline{MMPC}(T). This phase starts from an empty set CPC(T) for the target variable T and then sequentially adds to CPC the variables which have strong direct dependencies. The function min(Assoc) is used to measure the association between two variables given CPC; it will be zero if the two variables are independent conditionally on any subset of CPC. Variables having zero association given any subset of the already calculated CPC can never enter CPC(T), and such variables are not considered again in further steps. This function estimates the strength of association by using any measure of association Assoc, such as χ2, mutual information (MI) or G2. Further, the "MaxMinHeuristic" [18] selects the variable which maximizes min(Assoc) with the target variable T conditioned on subsets of the currently estimated CPC. The forward phase stops when all remaining variables are independent of the target variable T given some subset of CPC.
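A compact Python sketch of these two phases may help fix the idea; it is our own illustration (not the authors' implementation), and assoc stands for a pluggable association measure such as χ2, MI or G2, passed in as a function together with a decision threshold theta.

from itertools import combinations

def min_assoc(x, t, cpc, assoc):
    # minimum association of x with T conditioned on any subset of CPC
    subsets = [set(s) for r in range(len(cpc) + 1) for s in combinations(cpc, r)]
    return min(assoc(x, t, s) for s in subsets)

def mmpc_bar(t, variables, assoc, theta=0.0):
    cpc = set()
    candidates = set(variables) - {t}
    # forward phase (MaxMinHeuristic): repeatedly add the variable maximizing the
    # minimum association; variables with (near-)zero association are dropped for good
    while candidates:
        scored = {x: min_assoc(x, t, cpc, assoc) for x in candidates}
        candidates = {x for x, a in scored.items() if a > theta}
        if not candidates:
            break
        best = max(candidates, key=scored.get)
        cpc.add(best)
        candidates.discard(best)
    # backward phase: remove false positives (conditioning on the other CPC members)
    for x in list(cpc):
        others = cpc - {x}
        subsets = [set(s) for r in range(len(others) + 1) for s in combinations(others, r)]
        if any(assoc(x, t, s) < theta for s in subsets):
            cpc.discard(x)
    return cpc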
3.2 Incremental Adaptation of Score-Based BN Structure Learning
In the category of score-based methods, we discuss here the approach of Roure [1]. He proposed two heuristics to change a batch hill-climbing search (HCS) into an incremental algorithm. We first describe the usual non-incremental HCS and then introduce Roure's heuristics.

Hill Climbing Search. HCS (cf. Algo. 2) methods traverse the search space, called the neighborhood, by examining only the possible local changes at each step and applying the one that maximizes the scoring function. The neighborhood of a model M consists of all
Algorithm 1. \overline{MMPC}(T)
Require: target variable T; data D; threshold θ
Output: a set of candidate parents and children (CPC) of T
\** Forward Phase: MaxMinHeuristic **\
CPC = ∅
Repeat
  ⟨F, assocF⟩ = max_{x ∈ X \ CPC} min_{S ⊆ CPC} Assoc(x; T | S)
  if assocF ≠ 0 then Add(CPC, F) Endif
Until CPC has not changed
\** Backward Phase **\
For all X ∈ CPC
  if ∃ S ⊆ CPC s.t. Assoc(X; T | S) < θ then Remove(CPC, X) Endif
Return CPC
models which can be built using the operator and argument pairs (op, A), where the operator can be Add Edge, Delete Edge or Reverse Edge. A scoring function f(M, D) is used to measure the quality of the model. The search path (or traversal path) is the sequence of operator and argument pairs applied at each step to obtain the final model M_f; in other words, it is a sequence of intermediate models.

Definition 1 (Search Path). Let M_0 be an initial model and M_f a final model obtained by a hill-climbing search algorithm as

M_f = op_n(...(op_2(op_1(M_0, A_1), A_2)), ..., A_n),

where each operator and argument pair yields the model with the highest score in the neighborhood. The search path is thus the sequence of operator and argument pairs O_op = {(op_1, A_1), (op_2, A_2), ..., (op_n, A_n)} used to build M_f. The models on the search path are in increasing order of quality score:

f(M_0, D) < f(M_1, D) < f(M_2, D) < ... < f(M_f, D).
Incremental Hill Climbing Search (iHCS). For iHCS (cf. Algo. 3), Roure discussed two main problems: firstly, when and which part of the model needs to be updated, and secondly, how to calculate and store sufficient statistics. At each iteration the algorithm replays the search path while traversing a reduced DAG space. He proposed two heuristics which are based on the assumption that all data in the stream are sampled from the same probability distribution. The search space is supposed not to change too much when few data are added to the current dataset or when the new data only slightly change the underlying distribution, so the scoring function is imagined to be continuous over the space of datasets.
Algorithm 2. Hill Climbing Search (HCS)
Require: data D; scoring function f(M, D); a set of operators OP = {op_1, ..., op_k}
Output: DAG: a model M of high quality
i = 0
M_i = ∅
Repeat
  oldScore = f(M_i, D)
  i++
  M_i = op(M_{i-1}, A_i)
    \** where op(M_{i-1}, A_i) = arg max_{(op_k, A) ∈ G^n} f(op_k(M_{i-1}, A_i), D) **\
Until oldScore ≥ f(M_i, D)
Return M_i
The first heuristic, "Traversal Operators in Correct Order" (TOCO), verifies the already learned model and its search path against the new data. If the new data alter the learning (search) path, then it is worthwhile to update the already learned model. The second heuristic, "Reduced Search Space" (RSS), applies when the current structure needs to be revised. At each step of the search path, it stores the top k models whose scores are closest to the best one in a set B. The set B reduces the search space by avoiding the exploration of those parts of the space where low-quality models were found during former search steps.
Algorithm 3. Incremental Hill Climbing Search (iHCS)
Require: data D; scoring function f(M, D); a set of operators OP = {op_1, ..., op_m}; B_i, a set of the k best operator and argument pairs for model M_i
Output: DAG: a model M of high quality
\** TOCO **\
Verify the previous search path: after evaluating the scoring function f over the new data D ∪ D', let (op_j, A_j) be the last pair at which the new data agree with the previously learned search path.
M_ini = (op_j, A_j)
i = 0
M_i = M_ini
\** RSS **\
Repeat
  oldScore = f(M_i, D)
  i++
  M_i = op(M_{i-1}, A_i)
    \** where op(M_{i-1}, A_i) = arg max_{(op_m, A) ∈ B_i} f(op_m(M_{i-1}, A_i), D) **\
  if (M_i = M_final) then Recalculate B_k Endif
Until oldScore ≥ f(M_i, D)
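The following schematic Python sketch illustrates the TOCO and RSS ideas (our own rendering under simplifying assumptions; score, the stored search path and the candidate sets B_i are placeholders for the model-specific pieces):

def toco_start(search_path, score, data):
    # TOCO: replay the stored search path on the new data and return the last
    # model that is still at least as good as its stored competitors in B_i
    model = None
    for (op, arg, model_i, B_i) in search_path:
        if any(score(cand, data) > score(model_i, data) for cand in B_i):
            break                    # the path is invalidated here; resume from 'model'
        model = model_i
    return model

def rss_step(current, B, score, data):
    # RSS: restrict one search step to the k stored high-scoring candidates in B
    best = max(B, key=lambda cand: score(cand, data))
    return best if score(best, data) > score(current, data) else None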
4 Incremental MMPC(T) Approach
Now we can present our local search algorithm iMMPC(T) for stationary domains. As already discussed, the pivotal part of the MMPC(T) algorithm is the forward phase of the \overline{MMPC}(T) algorithm. In this phase, variables enter the set CPC(T) incrementally.
4.1 Forward Phase of the \overline{MMPC}(T) Algorithm as a HCS
First of all, let us demonstrate that this forward phase can be seen as a hill-climbing search in a specific search space (set of models). The idea of the \overline{MMPC}(T) forward phase is the same as that of HCS, that is, to generate a model in a step-by-step fashion by making the maximum possible improvement of the quality function at each step. In \overline{MMPC}(T), a model can be defined as follows:

Definition 2 (Model). Let T be a target variable and CPC a set of candidate parents and children of T, without distinguishing between parent and children variables. Then the model M is an undirected graph defined by the variables V = {T} ∪ CPC and the edges E = {⟨T, x⟩, ∀ x ∈ CPC}.

To measure the quality of a model we define a scoring function:

Definition 3 (Quality Measure). Let D be a dataset with a faithful distribution P and M a model. The quality of model M can be measured by a function f(M, D), where f(M, D) = MI(T, CPC), and its value increases when good variables are added to CPC.

The operator is also defined:

Definition 4 (Operator). Let T be a target variable. The operator is defined as AddUndirectedEdge(T, X), which corresponds to adding X to the set CPC(T).

Mutual information (MI) has the property that MI(X, Y ∪ W) ≥ MI(X, Y), meaning that MI never decreases when additional variables are included. Furthermore, the following property of conditional mutual information justifies our choice of MI [4]. The conditional mutual information between X and Y given a set of variables Z satisfies:
(1)
If Z is an empty set and Mutual Information (MI) used to measure the strength of the Assoc (Assoc = MI ) then equation 1 can be written as: Assoc(T, CP C ∪ X) = Assoc(T, CP C) + Assoc(T, X | CP C) So to maximize f (M, D) it needs to maximize Assoc(T, X | CP C) and it is the “MaxMinHeuristic” of the MMPC(T) algorithm. At each step M M P C(T ) algorithm searches within the neighborhood to improve the quality function and select the best model. This neighborhood space over the models obtained by applying the one operator AddUndirectedEdge(T, X). With the above three definitions we can easily describe the forward phase of M M P C(T ) algorithm works as hill climbing search.
408
4.2
A. Yasin and P. Leray
Our Proposal of Incremental MMPC(T)
By showing the HCS behavior of M M P C(T ) algorithm (forward phase) we are able to adapt TOCO and RSS heuristics of Roure approach (Algo. 3) with model M , scoring function f and operator as previously defined. Our Initial model M0 corresponds to an empty set of CPC (M0 ={T }). Incremental MMPC(T) starts from initial model M0 and then search in the neighborhood to find the best one. This iterative process continues until all remaining variables find the weak dependencies. So the search path in iMMPC is a sequence of the variables added in the set of CPC(T) incrementally. Each step of the search path can be called intermediate model and for each intermediate model (or search step) we store the k neighboring models having the association value very close to the best one, as a set B where k is a user input value, greater the value of k guarantees more significant model. On the arrival of new data iMMPC(T) first verify the previously learned path and define the initial model M i from which the search will be resumed. If initial model is a best model then it will continue to improve an existing model in the light of set B (set of best arguments) to introduce new variables in the final model, otherwise it will recalculate the set of best arguments. We maintain the set B in descending order. Relearning process limits the search space by using the set B. Backward phase of M M P C has no concern as calculations of Assoc remains the same in iMMPC.
5
Toy Example
Now we provide a simple example for the better understanding of incremental M M P C(T) algorithm. The original undirected acyclic graph of incoming data stream is shown in the Figure 1. The data fed to the algorithm is sampled from the distribution of the original graph and we are interested in identifying CP C(X6). We are going to present two cases correspond to the two columns in figure 1. In the first one we handle data D and other is to handle data D ∪ D . First case will start by searching the whole neighborhood space and storing the reduced search space (with k = 4) B. In second case the incremental algorithm will reduce the search space by using the reduced search space B. So it starts from initial model M0 which contains only target variable and empty set of CPC. In the first iteration it generates the neighborhood by applying the operator AddU ndirectedEdge(M0 , X) and then calculate the association between X6 and each of the 7 other variables. Suppose the maximum value is obtained for X4 variable, and f (M0 ) < f (M1 ), so X4 is added in the CP C(T ). At this step we store the four best variables which have the association value closest to the maximum one so, for instance set B0 = {X4, X3, X7, X2}. Next iteration starts from model M1 and repeats the same process. Suppose the maximum association value found by applying AddU ndirectedEdge(M0 , X3) and f (M2 ) > f (M1 ), so X3 is added in the CP C (T ). We store the four best variables which have the association value closest to the the maximum one so,
iMMPC: A Local Search Approach
409
Fig. 1. Example
set B1 = {X3, X7, X2, X5}. And the same in third iteration where X7 is added in the list of CPC, also the set of best operator is stored as B2. Now we can not proceed because there is no other model having the maximum value greater than f (M2 ). In other words we can say that the remaining variables have the zero association with the target variable given the any subset of CPC. On the arrival of new data D , our algorithm will relearn the model for data D ∪ D . Again it starts from the initial model and generates the neighborhood by applying the AddU ndirectedEdge(M0 , X) operator for only those variables which are stored in the set of best arguments found in the previous learning process. So here the search space reduced for only four models. Again supposed the variable X 4 has a maximum association value so, it added in the list of CPC (T). Here we also verify the search path, if the search path is not in the same sequence then we need to recalculate the set of best operators. The complexity of our algorithm if we consider the above best case then at first time forward phase needs 3(n − 2) comparisons and for next incremental stage it requires only 3k comparisons. In average case if new data changes then the complexity will be O(2k + n). For worst case when distribution of the entire data is changed or model learning process starts from scratch then it needs maximum comparisons O(3n). This simple example illustrates the interest of our incremental approach. In high dimension, k n and lot of Assoc computations are saved.
410
6
A. Yasin and P. Leray
Experimental Study
In this section we demonstrate the effectiveness of proposed approach. Experimental Protocol: We used the Alarm(37 vars. and 46 arcs) network[8], which is a standard benchmark for the assessment of the Bayesian network algorithms, Barley(48 vars. and 84 arcs) and Win95pts(76 vars. and 112 arcs) networks taken from GeNIe and SMILE network repository1 . We present average results of five random samples generated from above networks and each sample contains 20,000 instances. Mushroom(8124 instances and 23 vars.) dataset taken from UCI machine learning repository2 . We compare our partial(CPC’s skeleton) results with the results presented in [1]. iMMPC generates a skeleton of candidate parent children (CPC) for individual variables in the network. Further the greedy search approach can be used to obtain a final network. Evaluation Measures: We evaluate our algorithm in terms of model accuracy and computational complexity. Str.Eq.(structural equalities) shows the number of arcs shared by incremental and batch approaches. CG%(Call gain) is the gain nI in number of calls to the scoring function and calculated by 1 − nB ×100, where nI and nB are the number of calls performed by the incremental and batch algorithms respectively. The main task in the algorithm is to calculate the score so, CG% gives us a good idea to compare the algorithm complexity. #Arc.B. shows the number of arcs determined by batch approach. Model accuracy observed by comparing the true positive edges found by both approaches. Table 1. Comparison with Roure’s approach DataSet Alarm Mushroom
Roure’s algo B Roure’s algo FG iMMPC #Arc.B. Str.Eq. CG% #Arc.B. Str.Eq. CG% #Arc.B. Str.Eq. CG% 53 71
42.20 73.97 52.60 51.06
54 71
40.40 88.88 45.60 66.90
40 92
37.2 91
67.7 45.6
Experimental Results and Interpretations: In Table 1, it is evident that our algorithm obtained a better structural equality. CG% is less than Roure’s because iMMPC used local search approach which is already has a less complexity as compare to score based approaches. Finally in Figure 2 we recall the sensitivity of our algorithm for window size w={1000,2000,3000,4000} and K={4,6,8,10}. It is clear that our approach has high accuracy rate. It found the same network as batch with highest call gain. It is observed that our approach has an advantage of double optimization, once a variable reaches minimum association of zero with T it is not considered again by the algorithm and second, TOCO, which reduces the search space. We observed in our experiments that TOCO revision is fired mostly at the middle or at the end of the searching path so, revision does not start from the scratch. 1 2
1 http://genie.sis.pitt.edu/netowkrs.html
2 http://www.ics.uci.edu/mlearn/MLRepository.html
Fig. 2. Comparison of model accuracy using different window size and k value
7 Conclusion and Perspectives
In this paper we have proposed an incremental version of the local search Max-Min Parent-Children algorithm for Bayesian network structure learning. We applied an incremental hill-climbing approach to discover the set of CPC of a target variable T. We store the strongest dependencies as a set of best arguments, which reduces the search space for new data by considering only these strong dependencies. A theoretical study and preliminary experiments show that our approach improves the performance of the algorithm systematically and reduces its complexity significantly. In the future, we plan to carry this work further by incrementally identifying the whole structure. We also plan to optimize the computation of the measures by storing sufficient statistics, and to deal with non-stationary domains by handling shift or drift detection.
References 1. Alcobé, J.R.: Incremental hill-climbing search applied to bayesian network structure learning. In: First International Workshop on Knowledge Discovery in Data Streams, KDDS (2004) 2. Alcobé, J.R.: Incremental methods for bayesian network structure learning. AI Communications 18(1), 61–62 (2005) 3. Buntine, W.: Theory refinement on bayesian networks. In: Proceedings of the Seventh Conference (1991) on Uncertainty in Artificial Intelligence, pp. 52–60. Morgan Kaufmann Publishers Inc., San Francisco (1991) 4. de Campos, L.M.: A scoring function for learning bayesian networks based on mutual information and conditional independence tests. J. Mach. Learn. Res. 7, 2149–2187 (2006)
5. Castillo, G., Gama, J.a.: Adaptive bayesian network classifiers. Intell. Data Anal. 13, 39–59 (2009) 6. Chickering, D.: Learning bayesian networks is NP-complete. In: Proceedings of AI and Statistics, pp. 121–130 (1995) 7. Chickering, D., Geiger, D., Heckerman, D.: Learning bayesian networks: Search methods and experimental results. In: Proceedings of Fifth Conference on Artificial Intelligence and Statistics, pp. 112–128 (1995) 8. Cooper, G.F., Herskovits, E.: A bayesian method for the induction of probabilistic networks from data. Machine Learning 9, 309–347 (1992), doi:10.1007/BF00994110 9. Friedman, N., Goldszmidt, M.: Sequential update of bayesian network structure. In: Proc. 13th Conference on Uncertainty in Artificial Intelligence (UAI 1997), pp. 165–174. Morgan Kaufmann, San Francisco (1997) 10. Gama, J.: Knowledge Discovery from Data Streams. CRC Press, Boca Raton (2010) 11. Lam, W.: Bayesian network refinement via machine learning approach. IEEE Trans. Pattern Anal. Mach. Intell. 20(3), 240–251 (1998) 12. Lam, W., Bacchus, F.: Using new data to refine a bayesian network. In: Proceedings of the Tenth Conference on Uncertainty in Artificial Intelligence, pp. 383–390. Morgan Kaufmann, San Francisco (1994) 13. Nielsen, S.H., Nielsen, T.D.: Adapting bayes network structures to non-stationary domains. Int. J. Approx. Reasoning 49(2), 379–397 (2008) 14. Rodrigues de Morais, S., Aussem, A.: A novel scalable and data efficient feature subset selection algorithm. In: Daelemans, W., Goethals, B., Morik, K. (eds.) ECML PKDD 2008, Part II. LNCS (LNAI), vol. 5212, pp. 298–312. Springer, Heidelberg (2008) 15. Shi, D., Tan, S.: Incremental learning bayesian network structures efficiently. In: Proc. 11th Int Control Automation Robotics & Vision (ICARCV) Conf., pp. 1719– 1724 (2010) 16. Spirtes, P., Glymour, C., Scheines, R.: Causation, Prediction, and Search, 2nd edn. Adaptive Computation and Machine Learning Series. The MIT Press, Cambridge (2000) 17. Tsamardinos, I., Aliferis, C.F., Statnikov, A.: Time and sample efficient discovery of Markov blankets and direct causal relations. In: Proceedings of the Ninth ACM SIGKDD international Conference on Knowledge Discovery and Data Mining, KDD 2003, pp. 673–678. ACM, New York (2003) 18. Tsamardinos, I., Brown, L.E., Aliferis, C.F.: The max-min hill-climbing bayesian network structure learning algorithm. Mach. Learn. 65(1), 31–78 (2006)
Analyzing Emotional Semantics of Abstract Art Using Low-Level Image Features He Zhang1 , Eimontas Augilius1 , Timo Honkela1 , Jorma Laaksonen1, Hannes Gamper2 , and Henok Alene1 1
Department of Information and Computer Science 2 Department of Media Technology Aalto University School of Science, Espoo, Finland {he.zhang,eimontas.augilius,timo.honkela,jorma.laaksonen, hannes.gamper,henok.alene}@aalto.fi
Abstract. In this work, we study people’s emotions evoked by viewing abstract art images based on traditional low-level image features within a binary classification framework. Abstract art is used here instead of artistic or photographic images because those contain contextual information that influences the emotional assessment in a highly individual manner. Whether an image of a cat or a mountain elicits a negative or positive response is subjective. After discussing challenges concerning image emotional semantics research, we empirically demonstrate that the emotions triggered by viewing abstract art images can be predicted with reasonable accuracy by machine using a variety of low-level image descriptors such as color, shape, and texture. The abstract art dataset that we created for this work has been made downloadable to the public. Keywords: emotional semantics, abstract art, psychophysical evaluation, image features, classification.
1 Introduction
Analyzing image emotional semantics has become an emerging and promising research direction for Content-Based Image Retrieval (CBIR) in recent years [5]. While CBIR systems are conventionally designed for recognizing objects and scenes such as plants, animals, people, etc., an Emotional Semantic Image Retrieval (ESIR) [17] system aims at incorporating emotional reflections to enable queries like "beautiful flowers", "lovely dogs", "happy faces", etc. In analogy to the concept of the semantic gap, which describes the limitations of image recognition, the emotional gap can be defined as "the lack of coincidence between the measurable signal properties, commonly referred to as features, and the expected affective state in which the user is brought by perceiving the signal" [6]. Though emotions can be affected by various factors like gender, age, culture, and background, and are thus considered high-level cognitive processes, they still have a certain stability and generality across different people and cultures [13]. This enables researchers to generalize their proposed methodologies from limited
samples, given a sufficiently large number of observers [17]. Due to the subjectivity of emotions, two main challenges in analyzing image emotional semantics can be identified: 1) effectively measuring the subjectivity or consensus among people in order to obtain robust ground-truth labels for the shown images, and 2) extracting informative image content descriptors that can reflect people's affective feelings evoked by the shown images. As for measuring the subjectivity, adjective (impression) words are often selected to represent people's emotions. Recently, many researchers have used opposing adjective word pairs to represent emotional semantics, such as happy-sad, warm-cold, like-dislike, etc. For example, 9, 10, and 12 adjective word pairs were used in [19], [13], and [18], respectively, for constructing the emotional factor space, with 12, 31, and 43 observers involved. Generally, using a large number of adjective word pairs may improve the experimental results. However, this increases the evaluation time and affects the observers' moods, which in turn lowers the generality of the evaluation results. Besides, the adjectives often overlap conceptually in the adjective space [8]. As for extracting meaningful image descriptors, many attempts have been reported (e.g. [4,9,16,18,19]). Most of these works developed features that are specific to domains related to art and color theories, which lack generality and make it difficult for researchers who are unfamiliar with computer vision algorithms to perform image analysis on their own data [15]. Besides, the image datasets used in these works are mainly scenery and photographic images containing recognizable objects and faces, which are likely to distort people's emotions. In this article, we study people's emotional feelings evoked by viewing abstract art images within a classification framework. Similar to [9], we choose abstract art images as our dataset since they usually contain little contextual information that could distort the observer's emotions. However, the features used in [9] are designed on art and color theories and are thus deficient in generality. In contrast, we seek to bridge the emotional gap by utilizing a large variety of conventional low-level image features extracted from raw pixels and compound image transforms, which have been widely used in computer vision and machine learning problems. Compared with a designed baseline method, we achieve significantly better results on classifying image emotions between exciting and boring, and between relaxing and irritating, with a relatively small number of image samples. The features used in this article have been implemented by Shamir et al. [15] as open source software, and the abstract art image dataset that we collected has been made downloadable to the research community1. Section 2 describes the collected dataset and the online psychophysical evaluations. Section 3 introduces the low-level image features extracted from both the raw pixels and image compound transforms. The classification setup is explained in Section 4. Section 5 presents the experimental results with analysis. Finally, conclusions and future work are given in Section 6.
http://research.ics.tkk.fi/cog/data/esaa/
2 Data Acquisition
Data collection is important in that it provides art images for evaluation by real observers, such that both the input samples and their ground-truth labels can be obtained for training and testing the classifier at a later stage. The duration of the evaluation process should not be too long, otherwise the observers get tired and their responses deteriorate. This means we should select a limited number of image samples while affecting the generality of the model as little as possible.
2.1 Image Collections
We collected 100 images of abstract art paintings of different sizes and qualities through Google image search. These abstract art paintings were created by artists of various origins, and the image sizes range between 185 × 275 and 1,000 × 1,000 pixels. We kept the image samples as they were initially retrieved from the Internet, and no data preprocessing such as downsampling or cropping was performed. This not only mimics a real user's web browsing scenario, but also facilitates the image descriptors in extracting discriminative information from the art images at a later stage.
2.2 Psychophysical Evaluations
Semantic Differential (SD) [12] is a general method for measuring the affective meaning of concepts. Each observer is asked to put a rating on a scale between two bipolar adjective words (e.g. happy-sad) to describe his or her emotions evoked by the shown images. The final rating of every opposing adjective word pair for an image is obtained by averaging the ratings over all observers. Following the SD approach, an online image survey was conducted through our designed web user interface2. To lower the bias, 10 female and 10 male observers, both Asians and Europeans, were recruited. All the observers have a university degree and their ages are between 20 and 30 years. During the evaluation, each observer was shown the 100 abstract art images, one per page. Under each image, he or she was asked to indicate ratings for both exciting-boring and relaxing-irritating. For every word pair, we used 5 ratings with the respective values −2, −1, 0, 1, 2 to denote an observer's affective intensity. Thus the overall rating of each adjective pair for an image was the average rating score over all observers. The evaluation took about 10 to 15 minutes in general, which was easily acceptable for most of the participants.

Post-processing: To obtain the ground-truth labels, we adopted a simple rule: if an image, for an adjective word pair, received an average rating score larger than zero, then it was treated as a positive sample for the classifier; if the average score was smaller than zero, it was a negative sample. Figure 1 shows the top 3 images for each impression word, after sorting the 100 images by their average
2 http://www.multimodwellbeing.appspot.com/?controlled
Fig. 1. The top 3 images sorted by the average scores over 20 observers for exciting-boring (upper row) and relaxing-irritating (bottom row)
Although a common measure of subjectivity is still lacking, our survey results intuitively reveal certain generalities in people's responses to abstract art paintings.
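As a small illustration of this labeling rule, the sketch below averages the −2…2 ratings over the observers for each word pair and thresholds the mean at zero. The ratings array, its layout, and the random values are invented for the example; they are not the survey data.

```python
# Minimal sketch of the post-processing rule described above (illustrative only).
import numpy as np

# ratings[i, j, k]: observer i, image j, word pair k
# (k = 0: exciting-boring, k = 1: relaxing-irritating), values in {-2, -1, 0, 1, 2}.
rng = np.random.default_rng(0)
ratings = rng.integers(-2, 3, size=(20, 100, 2))   # placeholder for the real survey data

mean_scores = ratings.mean(axis=0)                  # average over the 20 observers
labels = np.where(mean_scores > 0, 1, -1)           # > 0 -> positive sample, < 0 -> negative
# Images with a mean score of exactly zero could be discarded as ambiguous.
```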
3 Feature Extraction
For describing the visual art paintings, a large set of image features is extracted from both the raw image and several compound image transforms; these features were found highly effective earlier in biological image classification and face recognition [11], as well as in the recognition of painters and schools of art [14]. The eleven groups of features are listed in Table 1. In addition to the raw pixels, the image features are also extracted from several transforms of the image and transforms of transforms [14]. These transforms are the Fast Fourier Transform (FFT), Wavelet 2D Decomposition, Chebyshev Transform, Color Transform, and Edge Transform, as shown in Figure 2.

Table 1. The features used in [14] and in our study

Group of Features              Type              Dimension
First four moments             Image Statistics      48
Haralick features              Texture               28
Multiscale histograms          Texture               24
Tamura features                Texture                6
Radon transform features       Texture               12
Chebyshev statistic features   Polynomial           400
Chebyshev-Fourier features     Polynomial            32
Zernike features               Shape & Edge          72
Edge statistics features       Shape & Edge          28
Object statistics              Shape & Edge          34
Gabor filters                  Shape & Edge           7

The idea behind these transforms is to automatically create a numerical representation that captures different kinds of basic qualities of the images in a reasonably condensed form. The dimensionality of a 1,000 × 1,000 RGB color image can thus be reduced from 3,000,000 values to about 4,000, with increased descriptiveness of the features (see below for details). For instance, transforms such as the FFT or wavelets can detect invariances at different levels of detail and help in reducing noise, to the benefit of the classification. The total number of numeric image descriptors used in this article is 4,024 per image, whereas the authors in [14] excluded the Zernike features, Chebyshev statistics, and Chebyshev-Fourier features from several compound transforms, resulting in a total of 3,658 descriptors.
Fig. 2. Image transforms and paths of the compound transforms described in [14]
The image features and transforms above have been implemented as part of an open source software package [15], which we use directly in our study. For details of the feature extraction algorithms, one may refer to [10] and the references therein.
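To make the idea of "features from raw pixels and compound transforms" concrete, the following is a minimal sketch in the spirit of [15], not the wndchrm implementation itself. The file name and the two feature groups shown (first four moments and a multiscale histogram, computed on the raw image and on its FFT) are assumptions chosen for illustration.

```python
# Illustrative sketch: a few simple feature groups from the raw image and an FFT transform.
import numpy as np
from PIL import Image

def first_four_moments(x):
    """Mean, variance, skewness, and kurtosis of a 2-D array."""
    x = x.astype(np.float64).ravel()
    m, s = x.mean(), x.std() + 1e-12
    z = (x - m) / s
    return np.array([m, s**2, (z**3).mean(), (z**4).mean()])

def multiscale_histogram(x, bins=(3, 5, 7, 9)):
    """Concatenated normalized histograms at several bin counts (24 values here)."""
    x = x.astype(np.float64).ravel()
    return np.concatenate([np.histogram(x, b, density=True)[0] for b in bins])

img = np.asarray(Image.open("abstract_001.jpg").convert("L"))   # hypothetical file, grayscale pixels
fft = np.log1p(np.abs(np.fft.fft2(img)))                        # "compound transform" of the image

features = np.concatenate([
    first_four_moments(img), multiscale_histogram(img),          # features of the raw pixels
    first_four_moments(fft), multiscale_histogram(fft),          # features of the transform
])
```

Repeating such feature groups over all transforms and transforms of transforms is what yields the few-thousand-dimensional descriptor per image.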
4 Classification of Emotional Responses
After calculating all the features for all images, a mapping needs to be built to bridge the semantic gap between the low-level image features and the high-level emotional semantics.

Feature Selection: Because of the large number of image descriptors, a feature selection step is needed prior to the recognition stage, since the discriminative power varies across features: only some of them are informative, whereas others are redundant and/or unrelated. Various feature selection strategies exist. Here we utilize the popular Fisher score [1] for feature ranking, i.e., assigning each feature f a weight W_f such that

    W_f = \frac{\sum_{c=1}^{C} (\bar{f} - \bar{f}_c)^2}{\sum_{c=1}^{C} \sigma_{f,c}^2},    (1)

where C is the total number of classes (C = 2 in our case); \bar{f} denotes the mean of feature f in the whole training set; and \bar{f}_c and \sigma_{f,c}^2 denote, respectively, the mean and variance of feature f among all the training images of class c. The Fisher score can be interpreted as the ratio of between-class variance to within-class variance: the larger the score, the more discriminative the feature is likely to be. Therefore, the features with higher Fisher scores are retained, whereas those with lower scores are removed.

Classification: A Support Vector Machine (SVM) [3] is chosen to build the mapping, as it is a state-of-the-art machine learning method and has been used for classification in recent emotion-related studies [19,4]. We use the SVM package LIBSVM [2] with default parameters in order to ensure reproducibility of the experimental results. After splitting the image dataset into training and testing sets, an SVM classifier is learned on the features of the training set (consisting of both positive and negative image samples). Then, for every image in the testing set, a corresponding class label or affective category is automatically predicted.

Evaluation: To measure the performance of the SVM, we calculate the classification accuracy, defined as the proportion of correctly predicted labels (both positive and negative) within the testing image set. Another measure is the precision, or positive predictive rate, since we are particularly interested in how many positive image samples can be correctly recognized by the machine learning approach.
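A compact sketch of this setup, Fisher-score ranking followed by a linear-kernel SVM with cross-validation, is given below. It is not the authors' experimental code: it uses scikit-learn (whose SVC wraps LIBSVM) rather than the LIBSVM package directly, and the feature matrix X, the label vector y, and the keep_fraction parameter are assumptions for illustration.

```python
# Sketch of Fisher-score feature ranking (Eq. (1)) plus a linear SVM, evaluated with k-fold CV.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_validate

def fisher_scores(X, y):
    """Eq. (1): between-class over within-class variance, computed per feature."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    num = sum((overall_mean - X[y == c].mean(axis=0)) ** 2 for c in classes)
    den = sum(X[y == c].var(axis=0) for c in classes) + 1e-12
    return num / den

def evaluate(X, y, keep_fraction=0.10):
    idx = np.argsort(fisher_scores(X, y))[::-1]            # rank features, best first
    keep = idx[: max(1, int(keep_fraction * X.shape[1]))]  # retain the top fraction
    clf = SVC(kernel="linear")                             # linear kernel, default parameters
    scores = cross_validate(clf, X[:, keep], y, cv=5,
                            scoring=("accuracy", "precision"))
    return scores["test_accuracy"].mean(), scores["test_precision"].mean()
```

For simplicity, the sketch ranks features on the full data set before cross-validation; a stricter protocol would re-rank within each training fold.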
5 Results and Analysis
The LIBSVM package [2] was utilized in all the experiments, with a linear kernel and default parameters. Since a "standard" baseline method does not exist in this field, we generated for each image an array of 4,024 random numeric values and repeated the same training and testing procedure as described in Section 4. This allows us to validate the effectiveness of representing emotional semantics with real low-level image features. For all the binary classification cases, the number of positive image samples was roughly equal to that of the negative ones. In each case, we calculated the final classification accuracies and precisions based on 5-fold cross-validation.
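As an illustration of this baseline comparison, the sketch below reuses the hypothetical evaluate() helper from the earlier feature-selection sketch; X_real (the matrix of 4,024 real descriptors per image) and the label vector y are assumed to be given.

```python
# Sketch of the random baseline: the same pipeline run on random "features" of equal size.
import numpy as np

rng = np.random.default_rng(42)
X_rand = rng.random(X_real.shape)          # one random value per real descriptor

for name, X in (("real", X_real), ("rand", X_rand)):
    acc, prec = evaluate(X, y, keep_fraction=0.10)
    print(f"{name}: accuracy={acc:.2f} precision={prec:.2f}")
```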
Classification Performances: Figure 3 shows the average precisions and accuracies as a function of the percentage of best real image features, compared with those obtained using random image features, in all 6 cases. Table 2 lists for each case the best average precision and accuracy together with the corresponding percentage of best image features, compared with the respective precision and accuracy at the same percentage of random image features. Generally, the classification performance of the real image features significantly outperforms that of the random ones, except for the case in Figure 3(c). Averaged over all 6 cases, the precision using real image features was 19% higher than that using random features, at an average of 10% of the best real image features. For the case in Figure 3(b), a peak precision (accuracy) of 0.76 (0.70) was obtained when using the best 11% of real image features sorted by their Fisher scores, compared with a precision (accuracy) of 0.53 (0.52) at the same percentage when using random features. Similar comparisons can be made in the other 5 cases.

Fig. 3. The average precisions and accuracies as a function of the percentage of best real image features, compared with those using random features in all 6 cases: (a)-(b) all 20 observers, (c)-(d) the 10 female observers, (e)-(f) the 10 male observers, for exciting-boring and relaxing-irritating respectively

Table 2. The best average precisions (column 3) and accuracies (column 5) with the corresponding percentages (column 2) of the best image features (denoted as Real), compared with the respective precisions (column 4) and accuracies (column 6) at the same percentages of random image features (denoted as Rand) in the 6 cases (column 1). All statistics are shown in percentages (%).

Case                          Best  Pre.-Real  Pre.-Rand  Acc.-Real  Acc.-Rand
All 20: Exciting-Boring        27      62         33         65         48
All 20: Relaxing-Irritating    11      76         53         70         52
10 female: Exciting-Boring      3      57         49         58         50
10 female: Relaxing-Irritat.    6      70         48         69         40
10 male: Exciting-Boring        8      69         56         67         47
10 male: Relaxing-Irritat.      7      72         53         71         54
Average                        10      68         49         67         49

Rankings of Feature Groups: Table 3 lists the top 10 feature groups for the 2 "All 20" cases (within the best 10% of real image features). For the "Exciting-Boring" case, the best 2 feature groups are the color histograms extracted from the raw images and the multiscale histograms from the Chebyshev FFT compound transform, whereas for the "Relaxing-Irritating" case, the best 2 feature groups are the edge statistics extracted from the raw images and the Radon transform features from the Color FFT compound transform. This conforms to previous research using features based on color theories for classifying image emotions [9,16], and to the study in which edge and texture features were favored for recognizing different art styles [14].

Table 3. The top 10 feature groups (compound transforms) in the "All 20" cases

Exciting-Boring                           Relaxing-Irritating
Color histograms                          Edge statistics
Multiscale histograms (Chebyshev FFT)     Radon transform (Color FFT)
Radon transform                           Haralick texture (Wavelet)
Chebyshev statistics (Color Chebyshev)    Tamura texture (FFT Wavelet)
Haralick texture (Color)                  Haralick texture (Chebyshev FFT)
Tamura texture (Edge)                     Tamura texture (Color FFT)
Chebyshev-Fourier (Color)                 Tamura texture (Wavelet FFT)
Chebyshev statistics (Color)              Tamura texture (Color)
Tamura texture (Edge Wavelet)             Tamura texture (FFT)
Chebyshev-Fourier (FFT Wavelet)           Radon transform (Edge)
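The paper does not spell out how the per-feature Fisher scores are aggregated into group-level rankings; one plausible sketch, averaging the scores of the retained features within each group, is given below. The group_of mapping from feature index to group name is a hypothetical helper, not part of the original software.

```python
# Hypothetical sketch: rank feature groups by the mean Fisher score of their retained features.
from collections import defaultdict
import numpy as np

def rank_feature_groups(scores, keep_idx, group_of, top=10):
    grouped = defaultdict(list)
    for i in keep_idx:                                   # only the best (retained) features
        grouped[group_of(i)].append(scores[i])           # e.g. "Tamura texture (Edge)"
    ranking = sorted(grouped.items(), key=lambda kv: -np.mean(kv[1]))
    return [name for name, _ in ranking[:top]]
```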
6 Conclusions and Future Work
In this work, we have studied people's emotions evoked by viewing abstract art images within a machine learning framework. The ground-truth labels of the sample images were obtained through an online web survey, in which 20 observers, both female and male, evaluated the abstract art dataset. A large variety of low-level image descriptors was utilized to analyze their relationship with people's emotional responses to the images. Both the utility implementing the image features and the abstract art images that we collected for the experiments are publicly available.

Our results show that generic low-level image features, rather than domain-specific ones, can be used to effectively describe people's high-level cognitive processes: in our experiments, the image emotions were recognized well in the classification tasks. Moreover, examining the rankings of feature groups sorted by their Fisher scores shows that the most discriminative features are color, shape, and texture, which conforms to art and color theories as well as to several recent studies on image emotional semantics (e.g., [14]).

Even in the case of abstract art images, where the semantic content of the image does not influence the evaluation, a high degree of subjectivity is involved. This is actually true even for linguistic expressions [7]. Therefore, the aim is not to create a "correct" classification model of the emotional responses, but rather to model the tendencies in the evaluation. When thousands of subjects are involved in this kind of study, it becomes possible to model in more detail the agreements and disagreements in the subjective evaluations and potentially associate them with variables that characterize each individual.3

3 For this purpose, you are welcome to interact with our online survey at http://www.multimodwellbeing.appspot.com/?controlled

Our next step is to compare the low-level features with domain-specific ones. A direct application is to integrate the low-level image features into an emotional semantic image retrieval (ESIR) system [17] to facilitate emotional queries. Furthermore, more advanced feature selection algorithms can be tested, since Fisher's criterion neglects the relationships between features. In addition, other relevance feedback modalities [20] from the observer, such as eye movements and speech signals (see, for instance, [21]), could be combined with the image features to enhance the recognition performance, so that a deeper understanding of image emotions can be achieved. Beyond ESIR, potential applications include the automatic selection of images that could be used to induce specific emotional responses.

Acknowledgements. This work is supported by the Information and Computer Science Department at Aalto University School of Science. We gratefully acknowledge Ms. Na Li and her colleagues for participating in the psychophysical evaluations. We are grateful to the EIT ICT Labs and Dr. Krista Lagus, as the Wellbeing Innovation Camp 2010 served as the starting point for the work reported in this paper. We hope that the increasing understanding of the wellbeing effects of works of art and other cultural artefacts will prove to have useful applications in the future.
References

1. Bishop, C.M.: Pattern recognition and machine learning, vol. 4. Springer, New York (2006)
2. Chang, C., Lin, C.: LIBSVM: a library for support vector machines (2001)
3. Cortes, C., Vapnik, V.: Support-vector networks. Machine Learning 20(3), 273–297 (1995)
4. Datta, R., Joshi, D., Li, J., Wang, J.Z.: Studying aesthetics in photographic images using a computational approach. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006, Part III. LNCS, vol. 3953, pp. 288–301. Springer, Heidelberg (2006)
5. Datta, R., Joshi, D., Li, J., Wang, J.Z.: Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys (CSUR) 40(2), 5 (2008)
6. Hanjalic, A.: Extracting moods from pictures and sounds: Towards truly personalized TV. IEEE Signal Processing Magazine 23(2), 90–100 (2006)
7. Honkela, T., Janasik, N., Lagus, K., Lindh-Knuutila, T., Pantzar, M., Raitio, J.: GICA: Grounded intersubjective concept analysis – a method for enhancing mutual understanding and participation. Tech. Rep. TKK-ICS-R41, AALTO-ICS, Espoo (December 2010)
8. Honkela, T., Lindh-Knuutila, T., Lagus, K.: Measuring adjective spaces. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds.) ICANN 2010, Part I. LNCS, vol. 6352, pp. 351–355. Springer, Heidelberg (2010)
9. Machajdik, J., Hanbury, A.: Affective image classification using features inspired by psychology and art theory. In: Proceedings of the International Conference on Multimedia, pp. 83–92. ACM, New York (2010)
10. Orlov, N., Johnston, J., Macura, T., Shamir, L., Goldberg, I.G.: Computer vision for microscopy applications. In: Obinata, G., Dutta, A. (eds.) Vision Systems – Segmentation and Pattern Recognition, pp. 221–242. ARS Pub. (2007)
11. Orlov, N., Shamir, L., Macura, T., Johnston, J., Eckley, D.M., Goldberg, I.G.: WND-CHARM: Multi-purpose image classification using compound image transforms. Pattern Recognition Letters 29(11), 1684–1693 (2008)
12. Osgood, C.E., Suci, G.J., Tannenbaum, P.: The measurement of meaning. University of Illinois Press, Urbana (1957)
13. Ou, L., Luo, M.R., Woodcock, A., Wright, A.: A study of colour emotion and colour preference. Part I: Colour emotions for single colours. Color Research & Application 29(3), 232–240 (2004)
14. Shamir, L., Macura, T., Orlov, N., Eckley, D.M., Goldberg, I.G.: Impressionism, expressionism, surrealism: Automated recognition of painters and schools of art. ACM Transactions on Applied Perception 7, 8:1–8:17 (2010)
15. Shamir, L., Orlov, N., Eckley, D.M., Macura, T., Johnston, J., Goldberg, I.G.: Wndchrm – an open source utility for biological image analysis. Source Code for Biology and Medicine 3(1), 1–13 (2008)
16. Solli, M., Lenz, R.: Emotion related structures in large image databases. In: Proceedings of the ACM International Conference on Image and Video Retrieval, pp. 398–405. ACM, New York (2010)
17. Wang, W., He, Q.: A survey on emotional semantic image retrieval. In: International Conference on Image Processing (ICIP), pp. 117–120. IEEE, Los Alamitos (2008)
18. Wang, W., Yu, Y., Jiang, S.: Image retrieval by emotional semantics: A study of emotional space and feature extraction. In: International Conference on Systems, Man and Cybernetics (SMC 2006), vol. 4, pp. 3534–3539. IEEE, Los Alamitos (2006)
19. Wu, Q., Zhou, C., Wang, C.: Content-based affective image classification and retrieval using support vector machines. In: Tao, J., Tan, T., Picard, R.W. (eds.) ACII 2005. LNCS, vol. 3784, pp. 239–247. Springer, Heidelberg (2005)
20. Zhang, H., Koskela, M., Laaksonen, J.: Report on forms of enriched relevance feedback. Technical Report TKK-ICS-R10, Helsinki University of Technology, Department of Information and Computer Science (November 2008)
21. Zhang, H., Ruokolainen, T., Laaksonen, J., Hochleitner, C., Traunmüller, R.: Gaze- and speech-enhanced content-based image retrieval in image tagging. In: International Conference on Artificial Neural Networks – ICANN 2011, pp. 373–380 (2011)
Author Index

Alene, Henok 413
Alonso, Serafín 10
Augilius, Eimontas 413
Badrinath, Rama 364
Baena-García, Manuel 90
Bahamonde, Antonio 246
Bártolo Gomes, João 22
Bender, Andreas 318
Berchenko, Yakir 34
Berthold, Michael R. 1, 306, 388
Bifet, Albert 90
Blockeel, Hendrik 318
Borgelt, Christian 43, 55
Boselli, Roberto 270
Bosma, Carlos 376
Bradley, Elizabeth 173
Braune, Christian 55
Brueller, Nir N. 34
Buyko, Ekaterina 67
Campo-Ávila, José del 90
Caporossi, Gilles 80
Carmona-Cejudo, José M. 90
Ceccon, Stefano 101
Cesarini, Mirko 270
Crabb, David 101
Cule, Boris 113
Daliot, Or 34
Dasu, Tamraparni 125
De Knijf, Jeroen 138
Domínguez, Manuel 10
Duplisea, Daniel 352
Fiosina, Jelena 150
Fiosins, Maksims 150
Gaber, Mohamed Medhat 22
Gama, João 90, 162
Gamper, Hannes 413
Garland, Joshua 173
Garway-Heath, David 101
Goethals, Bart 113, 138
Gomes, Carla P. 8
Grün, Sonja 55
Hahn, Udo 67
Hammer, Barbara 185
Hassan, Fadratul Hafinaz 198
Hollmén, Jaakko 10
Honkela, Timo 413
Höppner, Frank 210
Klawonn, Frank 210, 234
Knobbe, Arno 376
Kölling, Jan 258
Koopman, Arne 376
Kosina, Petr 162
Kötter, Tobias 43, 306
Krempl, Georg Matthias 222
Krishnamurthy, Avanthi 364
Krishnan, Shankar 125
Krone, Martin 234
Laaksonen, Jorma 413
Langenkämper, Daniel 258
Lastra, Gerardo 246
Leblay, Christophe 80
Leray, Philippe 401
Liekens, Anthony 138
Linde, Jörg 67
Liu, Xiaohui 9
Loyek, Christian 258
Luaces, Oscar 246
Lührs, Thorsten 234
Martínez-Marchena, Ildefonso 294
May, Sigrun 210
Menasalvas, Ernestina 22
Mercorio, Fabio 270
Mezzanzanica, Mario 270
Miao, Shengfa 376
Mokbel, Bassam 185
Molina, Martin 282
Morales-Bueno, Rafael 90
Mora-López, Llanos 294
Murty, M. Narasimha 328
Nagel, Uwe 306
Nattkemper, Tim W. 258
Niehaus, Karsten 258
Obladen, Bas 376
Parodi, Enrique 282
Piatek, Dawid 306
Piliougine, Michel 294
Pomann, Gina Maria 125
Prada, Miguel Angel 10
Priebe, Steffen 67
Quevedo, Jose Ramon 246
Rahmani, Hossein 318
Ritter, Christiane 234
Schleif, Frank-Michael 185
Sharma, Govind 328
Sidrach-de-Cardona, Mariano 294
Sousa, Pedro A.C. 22
Stent, Amanda 282
Sulkava, Mika 10, 340
Suresh, V. 364
Talonen, Jaakko 340
Tassenoy, Sven 113
Thiel, Kilian 306
Tucker, Allan 101, 198, 352
Vanschoren, Joaquin 376
Veni Madhavan, C.E. 364
Verboven, Sabine 113
Vespier, Ugo 376
Wiswedel, Bernd 388
Yasin, Amanullah 401
Zhang, He 413
Zhu, Xibin 185
388 401