Lecture Notes in Bioinformatics
6160
Edited by S. Istrail, P. Pevzner, and M. Waterman
Editorial Board: A. Apostolico, S. Brunak, M. Gelfand, T. Lengauer, S. Miyano, G. Myers, M.-F. Sagot, D. Sankoff, R. Shamir, T. Speed, M. Vingron, W. Wong
Subseries of Lecture Notes in Computer Science
Francesco Masulli, Leif E. Peterson, Roberto Tagliaferri (Eds.)

Computational Intelligence Methods for Bioinformatics and Biostatistics
6th International Meeting, CIBB 2009
Genoa, Italy, October 15-17, 2009
Revised Selected Papers
Series Editors
Sorin Istrail, Brown University, Providence, RI, USA
Pavel Pevzner, University of California, San Diego, CA, USA
Michael Waterman, University of Southern California, Los Angeles, CA, USA

Volume Editors
Francesco Masulli
Università di Genova, DISI, Via Dodecaneso 35, 16146 Genoa, Italy
E-mail: [email protected]

Leif E. Peterson
Cornell University, Weill Cornell Medical College, TMHRI
6565 Fannin, Suite MGJ6-031, Houston, Texas 77030, USA
E-mail: [email protected]

Roberto Tagliaferri
Università di Salerno, DMI, Via Ponte don Melillo, 84084 Fisciano (SA), Italy
E-mail: [email protected]

Library of Congress Control Number: 2010930710
CR Subject Classification (1998): F.1, J.3, I.4, I.2, I.5, F.2
LNCS Sublibrary: SL 8 – Bioinformatics
ISSN 0302-9743
ISBN-10 3-642-14570-1 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-14570-4 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.

springer.com
© Springer-Verlag Berlin Heidelberg 2010
Printed in Germany
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper 06/3180
Preface
This volume contains a selection of the best contributions delivered at the 6th International Meeting on Computational Intelligence Methods for Bioinformatics and Biostatistics (CIBB 2009), held at the Oratorio San Filippo Neri in Genoa (Italy) during October 15-17, 2009. The CIBB meeting series is organized by the Special Interest Group on Bioinformatics of the International Neural Network Society (INNS) to provide a forum open to researchers from different disciplines to present and discuss problems concerning computational techniques in bioinformatics, systems biology and medical informatics, with a particular focus on neural networks, machine learning, fuzzy logic, and evolutionary computational methods. From 2004 to 2007, CIBB meetings were held, with an increasing number of participants, in the format of a special session of larger conferences, namely WIRN 2004 in Perugia, WILF 2005 in Crema, FLINS 2006 in Genoa and WILF 2007 in Camogli. Following the great success of the WILF 2007 special session, which included 26 highly rated papers, we launched the first autonomous CIBB conference, held in 2008 in Vietri. CIBB 2009 attracted 57 paper submissions from all over the world. A rigorous peer-review selection process was applied to select the papers included in the program of the conference, and this volume collects the best contributions presented there. The volume also includes one keynote presentation and two tutorial presentations. The success of CIBB 2009 is to be credited to the contribution of many people. First, we would like to thank the organizers of the special sessions for attracting so many good papers, which extended the focus of the main topics of CIBB. Second, special thanks are due to the Program Committee members and reviewers for providing high-quality reviews. Last but not least, we would like to thank the keynote speakers Gilles Bernot (University of Nice Sophia Antipolis, France) and Taishin Nomura (Osaka University, Japan), and the tutorial presenters Fioravante Patrone (University of Genoa, Italy), and Santo Motta and Francesco Pappalardo (University of Catania, Italy).
October 2009
Francesco Masulli Leif Peterson Roberto Tagliaferri
Organization
The 6th CIBB meeting was a joint operation of the Special Interest Groups on Bioinformatics and Biopattern of INNS and of the Task Force on Neural Networks of the IEEE CIS Technical Committee on Bioinformatics and Bioengineering with the collaboration of the Gruppo Nazionale Calcolo Scientifico, the Italian Neural Networks Society, the Department of Computer and Information Sciences, University of Genoa, Italy, and the Department of Mathematics and Computer Science, University of Salerno, Italy, and with the technical sponsorship of the Human Health Foundation Onlus, the Italian Network for Oncology Bioinformatics, the University of Genoa, Italy, the IEEE CIS and the Regione Liguria.
Conference Chairs
Francesco Masulli (University of Genoa, Italy, and Temple University, Philadelphia, USA)
Leif E. Peterson (Methodist Hospital Research Institute, Houston, USA)
Roberto Tagliaferri (University of Salerno, Italy)

CIBB Steering Committee
Pierre Baldi (University of California, Irvine, USA)
Alexandru Floares (Oncological Institute Cluj-Napoca, Romania)
Jon Garibaldi (University of Nottingham, UK)
Francesco Masulli (University of Genoa, Italy, and Temple University, Philadelphia, USA)
Roberto Tagliaferri (University of Salerno, Italy)

Program Committee
Klaus-Peter Adlassnig (Medical University of Vienna, Austria)
Gilles Bernot (University of Nice Sophia Antipolis, France)
Domenico Bordo (National Cancer Research Institute, Genoa, Italy)
Mario Cannataro (University of Magna Graecia, Catanzaro, Italy)
Giovanni Cuda (University of Magna Graecia, Catanzaro, Italy)
Joaquin Dopazo (C.I. Príncipe Felipe, Valencia, Spain)
Enrico Formenti (University of Nice Sophia Antipolis, France)
Antonio Giordano (University of Siena, Italy, and Sbarro Institute for Cancer Research and Molecular Medicine, Philadelphia, USA)
Nicolas Le Novère (Wellcome-Trust Genome Campus, Hinxton, UK)
Pietro Liò (University of Cambridge, UK)
Giancarlo Mauri (University of Milano Bicocca, Italy)
Oleg Okun (University of Oulu, Finland)
Giulio Pavesi (University of Milan, Italy)
David Alejandro Pelta (University of Granada, Spain)
Jagath Rajapakse (Nanyang Technological University, Singapore)
Volker Roth (ETH Zurich, Switzerland)
Giuseppe Russo (Temple University, Philadelphia, USA)
Anna Tramontano (Sapienza University, Rome, Italy)
Giorgio Valentini (University of Milan, Italy)
Gennady M. Verkhivker (University of Kansas and UC San Diego, USA)
L. Gwenn Volkert (Kent State University, Kent, USA)
Special Session Organizers
C. Angelini, P. Liò, L. Milanesi: Combining Bayesian and Machine Learning Approaches in Bioinformatics: State of Art and Future Perspectives
A. Floares, F. Manolache, A. Baughman: Intelligent Systems for Medical Decisions Support
F. Patrone: Using Game-Theoretical Tools in Bioinformatics
V.P. Plagianakos, D.K. Tasoulis: Data Clustering and Bioinformatics
Referees (in addition to previous committees)
R. Avogadri, J. Bacardit, M. Biba, P. Chen, A. Ciaramella, E. Cilia, A. d'Acierno, D. di Bernardo, G. Di Fatta, P. Ferreira, A. Fiannaca, M. Filippone, B. Fischer, S. Gaglio, C. Guziolowski, P.H. Guzzi, G. Lo Bosco, L. Milanesi, A. Maratea, G. Pollastri, D. Trudgian
Local Scientific Secretary
Paolo Romano (National Cancer Research Institute, Genoa, Italy)
Stefano Rovetta (University of Genoa, Italy)

Congress Management
Davide Chicco (University of Genoa, Italy)
V.N. Manjunath Aradhya (Dayananda Sagar College of Engg, Bangalore, India, and University of Genoa, Italy)
Laura Montanari (University of Genoa, Italy)
Maura E. Monville (University of Genoa, Italy)
Daniela Peghini (University of Genoa, Italy)
Financing Institutions
DISI, Department of Computer and Information Sciences, University of Genoa, Italy
DMI, Department of Mathematics and Computer Science, University of Salerno, Italy
GNCS, Gruppo Nazionale Calcolo Scientifico, Italy
Table of Contents
Tools for Bioinformatics

The ImmunoGrid Simulator: How to Use It ............................... 1
Francesco Pappalardo, Mark Halling-Brown, Marzio Pennisi,
Ferdinando Chiacchio, Clare E. Sansom, Adrian J. Shepherd,
David S. Moss, Santo Motta, and Vladimir Brusic

Improving Coiled-Coil Prediction with Evolutionary Information ....... 20
Piero Fariselli, Lisa Bartoli, and Rita Casadio

Intelligent Text Processing Techniques for Textual-Profile Gene
Characterization ..................................................... 33
Floriana Esposito, Marenglen Biba, and Stefano Ferilli

SILACAnalyzer - A Tool for Differential Quantitation of Stable
Isotope Derived Data ................................................. 45
Lars Nilse, Marc Sturm, David Trudgian, Mogjiborahman Salek,
Paul F.G. Sims, Kathleen M. Carroll, and Simon J. Hubbard

Gene Expression Analysis

Non-parametric MANOVA Methods for Detecting Differentially
Expressed Genes in Real-Time RT-PCR Experiments ...................... 56
Niccolò Bassani, Federico Ambrogi, Roberta Bosotti,
Matteo Bertolotti, Antonella Isacchi, and Elia Biganzoli

In Silico Screening for Pathogenesis Related-2 Gene Candidates in
Vigna Unguiculata Transcriptome ...................................... 70
Ana Carolina Wanderley-Nogueira, Nina da Mota Soares-Cavalcanti,
Luis Carlos Belarmino, Adriano Barbosa-Silva, Ederson Akio Kido,
Semiramis Jamil Hadad do Monte, Valesca Pandolfi,
Tercilio Calsa-Junior, and Ana Maria Benko-Iseppon

Penalized Principal Component Analysis of Microarray Data ........... 82
Vladimir Nikulin and Geoffrey J. McLachlan

An Information Theoretic Approach to Reverse Engineering of
Regulatory Gene Networks from Time-Course Data ....................... 97
Pietro Zoppoli, Sandro Morganella, and Michele Ceccarelli
New Perspectives in Bioinformatics

On the Use of Temporal Formal Logic to Model Gene Regulatory
Networks ............................................................ 112
Gilles Bernot and Jean-Paul Comet

Predicting Protein-Protein Interactions with K-Nearest Neighbors
Classification Algorithm ............................................ 139
Mario R. Guarracino and Adriano Nebbia

Simulations of the EGFR - KRAS - MAPK Signalling Network in Colon
Cancer. Virtual Mutations and Virtual Treatments with Inhibitors
Have More Important Effects Than a 10 Times Range of Normal
Parameters and Rates Fluctuations ................................... 151
Nicoletta Castagnino, Lorenzo Tortolina, Roberto Montagna,
Raffaele Pesenti, Anahi Balbi, and Silvio Parodi

Special Session on "Using Game-Theoretical Tools in Bioinformatics"

Basics of Game Theory for Bioinformatics ............................ 165
Fioravante Patrone

Microarray Data Analysis via Weighted Indices and Weighted Majority
Games ............................................................... 179
Roberto Lucchetti and Paola Radrizzani

Special Session on "Combining Bayesian and Machine Learning
Approaches in Bioinformatics: State of Art and Future Perspectives"

Combining Replicates and Nearby Species Data: A Bayesian
Approach ............................................................ 191
Claudia Angelini, Italia De Feis, Viet Anh Nguyen,
Richard van der Wath, and Pietro Liò

Multiple Sequence Alignment with Genetic Algorithms ................. 206
Marco Botta and Guido Negro

Special Session on "Data Clustering and Bioinformatics" (DCB 2009)

Multiple Clustering Solutions Analysis through Least-Squares
Consensus Algorithms ................................................ 215
Loredana Murino, Claudia Angelini, Ida Bifulco, Italia De Feis,
Giancarlo Raiconi, and Roberto Tagliaferri
Projection Based Clustering of Gene Expression Data ................. 228
Sotiris K. Tasoulis, Vassilis P. Plagianakos, and Dimitris K. Tasoulis

Searching a Multivariate Partition Space Using MAX-SAT .............. 240
Silvia Liverani, James Cussens, and Jim Q. Smith

A Novel Approach for Biclustering Gene Expression Data Using
Modular Singular Value Decomposition ................................ 254
V.N. Manjunath Aradhya, Francesco Masulli, and Stefano Rovetta

Special Session on "Intelligent Systems for Medical Decisions
Support" (ISMDS 2009)

Using Computational Intelligence to Develop Intelligent Clinical
Decision Support Systems ............................................ 266
Alexandru G. Floares

Different Methodologies for Patient Stratification Using Survival
Data ................................................................ 276
Ana S. Fernandes, Davide Bacciu, Ian H. Jarman, Terence A. Etchells,
José M. Fonseca, and Paulo J.G. Lisboa

3-D Mouse Brain Model Reconstruction from a Sequence of 2-D Slices
in Application to Allen Brain Atlas ................................. 291
Anton Osokin, Dmitry Vetrov, and Dmitry Kropotov

A Proposed Knowledge Based Approach for Solving Proteomics
Issues .............................................................. 304
Antonino Fiannaca, Salvatore Gaglio, Massimo La Rosa, Daniele Peri,
Riccardo Rizzo, and Alfonso Urso

Author Index ........................................................ 319
The ImmunoGrid Simulator: How to Use It

Francesco Pappalardo (1,3), Mark Halling-Brown (6), Marzio Pennisi (1), Ferdinando Chiacchio (1), Clare E. Sansom (4), Adrian J. Shepherd (4), David S. Moss (4), Santo Motta (1,2), and Vladimir Brusic (5)

1 Department of Mathematics and Computer Science, University of Catania
2 Faculty of Pharmacy, University of Catania
3 Istituto per le Applicazioni del Calcolo Mauro Picone, CNR, Roma
4 Department of Biological Sciences and Institute of Structural and Molecular Biology, Birkbeck College, University of London
5 Cancer Vaccine Center, Dana-Farber Cancer Institute, Boston
6 The Institute of Cancer Research, Surrey

{mpennisi,francesco,motta,fchiacchio}@dmi.unict.it,
{a.shepherd,d.moss,c.sansom}@mail.cryst.bbk.ac.uk,
[email protected], vladimir [email protected]

Abstract. In this paper we present the ImmunoGrid project, whose goal is to develop an immune system simulator that integrates molecular and system level models with Grid computing resources for large-scale tasks and databases. We introduce the models and the technologies used in the ImmunoGrid Simulator, showing how to use them through the ImmunoGrid web interface. The ImmunoGrid project has shown that simulators can be used in conjunction with grid technologies for drug and vaccine discovery, demonstrating that it is possible to drastically reduce the development time of such products.
1 Introduction
The ImmunoGrid project is a three-year project funded by the FP6 programme of the European Union which established an infrastructure for the simulation of the immune system. One of the major objectives of the ImmunoGrid project is to help the development of vaccines, drugs and immunotherapies. The unique component of this project is the integration of simulations of immune system processes at the molecular, cellular and organ (system) levels for various applications. The project is a web-based implementation of the Virtual Human Immune System using Grid technologies. It adopts a modular structure that can be easily extended and modified. Depending on users' needs, it can be used as an antigen analyzer (T-cell and B-cell), antigen processing simulator, infection simulator, cancer simulator, allergy simulator, vaccine designer, etc. While immune system models and simulators have existed for many years, their use, from a computational point of view, has been limited by the amount of computational effort needed to simulate a realistic immunological scenario.
In recent years, Grid technologies have enabled access to powerful, distributed computational resources. Partly as a consequence, the use of models and simulators in immunology is becoming increasingly relevant to the health and life sciences. Nevertheless, the lack of communication between modelers and life scientists (biologists, medical doctors, etc.) represents an important drawback that inhibits the wider use of models and simulators. The ImmunoGrid project was intended to develop an easy-to-access simulator capable of showing, using selected descriptive and predictive models, that Information and Communication Technology (ICT) tools can be useful in the life sciences. Existing models of specific pathologies have been upgraded, verified and integrated via a web interface for scientific (predictive models) and educational (descriptive models) purposes. New models have also been developed. Descriptive models are tuned to reproduce data from the literature. Models can also be used to predict the output of a new experiment: if the model predictions are confirmed by wet experiments, the model can be used as a predictive model. Predictive models require fine tuning of the parameters on existing data, which in turn implies longer development times. The ImmunoGrid simulator also includes a model (cancer vaccination) whose predictions have been experimentally validated. Academics and industrial vaccine developers are the primary users to whom the simulator is addressed. In addition, the simulator is of benefit to students, as it can easily be used in academic courses to explain how the immune system works. In the present paper we describe the ImmunoGrid simulator and its components. We show how to use the different models included in the ImmunoGrid simulator; for the description of each model the reader should refer to the published papers cited in the references. The paper is organized as follows: section 1 introduces the ImmunoGrid project; section 2 describes the technologies underlying the ImmunoGrid simulator; section 3 is devoted to explaining how to use the ImmunoGrid simulator; and in section 4 final conclusions are drawn.
2 The ImmunoGrid Simulator
The ImmunoGrid simulator uses a modular strategy to integrate molecular and system models with grid and web-based technologies in order to provide an easily extensible and accessible computational framework. System-level modeling is used to model biological events at the cellular and organ scale. Its outcome consists of system models and software prototypes of the immune system able to reproduce and predict immune responses to viruses, analyze MHC diversity and its effects on host-pathogen interactions, represent B-cell maturation, and simulate both cellular and humoral immune responses. The cores of the system-level modeling of the ImmunoGrid Simulator are based on the C-ImmSim and SimTriplex computational models [1, 2]. C-ImmSim reproduces
the immune responses to bacteria and viruses (e.g., HIV-1), and SimTriplex models tumor growth and responses to immunization. Molecular-level models are used to simulate the immune system's sequence-based processing and to discriminate self and non-self proteins at the molecular level. Various modeling paradigms are used, such as Artificial Neural Networks (ANN), matrices, Hidden Markov Models (HMM), Support Vector Machines (SVM) and other statistical techniques. These models focus on predictions of T-cell and B-cell epitopes. CBS prediction tools make up the core of the molecular-level models [3, 4]. Both pre-calculated models and online job submission models are supplied from the web for different users. Standardization of concepts and rules for the identification, description and classification of biological components and processes has been carried out to achieve easy cooperation and communication among the involved partners. A common conceptual model, terminology and standards are used to favor the integration of different models into a unique infrastructure. For this purpose, definitions and ontologies are largely based on IMGT® [5, 6], which has become a widely accepted standard in immunology.
2.1 System Level Models
The immune system is a highly distributed system capable of acting against foreign pathogens. Despite the lack of a central control system, it performs complex functions at a system level efficiently and effectively [7]. The principal mechanisms by which the immune system acts against harmful viruses, bacteria and microorganisms have been known for decades; however, our knowledge of the immune system remains incomplete and imprecise. The main goal of mathematical and computational modeling in the life sciences and, in particular, in immunology is to provide a formal description of the underlying organizational principles of the immune system and the relationships between its components. This description needs to capture the complexity of the problem at the right level of abstraction in order to coherently describe and reproduce immune system behavior. To use mathematical models in immunology, both the strengths and weaknesses of the methodologies involved and the source data used for modeling have to be deeply understood and analyzed. For example, the source data often need to be properly treated (normalization, filtering, or other pre-processing) prior to being used for model development. Mathematical models of the immune system evolved from classical models using differential equations, difference equations, or cellular automata to model a relatively small number of interactions, molecules, or cell types involved in immune responses. Descriptions of the major mathematical models for immunological applications can be found in [8, 9]. The key enabling technologies of genomics [10], proteomics [11, 12], bioinformatics [13] and systems biology (including genomic pathway analysis) [14] in immunology have provided large quantities of data describing molecular profiles of both physiological and pathological states.
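As a minimal illustration of the classical differential-equation approach mentioned above, the sketch below integrates a generic two-population antigen/effector model with explicit Euler steps. The equations and all parameter values are textbook-style placeholders chosen for this example; they do not come from any ImmunoGrid model.

```python
# Minimal sketch of a classical ODE immune-response model (not an
# ImmunoGrid model): antigen A grows logistically and is cleared by
# effector cells E, which proliferate in response to antigen.
def simulate(a0=1e3, e0=1e2, r=0.8, k=1e6, c=1e-4, p=5e-5, d=0.1,
             dt=0.01, steps=5000):
    a, e = a0, e0
    history = []
    for step in range(steps):
        da = r * a * (1 - a / k) - c * a * e   # antigen growth minus killing
        de = p * a * e - d * e                 # antigen-driven proliferation minus death
        a = max(a + da * dt, 0.0)
        e = max(e + de * dt, 0.0)
        history.append((step * dt, a, e))
    return history

if __name__ == "__main__":
    # Print every 500th Euler step of the simulated response.
    for t, a, e in simulate()[::500]:
        print(f"t={t:6.1f}  antigen={a:14.1f}  effectors={e:14.1f}")
```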
Old and new technologies, such as multiparametric flow cytometry, nanotechnology for the quantitation of cytokine production, ELISPOT, intra-cytoplasmic cytokine staining, mRNA/micro-RNA-based assays and laser scanning cytometry, keep improving and expanding, making available new data and information that permit a more detailed modeling of immune processes [15, 16]. Mathematical modeling of the immune system has grown to include the extensive use of simulations [9] and the iterative refinement of immune system models at both the molecular [17] and system level [18]. Only models that can be practically applied to vaccine discovery, formulation, optimization and scheduling can be considered; these models must, therefore, be at the same time biologically realistic and mathematically tractable [19]. The ImmunoGrid system-level modeling core has been inspired by IMMSIM, a cellular automata model of the immune system developed by Celada and Seiden [20]. In IMMSIM entities are followed individually, and the stochastic rules governing the interactions are represented by sophisticated algorithms. The model uses a discrete lattice grid, and more than one entity can be found at every site of the lattice. It also introduces the use of bit-string representations of receptors and ligands. IMMSIM was a conceptually important advance, because it provided a general modeling framework that could be used for multiple studies and incorporated enough immunological detail to be applied to real immunological problems. The use of spatial physical models such as partial differential equations has also been added to describe lymph nodes, chemotaxis, cell movement and diffusion. The two key models of the immune system used by ImmunoGrid are C-ImmSim [1] and SimTriplex [2]. These models describe both adaptive and innate immunity, focusing in particular on modeling the two branches of adaptive immunity (humoral and cellular immunity). C-ImmSim is a multi-purpose model used for the modeling of primary and secondary immune responses and of bacterial and viral infection. It has been used for the description of HIV infection [21, 22], Epstein-Barr virus infection [23] and cancer immunotherapy [24]. The first release of SimTriplex was derived from C-ImmSim and was focused on predictive modeling; later versions evolved independently from the progenitor. The SimTriplex model has been used as a predictive model for the immunoprevention of cancer elicited by a vaccine [2, 18, 25, 26] and more recently as a descriptive model of atherosclerosis processes [27]. From the SimTriplex model, new ImmunoGrid physical models describing tumor growth driven by nutrient or oxygen starvation (based on the lattice Boltzmann method [28, 29]) have been developed. These models enable the simulation of tumor growth for both benign and malignant tumors. ImmunoGrid also has a lymph node simulator derived from C-ImmSim, which offers a mechanistic view of the immune processes that occur in lymph nodes. The models used in ImmunoGrid have been carefully validated experimentally and incrementally refined (predict-test-refine), allowing the investigation of
various problems, such as the study of mammary cancer immunoprevention, therapeutic approaches to melanoma and lung cancer, immunization with influenza epitopes, the study of HIV infection and HAART therapy, and the modeling of atherosclerosis. The immune responses to an immunopreventive vaccine against mammary carcinoma in HER-2/neu mice have been accurately reproduced for up to 52 weeks of age [2, 18, 25, 26]. Modeling of HIV-1 infection in untreated patients as well as in patients receiving HAART has been obtained by tuning with data from the literature as well as clinical observations [21, 22]. The atherogenesis descriptive model was tuned to match published experimental data [27]. The descriptive lymph-node simulator reproduces experimental data published in [30]-[32].
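The bit-string representation introduced by IMMSIM and inherited by the models above can be illustrated with a short sketch: receptors and ligands are fixed-length binary strings, affinity is the number of complementary bits (the Hamming distance between the two strings), and binding is accepted stochastically once a match threshold is reached. The string length, threshold and probability rule below are illustrative assumptions, not the actual C-ImmSim/SimTriplex parameters.

```python
import random

BITS = 12   # illustrative receptor length; not the C-ImmSim value

def affinity(receptor: int, ligand: int) -> int:
    """Number of complementary bit positions (Hamming distance)."""
    return bin((receptor ^ ligand) & ((1 << BITS) - 1)).count("1")

def binds(receptor: int, ligand: int, threshold: int = 10) -> bool:
    """Stochastic binding rule: perfect complements always bind;
    partial matches above the threshold bind with decreasing probability."""
    m = affinity(receptor, ligand)
    if m < threshold:
        return False
    return random.random() < 2.0 ** (m - BITS)   # probability 1.0 for a perfect match

# Example: a clone whose receptor is the exact complement of the epitope.
epitope = 0b101100111010
clone = ~epitope & ((1 << BITS) - 1)
print(affinity(clone, epitope), binds(clone, epitope))   # 12 True
```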
2.2 Molecular Level Models
One of the most important tasks of the immune system is to discriminate between self and non-self patterns. This specialized sequence-based processing primarily occurs in the adaptive immune response and involves multiple mechanisms (short-range non-covalent interactions, hydrogen bonding, van der Waals interactions, etc.). The adaptive immune system has two different arms: humoral immunity, mediated by antibodies, and cellular immunity, mediated by T cells. Antibodies are released by B cells and can neutralize pathogens and antigens outside the cells. T cells act to neutralize intracellular pathogens and cancers by killing infected or malfunctioning cells (cytotoxic T cells), and can also provide a boost for both humoral and cellular immune responses (T helper cells). B cells use receptors that have the same shape as the antibodies they release after differentiation into plasma cells. They are specialized in recognizing epitopes represented by 3D shapes on the antigen surface, mostly discontinuous parts of antigen sequences. T-cell receptors, in contrast, are specialized in recognizing epitopes composed of short fragments of antigenic peptides presented on the surface of infected cells. Cytotoxic CD8 T cells (CTLs) recognize and target cells that express foreign T-cell epitopes in conjunction with class I MHC complexes. Helper CD4 T cells recognize class II MHC complexes presented on the surfaces of specialized antigen-presenting cells such as B cells, macrophages and dendritic cells, and provide the co-stimulatory signals needed for the activation of B cells and CTLs. The combinatorial complexity of antigen processing and presentation results in a large variety of cell clones that recognize, act on and maintain memory for specific antigens. The prediction and analysis of MHC-binding peptides and T-cell epitopes thus represents a problem suitable for large-scale screening and computational analysis. Many computational models of antigen processing and presentation have been developed during the last two decades. Specialized databases have been used to store information on MHC ligands and T-cell epitopes [33]-[36]. Multiple
prediction tools, such as quantitative matrices, artificial neural networks, hidden Markov models, support vector machines, molecular modeling and others, have been developed [43, 44] and made available to the scientific community through web servers. The molecular-level models used in ImmunoGrid are primarily based on the CBS tools [37]-[42], but several other predictive models are included as well [45]-[47]. Exhaustive validation with carefully selected experimental data has been used to select the best predictive model for each biological scenario. Focusing on vaccine research, the ImmunoGrid molecular models have been applied to analyze 40,000 influenza proteins for peptides that bind to some 50 HLA alleles, showing that predictions of peptide binding to HLA class I molecules are of high accuracy and therefore directly applicable to the identification of vaccine targets [48]-[50]. Furthermore, these models are stable and their performance can be reproduced across several major HLA class I variants [51]. Prediction of MHC class II ligands and T-cell epitopes represents a more difficult challenge, since MHC class II binding predictions are of much lower accuracy than those for MHC class I [52, 53]. However, this accuracy can be improved by the use of genetic algorithms [54], Gibbs sampling [55] or by combining prediction results from multiple predictors [56]. HLA class I predictions can currently be used for the prediction of positives (binders and T-cell epitopes), while HLA class II predictions can be used for the elimination of obvious negatives (non-binders, non-T-cell epitopes).
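A quantitative matrix, the simplest of the prediction paradigms listed above, can be sketched as follows: each position of a 9-mer peptide contributes an independent score read from a position-specific matrix, and the highest-scoring windows of an antigen are reported as candidate binders. The matrix here is randomly generated for illustration; a real tool such as NetMHC derives its weights from experimental binding data.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LEN = 9

# Toy position-specific scoring matrix: one score per (position, residue).
# Values are invented for illustration, not trained on binding data.
random.seed(0)
pssm = [{aa: random.uniform(-1, 1) for aa in AMINO_ACIDS}
        for _ in range(PEPTIDE_LEN)]

def score(peptide: str) -> float:
    """Sum of per-position scores; higher means stronger predicted binding."""
    assert len(peptide) == PEPTIDE_LEN
    return sum(pssm[i][aa] for i, aa in enumerate(peptide))

def scan(protein: str):
    """Score every 9-residue window of an antigen sequence."""
    for i in range(len(protein) - PEPTIDE_LEN + 1):
        pep = protein[i:i + PEPTIDE_LEN]
        yield score(pep), i + 1, pep   # score, 1-based position, peptide

# Report the three highest-scoring candidate binders of a toy antigen.
top = sorted(scan("MKTIIALSYIFCLVFADYKDDDDKGS"), reverse=True)[:3]
for s, pos, pep in top:
    print(f"{pos:3d}  {pep}  {s:+.2f}")
```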
2.3 Integration with the Grid Framework
Newly designed integrated web-based systems for accessing highly optimized computational resources, storing and managing data, and automating laboratory work are nowadays emerging to support and improve research [57, 58]. The main goals of these systems are to integrate different hardware resources and to hide the underlying heterogeneity behind a homogeneous interface. These systems have to deal with the bottleneck arising from the huge and rapidly growing quantities of data that current research challenges require. Developments in the fields of information and communication technologies and computational intelligence [59] have given rise to the concept of a "Virtual Laboratory". A Virtual Laboratory can be defined as an integrated intelligent environment for the systematic analysis and production of high-dimensional, quality-assured data, in contrast with the common approach in which independent exploratory studies are combined in an ad hoc manner [60]. The goal of the ImmunoGrid project is to address many specific vaccine questions through the integration of molecular and system level models with Grid computing resources for large-scale tasks and databases. Even if grid infrastructures are not always essential for computational projects, the ImmunoGrid simulator needs the integration of grid technologies for multiple reasons. One of the essential requirements of the project is the need to run large numbers of simulations: multiple simulations over large numbers of individuals allow the exploration of the parameter space.
Increasing the size of the simulation space is an additional aim of the ImmunoGrid project. Larger simulations can give a more coherent representation of the "natural scale" of the immune system, and thus more powerful machines, such as those usually available in grid environments, have to be used. The required computational resources may vary substantially from one simulation to another, depending on the size and complexity of the question to be addressed; for this reason, some resources will be more appropriate than others for certain problems. A Grid environment can easily supply access to a more diverse set of resources than could be accessed otherwise. The ImmunoGrid grid implementation has been designed to provide access to multiple diverse grid middlewares through a single interface, hiding the complexity of the implementation from the user. Without this approach, the average user would have to deal with complexities such as virtualization software, application servers, authentication, certificates and so on. Using the ImmunoGrid integrated simulator, the final user only sees a front end that is as simple to use as an average web form. To achieve this goal, a job broker/launcher has been implemented in the ImmunoGrid Web Server (IWS) to access PI2S2 [61], UNICORE [62] or AHE [63] clients via command-line tools, and web services via the standard Simple Object Access Protocol (SOAP). The gLite middleware, mandatory for accessing the PI2S2 Cometa Grid Infrastructure [61], has been integrated with the IWS. Other middlewares that have been integrated are the Application Hosting Environment (AHE), the DESHL client [64] (UNICORE) and simple web service access. The AHE permits launching jobs on both Globus-enabled [66] resources and local (non-national grid) GridSAM [65] resources. GridSAM is a web service capable of launching jobs on local systems and of building Resource Specification Language (RSL) requests to Globus-enabled machines. To allow access to supercomputing resources at the CINECA DEISA site [67], integration of a DESHL client (a command-line client able to launch jobs on UNICORE-enabled machines) has also been provided. A simulator web service implemented in Java, Perl and PHP has been created to allow resources that do not have any grid middleware to be made available through the IWS. This service runs on an application server on the host resource and forks jobs to the local system. Interested readers can find further details in [69-71].
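A minimal sketch of the broker logic described above is given below: each middleware is represented by the command-line client the IWS would invoke, and the broker picks a resource according to a pre-established criterion. The wrapper-script names and the selection rule are hypothetical placeholders; the production launcher, described in the next section, is written in PHP and drives the actual gLite, DESHL and AHE clients.

```python
import subprocess

# Hypothetical wrapper scripts standing in for the real middleware
# clients; site-specific arguments and certificates are omitted here.
MIDDLEWARE_CLIENTS = {
    "pi2s2":  ["glite-submit-wrapper.sh"],   # gLite / PI2S2 (hypothetical wrapper)
    "deisa":  ["deshl-submit-wrapper.sh"],   # UNICORE via DESHL (hypothetical wrapper)
    "london": ["ahe-submit-wrapper.sh"],     # AHE / GridSAM (hypothetical wrapper)
}

def choose_resource(job_size: int) -> str:
    """Pre-established criterion (illustrative): large jobs go to the
    supercomputing site, small ones to local grid resources."""
    return "deisa" if job_size > 1000 else "pi2s2"

def launch(job_file: str, job_size: int) -> str:
    """Submit a prepared job description and return the middleware's
    job identifier, which the IWS stores for later status queries."""
    resource = choose_resource(job_size)
    cmd = MIDDLEWARE_CLIENTS[resource] + [job_file]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip()
```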
2.4 The Web Interface
The ImmunoGrid web interface offers an educational and a research section. The educational section is dedicated to students, whereas the research section is devoted to researchers and requires login credentials. Jobs coming from the educational section are only allowed access to local IWS resources, as they are normally small, low-complexity jobs. Jobs launched from the research section have access to the full range of resources. This approach also prevents access to the grid resources by unauthorized users.
The web interface is implemented in PHP, AJAX and DHTML to provide a modern look and feel to the site. The interface is used to create and prepare jobs to be launched onto the grid. As previously stated, communication with the different grid middlewares is completely dealt with by the IWS job launcher, which is written in PHP. The main challenge faced when implementing this grid solution has been that grid clients have to be completely accessible via the command line in order for the PHP job launcher to interact with them. Solutions to this problem have been presented in [68]. When a new job is created by the user, it is sent to the job launcher. The launcher selects an appropriate resource based on a pre-established criterion and then executes the appropriate command-line script for the chosen grid infrastructure. Any information required to uniquely identify the job, as well as the actual state of the job itself, is stored in the local database, allowing the job state to be monitored when the user issues a request to the IWS. This centralised scheduling approach allows transparent access to multiple middlewares; it is also useful for enforcing the use of a particular resource under certain circumstances. At the end of a simulation, results and log files are retrieved and saved to the IWS user space. All the authentication steps between the IWS and the different grid environments are dealt with by the job launcher. Access is granted through the use of public-key authentication and PKCS12 certificates. This process is also hidden from the final user, whose only concern is having authorized access to the ImmunoGrid website.
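The bookkeeping side of the launcher, recording each job's middleware identifier and current state in the local database so that status requests to the IWS can be answered, might look like the following sketch; SQLite and the minimal schema are assumptions made for illustration only.

```python
import sqlite3

# Minimal sketch of the local job table used for status monitoring.
db = sqlite3.connect("iws_jobs.db")
db.execute("""CREATE TABLE IF NOT EXISTS jobs (
    job_id   TEXT PRIMARY KEY,          -- middleware-specific identifier
    user     TEXT,
    resource TEXT,
    status   TEXT DEFAULT 'SUBMITTED')""")

def record_job(job_id: str, user: str, resource: str) -> None:
    """Store a freshly submitted job so it appears in 'My jobs'."""
    db.execute("INSERT INTO jobs (job_id, user, resource) VALUES (?, ?, ?)",
               (job_id, user, resource))
    db.commit()

def update_status(job_id: str, status: str) -> None:
    """Called after polling the middleware's status client for this job."""
    db.execute("UPDATE jobs SET status = ? WHERE job_id = ?", (status, job_id))
    db.commit()

def my_jobs(user: str):
    """Rows backing the 'My jobs' page for one user."""
    return db.execute("SELECT job_id, resource, status FROM jobs "
                      "WHERE user = ?", (user,)).fetchall()
```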
2.5 Predicted Epitopes Database
A database of predicted epitopes has been integrated into the ImmunoGrid project. It provides a summary of predictions for all protein antigens of a particular virus (e.g., influenza A or dengue) or all known cancer antigens for the human (HLA) or mouse (H-2) MHC molecules. This database represents all possible peptide targets from known antigens and will serve as a maximum set of compounds for designing T-cell epitope based vaccines. The advantage relative to the current state of the art is that it offers vaccine researchers an exhaustive set of targets to be considered, rather than the currently used small, incomplete and historically biased data sets.
2.6 IMGT-ONTOLOGY for Antibody Engineering
The use of the IMGT-ONTOLOGY concepts and standards provides a standardized way to compare and analyse immunoglobulin sequences in the process of antibody engineering, whatever the chain type (heavy or light) and whatever the species (e.g., murine or human). More specifically, the IMGT unique
numbering provides the delimitations of the frameworks (FR-IMGT) and complementarity-determining regions (CDR-IMGT) for the analysis of loop grafting in antibody humanization.
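Once a variable-domain sequence has been gapped according to the IMGT unique numbering, the FR-IMGT and CDR-IMGT delimitations reduce to fixed position ranges, so the regions can be sliced out mechanically. The sketch below uses the standard published IMGT delimitations and assumes the input is already IMGT-gapped to 128 positions; it is an illustration, not IMGT's own software.

```python
# Standard IMGT unique-numbering delimitations for a V domain
# (1-based, inclusive). The input must already be gapped to the
# 128-position IMGT numbering, with '.' marking gap positions.
IMGT_REGIONS = {
    "FR1-IMGT":  (1, 26),
    "CDR1-IMGT": (27, 38),
    "FR2-IMGT":  (39, 55),
    "CDR2-IMGT": (56, 65),
    "FR3-IMGT":  (66, 104),
    "CDR3-IMGT": (105, 117),
    "FR4-IMGT":  (118, 128),
}

def split_regions(gapped_seq: str) -> dict:
    """Slice an IMGT-gapped V-domain sequence into FR/CDR regions,
    dropping gap characters from each slice. Works for any chain type
    or species, which is exactly what the unique numbering affords."""
    return {name: gapped_seq[start - 1:end].replace(".", "")
            for name, (start, end) in IMGT_REGIONS.items()}
```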
3 Using the ImmunoGrid Simulator
To access the ImmunoGrid simulator web interface, users only have to type the ImmunoGrid address (http://www.ImmunoGrid.eu) into a web browser and click the "Simulators" link. Registered users can then log in to the website to gain full access to the simulator; occasional (anonymous) users can access only the Educational section of the simulator. Complete documentation on the simulator's modules is available by clicking on the "Education" link.
3.1 The Educational Simulator
Simulations produced by ImmunoGrid include results of either descriptive or predictive modeling, and selected cases have been integrated into the educational simulator. These educational examples use the concept of learning objects: students can select several experimental conditions and observe the outcomes using realistic simulations. Learning objects offer visual and interactive means to introduce key concepts, information and situated data in developing discipline-specific thinking. ImmunoGrid simulations help students develop a culture of practice and a spirit of inquiry, understand theoretical knowledge and information in immunology through encapsulated descriptions, and understand practical aspects through simulations of key equipment, tools and processes in vaccine development. The Educational Simulator can be used in courses at the partner institutions; this exploitable result may become a template for the development of a virtual teaching laboratory.
3.2 System Models Developed for Specific Diseases
System models have been built and customized to address disease-specific questions. Examples of this kind are the simulation of HIV-1 infection and related antiretroviral therapy, the simulation of Epstein-Barr virus infection, the simulation of hypersensitivity reactions, and the simulation of cancer immunotherapy for a generic non-solid tumor. The innovation lies in the general-purpose nature of this simulator and the relative ease with which it can be upgraded and customized. Two disease-specific questions were addressed to the ImmunoGrid consortium, namely to model the immune control of atherogenesis and to model an influenza vaccine which was under investigation at the Dana-Farber Cancer Institute in Boston; the results of these new investigations are now two new modules of the ImmunoGrid simulator. The viral infections section is the first simulation section one can select. Three different viral infections are available: influenza/A, HIV and Epstein-Barr virus. For each of these it is possible to run different types of simulations, choosing among various specific parameters.
Selecting the influenza simulator, it is possible to run simulations of influenza/A. Once a simulation has been selected, the interface provides a screen where the user can choose parameter values; the list of available parameters depends on the specific simulation (figure 1).
Fig. 1. The influenza/A simulator parameters form
Once the parameter values have been chosen, the interface takes care of building the Grid job and submitting it, freeing the user from having to know specific Grid procedures. The simulation is executed and the selected grid location is shown on a map. Currently, four different Grid environments are available: DEISA Grid – CINECA (Bologna, Italy); PI2S2 (Catania, Italy); Birkbeck College (London, UK); and the Dana-Farber Cancer Institute (Boston, USA). Once a job is submitted, its status can be followed in the "My jobs" section. When a job is completed, the user can analyze the results both through the web interface (visualizing graphs and movies) and by retrieving the raw data for post-processing (figure 2). The same methodology applies to the other simulations, and the user can obviously run different types of simulations at the same time. For simulations that require more time, the user is free to log out and return to the web interface later; the "My jobs" section will provide information about the job status. The next section is the cancer section, where one can choose between a generic cancer growth simulator and a breast cancer one. For the cancer growth simulation, it is possible to see pre-computed results (see figure 3) or to run a specific simulation, modifying the simulation parameters in order to explore several cancer growth scenarios. Available results are shown in an executive summary, interactive graphs and movies.
Fig. 2. Graphical visualization of influenza/A simulator results
Fig. 3. Pre-computed results of cancer growth simulations
The breast cancer in HER-2/neu mice simulator has been validated with in vivo data from the University of Bologna partner. From this section it is possible to simulate a single mammary gland, or to launch a "natural scale"
simulation using 10 mammary glands at the same time. It is possible to choose the vaccination schedule and the number of simulations, and to vary many biological parameters such as the length of the simulation, the IL-2 quantity, the number of released antibodies and so on. Results are presented through an executive summary, interactive graphs and movies. The atherogenesis simulator can be used to model the formation of atheromatous plaques in relation to the LDL concentration. Here too it is possible to choose from a wide range of atherogenesis parameters, such as the number of simulations, the length of the simulation, the LDL quantity, the IL-2 quantity and so on. When a simulation has finished, it is possible to visualize, download and play movies of the plaque formation as previously described. The last simulation is descriptive and deals with lymph node modeling: it is possible to simulate a lymph node with or without antigen infection, or just to have a look at pre-run simulations. Results are visualized using interactive graphs.
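From the interface's point of view, a run of these disease-specific simulators is essentially a named set of parameter values plus a vaccination schedule. The following sketch shows how such a set might be assembled and checked before submission; the parameter names and default values are invented for illustration and are not the simulator's actual input format.

```python
# Illustrative parameter set for a SimTriplex-style run; the names and
# defaults are invented for this sketch, not the simulator's real format.
DEFAULTS = {
    "weeks": 52,             # length of the simulation
    "il2_quantity": 1000,    # IL-2 units per injection (hypothetical unit)
    "antibody_count": 5000,  # antibodies released per plasma cell (hypothetical)
    "n_simulations": 8,      # virtual mice per job
}

def make_job(vaccination_weeks, **overrides):
    """Merge user overrides with defaults and validate the schedule."""
    params = dict(DEFAULTS, **overrides)
    if any(w < 0 or w > params["weeks"] for w in vaccination_weeks):
        raise ValueError("vaccination outside the simulated time window")
    params["schedule"] = sorted(vaccination_weeks)
    return params

# e.g. an early/late immunization protocol on 10 mammary glands:
job = make_job([2, 4, 6, 20, 22], n_simulations=10)
```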
3.3 Molecular Level Prediction Models
The CBS web-based tools provide a way to find epitopes in protein sequences. More specifically, the detection of epitopes can improve rational vaccine design and the creation of subunit or genetic vaccines, and can resolve problems related to the use of whole live or attenuated organisms. The models are divided into two classes:
– T-cell epitope prediction tools. NetChop performs predictions of cleavage sites of the human proteasome. NetMHC predicts the binding of peptides to a large number of HLA alleles. NetCTL is an integrative method using NetChop and NetMHC (integration of peptide-MHC binding prediction, proteasomal C-terminal cleavage and TAP transport efficiency). NetMHCII predicts the binding of peptides to several HLA class II alleles.
– B-cell epitope prediction tools. BepiPred offers prediction of linear epitopes, and DiscoTope specifically targets the prediction of discontinuous epitopes based on the three-dimensional structure.
All the CBS prediction tools have been integrated with the ImmunoGrid web portal to permit easy centralized access and to allow the use of Grid computational frameworks to execute the predictions. The first workflow is epitope prediction in breast cancer. Selecting the cancer section, it is possible to execute the HER2 epitope prediction. After choosing the MHC/HLA alleles (for either humans or mice), one can execute the simulation. The results show, for each protein, the position, the peptide and the list of alleles that recognize a specific peptide (promiscuity) (see figure 4). A structural display through a 3D Jmol HER2 protein model is also available. From the influenza section it is possible to proceed with the influenza epitope prediction. After choosing Human class I and II alleles, the model allows
Fig. 4. Visualization of results relative to the breast cancer epitope prediction module
Fig. 5. Structural display with protein surface highlighting for the influenza epitope prediction module
the selection of the host (avian, mammal, human), the influenza protein, the influenza subtype and the influenza isolate. Results are presented in a similar way as previously shown. A structural display (with epitope and protein surface highlighting) is also possible here (see figure 5). The workflow for HIV epitope prediction is accessible from the HIV section of the ImmunoGrid web interface. For this kind of prediction it is possible to select
the Human class I alleles (max 20 alleles), the protein (e.g., the envelope protein), the virus clade and the virus strain. A list of peptides and binding scores is then presented. The atherosclerosis epitope prediction workflow can be accessed from the Atherogenesis section of the web interface; the selection of parameters and the display of results are done in a similar way as previously presented.
Fig. 6. Example of results obtained using the atherosclerosis epitope prediction module
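The promiscuity column shown in these result tables is, for each peptide, the set of alleles whose predicted binding passes a threshold. Aggregating per-allele predictions into such a table can be sketched as follows; the input row format, the threshold and the example peptides and scores are all invented for illustration.

```python
from collections import defaultdict

def promiscuity_table(predictions, threshold=0.5):
    """predictions: iterable of (allele, position, peptide, score) rows,
    one per allele/peptide pair, as produced by a binding predictor.
    Returns (position, peptide, [alleles]) rows sorted by promiscuity."""
    table = defaultdict(set)
    for allele, pos, pep, score in predictions:
        if score >= threshold:
            table[(pos, pep)].add(allele)
    return sorted(((pos, pep, sorted(alleles))
                   for (pos, pep), alleles in table.items()),
                  key=lambda row: len(row[2]), reverse=True)

# Invented example rows; real scores would come from the CBS tools.
rows = [("HLA-A*02:01", 5, "KLLEPVLLL", 0.92),
        ("HLA-B*07:02", 5, "KLLEPVLLL", 0.61),
        ("HLA-A*02:01", 9, "VLLLGVGAV", 0.33)]
for pos, pep, alleles in promiscuity_table(rows):
    print(pos, pep, ",".join(alleles))
```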
4 Conclusions
In the present paper we have introduced the EC-funded ImmunoGrid project, describing the models and the technologies used for building the ImmunoGrid Simulator and focusing on how to use it. The ImmunoGrid project has reached some major goals: the use and integration of different grid middlewares, easy-to-use access to simulators running on grid environments, an extended ontology framework for immunology, and a general framework for simulating the immune system response. These achievements required close cooperation among the partners, who have gained significant experience in integrating technology with mathematics and computer science to describe the immune response to different pathologies. The ImmunoGrid project has proved that simulators and grid technologies can be used successfully in the framework of drug and vaccine discovery and life science research. At least 50 years of experience in the physical, chemical and engineering sciences has proven that "computer-aided" discovery, engineering, design, etc. drastically reduces the development time of a product.
Nowadays this experience can be translated to vaccine and drug discovery [72, 73]. If industry picks up the ImmunoGrid experience and those of the other ICT-VPH projects, drug development time could be reduced by a significant amount, in the range of 30-50%. The impact on European people's health and on European industrial competitiveness is potentially enormous.
References

1. Castiglione, F., Bernaschi, M., Succi, S.: Simulating the immune response on a distributed parallel computer. Int. J. Mod. Phys. C 8, 527–545 (1997)
2. Motta, S., Castiglione, F., Lollini, P., Pappalardo, F.: Modelling vaccination schedules for a cancer immunoprevention vaccine. Immunome Res. 1, 5 (2005)
3. Lin, H.H., Ray, S., Tongchusak, S., et al.: Evaluation of MHC class I peptide binding prediction servers: applications for vaccine research. BMC Immunol. 9, 8 (2008)
4. Lin, H.H., Zhang, G.L., Tongchusak, S., et al.: Evaluation of MHC-II peptide binding prediction servers: applications for vaccine research. BMC Bioinformatics 9(Suppl. 12), S22 (2008)
5. Lefranc, M.P.: IMGT®, the international ImMunoGeneTics information system: a standardized approach for immunogenetics and immunoinformatics. Immunome Res. 1, 3 (2005) [imgt.cines.fr]
6. Lefranc, M.P., Giudicelli, V., Duroux, P.: IMGT®, a system and an ontology that bridge biological and computational spheres in bioinformatics. Brief Bioinform. 9, 263–275 (2008)
7. Motta, S., Brusic, V.: Mathematical modeling of the immune system. In: Ciobanu, G., Rozenberg, G. (eds.) Modelling in Molecular Biology. Natural Computing Series, pp. 193–218. Springer, Berlin (2004)
8. Louzoun, Y.: The evolution of mathematical immunology. Immunol. Rev. 216, 9–20 (2007)
9. Castiglione, F., Liso, A.: The role of computational models of the immune system in designing vaccination strategies. Immunopharmacol. Immunotoxicol. 27, 417–432 (2005)
10. Falus, A. (ed.): Immunogenomics and Human Disease. Wiley, Hoboken (2006)
11. Purcell, A.W., Gorman, J.J.: Immunoproteomics: Mass spectrometry-based methods to study the targets of the immune response. Mol. Cell Proteomics 3, 193–208 (2004)
12. Brusic, V., Marina, O., Wu, C.J., Reinherz, E.L.: Proteome informatics for cancer research: from molecules to clinic. Proteomics 7, 976–991 (2007)
13. Schönbach, C., Ranganathan, S., Brusic, V. (eds.): Immunoinformatics. Springer, Heidelberg (2007)
14. Tegnér, J., Nilsson, R., Bajic, V.B., et al.: Systems biology of innate immunity. Cell Immunol. 244, 105–109 (2006)
15. Sachdeva, N., Asthana, D.: Cytokine quantitation: technologies and applications. Front Biosci. 12, 4682–4695 (2007)
16. Harnett, M.M.: Laser scanning cytometry: understanding the immune system in situ. Nat. Rev. Immunol. 7, 897–904 (2007)
17. Brusic, V., Bucci, K., Schönbach, C., et al.: Efficient discovery of immune response targets by cyclical refinement of QSAR models of peptide binding. J. Mol. Graph Model 19, 405–411 (2001)
18. Pappalardo, F., Motta, S., Lollini, P.L., Mastriani, E.: Analysis of vaccines schedules using models. Cell Immunol. 244, 137–140 (2006)
19. Yates, A., Chan, C.C., Callard, R.E., et al.: An approach to modelling in immunology. Brief Bioinform. 2, 245–257 (2001)
20. Celada, F., Seiden, P.E.: A computer model of cellular interaction in the immune system. Immunol. Today 13, 56–62 (1992)
21. Castiglione, F., Poccia, F., D'Offizi, G., Bernaschi, M.: Mutation, fitness, viral diversity and predictive markers of disease progression in a computational model of HIV-1 infection. AIDS Res. Hum. Retroviruses 20, 1316–1325 (2004)
22. Baldazzi, V., Castiglione, F., Bernaschi, M.: An enhanced agent based model of the immune system response. Cell Immunol. 244, 77–79 (2006)
23. Castiglione, F., Duca, K., Jarrah, A., et al.: Simulating Epstein-Barr virus infection with C-ImmSim. Bioinformatics 23, 1371–1377 (2007)
24. Castiglione, F., Toschi, F., Bernaschi, M., et al.: Computational modeling of the immune response to tumor antigens: implications for vaccination. J. Theor. Biol. 237(4), 390–400 (2005)
25. Lollini, P.L., Motta, S., Pappalardo, F.: Discovery of cancer vaccination protocols with a genetic algorithm driving an agent based simulator. BMC Bioinformatics 7, 352 (2006)
26. Pappalardo, F., Lollini, P.L., Castiglione, F., Motta, S.: Modeling and simulation of cancer immunoprevention vaccine. Bioinformatics 21, 2891–2897 (2005)
27. Pappalardo, F., Musumeci, S., Motta, S.: Modeling immune system control of atherogenesis. Bioinformatics 24, 1715–1721 (2008)
28. He, X., Luo, L.: Theory of the lattice Boltzmann method: from the Boltzmann equation to the lattice Boltzmann equation. Phys. Rev. E 56, 6811–6817 (1997)
29. Ferreira Jr., S.C., Martins, M.L., Vilela, M.J.: Morphology transitions induced by chemotherapy in carcinomas in situ. Phys. Rev. E 67, 051914 (2003)
30. Catron, D.M., Itano, A.A., Pape, K.A., et al.: Visualizing the first 50 hr of the primary immune response to a soluble antigen. Immunity 21, 341–347 (2004)
31. Garside, P., Ingulli, E., Merica, R.R., et al.: Visualization of specific B and T lymphocyte interactions in the lymph node. Science 281, 96–99 (1998)
32. Mempel, T.R., Henrickson, S.E., Von Andrian, U.H.: T-cell priming by dendritic cells in lymph nodes occurs in three distinct phases. Nature 427, 154–159 (2004)
33. Brusic, V., Rudy, G., Harrison, L.C.: MHCPEP, a database of MHC-binding peptides: update 1997. Nucleic Acids Res. 26, 368–371 (1998)
34. Rammensee, H., Bachmann, J., Emmerich, N.P., et al.: SYFPEITHI: database for MHC ligands and peptide motifs. Immunogenetics 50, 213–219 (1999)
35. Toseland, C.P., Clayton, D.J., McSparron, H., et al.: AntiJen: a quantitative immunology database integrating functional, thermodynamic, kinetic, biophysical, and cellular data. Immunome Res. 1, 4 (2005)
36. Sette, A., Bui, H., Sidney, J., et al.: The immune epitope database and analysis resource. In: Rajapakse, J.C., Wong, L., Acharya, R. (eds.) PRIB 2006. LNCS (LNBI), vol. 4146, pp. 126–132. Springer, Heidelberg (2006)
37. Nielsen, M., Lundegaard, C., Lund, O., Kesmir, C.: The role of the proteasome in generating cytotoxic T-cell epitopes: insights obtained from improved predictions of proteasomal cleavage. Immunogenetics 57, 33–41 (2005)
38. Larsen, M.V., Lundegaard, C., Lamberth, K., et al.: An integrative approach to CTL epitope prediction: a combined algorithm integrating MHC class I binding, TAP transport efficiency, and proteasomal cleavage predictions. Eur. J. Immunol. 35, 2295–2303 (2005)
39. Nielsen, M., Lundegaard, C., Lund, O.: Prediction of MHC class II binding affinity using SMM-align, a novel stabilization matrix alignment method. BMC Bioinformatics 8, 238 (2007)
40. Nielsen, M., Lundegaard, C., Blicher, T., et al.: NetMHCpan, a method for quantitative predictions of peptide binding to any HLA-A and -B locus protein of known sequence. PLoS ONE 2, e796 (2007)
41. Larsen, J.E., Lund, O., Nielsen, M.: Improved method for predicting linear B-cell epitopes. Immunome Res. 2, 2 (2006)
42. Andersen, P.H., Nielsen, M., Lund, O.: Prediction of residues in discontinuous B-cell epitopes using protein 3D structures. Protein Sci. 15, 2558–2567 (2006)
43. Brusic, V., Bajic, V.B., Petrovsky, N.: Computational methods for prediction of T-cell epitopes: a framework for modelling, testing, and applications. Methods 34, 436–443 (2004)
44. Tong, J.C., Tan, T.W., Ranganathan, S.: Methods and protocols for prediction of immunogenic epitopes. Brief Bioinform. 8, 96–108 (2007)
45. Reche, P.A., Glutting, J.P., Zhang, H., Reinherz, E.L.: Enhancement to the RANKPEP resource for the prediction of peptide binding to MHC molecules using profiles. Immunogenetics 56, 405–419 (2004)
46. Zhang, G.L., Khan, A.M., Srinivasan, K.N., et al.: MULTIPRED: a computational system for prediction of promiscuous HLA binding peptides. Nucleic Acids Res. 33, 17–29 (2005)
47. Zhang, G.L., Bozic, I., Kwoh, C.K., et al.: Prediction of supertype-specific HLA class I binding peptides using support vector machines. J. Immunol. Meth. 320, 143–154 (2007)
48. Peters, B., Bui, H.H., Frankild, S.: A community resource benchmarking predictions of peptide binding to MHC-I molecules. PLoS Comput. Biol. 2, e65 (2006)
49. Larsen, M.V., Lundegaard, C., Lamberth, K., et al.: Large-scale validation of methods for cytotoxic T-lymphocyte epitope prediction. BMC Bioinformatics 8, 424 (2007)
50. Lin, H.H., Ray, S., Tongchusak, S., et al.: Evaluation of MHC class I peptide binding prediction servers: applications for vaccine research. BMC Immunol. 9, 8 (2008)
51. You, L., Zhang, P., Bodén, M., Brusic, V.: Understanding prediction systems for HLA-binding peptides and T-cell epitope identification. In: Rajapakse, J.C., Schmidt, B., Volkert, L.G. (eds.) PRIB 2007. LNCS (LNBI), vol. 4774, pp. 337–348. Springer, Heidelberg (2007)
52. Lin, H.H., Zhang, G.L., Tongchusak, S., et al.: Evaluation of MHC-II peptide binding prediction servers: applications for vaccine research. BMC Bioinformatics 9(Suppl. 12), S22 (2008)
53. Gowthaman, U., Agrewala, J.N.: In silico tools for predicting peptides binding to HLA-class II molecules: more confusion than conclusion. J. Proteome Res. 7, 154–163 (2008)
18
F. Pappalardo et al.
54. Rajapakse, M., Schmidt, B., Feng, L., Brusic, V.: Predicting peptides binding to MHC class II molecules using multi- objective evolutionary algorithms. BMC Bioinformatics 8, 459 (2007) 55. Nielsen, M., Lundegaard, C., Worning, P., et al.: Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach. Bioinformatics 20, 1388–1397 (2004) 56. Karpenko, O., Huang, L., Dai, Y.: A probabilistic meta- predictor for the MHC class II binding peptides. Immunogenetics 60, 25–36 (2008) 57. Zhang, C., Crasta, O., Cammer, S., et al.: An emerging cyberinfrastructure for biodefense pathogen and pathogen-host data. Nucleic Acids Res. 36, 884–891 (2008) 58. Laghaee, A., Malcolm, C., Hallam, J., Ghazal, P.: Artificial intelligence and robotics in high throughput post-genomics. Drug Discov. Today 10, 12539 (2005) 59. Fogel, G.: Computational Intelligence approaches for pattern discovery in biological systems. Brief Bioinform. 9, 307–316 (2008) 60. Rauwerda, H., Roos, M., Hertzberger, B.O., Breit, T.M.: The promise of a virtual lab in drug discovery. Drug Discov. Today 11, 228–236 (2006) 61. Becciani, U.: The Cometa Consortium and the PI2S2 project. Mem. S.A.It 13(Suppl.) (2009) 62. Romberg, M.: The UNICORE Architecture: Seamless Access to Distributed Resources, High Performance Distributed Computing. In: Proceedings of the 8th IEEE International Symposium on High Performance Distributed Computing, August 03-06 (1999) 63. Coveney, P.V., Saksena, R.S., Zasada, S.J., McKeown, M., Pickles, S.: The Application Hosting Environment: Lightweight Middleware for Grid-Based Computational Science. Computer Physics Communications 176(6), 406–418 64. Sloan, T.M., Menday, R., Seed, T.P., Illingworth, M., Trew, A.S.: DESHL– Standards Based Access to a Heterogeneous European Supercomputing Infrastructure. In: Proceedings of the Second IEEE International Conference on e-Science and Grid Computing, p. 91 (2006) 65. McGougha, A.S., Leeb, W., Dasc, S.: A standards based approach to enabling legacy applications on the Grid. Future Generation Computer Systems 24(7), 731–743 (2008) 66. Foster, I., Kesselman, C.: Globus: a Metacomputing Infrastructure Toolkit. International Journal of High Performance Computing Applications 11(2), 115–128 (1997), doi:10.1177/109434209701100205 67. Niederberger, R.: DEISA: Motivations, strategies, technologies. In: Proc. of the Int. Supercomputer Conference, ISC 2004 (2004) 68. Mastriani, E., Halling-Brown, M., Giorgio, E., Pappalardo, F., Motta, S.: P2SI2ImmunoGrid services integration: a working example of web based approach. In: Proceedings of the Final Workshop of Grid Projects, PON Ricerca 2000-2006, vol. 1575, pp. 438–445 (2009); ISBN: 978-88-95892-02-3 69. Halling-Brown, M.D., Moss, D.S., Sansom, C.J., Shepherd, A.J.: Computational Grid Framework for Immunological Applications. Philosophical Transactions of the Royal Society A (2009) 70. Halling-Brown, M.D., Moss, D.S., Shepherd, A.J.: Towards a lightweight generic computational grid framework for biological research. BMC Bioinformatics 9, 407 (2008)
The ImmunoGrid Simulator: How to Use It
19
71. Halling-Brown, M.D., Moss, D.S., Sansom, C.S., Sheperd, A.J.: Web Services, Workflow & Grid Technologies for Immunoinformatics. In: Proceedings of Intern. Congress of Immunogenomics and Immunomics, vol. 268 (2006) 72. Kumar, N., Hendriks, B.S., Janes, K.A., De Graaf, D., Lauffenburger, D.A.: Applying computational modeling to drug discovery and development. Drug discovery today 11(17-18), 806–811 (2006) 73. Davies, M.N., Flower, D.R.: Harnessing bioinformatics to discover new vaccines. Drug Discovery Today 12(9-10), 389–395 (2007)
Improving Coiled-Coil Prediction with Evolutionary Information

Piero Fariselli, Lisa Bartoli, and Rita Casadio

Biocomputing Group, Department of Biology, University of Bologna, via Irnerio 42, 40126 Bologna, Italy
{lisa,piero,casadio}@biocomp.unibo.it
Abstract. The coiled-coil is a widespread protein structural motif known to have a stabilization function and to be involved in key interactions in cells and organisms. Here we show that it is possible to increase the prediction performance of an ab initio method by exploiting evolutionary information. We implement a new program (addressed here as PS-COILS) that takes as input both single sequences and multiple sequence alignments. PS-COILS is introduced to define a baseline approach for benchmarking new coiled-coil predictors. We then design a new version of MARCOIL (a Hidden Markov Model based predictor) that can exploit evolutionary information in the form of sequence profiles. We show that the methods trained on sequence profiles perform better than the same methods trained and tested on single sequences. Furthermore, we create a new structurally-annotated and freely-available dataset of coiled-coil structures (www.biocomp.unibo.it/lisa/CC). The baseline method PS-COILS is available at www.plone4bio.org through the Subversion interface.
1 Introduction
Coiled-coils are structural motifs comprising two or more alpha-helices that wind around each other in regular bundles to produce rope-like structures (Fig. 1) [1, 2]. Coiled-coils are important and widespread motifs that account for 5-10% of the protein sequences in the various genomes [3, 4]. A major advance in the accurate detection of coiled-coil motifs at atomic resolution has been obtained with the development of the SOCKET program [4]. SOCKET recognizes the characteristic “knobs-into-holes” side-chain packing of coiled-coils and it can, therefore, distinguish between coiled-coils and the great majority of helix-helix packing arrangements observed in globular domains. Recently, SOCKET was also utilized to generate a “Periodic Table” of coiled-coil structures [5] available in a web-based browsable repository [6]. Several programs for predicting coiled-coil regions in protein sequences have been developed so far: the first and widely-used COILS [3, 7], PAIRCOIL [8], the retrained version PAIRCOIL2 [9], MULTICOIL [10], MARCOIL [11] and a profile-based version of COILS, called PCOILS [12], which substitutes sequence-profile comparisons with profile-profile comparisons, exploiting evolutionary information.
Fig. 1. Antiparallel regular coiled-coil (seven residues over two turns) of the Seryl-tRNA synthetase (PDB id: 1SRY)
For a comparative analysis of these coiled-coil prediction methods, see [13]. Recently, we also developed CCHMM PROF [14], an evolutionary-based HMM for coiled-coil prediction.
2 Datasets
The only annotated dataset publicly available that was created for developing a predictor is the MARCOIL dataset of protein sequences. However, the MARCOIL authors themselves stated that the coiled-coil annotations in their database are not reliable [11]. For this reason we followed the prescription suggested by Lupas and coworkers [12] by adopting the intersection between SCOP [15] and SOCKET [4] as a “safer” and more impartial set with respect to those used in the literature. We generated our dataset of experimentally-determined coiled-coil structures following this suggestion and considering only the intersection between the SCOP coiled-coil class and the output of the SOCKET program. Following this procedure we ended up with a final annotated dataset comprising 104 sequences (S104). Sequences shorter than 30 residues, or with coiled-coil domains shorter than 9 residues, were excluded. This lower limit has been chosen since 9-residue-long domains are the shortest ones classified by MARCOIL [11]. The complete S104 dataset contains 10,724 residues with 3,565 coiled-coil residues (33% of the overall dataset). More specifically, among the different protein chains that are labeled with a coiled-coil domain in SCOP, we retained the subset for which SOCKET found at least a coiled-coil segment in that domain. The final annotation of the coiled-coil segment is the one indicated by the SOCKET program. This set can be downloaded from the web page, so that we provide a structurally-annotated dataset available for benchmarking (or for improving/correcting it). Furthermore, in order to test the different methods on a blind set, we selected a subset of 50 non-identical protein chains (S50) with sequence identity lower than 25% with respect to the MARCOIL dataset. To build the negative set, we removed all the sequences similar (more than 25% identity) to one of the MARCOIL negative set. In addition, we checked the corresponding entries of the PDB in order to further remove all the structures annotated as coiled-coil or coiled-coil related. Finally, we clustered the remaining sequences fixing the sequence identity threshold at 25% and we chose a representative sequence for each cluster. The negative dataset consists of 1,139 protein sequences (S1139).
3 Measures of Accuracy

3.1 Per-residue Indices
The overall accuracy (Q2) is defined as:

$$Q2 = p/N \quad (1)$$

where $p$ and $N$ are the total number of correct predictions and the total number of residues, respectively. The correlation coefficient $C$ for a given class $s$ is defined as:

$$C(s) = [p(s)n(s) - u(s)o(s)]/d(s) \quad (2)$$

where $d(s)$ is the factor

$$d(s) = [(p(s) + u(s))(p(s) + o(s))(n(s) + u(s))(n(s) + o(s))]^{1/2} \quad (3)$$

$p(s)$ and $n(s)$ are respectively the true positive and true negative predictions for class $s$, while $o(s)$ and $u(s)$ are the numbers of false positives and false negatives. The sensitivity (coverage, Sn) for each class $s$ is defined as

$$Sn(s) = p(s)/[p(s) + u(s)] \quad (4)$$

The specificity (accuracy, Sp) is the probability of correct predictions and it is expressed as follows:

$$Sp(s) = p(s)/[p(s) + o(s)] \quad (5)$$

However, these measures cannot discriminate between similar and dissimilar segment distributions. To overcome this problem the per-segment index SOV (Segment OVerlap) has been defined to evaluate secondary structure segments rather than individual residues [16].
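As a concrete reading of Eqs. (1)-(5), the following minimal Python sketch computes the per-residue indices for one class from two per-residue label strings; the function name and the two-letter label encoding are illustrative choices, not part of the original implementation.

import math

def per_residue_indices(obs, pred, cc="C"):
    # obs, pred: per-residue labels, e.g. "CCCNNN..."; cc marks coiled-coil
    p = n = o = u = 0
    for x, y in zip(obs, pred):
        if x == cc and y == cc: p += 1       # true positives  p(s)
        elif x != cc and y != cc: n += 1     # true negatives  n(s)
        elif x != cc and y == cc: o += 1     # false positives o(s)
        else: u += 1                         # false negatives u(s)
    q2 = (p + n) / len(obs)                  # Eq. (1): correct over total
    d = math.sqrt((p + u) * (p + o) * (n + u) * (n + o))
    c = (p * n - u * o) / d if d else 0.0    # Eqs. (2)-(3)
    sn = p / (p + u) if p + u else 0.0       # Eq. (4)
    sp = p / (p + o) if p + o else 0.0       # Eq. (5)
    return q2, c, sn, sp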
3.2 Per-segment Indices
If $(s_1, s_2)$ is a pair of overlapping segments, $S(i)$ is defined as the set of all the overlapping pairs in state $i$:

$$S(i) = \{(s_1, s_2) : s_1 \cap s_2 \neq \emptyset, \ s_1 \ \mathrm{and} \ s_2 \ \mathrm{in \ conformation} \ i\} \quad (6)$$

while $S'(i)$ is the set of all segments $s_1$ for which there is no overlapping segment $s_2$ in state $i$:

$$S'(i) = \{s_1 : \forall s_2, \ s_1 \cap s_2 = \emptyset, \ s_1 \ \mathrm{and} \ s_2 \ \mathrm{in \ conformation} \ i\} \quad (7)$$

For state $i$ the segment overlap (SOV) is defined as:

$$SOV(i) = 100 \times \frac{1}{N_i} \sum_{S(i)} \frac{minov(s_1, s_2) + \delta(s_1, s_2)}{maxov(s_1, s_2)} \times len(s_1) \quad (8)$$

with the normalization factor $N_i$ defined as:

$$N_i = \sum_{S(i)} len(s_1) + \sum_{S'(i)} len(s_1) \quad (9)$$

The sum over $S(i)$ runs over the segment pairs in state $i$ which overlap by at least one residue. The other sum in the second equation runs over the remaining segments in state $i$. $len(s_1)$ and $len(s_2)$ are the lengths of segments $s_1$ and $s_2$ respectively, $minov(s_1, s_2)$ is the length of the overlap between $s_1$ and $s_2$, $maxov(s_1, s_2)$ is the total extent for which either of the segments has a residue labelled with $i$, and $\delta(s_1, s_2)$ is defined as:

$$\delta(s_1, s_2) = \min\{maxov(s_1, s_2) - minov(s_1, s_2);\ minov(s_1, s_2);\ int(len(s_1)/2);\ int(len(s_2)/2)\} \quad (10)$$
More specifically, in the tables we indicate with SOV(CC) the value of the segment overlap for the coiled-coil regions and with SOV(N) the value for the non-coiled-coil regions.
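A minimal sketch of Eqs. (6)-(10) over per-residue label strings follows; the segment-extraction helper and the treatment of unmatched segments (which enter only the normalization $N_i$) reflect our reading of the definition in [16].

def sov(observed, predicted, state="C"):
    # Per-segment SOV for one state, Eqs. (6)-(10); labels like "CCCNN..."
    def segments(lab):
        segs, start = [], None
        for t, s in enumerate(lab + "#"):            # sentinel flushes last segment
            if s == state and start is None:
                start = t
            elif s != state and start is not None:
                segs.append((start, t - 1)); start = None
        return segs
    total, norm = 0.0, 0
    preds = segments(predicted)
    for a1, b1 in segments(observed):
        pairs = [(a2, b2) for a2, b2 in preds if a2 <= b1 and b2 >= a1]
        len1 = b1 - a1 + 1
        if not pairs:                                # s1 in S'(i): only enters N_i
            norm += len1
            continue
        for a2, b2 in pairs:                         # pairs in S(i)
            norm += len1                             # Eq. (9)
            minov = min(b1, b2) - max(a1, a2) + 1
            maxov = max(b1, b2) - min(a1, a2) + 1
            delta = min(maxov - minov, minov, len1 // 2, (b2 - a2 + 1) // 2)
            total += (minov + delta) / maxov * len1  # Eq. (8)
    return 100.0 * total / norm if norm else 0.0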
3.3 Per-protein Indices
Protein OVerlap (POV) is a binary per-protein measure (0 or 1). POV is equal to 1 only:

– if the number of predicted ($N_p$) and observed ($N_o$) coiled-coil segments is the same, and
– if all corresponding pairs have a minimum segment overlap.

More formally:

$$\mathrm{if} \ (N_p = N_o \ \mathrm{and} \ p_i \cap o_i \geq th, \ \forall i) \Rightarrow P = 1 \quad (11)$$

Otherwise:

$$\mathrm{if} \ N_p \neq N_o \Rightarrow P = 0 \quad (12)$$

To establish the segment overlap we set two thresholds. The first one is the minimum between the half lengths of the segments:

$$th = \min(L_p/2, L_o/2) \quad (13)$$

where $L_p$ is the length of the predicted coiled-coil segment and $L_o$ is the length of the corresponding observed segment. A second and more strict threshold is the mean of the half lengths of the segments:

$$th = (L_p/2 + L_o/2)/2 \quad (14)$$

For a set of proteins the average of all POVs over the total number of proteins $N$ is:

$$POV = \frac{\sum_{i=1}^{N} P_i}{N} \quad (15)$$

The final scores are obtained by averaging over the whole set the values computed for each protein. This measure is usually more stringent than summing up all the predictions and computing the indices at the end, since in this last case the scores of completely misclassified proteins can be absorbed by other predictions. For this reason, it may happen that both Sn and Sp can be lower than the corresponding Q2.
4 Baseline Predictor
In order to highlight the contribution of the introduction of evolutionary information we implemented two baseline predictors. The first (called PS-COILS-ss) takes a single sequence as input; the other one is a multiple sequence-based version of the former (PS-COILS-ms). PS-COILS-ss is our implementation of the COILS program. The basic idea behind COILS is the adoption of a simple Bayesian classifier. In practice, given a protein segment $x_i$ of length $W$ starting at position $i$, PS-COILS-ss computes the probability $P(cc|x_i)$ of being in a coiled-coil structure ($cc$) given the segment as:

$$P(cc|x_i) = \frac{P(x_i|cc)}{P(x_i|cc) + c \cdot P(x_i|g)} \quad (16)$$

where $c = P(g)/P(cc)$ is the ratio of the prior probabilities, while $P(x_i|cc)$ and $P(x_i|g)$ are the probabilities of the segment $x_i$ given the coiled-coil structure ($cc$) or non-coiled-coil structure ($g$), respectively. To compute these quantities PS-COILS-ss models the likelihoods as Gaussian functions ($P(x_i|cc) = G_{cc}$ and $P(x_i|g) = G_g$). The COILS parameters are [7]:
– $\mu_{cc}$ and $\mu_g$, the average scoring values of the coiled-coil and globular protein sets, respectively;
– $\sigma_{cc}$ and $\sigma_g$, the standard deviations of the scoring values for the coiled-coil and globular protein sets;
– $S^h(a)$, the score for the residue type $a$ in the heptad position $h$ (with $h$ ranging from 1 to 7).

Thus, the probability of a coiled-coil segment of length $W$ starting at position $i$ ($P(cc|x_i)$) in a given sequence is computed as:

$$P(cc|x_i) = \frac{G_{cc}}{G_{cc} + c \cdot G_g} \quad (17)$$

where $G_{cc}$ and $G_g$ are defined as:

$$G_{cc} = \frac{1}{\sqrt{2\pi}\,\sigma_{cc}} e^{-\frac{(x_i - \mu_{cc})^2}{2\sigma_{cc}^2}} \ ; \quad G_g = \frac{1}{\sqrt{2\pi}\,\sigma_g} e^{-\frac{(x_i - \mu_g)^2}{2\sigma_g^2}} \quad (18)$$

The score $x_i$ is computed using the matrix $S^h(a)$ along the segment of length $W$ starting at position $i$:

$$x_i = \left( \prod_{h=0}^{W-1} f(a_{i+h}, h)^{e_h} \right)^{1/N} \quad (19)$$

where $e_h$ is the exponential weight of the position $h$ ($e_h = 1$ if not weighted) and $N$ is the normalization factor $N = \sum_{h} e_h$. In the case of the PS-COILS-ss program, the function $f$ is simply

$$f(a_{i+h}, h) = S^h(a_{i+h}) \quad (20)$$
where $S^h(a_{i+h})$ is the element of the COILS scoring table accounting for the residue type $a_{i+h}$ in the $h$-th heptad position. The parameters $S^h(a_{i+h})$ were computed by Lupas [3] on the original datasets and here they are not updated, in order to better highlight the effect of the evolutionary information. PS-COILS-ms takes as input a sequence profile $P_k(s)$ and its score is defined as:

$$x_i = \left( \prod_{h=0}^{W-1} f(S, P, h)^{e_h} \right)^{1/N} \quad (21)$$

and

$$f(S, P, h) = \langle S^h, P_{i+h} \rangle = \sum_{a \in \{Residues\}} S^h(a) \cdot P_{i+h}(a) \quad (22)$$
PS-COILS is designed to be more general and its score is a linear combination of the PS-COILS-ss and PS-COILS-ms scores, namely, PS-COILS = λ·PS-COILS-ss + (1−λ)·PS-COILS-ms, where λ is a weight in the range [0,1]. The program adopts the same parameters that were calculated for PS-COILS-ss and combines single-sequence and evolutionary information (in the form of sequence profiles). We then have:
$$x_i = \left( \prod_{h=0}^{W-1} f(S, P, h, \lambda)^{e_h} \right)^{1/N} \quad (23)$$
and

$$f(S, P, h, \lambda) = \lambda\, S^h(a_{i+h}) + (1 - \lambda) \langle S^h, P_{i+h} \rangle \quad (24)$$
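As an illustration of Eqs. (16)-(24), the sketch below scores one window and converts the score into a coiled-coil probability. The fixed heptad register (h mod 7), the window length, the prior ratio c and all parameter values are placeholder assumptions, not the tuned COILS parameters.

import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def ps_coils_score(seq, profile, S, i, W=28, lam=0.5, e=None):
    # S: 7x20 heptad scoring table S^h(a); profile: (L, 20) sequence profile.
    # A one-hot profile row reproduces the single-sequence case (lam = 1).
    e = np.ones(W) if e is None else np.asarray(e, float)   # weights e_h
    N = e.sum()                                             # N = sum_h e_h
    log_x = 0.0
    for h in range(W):
        hp = h % 7                                  # assumed fixed heptad register
        single = S[hp, AA.index(seq[i + h])]        # S^h(a_{i+h}), Eq. (20)
        multi = S[hp] @ profile[i + h]              # <S^h, P_{i+h}>, Eq. (22)
        f = lam * single + (1.0 - lam) * multi      # Eq. (24)
        log_x += e[h] * np.log(f)                   # scores assumed to be positive
    return np.exp(log_x / N)                        # x_i = (prod_h f^e_h)^(1/N), Eq. (23)

def coil_probability(x, mu_cc, sd_cc, mu_g, sd_g, c=30.0):
    # Gaussian likelihoods (Eq. 18) combined by Bayes' rule (Eq. 17);
    # c = P(g)/P(cc) is the prior ratio (30.0 is a placeholder value).
    g = lambda v, mu, sd: np.exp(-(v - mu) ** 2 / (2 * sd ** 2)) / (np.sqrt(2 * np.pi) * sd)
    gcc, gg = g(x, mu_cc, sd_cc), g(x, mu_g, sd_g)
    return gcc / (gcc + c * gg)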
5 HMM Predictors: MARCOIL and CCHMM PROF
In a previous work we developed CCHMM PROF [14], the first HMM-based coiled-coil predictor that exploits sequence profiles. CCHMM PROF was specifically designed to predict structurally-defined coiled-coil segments. However, to better highlight the effect of evolutionary information on the performance of the prediction of coiled-coil regions, here we implemented a new version of MARCOIL (called MARCOIL-ms) that can exploit evolutionary information in the form of sequence profiles. In this way we can compare the old version, based on single sequences, with the new one that can take advantage of multiple sequence alignments.
Fig. 2. Organization of the states of the MARCOIL HMM. The background state is the state 0. Each of the 9 groups has seven states that represent the heptad repeat. The arrows represent the allowed transitions.
The MARCOIL HMM [11] is composed of 64 states. The model has a background state (labeled with 0) and 63 other states labeled with a group number (from 1 to 9) and with a letter that indicates the heptad position. The minimal coiled-coil length allowed by the model is nine residues. Groups from 1 to 4 model the first four residues of a coiled-coil domain (namely the N-terminal
turn) while groups from 6 to 9 cover the last four residues (the C-terminal turn). The fifth group models the internal coiled-coil residues. In Figure 2 a diagram of the allowed transitions in the MARCOIL model is shown. A coiled-coil region begins with a transition to the first group and ends with a transition back to the background state 0. With domains of more than nine residues, the states of group five are visited more than once. As depicted in Figure 3, each of the 9 groups has seven states that represent the heptad repeat.
Fig. 3. Details of the heptad transitions within each one of the 9 groups of states in the MARCOIL HMM. The arrows represent the most probable transitions.
MARCOIL-ms uses the same automaton but the states emit vectors instead of symbols, as described in [17]. The sequences of characters, commonly analyzed by single sequence-based HMMs, are replaced with sequences of vectors, namely the sequence profile. In practice the emission probabilities $e_k(c)$, for each state $k$ and for each emission symbol $c$, are substituted with the dot product of the profile entries $\vec{x}$ with the internal emission probability vector (for the $k$-th state we have $eV_k(\vec{x}) = \langle \vec{e}_k, \vec{x} \rangle$). According to this change we modify the updating procedure of the Expectation-Maximization learning. If $E_k(c)$ is the expected number of times in which the symbol $c$ is emitted from state $k$ and $A_{i,k}$ is the expected number of transitions between state $i$ and state $k$, then the transition probabilities and the emission probabilities can be updated as:

$$a_{i,k} = \frac{A_{i,k}}{\sum_l A_{i,l}} \quad (25)$$

$$e_k(c) = \frac{E_k(c)}{\sum_l E_k(l)} \quad (26)$$
$A_{i,k}$ and $E_k(c)$ are computed with the Forward and Backward algorithms:

$$A_{i,k} = \sum_{p=1}^{N_p} \frac{1}{P(x^p)} \sum_{t=0}^{L_p-1} f_i(t)\, a_{i,k}\, e_k(x^p_{t+1})\, b_k(t+1) \quad (27)$$

$$E_k(c) = \sum_{p=1}^{N_p} \frac{1}{P(x^p)} \sum_{t: x_t = c} f_k(t)\, b_k(t) \quad (28)$$

Using the scaling factor:

$$A_{i,k} = \sum_{p=1}^{N_p} \sum_{t=0}^{L_p-1} f_i(t)\, a_{i,k}\, e_k(x^p_{t+1})\, b_k(t+1) \quad (29)$$

$$E_k(c) = \sum_{p=1}^{N_p} \sum_{t: x_t = c} f_k(t)\, b_k(t)\, Scale(t) \quad (30)$$

Finally the updating equations are (without and with the scaling factor, respectively):

$$A_{i,k} = \sum_{p=1}^{N_p} \frac{1}{P(x^p)} \sum_{t=0}^{L_p-1} f_i(t)\, a_{i,k}\, eV_k(x^p_{t+1})\, b_k(t+1)$$
$$A_{i,k} = \sum_{p=1}^{N_p} \sum_{t=0}^{L_p-1} f_i(t)\, a_{i,k}\, eV_k(x^p_{t+1})\, b_k(t+1) \quad (31)$$

And for the emissions:

$$E_k(c) = \sum_{p=1}^{N_p} \frac{1}{P(x^p)} \sum_{t=1}^{L_p} f_k(t)\, b_k(t)\, x_t(c)$$
$$E_k(c) = \sum_{p=1}^{N_p} \sum_{t=1}^{L_p} f_k(t)\, b_k(t)\, Scale(t)\, x_t(c) \quad (32)$$

where $x_t(c)$ is the component $c$ of the vector $\vec{x}_t$, representing the $t$-th sequence position.
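A compact sketch of the vector-emission idea and of the scaled emission update of Eq. (32) follows; the array shapes are our own convention and the Forward/Backward matrices are assumed to be already computed.

import numpy as np

def profile_emission(e_k, x_t):
    # eV_k(x) = <e_k, x>: a state emits a profile column instead of a symbol
    return float(np.dot(e_k, x_t))

def emission_counts(f, b, scale, X):
    # E_k(c) = sum_t f_k(t) b_k(t) Scale(t) x_t(c)   (Eq. 32, scaled form)
    # f, b: (L, K) forward/backward matrices; scale: (L,); X: (L, C) profile
    return (f[:, :, None] * b[:, :, None] * scale[:, None, None] * X[:, None, :]).sum(axis=0)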
6 Results and Discussion

6.1 PS-COILS Performance as a Function of the λ Value
In order to define a baseline predictor we evaluated the PS-COILS performance as a function of the λ value. Figure 4 shows the performance of our method at different λ values: the behavior of the per-residue index Q2, of the SOV(CC) per-segment index and of the POVmin per-protein index is plotted. The best performance of PS-COILS corresponds to λ = 0.5 (referred to as PS-COILS 05 in Tables 1 and 2), when single sequence and evolutionary information are equally weighted. It is worth remembering that the method parameters are not retrained, so that PS-COILS can be taken as a baseline method for benchmarking.
6.2 Comparison with Other Methods
The two major issues of the prediction of coiled-coil domains can be summarized as:

1. sorting out the list of proteins that contain coiled-coil segments in a given set of protein sequences (or in genomes);
2. predicting the number and the location of coiled-coil domains in a protein chain.
Fig. 4. PS-COILS overall accuracy (Q2), coiled-coil Segment OVerlap (SOV(CC)) and Protein OVerlap (POVmin) as a function of the λ value
The majority of the papers addressed both questions at the same time and their final accuracy is very good for the first task. For each method we computed the Receiver Operating Characteristic (ROC) curve and in Table 1 we report the Area Under the Curve (AUC). From Table 1 it is evident that the most recent methods outperform PS-COILS-ss in this task. However, when the evolutionary information in the form of sequence profiles is added to the oldest method the picture changes, and PS-COILS-ms and PS-COILS 05 achieve the same level of accuracy. This highlights once more the relevance of the evolutionary information. Furthermore, in this paper we focus on the prediction of the location of the coiled-coil segments in the protein sequences. In Table 2 we report the accuracy of the different methods on the S50 subset (with no sequence similarity with the MARCOIL dataset proteins). The methods are evaluated using their default thresholds to predict the coiled-coil segments. From the POV indices, it is evident that about 60% of the chains are correctly predicted at the protein level when evolutionary information is taken into account. The exploitation of the evolutionary information increases the methods' accuracy (see PS-COILS-ms and PS-COILS 05 versus PS-COILS-ss). By this, PS-COILS-ms can be compared to more recent methods, and PS-COILS-ms (or PS-COILS 05) can be adopted as a baseline method for benchmarking tests.
Table 1. Area under the ROC curve (AUC) computed for all the classifiers

Method        AUC
PS-COILS-ss   0.92
MARCOIL       0.95
PAIRCOIL2     0.96
PS-COILS-ms   0.96
PS-COILS 05   0.96
MARCOIL-ms    0.96
CCHMM PROF    0.97
Table 2. Comparison of the different methods on the benchmark S50 dataset

                    Per-protein       Per-segment               Per-residue
Method              POVmin  POVav    SOV(CC)  SOV(N)    C      Sn(CC)  Sp(CC)  Sn(N)  Sp(N)
single sequence
PS-COILS-ss         0.43    0.37     0.46     0.55      0.25   0.41    0.85    0.47   0.57
MARCOIL             0.55    0.47     0.55     0.53      0.35   0.60    0.72    0.53   0.73
PAIRCOIL2           0.53    0.45     0.54     0.38      0.15   0.55    0.61    0.45   0.45
multiple sequence
PS-COILS-ms         0.61    0.49     0.62     0.53      0.36   0.67    0.66    0.55   0.73
PS-COILS 05         0.62    0.60     0.64     0.56      0.36   0.64    0.70    0.60   0.71
MARCOIL-ms          0.80    0.78     0.81     0.53      0.53   0.98    0.49    0.71   0.96
MARCOIL-ms-cv       0.80    0.78     0.82     0.56      0.55   0.98    0.51    0.72   0.97
CCHMM PROF(1)       0.80    0.75     0.81     0.61      0.62   0.98    0.60    0.73   0.97
(1) CCHMM PROF is a new HMM-based predictor specifically developed for predicting structurally-annotated coiled-coil segments [14].
When the evolutionary information is taken into account, machine learning approaches such as MARCOIL and CCHMM PROF are the best-performing methods, surpassing the new baseline method PS-COILS 05. In particular MARCOIL, which was originally trained and tested with single-sequence encoding on an old manually-annotated dataset, performs significantly better (POVmin 0.80 vs 0.53) when a structurally-annotated dataset and multiple sequence information are provided. Although MARCOIL-ms and CCHMM PROF are two very different HMM models, they can be used interchangeably for predicting coiled-coil segments, since they are able to correctly predict 80% of the tested proteins. Summing up, in this work we addressed the following relevant issues:

1. we develop a new program for predicting coiled-coil segments, freely available under the GPL license, that can be used as a baseline predictor for benchmarking;
2. we present MARCOIL-ms, a new HMM-based predictor that achieves state-of-the-art accuracy;
3. we propose a new scoring scheme that takes into account per-segment and per-protein accuracy, allowing a more robust evaluation of the performance;
4. we show that evolutionary information plays a relevant role also in the task of predicting coiled-coil segments.
Acknowledgments RC acknowledges the receipt of the following grants: FIRB 2003 LIBI–International Laboratory of Bioinformatics.
References

[1] Gruber, M., Lupas, A.N.: Historical review: another 50th anniversary–new periodicities in coiled coils. Trends Biochem. Sci. 28, 679–685 (2003)
[2] Lupas, A.N., Gruber, M.: The structure of alpha-helical coiled coils. Adv. Protein Chem. 70, 37–78 (2005)
[3] Lupas, A.N.: Prediction and analysis of coiled-coil structures. Methods Enzymol. 266, 513–525 (1996)
[4] Walshaw, J., Woolfson, D.N.: Socket: a program for identifying and analysing coiled-coil motifs within protein structures. J. Mol. Biol. 307, 1427–1450 (2001)
[5] Moutevelis, E., Woolfson, D.N.: A periodic table of coiled-coil protein structures. J. Mol. Biol. 385, 726–732 (2008)
[6] Testa, O.D., Moutevelis, E., Woolfson, D.N.: CC+: a relational database of coiled-coil structures. Nucleic Acids Res. 37(Database issue), D315–D322 (2009)
[7] Lupas, A.N., Van Dyke, M., Stock, J.: Predicting coiled coils from protein sequences. Science 252, 1162–1164 (1991)
[8] Berger, B., Wilson, D.B., Wolf, E., Tonchev, T., Milla, M., Kim, P.S.: Predicting coiled coils by use of pairwise residue correlations. Proc. Natl. Acad. Sci. USA 92, 8259–8263 (1995)
[9] McDonnell, A.V., Jiang, T., Keating, A.E., Berger, B.: Paircoil2: improved prediction of coiled coils from sequence. Bioinformatics 22, 356–358 (2006)
[10] Wolf, E., Kim, P.S., Berger, B.: MultiCoil: a program for predicting two- and three-stranded coiled coils. Protein Sci. 6, 1179–1189 (1997)
[11] Delorenzi, M., Speed, T.: An HMM model for coiled-coil domains and a comparison with PSSM-based predictions. Bioinformatics 18, 617–625 (2002)
[12] Gruber, M., Söding, J., Lupas, A.N.: REPPER–repeats and their periodicities in fibrous proteins. Nucleic Acids Res. 33, 239–243 (2005)
[13] Gruber, M., Söding, J., Lupas, A.N.: Comparative analysis of coiled-coil prediction methods. J. Struct. Biol. 155, 140–145 (2006)
[14] Bartoli, L., Fariselli, P., Krogh, A., Casadio, R.: CCHMM PROF: a HMM-based coiled-coil predictor with evolutionary information. Bioinformatics 25, 2757–2763 (2009)
[15] Murzin, A.G., Brenner, S.E., Hubbard, T., Chothia, C.: SCOP: a structural classification of proteins database for the investigation of sequences and structures. J. Mol. Biol. 247, 536–540 (1995)
[16] Zemla, A., Venclovas, C., Fidelis, K., Rost, B.: A modified definition of Sov, a segment-based measure for protein secondary structure prediction assessment. Proteins 34, 220–223 (1999)
[17] Martelli, P.L., Fariselli, P., Krogh, A., Casadio, R.: A sequence-profile-based HMM for predicting and discriminating beta barrel membrane proteins. Bioinformatics 18(Suppl. 1), S46–S53 (2002)
Intelligent Text Processing Techniques for Textual-Profile Gene Characterization

Floriana Esposito 1, Marenglen Biba 2, and Stefano Ferilli 1

1 Department of Computer Science, University of Bari, Via E. Orabona 4, 70125 Bari, Italy
2 Department of Computer Science, University of New York Tirana, Rr. Komuna e Parisit, Tirana, Albania
[email protected], [email protected], [email protected]

Abstract. We present a suite of Machine Learning and knowledge-based components for textual-profile based gene prioritization. Most genetic diseases are characterized by many potential candidate genes that can cause the disease. Gene expression analysis typically produces a large number of co-expressed genes that could be potentially responsible for a given disease. Extracting prior knowledge from text-based genomic information sources is essential in order to reduce the list of potential candidate genes to be then further analyzed in laboratory. In this paper we present a suite of Machine Learning algorithms and knowledge-based components for improving the computational gene prioritization process. The suite includes basic Natural Language Processing capabilities, advanced text classification and clustering algorithms, robust information extraction components based on qualitative and quantitative keyword extraction methods and exploitation of lexical knowledge bases for semantic text processing.
1 Introduction
Many wide-spread diseases are still an important health concern for our society due to their complexity of functioning or to their unknown causes. Some of them can be acquired but some can be genetic, and a large number of genes have already been associated with particular diseases. However, many potential candidate genes are still suspected to cause a certain disease and there is strong motivation for developing computational methods that can help reduce the number of these susceptibility genes. The number of suspected regions of the genome that encode probable disease genes for particular diseases is often estimated to be very large. This gives rise to a really large number of genes to be analyzed, which would be infeasible in practice. In order to focus on the most promising genes, prioritization methods are being developed in order to exploit prior knowledge on genetic diseases that can help guide the process of identifying novel genes. Much of the prior background knowledge on genetic diseases is primarily reported in the form of free text. Extracting relevant information from unstructured data has always been a key challenge for Machine Learning methods [2].
These have the power to provide precious capabilities to rank genes based on their textual profile. Indeed, human knowledge on the genome and genetic diseases is becoming huge and manual exploitation of the large amount of raw text of scientific papers is infeasible. For this reason, recently many automatic methods for information extraction from text sources have been developed. The approach presented in [1] links groups of genes with relevant MEDLINE abstracts through the PubMed engine, characterizing each group with a set of keywords from MeSH and the UMLS ontology. In [3] co-expression information is linked with a co-citation network constructed on purpose. In this way co-expressed genes are characterized by MeSH keywords appearing in the abstracts about genes. In [4] the authors developed an approach called neighborhood divergence that quantifies the functional coherence of a group of genes through a database that links genes to documents. In [5] a schema was proposed that links abstracts to genes in a probabilistic framework that uses the EM algorithm to estimate the parameters of the word distributions. Genes are defined as similar when the corresponding gene-by-document representations are close. Finally, in [6] and [7] a proof of principle is provided on how clustering of genes encoded in a keyword-based representation can be useful to discover relevant subpatterns. In this paper we describe a suite of machine learning and knowledge-based methods for textual-profile based gene prioritization. We present a classification system based on inductive logic programming [8] that is able to semantically classify texts regarding genes or diseases. The power of the approach lies in the representation language, Horn logic, which makes it possible to describe a text through its logical structure and thus reason about it. Classification theories are induced from examples and are then used to classify novel unseen documents. The input to this machine learning system is given by a knowledge-based component for text processing. This rule-based system performs a series of NLP steps, such as part-of-speech tagging and disambiguation, in order to produce an accurate representation of the text, which is then transformed into a Prolog clause that serves as input for the learning system. Then we present a novel distance between Horn clauses that is used in this context to define similarities between papers regarding genes. The distance is then used in an instance-based approach to cluster documents of disease genes and candidate genes. Through the gene-to-document mapping provided by EntrezGene, the clustering of documents can then be seen as a clustering of genes, and further analysis can be performed to reduce the number of genes to be analyzed. Both the learning system and the instance-based approach are combined with the taxonomic knowledge base WordNET [9,10]. In the case of the inductive approach, WordNET is used to properly generalize theories in the presence of similar words that share common parent nodes in the hierarchy of concepts of WordNET. In the case of the instance-based approach, WordNET is used to semantically define a taxonomic distance between words in the text and include it in the overall distance between two texts. Clustering of documents is then mapped directly to gene clustering using the gene-to-docs mapping of EntrezGene.
We also present two keyword-based approaches for textual-profile gene clustering. The first approach is quantitative, in that it combines into the Bayes theorem the frequency of a term (in a document and in a collection) and its position in the document. Through this formula, the probability that a term is a keyword is computed. On the other hand, the qualitative approach exploits WordNET Domains [11,12] to semantically extract keywords from documents using the hierarchy of categories defined in WordNET Domains for each of the synsets found in the document. Both methods are used to generate a document × terms matrix, which is then combined with the gene-to-docs mapping taken from EntrezGene to give the final matrix gene × terms. Then classical clustering algorithms are performed on this matrix to discover relationships between disease and candidate genes. The hypothesis is that couples of genes (candidate-disease) that fall within the same cluster and show high similarity, as given by their textual profile, could probably be involved in the same disease. The paper is organized as follows: in Section 2 we describe the various components of the suite for gene prioritization, in Section 3 we describe some preliminary experimental scenarios, in Section 4 we discuss related work and the differences with our approach, and in Section 5 we conclude.
2 Machine Learning and Knowledge-Based Components
In this section we describe the various parts of the suite and how they interact with each other. We also describe how each component is used for the final goal of scoring candidate genes.

2.1 Rule-Induction System for Text Classification on Diseases
The rule-induction system INTHELEX [13] is an incremental learning system that induces and revises first-order logic (FOL) theories for multiple classes from examples. It uses two inductive refinement operators to fix wrong classifications of the current theory. In case a positive example is rejected, the system generalizes the definition of the corresponding concept by dropping some conditions (ensuring consistency with all the past negative examples), or by adding to the current theory (if consistent) a new alternative definition of the concept. When a negative example is explained, a specialization of one of the theory definitions that concur in explaining the example is attempted, by adding to it some (positive or negative) conditions which characterize all the past positive examples and discriminate them from the current negative one. When dealing with examples describing text, the system has difficulty generalizing theories if lexical knowledge is missing. Therefore, to handle theory refinement for text classification we have introduced in the system an operator for finding common parents of words in the WordNET knowledge base. The following example clarifies how generalization is performed using WordNET:
Let’s consider the following phrases: The enterprise bought shares. The company acquired stocks. These represent two examples for the learning system. When the first example comes, the learning system generates a theory to properly classify it. When the second example is given to the system, it fails to explain it and tries to generalize the current theory. It fails to generalize the theory since it does not have any further information on the proper logical leterals to use in the theory revision process. But if we use WordNET, we can navigate the concept hierarchy and find that the couples enterprise-company, buy-acquire and share-stock share common parents, therefore can be generalized using the most specific or general common parent in the WordNET graph. Each example is expressed in Horn logic and describes the structural composition of the text (subject, verb, complement) and the part-of-speech of each component. This is performed using the rule-based approach described in Section 2.3. The Prolog representation of one of the above examples is: observation(text1) :- phrase(text1,text1 e1), subject(text1 e1,text1 e2), token(text1 e2,text1 e3), company(text1 e3), noun(text1 e3), verb(text1 e1,text1 e4), token(text1 e4,text1 e5), buy(text1 e5), past tense(text1 e5), complement object(text1 e1,text1 e6), token(text1 e6,text1 e7), shares(text1 e7), noun(text1 e7). The induction system is used in the context of gene prioritization on a certain disease to classify documents regarding this disease. The mapping gene-to-doc coming from EntrezGene does not specify if a document is on a particular disease, thus the learning system once trained is used to properly weight the matrix gene×documents (Figure 1). Training examples are taken from PubMed collecting positive examples on the interesting disease and negative examples on any other disease, where each example is constructed from the document abstract. The learning system induces theories that can be then used to classify unknown texts. 2.2
Instance-Based System for Text Clustering Regarding Genes
Clustering aims at organizing a set of unlabeled patterns into groups of homogeneous elements based on their similarity. The similarity measure exploited to evaluate the distance between elements determines the effectiveness of the clustering algorithms. Differently from the case of attribute-value representations [17], completely new comparison criteria and measures must be defined for FOL descriptions, since they do not induce a Euclidean space. Our proposal is based on a similarity assessment technique for Horn clauses introduced in [14], and exploits the resulting similarity value as a distance measure in a classical k-means clustering algorithm, based on medoids instead of centroids as cluster prototypes guiding the partitioning process (due to the non-Euclidean space issue); a sketch of this medoid-based scheme is given below.
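The following is a minimal k-medoids sketch over a precomputed distance function, standing in for the Horn-clause similarity of [14]; the function names and the random initialization are illustrative.

import random

def k_medoids(items, dist, k, iters=20):
    # items: list of hashable ids (e.g. document ids); dist(a, b): distance
    medoids = random.sample(items, k)
    clusters = {}
    for _ in range(iters):
        clusters = {m: [] for m in medoids}
        for x in items:                       # assign each item to closest medoid
            clusters[min(medoids, key=lambda m: dist(x, m))].append(x)
        new = [min(c, key=lambda y: sum(dist(y, x) for x in c))
               for c in clusters.values() if c]   # recompute medoids
        if set(new) == set(medoids):
            break
        medoids = new
    return clusters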
In order to properly deal with text, the distance in [14] is enriched with lexical knowledge from WordNET, defining a taxonomic similarity element as part of the overall similarity. This taxonomic similarity between couples of words in the text is defined as the intersection of the graphs of the parents in the is-a hierarchy of concepts in WordNET: the more nodes these graphs have in common, the more similar the respective words are. Clustering is applied to abstracts from EntrezGene in the following way. Disease genes are identified by the domain expert (there are already a large number of repositories which detail known genes for different diseases) and related documents for each gene are taken from EntrezGene. The same is performed for a series of candidate genes suspected to be responsible for the disease. The abstracts are pre-processed with the system presented in Section 2.3 and transformed into Horn clauses. These clauses are given in input to the clustering algorithm, which produces a set of clusters containing documents on disease and candidate genes. These clusters provide precious information to be analyzed in order to discover in the same cluster elements regarding disease and candidate genes. This is useful to produce a final prioritization based on the similarity values computed during clustering.

2.3 Rule-Based System for Syntactic and Logical Analysis of Text
Natural language processing (NLP) is one of the classical challenges for Artificial Intelligence and Machine Learning. Many NLP approaches rely on knowledge about natural language instead of statistical training on corpora. Our rule-based system falls among the knowledge-based approaches to NLP. The rule-based component is part of a larger system, DOMINUS [15], which performs a number of pre-processing steps on the raw text before it is given in input to the rule-based component. Specifically, after a tokenization step that aims at splitting the text into homogeneous components such as words, values, dates and nouns, and a language identification step, the system also carries out additional steps that are language-dependent: PoS-tagging (each word is assigned the grammatical role it plays in the text), stopword removal (less frequent or uniformly frequent items in the text, such as articles, prepositions, conjunctions, etc., are ignored to improve effectiveness and efficiency), and stemming (all forms of a term are reduced to a standardized form, this way reducing the amount of elements). After these basic NLP steps, the text is given in input to the rule-based system, which performs Syntactic Analysis (yielding the grammatical structure of the sentences in the text) and Logical Analysis (providing the role of the various grammatical components in the sentences). After each grammatical component has been tagged with its semantic role, the structure is saved in an XML format for future use and then transformed into a Prolog representation as described in Section 2.1. The Prolog clause represents an example for the rule-induction system or a training instance for the clustering algorithm.
2.4 Quantitative Keyword Extraction Method
The quantitative keyword extraction method implemented in our system is a naïve Bayes technique based on the concepts of frequency and position of a term and on the independence of such concepts [16]. Indeed, a term is a possible keyword candidate if the frequency of the term is high both in the document and in the collection. Furthermore, the position of a term (both in the whole document and in a specific sentence or section) is an interesting feature to consider, since a keyword is usually positioned at the beginning/end of the text. Such features are combined according to the Bayes theorem in a formula to calculate the probability of a term to be a keyword; $P(key|T, D, PT, PS)$ is given by:

$$P(key|T, D, PT, PS) = \frac{P(T|key) \cdot \prod_{i=1}^{|insD|} P(D_i|key) \cdot \prod_{j=1}^{|insT|} P(PT_j|key) \cdot \prod_{k=1}^{|insS|} P(PS_k|key)}{P\left(\sum_{i=1}^{|insD|} D_i + \sum_{j=1}^{|insT|} PT_j + \sum_{k=1}^{|insS|} PS_k\right)} \quad (1)$$

where $P(key)$ represents the a priori probability that a term is a keyword (the same for each term), $P(T|key)$ is the standard tf-idf value of the term, and $P(D|key)$, respectively $P(PT|key)$ and $P(PS|key)$, are computed by dividing the distance of the first occurrence of the term from the beginning of the section ($D$), document ($PT$), sentence ($PS$) by the number of terms in the section, respectively document and sentence. Finally, $P(D, PT, PS)$ is computed by adding the distances of the first occurrence of the term from the beginning of the section, document and sentence. Since a term could occur in more than one document, section or sentence, the sums of the values are considered. In this way, the probabilities for the candidate keywords are calculated and the first k with the highest probability are considered as the final keywords for the document. In the specific context, keywords are extracted from documents of EntrezGene regarding disease and candidate genes. For each document a set of weighted keywords is extracted and saved into a document × term matrix. This matrix is then combined with the gene-to-doc mapping given by EntrezGene in order to obtain the final matrix gene × terms. Different clustering algorithms are then applied to this matrix, such as simple k-means, Expectation-Maximization clustering or Cobweb. A toy rendering of the frequency-plus-position scoring is sketched below.
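A toy rendering, assuming each document is given as a list of lowercase terms; it keeps only the tf-idf and first-position ingredients of Eq. (1) rather than the full estimate of [16].

import math
from collections import Counter

def keyword_scores(docs, k=10):
    # docs: list of documents, each a list of terms
    n_docs = len(docs)
    df = Counter(t for d in docs for t in set(d))     # document frequency
    result = []
    for doc in docs:
        tf = Counter(doc)
        scores = {}
        for term, f in tf.items():
            tfidf = (f / len(doc)) * math.log(n_docs / df[term])  # ~ P(T|key)
            first = doc.index(term) / len(doc)        # relative first position
            scores[term] = tfidf * (1.0 - first)      # earlier occurrence scores higher
        result.append(sorted(scores, key=scores.get, reverse=True)[:k])
    return result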
2.5 Qualitative Keyword Extraction Method That Exploits Lexical Knowledge
In [18] it was shown that quantitative keyword extraction methods are complementary to qualitative ones. The quantitative method basically exploits term frequency and is more related to a collection of documents, while the qualitative methods, exploiting lexical knowledge, are in general more related to the single document. The qualitative method implemented in our suite exploits a WordNET-based density function defined in [19]. The method works as follows:
terms not included in WordNet (frequent words such as articles, pronouns, conjunctions and prepositions) are not evaluated for classification, this way implicitly performing a stop-word removal. Given the set $W = \{t_1, ..., t_n\}$ of terms in a sentence, each having a set of associated synsets $S(t_i)$, a generic synset $s$ will have weights:

– $p(S(t_i), s) = 1/|S(t_i)|$ if $s \in S(t_i)$, 0 otherwise, in $S(t_i)$;
– $p(W, s) = \sum_{i=1,..,n} p(S(t_i), s)/|W|$ in sentence $W$.
If a term $t$ is not present in WordNet, $S(t)$ is empty and $t$ will not contribute to the computation of $|W|$. The weight of a synset associated to a single term $t_i$ is $1/(|W| \cdot |S(t_i)|)$. The normalized weight for a sentence is equal to 1. Given a document $D$ made up of $m$ sentences $W_i$, each with associated weight $w_i > 0$, the weight of a sentence is $p(D, W_i) = w_i / (\sum_{k=1,..,m} w_k)$. The total weight for a document, given by the sum of the weights of all its sentences, is equal to 1. Thus, the weight of a synset $s$ in a document can be defined as: $p(D, s) = \sum_{j=1,...,m} p(W_j, s) \cdot p(D, W_j)$. In order to assign a document to a category, the weights of the synsets in the document that refer to the same WordNet Domains category are summed, and the category with the highest score is chosen. This Text Categorization technique, differently from traditional ones, represents a static classifier that does not need a training phase, and takes the categories from WordNet Domains. The keyword extraction algorithm, after computing the density function, proceeds as follows:

1. sort the list of synsets in the document by decreasing weight;
2. assign to the document the first k terms in the document referred to the synsets with highest weight in the list;
3. for each pair synset-weight create the pair label-weight, where label is the one that WordNet Domains assigns to that synset;
4. sort the pairs label-weight by decreasing weight;
5. select the first n domain labels that are above a given quality threshold.

After assigning weights to all the synsets expressed by the document under processing, the synsets with the highest ranking can be selected, and the corresponding terms can be extracted from the document as best representatives of its content; a sketch of the density computation is given below. Keywords extracted with the qualitative method can be used in the same way as for the quantitative method, as described in the previous section. An interesting point to investigate is the intersection of the common keywords found by both methods. In [18] it was found that, in a large collection of documents, there is an important intersection between the two methods, and this can be exploited by taking into consideration only the keywords found by both methods.
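A minimal sketch of the density computation, assuming uniform sentence weights $w_i$ and a caller-supplied synsets_of lookup (e.g. backed by WordNet); the names are illustrative.

from collections import defaultdict

def synset_density(sentences, synsets_of):
    # sentences: list of token lists; synsets_of(term) -> list of synset ids
    m = len(sentences)
    doc = defaultdict(float)                  # p(D, s)
    for sent in sentences:
        covered = [t for t in sent if synsets_of(t)]     # terms counted in |W|
        if not covered:
            continue
        p_sent = defaultdict(float)           # p(W, s)
        for t in covered:
            ss = synsets_of(t)
            for s in ss:
                p_sent[s] += 1.0 / (len(covered) * len(ss))  # 1/(|W|*|S(t_i)|)
        for s, w in p_sent.items():
            doc[s] += w / m                   # uniform p(D, W_j) = 1/m
    return doc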
3 Experimental Evaluation
The suite of components is currently being experimented on a number of candidate genes. There are different scenarios in which the framework could be used, and it is currently being evaluated how to use it in practice for an important disease.
Fig. 1. The Machine Learning and knowledge-based framework
Here we describe each preliminary experiment that we are performing. Figures 1 and 2 present the various components of the pipeline.

Scenario 1. Disease and candidate genes are taken from known lists of genes. Abstracts of documents regarding candidate and disease genes are taken from EntrezGene and given in input to the two keyword extraction methods. A document × term matrix is produced from each of them (a single matrix can be obtained by taking the keywords at the intersection of both methods). Then this matrix is combined with the gene-to-doc mapping of EntrezGene to produce the final matrix gene × terms. This matrix is then transformed into a suitable representation for clustering algorithms such as k-means, EM or Cobweb.

Scenario 2. Scientific papers regarding a certain disease (and not) are taken from MEDLINE, serving as positive and negative examples for the learning system. These are first given in input to the pre-processing engine, which performs the basic NLP steps and the syntactic and logical analysis of the text. The output of this phase is a structural representation of the text with the semantic role of each text element. This representation is transformed into Prolog examples for the rule-induction system, which learns a semantic text classifier exploiting also the WordNET lexical knowledge base. The classifier is then used to properly weight papers from EntrezGene (regarding the genes involved) that do not have any tagging or labeling about the disease. The papers classified as not regarding the given disease are weighted differently from the papers classified as treating that disease. This produces a weighted matrix gene × documents that, combined with the matrix in Scenario 1, gives the final matrix gene × terms for the clustering of genes.
Fig. 2. Instance-based learning for gene clustering on first-order descriptions
Scenario 3. Disease and candidate genes are taken from domain experts or from known lists in online repositories and given in input to the pre-processing component, which produces a Prolog clause for each abstract. These are given to the clustering algorithm that uses the similarity function defined on Horn clauses. Clustered documents are then mapped to a gene clustering through the gene-to-doc mapping of EntrezGene.

Scenario 4. Scientific papers regarding specific genes are taken from MEDLINE and labeled with the aid of a domain expert. These are given to the pre-processing engine and then to the learning system, which produces an accurate text classifier for each gene. Then other unknown papers from MEDLINE are given in input to the classifier, which assigns each document to a gene. This process produces a novel gene-to-doc mapping which, combined with the matrix document × term from Scenario 1, gives the final matrix gene × terms on which clustering is then performed.

The pipeline is designed also to use domain vocabularies. Currently the vocabulary to be used is MeSH, but we intend to extend to others such as eVOC, GO, OMIM and LDDB. Current experiments regard genes which are suspected to be involved in a certain disease. The respective documents are planned to be taken from MEDLINE according to the mapping of EntrezGene. Scenario 1 will be experimented on a collection of a very large number of documents that regard disease and candidate genes. Keywords will be extracted and weighted with the quantitative method described in the previous sections. For each gene a vector of weights will be produced and then the Euclidean distance between the vectors will be computed. Finally, for each candidate gene the score is given by the sum of the distances of the respective vectors, as sketched below. Complete results on these experiments will be reported in an extended version of this work.
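A small sketch of this Scenario 1 scoring, assuming each gene has already been mapped to a keyword-weight vector of equal dimension; the names are illustrative.

import numpy as np

def candidate_scores(candidate_vecs, disease_vecs):
    # candidate_vecs, disease_vecs: dicts gene -> np.array of keyword weights
    # score = sum of Euclidean distances to the known disease genes
    # (a lower score means the candidate is closer to the disease profile)
    return {g: sum(float(np.linalg.norm(v - d)) for d in disease_vecs.values())
            for g, v in candidate_vecs.items()}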
4 Related Work and Discussion
Text-based gene prioritization is receiving much attention and different approaches have been proposed for this purpose. In [20] a tool was proposed that scores concepts in GO according to their relevance to each disease via text mining. Potential candidate genes are then scored through a BLASTX search against reference sequences. Another approach, proposed in [21], uses shared GO annotations to identify genes likely to be involved in the same disease. A similar text-mining approach was proposed in [22], where candidate gene selection is performed using the eVOC ontology as a controlled vocabulary. eVOC terms are first associated with disease names according to co-occurrence in MEDLINE abstracts. Then the identified terms are ranked and the genes annotated with the top-ranking terms are selected. One of the few approaches that exploits machine learning, and one of the most promising ones, is that proposed in [23]. The authors use machine learning to build a model and then rank the test set of candidate genes according to their similarity to the model. The similarity is computed as the correlation for vector space data and as the BLAST score for sequence data. The advantage of the method is that it incorporates multiple genomic data sources (microarray, InterPro, BIND, sequence, GO annotation, Motif, Kegg, EST, and text mining). Recently, in [24] a gene prioritization tool for complex traits was proposed, which ranks genes by comparing the standard correlation of term-frequency vectors (TF profiles) of annotated terms in different ontological descriptions and integrates multiple ranking results by arithmetical (min, max, and average) and parametric integrations. The most closely related to our approach is that of [25], which exploits the textual profile of genes for their clustering.
that we exploit in our similarity function in order to better capture the meaning of each text and properly define the similarity between texts. Finally, qualitative and quantitative keyword extraction methods are plugged in together to boost extraction of most significant terms. The qualitative methods, being related to the single document, can find more specialized words that can properly represent the single document. On the other side, the quantitative method, being related to a collection of documents, tries to capture more general words found in the entire collection and that can distinguish between the documents.
5
Conclusion
Most genetic diseases are characterized by many potential candidate genes that can cause the disease. Gene expression analysis typically produces a large number of co-expressed genes that could be potentially responsible for a given disease. Extracting prior knowledge from text-based genomic information sources is essential in order to reduce the list of potential candidate genes to be then further analyzed in laboratory. In this paper we present a suite of Machine Learning algorithms and knowledge-based components for improving the computational gene prioritization process. The pipeline includes basic Natural Language Processing capabilities, advanced text classification and clustering algorithms, robust information extraction components based on qualitative and quantitative keyword extraction methods and exploitation of lexical knowledge bases for semantic text processing.
References
1. Masys, D.R., Welsh, J.B., Fink, J.L., Gribskov, M., Klacansky, I., Corbeil, J.: Use of keyword hierarchies to interpret gene expression. Bioinformatics 17, 319–326 (2001)
2. Feldman, R., Sanger, J.: Text Mining Handbook: Advanced Approaches in Analyzing Unstructured Data. Cambridge University Press, Cambridge (2006)
3. Jenssen, T., Laegreid, A., Komorowski, J., Hovig, E.: A literature network of human genes for high-throughput analysis of gene expression. Nat. Genet. 28, 21–28 (2001)
4. Raychaudhuri, S., Schutze, H., Altman, R.B.: Using text analysis to identify functionally coherent gene groups. Genome Res. 12, 1582–1590 (2002)
5. Shatkay, H., Edwards, S., Boguski, M.: Information retrieval meets gene analysis. IEEE Intelligent Systems (Special Issue on Intelligent Systems in Biology) 17, 45–53 (2002)
6. Chaussabel, D., Sher, A.: Mining microarray expression data by literature profiling. Genome Biol. 3 (2002)
7. Glenisson, P., Antal, P., Mathys, J., Moreau, Y., Moor, B.D.: Evaluation of the vector space representation in text-based gene clustering. In: Pacific Symposium on Biocomputing, pp. 391–402 (2003)
8. Lavrac, N., Dzeroski, S.: Inductive Logic Programming: Techniques and applications. Ellis Horwood, UK (1994)
9. Miller, G.A., Beckwith, R., Fellbaum, C., Gross, D., Miller, K.: Introduction to WordNet: An On-line Lexical Database. International Journal of Lexicography 3(4), 235–244 (1990)
10. Fellbaum, C.: WordNet an Electronic Database, pp. 1–23. MIT Press, Cambridge (1998)
11. Magnini, B., Strapparava, C., Pezzulo, G., Gliozzo, A.: The role of domain Information in Word Sense Disambiguation. Natural Language Engineering 8(4), 359–373 (2002)
12. Magnini, B., Cavagli, G.: Integrating Subject Field Codes into WordNet. ITC-irst. In: Proc. Second International Conference on Language Resources and Evaluation, LREC 2000, pp. 1–6 (2000)
13. Esposito, F., Ferilli, S., Fanizzi, N., Basile, T.M., Di Mauro, N.: Incremental multistrategy learning for document processing. Applied Artificial Intelligence: An International Journal 17(8/9), 859–883 (2003)
14. Ferilli, S., Basile, T.M.A., Biba, M., Di Mauro, N., Esposito, F.: A General Similarity Framework for Horn Clause Logic. Fundamenta Informaticae Journal 90(1-2), 43–66 (2009)
15. Esposito, F., Ferilli, S., Basile, T.M.A., Di Mauro, N.: Machine Learning for Digital Document Processing: From Layout Analysis To Metadata Extraction. In: Machine Learning in Document Analysis and Recognition, pp. 105–138 (2008)
16. Uzun, Y.: Keyword Extraction Using Naïve Bayes. Bilkent University, Department of Computer Science (2005)
17. Li, M., Chen, X., Li, X., Ma, B., Vitanyi, P.: The similarity metric. IEEE Transactions on Information Theory 50(12) (December 2004)
18. Ferilli, S., Biba, M., Basile, T.M.A., Esposito, F.: Combining Qualitative and Quantitative Keyword Extraction Methods with Document Layout Analysis. In: Proceedings of the 5th Italian Research Conference on Digital Libraries (IRCDL 2009), DELOS: an Association for Digital Libraries (2009)
19. Angioni, M., Demontis, R., Tuveri, F.: A Semantic Approach for Resource Cataloguing and Query Resolution. Communications of SIWN 5, 62–66 (2008)
20. Perez-Iratxeta, C., Wjst, M., Bork, P., Andrade, M.A.: G2D: a tool for mining genes associated with disease. BMC Genet. 6, 45 (2005)
21. Turner, F.S., Clutterbuck, D.R., Semple, C.A.M.: POCUS: mining genomic sequence annotation to predict disease genes. Genome Biol. 4(11), R75 (2003)
22. Tiffin, N., Kelso, J.F., Powell, A.R., Pan, H., Bajic, V.B., Hide, W.A.: Integration of text- and data-mining using ontologies successfully selects disease gene candidates. Nucleic Acids Res. 33(5), 1544–1552 (2005)
23. Aerts, S., Lambrechts, D., Maity, S., Van Loo, P., Coessens, B., De Smet, F., Tranchevent, L.C., De Moor, B., Marynen, P., Hassan, B., Carmeliet, P., Moreau, Y.: Gene prioritization through genomic data fusion. Nat. Biotechnol. 24(5), 537–544 (2006)
24. Gaulton, K.J., Mohlke, K.L., Vision, T.: A computational system to select candidate genes for complex human traits. Bioinformatics 23(9), 1132–1140 (2007)
25. Glenisson, P., Coessens, B., Van Vooren, S., Mathys, J., Moreau, Y., De Moor, B.: TXTGate: profiling gene groups with text-based information. Genome Biol. 5(6), R43 (2004)
SILACAnalyzer - A Tool for Differential Quantitation of Stable Isotope Derived Data

Lars Nilse1,2, Marc Sturm2, David Trudgian3,4, Mogjiborahman Salek4, Paul F.G. Sims5, Kathleen M. Carroll6, and Simon J. Hubbard1

1
Faculty of Life Sciences, University of Manchester, Michael Smith Building, Oxford Road, Manchester M13 9PT, UK
[email protected],
[email protected] 2 Wilhelm Schickard Institute for Computer Science, Eberhard Karls University Tübingen, Sand 14, 72076 Tübingen, Germany
[email protected] 3 Centre for Cellular and Molecular Physiology, University of Oxford, Roosevelt Drive, Oxford OX3 7BN, UK
[email protected] 4 Sir William Dunn School of Pathology, University of Oxford, South Parks Road, Oxford OX1 3RE, UK
[email protected] 5 Faculty of Life Sciences, Manchester Interdisciplinary Biocentre, 131 Princess Street, Manchester M1 7DN, UK
[email protected] 6 Manchester Centre for Integrative Systems Biology, Manchester Interdisciplinary Biocentre, 131 Princess Street, Manchester M1 7DN, UK
[email protected] Abstract. Quantitative proteomics is a growing field where several experimental techniques, such as those based around stable isotope labelling, are reaching maturity. These advances require the parallel development of informatics tools to process and analyse the data, especially for high-throughput experiments seeking to quantify large numbers of proteins. We have developed a novel algorithm for the quantitative analysis of stable isotope-based proteomics data at the peptide level. Without prior formal identification of the peptides by MS/MS, the algorithm determines the mass-to-charge ratio m/z and retention time t of stable isotope-labelled peptide pairs and calculates their relative ratios. It supports several non-proprietary XML input formats, requires only minimal parameter tuning, and runs fully automated. We have tested its performance on a low-complexity peptide sample in an initial study. In comparison to a manual analysis and an automated approach using MSQuant, it performs as well or better, and therefore we believe it has utility for groups wishing to perform high-throughput experiments.
1
Introduction
Proteomics experiments are becoming increasingly automated, genome-wide and quantitative [1–3]. The move towards high-throughput quantitative methods leads
to larger data sets and data processing challenges. Many approaches are used for quantitation. Stable isotope labelling methods, where labelled amino acids are introduced in cell culture (e.g. SILAC [4]), allow otherwise identical cells to be cultured whose proteins and peptides differ only by a predictable, fixed mass shift. Quantitation can then be performed by comparing the intensities of labelled and unlabelled peptides observed during mass spectrometry (MS) experiments. The related QconCAT technique concerns recombinant proteins [5], but also performs quantitation by considering labelled peptides. Manual analysis of quantitative data, where individual peak pairs must be identified and their intensities integrated at the ion chromatogram level [6], is acceptable for small-scale studies, but is less attractive for several hundred or more peptide/protein identifications. A wide range of software tools for automated quantitation exist, provided by MS instrument vendors, third-party companies, and academic groups. Commercial packages such as Mascot Distiller (Matrix Science) (http://www.sanger.ac.uk/turl/90c) support a range of quantitation methods, but have significant licensing costs. Vendor software and many third-party packages, e.g. MSQuant (http://www.sanger.ac.uk/turl/90f) and MaxQuant [7], are instrument specific, compatible with data files from a subset of MS instruments only. In this paper we introduce SILACAnalyzer, a tool which offers high-quality automatable processing comparable with the best manual analyses, but does not require prior identification of peptides or indeed any other molecular ion species prior to quantitation. The software has minimal user parameters, is freely available, and has been integrated in the OpenMS framework [8], which ensures it can be used in pipelines and as a GUI-driven tool. It is compatible with a wide range of formats including the Proteome Standards Initiative (PSI) supported formats [9]. We believe these advantages ensure SILACAnalyzer is a useful addition to the range of software tools available for quantitation in the proteomics field.
2
Methods
The combination of liquid chromatography (LC) and mass spectrometry (MS) is a powerful technique for the analysis of complex protein samples. As the retention time t increases, different sets of molecules elute from the LC column. Each of these simplified mixtures is then analysed in a mass spectrometer. The resulting set of spectra (intensity vs mass-to-charge ratio m/z) forms an intensity map in the two-dimensional t-m/z plane. Such a raw map of LC-MS data [10] is the starting point of our analysis. 2.1
Algorithm
Our algorithm is divided into three parts: filtering, clustering and linear fitting, see Fig 1(d), (e) and (f). In the following discussion let us consider a particular mass spectrum at retention time 1350 s, see Fig 1(a). It contains a peptide of mass
Fig. 1. (a) Part of a raw spectrum showing the isotopic envelopes of a peptide pair, (b) spline fit on the raw data, (c) standard intensity cut-off filter, (d) non-local intensity cut-off filter as used in the algorithm, (e) clusters of raw data points that passed the filter, (f) linear fit on intensity pairs of a single cluster
1492 Da and its 6 Da heavier labelled counterpart (the exact mass shift is 6.02 Da; in the following discussion we use the 6 Da approximation). Both are doubly charged [M+2H]2+ ions in this instance. Their isotopic envelopes therefore appear at m/z 746 and 749 in the spectrum. The isotopic peaks within each envelope are separated by 0.5. The spectrum was recorded at finite intervals. In order to read accurate intensities at arbitrary m/z we spline-fit over the data, see Fig 1(b). We would like to search for such peptide pairs in our LC-MS data set. As a warm-up let us consider a standard intensity cut-off filter, see Fig 1(c). Scanning
through the entire m/z range (red dot), only data points with intensities above a certain threshold pass the filter. Unlike such a local filter, the filter used in our algorithm takes intensities at a range of m/z positions into account, see Fig 1(d). A data point (red dot) passes if 1. all six intensities at m/z, m/z+0.5, m/z+1, m/z+3, m/z+3.5 and m/z+4 lie above a certain threshold and 2. the intensities within the first envelope (at m/z, m/z+0.5 and m/z+1) and second envelope (at m/z+3, m/z+3.5 and m/z+4) decrease successively (see footnote 2). Let us now filter not only a single spectrum but all spectra in our data set (see footnote 3). Data points that pass the filter form clusters in the t-m/z plane, see Fig 1(e). Each cluster centers around the unlabelled peptide of a pair. We now use hierarchical clustering methods to assign each data point to a specific cluster. The optimum number of clusters is determined by maximizing the silhouette width [11] of the partitioning (see footnote 4). Each data point in a cluster corresponds to three pairs of intensities (at [m/z, m/z+3], [m/z+0.5, m/z+3.5] and [m/z+1, m/z+4]). A plot of all intensity pairs in a cluster shows a clear linear correlation, see Fig 1(f). Using linear regression we can determine the relative amounts of labelled and unlabelled peptides in the sample.
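To make the three parts concrete, the following Python sketch outlines the non-local pair filter and the ratio estimation for one spectrum. It is an illustrative re-implementation under simplifying assumptions (2+ charge, 6 Da label, hence m/z separation 3 and isotopic spacing 0.5, and a spline already fitted to the spectrum); the actual tool is the C++ implementation described in Sect. 2.2, and all function names here are ours.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def passes_pair_filter(spline, mz, cutoff, shift=3.0, iso=0.5):
    """Non-local filter (Fig. 1(d)): a position mz passes if the first three
    isotopic intensities of the light envelope (mz, mz+0.5, mz+1) and of the
    heavy envelope (mz+3, mz+3.5, mz+4) all exceed the cutoff and decrease
    (non-strictly) within each envelope."""
    light = [float(spline(mz + k * iso)) for k in range(3)]
    heavy = [float(spline(mz + shift + k * iso)) for k in range(3)]
    above = all(v > cutoff for v in light + heavy)
    decreasing = (light[0] >= light[1] >= light[2]
                  and heavy[0] >= heavy[1] >= heavy[2])
    return above and decreasing

def cluster_ratio(spline, cluster_mzs, shift=3.0, iso=0.5):
    """Linear fit (Fig. 1(f)): regress heavy on light intensities over the
    three isotopic peak pairs of every point in one cluster; the slope
    estimates the labelled/unlabelled ratio."""
    light, heavy = [], []
    for mz in cluster_mzs:
        for k in range(3):
            light.append(float(spline(mz + k * iso)))
            heavy.append(float(spline(mz + shift + k * iso)))
    slope, _intercept = np.polyfit(light, heavy, 1)
    return slope
```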
2.2 Implementation
We implemented the above algorithm as part of TOPP - The OpenMS Proteomics Pipeline [8]. TOPP is based on OpenMS [10], an open-source C++ software library for the development of proteomics software tools. OpenMS provides a wide range of data structures and algorithms for data handling, signal processing, feature detection and visualization. It is written in ANSI C++ and makes use of several other libraries where appropriate, see Fig 2. As a result it is platform-independent, stable and fast. The classes in OpenMS are easy to use, intuitive and well documented. In short, developers can focus on their algorithms and do not need to worry about technicalities. 2
3
4
The second rule leads to a significant reduction of the false positive rate in the peptide pair detection. Background signals are strongly suppressed. This rule has got one drawback: At very high masses the first peak of an envelope is no longer the one of highest intensity. In such a case data points of the first peak do not pass the filter. The corresponding cluster contains fewer data points and the accuracy of the ratio is slightly reduced. The filtering step is controlled by two parameters: mz step width determines the step width with which we scan over the spline-fitted spectrum. It should be of the order of the m/z resolution of the raw data. intensity cutoff specifies the intensity cutoff itself, see dashed line in Fig 1(d). The clustering step requires two further parameters: The hierarchical clustering algorithm performs best for symmetric clusters, i.e. width and height of a cluster should be of the same order. We therefore scale the retention times by a dimensionless factor rt scaling. For the example in Fig 3 we choose rt scaling=0.02. In some cases the number of clusters returned by the silhouette width maximization is not optimal. The dimensionless parameter cluster number scaling can be used for fine-tuning.
Fig. 2. The TOPP tools including SILACAnalyzer are based on the OpenMS software library. OpenMS in turn builds on several other libraries: The Computational Geometry Algorithms Library (CGAL) (http://www.cgal.org) is used for geometric calculations. The GNU Scientific Library (GSL) (http://www.sanger.ac.uk/turl/917) deals with numerics problems. Xerces (http://www.sanger.ac.uk/turl/918) is used for XML file handling. Qt (http://www.sanger.ac.uk/turl/919) provides the platform independence layer and is used for the graphical user interface. LibSVM (http://www.sanger.ac.uk/turl/91a) is used for machine learning.
TOPP comprises a set of specialized software tools which can be used to design analysis pipelines for particular LC-MS experiments. OpenMS and therefore TOPP support the non-proprietary file formats mzData [12], mzXML [13] and mzML. Vendor software either allows the export to these formats or there exist helper tools that can convert from vendor formats to these open standards. In this way it is possible to analyse data sets from mass spectrometers irrespective of the vendor. The new TOPP tool SILACAnalyzer is an implementation of the algorithm described above. In the following section we test its performance in detail. 2.3
Samples
For the test we prepared three samples, each containing 25 pairs of known peptides plus an excess of unlabelled unpaired peptides (≈ 250). Each pair consisted of a labelled and an unlabelled form of the peptides. Their ratio was varied from sample to sample. The samples were analyzed in two different mass spectrometers, a Thermo LTQ Orbitrap XL and a Bruker Esquire 3000plus. On each machine three identical aliquots of each sample were run. The resulting 18 data sets were then analyzed in three different ways: manually, using SILACAnalyzer and using MSQuant. The analyses were restricted to peptide pairs with a mass difference of 6 Da in charge state 2+ (the data sets of the Thermo orbitrap contain predominantly 2+ and 3+ ions, whereas data sets of the Bruker ion trap are dominated by 1+ and 2+ ions; in order to compare like with like, we include only 2+ ions in our analysis). They consequently appear as pairs with m/z separation 3 in the spectra. The peptides are the result of the tryptic digest of a QconCAT protein [5]. Table 1 lists all 25 peptides, T1 to T25, and their associated post-translational modifications.
Table 1. Peptides T1 to T25 of the artificial QconCAT protein [5] and their post-translational modifications: pyroglutamate formation (pyro-glu) and methionine oxidation (meth-ox). Peptides in the lower part are not expected to be present in the spectra, see discussion in the text.

peptide  sequence                        m/z
T3       GFLIDGYPR                       519.27
T4       VVLAYEPVWAIGTGK                 801.95
T5       NLAPYSDELR                      589.30
T6       GDQLFTATEGR                     597.79
T7       SYELPDGQVITIGNER                895.95
T8       QVVESAYEVIR                     646.85
T8'      QVVESAYEVIR (pyro-glu)          638.34
T9       LITGEQLGEIYR                    696.38
T10      ATDAESEVASLNR                   681.83
T11      SLEDQLSEIK                      581.31
T12      VLYPNDNFFEGK                    721.85
T13      GILAADESVGTMGNR                 745.87
T13'     GILAADESVGTMGNR (meth-ox)       753.87
T14      ATDAEAEVASLNR                   673.84
T15      LQNEVEDLMVDVER                  844.91
T15'     LQNEVEDLMVDVER (meth-ox)        852.91
T16      LVSWYDNEFGYSNR                  875.40
T18      QVVDSAYEVIK                     625.84
T18'     QVVDSAYEVIK (pyro-glu)          617.32
T19      AAVPSGASTGIYEALELR              902.98
T20      LLPSESALLPAPGSPYGR              913.00
T21      FGVEQNVDMVFASFIR                929.96
T21'     FGVEQNVDMVFASFIR (meth-ox)      937.96
T22      GTGGVDTAAVGAVFDISNADR           996.99

T1       MAGR                            217.61
T2       VIR                             194.14
T17      ALESPERPFLAILGGAK               885.00
T23      AGK                             138.09
T24      VICSAEGSK                       447.22
T25      LAAALEHHHHHH                    705.35
Some of these peptides are not expected to appear as pairs with m/z separation 3 in the raw LC-MS data, for the following reasons: The m/z of peptides T1, T2 and T23 are very small and lie outside the scan range used in the respective MS methods. Peptide T17 contains an arginine (R) which is protected from tryptic digestion by the following proline (P); the T17 peptides are consequently separated by a 12 Da shift and will not appear in our search. Peptide T24 contains a cysteine residue which was not protected by reduction and alkylation; we therefore do not expect it to be seen in the analyses. Finally, T25 contains a His tag and is therefore very likely to appear in higher charge states and not as 2+.
Table 2. False positive and negative rates for identification of 216 peptide pairs (3 samples of 3 aliquots each containing 24 peptide pairs)

Thermo orbitrap analysis:
               false negatives  false positives
manual               18                3
SILACAnalyzer        26                9
MSQuant              25                0

Bruker ion trap analysis:
               false negatives  false positives
manual               86               10
SILACAnalyzer       103               10

Table 3. Average standard deviations of peptide pair ratios (only peptide pairs which were identified in all three aliquots were taken into account)

Thermo orbitrap analysis:
               sample 1  sample 2  sample 3
manual          0.0186    0.0590    0.0639
SILACAnalyzer   0.0030    0.0344    0.0485
MSQuant         0.0102    0.0653    0.1228

Bruker ion trap analysis:
               sample 1  sample 2  sample 3
manual          0.1517    0.0464    0.2787
SILACAnalyzer   0.0777    0.0159    0.1581
3
Discussion
In what follows, we will compare performance and ease of use of the three different analyses. In the identification of peptide pairs, all three methods perform well, with broadly equivalent performance, see Table 2. The data presented highlight only the errors, and in the majority of cases (190 out of 216, 88%) the heavy/light peptide pairs were identified, notably without a priori identification in the case of SILACAnalyzer. Some of the peptides, for example T21 and its modified form T21’, appear with a very low signal in the spectra. Finding such pairs is equally challenging for the manual analysis as it is for the two software tools and such pairs were infrequently identified across all experiments and data processing methods. For cases in which peptide pairs were successfully identified in each of the three aliquots, we calculated the average standard deviation of the ratios, shown in Table 3. The results from SILACAnalyzer are the most consistent ones of the three analyses, a desirable property in an analysis tool. Since the precise source of the underlying systematic error is unknown in this system, this is an important metric to assess the quality of the approach.
Fig. 3. (a) SILACAnalyzer identifies 22 of the 24 peptide pairs in a data set of sample 2, (b) zoom of cluster 1, (c) plot of intensity pairs of cluster 1
Let us now compare how much effort was needed to achieve these results. First, it should be noted that an MSQuant analysis of the Bruker data set was not possible since the software did not support the data format. SILACAnalyzer, on the other hand, was able to analyse data from both mass spectrometers. That allowed a direct and unbiased comparison of both machines. Our algorithm requires no prior peptide identification or fitting of the data to theoretical models. The few parameters of the algorithm were optimized within minutes. Once the parameters for a mass spectrometer were found, the analyses were performed autonomously. On the other hand, both MSQuant and manual analysis required human input and took several hours or days, respectively, to complete. We
Fig. 4. Three samples with labelled and unlabelled peptides at ratios 1:3, 4:3 and 8:3 were analysed in a Thermo LTQ orbitrap and a Bruker Esquire 3000plus ion trap mass spectrometer. The ratios were calculated manually, using SILACAnalyzer and MSQuant. (The Bruker data format is not supported by MSQuant. An analysis of these data sets was therefore not possible).
therefore believe SILACAnalyzer will be well suited for the analysis of complex protein samples. The quality of the results obtained is also naturally of relevance, not just the ease of use or the ability to process data without obtaining identifications. We present a comparison of the performance of the different processing pipelines in Fig 4. Again, the performance of SILACAnalyzer is comparable to, and frequently better than, the other methods. The data presented also highlight the inherent variability of quantification data, which is reasonably typical of experimental data viewed at the individual peptide level. A considerable level of variability is present when compared to the expected ratio, which is
generally independent of the data processing method. This is apparent from the fact that individual data points which deviate from expectation are generally seen as triplets which co-cluster with the equivalent points processed by another method. A good example is T15’ for which the ratio is consistently underestimated across all experiments and processing pipelines, as shown in Fig 4.
4
Conclusions
In this paper we have described a new, effective and automated algorithm for the quantitative analysis of LC-MS data sets at the peptide level. Importantly, it requires no prior peptide identification and is therefore well suited to high-throughput analyses, and it can identify molecular species with interesting quantitative changes which may be targeted for further study. Indeed, if only analytes with high-confidence identifications are targeted for quantification, then interesting changes could be lost. Our approach therefore provides the possibility of quantification without identification and can save instrument duty cycles that are usually required for acquiring MS/MS spectra for identification purposes. Using our approach, the interesting changes in the amount of molecular species can be prioritised for identification, maximising the amount of time the instrument spends studying relevant ions. An implementation of the algorithm is available as part of The OpenMS Proteomics Pipeline (TOPP) [8]. In combination with other TOPP tools such as IDMapper, SILACAnalyzer is an important component for the design of quantitative proteomics pipelines. We have tested the performance of our software tool and find its results to be as good as or better than those of existing methods. Its principal advantages over other methods lie in its fast performance, full automation and minimal parameter tuning, as well as the ability to perform quantitation a priori, without peptide identifications.
Availability SILACAnalyzer is available as part of The OpenMS Proteomics Pipeline (TOPP) under the GNU Lesser General Public License (LGPL) from http://www.openms.de. Binary packages for Windows, MacOS and several Linux distributions are available in addition to the platform-independent source package.
Acknowledgments Lars Nilse would like to thank Julia Handl for very helpful discussions and sharing her knowledge on clustering methods, and Oliver Kohlbacher and his group for their hospitality during a visit in Tübingen. David Trudgian wishes to acknowledge the Computational Biology Research Group, Medical Sciences Division, University of Oxford for use of their services in this project.
Funding: This work was supported by the Biotechnology and Biological Sciences Research Council [grant ref BB/E024912/1]. David Trudgian is funded by a John Fell OUP award and the EPA trust. Paul Sims thanks the Wellcome Trust for research support (grant no. 073896).
References
1. Cox, J., Mann, M.: Is proteomics the new genomics? Cell 130(3), 395–398 (2007)
2. Alterovitz, G., Liu, J., Chow, J., Ramoni, M.F.: Automation, parallelism, and robotics for proteomics. Proteomics 6(14), 4016–4022 (2006)
3. de Godoy, L.M.F., Olsen, J.V., Cox, J., Nielsen, M.L., Hubner, N.C., Frohlich, F., Walther, T.C., Mann, M.: Comprehensive mass-spectrometry-based proteome quantification of haploid versus diploid yeast. Nature 455(7217), 1251–1254 (2008)
4. Ong, S.-E., Blagoev, B., Kratchmarova, I., Kristensen, D.B., Steen, H., Pandey, A., Mann, M.: Stable isotope labeling by amino acids in cell culture, SILAC, as a simple and accurate approach to expression proteomics. Mol. Cell. Proteomics 1(5), 376–386 (2002)
5. Beynon, R.J., Doherty, M.K., Pratt, J.M., Gaskell, S.J.: Multiplexed absolute quantification in proteomics using artificial QCAT proteins of concatenated signature peptides. Nat. Methods 2(8), 587–589 (2005)
6. Schulze, W.X., Mann, M.: A novel proteomic screen for peptide-protein interactions. J. Biol. Chem. 279(11), 10756–10764 (2004)
7. Cox, J., Mann, M.: MaxQuant enables high peptide identification rates, individualized p.p.b.-range mass accuracies and proteome-wide protein quantification. Nat. Biotechnol. 26(12), 1367–1372 (2008)
8. Kohlbacher, O., Reinert, K., Gröpl, C., Lange, E., Pfeifer, N., Schulz-Trieglaff, O., Sturm, M.: TOPP – the OpenMS proteomics pipeline. Bioinformatics 23(2), e191–e197 (2007)
9. Martens, L., Orchard, S., Apweiler, R., Hermjakob, H.: Human Proteome Organization Proteomics Standards Initiative: data standardization, a view on developments and policy. Mol. Cell. Proteomics 6(9), 1666–1667 (2007)
10. Sturm, M., Bertsch, A., Gröpl, C., Hildebrandt, A., Hussong, R., Lange, E., Pfeifer, N., Schulz-Trieglaff, O., Zerck, A., Reinert, K., Kohlbacher, O.: OpenMS - an open-source software framework for mass spectrometry. BMC Bioinformatics 9, 163 (2008)
11. Rousseeuw, P.J.: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics 20, 53–65 (1987)
12. Orchard, S., Hermjakob, H., Taylor, C., Binz, P.-A., Hoogland, C., Julian, R., Garavelli, J.S., Aebersold, R., Apweiler, R.: Autumn 2005 Workshop of the Human Proteome Organisation Proteomics Standards Initiative (HUPO-PSI), Geneva, September 4-6 (2005). Proteomics 6(3), 738–741 (2006)
13. Pedrioli, P., et al.: A common open representation of mass spectrometry data and its application to proteomics research. Nat. Biotechnol. 22(11), 1459–1466 (2004)
Non-parametric MANOVA Methods for Detecting Differentially Expressed Genes in Real-Time RT-PCR Experiments

Niccolò Bassani1, Federico Ambrogi1, Roberta Bosotti2, Matteo Bertolotti2, Antonella Isacchi2, and Elia Biganzoli1

1
Institute of Medical Statistics and Biometry “G.A.Maccacaro” - University of Milan Campus Cascina Rosa, via Vanzetti, 5 - 20133 Milan (MI) Italy
[email protected] 2 Biotechnology Dept., Genomics Lab - Nerviano Medical Sciences SRL Viale Pasteur, 10 - 20014 Nerviano (MI)
Abstract. RT-PCR is a quantitative technique of molecular biology used to amplify DNA sequences starting from a sample of mRNA, typically used to explore gene expression variation across treatment groups. Because of the non-normal distribution of the data, non-parametric methods based on the MANOVA approach and on the use of permutations to obtain global F-ratio tests have been proposed to deal with this problem. The issue of analyzing univariate contrasts is addressed via Steel-type tests. Results of a study involving 30 mice assigned to 5 different treatment regimens are presented. MANOVA methods detect an effect of treatment on gene expression, with good agreement between methods. These results are potentially useful to draw out new biological hypotheses to be verified in subsequently designed studies. Future research will focus on the comparison of such methods with classical strategies for analysing RT-PCR data; moreover, work will also concentrate on extending such methods to doubly multivariate designs. Keywords: RT-PCR, gene expression, MANOVA, non-parametric, permutations.
1
Introduction
In molecular biology, reverse transcription polymerase chain reaction (RT-PCR) is a laboratory technique used to amplify DNA copies from specific mRNA sequences. It is the method of choice to detect differentially expressed genes in various treatment groups due to its advantages in detection sensitivity, sequence specificity, large dynamic range, as well as its high precision and reproducible quantitation compared to other techniques [1]. In these investigations one usually has n samples (cell lines, tumor or skin biopsies, etc.), whose mRNA is reverse transcribed into cDNA and then amplified to obtain a value of expression for p
genes with the method of the threshold cycle. Often the n samples have been treated with k different doses of a molecule whose effect in determining a differential expression of the p genes involved is the main objective of the experiment. If the goal is to explore differences between the k groups, MANOVA is a well-established technique for such a purpose. However, some basic assumptions such as the normal distribution of the observations must hold. Moreover, the number of covariates p must not exceed the number of units n, in order to have enough degrees of freedom for the estimation of the effects. In the context of RT-PCR both requirements are often violated: gene expression values typically show a right-skewed distribution, and it is quite common to have a small sample size on which the expression value of a high number of genes is measured. In the past few years there have been various attempts to address such issues. Specifically, Anderson et al. [2,3] and Xu et al. [4] have proposed methods for non-parametric MANOVA. Both these methods have been applied to real datasets of RT-PCR experiments. In the Methods section we briefly outline the two techniques for the one-way situation, as well as a non-parametric procedure for simultaneous univariate tests on the single variables introduced by Steel [5,6], while in the Results section a study on 30 mice for the analysis of the differential expression of 58 genes in tumor samples, performed at Nerviano Medical Sciences, a pharmaceutical company with a focus in oncology located north of Milan, Italy, is presented.
2
Methods
Let $y_\alpha^{(j)}$ be i.i.d. samples from the $p$-dimensional multivariate populations $y^{(j)}$ with mean vectors $\mu^{(j)}$ and covariance $\Sigma$ ($j = 1, \ldots, k$). The hypothesis to test in the classical MANOVA context is
$$H_0: \mu^{(1)} = \mu^{(2)} = \cdots = \mu^{(k)} \qquad (1)$$
that is, there is no difference in the mean gene expression across the k groups. Starting from this hypothesis, Anderson et al. [2,3] and Xu et al. [4] have developed methods for testing differences among groups, using respectively the distances between the observations and the median statistics instead of the mean for the computation of the sum of squares. Both these methods do not assume multivariate normality of the responses as in classical MANOVA, and consider the p variables to be independent. 2.1
Method of Anderson: The Use of Distances
The method proposed by Anderson [2,3] was first introduced for ecological data, which often show patterns that require non-traditional handling. Given a population of N experimental units (i.e. observations) divided into k groups and a set of p variables, the task of determining whether the units differ across groups is transformed into the task of determining whether the distances
between units in the same group differ from those between units in different groups. Note that the term distance is used here to refer to any measure of distance and/or dissimilarity. Thus, starting from an $N \times p$ matrix containing the data, an $N \times N$ distance matrix $D$ is defined, and from this it is possible to compute the total sum of squares as
$$SS_T = \frac{1}{N} \sum_{i=1}^{N-1} \sum_{s=i+1}^{N} d_{is}^2 \qquad (2)$$
where $N$ is the total sample size of the experiment and $d_{is}$ represents the $(i,s)$-th element of $D$, i.e. the distance between observations $i = 1, \ldots, N$ and $s = 1, \ldots, N$. Similarly, the within-group sum of squares is computed as
$$SS_W = \frac{1}{n_j} \sum_{i=1}^{N-1} \sum_{s=i+1}^{N} d_{is}^2\, \epsilon_{is} \qquad (3)$$
where $n_j$ is the number of experimental units in the $j$-th group and $\epsilon_{is}$ takes the value 1 if observations $i$ and $s$ belong to the same group, 0 otherwise. The between-group sum of squares can be obtained, as in the classical ANOVA setting, as $SS_B = SS_T - SS_W$. From these quantities one can compute a pseudo F-ratio in order to test the null hypothesis of no difference between the $k$ treatment groups:
$$F = \frac{SS_B / (k-1)}{SS_W / (N-k)} \qquad (4)$$
As the variables are expected to follow a non-normal distribution, it is not reasonable to assume this test statistic to be distributed like a classical F with $(k-1, N-k)$ degrees of freedom. To test the hypothesis the authors propose to resort to the permutation method, which consists in randomly permuting $m$ times the lines of the original dataset, computing $m$ values of the F statistic, denoted as $F^*$, and then calculating the p-value as
$$P = \frac{\#\{F^* \geq F\}}{m} \qquad (5)$$
According to the results of this permutation analysis, if the null hypothesis of equality of the $k$ treatments is rejected, the focus of the analysis should be to investigate the differences between the $k$ groups. Thus, contrasts are analyzed using reduced datasets containing only the observations belonging to the groups that are to be compared, and doing the same calculations described before to obtain the suitable pseudo-F tests and p-values.
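As a concrete illustration of equations (2)–(5), here is a minimal Python sketch (function names are ours, Euclidean distance is assumed; Anderson's own Fortran routine, mentioned in Sect. 3.1, is the reference implementation) that computes the pseudo F-ratio from a distance matrix and its permutation p-value by shuffling group labels:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def pseudo_f(dist, groups):
    """Anderson's pseudo F-ratio from a distance matrix: SS_T sums squared
    distances over all pairs i < s; SS_W only over within-group pairs,
    each group's contribution divided by its group size (eqs. 2-4)."""
    n = len(groups)
    k = len(set(groups.tolist()))
    ss_t = (dist ** 2)[np.triu_indices(n, 1)].sum() / n
    ss_w = 0.0
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        sub = dist[np.ix_(idx, idx)]
        ss_w += (sub ** 2)[np.triu_indices(len(idx), 1)].sum() / len(idx)
    ss_b = ss_t - ss_w
    return (ss_b / (k - 1)) / (ss_w / (n - k))

def permanova_p(data, groups, m=9999, seed=0):
    """Permutation p-value (eq. 5): the proportion of label permutations
    whose pseudo-F is at least the observed one."""
    rng = np.random.default_rng(seed)
    dist = squareform(pdist(data))          # Euclidean distances
    groups = np.asarray(groups)
    f_obs = pseudo_f(dist, groups)
    f_perm = np.array([pseudo_f(dist, rng.permutation(groups))
                       for _ in range(m)])
    return (f_perm >= f_obs).sum() / m
```

Note that permuting the group labels is equivalent to permuting the rows of the original dataset, since the distance matrix itself is unchanged.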
2.2
Method of Xu: The Use of Medians
This method, originally proposed by the authors for microarray investigation data, makes use of the medians in place of the mean for the computation of the sum of squares.
First, robust versions of the within-group error matrix and between-group error matrix are defined as follows:
$$\tilde{W} = \sum_j n_j\, \mathrm{median}_\alpha \left\{ \left( y_\alpha^{(j)} - \tilde{\mu}^{(j)} \right) \left( y_\alpha^{(j)} - \tilde{\mu}^{(j)} \right)' \right\} \qquad (6)$$
$$\tilde{B} = \sum_j n_j \left( \tilde{\mu}^{(j)} - \tilde{\mu} \right) \left( \tilde{\mu}^{(j)} - \tilde{\mu} \right)' \qquad (7)$$
where
$$\tilde{\mu}^{(j)} = \mathrm{median} \left\{ y_\alpha^{(j)},\ \alpha = 1, \ldots, n_j \right\} \qquad (8)$$
$$\tilde{\mu} = \mathrm{median} \left\{ y_\alpha^{(j)},\ \alpha = 1, \ldots, n_j,\ j = 1, \ldots, k \right\} \qquad (9)$$
Here $\mathrm{median}_\alpha$ means that the median is calculated element-wise on the matrix inside the curly brackets. For a more detailed notation refer to the original paper by Xu et al. [4]. It has to be noted that in general one cannot say that $\tilde{W} + \tilde{B} = \tilde{T} = \sum_j n_j\, \mathrm{median}_\alpha \{ ( y_\alpha^{(j)} - \tilde{\mu} ) ( y_\alpha^{(j)} - \tilde{\mu} )' \}$, because of the properties of the median operator. To test the hypothesis previously stated, the following pseudo-F statistic has to be computed:
$$G = \frac{\mathrm{tr}(\tilde{B})}{\mathrm{tr}(\tilde{W})} \qquad (10)$$
To compute the p-value the authors suggest using the method of random permutations. Just like in the method of Anderson, the p-value is given by
$$P = \frac{\#\{G^* \geq G\}}{m} \qquad (11)$$
where $G^*$ is the value of the test statistic for each permutation and $m$ is the number of permutations. Also for this method it is possible to analyze single contrasts using pseudo-F tests computed on reduced datasets.
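A compact sketch of the statistic in equations (6)–(11): because only the traces enter $G$, the $p \times p$ matrices never need to be formed explicitly; the diagonal of $\mathrm{median}_\alpha\{(y_\alpha^{(j)} - \tilde{\mu}^{(j)})(y_\alpha^{(j)} - \tilde{\mu}^{(j)})'\}$ is just the element-wise median of the squared deviations. The permutation p-value can then be obtained exactly as in the Anderson sketch above (the function name is ours; the authors' Matlab code and its R port are the reference implementations).

```python
import numpy as np

def xu_g(data, groups):
    """Median-based trace-ratio statistic G of Xu et al. (eqs. 6-11) for a
    samples x genes data matrix."""
    groups = np.asarray(groups)
    mu = np.median(data, axis=0)              # overall median, eq. (9)
    tr_w = tr_b = 0.0
    for g in np.unique(groups):
        sub = data[groups == g]
        mu_j = np.median(sub, axis=0)         # group median, eq. (8)
        dev2 = (sub - mu_j) ** 2
        # trace of median_alpha{(y - mu_j)(y - mu_j)'} equals the sum of the
        # element-wise medians of the squared deviations, eq. (6)
        tr_w += len(sub) * np.median(dev2, axis=0).sum()
        tr_b += len(sub) * ((mu_j - mu) ** 2).sum()   # eq. (7)
    return tr_b / tr_w
```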
2.3
Steel Test: A Non-parametric Procedure for Simultaneous Univariate Contrasts
Once the analysis of contrasts has identified the relevant differences among the groups, one is interested in exploring for which variables these differences actually occur. To this end, Steel [5,6] developed a non-parametric univariate technique which can be considered an analogue of Tukey's procedure for multiple comparisons in the classical ANOVA setting. The procedure proposed by the author was originally introduced for balanced situations.
Given a set of $k$ continuous random variables $X_i$ ($i = 1, \ldots, k$) measuring a single characteristic for $n$ subjects in each of the $k$ treatment groups, let us define their cumulative distribution functions as $F_i$. Then, to compare treatment groups one has to test the null hypothesis
$$H_0: F_1 = \cdots = F_k \qquad (12)$$
against the alternative
$$H_1: F_i \neq F_j \text{ for at least one } i,j \text{ pair} \qquad (13)$$
To test $H_0$, jointly rank the $X_i$'s and $X_j$'s ($i, j = 1, \ldots, k$, $i \neq j$), assigning rank 1 to the least observation and $2n$ to the greatest in each ranking. Then the rank sums $T_{ij}$ are obtained by summing the ranks of every $i,j$ pair previously calculated, and a conjugate value $T'_{ij}$ is computed as $T'_{ij} = (2n + 1)n - T_{ij}$. The smaller of $T_{ij}$ and $T'_{ij}$ is used for carrying out the test. Given the current values of $k$ and $n$ one can determine the tabulated value of $T_{ij}$ with which the actual values have to be compared to determine whether there is a difference between the groups. This outline clearly refers to the balanced situation, but it is possible, as Munzel et al. [7] show, to extend it to the unbalanced situation. In the real dataset used in the Results section, however, data are balanced across groups.
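For illustration, here is a minimal Python sketch of the rank-sum computation for a single contrast in the balanced case (the function name is ours; declaring significance still requires Steel's tabulated critical values, which are not reproduced here):

```python
import numpy as np
from scipy.stats import rankdata

def steel_statistic(control, treated):
    """Rank-sum statistic for one treatment-vs-control contrast of Steel's
    procedure in the balanced case (both groups of size n)."""
    joint = np.concatenate([treated, control])
    ranks = rankdata(joint)                 # joint ranking: ranks 1 .. 2n
    n = len(treated)
    t = ranks[:n].sum()                     # rank sum T_ij of the treated group
    t_conj = (2 * n + 1) * n - t            # conjugate value T'_ij
    return min(t, t_conj)                   # compared with tabulated critical values
```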
3
Results and Discussion
Results are reported from an RT-PCR experiment planned and carried out at Nerviano Medical Sciences, a pharmaceutical company with a focus in oncology, which involved 30 mice, considered as independent experimental units, divided into 5 groups with different doses of treatment with a cell cycle inhibitor: vehicle (i.e. not treated), 2.5 mg/kg, 5 mg/kg, 10 mg/kg, 20 mg/kg. In this experiment the expression level of 58 genes was measured, and the primary objective was to determine whether there was a difference in the levels of gene expression between the control group and the other 4 treatment levels. 3.1
Cluster Analysis
To preliminarily explore patterns of gene expression in this experiment, we drew a heatmap and the dendrograms of hierarchical clustering on both genes and mice, using Euclidean distance and complete linkage. It can be seen from Fig. 1 that control mice seem to cluster separately from the other groups, whereas higher doses of treatment (10 mg/kg - 20 mg/kg) are associated with increased gene expression (more prevalence of lighter gray blocks) and lower doses of treatment are associated with decreased gene expression (more prevalence of darker gray blocks). Moreover, there seem to be no relevant differences within these two groups.
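As a sketch of this step, the following Python snippet computes the same complete-linkage, Euclidean-distance dendrograms; the variable expr is a hypothetical stand-in for the actual 58-gene x 30-mouse expression matrix, which is not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expr = rng.normal(size=(58, 30))  # placeholder for the 58-gene x 30-mouse matrix

# Complete linkage with Euclidean distance, on genes (rows) and mice (columns),
# as used for the row and column dendrograms of the heatmap in Fig. 1.
link_genes = linkage(expr, method='complete', metric='euclidean')
link_mice = linkage(expr.T, method='complete', metric='euclidean')

# Cutting the mouse dendrogram into a small number of groups allows a quick
# check of how the treatment doses co-cluster.
mouse_groups = fcluster(link_mice, t=3, criterion='maxclust')
```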
Fig. 1. Heatmap and dendrogram on mice (columns) and genes (rows): most subjects receiving 2.5 mg/kg or 5 mg/kg of treatment dose are on the right side of the map, while most subjects receiving 10 mg/kg or 20 mg/kg are on the left side of the map. These groups are separated by subjects belonging to the control group (i.e. not receiving any treatment).
To confirm these preliminary exploratory results, a particular clustering algorithm named Affinity Propagation [8], recently proposed by Frey and Dueck, was applied. Affinity Propagation was chosen because it offers the possibility to use different distance measures and has a principled way to choose the number of clusters to be considered. This algorithm searches for exemplars among the data points, starting from a user-defined preference value which indicates how likely the i-th point is to be an exemplar. Thus, depending on this preference value the algorithm will identify a different number of clusters: it is possible to choose the number of clusters n by using a plot which shows the value of n associated with each preference value.
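The following sketch shows how such a preference scan can be set up with the scikit-learn implementation of Affinity Propagation (an assumption on our part — the original Frey–Dueck code may have been used instead; a reasonably recent scikit-learn is assumed for the random_state argument). The two similarity constructions mirror the measures used in Figs. 2 and 3.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.cluster import AffinityPropagation

def cluster_counts(X, prefs, measure='neg_euclidean'):
    """For each preference value, run Affinity Propagation on a precomputed
    similarity matrix and record the resulting number of clusters,
    reproducing the plateau plots of Figs. 2 and 3. X: one row per mouse."""
    if measure == 'neg_euclidean':
        S = -squareform(pdist(X, metric='euclidean'))  # negative Euclidean distance
    else:
        S = np.corrcoef(X)                             # Pearson correlation
    counts = []
    for p in prefs:
        ap = AffinityPropagation(affinity='precomputed', preference=p,
                                 random_state=0).fit(S)
        counts.append((p, len(ap.cluster_centers_indices_)))
    return counts
```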
In figures 2 and 3 the number of clusters associated with different preference values is presented, using two different similarity measures: negative Euclidean distance and Pearson correlation.
Fig. 2. Affinity Propagation using negative Euclidean distance as similarity measure, where each plateau represents a different solution. The table shows the solution with three clusters: vehicle mice form an almost stand-alone cluster, whereas low(high)-dose experimental units tend to cluster separately from high(low) doses.
The choice of the solution with three clusters is due to the fact that, when using the Pearson correlation coefficient as similarity measure, there are many preference values associated with three clusters. This solution seems to work well also when using negative Euclidean distance, even if only few preference values are associated with it. For both measures, however, the results of the cluster analysis show that there seems to be a difference between vehicle mice and all other treatment groups. From both figures 2 and 3 it can be noted that the solution with 3 clusters is not the only one likely to be informative. In particular, for both the 4- and 5-cluster solutions and for both similarity measures described before, vehicle mice tend to cluster apart from all the others (results not shown). Also low-dose mice show clustering patterns similar to those previously described, whereas higher doses tend to separate into different clusters, and sometimes to mix with low-dose mice. These results indicate that we should expect a difference in gene expression for all the 4 levels of treatment with respect to the control group, and that there
Fig. 3. Affinity Propagation using Pearson correlation as similarity measure, where each plateau represents a different solution. Results are similar to those shown in figure 2.
should be no such difference within the high (10 vs 20 mg/kg) and low (2.5 vs 5 mg/kg) regimens. In order to compare the two methods proposed in Sections 2.1 and 2.2 and to evaluate agreement with these cluster analysis procedures, results from both analyses are described. With regard to the software, the method of Anderson made use of a Fortran routine available on the website of the author, while for the method of Xu an ad hoc set of R functions was written, starting from the Matlab code available from the authors.
3.2
Anderson Method
From the global F-ratio test in Tab. 1 it seems reasonable to suppose an effect of the factor treatment in determining a differential gene expression. Here the p-value was obtained after 100,000 permutations of the original dataset.

Table 1. ANOVA table - Anderson method
Source     df  SS          MS         F       p (perm)
Treatment   4  20275.7481  5068.9370  5.4807  0.0003
Residual   25  23121.6129   924.8645
Total      29  43397.361
In Tab. 2 the results of the contrast analysis are shown: such analysis was carried out to evaluate differences between the control group (i.e. the vehicle mice) and each of the 4 treatment groups. It is interesting to note that the t tests reported here are nothing but the square roots of an F-ratio test computed as previously described, using a dataset with only the observations from the two groups compared. Adjustment via the False Discovery Rate method was used as multiple comparisons were made, the Bonferroni method being considered too conservative for the exploratory purposes of the analysis.

Table 2. Contrast analysis - Anderson method
Contrast              t test  p (perm)  p (adjusted)a
2.5 mg/kg vs control  3.0866  0.0023    0.0046
5 mg/kg vs control    2.2696  0.0021    0.0046
10 mg/kg vs control   2.3822  0.0152    0.0152
20 mg/kg vs control   2.9347  0.0066    0.0088
a False Discovery Rate adjustment.
It can be noted that all contrasts are significant, which means that every treatment group shows patterns of differential gene expression when compared to the control one. Moreover, consistently with the heatmap, no statistically significant difference was found when comparing mice treated with 2.5 mg/kg and with 5 mg/kg, nor when comparing those treated with 10 mg/kg with those treated with 20 mg/kg. 3.3
Xu Method
Recalling that the total sum of squares cannot be derived as the sum of the treatment and residual sums of squares with Xu's method, Tab. 3 reports only the traces (sums of squares) of the within-group (residual) and between-group (treatment effect) matrices, with the corresponding value of the F test and the p-value, here computed with 10,000 permutations.

Table 3. ANOVA table - Xu method
Source     SS        F         p (perm)
Treatment  4388.906  1.737521  0.0085
Residual   2525.959
The difference in the choice of the number of random permutations is due to the computational burden of the method. In Fig. 4 the p-values resulting from different numbers of permutations (m) are shown; the p-value of the global pseudo-F test seems to become stable for values of m greater than 100, so the choice of 10,000 permutations is considered suitable.
Fig. 4. Global p-values with different numbers of permutations
In Tab. 4 the results of the contrast analysis made with the method of Xu are reported, with the corresponding values of the F test on the subgroups and the adjusted p-values. It can easily be seen that the two methods provide very similar results, both for the global F-ratio test and for the contrast analysis, even with a different number of permutations.

Table 4. Contrast analysis - Xu method
Treatment             F test  p (perm)  p (adjusted)a
2.5 mg/kg vs control  1.8328  0.0000    0.0000
5 mg/kg vs control    1.6195  0.0020    0.0040
10 mg/kg vs control   1.1365  0.0450    0.0450
20 mg/kg vs control   1.6459  0.0094    0.0125
a False Discovery Rate adjustment.
Specifically, from both Tab. 2 and 4, it can be seen that there is a difference in gene expression for all contrasts considered, i.e. each dose of treatment seems to be associated with a change in gene expression with respect to the control group. 3.4
Multiple Comparison Univariate Tests
To explore which genes out of the set of 58 could be differentially expressed, the non-parametric procedure of Steel, particularly suitable for simultaneous univariate tests, was used.
Table 5. Univariate contrasts - Steel test
Gene    1      2      3      4
gene1   0.324  0.420  1.000  0.958
gene2   0.081  0.172  0.740  0.174
gene3   0.034  0.635  1.000  0.421
gene4   0.419  1.000  0.958  0.420
gene5   0.997  0.986  0.324  0.035
gene6   0.833  0.997  0.081  0.033
gene7   0.997  0.635  0.240  0.121
gene8   0.323  0.986  0.081  0.173
gene9   0.053  0.241  0.419  1.000
gene10  0.081  0.034  0.241  0.324
gene11  0.833  1.000  0.958  0.034
gene12  0.033  0.033  0.034  0.080
gene13  0.081  0.120  0.907  0.997
gene14  0.034  0.033  0.323  0.033
gene15  0.833  0.420  0.240  0.080
gene16  1.000  0.322  0.080  0.034
gene17  0.120  0.120  1.000  1.000
gene18  0.033  0.053  1.000  1.000
gene19  0.997  0.241  0.034  0.034
gene20  0.986  0.986  1.000  0.636
gene21  0.421  0.241  1.000  0.526
gene22  0.081  0.033  0.080  0.173
gene23  0.741  0.240  0.741  0.907
gene24  0.034  0.033  0.033  0.034
gene25  0.173  0.034  0.082  0.053
gene26  1.000  0.986  0.740  0.741
gene27  0.081  0.034  0.033  0.033
gene28  0.526  0.241  0.324  0.421
gene29  0.033  0.033  0.033  0.120
gene30  0.121  0.173  1.000  0.957
gene31  0.033  0.033  0.033  0.033
gene32  0.907  0.324  0.833  0.324
gene33  0.957  0.120  0.033  0.053
gene34  0.323  0.121  0.986  0.907
gene35  0.526  0.997  0.741  0.323
gene36  0.997  0.907  1.000  1.000
gene37  0.525  0.907  1.000  0.526
gene38  0.635  0.635  0.173  0.986
gene39  0.740  1.000  1.000  0.986
gene40  0.420  0.986  0.907  0.907
gene41  0.052  0.324  0.636  0.636
gene42  0.324  0.958  0.741  0.526
gene43  0.241  0.833  1.000  0.741
gene44  0.053  0.241  0.324  0.907
gene45  0.997  0.635  0.081  0.052
gene46  0.833  0.420  1.000  0.997
gene47  0.740  0.907  0.986  0.833
gene48  0.081  0.033  0.033  0.081
gene49  0.420  0.986  0.636  0.240
gene50  0.120  0.324  0.997  0.740
gene51  0.241  0.986  1.000  1.000
gene52  0.526  0.636  0.997  0.833
gene53  0.833  1.000  0.907  0.526
gene54  0.421  0.741  0.833  0.526
gene55  0.120  0.033  0.986  0.907
gene56  0.526  0.635  1.000  0.986
gene57  1.000  0.526  0.173  0.173
gene58  0.034  0.034  0.035  0.033
Contrasts — 1: 2.5 mg/kg vs control; 2: 5 mg/kg vs control; 3: 10 mg/kg vs control; 4: 20 mg/kg vs control.
In Tab. 5 all the 2-sided p-values associated with the contrasts shown in Tab. 2 and in Tab. 4 are reported. 17 out of the 58 genes involved in this experiment were found to show a possible pattern of differential expression for at least one of the four pre-specified contrasts. It can be noted that the p-values seem to be discretized, which is due to the fact that the Steel test acts on the ranks of the variables to determine whether two groups are different or not, so that the test statistic can assume only a discrete number of values. The apparent lower boundary of the p-values is linked, besides the characteristics of the test itself, to the sample size of each group. In order to evaluate this boundary, a small simulation study was performed. As the Steel test works with univariate multiple comparisons, data were simulated from 5 Normal distributions (our five groups of treatment) with various values of μ and σ, and multiple comparisons were carried out between the first group (the control one) and the others (treatment groups). In all the settings considered, and for different sample sizes, data were simulated 100 times to evaluate the impact
Table 6. Simulation features (data simulated 100 times in each setting)

Setting 1 (n = 6, 10, 15):
Group  μ      σ
1      0.888  0.080
2      0.515  0.140
3      0.424  0.049
4      0.526  0.087
5      0.497  0.094

Setting 2 (n = 5, 10, 20, 40, 60, 100):
Group  μ      σ
1      0.888  0.800
2      0.515  0.950
3      0.424  0.860
4      0.526  0.670
5      0.497  0.730

Setting 3 (n = 6, 10, 25, 40, 60, 100):
Group  μ      σ
1      3.794  1.156
2      3.047  0.481
3      3.730  0.415
4      4.077  0.631
5      4.967  0.684
of group size on the p-values' lower boundary. Means, standard deviations and sample sizes for each setting are summarized in Table 6. It has to be noted that the values of μ and σ of setting 1 are the experimental means and standard deviations of gene 24 in Table 5: these were chosen because they provided a situation where the means were quite different and σ was very low, in contrast with the second setting, where the means are equal to the previous ones but the values of σ are much higher. The values of μ and σ for the third setting are the experimental values for gene 4, characterized by partially overlapping means, a situation considerably different from those previously described. The barplot in figure 5 shows simulation results for contrast 1 in the first setting. Only small sample sizes could be used because higher sizes led to p-values which were mostly equal to zero, whereas in the other settings it was possible to increase the size up to 100 (see Table 6). Nevertheless, the discretization previously described seems to become less relevant with increasing sample size. In the other settings, instead, the increase of sample size mostly led to much more distinct p-values, and so to the disappearance of the lower boundary previously described. In some cases, however, larger sample sizes were associated with a higher degree of discretization, mainly because p-values tended to zero much more frequently, thus making it difficult to actually distinguish between them, even when using a barplot. An example of this can be seen in figure 6, which refers to contrast 1 in the second setting: as can be noted, a sample size of 100 seems to be associated with a higher frequency of p-values equal to zero compared to a size of 60. Results for other settings and contrasts are similar and are not reported here.
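The following sketch is a hypothetical re-creation of this simulation design, reusing the steel_statistic function sketched in Sect. 2.3 (all names are ours; the original analysis converted the statistics to p-values via Steel's tables, which we omit here):

```python
import numpy as np

def simulate_contrast_stats(mu, sigma, n, reps=100, seed=0):
    """Draw the five normal groups of one setting of Table 6, compute the
    Steel statistic for each of the four control-vs-treatment contrasts,
    and repeat; the few distinct values the statistic can take at small n
    explain the discretized p-values."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(reps):
        groups = [rng.normal(m, s, n) for m, s in zip(mu, sigma)]
        control = groups[0]
        out.append([steel_statistic(control, g) for g in groups[1:]])
    return np.array(out)                    # shape (reps, 4)

# Setting 1 of Table 6 with n = 6:
stats = simulate_contrast_stats(mu=[0.888, 0.515, 0.424, 0.526, 0.497],
                                sigma=[0.080, 0.140, 0.049, 0.087, 0.094],
                                n=6)
```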
Fig. 5. Barplots of p-values in setting 1 (see table 6), with n = 6 (upper-left panel), 10 (upper-right panel) and 15 (lower panel)
Fig. 6. Barplots of p-values for contrast 1 in setting 2 (see table 6), with n = 60 (left panel) and 100 (right panel)
Even if the authors report that such tests are likely to have better small-sample properties than other non-parametric procedures [7], they do not
report how small the sample sizes can be for these properties to hold. It thus seems clear that larger and more detailed simulation studies are needed to better understand the particular features of this test, especially when used with small sample sizes.
4
Conclusions and Future Work
Beyond agreeing with well-known exploratory techniques (the clustering procedures shown in Section 3.1), these results are very useful for exploratory purposes; as a matter of fact, it is possible to draw out new biological hypotheses to test in appropriately designed specific studies. One main drawback of both methods is that they consider the p dependent variables as independent of one another: such an assumption is likely to be violated in many real situations, because it is reasonable to think of the genes as possibly correlated variables. On the other hand, these methods have the advantage of being easy to interpret, as the results are the same as those of a classic MANOVA approach: the goal is evaluating whether a set of variables shows different patterns depending on the levels of a specific factor. It is possible to extend these methods to account for doubly multivariate studies, i.e. situations where on the same subjects a set of covariates is measured repeatedly over time/space. In this case, the problem lies mainly in handling the pattern of correlation between repeated measures. The topic of future research will be an evaluation of the performance of these methods when compared to other approaches used in the analysis of RT-PCR experiments. Issues concerning these approaches (e.g. dimensionality of the data) and the particular setting involved (e.g. correlation between variables) will be addressed via simulation studies.
References
1. Wong, M.L., Medrano, J.F.: Real-Time PCR for mRNA Quantitation. Biotechniques 39, 75–85 (2005)
2. Anderson, M.J.: A New Method for Non-Parametric Multivariate Analysis of Variance. Austral. Ecol. 26, 32–46 (2001)
3. McArdle, B.H., Anderson, M.J.: Fitting Multivariate Models to Community Data: a Comment on Distance-Based Redundancy Analysis. Ecology 82, 290–297 (2001)
4. Xu, J., Cui, X.: Robustified MANOVA with Applications in Detecting Differentially Expressed Genes from Oligonucleotide Arrays. Bioinformatics 24, 1056–1062 (2008)
5. Steel, R.G.D.: A Multiple Comparison Rank Sum Test: Treatments vs Control. Biometrics 15, 560–572 (1959)
6. Steel, R.G.D.: A Rank Sum Test for Comparing All Pairs of Treatments. Technometrics 2, 197–207 (1960)
7. Munzel, U., Hothorn, L.A.: A Unified Approach to Simultaneous Rank Test Procedures in the Unbalanced One-Way Layout. Biom. J. 43, 553–569 (2001)
8. Frey, B.J., Dueck, D.: Clustering by Passing Messages between Data Points. Science 315, 972–976 (2007)
In Silico Screening for Pathogenesis Related-2 Gene Candidates in Vigna Unguiculata Transcriptome

Ana Carolina Wanderley-Nogueira1, Nina da Mota Soares-Cavalcanti1, Luis Carlos Belarmino1, Adriano Barbosa-Silva1,2, Ederson Akio Kido1, Semiramis Jamil Hadad do Monte3, Valesca Pandolfi1,3, Tercilio Calsa-Junior1, and Ana Maria Benko-Iseppon1

1 Universidade Federal de Pernambuco, Center of Biological Sciences, Department of Genetics, Laboratory of Plant Genetics and Biotechnology, R. Prof. Moraes Rêgo s/no., Recife, PE, Brazil
[email protected]
2 Max-Delbrueck Center for Molecular Medicine, Computational Biology and Data Mining Group, Robert-Roessle-Str. 10, 13125 Berlin, Germany
3 Universidade Federal do Piaui, Laboratory of Immunogenetics and Molecular Biology, Campus Petrônio Portela bloco 16, Teresina, PI, Brazil
Abstract. Plants evolved diverse mechanisms to struggle against pathogen attack, for example the activity of Pathogenesis-Related (PR) genes. Within this category, PR-2 encodes a Beta-glucanase able to degrade the polysaccharides present in the pathogen cell wall. The aim of this work was to screen the NordEST database to identify PR-2 members in the cowpea transcriptome and to analyze the structure of the identified sequences as compared with data from public databases. After a search for PR-2 sequences in NordEST, CLUSTALx and MEGA4 were used to align PR-2 orthologs and generate a dendrogram. The CLUSTER program revealed the expression pattern through differential display. A new tool was developed aiming to identify plant PR-2 proteins based on HMMER analysis. Among the results, a complete candidate from cowpea could be identified. Higher expression included all libraries submitted to biotic stress (cowpea severe mosaic virus, CPSMV), as well as wounded and salinity-stressed tissues, confirming PR expression under different kinds of stress. Dendrogram analysis showed two main clades, the outgroup and Magnoliopsida, where monocot and dicot organisms were positioned as sister groups. The developed HMM model could identify PR-2 also in other important plant species, allowing the development of a bioinformatics routine that may help the identification not only of pathogenesis-related genes but of any other gene classes that present similar conserved domains and motifs. Keywords: Bioinformatics, pathogenesis-related, Beta-glucanases, biotic stress, cowpea.
1 Introduction
In response to persistent challenge by a broad spectrum of microorganisms, plants have evolved diverse mechanisms that enable them not only to resist drought and wounding but also to oppose attacks by pathogenic organisms and prevent infection [1,2]. The first mechanism is the hypersensitive response (HR), which is immediate, starting with signal recognition from a pathogen elicitor by a host Resistance (R) gene and leading to rapid cell death [3]; it acts as a lock-and-key system in which HR occurs only if the host presents the correct resistance gene that recognizes the pathogen Avirulence (Avr) gene [4]. The second strategy is the systemic activation of genes encoding mitogen-activated protein kinases (MAPKs) and pathogenesis-related (PR) proteins, which are directly or indirectly inhibitory towards pathogens and have been associated with the phenomenon of systemic acquired resistance (SAR) [5].

The PR proteins include seventeen gene families based on their serological properties and on sequence data. They generally have two subclasses: an acidic subclass, secreted to the cellular space, and a vacuolar basic subclass [6]. Direct antimicrobial activities for members of PR protein families have been demonstrated in vitro through hydrolytic activities on cell walls and contact toxicity, whereas indirect activities perhaps bypass an involvement in defense signaling [7]. There are at least ten PR families whose members have direct activities against fungal pathogens, such as Beta-glucanases. This enzyme, the product of PR-2 gene activity, is able to degrade the polysaccharides present in the pathogen cell wall, especially in fungi and oomycetes [2], preventing the colonization of the host by these organisms [8]. Studies suggest that PR-2 proteins play a protective role through two distinct mechanisms. First, the enzyme can impair microbial growth and proliferation directly by hydrolyzing the Beta-1,3/1,6-glucan of the cell walls, rendering the cells susceptible to lysis and possibly to other plant defense responses. Second, an indirect defensive role is suggested by the observation that specific Beta-1,3/1,6-glucan oligosaccharides, released from the pathogen walls by the action of glucanases, can induce a wide range of plant defense responses and hence the SAR [9].

In herbaceous plants, PR-2 activation is strongly influenced by the accumulation of salicylic acid (SA) in the tissues [10]. In other plants, high levels of ethylene and methyl jasmonate can act as markers for the PR-2 response [11]. Genes related to this pathway are highly conserved within the plant kingdom with respect to size, amino acid composition and isoelectric point [12], whilst some components of the system show similarity to proteins involved in innate immunity in the animal kingdom [13]. PR-2 genes are also constitutively present in some plant organs or tissues, including roots, leaves and floral tissues [14]. There is no previous evaluation regarding these metabolic pathways in V. unguiculata, which presents great economic importance in semi-arid regions throughout the world. The present work aimed to perform a data mining-based identification of PR-2 genes in the NordEST database, comparing them with sequences deposited in public databases and literature data.
2 Methods
For the identification of PR-2 gene candidates, a tBLASTn alignment was carried out against the NordEST database, constructed using 18,984 transcripts isolated from V. unguiculata. An Arabidopsis thaliana sequence (AT3G57260.1) was used as seed sequence. After this search, PR-2 matching sequences (cut-off e-5) were used to screen for homology in GenBank (NCBI) using the BLASTx tool [15]. Cowpea clusters were translated at Expasy [16] and screened for conserved motifs with the aid of the RPS-BLAST CD-search tool [15]. Multiple alignments with CLUSTALx (available at http://www.nordest.ufpe.br/~lgbv/PR-2_Alignment) allowed the structural analysis of conserved and diverging sites, as well as the elimination of non-aligned terminal segments. CLUSTALx alignments were submitted to the program MEGA (Molecular Evolutionary Genetic Analysis V.4) [17] to create a dendrogram using the maximum parsimony method and the bootstrap function (1,000 replicates). Conserved motif evaluation was carried out using *.aln files (from CLUSTALx) of eight PR-2 candidates from eight different species as input to the HMMER (Hidden Markov Models) program, which allowed the search for typical PR-2 patterns in 16 selected cowpea sequences.

To establish an overall picture of the PR-2 gene distribution pattern in cowpea, we carried out a direct correlation of the read frequency of each protein sequence in various NordEST cDNA libraries (available at http://www.nordest.ufpe.br/~lgbv/PR-2_Candidates_Reads). Afterwards a hierarchical clustering approach was applied using normalized data, and a graphic representation was constructed with the aid of the CLUSTER program. Dendrograms including both axes (using the weighted pair-group for each cluster and library) were generated by the TreeView program [18]. On the graphics, light gray means no expression and black all degrees of expression (see Fig. 5). The analysis of protein migratory behavior, using only the best matches of each seed sequence, was generated with the JvirGel program [19] using an analytical mode for serial calculation of MW (molecular weight), pI (isoelectric point), pH-dependent charge curves and hydrophobicity probes, generating a virtual 2D gel as a Java applet.

The analysis of SuperSAGE data included the evaluation of 298,060 tags (26 bp) distributed in four libraries submitted to injury or mosaic virus infection against a negative control (available at http://www.nordest.ufpe.br/~lgbv/Sage_Tags). The transcripts were screened for homology using as seed sequence a full-length cDNA corresponding to the A. thaliana PR-2 gene. A local BLASTn against the SuperSAGE tags database (cut-off e-10) was performed using the Bioedit program. The obtained tags were classified as up- or down-regulated by comparing the control with the infected and injured libraries; for this purpose the Discovery Space program was used. Moreover, a web-tool has been developed in order to identify amino acid sequences similar to the PR-2 protein. For this purpose we used a set of sequences highly similar to PR-2 from A. thaliana, extracted from Entrez Protein.
The tool has been developed as an HTML platform which accepts a single FASTA-formatted sequence (or a list of them) and some alignment parameters such as e-value, score, coverage and dust filters. Behind the platform, the software BLAST [15] is used to perform the batch pairwise alignments, and PHP (available at http://www.php.net) is used to parse the alignment results in order to generate the tool output. In parallel to the alignment, for each query sequence that matched PR-2 sequences over the desired thresholds, an HMMPFAM search using the software HMMER [20] is conducted in order to screen for the relevant biological motifs present in PR-2 proteins, like the glyco-hydro domain. Finally, a distance tree based on protein similarities is generated for the local PR-2 proteins and the user's selected sequences. For this purpose we have used the PHYLIP package (http://evolution.gs.washington.edu/phylip.html) and the Neighbor-Joining method. The developed web-tool can be accessed at the supplementary material website in the corresponding link (available at http://www.nordest.ufpe.br/~lgbv/HAMMER_consensus). This bioinformatic routine may help the identification not only of pathogenesis-related genes but of any other gene classes that present similar conserved domains and motifs, as shown in Figure 1.
Fig. 1. Pipeline to identify PR genes. Black boxes indicate data from automatic annotation. Gray boxes indicate manual annotation steps. Cylinders represent the databases used.
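As a rough illustration of the screening steps described above, the following Python sketch chains the two command-line searches. It assumes modern BLAST+ and HMMER3 tools (the paper used earlier program versions), and the file names (seed.fasta, nordest, pr2.hmm, candidates.fasta) are hypothetical placeholders, not names from the original pipeline.

```python
import subprocess

def screen_pr2(seed="seed.fasta", db="nordest", hmm="pr2.hmm"):
    # Step 1: tBLASTn of the seed protein against the transcript database,
    # tabular output, e-value cut-off as in the paper (e-5).
    subprocess.run(["tblastn", "-query", seed, "-db", db,
                    "-evalue", "1e-5", "-outfmt", "6",
                    "-out", "hits.tsv"], check=True)
    # Step 2: scan the translated candidates against a PR-2 profile HMM
    # (built beforehand with `hmmbuild` from the CLUSTALx alignment)
    # to check for the glyco-hydro motifs.
    subprocess.run(["hmmsearch", "--tblout", "motifs.tbl",
                    hmm, "candidates.fasta"], check=True)
```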
3 Results
After the tBLASTn search, five PR-2 candidates were identified in the NordEST database, including three clusters presenting the desired glyco-hydro motif as a conserved
Table 1. V. unguiculata PR-2 candidates, showing the sequence size in nucleotides and amino acids, the conserved start and end sites, and the best alignment in the NCBI database, including the score and e-value. The accession numbers gb.AAY96764.1, gb.AAY25165.1, emb.CAO61165.1, emb.CAO71593.1 and gb.EAU72980.1 correspond to sequences from Phaseolus vulgaris, Ziziphus jujuba, Vitis vinifera, Leishmania infantum and Synechococcus sp., respectively. Abbreviations: CD, Glyco-hydro Conserved Domain.

Vigna sequence       Size (nt/aa)  CD   Start/End  Best match      Score/E-value
Contig2370           804/140       yes  2/121      gb.AAY96764.1   255/8.00E-67
Contig2265           833/198       yes  1/164      gb.AAY25165.1   286/6.00E-76
Contig277            1244/242      yes  2/107      emb.CAO61165.1  345/1.00E-93
VUPIST02011D02.b00   867/103       no   -          emb.CAO71593.1  43/0.005
VUABTT00001D02.b00   552/78        no   -          gb.EAU72980.1   33/4.6
domain (Tab. 1) and two singlets (sequences available at http://www.nordest.ufpe.br/~lgbv/vigna_nucleotide_sequences). The best alignments using the identified clusters occurred with chinese jujube (Ziziphus jujuba), grape (Vitis vinifera) and common bean (Phaseolus vulgaris). The three clusters presented the desired domain incomplete at the amino-terminal, although only Contig277 (Vigna 1) presented the complete desired domain. The HMMER search of glyco-hydro domains generated a pattern of conserved motifs characteristic of these proteins when applied to V. unguiculata PR-2 sequences. Among the 16 NCBI cowpea sequences, three presented all 12 motifs that determine Beta-glucanase activity according to the HMMER consensus. Among the other sequences, 23.07% presented nine of the 12 conserved sites, 53.84% presented eight, 23.07% presented six and only 0.07% presented five sites (Fig. 2).

As the aim was to reconstruct the evolutionary history of the PR-2 gene family considering the recently sequenced V. unguiculata, the most ancestral organisms (Pinus pinaster and Physcomitrella patens) were selected as outgroup (Fig. 3). The topology showed two main clades, as expected: (A) the outgroup and (B) the Magnoliophyta group, which appeared as a monophyletic clade. Moreover, within the latter, the monocot and dicot organisms were identified as sister groups (Fig. 3B). Considering the Magnoliopsida subclade, the organisms were grouped according to their family, but the Rosid subclass behaved as a paraphyletic group, sharing characteristics with the Asterid.

The virtual electrophoresis evaluation of PR-2 proteins from the 11 analyzed species presented isoelectric points from 3.0 to 9.45. Considering the molecular mass, values varied from 22.16 to 41.15 MW (Fig. 4). Closely related species did not present similar pIs, except in the case of corn and sugarcane. Transcripts obtained in the NordEST database were used to perform a hierarchical clustering analysis of ESTs, permitting an evaluation of expression intensity considering co-expression in different libraries (black upper dendrogram) (Fig. 5).
Fig. 2. Twelve conserved motifs characteristic of PR-2 protein in 17 clusters from V. unguiculata. The first line shows the conserved motifs generated by the HMMER program using PR-2 proteins from eight different organisms. In light gray, it is possible to observe which motifs appeared in cowpea PR-2 candidates.
Fig. 3. Dendrogram generated after maximum parsimony analysis, showing relationships among the PR-2 seed sequence of A. thaliana and orthologs of V. unguiculata and other organisms with PR-2 proteins bearing the desired domains. The dotted line delimits the main taxonomic units and the letters on the right of the dendrogram refer to the grouping. The circle at the root of clade B shows the divergence point between monocot and dicot organisms. Decimal numbers under branch lines indicate distance values. The numbers in parentheses to the left of the branch nodes correspond to the bootstrap values.
Fig. 4. Graphic representation of PR-2 virtual 2D electrophoresis gel
The three V. unguiculata contigs were formed by 95 reads, with expression in seven of the nine NordEST libraries. Higher expression (21%) occurred in the library SS02 (roots of the genotype sensitive to salinity after 2 hours of stress). Furthermore, IM90 (leaves collected 90 min after virus inoculation) and CT00 (negative control) presented no expression. Regarding the SuperSAGE analysis, 31 tags were obtained with the local BLASTn in our database (Tab. 2); among them, eight (25.8%) were up-regulated in both the injured (BRC2) and infected (BRM) libraries when compared with the control. No PR-2 tag was down-regulated and 23 presented no differential regulation.
Fig. 5. PR-2 expression profile. Black indicates higher expression, gray lower expression, and light gray absence of expression in the corresponding tissue and cluster. Abbreviations: CT00 (control); BM90 (leaves of BR14-Mulato genotype); IM90 (leaves of IT85F genotype collected 90 minutes after mosaic virus infection); SS00 (roots of genotype sensitive to salinity without salt stress); SS02 (roots of genotype sensitive to salinity after 2 hours of stress); SS08 (roots of genotype sensitive to salinity after 8 hours of stress); ST00 (roots of genotype tolerant to salinity without salt stress); ST02 (roots of genotype tolerant to salinity after 2 hours of stress); ST08 (roots of genotype tolerant to salinity after 8 hours of stress).
Table 2. V. unguiculata SuperSAGE tags in two stress conditions and information about their expression when compared to the control

SuperSAGE Tag  Tag Sequence                Injured       Infected
VM753          CATGGTGGTGGGTTCAAGAAGTGGAA  Up regulated  Up regulated
VM792          CATGGCTTGCAGCTCATCCTCACTGT  Up regulated  Up regulated
VM882          CATGCACATTCACCTCATTTCAATGG  Up regulated  Up regulated
VM10042        CATGAAATTCTCGGTGATCCTTTTTC  Up regulated  Up regulated
VM11167        CATGGCATAGATATGTTGATGATTCG  Up regulated  Up regulated
VM13390        CATGCAGAGTATCAAATTGTTCACCT  Up regulated  Up regulated
VM13919        CATGCTTGTTGTAGTAAATTCAAATT  Up regulated  Up regulated
VM16412        CATGAAACAGTAAGGAATAATTAAGG  Up regulated  Up regulated
VM16415        CATGAAACAGTCCCTAAATAATAGAT  -             -
VM17065        CATGAATGGATGAGAATAATTAATGC  -             -
VM17386        CATGACCAAGAAGGAAGCTACCCAGG  -             -
VM19065        CATGCACATTCACCTCATTTCAATGC  -             -
VM20627        CATGCTTTAATTTCAACTATGGCATC  -             -
VM20828        CATGGAACTGGATCTGGATGAAAGAC  -             -
VM20889        CATGGAAGTTAATTTGAACTACTCTG  -             -
VM24293        CATGTCAGCTCGGTTAAATACGCTTA  -             -
VM24766        CATGTGAAGAATGAAATATTTGTGCT  -             -
VM24767        CATGTGAAGAATGACTCAAAGAAATA  -             -
VM25644        CATGTTAACAATTTGTAATGAATCAG  -             -
VM3048         CATGATTTTGGGAACTTGTTGTATTA  -             -
VM3157         CATGTGAAGAATGACTCAAAGAATAA  -             -
VM3748         CATGCGAGCAAGGGAGAAGTTGTAGG  -             -
VM5358         CATGCTTTAATTTCAACTATGGGTCG  -             -
VM6343         CATGTGTACATAAACAACAAAACATT  -             -
VM6436         CATGTTTCTTGATTTTTGGTGGGATT  -             -
VM8048         CATGTAACTTTTAACAATTTGATATT  -             -
VM8117         CATGTATATATTGAAATTAACTTTAC  -             -
VM8194         CATGTCCTTCATTTCCAACGTCCCTG  -             -
VM8909         CATGCCAAGGTTATAAATGTTGTTGT  -             -
VVM9116        CATGGAGAGTAGGAAGGTTCAGGATG  -             -
VM9550         CATGTATTCTGTATTTTTCTATGATA  -             -

4 Discussion
Considering the number of sequences available hitherto in the NordEST project, the number of PR-2 candidate sequences was higher than expected, revealing five clusters, formed by 95 reads, that aligned with this gene. Regarding the search for conserved motifs essential to protein activity, it is interesting to note that some sequences presented sites that did not match the consensus generated by the HMMER program, although most of the differences occurred with chemically similar amino acids like methionine, leucine, valine or isoleucine, probably not affecting the protein activity; as methionine is a hydrophobic amino acid, and can nearly be classed with the other aliphatic amino acids, it prefers substitution with other hydrophobic amino acids (like leucine, valine
and isoleucine) [21]. Using profile hidden Markov models (profile HMMs) may be a valuable tool to identify not only PR proteins but any other family built from a seed alignment and an automatically generated full alignment which contains all detectable protein sequences [22].

In the generated dendrogram it was possible to identify that symplesiomorphic characteristics united all Magnoliopsida organisms, as expected, since these plants evolved from a common ancestor. In the Magnoliopsida group, the evolutionary model of PR-2 seemed to follow a synapomorphic pattern, leading to its presence in different families and subclasses. Moreover, it was possible to perceive that the different Magnoliopsida families were grouped based on unique autapomorphies. In the monocot grouping all members were annual cereal grains of the Poaceae family. However, while maize and sugarcane were grouped in a terminal clade, rice was grouped separately. This may be justified by the fact that Zea and Saccharum belong to the Panicoideae subfamily, while Oryza belongs to the Ehrhartoideae subfamily. The studied organisms present different centers of origin, habitats and life cycles, as well as tolerance, resistance and sensitivity to diverse kinds of biotic and abiotic stresses. Despite that, in a general view, it is evident that the PR-2 pathway, which is activated under any type of stressor, was a characteristic present in a common ancestor, being relatively conserved in the different groups during evolution.

Considering our evaluation using virtual bidimensional electrophoresis migration for PR-2 sequences, one group could be clearly identified, with only two sequences (corn and sugarcane) deviating, especially with respect to their molecular weight, despite the similar pIs, also confirming their similarity revealed in the dendrogram. The second group included most of the 10 remaining species. Since the functional domains presented a high degree of conservation, the divergent pattern of migration probably reflected divergences of the extra-domain regions that are responsible for the acidic and basic character of the proteins. Such extra-domain variations are also responsible for the diversity of the sequences and probably for differences in their overall structure and transcription control.

The pattern of expression showed that PR-2 transcripts appeared in seven of the nine available libraries from the NordEST project, being excluded from the negative control (CT00, no stress) and IM90 (leaves collected from the IT85F genotype 90 minutes after virus inoculation). The absence in the non-stressed plants was expected, and the absence in the genotype IT85F (which is susceptible to this virus infection) suggests that in the resistant genotype (BR14-Mulato) this category of gene is probably activated earlier after injury or virus infection. A similar pattern was confirmed by our SuperSAGE data, with high expression after salinity stress as compared with the negative control. The activation of PR genes after different categories of abiotic stress (wounding and salinity) confirmed the theory that these genes can be activated and expressed systemically in response to any kind of stress, biotic or abiotic [10, 12].
The identified sequences represent valuable resources for the development of markers for molecular breeding and of pathogenesis-related gene resources specific for cowpea and other related crops. Additionally, the bioinformatic routine may help biologists to identify not only PR-2 genes but any other class of genes that bear similar conserved domains and motifs in their structure in higher plants. This platform represents the first step towards the development of online tools to identify genes and factors associated with the response to pathogen infection in plants.
Acknowledgments

This work was developed as part of the Northeastern Biotechnology Network (RENORBIO). The authors are grateful to the Brazilian Ministry of Science and Technology (MCT/CNPq) and to the Foundation for Support of Science and Technology of Pernambuco State - Brazil (FACEPE) for supporting our research.
References
1. Pedley, K.F., Martin, G.B.: Role of mitogen-activated protein kinases in plant immunity. Current Opinion in Plant Biology 8, 541–547 (2005)
2. Campos, A.M., Rosa, D.D., Teixeira, J.E.C., Targon, M.L.P.N., Sousa, A.A., Paiva, L.V., Stach-Machado, D.R., Machado, M.A.: PR gene families of citrus: their organ specific biotic and abiotic inducible expression profiles based on ESTs approach. Genetics and Molecular Biology 30, 917–930 (2007)
3. Bonas, U., Anckerveken, G.V.: Gene-for-gene interactions: bacterial avirulence proteins specify plant disease resistance. Current Opinion in Plant Biology 2, 94–98 (1999)
4. van Loon, L.C., van Strien, E.A.: The families of pathogenesis-related proteins, their activities, and comparative analysis of PR-1 type proteins. Physiological and Molecular Plant Pathology 55, 85–97 (1999)
5. Bokshi, A.I., Morris, S.C., McDonald, K., McConchie, R.M.: Application of INA and BABA to control pre- and post-harvest diseases of melons through induction of systemic acquired resistance. Acta Horticulturae 694 (2003)
6. Cutt, J.R., Klessig, D.F.: Pathogenesis related proteins. In: Boller, T., Meins, F. (eds.) Genes Involved in Plant Defense, pp. 209–243. Springer, Vienna (1992)
7. van Loon, L.C., Rep, M., Pieterse, C.M.J.: Significance of inducible defense-related proteins in infected plants. Annual Review of Phytopathology 44, 135–162 (2006)
8. Zhu, Q.E., Maher, S., Masoud, R., Dixon, R., Lamb, C.J.: Enhanced protection against fungal attack by constitutive coexpression of chitinase and glucanase genes in transgenic tobacco. Biotechnology 12, 807–812 (1994)
9. Côté, F., Hahn, M.G.: Oligosaccharins: structures and signal transduction. Plant Molecular Biology 26, 1379–1411 (1994)
10. Delaney, T.P., Ukness, S., Vernooij, B., Friedrich, L., Weymann, K., Negrotto, D., Gaffney, T., Gut-Rella, M., Kessman, H., Ward, E., Ryals, J.: A central role of salicylic acid in plant disease resistance. Science 266, 754–756 (1994)
11. Fanta, N., Ortega, X., Perez, L.M.: The development of Alternaria alternata is prevented by chitinases and beta-1,3-glucanases from Citrus limon seedlings. Biology Research 36, 411–420 (2003)
12. Bonasera, J.M., Kim, J.F., Beer, S.V.: PR genes of apple: identification and expression in response to elicitors and inoculation with Erwinia amylovora. BMC Plant Biology 6, 23–34 (2006)
13. Nurnberg, T., Brunner, F.: Innate immunity in plants and animals: emerging parallels between the recognition of general elicitors and pathogen-associated molecular patterns. Current Opinion in Plant Biology 5, 318–324 (2002)
14. Osmond, R.I., Hrmova, M., Fontaine, F., Imberty, A., Fincher, G.B.: Binding interactions between barley thaumatin-like proteins and (1,3)-beta-D-glucans. Kinetics, specificity, structural analysis and biological implications. European Journal of Biochemistry 268, 4190–4199 (2001)
15. Altschul, S.F., Gish, W., Miller, W., Myers, E.: Basic local alignment search tool. Journal of Molecular Biology 215, 403–410 (1990)
16. Gasteiger, E., Gattiker, A., Hoogland, C., Ivanyi, I., Appel, R.D., Bairoch, A.: ExPASy: the proteomics server for in-depth protein knowledge and analysis. Nucleic Acids Research 31(13), 3784–3788 (2003)
17. Kumar, S., Tamura, K., Nei, M.: MEGA3: integrated software for molecular evolutionary genetics analysis and sequence alignment. Briefings in Bioinformatics 5, 150–163 (2004)
18. Eisen, M.B., Spellman, P.T., Brown, P.O., Botstein, D.: Cluster analysis and display of genome-wide expression patterns. PNAS 95, 14863–14868 (1998)
19. Ewing, R.M., Kahla, A.B., Poirot, O., Lopez, F.: Large-scale statistical analyses of rice ESTs reveal correlated patterns of gene expression. Genome Research 9, 950–959 (1999)
20. Eddy, S.R.: Profile hidden Markov models. Bioinformatics 14, 755–763 (1998)
21. Betts, M.J., Russell, R.B.: Amino acid properties and consequences of substitutions. In: Barnes, M.R., Gray, I.C. (eds.) Bioinformatics for Geneticists. Wiley, Chichester (2003)
22. Finn, R.D., Tate, J., Mistry, J., Coggill, P.C., Sammut, S.J., Hotz, H.R., Ceric, G., Forslund, K., Eddy, S.R., Sonnhammer, L.E., Bateman, A.: The Pfam protein families database. Nucleic Acids Research 36, 281–288 (2008)
Penalized Principal Component Analysis of Microarray Data

Vladimir Nikulin and Geoffrey J. McLachlan

Department of Mathematics, University of Queensland
{v.nikulin,g.mclachlan}@uq.edu.au
Abstract. The high dimensionality of microarray data, the expressions of thousands of genes in a much smaller number of samples, presents challenges that affect the validity of the analytical results. Hence attention has to be given to some form of dimension reduction to represent the data in terms of a smaller number of variables. The latter are often chosen to be linear combinations of the original variables (genes), called metagenes. One commonly used approach is principal component analysis (PCA), which can be implemented via a singular value decomposition (SVD). However, in the case of a high-dimensional matrix, SVD may be very expensive in terms of computational time. We propose to reduce the SVD task to an ordinary maximisation problem with a Euclidean norm, which may be solved easily using gradient-based optimisation. We demonstrate the effectiveness of this approach for the supervised classification of gene expression data.

Keywords: Singular value decomposition, k-means clustering, gradient-based optimisation, cross-validation, gene expression data.
1 Introduction
The goal of projection pursuit is to find low-dimensional projections that provide the most revealing views of the full-dimensional data [1], [2]. Principal component analysis, in statistical science, provides a useful mathematical framework for finding important and interesting low-dimensional projections. A PCA can be implemented via an SVD of the sample covariance matrix [3]. In PCA, the derived principal components are orthogonal to each other and represent the directions of largest variance. PCA captures the largest information in the first few principal components, and guarantees minimal information loss and minimal reconstruction error in a least-squares sense [4].

PCA is a popular method of data decomposition or matrix factorisation with applications throughout science and engineering. The decomposition performed by PCA is a linear combination of the input coordinates, where the coefficients of the combination (the principal vectors) form a low-dimensional subspace that corresponds to the direction of maximal variance in the data. PCA is attractive for a number of reasons. First, the maximum variance property provides a way to compress the data with minimal information loss; in fact, the principal vectors provide the closest linear subspace to the data. Second, the representation of the data in the
projected space is uncorrelated, which is a useful property for subsequent statistical analysis. Third, the PCA decomposition can be achieved via an eigenvalue decomposition of the data covariance matrix [5].

The pure SVD is a quite complex task, which (in the case of high-dimensional data) would require a significant amount of computational time. The main objective of the imposed penalty or regularisation is to simplify the task and to reduce it to an ordinary squared optimisation problem, which may be easily solved using stochastic gradient descent techniques. We shall compare penalized PCA with another approach that uses regularised k-means to group the variables into clusters [6] and represent them by the corresponding centroids. Both methods are unsupervised and, therefore, may be characterised as so-called filter methods. Generally, unsupervised learning methods are widely used to study the structure of the data when no specific response variable is specified. In contrast to SVM-RFE [7], we perform dimension reduction using penalized PCA only once to select the variables.

This paper is organised as follows. Section 2 discusses two stochastic gradient descent models, with left and right updates, which are essentially different. In the case of the model with left update we are looking for the matrix of metagenes directly, and will be able to approximate the original observations as linear combinations of these metagenes. As a consequence, it may be difficult to explain secondary metagenes in terms of the original genes. In contrast, in the model with right update we are looking for the orthogonal matrix-operator R, and will compute the required matrix of metagenes as the product of the original matrix of gene expression data X and R. As a result, the computational structure of the metagenes may be clearly explained as linear combinations of the original genes. Dimensionality reduction represents the first step of the proposed two-step procedure. As a second step (direct classification of the patterns) we can use linear regression, SVM or multinomial logistic regression (MLR) [8], which may be particularly useful in the case of multi-class classification to explore the hidden dependencies between different classes. Section 3 describes five real datasets, where three datasets are binary and two have multi-class labels. Section 4 explains the experimental procedure and the most important aspects of our approach. Finally, Section 5 concludes the paper.
2 Modelling Technique
Let $(x_i, y_i)$, $i = 1, \dots, n$, be a training sample of observations, where $x_i \in \mathbb{R}^p$ is a p-dimensional vector of features and $y_i \in \{-1, 1\}$ is a binary label. Boldface letters denote matrices or column vectors. Let us denote by $X = \{x_{ij},\, i = 1, \dots, n,\, j = 1, \dots, p\}$ the matrix containing the observed values of the p variables for the above n observations. For gene expression studies, the number p of genes is typically in the thousands, and the number n of experiments is typically less than 100. The data are represented by an expression matrix X of size n × p, whose columns contain the expression levels of the p genes in the n samples. Our goal is to find a small number of metagenes or factors.
Fig. 1. Algorithm 1: behaviour of T(L)/(n · p) as a function of the global iteration: (a) colon, (b) leukaemia, (c) lymphoma and (d) Sharma
SVD can be used in practice to remove 'uninteresting structure' from X [9]. It is well known that the leading term of the SVD provides the bilinear form with the best least-squares approximation to X. SVD can be employed to perform a principal component analysis on $X^\top X$. The classical method of factoring a two-dimensional array (matrix) [10] is to factor it as $X \sim L D R^\top$, where L is an $n \times k$ matrix of left eigenvectors, D is a $k \times k$ diagonal matrix of eigenvalues, and R is a $p \times k$ matrix of right eigenvectors, with $L^\top L = R^\top R = I_k$, where $I_k$ is the identity matrix of size k. The relationship

$$X = L D R^\top \qquad (1)$$

is exact if X has rank k. The task is to find the matrix L, which may then be used as a replacement for the original microarray matrix X.

Algorithm 1. Penalised PCA (left update)
1. Input: X — matrix of microarrays.
2. Select: the number of global iterations; k — number of factors; α — regulation parameter; λ — initial learning rate; 0 < ξ < 1 — correction rate; T_S — initial value of the target function.
3. The initial matrix L may be generated randomly.
4. Compute the values of the matrices E, Q and vector S according to (5).
5. Global cycle: repeat steps 6–12 for the selected number of global iterations:
6. factors-cycle: for j = 1 to k repeat steps 7–10:
7. genes-cycle: for i = 1 to n repeat steps 8–9:
8. compute T_ij according to (6);
9. update l_ij ⇐ l_ij + λ · T_ij;
10. re-compute: e_ja, a = 1, ..., p; q_ja, q_aj, a = 1, ..., k, and s_j;
11. compute T(L) according to (4);
12. T_S = T(L) if T(L) > T_S; otherwise λ ⇐ λ · ξ.
13. Output: L — matrix of metagenes or latent factors.
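For orientation, the factorisation (1) and its rank-k truncation can be obtained directly with numpy's SVD; the following sketch (with arbitrary synthetic dimensions) serves only as a baseline against which the gradient-based approximation of Algorithm 1 can be compared.

```python
import numpy as np

# Rank-k SVD factorisation X = L D R' of (1) via numpy.
rng = np.random.default_rng(0)
n, p, k = 60, 2000, 15
X = rng.standard_normal((n, p))

U, d, Vt = np.linalg.svd(X, full_matrices=False)
L, D, R = U[:, :k], np.diag(d[:k]), Vt[:k].T   # L'L = R'R = I_k
X_k = L @ D @ R.T                              # best rank-k approximation

# Sanity check of (2): ||L'X||_F^2 equals the sum of the k
# largest squared singular values.
assert np.isclose(np.sum((L.T @ X) ** 2), np.sum(d[:k] ** 2))
```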
2.1 Penalised PCA with Left Update
The following relations follow directly from (1):

$$\|L^\top X\|_F^2 = \|D R^\top\|_F^2 = \|D\|_F^2 = \sum_{i=1}^{k} d_i^2, \qquad (2)$$

where $\|M\|_F^2 = \mathrm{tr}(M^\top M)$ indicates the squared Frobenius norm (the sum of the squared elements of the matrix M). In fact, we might not be interested in all k eigenvectors, but only in those which correspond to the larger eigenvalues (principal components). In order to reduce noise and the corresponding overfitting, it would be better to exclude the other eigenvectors from further consideration. As a result, we shall derive the following squared optimisation problem:

$$\max_{L} \left\{ \frac{1}{2}\|L^\top X\|_F^2 - \frac{\alpha}{2}\|L^\top L - I_k\|_F^2 \right\}, \qquad (3)$$

where α is a positive regulation parameter; the second term in (3) represents the penalty or regularisation.

Remark 1. Note that another penalized version of the SVD was presented in [11] and [12], where the authors are most interested in obtaining a decomposition made up of sparse vectors. To do this, they use an $L_1$-penalty (LASSO-type penalty) on the rows and columns of the decomposition.

Further, we can re-write the target function of (3) in a more convenient form:

$$T(L) = \frac{1}{2}\|L^\top X\|_F^2 - \frac{\alpha}{2}\|L^\top L - I_k\|_F^2 = \frac{1}{2}\sum_{i=1}^{k}\sum_{j=1}^{p} e_{ij}^2 - \alpha\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} q_{ij}^2 - \frac{\alpha}{2}\sum_{i=1}^{k} s_i^2, \qquad (4)$$

where

$$e_{ij} = \sum_{t=1}^{n} l_{ti} x_{tj}, \qquad q_{ij} = \sum_{t=1}^{n} l_{ti} l_{tj}, \qquad s_i = q_{ii} - 1. \qquad (5)$$
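As a quick numerical check of the reconstruction of (4)–(5), the following snippet evaluates T(L) both in the Frobenius form and via the expanded sums; equality holds for arbitrary X and L (dimensions here are arbitrary toy values).

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k, alpha = 8, 12, 3, 1.5
X, L = rng.standard_normal((n, p)), rng.standard_normal((n, k))

# Frobenius form of (3)/(4).
frob = 0.5 * np.sum((L.T @ X) ** 2) \
     - 0.5 * alpha * np.sum((L.T @ L - np.eye(k)) ** 2)

# Expanded sums of (4), with e, q, s as in (5).
E, Q = L.T @ X, L.T @ L
s = np.diag(Q) - 1
iu = np.triu_indices(k, 1)                       # index pairs i < j
expanded = 0.5 * np.sum(E ** 2) - alpha * np.sum(Q[iu] ** 2) \
         - 0.5 * alpha * np.sum(s ** 2)

assert np.isclose(frob, expanded)
```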
Fig. 2. Algorithm 1: behaviour of the average LOO misclassification rates as a function of parameter α: (a) colon, (b) Sharma; (c) log(log(α+e)), where values of α correspond to the above cases (a) and (b). The range of values of α is [0.01, . . . , 60] , see Section 4 for more details.
In order to maximise (4), we shall use gradient-based optimisation, and will need the following matrix of partial derivatives:

$$T_{ab} = \frac{\partial T(L)}{\partial l_{ab}} = \sum_{j=1}^{p} e_{bj}\, x_{aj} - 2\alpha \sum_{j=1,\, j \neq b}^{k} q_{bj}\, l_{aj} - \alpha\, s_b\, l_{ab}. \qquad (6)$$
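A compact Python sketch of Algorithm 1 follows. It performs full-matrix gradient ascent on T(L), collecting the per-entry derivatives of (6) into the matrix form X E' − 2αL(Q − I_k) (with the diagonal penalty term carrying its exact factor), and it reuses the listing's learning-rate correction. Treat it as an illustrative approximation, not the authors' C implementation.

```python
import numpy as np

def penalised_pca_left(X, k, alpha=1.0, lam=1e-5, xi=0.55, n_iter=200, seed=0):
    """Gradient ascent on T(L) of (4); returns the metagene matrix L."""
    rng = np.random.default_rng(seed)
    L = 0.01 * rng.standard_normal((X.shape[0], k))
    best = -np.inf
    for _ in range(n_iter):
        E, Q = L.T @ X, L.T @ L                      # e_ij and q_ij of (5)
        T = 0.5 * np.sum(E ** 2) \
          - 0.5 * alpha * np.sum((Q - np.eye(k)) ** 2)
        if T > best:
            best = T                                 # T_S in the listing
        else:
            lam *= xi                                # shrink the learning rate
        # full-matrix gradient of T(L)
        L = L + lam * (X @ E.T - 2 * alpha * L @ (Q - np.eye(k)))
    return L
```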
Fig. 1 was produced using the following settings: (a) colon: λ = 4·10⁻⁶, ξ = 0.55, k = 15, α = 1.5; (b) leukaemia: λ = 2·10⁻⁶, ξ = 0.55, k = 7, α = 1; (c) lymphoma: λ = 2·10⁻⁶, ξ = 0.55, k = 23, α = 2; (d) Sharma: λ = 15·10⁻⁶, ξ = 0.55, k = 30, α = 0.3.

2.2 Penalised PCA with Right Update
Similarly to (2), we have

$$\|X R\|_F^2 = \|L D\|_F^2 = \|D\|_F^2 = \sum_{i=1}^{k} d_i^2. \qquad (7)$$
Table 1. Algorithm 1: some selected test results with scheme (11), where LMR is the LOO misclassification rate and NM is the number of misclassified samples

Data       Model  k   α   LMR     NM
colon      SVM     6  10  0.0968   6
colon      SVM    20  10  0.0806   5
colon      SVM    27  10  0.0968   6
leukaemia  SVM     9   5  0.0139   1
leukaemia  SVM    13   5  0.0      0
leukaemia  SVM    20   5  0.0139   1
lymphoma   MLR    10  50  0.0484   3
lymphoma   MLR    13  50  0.0323   2
lymphoma   MLR    20  50  0.0484   3
Sharma     SVM    27   5  0.0833   5
Sharma     SVM    33   5  0.05     3
Sharma     SVM    37   5  0.0833   5
Khan       MLR    23  50  0.0482   4
Khan       MLR    31  50  0.0241   2
Khan       MLR    55  50  0.0361   3
The task in this section is to find the matrix R, which may be used directly to compute the matrix of metagenes XR as a replacement for the original matrix X. Let us consider the following target function:

$$T(R) = \frac{1}{2}\|X R\|_F^2 - \frac{\alpha}{2}\|R^\top R - I_k\|_F^2 = \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{k} e_{ij}^2 - \alpha\sum_{i=1}^{k-1}\sum_{j=i+1}^{k} q_{ij}^2 - \frac{\alpha}{2}\sum_{i=1}^{k} s_i^2, \qquad (8)$$

where

$$e_{ij} = \sum_{t=1}^{p} x_{it}\, r_{tj}, \qquad q_{ij} = \sum_{t=1}^{p} r_{ti}\, r_{tj}, \qquad s_i = q_{ii} - 1. \qquad (9)$$
Algorithm 2. Penalised PCA (right update)
1. Input: X — matrix of microarrays.
2. Select: the number of global iterations; k — number of factors; α — regulation parameter; λ — initial learning rate; 0 < ξ < 1 — correction rate; T_S — initial value of the target function.
3. The initial matrix R may be generated randomly.
4. Compute the values of the matrices E, Q and vector S according to (9).
5. Global cycle: repeat steps 6–12 for the selected number of global iterations:
6. factors-cycle: for j = 1 to k repeat steps 7–10:
7. genes-cycle: for i = 1 to p repeat steps 8–9:
8. compute T_ij according to (10);
9. update r_ij ⇐ r_ij + λ · T_ij;
10. re-compute: e_aj, a = 1, ..., n; q_ja, q_aj, a = 1, ..., k, and s_j;
11. compute T(R) according to (8);
12. T_S = T(R) if T(R) > T_S; otherwise λ ⇐ λ · ξ.
13. Output: R, where XR is the matrix of metagenes or latent factors.

Fig. 3. Matrices of metagenes: (a) lymphoma, (b) leukaemia
In order to maximise (8), we shall use gradient-based optimisation, and will need the following matrix of partial derivatives:

$$T_{ab} = \frac{\partial T(R)}{\partial r_{ab}} = \sum_{i=1}^{n} e_{ib}\, x_{ia} - 2\alpha \sum_{j=1,\, j \neq b}^{k} q_{bj}\, r_{aj} - \alpha\, s_b\, r_{ab}. \qquad (10)$$
Fig. 4 was produced using the following settings: (a) colon: λ = 4·10⁻⁶, ξ = 0.55, k = 19, α = 0.8; (b) leukaemia: λ = 4·10⁻⁶, ξ = 0.55, k = 23, α = 5; (c) lymphoma: λ = 2·10⁻⁶, ξ = 0.55, k = 39, α = 35; (d) Sharma: λ = 15·10⁻⁶, ξ = 0.55, k = 40, α = 25. Also, we used the parameters λ = 2·10⁻⁵, ξ = 0.55 in Table 2 in all cases of lymphoma and Khan related to the MLR model.
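The right update mirrors the left one; only the gradient changes. A single vectorised ascent step might look as follows (again an illustrative matrix form of the per-entry derivatives in (10), not the authors' code). Once R is fitted, the metagenes are simply the product X @ R, so each metagene is an explicit linear combination of the original genes.

```python
import numpy as np

def right_update_step(X, R, alpha, lam):
    # One vectorised ascent step on T(R) of (8).
    E, Q = X @ R, R.T @ R                    # e_ij and q_ij of (9)
    grad = X.T @ E - 2 * alpha * R @ (Q - np.eye(R.shape[1]))
    return R + lam * grad
```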
Fig. 4. Algorithm 2: behaviour of T(R)/(n · p) as a function of the global iteration: (a) colon, (b) leukaemia, (c) lymphoma and (d) Sharma
2.3 Regularised k-Means Clustering Algorithm
Clustering methods provide a powerful tool for the exploratory analysis of high-dimensional, low-sample-size data sets, such as gene expression microarray data. As with PCA, cluster analysis requires no response variable and thus falls into the category of unsupervised learning methods. There are two major problems: the stability of the clustering and the meaningfulness of centroids as cluster representatives. On the one hand, big clusters impose strong smoothing and possible loss of very essential information. On the other hand, small clusters are usually very unstable and noisy. Accordingly, they cannot be treated as equal and independent representatives. To address the above problems, we propose regularisation to prevent the creation of super-big clusters, and to attract data to existing small clusters [6].

Algorithm 3. Regularised k-means clustering
1. Select the number of clusters k, distance Φ and regulation parameter β;
2. split the available genes randomly into k subsets (clusters) of approximately the same size;
3. compute the average (centroid) q_c of any cluster c;
4. compute the maximum distance L between genes and centroids;
5. redistribute genes according to Φ(x_j, q_c) + R_c, where the regularisation term is R_c = β · L · #_c / p, and #_c is the size of cluster c at the current time;
6. recompute the centroids (known from the previous sections as metagenes);
7. repeat steps 5–6 until convergence (that means stable behaviour of the target function).
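A minimal numpy sketch of Algorithm 3 is given below, assuming the Euclidean distance for Φ; the penalty β·L·#_c/p is recomputed at every reassignment pass, and a cluster that empties simply keeps its previous centroid (a detail the listing leaves open).

```python
import numpy as np

def regularised_kmeans(X, k, beta=0.1, max_iter=100, seed=0):
    """X is the p x n matrix of genes; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    p = X.shape[0]
    labels = rng.integers(0, k, size=p)               # step 2: random split
    centroids = np.stack([X[labels == c].mean(axis=0) for c in range(k)])
    for _ in range(max_iter):
        dist = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        L = dist.max()                                # step 4
        sizes = np.bincount(labels, minlength=k)
        penalty = beta * L * sizes / p                # R_c of step 5
        new_labels = np.argmin(dist + penalty[None, :], axis=1)
        if np.array_equal(new_labels, labels):
            break                                     # step 7: convergence
        labels = new_labels
        for c in range(k):                            # step 6
            if np.any(labels == c):
                centroids[c] = X[labels == c].mean(axis=0)
    return labels, centroids
```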
3 Data

3.1 Colon
The colon dataset¹ is represented by a matrix of 62 tissue samples (40 tumor and 22 normal) and 2000 genes. The microarray matrix for this set thus has p = 2000 rows and n = 62 columns.

3.2 Leukaemia
The leukaemia dataset² was originally provided in [13]. It contains the expression levels of p = 7129 genes of n = 72 patients, consisting of 47 patients with acute lymphoblastic leukaemia (ALL) and 25 patients with acute myeloid leukaemia (AML).

Table 2. Algorithm 2: some selected test results with scheme (11)

Data       Model  k   α    LMR     NM
colon      SVM     9  0.8  0.129    8
colon      SVM    19  0.8  0.0968   6
colon      SVM    30  0.8  0.1129   7
leukaemia  SVM    15    9  0.0139   1
leukaemia  SVM    23    9  0        0
leukaemia  SVM    25    9  0.0139   1
lymphoma   MLR    26   50  0.0484   3
Sharma     SVM    40   25  0.1      6
Sharma     SVM    21   25  0.1833  11
Sharma     SVM    28   25  0.15     9
Sharma     SVM    50   25  0.0667   4
Khan       MLR    19  120  0.0482   4
Khan       MLR    39  120  0.0241   2
Khan       MLR    43  120  0.0361   3

3.3 Lymphoma

This publicly available dataset³ contains the gene expression levels of the three most prevalent adult lymphoid malignancies: (1) 42 samples of diffuse large B-cell lymphoma (DLCL), (2) 9 samples of follicular lymphoma (FL), and (3) 11 samples of chronic lymphocytic leukaemia (CLL). The total sample size is n = 62, with p = 4026 genes. Note that about 4.6% of the values in the original lymphoma gene expression matrix are missing and were replaced by average values (in relation to the corresponding genes). The maximum number of missing values per gene is 16. The range of values in the expression matrix is [−6.06, ..., 6.3]. More information on these data may be found in [14].

¹ http://microarray.princeton.edu/oncology/affydata/index.html
² http://www.broad.mit.edu/cgi-bin/cancer/publications/
³ http://llmpp.nih.gov/lymphoma/data/figure1

3.4 Sharma
This dataset was described in [15] and contains the expression levels (mRNA) of 1368 genes from 60 blood samples taken from 56 women. Each sample was labelled by clinicians, with 24 labelled as having breast cancer and 36 labelled as not having it. Some of the samples were analysed more than once in separate batches, giving a total of 102 labelled base samples. We computed, for each of the 60 samples, an average of the corresponding base samples. As a result, the original matrix with dimensions 102 × 1368 was reduced to a 60 × 1368 matrix.

3.5 Khan
This dataset [16] contains 2308 genes and 83 observations, each from a child who was determined by clinicians to have a type of small round blue cell tumour. This includes the following four classes: neuroblastoma (N), rhabdomyosarcoma (R), Burkitt lymphoma (B) and Ewing sarcoma (E). The numbers in each class are: 18 N, 25 R, 11 B and 29 E.

3.6 Additional Pre-processing Steps
We followed the pre-processing steps of [17] applied to the leukaemia set: (1) thresholding: floor of 1 and ceiling of 20000; (2) filtering: exclusion of genes with max/min ≤ 2 and (max − min) ≤ 100, where max and min refer respectively to the maximum and minimum expression levels of a particular gene across the tissue samples. This left us with 1896 genes. In addition, the natural logarithm of the expression levels was taken. After the above pre-processing we observed a significant improvement in the quality of classification.

Remark 2. We conducted similar studies in application to the colon data. Firstly, we observed the following statistical characteristics: min = 5.82, max = 20903, 4.38 ≤ max/min ≤ 1258.6. Then, we took the natural logarithm of the expression levels. Based on our experimental results we cannot report any improvement in the quality of classification.

We applied double normalisation to the data. Firstly, we normalised each column to have mean zero and unit standard deviation. Then, we applied the same normalisation to each row.
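A minimal numpy sketch of these steps, written for an n × p matrix of raw intensities; the thresholds and filter rules are exactly those quoted above, while the function name and argument defaults are ours.

```python
import numpy as np

def preprocess(X, floor=1.0, ceil=20000.0):
    X = np.clip(X, floor, ceil)                         # (1) thresholding
    mx, mn = X.max(axis=0), X.min(axis=0)
    X = np.log(X[:, (mx / mn > 2) & (mx - mn > 100)])   # (2) filter, then log
    X = (X - X.mean(axis=0)) / X.std(axis=0)            # normalise columns...
    return (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
```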
Table 3. Algorithm 3: some selected test results with scheme (11)

Data       Model  k   β    LMR     NM
colon      SVM     7  0.2  0.1129   7
colon      SVM     8  0.2  0.0968   6
colon      SVM     9  0.2  0.1129   7
leukaemia  SVM    25  0.1  0.0139   1
leukaemia  SVM    30  0.1  0        0
lymphoma   MLR    60  0.1  0.0484   3
lymphoma   MLR    70  0.1  0.0323   2
lymphoma   MLR    80  0.1  0.0484   3
Sharma     SVM    50  0.1  0.1167   7
Sharma     SVM    55  0.1  0.0833   5
Sharma     SVM    60  0.1  0.1333   8
Khan       MLR    40  0.1  0.0482   4
Khan       MLR    44  0.1  0.0241   2
Khan       MLR    50  0.1  0.0361   3
4 Experiments
After decomposition of the original matrix X we used the leave-one-out (LOO) classification scheme, applied to the matrix L in the case of Algorithm 1 and to the matrix XR in the case of Algorithm 2. This means that we set aside the i-th observation and fit the classifier on the remaining (n − 1) data points. We conducted experiments with a linear SVM (using our own code written in C). The experimental procedure has a heuristic nature and its performance depends essentially on the initial settings. We shall denote such a scheme as
(11)
where we conducted nrs identical experiments with randomly generated initial settings. Any particular experiments includes 2 steps: 1) dimensionality reduction with Algorithm 2 or Algorithm 2.2; 2) LOO evaluation with the classification Model. In most of our experiments with scheme (11) we used nrs = 20. We conducted experiments with two different classifiers: 1) linear SV M , and 2) we used M LR in application to the lymphoma and Khan sets. These classifiers are denoted by “Model” in (11). Fig. 2 illustrates expected LOO misclassification rates (ELMR) as a function of the double logarithmic function of the parameter α. As it may be expected, results corresponding to the small values of α are poor. Then, we have significant improvement to some point. After, that point the model suffers from over-regularisation. Tables 1-2 represent some best results. It can be seen that our results are competitive with those in [18], [19], where the best reported result for the colon set is LMR = 0.113, and LMR = 0.0139 for the leukaemia set.
Penalized Principal Component Analysis of Microarray Data
93
In the case of Fig. 2, we used the following settings: (a) colon: λ = 10−5 , ξ = 0.55, k = 13; (b) Sharma: λ = 10−4 , ξ = 0.75, k = 15. 4.1
Selection Bias
Cross-validation can be used to estimate model or classifier parameters as well as perform model and variable selection. However, combining these steps with error estimation for the final classifier can lead to bias unless one is particular careful [20]. ‘Selection bias’ [21](p.218) can occur when cross-validation is used ‘internally’. In this case, all the available data are used to select the model as a subset of available predictors. This subset of predictors is then fixed and the error rate is estimated by cross-validation. In Fig. 2, we have plotted the ELMRs estimated using LOO cross-validation under the assumption that the matrix factorisation will be the same during the n validation trials as chosen on the basis of the full data set. However, there will be a selection bias in these estimates as the matrix factorisation should be reformed as a natural part of any validation trial; see, for example, [22]. But, since the labels yt of the training data were not used in the factoring process, the selection bias should not be of a practical importance. The validation scheme (LOO scheme N2) SCH(nrs, M odel, LOO2),
(12)
where the step 1 with dimensionality reduction is a part of any LOO loop, requires a lot more computational time comparing with the model (11). Nevertheless, we tested the model (12) with nrs = 10 in application to the fixed number of metagenes. Note that the validation scheme (12) is very compatible with Algorithm 2 and may be easily implemented: at any LOO cycle we can compute vector-row of metagenes according to the following formula: vmgnew = vgnew R, where vgnew is a vector-row of genes corresponding to the tissue under consideration. In the case of colon dataset, we used k = 19 and observed LMR = 0.0968 (1), 0.1129 (6), 0.129 (2) and 0.1451 (1), where the integer numbers in the brackets indicate the number of the times the corresponding value was observed. In the case of leukaemia dataset, we used k = 25 and observed the following values: LMR = 0.0139 (6), 0.0278 (3) and 0.0417 (1). In the case of Sharma dataset, we used k = 18 and observed the following values: LMR = 0.1833 (2), 0.2 (3), 0.2167 (3) and 0.2333 (2). Selection Bias with k-Means Algorithm. In the case of Algorithm 3 the scheme (12) may be organised as follows. We can run Algorithm 3 without
94
V. Nikulin and G.J. McLachlan
any particular tissue. As an outcome, Algorithm 3 produce codebook, or the direction for any gene to the unique cluster. After completion of the clustering with Algorithm 3, this codebook may be used in application to the absent tissue to recompute centroids/metagenes. Table 4. Algorithm 3: some selected test results with scheme (12)
4.2
Data
Model
k
β
LMR
NM
colon leukaemia lymphoma Sharma Khan
SVM SVM MLR SVM MLR
13 44 60 50 30
0.2 0.1 0.1 0.1 0.1
0.1129 0 0.0484 0.1666 0.0241
7 0 3 10 2
Computation Time
A Linux computer with speed 3.2GHz, RAM 16GB, was used for most of the computations. The time for 100 global iterations with Algorithm 1 (used special code written in C) in the case of k = 50 was about 15 sec (Sharma dataset). Table 5. Algorithm 2: computation time, where Scheme 1 corresponds to (11) and Scheme 2 corresponds to (12)
5
Data
k
n
p
Scheme
nrs
Time (min.)
colon lymphoma colon leukaemia Sharma lymphoma
29 21 19 25 40 39
62 62 62 72 60 62
2000 4026 2000 1896 1368 4026
1 1 2 2 2 2
20 20 10 10 10 10
6 11 100 180 195 600
5 Conclusion
Microarray data analysis presents a challenge to the traditional machine learning techniques due to the availability of only a limited number of training instances and the existence of a large number of genes. In many cases, machine learning techniques rely too much on the gene selection, which may cause selection bias. Generally, feature selection may be classified into two categories based on whether the criterion depends on the learning algorithm used to construct the prediction rule. If the criterion is independent of the prediction rule, the method is said to follow a filter approach, and if the criterion depends on the rule, the method is said to follow a wrapper approach [22]. The objective of this study is to develop a filtering machine learning approach and produce a robust classification procedure for microarray data.
Based on our experiments, the proposed penalized PCA performed an effective dimensional reduction as a preparation step for the following supervised classification. Similarly, a two-step clustering procedure was introduced in [6]. Also, it was reported in [6] that classifiers built in metagene, rather than original gene space, are more robust and reproducible because the projection can reduce noise more than simple normalisation. Algorithms 1 and 2, as a main contribution of this paper, are conceptually simple. Consequently, they are very fast and stable. Note also that stability of the algorithms depends essentially on the properly selected learning rate, which must not be too large. We can include additional functions so that the learning rate will be reduced or increased depending on the current performance. There are many advantages to such a metagene approach. By capturing the major, invariant biological features and reducing noise, metagenes provide descriptions of data sets that allow them to be more easily combined and compared. In addition, interpretation of the metagenes, which characterize a subtype or subset of samples, can give us insight into underlying mechanisms and processes of a disease. The results that we obtained on five real datasets confirm the potential of our approach. Based on Tables 1 - 3, we can conclude that performance of the penalised PCA introduced in this paper is better in terms of LMR compared to the regularised k-means algorithm. It is also faster to implement.
References
[1] Huber, P.: Projection pursuit. The Annals of Statistics 13, 435–475 (1985)
[2] Friedman, J.: Exploratory projection pursuit. Journal of the American Statistical Association 82, 249–266 (1987)
[3] Alter, O., Brown, P., Botstein, D.: Singular value decomposition for genome-wide expression data processing and modelling. PNAS 97, 10101–10106 (2000)
[4] Guan, Y., Dy, J.: Sparse probabilistic principal component analysis. In: AISTATS, pp. 185–192 (2009)
[5] Zass, R., Shashua, A.: Nonnegative sparse PCA. In: Advances in Neural Information Processing Systems (2006)
[6] Nikulin, V., McLachlan, G.: Regularised k-means clustering for dimension reduction applied to supervised classification. In: CIBB Conference, Genova, Italy (2009)
[7] Guyon, I., Weston, J., Barnhill, S., Vapnik, V.: Gene selection for cancer classification using support vector machines. Machine Learning 46, 389–422 (2002)
[8] Bohning, D.: Multinomial logistic regression algorithm. Ann. Inst. Statist. Math. 44, 197–200 (1992)
[9] Liu, L., Hawkins, D., Ghosh, S., Young, S.: Robust singular value decomposition analysis of microarray data. PNAS 100, 13167–13172 (2003)
[10] Fogel, P., Young, S., Hawkins, D., Ledirac, N.: Inferential, robust non-negative matrix factorization analysis of microarray data. Bioinformatics 23, 44–49 (2007)
[11] Hastie, T., Tibshirani, R.: Efficient quadratic regularisation of expression arrays. Biostatistics 5, 329–340 (2004)
[12] Witten, D., Tibshirani, R., Hastie, T.: A penalized matrix decomposition, with applications to sparse principal components and canonical correlation analysis. Biostatistics 10, 515–534 (2009)
[13] Golub, T., et al.: Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531–537 (1999)
[14] Alizadeh, A., et al.: Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling. Nature 403, 503–511 (2000)
[15] Sharma, P., et al.: Early detection of breast cancer based on gene-expression patterns in peripheral blood cells. Breast Cancer Research 7, R634–R644 (2005)
[16] Khan, J., et al.: Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks. Nature Medicine 7, 673–679 (2001)
[17] Dudoit, S., Fridlyand, J., Speed, T.: Comparison of discrimination methods for the classification of tumors using gene expression data. Journal of the American Statistical Association 97, 77–87 (2002)
[18] Dettling, M., Buhlmann, P.: Boosting for tumor classification with gene expression data. Bioinformatics 19, 1061–1069 (2003)
[19] Peng, Y.: A novel ensemble machine learning for robust microarray data classification. Computers in Biology and Medicine 36, 553–573 (2006)
[20] Wood, I., Visscher, P., Mengersen, K.: Classification based upon gene expression data: bias and precision of error rates. Bioinformatics 23, 1363–1370 (2007)
[21] McLachlan, G., et al.: Analyzing Microarray Gene Expression Data. Wiley, Hoboken (2004)
[22] Ambroise, C., McLachlan, G.: Selection bias in gene extraction on the basis of microarray gene expression data. PNAS 99, 6562–6566 (2002)
An Information Theoretic Approach to Reverse Engineering of Regulatory Gene Networks from Time-Course Data

Pietro Zoppoli¹,², Sandro Morganella¹,², and Michele Ceccarelli¹,²

¹ Department of Biological and Environmental Sciences, University of Sannio, Benevento, Italy
² Bioinformatics Lab, IRGS Istituto di Ricerche Genetiche G. Salvatore, BioGeM s.c.a r.l., Ariano Irpino (AV), Italy
Abstract. One of the main aims of Molecular Biology is to gain knowledge about how molecular components interact with each other and to understand gene function regulation. Several methods have been developed to infer gene networks from steady-state data; much less literature has been produced about time-course data, so the development of algorithms to infer gene networks from time-series measurements is a current challenge in the bioinformatics research area. In order to detect dependencies between genes at different time delays, we propose an approach to infer gene regulatory networks from time-series measurements, starting from a well-known algorithm based on information theory. In particular, we show how the ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) algorithm can be used for gene regulatory network inference in the case of time-course expression profiles. The resulting method is called TimeDelay-ARACNE. It tries to extract dependencies between two genes at different time delays, providing a measure of these dependencies in terms of mutual information. The basic idea of the proposed algorithm is to detect time-delayed dependencies between the expression profiles by assuming as the underlying probabilistic model a stationary Markov random field. Less informative dependencies are filtered out using an automatically calculated threshold, retaining the most reliable connections. TimeDelay-ARACNE can infer small local networks of time-regulated gene-gene interactions, detecting their direction, and can also discover cyclic interactions, even when only a medium-small number of measurements is available. We test the algorithm both on synthetic networks and on microarray expression profiles. The microarray measurements concern part of the S. cerevisiae cell cycle and the E. coli SOS pathways. Our results are compared with those of two previously published algorithms, based on dynamic Bayesian networks and on systems of ODEs, showing that TimeDelay-ARACNE has good accuracy, recall and F-score for the network reconstruction task.
1 Introduction
In order to understand cellular complexity, much attention is placed on the large dynamic networks of co-regulated genes that underlie phenotype differences.
One of the aims in molecular biology is to make sense of high-throughput data such as those produced by gene expression microarray experiments. Many important biological processes (e.g., cellular differentiation during development, aging, disease aetiology) are very unlikely to be controlled by a single gene; rather, they emerge from the complex regulatory interactions between thousands of genes within a four-dimensional space. In order to identify these interactions, expression data over time can be exploited. An important open question is the development of efficient methods to infer the underlying gene regulatory networks (GRN) from temporal gene expression profiles. Inferring, or reverse-engineering, gene networks can be defined as the process of identifying gene interactions from experimental data through computational analysis. A GRN can be modelled as a graph G = (V, U, D), where V is the set of nodes corresponding to genes, U is the set of unordered pairs (undirected edges) and D is the set of ordered pairs (directed edges). A directed edge d_ij from v_i to v_j is present iff there is a causal effect from node v_i to node v_j. An undirected edge u_ij represents the mutual association between nodes v_i and v_j. Gene expression data from microarrays are typically used for this purpose. There are two broad classes of reverse-engineering algorithms [1]: those based on the physical interaction approach, which aim at identifying interactions among transcription factors and their target genes (gene-to-sequence interaction), and those based on the influence interaction approach, which try to relate the expression of a gene to the expression of the other genes in the cell (gene-to-gene interaction), rather than relating it to sequence motifs found in the promoters. We will refer to the ensemble of these influence interactions as gene networks. Many algorithms have been proposed in the literature to model gene regulatory networks [2] and solve the network inference problem [3].

Ordinary Differential Equations

Reverse-engineering algorithms based on ordinary differential equations (ODEs) relate changes in gene transcript concentration to each other and to an external perturbation. Typical perturbations are, for example, the treatment with a chemical compound (i.e., a drug), or the overexpression or down-regulation of particular genes. A set of ODEs, one for each gene, describes gene regulation as a function of other genes. As ODEs are deterministic, the interactions among genes represent causal interactions, rather than statistical dependencies. ODE-based approaches yield signed directed graphs and can be applied to both steady-state and time-series expression profiles [3, 4].

Bayesian Networks

A Bayesian network [5] is a graphical model for representing probabilistic relationships among a set of random variables X_i, where i = 1, ..., n. These relationships are encoded in the structure of a directed acyclic graph G, whose vertices (or nodes) are the random variables X_i. The relationships between the variables are described by a joint probability distribution P(X_1, ..., X_n). The genes, on which the probability is conditioned, are called the parents of gene
i and represent its regulators, and the joint probability density is expressed as a product of conditional probabilities. Bayesian networks cannot contain cycles (i.e., no feedback loops). This restriction is the principal limitation of the Bayesian network model [6]. Dynamic Bayesian networks overcome this limitation [7]: they are an extension of Bayesian networks able to infer interactions from a data set consisting of time-series rather than steady-state data.

Graphical Gaussian Model

Graphical Gaussian models, also known as covariance selection or concentration graph models, assume a multivariate normal distribution for the underlying data. The independence graph is defined by a set of pairwise conditional independence relationships, calculated using partial correlations as a measure of independence of any two genes, which determine the edge set of the graph [8]. Partial cross-correlation has also been used to deal with time delays [9].

Gene Relevance Network

Gene relevance networks are based on the covariance graph model. Given a measure of association and a threshold value, the association A(X, Y) is computed for all pairs of domain variables (X, Y). Variables X and Y are connected by an undirected edge when the association A(X, Y) exceeds the predefined threshold. One such measure of association is the mutual information (MI) [10], one of the main tools of information theory (IT). In IT approaches, the expression level of a gene is considered as a random variable, and MI is the main tool for measuring whether and how two genes influence each other. The MI between two variables X and Y can also be defined as the reduction in uncertainty about X after observing a second random variable Y. Edges in networks derived by information-theoretic approaches represent statistical dependencies among gene expression profiles. As in the case of Bayesian networks, an edge does not represent a direct causal interaction between two genes, but only a statistical dependency. It is possible to derive the information-theoretic approach as a method to approximate the joint probability density function of gene expression profiles, as is done for Bayesian networks [11–13].

Time-Course Reverse Engineering

The availability of time-series gene expression data can help in the study of the dynamical properties of molecular networks, by exploiting causal gene-gene temporal relationships. In the recent literature several dynamic models, such as Probabilistic Boolean Networks (PBN) [14], Dynamic Bayesian Networks (DBN) [7], Hidden Markov Models (HMM) [15], Kalman filters [16], Ordinary Differential Equations (ODEs) [4, 17], pattern recognition approaches [18], signal processing approaches [19], model-free approaches [20] and informational
approaches [21] have been proposed for reconstructing regulatory networks from time-course gene expression data. Most of them are essentially model-based, trying to uncover the dynamics of the system by estimating a series of parameters, such as autoregressive coefficients [19] or the coefficients of a state-transition matrix [16] or of a stiffness matrix [4, 17]. The model parameters themselves describe the temporal relationships between the nodes of the molecular network. In addition, several current approaches try to catch the dynamical nature of the network by unrolling the states of the network nodes in time; this is the case for Dynamic Bayesian Networks [7] and Hidden Markov Models [15]. One of the major differences between the approach proposed here and these approaches is that the dynamical behaviour of the nodes in the network, in terms of the time dependence between their reciprocal regulations, is modelled in the connections rather than in a time-unwrapping of the nodes. As reported in Figure 1, we assume that the activation of a gene A can influence the activation of a gene B at successive time instants, and that this information is carried in the connection between gene A and gene B. Indeed, this idea is also at the basis of the time-delay neural network model efficiently used in sequence analysis and speech recognition [22]. Another interesting feature of the reported method, with respect to the ARACNE algorithm, is the fact that the time-delayed dependencies can be used to derive the direction of the connections between the nodes of the network, trying to discriminate between regulator genes and regulated genes. The approach reported here also has some similarities with the method proposed in [21]; the main differences are the use of different time delays, the use of the data processing inequality for pruning the network rather than the minimum description length principle, and the discretization of the expression values.

Summary of the Proposed Algorithm

TimeDelay-ARACNE tries to extend ARACNE (Algorithm for the Reconstruction of Accurate Cellular Networks) to time-course data, retrieving temporal statistical dependencies between gene expression profiles. The idea on which TimeDelay-ARACNE is based comes from the consideration that the expression of a gene at a certain time could depend on the expression level of another gene at a previous time point, or at very few time points before. TimeDelay-ARACNE is a three-step algorithm: first it detects, for all genes, the time point of the initial change in expression; second, there is a network construction step; and finally a network pruning step. It is worth noticing that the analytical tools for time series often require conditions such as stability and stationarity (see [23]). Although it is not possible to state that these conditions hold in general for microarray data, due to the limited number of samples and to the particular experimental setup producing the data, time series analysis methods have nevertheless been demonstrated to be useful tools in many applications of time-course gene expression data analysis. For example, Ramoni et al. [24] used an autoregressive estimation step as feature extraction prior to classification, while Holter et al. [25] use the characteristic modes obtained by singular value decomposition to
model a linear framework resulting in a time-translational matrix. In particular, TimeDelay-ARACNE implicitly assumes the stationarity and stability conditions in the kernel regression estimator used for the computation of the mutual information, as described in the Algorithms section.
Methods

Datasets

Simulated Gene Expression Data. We construct some synthetic gene networks in order to compute the precision p, recall r and F-score of the method against reference truth tables, and to compare its performance to other methods. According to the terminology in [26], we consider a gene network to be well-defined if its interactions allow one to distinguish between regulator genes and regulated genes, where the former affect the behaviour of the latter. Given a well-defined network, we can have genes with zero regulators (called stimulators, which could represent the external environmental conditions), genes with one regulator, genes with two regulators, and so on. If a gene has at least one regulator (i.e., it is not a stimulator) then it has a regulation function which describes its response to a particular stimulus of its regulator(s). Our synthetic networks contain some stimulator genes with a random behaviour and regulated genes which can in turn be regulators of other genes. The network dynamics are modelled by linear causal relations which are formulated by a set of randomly generated equations. In particular, denoting the expression of gene i at time t by g_i^t, our synthetic network generation module works as follows:

– if gene i is a stimulator gene then its expression profile g_i^t, t = 0, 1, ... is randomly initialized with a sequence of uniform numbers in [1, 100];
– for each non-stimulator gene i, g_i^0 is initialized with a uniform random number in [1, 100];
– for each non-stimulator gene i, the expression values g_i^1, ..., g_i^t are computed according to a stochastic difference equation with random coefficients, depending on one or two other regulator genes, using one of the two equations below:

    g_i^t = α_i^t g_i^{t-1} + β_i^t g_{p_i}^{t-1} + η_i^t                                    (1)

    g_i^t = α_i^t g_i^{t-1} + β_i^t g_{p_i}^{t-1} + γ_i^t g_{q_i}^{t-1} + η_i^t              (2)
  Here the coefficients α_i^t, β_i^t and γ_i^t are random variables uniformly distributed in [0, 1] and η_i^t is a random Gaussian noise with zero mean and variance σ. Moreover, the regulator genes p_i and q_i of the i-th gene are randomly selected at the beginning of each simulation run. The network generation algorithm is set in such a way that 75% of the genes have one regulator and 25% of the genes have two regulators;
– each expression profile is then normalized to be within the interval [0, 1].
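To make the procedure concrete, the following is a minimal Python sketch of the generator described above. The function and variable names, the default noise level, and the restriction that regulators are drawn among lower-indexed genes (which guarantees a well-defined network) are illustrative assumptions, not part of the original implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def generate_profiles(n_genes=20, n_points=30, n_stim=2, sigma=0.05):
    """Generate synthetic profiles from equations (1)-(2): each
    non-stimulator gene follows one regulator (75% of genes) or
    two regulators (25%) with uniform random coefficients."""
    g = np.empty((n_genes, n_points))
    # stimulator genes: fully random profiles in [1, 100]
    g[:n_stim] = rng.uniform(1, 100, size=(n_stim, n_points))
    regulators = {}
    for i in range(n_stim, n_genes):
        n_reg = 1 if rng.random() < 0.75 else 2
        # assumption: regulators are drawn among lower-indexed genes
        regulators[i] = rng.choice(i, size=n_reg, replace=False)
        g[i, 0] = rng.uniform(1, 100)            # random initial value
    for t in range(1, n_points):
        for i in range(n_stim, n_genes):
            value = rng.uniform() * g[i, t - 1]          # alpha term
            for p in regulators[i]:
                value += rng.uniform() * g[p, t - 1]     # beta/gamma terms
            g[i, t] = value + rng.normal(0.0, sigma)     # Gaussian noise eta
    # normalize every profile into [0, 1]
    g = (g - g.min(axis=1, keepdims=True)) / np.ptp(g, axis=1, keepdims=True)
    return g, regulators
```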
In our experiments, the PPV, recall and F-score of the proposed method and of the other methods are computed as averages over a set of 20 runs on different random networks with the same number of genes, number of time points and noise level.

Microarray Expression Profiles. The time-course profiles of a set of 11 genes, part of the G1 phase of the yeast cell cycle, are selected from the widely used Saccharomyces cerevisiae cell cycle microarray data [27]. These microarray experiments were designed to create a comprehensive list of yeast genes whose transcription levels vary periodically within the cell cycle. We select one of these profiles, in which the gene expression of cell-cycle-synchronized yeast cultures was collected over 17 time points taken at 10-minute intervals. This time series covers more than two complete cycles of cell division. The first time point, related to the M phase, is excluded in order to better recover the time relationships present in the G1 phase. The true edges of the underlying network are provided by the KEGG yeast cell cycle reference pathway [28].

Green Fluorescent Protein Real-Time Gene Expression Data. The time-course profiles of a set of 8 genes, part of the SOS pathway of E. coli [29], are selected. The data are produced by a system for real-time monitoring of the transcriptional activity of operons by means of low-copy reporter plasmids in which a promoter controls GFP (green fluorescent protein). From this dataset we select the first 3 time-course profiles and, for each one, the first 15 points; we then concatenate these profiles, obtaining a 45-point profile. For each profile we use only the first 15 points in order to avoid the misleading flat tails characterizing these gene profiles (the response to the UV stimulus is quick, so mRNA levels soon return to the pre-stimulus condition).
Algorithms

ARACNE

The ARACNE algorithm has been proposed in [11, 30]. ARACNE is an information-theoretic algorithm for the reverse engineering of transcriptional networks from steady-state microarray data. ARACNE, like many other approaches, is based on the assumption that the expression level of a given gene can be considered as a random variable, and that the mutual relationships between genes can be derived from statistical dependencies. It defines an edge as an irreducible statistical dependency between gene expression profiles that cannot be explained as an artifact of other statistical dependencies in the network. It is a two-step algorithm: network construction and network pruning. Under the assumption of a two-way network, all statistical dependencies can be inferred from pairwise marginals, and no higher-order analysis is needed. ARACNE identifies candidate interactions by estimating the pairwise mutual information between gene expression profiles, I(g_i, g_j) ≡ I_ij, an information-theoretic measure of relatedness that is
zero iff the joint distribution between the expression levels of gene i and gene j satisfies P(g_i, g_j) = P(g_i)P(g_j). ARACNE estimates MI using a computationally efficient Gaussian kernel estimator. Since MI is reparameterization invariant, ARACNE copula-transforms (i.e., rank-orders) the profiles before MI estimation; the range of the transformed variables is thus between 0 and 1, and their marginal probability distributions are manifestly uniform. This decreases the influence of the arbitrary transformations involved in microarray data pre-processing and removes the need to consider position-dependent kernel widths, which might be preferable for non-uniformly distributed data. Second, the MIs are filtered using an appropriate threshold I_0, and most of the indirect candidate interactions are then removed using a well-known information-theoretic property, the data processing inequality (DPI). ARACNE eliminates all edges for which the null hypothesis of mutually independent genes cannot be ruled out. To this end, ARACNE randomly shuffles the expression of genes across the various microarray profiles and evaluates the MI for such manifestly independent genes. The DPI states that if genes g1 and g3 interact only through a third gene g2 (i.e., if the interaction network is g1 ↔ · · · ↔ g2 ↔ · · · ↔ g3 and no alternative path exists between g1 and g3), then I(g1, g3) ≤ min(I(g1, g2), I(g2, g3)) [31]. Thus the least of the three MIs can come from indirect interactions only, and so it is pruned.

TimeDelay-ARACNE

TimeDelay-ARACNE tries to extend ARACNE to time-course data, retrieving temporal statistical dependencies between gene expression profiles. TimeDelay-ARACNE is a three-step algorithm: it first detects, for all genes, the time point of the initial change in expression; second, there is a network construction step; then a network pruning step.

Step 1. The first step of the algorithm is aimed at the selection of the initial change of expression points, in order to flag the possible regulator genes [7]. In particular, let us consider the sequence of expression values of gene g_a: g_a^0, g_a^1, ..., g_a^t, .... We use two thresholds τ_up and τ_down, and the initial change of expression (IcE) is defined as

    IcE(g_a) = arg min_j { g_a^j / g_a^0 ≥ τ_up  or  g_a^j / g_a^0 ≤ τ_down }        (3)

The thresholds are chosen with τ_up = 1/τ_down. In all the reported experiments we used τ_up = 1.2 and consequently τ_down = 0.83. The quantity IcE(g_a) can be used to rule out implausible influence relations between genes: a gene g_a can influence gene g_b only if IcE(g_a) ≤ IcE(g_b) [7].
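A minimal sketch of this step follows; the function name is illustrative, and expression values are assumed to be strictly positive so that the ratios are well defined.

```python
def ice(profile, tau_up=1.2):
    """Initial change of expression, equation (3): index of the first
    time point whose ratio to the initial value leaves [tau_down, tau_up].
    Returns len(profile) if the expression never changes enough."""
    tau_down = 1.0 / tau_up          # the paper sets tau_up = 1/tau_down
    for j, gj in enumerate(profile):
        ratio = gj / profile[0]      # assumes strictly positive values
        if ratio >= tau_up or ratio <= tau_down:
            return j
    return len(profile)
```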
Step 2. The basic idea of the proposed algorithm is to detect time-delayed dependencies between the expression profiles by assuming a stationary Markov Random Field as the underlying probabilistic model [32]. In particular, the model should try to catch statistical dependencies between the activation of a given gene
g_a at time t and of another gene g_b at time t + κ, with IcE(g_a) ≤ IcE(g_b). Our assumption relies on the fact that the probabilistic properties of the network are determined by the joint distribution P(g_a, g_b^(κ)), where g_b^(κ) is the expression series of gene g_b shifted κ instants forward in time. For our problem we should therefore estimate both the stationary joint distribution P(g_a, g_b^(κ)) and, for each pair of genes, the best suited delay parameter κ. In order to solve these problems, as in the case of the ARACNE algorithm [30], copula-based estimation can help in simplifying the computations [33]. The idea of the copula transform is based on the assumption that a simple transformation can be applied to each variable in such a way that each transformed marginal variable has a uniform distribution. In practice, the transformation can be used as an initial step for each margin [34]. For stationary Markov models, Chen et al. [33] suggest using a standard kernel estimator for the evaluation of the marginal distributions after the copula transform. Here we use the simplest rank-based empirical copula [34], as other kinds of transformations did not produce any particular advantage for the considered problem. Starting from a kernel density estimate P̃(g_a, g_b^(κ)) of P, the algorithm identifies candidate interactions through the pairwise time-delayed mutual information between gene expression profiles, defined as:

    I^κ(g_a, g_b) = Σ_i P̃(g_a^i, g_b^{i+κ}) log [ P̃(g_a^i, g_b^{i+κ}) / ( P̃(g_a^i) P̃(g_b^{i+κ}) ) ]        (4)

Therefore, time-dependent MIs are calculated for each expression profile obtained by shifting genes one time step at a time, until the defined maximum time delay is reached (see Figure 1), by assuming a stationary shift-invariant distribution.
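The sketch below illustrates equation (4). Note that TimeDelay-ARACNE uses a Gaussian kernel density estimator after the copula transform; for brevity this stand-in uses the rank-based empirical copula followed by a coarse histogram estimate of the densities, which keeps the structure of the formula while simplifying the estimation.

```python
import numpy as np

def copula_transform(x):
    """Rank-based empirical copula: map values into (0, 1] by rank."""
    x = np.asarray(x, dtype=float)
    ranks = x.argsort().argsort() + 1          # 1..n (ties broken arbitrarily)
    return ranks / len(x)

def delayed_mi(ga, gb, kappa, bins=4):
    """Histogram estimate of I^kappa(g_a, g_b) over the pairs
    (g_a^i, g_b^{i+kappa}); requires kappa >= 1."""
    x = copula_transform(ga)[:-kappa]
    y = copula_transform(gb)[kappa:]
    pxy, _, _ = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)  # marginals
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())
```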
Fig. 1. TimeDelay-ARACNE pairwise time-delayed MI idea. The basic idea of TimeDelay-ARACNE is to represent the time shift in the connections rather than unrolling the activation of the nodes in time.
After this, we introduce the influence as the maximum time-dependent MI, I^κ(g_a, g_b), over all possible delays κ:

    Infl(g_a, g_b) = max_κ { I^κ(g_a, g_b) : κ = 1, 2, ..., with IcE(g_a) ≤ IcE(g_b) }        (5)

TimeDelay-ARACNE infers directed edges because the shifted gene values are posterior to the locked gene ones; so, if there is an edge, it goes from the locked-data gene to the other one. Shifting the profiles also makes the influence measure asymmetric:

    I^κ(X, Y) ≠ I^κ(Y, X)  for κ ≠ 0        (6)
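Combining the two previous sketches, the influence of equation (5) could be computed as follows (the maximum delay considered is an illustrative choice):

```python
def influence(ga, gb, max_delay=3):
    """Infl(g_a, g_b), equation (5): maximum delayed MI over kappa >= 1,
    subject to the IcE ordering constraint of Step 1."""
    if ice(ga) > ice(gb):
        return 0.0                 # g_a cannot influence g_b
    return max(delayed_mi(ga, gb, k) for k in range(1, max_delay + 1))
```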
In particular, if the measure Infl(g_a, g_b) is above the significance threshold, explained below, for a value of κ > 0, then the activation of gene g_a influences the activation of gene g_b at a later time. In other terms, there is a directed link "from" gene g_a "to" gene g_b; this is the way TimeDelay-ARACNE can recover directed edges. On the contrary, the ARACNE algorithm does not produce directed edges, as it corresponds to the case κ = 0, for which the mutual information is of course symmetric. Since we want to detect direct gene interactions, under the condition of a perfect choice of experimental time points the best time delay would be one, because it best captures direct interactions, while longer delays should ideally evidence more indirect interactions. In practice, however, time points are not sharply calibrated to detect such information, so considering a few different delays helps. With a too long time delay, one may see a correlation between gene a and gene c while missing gene b, which is regulated by a and regulates c; a too short time delay, on the other hand, can be insufficient to evidence the connection between gene a and gene b. By using a few different delays we try to overcome this problem. After the computation of the Infl() estimates, TimeDelay-ARACNE filters them using an appropriate threshold I_0, in order to retain only statistically significant edges. In particular, TimeDelay-ARACNE auto-sets the threshold by randomly permuting the dataset row values and calculating the average MI, μ_I, and standard deviation, σ_I, of these random values. The threshold is then set to I_0 = μ_I + ασ_I.

Step 3. In the last step TimeDelay-ARACNE uses the DPI twice. The first DPI application is used to split three-node cliques (triangles) at a single time delay, whereas the second is applied after the computation of the time delay between each pair of genes as in (5). Just as in the standard ARACNE algorithm, three-gene loops are still possible on the basis of a tolerance parameter: triangles are retained by the algorithm if the difference between the mutual information values of the three connections is below 15% (the same tolerance parameter adopted in the ARACNE algorithm).
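The following sketch shows one possible reading of the threshold auto-setting and of the DPI pruning with the 15% tolerance. The number of permutations, the value of alpha, and the undirected-MI bookkeeping are illustrative choices, not taken from the original implementation.

```python
import itertools
import numpy as np

def auto_threshold(profiles, alpha=3.0, n_perm=100, seed=0):
    """I0 = mu_I + alpha * sigma_I over MIs of randomly permuted profiles,
    which are independent by construction."""
    rng = np.random.default_rng(seed)
    mis = []
    for _ in range(n_perm):
        i, j = rng.choice(len(profiles), size=2, replace=False)
        a = rng.permutation(profiles[i])
        b = rng.permutation(profiles[j])
        mis.append(delayed_mi(a, b, kappa=1))
    return float(np.mean(mis) + alpha * np.std(mis))

def dpi_prune(mi, tolerance=0.15):
    """mi: dict mapping frozenset({a, b}) -> MI value. For each triangle,
    drop the weakest edge unless the three MIs differ by less than the
    tolerance, in which case the three-gene loop is kept."""
    genes = sorted({g for pair in mi for g in pair})
    drop = set()
    for a, b, c in itertools.combinations(genes, 3):
        tri = [frozenset(p) for p in ((a, b), (b, c), (a, c))]
        if all(t in mi for t in tri):
            tri.sort(key=lambda t: mi[t])
            if mi[tri[2]] > 0 and (mi[tri[2]] - mi[tri[0]]) / mi[tri[2]] >= tolerance:
                drop.add(tri[0])          # weakest edge: indirect by the DPI
    return {t: v for t, v in mi.items() if t not in drop}
```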
Results

Algorithm Evaluation

TimeDelay-ARACNE was evaluated first alone and then against ARACNE, dynamic Bayesian networks as implemented in the Banjo package [35] (a software application and framework for structure learning of static and dynamic Bayesian networks), and ODEs as implemented in the TSNI package [36] (Time Series Network Identification), with both simulated gene expression data and real gene expression data related to the yeast cell cycle [27] and the SOS signaling pathway in E. coli [29].

Synthetic Data

In order to quantitatively evaluate the performance of the algorithm reported here on a dataset with a simple and fair "gold standard", and also to evaluate how the
performance depends on the size of the problem at hand (network dimension, number of time points, and other variables), we generated different synthetic datasets. Our profile generation algorithm (see Methods) starts by creating a random graph which represents the statistical dependencies between expression profiles; the expression values are then generated according to a set of stochastic difference equations with random coefficients. The network generation algorithm works in such a way that each node can have zero (a "stimulator" node), one or two regulators. In addition to the random coefficients of the stochastic equations, a random Gaussian noise is added to each expression value. The performance is evaluated, for each network size, number of time points and amount of noise, by averaging the PPV, recall and F-score over a set of 20 runs with different randomly generated networks. The performance is measured in terms of:

– positive predictive value (PPV), the percentage of inferred connections which are correct:

    p = (number of true positives) / (number of true positives + number of false positives)        (7)

– recall, the percentage of true connections which are correctly inferred by the algorithm:

    r = (number of true positives) / (number of true positives + number of false negatives)        (8)

– F-score. The overall performance depends both on the positive predictive value and on the recall, as one can improve the former by increasing the detection threshold, but at the expense of the latter, and vice versa. The F-score is the harmonic mean of p and r and represents the compromise between them:

    F = 2(p · r) / (p + r)        (9)

Since TimeDelay-ARACNE always tries to infer the direction of the edges, the precision-recall figures take direction into account: an edge is considered a true positive only if it exists in the reference table and its direction is correct.
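A direction-aware computation of these scores could look as follows (an illustrative helper; networks are given as sets of directed (regulator, target) pairs):

```python
def directed_scores(inferred, reference):
    """PPV (7), recall (8) and F-score (9) for sets of (regulator, target)
    pairs; an edge counts as a true positive only if its direction matches."""
    tp = len(inferred & reference)
    p = tp / len(inferred) if inferred else 0.0
    r = tp / len(reference) if reference else 0.0
    f = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f
```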
1.1 Synthetic Data
Many simulated datasets were derived from the different networks analyzed. TD-ARACNE's performance in network reconstruction is evaluated in terms of accuracy and recall. Tables 1 and 2 show that TD-ARACNE's performance is directly correlated with the number of time points, but only weakly correlated with the number of network genes. In Table 3 we chose 5 small-to-medium synthetic networks, among those previously tested with TD-ARACNE, that differ in number of genes and time points. TD-ARACNE performs much better than its competitors; however, TSNI probably needs a perturbation, and Banjo needs a very high number of experiments (time points) [35].
Table 1. TD-ARACNE performance on synthetic data for varying numbers of network genes. The results appear to be only weakly correlated with the number of genes.

Genes  Time Points  Accuracy  Recall  F-score
 20        30         0.45     0.31    0.37
 30        30         0.51     0.31    0.39
 50        30         0.46     0.26    0.33
100        30         0.39     0.20    0.26
Table 2. TD-ARACNE performance on synthetic data for varying numbers of data points. The results show a direct correlation with the number of time points.

Genes  Time Points  Accuracy  Recall  F-score
 20        10         0.23     0.13    0.17
 20        20         0.41     0.25    0.31
 20        30         0.45     0.31    0.37
 20        50         0.61     0.43    0.50
 20       100         0.67     0.52    0.59
Table 3. Performance comparison of the algorithms on synthetic data: 5 small-to-medium synthetic networks, among those previously tested with TD-ARACNE, differing in number of genes and time points. The results, checked for accuracy (positive predictive value) and recall, confirm the good performance of TD-ARACNE.

                      TD-ARACNE          TSNI               Banjo
Genes  Time Points  Accuracy  Recall   Accuracy  Recall   Accuracy  Recall
 11        17         0.67     0.60     0.33      0.13      0.11     0.08
 20        20         0.41     0.25     0.29      0.10      0.11     0.08
 20       100         0.67     0.52     0.23     < 0.1     < 0.1    < 0.1
 40        30         0.49     0.28     0.38     < 0.1     < 0.1    < 0.1
 50        50         0.64     0.40     0.47     < 0.1     < 0.1    < 0.1
1.2 Microarray Expression Profiles
In order to test TD-ARACNE's performance on microarray expression profiles, we selected an eleven-gene network from the yeast S. cerevisiae cell cycle, precisely part of the G1 phase. The selected genes are: Cln3, Cdc28, Mbp1, Swi4, Clb6, Cdc6, Sic1, Swi6, Cln1, Cln2, Clb5. In Fig. 2 we report the network graphs reconstructed by TD-ARACNE, TSNI and Banjo, and the KEGG truth table. TSNI and Banjo are used with the default settings reported in their manuals. TD-ARACNE recovers many gene-gene edges with both good accuracy and good recall. We also tested our
Fig. 2. Yeast cell-cycle KEGG pathway and the networks reconstructed by the three algorithms. a) yeast cell-cycle KEGG pathway; b) TSNI inferred graph; c) TD-ARACNE inferred graph; d) Banjo inferred graph. TSNI and Banjo are used with the default settings reported in their manuals. TD-ARACNE recovers this yeast network topology better than the other algorithms.
Fig. 3. TD-ARACNE SOS predicted network and SOS pathway reference. a) E. coli SOS pathway; b) TSNI inferred graph; c) TD-ARACNE inferred graph; d) Banjo inferred graph. TSNI and Banjo are used with the default settings reported in their manuals. TD-ARACNE correctly identifies lexA as the hub and correctly recovers 5 edges; 1 edge has the wrong direction, and only 1 edge is missing, although an undirected relation is established (lexA → polB → uvrY). TD-ARACNE again recovers the E. coli SOS pathway better than the other algorithms.
algorithm using 8 genes from the E. coli SOS pathway. The selected genes are: uvrD, lexA, umuDC, recA, uvrA, uvrY, ruvA, polB. Fig. 3 reports the SOS pathway reconstructions by the three algorithms together with the reference from the literature [37]. TD-ARACNE recovers these network topologies better than the other algorithms.
Conclusions

The goal of TimeDelay-ARACNE is to recover gene time dependencies from time-course data, producing an oriented graph. To do this we introduced the time-delayed mutual information and the influence concepts. First tests on synthetic networks and on the yeast cell cycle and SOS pathway data give good results, but many other tests should be carried out. Particular attention must be paid to the data normalization step, because of the lack of a standard rule. Given the small performance loss observed with increasing gene numbers in this paper, the next development step will be the extension from small-to-medium networks to medium networks.
References

1. Gardner, T.S., Faith, J.J.: Reverse-engineering transcription control networks. Physics of Life Reviews 2(1), 65–88 (2005)
2. Hasty, J., McMillen, D., Isaacs, F., Collins, J.: Computational studies of gene regulatory networks: in numero molecular biology. Nature Reviews Genetics 2, 268–279 (2001)
3. Bansal, M., Belcastro, V., Ambesi-Impiombato, A., di Bernardo, D.: How to infer gene networks from expression profiles. Mol. Syst. Biol. 3, 78 (2007)
4. Kim, S., Kim, J., Cho, K.: Inferring gene regulatory networks from temporal expression profiles under time-delay and noise. Computational Biology and Chemistry 31, 239–245 (2007)
5. Neapolitan, R.: Learning Bayesian Networks. Prentice Hall, Upper Saddle River (2003)
6. Friedman, N., Linial, M., Nachman, I., Pe'er, D.: Using Bayesian networks to analyze expression data. Journal of Computational Biology 7, 601–620 (2000)
7. Zou, M., Conzen, S.D.: A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data. Bioinformatics 21(1), 71–79 (2005)
8. Schäfer, J., Strimmer, K.: An empirical Bayes approach to inferring large-scale gene association networks. Bioinformatics 21(6), 754–764 (2005)
9. Stark, E., Drori, R., Abeles, M.: Partial cross-correlation analysis resolves ambiguity in the encoding of multiple movement features. J. Neurophysiol. 95(3), 1966–1975 (2006)
10. Butte, A.J., Kohane, I.S.: Mutual information relevance networks: functional genomic clustering using pairwise entropy measurements. In: Pacific Symposium on Biocomputing, vol. 5, pp. 415–426 (2000)
11. Margolin, A.A., Nemenman, I., Basso, K., Wiggins, C., Stolovitzky, G., Dalla Favera, R., Califano, A.: ARACNE: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular context. BMC Bioinformatics 7(Suppl. I), S7 (2006)
12. Faith, J.J., Hayete, B., Thaden, T.T., Mogno, I., Wierzbowski, J., Cottarel, G., Kasif, S., Collins, J.J., Gardner, T.S.: Large-scale mapping and validation of Escherichia coli transcriptional regulation from a compendium of expression profiles. PLoS Biology 5(1), e8 (2007)
13. Meyer, P.E., Kontos, K., Lafitte, F., Bontempi, G.: Information-theoretic inference of large transcriptional regulatory networks. EURASIP Journal on Bioinformatics and Systems Biology 2007 (2007)
14. Shmulevich, I., Dougherty, E.R., Kim, S., Zhang, W.: Probabilistic Boolean networks: a rule-based uncertainty model for gene regulatory networks. Bioinformatics 19, i255–i263 (2002)
15. Schliep, A., Schönhuth, A., Steinhoff, C.: Using hidden Markov models to analyze gene expression time course data. Bioinformatics 18(2), 261–274 (2003)
16. Cui, Q., Liu, B., Jiang, T., Ma, S.: Characterizing the dynamic connectivity between genes by variable parameter regression and Kalman filtering based on temporal gene expression data. Bioinformatics 21(8), 1538–1541 (2005)
17. Bansal, M., Gatta, G., di Bernardo, D.: Inference of gene regulatory networks and compound mode of action from time course gene expression. Bioinformatics 22(7), 815–822 (2006)
18. Chuang, C., Jen, C., Chen, C., Shieh, G.: A pattern recognition approach to infer time-lagged genetic interactions. Bioinformatics 24(9), 1183–1190 (2008)
19. Opgen-Rhein, R., Strimmer, K.: Learning causal networks from systems biology time course data: an effective model selection procedure for the vector autoregressive process. BMC Bioinformatics 8, S3 (2007)
20. Li, X., Rao, S., Jiang, W., Li, C., Xiao, Y., Guo, Z., Zhang, Q., Wang, L., Du, L., Li, J., et al.: Discovery of time-delayed gene regulatory networks based on temporal gene expression profiling. BMC Bioinformatics 7(1), 26 (2006)
21. Zhao, W., Serpedin, E., Dougherty, E.: Inferring gene regulatory networks from time series data using the minimum description length principle. Bioinformatics 22(17), 21–29 (2006)
22. Waibel, A.: Modular construction of time-delay neural networks for speech recognition. Neural Computation 1(1), 39–46 (1989)
23. Lütkepohl, H.: New Introduction to Multiple Time Series Analysis. Springer, Heidelberg (2005)
24. Ramoni, M., Sebastiani, P., Kohane, I.: Cluster analysis of gene expression dynamics. Proceedings of the National Academy of Sciences 99(14), 9121–9126 (2002)
25. Holter, N., Maritan, A., Cieplak, M., Fedoroff, N., Banavar, J.: Dynamic modeling of gene expression data. Proceedings of the National Academy of Sciences 98(4), 1693–1698 (2001)
26. Gat-Viks, I., Tanay, A., Shamir, R.: Modeling and analysis of heterogeneous regulation in biological networks. In: Eskin, E., Workman, C. (eds.) RECOMB-WS 2004. LNCS (LNBI), vol. 3318, pp. 98–113. Springer, Heidelberg (2005)
27. Spellman, P.T., Sherlock, G., Zhang, M.Q., Iyer, V.R., Anders, K., Eisen, M.B., Brown, P.O., Botstein, D., Futcher, B.: Comprehensive identification of cell cycle-regulated genes of the yeast Saccharomyces cerevisiae by microarray hybridization. Molecular Biology of the Cell 9(12), 3273–3297 (1998)
28. Kanehisa, M., Goto, S.: KEGG: Kyoto Encyclopedia of Genes and Genomes. Nucleic Acids Res. 28(1), 27–30 (2000)
29. Ronen, M., Rosenberg, R., Shraiman, B.I., Alon, U.: Assigning numbers to the arrows: parameterizing a gene regulation network by using accurate expression kinetics. Proc. Natl. Acad. Sci. U.S.A. 99(16), 10555–10560 (2002)
30. Basso, K., Margolin, A.A., Stolovitzky, G., Klein, U., Dalla Favera, R., Califano, A.: Reverse engineering of regulatory networks in human B cells. Nature Genetics 37(4), 382–390 (2005)
31. Cover, T.M., Thomas, J.A.: Elements of Information Theory. Wiley-Interscience, Hoboken (1991)
32. Rue, H., Held, L.: Gaussian Markov Random Fields: Theory and Applications. CRC Press, Boca Raton (2005)
33. Chen, X., Fan, Y.: Estimation of copula-based semiparametric time series models. Journal of Econometrics (January 2006)
34. Nelsen, R.B.: An Introduction to Copulas. Springer, Heidelberg (2006)
35. Yu, J., Smith, V.A., Wang, P.P., Hartemink, A.J., Jarvis, E.D.: Advances to Bayesian network inference for generating causal networks from observational biological data. Bioinformatics 20(18), 3594–3603 (2004)
36. Bansal, M., Della Gatta, G., di Bernardo, D.: Inference of gene regulatory networks and compound mode of action from time course gene expression profiles. Bioinformatics 22(7), 815–822 (2006)
37. Saito, S., Aburatani, S., Horimoto, K.: Network evaluation from the consistency of the graph structure with the measured data. BMC Systems Biology 2(84), 1–14 (2008)
On the Use of Temporal Formal Logic to Model Gene Regulatory Networks

Gilles Bernot and Jean-Paul Comet

Laboratoire I3S, UMR 6070 UNS-CNRS, Algorithmes-Euclide-B, 2000, route des Lucioles, B.P. 121, F-06903 Sophia Antipolis Cedex
{bernot,comet}@unice.fr
Abstract. Modelling activities in molecular biology face the difficulty of making predictions that link molecular knowledge with cell phenotypes. Even when the interaction graph between molecules is known, the deduction of the cellular dynamics from this graph remains a cornerstone of the modelling activity; in particular, one has to face the parameter identification problem. This article is devoted to convincing the reader that computers can be used not only to simulate a model of the studied biological system but also to deduce the sets of parameter values that lead to a behaviour compatible with the biological knowledge (or hypotheses) about the dynamics. The approach is based on formal logic. It is illustrated in the discrete modelling framework for genetic regulatory networks due to René Thomas.
1 Introduction: Modelling Gene Regulatory Networks
Since the advent of molecular biology, biologists have faced increasing difficulties of prediction in linking molecular knowledge with cell phenotypes. The belief that the sequencing of genomes would rapidly open the door to personalized medicine was confronted first with the necessity of finely annotating genomes, then with the difficulty of deducing the structure(s) of proteins, then with the huge inventory of interactions that constitute biological networks, and so on. In the same way, we now have to face the fact that the knowledge of an interaction graph does not by itself make it possible to deduce the cellular dynamics. Indeed, interaction graphs are of a static nature, in the same way as genetic sequences, and it turns out that a large number of parameters, which are unknown and not easily measurable, control the dynamics of the interactions. Moreover, combined interactions (and especially feedback circuits in an interaction graph) result in several possible behaviours of the system, qualitatively very different from one another. Even with only two genes the situation is far from simple. Let us consider for example the interaction graph of Figure 1. This simple graph contains 2 circuits, whose intersection is the gene x. The left-hand-side circuit is said to be positive:
Fig. 1. A simple interaction graph containing a positive circuit and a negative one: x activates itself (left circuit), and x activates y while y inhibits x (right circuit)
– If, due to an external stress, the concentration level of the protein coded by gene x grows, then, above a certain threshold, it will favour the expression of x, producing in turn more x proteins even if the external stress has disappeared.
– On the contrary, if the concentration level of the protein of gene x is low, then it will not favour the expression of x, and the concentration level of the x protein can stay at a low level.

More generally, a positive circuit in a gene interaction network is a circuit that contains an even number of inhibitions, and a positive circuit favours the existence of 2 stable states [28,19,24,10,27,21] (respectively high level and low level of expression of x). The right-hand-side circuit is said to be negative because it contains an odd number of inhibitions:

– If the concentration level of the protein coded by gene x grows, then it will favour the expression of gene y, which will in turn inhibit gene x, resulting in a decrease of the x protein concentration.
– Conversely, a low concentration level of the x protein will decrease the expression level of gene y, which will become unable to inhibit x, resulting in a higher expression level of x… and the process starts again.

More generally, a negative circuit favours homeostasis: oscillations which can either be damped towards a unique stable state or sustained towards a limit cycle surrounding a unique unstable equilibrium state of the biological system. The two circuits of Figure 1 are consequently competitors: do we get one or two stable states? Shall we observe oscillations? And if so, do we get oscillations around a unique stable state or around two stable states? These predictions entirely depend on parameters that control the strength of the activation or inhibition arrows of the interaction graph (see Section 2.3). Most of the time, mathematical models are used to perform simulations on computers. Biological knowledge is encoded into ordinary differential equations (ODEs) for instance, and many parameters of the system of ODEs are a priori unknown, only a few of them being approximately known. Many simulations are performed, with different values for the parameters, and the behaviour observed in silico is compared with the known in vivo behaviour. This trial-and-error process makes it possible to propose a robust set of parameters
(among others) that is compatible with the biological observations. Several additional simulations of novel situations can then predict interesting behaviours and suggest new biological experiments. The goal of this article is to convince the reader that "brute force simulations" are not the only way to use a computer for gene regulatory networks. A computer can do more than computations: a computer manipulates symbols. Consequently, using deduction rules, a computer can perform proofs within adequate logics. One of the main advantages of logics is that they exhaustively manipulate sets of models, and exhaustively manage the subset of all models that satisfy a given set of properties. More precisely, a logic provides three well-established concepts:

– a syntax that defines the properties that can be expressed and manipulated (these properties are called formulas),
– a semantics that defines the models under consideration and the meaning of each formula with respect to these models,
– deduction rules or model checking, from which we get algorithms in order to prove whether a given model (or a set of models) satisfies a formula (or a finite set of formulas).

Logic can thus be used to manipulate, in a computer-aided manner, the set of all the models that satisfy a given set of known properties. Such an exhaustive approach avoids focusing on one model, which can be "tuned" ad libitum because it has many parameters and few equations; consequently it avoids focusing on a model which is not predictive. Logic also helps in studying the ability to refute a set of models with the current experimental capabilities, and it brings useful concepts such as observationally equivalent models, and so on. More generally, formal methods are helpful to assist biologists in their reasonings [4,7,22,1,18]. In this article, we provide a survey of the "SMBioNet method", whose purpose is to elucidate the behaviour of a gene interaction graph, to find all the possible parameter values (if any), and to suggest suitable experiments to validate or refute a given biological hypothesis. In the next section, we recall the approach of René Thomas [29] to obtain discrete models (with finite sets of possible states) for gene regulatory networks. We take as a pedagogical example the well-known lactose operon in E. coli. In Section 3, we show how temporal logic, and more precisely CTL, can be used to properly encode biological properties. In Section 4, we show how temporal logic can be used to guide the process of gene network elucidation.
2 Discrete Framework for Gene Regulatory Networks

2.1 A Classical Example
In this article, we consider the system of the lac operon, which plays a crucial role in the transport and metabolism of lactose in Escherichia coli and some other enteric bacteria [11]. Lactose is a sugar which can be used as a source of
carbon, mandatory for mitosis. This system allows the cell to switch on the production of the enzymes for carbon metabolism only if lactose is available and no other more readily available energy source (e.g., glucose) is present.

The Lactose Operon and Its Associated Biological System. The operon consists of three adjacent structural genes, a promoter, a terminator, and an operator. The lac operon is regulated by several factors, including the availability of glucose and of lactose. When lactose is absent from the environment, a repressor protein maintains the expression of the operon at its basal level. When lactose is present, it enters the cell thanks to a protein named permease, which is coded by the operon itself. Lactose molecules have affinity for the repressor proteins and form complexes with them, leading first to a decrease of the concentration of free repressor and thus to the activation of the operon. Consequently, the permease concentration increases, the lactose enters the cell more efficiently, and the concentration of free repressor is kept low. These interactions thus form a positive feedback loop on the intracellular lactose (left part of Figure 2). Another protein coded by the operon also plays a role in carbon metabolism: the enzyme galactosidase. It is responsible for the degradation of lactose, transforming it into a carbon source. Thus an increase of the intracellular lactose leads to an increase of the galactosidase, and then to a decrease of the intracellular lactose. These interactions form a negative feedback loop on the intracellular lactose (right part of Figure 2). Moreover, when glucose is present, it indirectly inhibits the transcription of the operon, via an indirect inhibition of cAMP (cyclic Adenosine MonoPhosphate), which forms with CAP (Catabolite gene Activator Protein) the complex responsible for the transcription of the operon. Thus this alternative pathway of carbon metabolism is inhibited. To summarize, the intracellular lactose is subject to two contradictory influences: the positive feedback loop attempts to keep the concentration of intracellular lactose high, whereas the negative feedback loop attempts to decrease this concentration, as shown in Figure 2.

Biological Questions Drive Modelling Abstractions. Modelling a biological system often means constructing a mathematical object that mimics the behaviours of the considered biological system. The model is often built according to a particular body of knowledge about the system (interactions, structure, published behaviours, hypotheses…), and this knowledge is often incomplete: for example, one may improperly focus on a given subsystem. Thus the modelling process presented in this paper proposes to construct models according to a particular point of view on the biological system. This point of view may turn out to be inconsistent, and the modelling process should be able to point out inconsistencies, leading one to reconsider the model(s). The construction of a model aims at studying a particular behaviour of the system. Each facet of the system corresponds to a particular partial model. When
Fig. 2. Schematic representation of the lac operon system in Escherichia coli
this facet of the system is understood, the modeller can throw away the current model in order to go further in the understanding of the system. In other words, the point of view of the modeller evolves during the modelling process, leading to refinements or rebuildings of the model. In such a perspective, as the construction of the model is based on a set of biological facts and hypotheses, it becomes possible to construct a model solely to test different hypotheses, to apprehend the consequences of these hypotheses and possibly to refute some of them. Last but not least, these models can be used to suggest experiments. For example, if biologists want to focus on the use of lactose, then we can adopt a modelling point of view that implicitly assumes the absence of glucose; glucose then no longer belongs to the model. Moreover, even if it may seem surprising, the lacI repressor can be suppressed, because the inhibition of cellular lactose on lacI and the inhibition of the lacI repressor on the operon are known to be always functional. These successive repressions can consequently be abstracted by a unique direct activation from cellular lactose to the operon. Depending on the studied hypotheses, some additional simplifications of the model are possible:

– If the hypothesis does not refer explicitly to the galactosidase, then we can abstract the galactosidase in a similar manner as for the repressor, see Figure 3.
– If the hypothesis does not refer explicitly to the permease, then we can abstract the permease as in Figure 4.

In the sequel of this article we will consider the interaction schema of Figure 4.

2.2 Gene Interaction Networks
Discretization. When modelling gene interactions, threshold phenomena observed in biology [16] constitute one of the key points for the comprehension of
Fig. 3. Abstraction of the lac operon system when focusing on intra-cellular lactose and permease
Fig. 4. Abstraction of the lac operon system when focusing on intra-cellular lactose and galactosidase
the behaviour of the system. Combined with the additional in vivo phenomenon of macromolecule degradation, the interaction curves take a sigmoidal shape (e.g., Hill functions) [6], see Figure 5. It then becomes clear that for each interaction two qualitative situations have to be considered: the regulation is effective if the concentration of the regulator is above the threshold of the sigmoid, and conversely, it is ineffective if the concentration of the regulator is below the threshold. When the product of a gene regulates more than one target, more than two situations have to be considered. For example, Figure 5 assumes that u is a gene product which acts positively on v and negatively on w; each curve shows the concentration of v (resp. w) with respect to the concentration of u, after a sufficient delay for u to act on v (resp. w). Obviously, three regions are relevant in the different levels of concentration of u:

– in the first region, u acts neither on v nor on w;
– in the second region, u acts on v but it still does not act on w;
– in the last region, u acts both on v and on w.

The sigmoid nature of the interactions shown in Fig. 5 is almost always verified, and it justifies this discretization of the concentration levels of u: three abstract levels (0, 1 and 2) emerge, corresponding to the three previous regions, and constitute the only relevant information from a qualitative point of view¹. The generalization is straightforward: if a gene acts on n targets, at most n+1 abstract regions are considered (from 0 to n). Fewer abstract levels are possible when two thresholds for two different targets are equal.

Gene regulatory graphs. A biological regulatory graph is defined as a labelled directed graph. A vertex represents a variable (which can abstract a gene and its protein, for instance) and has a boundary which is the maximal value of its discrete concentration level. Each directed edge u → v represents an action of
¹ Atypical behaviours at the thresholds can also be studied, see for example [8].
Fig. 5. Discretization of concentration of a regulator with 2 targets
u on v. The corresponding sigmoid can be increasing or decreasing (Figure 5), leading respectively to an activation or an inhibition. Thus each directed edge u → v is labelled with an integer threshold belonging to [1, b_u] and a sign: + for an activation of v and − for an inhibition of v.

Definition 1. A biological regulatory graph is a labelled directed graph G = (V, E) where:

– each vertex v of V, called a variable, is provided with a boundary b_v ∈ N* less than or equal to the out-degree of v in G, except when the out-degree is 0, in which case b_v = 1;
– each edge u → v of E is labelled with a couple (t, ε) where t, called the threshold, is an integer between 1 and b_u, and ε ∈ {−, +}.

The schematic Figure 4 is too informal to represent a biological regulatory graph: the edge modelling the auto-regulation of the intra-cellular lactose does not point at a variable but at an edge, and, moreover, the thresholds are missing. To construct a biological regulatory graph from Figure 4, we modify the edge modelling the auto-regulation of the intra-cellular lactose: its target becomes the intra-cellular lactose itself, see Figure 6. Moreover, three different rankings of the thresholds can be considered: cases A, B or C.

Gene regulatory networks. The discretization step allows one to consider only situations which are qualitatively different: if an abstract level changes, there exists at least one interaction which becomes effective or ineffective. To go further, one has to define what the possible evolutions of each variable are under some effective regulations. Assuming that u_1, ..., u_n have an influence on v (entering arrows u_i → v), toward which concentration level is v attracted? This level depends on the set of active regulators, which evolves with time: at a given time, only some of them pass their threshold. For example, in Figure 6-A, the level toward which the intra-cellular lactose is attracted only depends on the presence of extra-cellular lactose (i.e., a level
Fig. 6. Three different biological regulatory graphs of the lactose operon system (note the different values of the interaction thresholds)
greater than or equal to 1), the presence of itself (i.e., a level greater than or equal to 2) and the absence of galactosidase (i.e., a level strictly less than 1). Indeed, the absence of an inhibitor is equivalent to the presence of an activator, owing to the symmetry of sigmoids. These "target" concentration levels are defined by parameters, denoted k_{v,ω}, where ω is a subset of regulators. Biological regulatory networks are biological regulatory graphs (Definition 1) together with these parameters k_{v,ω}.

Definition 2. A biological regulatory network is a couple R = (G, K) where G = (V, E) is a biological regulatory graph, and K = {k_{v,ω}} is a family of integers such that:

– v belongs to V;
– ω is a subset of G⁻¹(v), the set of predecessors of v in the graph G, and is called a set of resources of v;
– 0 ≤ k_{v,ω} ≤ b_v.

Intuitively, the parameter k_{v,ω} describes the behaviour of the variable v when all variables of ω act as resources of v (a resource being the presence of an activator or the absence of an inhibitor). Most of the time, we consider an additional monotony condition called the Snoussi condition [25]:

    ∀v ∈ V, ∀ω, ω′ ⊆ G⁻¹(v), ω ⊆ ω′ ⇒ k_{v,ω} ≤ k_{v,ω′}

In other words, the values of the parameters never contradict the quantity of resources. For the running example, the variable intra-cellular lactose, denoted intra in the sequel, is regulated by two activators and by one inhibitor. This variable can thus be regulated by 2³ = 8 different subsets of its inhibitors/activators. In the same way, 2 parameters have to be given for the variable galactosidase, denoted g in the sequel. To sum up, 10 parameters have to be given:

    K = { k_{g,∅}, k_{g,{intra}}, k_{intra,∅}, k_{intra,{extra}}, k_{intra,{g}}, k_{intra,{extra,g}},
          k_{intra,{intra}}, k_{intra,{intra,extra}}, k_{intra,{intra,g}}, k_{intra,{intra,extra,g}} }

These parameters control the dynamics of the model since they define how targets evolve according to their current sets of resources.
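As an illustration, the network of Figure 6-A and its parameters can be encoded as plain dictionaries in Python, and the Snoussi condition checked mechanically. The encoding below is an assumption of this sketch; in particular, the four parameters whose resource set does not contain extra are not given in the text and are set to 0 here purely for illustration.

```python
from itertools import combinations

# predecessors of each variable with (threshold, sign), as in Figure 6-A
GRAPH = {
    "intra": {"extra": (1, "+"), "intra": (2, "+"), "g": (1, "-")},
    "g":     {"intra": (1, "+")},
}
BOUND = {"extra": 1, "intra": 2, "g": 1}

K = {
    ("g", frozenset()): 0,
    ("g", frozenset({"intra"})): 1,
    ("intra", frozenset()): 0,                      # illustrative value
    ("intra", frozenset({"extra"})): 0,
    ("intra", frozenset({"intra"})): 0,             # illustrative value
    ("intra", frozenset({"g"})): 0,                 # illustrative value
    ("intra", frozenset({"extra", "intra"})): 1,
    ("intra", frozenset({"extra", "g"})): 2,
    ("intra", frozenset({"intra", "g"})): 0,        # illustrative value
    ("intra", frozenset({"extra", "intra", "g"})): 2,
}

def snoussi_ok(graph, k):
    """Check: omega included in omega' implies k_{v,omega} <= k_{v,omega'}."""
    for v, preds in graph.items():
        subsets = [frozenset(s) for r in range(len(preds) + 1)
                   for s in combinations(preds, r)]
        for w1 in subsets:
            for w2 in subsets:
                if w1 <= w2 and k[(v, w1)] > k[(v, w2)]:
                    return False
    return True

assert snoussi_ok(GRAPH, K)
```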
2.3 Dynamics of Gene Networks: State Graphs
At a given time, each variable of a regulatory network has a unique concentration level. The state of the biological regulatory network is the vector of concentration levels (the coordinate associated with any variable v is an integer belonging to the interval from 0 to the boundary bv). According to a given state, the resources of a variable are the regulators that help the variable to be expressed. The set of resources of a variable is constituted by all activators whose level is above the threshold of activation and all the inhibitors whose level is below the threshold. Resources are used to determine the evolution of the system. At a given state, each variable is attracted by the corresponding parameter kv,ω where ω is its set of resources. The function that associates with each state the vector formed by the corresponding kv,ω is an endomorphism of the state space. Table 1 defines the endomorphism for the example of Figure 6-A when extra-cellular lactose is present (parameter values are given in the caption).

Table 1. Partial state table for Figure 6-A. Only states with extra-cellular lactose are considered. Values of parameters are: kintra,{extra} = 0, kintra,{extra,intra} = 1, kintra,{extra,g} = 2, kintra,{extra,intra,g} = 2, kg,∅ = 0 and kg,{intra} = 1.

extra | intra | g | ω(intra)          | ω(g)    | kintra,ω(intra) | kg,ω(g)
  1   |   0   | 0 | {extra, g}        | {}      |        2        |    0
  1   |   0   | 1 | {extra}           | {}      |        0        |    0
  1   |   1   | 0 | {extra, g}        | {intra} |        2        |    1
  1   |   1   | 1 | {extra}           | {intra} |        0        |    1
  1   |   2   | 0 | {extra, intra, g} | {intra} |        2        |    1
  1   |   2   | 1 | {extra, intra}    | {intra} |        1        |    1
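The resource computation is entirely mechanical, and it may help to see it spelled out. The sketch below recomputes the ω columns of Table 1 for the graph of Figure 6-A; the edge encoding, the dictionary K of parameter values (taken from the caption of Table 1) and the function names are ours, not the authors'.

    # Graph of Figure 6-A: one (source, target, threshold, sign) tuple per edge.
    EDGES = [
        ("extra", "intra", 1, "+"),
        ("intra", "intra", 2, "+"),
        ("g",     "intra", 1, "-"),
        ("intra", "g",     1, "+"),
    ]

    # Parameter values from the caption of Table 1 (extra is kept at level 1,
    # so only resource sets containing extra are needed for intra).
    K = {
        ("intra", frozenset({"extra"})):               0,
        ("intra", frozenset({"extra", "intra"})):      1,
        ("intra", frozenset({"extra", "g"})):          2,
        ("intra", frozenset({"extra", "intra", "g"})): 2,
        ("g", frozenset()):          0,
        ("g", frozenset({"intra"})): 1,
    }

    def resources(v, state):
        """Regulators of v that currently help v: activators at or above
        their threshold, inhibitors strictly below it."""
        return frozenset(u for (u, w, t, sign) in EDGES
                         if w == v and (state[u] >= t) == (sign == "+"))

    # Recompute the rows of Table 1.
    for intra in range(3):
        for g in range(2):
            s = {"extra": 1, "intra": intra, "g": g}
            wi, wg = resources("intra", s), resources("g", s)
            print((1, intra, g), sorted(wi), sorted(wg),
                  K[("intra", wi)], K[("g", wg)])

Each printed row reproduces the corresponding line of Table 1.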
Such a table can be represented by a synchronous state graph in which each state has a unique successor: the state towards which the system is attracted, see the left part of Figure 7, where extra is supposed to be equal to 1. In this example, when the system is in the state (1, 1, 0), it is attracted towards the state (1, 2, 1): variables intra and g are both attracted toward different values. The probability that both variables pass through their respective thresholds at the same time is negligible in vivo, but we do not know which one will be passed first. Accordingly we replace such a diagonal transition by the collection of the transitions which modify only one of the involved variables at a time. For example, transition (1, 1, 0) → (1, 2, 1) is replaced by the transitions (1, 1, 0) → (1, 2, 0) and (1, 1, 0) → (1, 1, 1): the first one corresponds to the case where the variable intra evolves first whereas the second one corresponds to the case where the variable g evolves first, see the right part of Figure 7. An arrow of length greater than or equal to 2 would imply a variable which increases its concentration level abruptly and jumps several thresholds. For our example, when the system is in the state (1, 0, 0), it is attracted towards the state (1, 2, 0).
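The replacement rule just described is easy to mechanize. Continuing the previous sketch (and reusing its EDGES, K and resources), the following hypothetical code builds the asynchronous successors of each state shown in Figure 7; the helper names are ours.

    from itertools import product

    def step(a, b):
        """Move the level a one unit toward the attractor b."""
        return a + 1 if b > a else a - 1 if b < a else a

    def async_successors(state):
        """Asynchronous successors: change one attracted variable at a time."""
        succs = []
        for v in ("intra", "g"):        # extra is an input, kept constant
            attractor = K[(v, resources(v, state))]
            nv = step(state[v], attractor)
            if nv != state[v]:
                s2 = dict(state)
                s2[v] = nv
                succs.append(s2)
        return succs or [state]         # a stable state loops on itself

    for intra, g in product(range(3), range(2)):
        s = {"extra": 1, "intra": intra, "g": g}
        print((intra, g), "->",
              [(t["intra"], t["g"]) for t in async_successors(s)])

For the state (1, 1, 0) this prints the two transitions discussed above; for (1, 0, 0) it prints the single truncated transition to (1, 1, 0).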
Since the concentration varies continuously, independently of whether it varies rapidly or not, real transitions should only link neighbouring states. Thus, transition (1, 0, 0) → (1, 2, 0) is replaced by the transition (1, 0, 0) → (1, 1, 0), see the right part of Figure 7.

[Figure: the parameter values kintra,{extra} = 0, kintra,{extra,intra} = 1, kintra,{extra,g} = 2, kintra,{extra,intra,g} = 2, kg,∅ = 0, kg,{intra} = 1, together with the resulting synchronous state graph (left) and asynchronous state graph (right) over intra ∈ {0, 1, 2} and g ∈ {0, 1}.]
Fig. 7. From the values of parameters to the asynchronous state graph
The value of the parameter kv,ω (where ω is the set of resources of v at the current state η) indicates how the expression level of v can evolve from the state η. It can increase (respectively decrease) if the parameter value is greater (respectively smaller) than the current level of the variable v. The expression level must stay constant if both values are equal. Formally:

Definition 3. The asynchronous state graph of the biological regulatory network R = (G, K) is defined as follows:
– the set of vertices is the set of states Πv∈V [0, bv];
– there is a transition from the state n = (n1, . . . , n|V|) to m = (m1, . . . , m|V|) iff either
  • m = n and ∀i ∈ [1, |V|], ni = (ni ⇝ ki,ωi(n)), or
  • ∃!i such that mi ≠ ni, with mi = (ni ⇝ ki,ωi(n)) and mj = nj for all j ≠ i,
where ωv(n) represents the set of resources of variable v at state n and where (a ⇝ b) = a + 1 if b > a, (a ⇝ b) = a − 1 if b < a and (a ⇝ b) = a if b = a.

Transitions have some biological interpretations. For the current example, horizontal left-to-right transitions correspond to the entering of extra-cellular lactose into the cell, whereas horizontal right-to-left transitions correspond to the breakdown of lactose into glucose. Unfortunately, this asynchronous state graph has been built from the knowledge of the different parameters. In fact, usually no information about these parameters is available, and it is necessary to consider all possible values. The gene regulatory graphs of Figures 6-A and 6-B give rise to 2^3 = 8 parameters for intra-cellular lactose (intra) and 2^1 = 2 parameters for galactosidase (g). The intra-parameters can take 3 possible values (from 0 to 2), the g-parameters can take 2 possible values (0 or 1). So, there are 3^(2^3) × 2^(2^1) = 26244 different regulatory networks associated with the regulatory graph of Figure 6-A or 6-B.
More generally, each gene of a regulatory graph contributes (out + 1)^(2^in) different parameter combinations, where in and out are its in- and out-degrees in the graph, and gene contributions are multiplicative. The total number of different regulatory networks denoted by Figure 6 is thus 26244 + 26244 + 2^(2^3) × 2^(2^1) = 53512, because Figure 6-C assumes that the two outgoing arrows of intra-cellular lactose share the same threshold. Let us nevertheless note that the number of different asynchronous state graphs can be less than the number of parameterizations (two different parameterizations can lead to the same state graph). For example, the parameterization deduced from the one of Figure 7 by replacing the value of kintra,{extra,intra} by 0 leads to the same dynamics. Anyway, the number of parameterizations depends on the number of interactions pointing at each variable following a double exponential.
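As a quick sanity check of these counts (our arithmetic, not the authors' code):

    # Parameter-count check for Figure 6, with degrees read off the graphs.
    n_AB = 3 ** (2 ** 3) * 2 ** (2 ** 1)   # graphs 6-A, 6-B: 6561 * 4 = 26244
    n_C  = 2 ** (2 ** 3) * 2 ** (2 ** 1)   # graph 6-C (intra bounded by 1): 1024
    print(n_AB, n_C, 2 * n_AB + n_C)       # 26244 1024 53512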
2.4 Introducing Multiplexes
In order to decrease the number of parameterizations, other structural knowledge can be useful. For example, let us consider a variable c that has 2 different activators a and b. Without further information, we have to consider 2^2 = 4 different parameters associated with the four situations: what is the evolution of c when both regulators are absent, when only a is present, when only b is present, and when both are present. Sometimes additional structural knowledge can be derived from molecular biology: for example, it could be known that the regulation takes place only when both are present, because the effective regulator is in fact the complex of a and b. In such a case, the four parameters reduce to only two: what is the evolution when the complex is present (both regulators are present) and when the complex is absent (at least one of the regulators is absent). Such information reduces the number of parameters, and drastically decreases the number of parameterizations to consider. Multiplexes allow the introduction of such information [2]. They provide a slight extension of R. Thomas' modelling, with explicit information about cooperative, concurrent or more complex molecular interactions [14,15]. Intuitively, a regulatory graph with multiplexes is a graph with two kinds of vertices: variables account for the first kind (they constitute the set V of nodes, V for variables) and, moreover, each piece of information about cooperative, concurrent or more complex molecular interactions also gives rise to a vertex (they constitute the set M of nodes, M for multiplexes). Information about molecular interactions is coded into a logical formula that explains when the interaction takes place. In the previous example of complexation, the interaction takes place only when both regulators are present, that is when (a ≥ sa) ∧ (b ≥ sb).

Definition 4. A gene regulatory graph with multiplexes is a tuple G = (V, M, EV, EM) such that:
1. (V ∪ M, EV ∪ EM) constitutes a (labelled) directed graph whose set of nodes is V ∪ M and whose set of edges is EV ∪ EM, with EV ⊂ V × ℕ × M and EM ⊂ M × (V ∪ M).
2. V and M are disjoint finite sets. Nodes of V are called variables and nodes of M are called multiplexes. An edge (v, s, m) of EV is denoted (v →s m), where s is called the threshold.
3. Each variable v of V is labelled with a positive integer bv called the bound of v.
4. Each multiplex m of M is labelled with a formula belonging to the language Lm inductively defined by:
– if (v →s m) ∈ EV, then vs is an atom of Lm, and if (m′ → m) ∈ EM then m′ is an atom of Lm;
– if φ and ψ belong to Lm then ¬φ, (φ ∧ ψ), (φ ∨ ψ) and (φ ⇒ ψ) also belong to Lm.
5. All cycles of the underlying graph (V ∪ M, EV ∪ EM) contain at least one node belonging to V. (This condition is mandatory for the definition of dynamics, Definition 6.)

Let us remark that point 1 of the previous definition separates two sets of edges. On one hand, the first set is made of edges starting from a variable: their targets are multiplexes, and they are labelled by a threshold that determines the atoms used in the target multiplexes. On the other hand, the second set is made of edges starting from a multiplex: their targets can be either a variable (the target of the complex interaction) or a multiplex (the logical formula of the source multiplex plays the role of an atom in the language of the target multiplex). We now define a regulatory network with multiplexes as a regulatory graph with multiplexes provided with a family of parameters which define the evolutions of the system according to the subset of predecessors (which are now multiplexes instead of variables).

Definition 5. A gene regulatory network with multiplexes is a couple (G, K) where
– G = (V, M, EV, EM) is a regulatory graph with multiplexes;
– K = {kv,ω} is a family of parameters indexed by v ∈ V and ω ⊂ G−1(v) such that all kv,ω are integers and 0 ≤ kv,ω ≤ bv.

As in the classical framework, the parameter kv,ω defines how the variable v evolves when the set of effective interactions on v is ω ⊂ G−1(v). This set of effective interactions, named set of resources, is defined inductively for each variable v and each state η:

Definition 6. Given a regulatory graph with multiplexes G = (V, M, EV, EM) and a state η of G, the set of resources of a variable v ∈ V for the state η is the set of multiplexes m of G−1(v) such that the formula ϕm of the multiplex m is satisfied. The interpretation of ϕm in m is inductively defined by:
– If ϕm is reduced to an atom vs with v ∈ G−1(m), then ϕm is satisfied iff v ≥ s according to the state η.
– If ϕm is reduced to an atom m′ ∈ M of G−1(m), then ϕm is satisfied iff ϕm′ of m′ is satisfied.
– If ϕm ≡ ψ1 ∧ ψ2 then ϕm is satisfied iff ψ1 and ψ2 are satisfied; and we proceed similarly for all other connectives.

We note ρ(v, η) the set of resources of v for the state η. Definition 3 of the asynchronous state graph remains valid for gene regulatory graphs with multiplexes: the set of vertices does not change, nor does the definition of transitions; the only difference resides in the definition of the set of resources: ωi(n) has to be replaced by ρ(v, n). The contribution of multiplexes is thus simply to decrease the number of parameters. Introducing a multiplex amounts to specifying how the predecessors of the multiplex cooperate, and it allows one to associate a single parameter whatever the number of predecessors. For example, if the cooperation of three regulators on a common target is well known, then without multiplexes one needs 2^3 = 8 parameters to describe the evolutions of the target in each situation, whereas when considering multiplexes, only 2 are mandatory: one to describe the evolution of the target when the cooperation of regulators takes place, and another to describe the evolution of the target when the cooperation does not take place.
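A small sketch may make Definition 6 concrete. Below, multiplex formulas are encoded as nested tuples and evaluated at a state; the encoding and names are hypothetical, not taken from the paper.

    # Atoms ("atom", v, s) stand for v_s, i.e. "v >= s"; ("mux", name) refers
    # to another multiplex whose formula plays the role of an atom.
    def satisfied(phi, state, multiplexes):
        op = phi[0]
        if op == "atom":
            _, v, s = phi
            return state[v] >= s
        if op == "mux":
            return satisfied(multiplexes[phi[1]], state, multiplexes)
        if op == "not":
            return not satisfied(phi[1], state, multiplexes)
        if op == "and":
            return all(satisfied(p, state, multiplexes) for p in phi[1:])
        if op == "or":
            return any(satisfied(p, state, multiplexes) for p in phi[1:])
        raise ValueError("unknown connective: %s" % op)

    # The complexation example: regulation acts only when a and b are present.
    M = {"ab_complex": ("and", ("atom", "a", 1), ("atom", "b", 1))}
    print(satisfied(M["ab_complex"], {"a": 1, "b": 0}, M))   # False
    print(satisfied(M["ab_complex"], {"a": 1, "b": 1}, M))   # True

The set of resources ρ(v, η) is then simply the set of predecessor multiplexes of v whose formula evaluates to True at η.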
3 Temporal Logic and Model Checking for Biology
Since the parameters are generally not measurable in vivo, finding a suitable class of parameters constitutes a major issue of the modelling activity. This reverse engineering problem is a parameter identification problem, since the structure of the interactions is supposed known; see for example [13] for such a problem in the differential framework. In our discrete framework, this problem is simpler because of the finite number of parameterizations to consider. Nevertheless this number is so enormous that a computer aided method is needed to help biologists go further in the comprehension of the biological system under study. Moreover, when studying a system, the biological knowledge arrives in an incremental manner. It would be appreciable to approach the problem in such a way that when new knowledge has to be taken into account, previous work is not put into question. In other words, one would like to handle not only one possible model of the system, but the exhaustive set of models which are, at a given time, acceptable according to the current knowledge. Biological knowledge about the behaviour of the system, extracted from the literature, can be seen as constraints on the set of possible dynamics: a model is satisfactory only if its dynamics are compatible with the biological behaviours reported in the literature. In a similar way, when a new wet experiment leads to new knowledge about the behaviour of the system, it can also be used as a new constraint which filters the set of possible models: only the subset of models
whose dynamics are compatible with this new experiment has to be conserved for further investigation. It becomes clear that a computer aided approach has to be developed in order to manipulate the different pieces of knowledge about the dynamics of the system and to make use of them to automatically handle the set of compatible parameterizations. Logic precisely allows the definition of the sets of such models. It constitutes a suitable approach for addressing such a combinatorial problem.
3.1 The Computation Tree Logic (CTL)
Computation tree logic (CTL) is a branching-time logic, in which the time structure is like a tree: the future is not determined; there are different paths into the future. In other words, at some points in time, it is possible to choose among a set of different evolutions. CTL is generally used in formal verification of software or hardware, especially when the artificial system is supposed to control systems where the consequences of a bug can lead to tragedies (public transportation, nuclear power plants, ...). For such a goal, software applications known as model checkers are useful: they determine whether a given model of a system satisfies a given temporal property written in CTL.

Syntax of CTL. The language of well-formed CTL formulae is generated by the following recursive definition:

φ ::= ⊥ | ⊤ | p                                              (atoms)
    | (¬φ) | (φ ∧ φ) | (φ ∨ φ) | (φ ⇒ φ) | (φ ⇔ φ)           (usual connectives)
    | AXφ | EXφ | AFφ | EFφ | AGφ | EGφ | A[φUφ] | E[φUφ]    (temporal connectives)

where ⊥ and ⊤ code for False and True, p denotes a particular atomic formula, and φ is another well-formed CTL formula. In the context of the R. Thomas theory for genetic regulatory networks, the atoms can be of the form (a ∝ n) where
– a is a variable of the system,
– ∝ is an operator in {≤, ≥},
– n is an integer belonging to the interval [0, ba].

Semantics of CTL. This definition uses the usual connectives (¬, ∧, ∨, ⇒, ⇔) as well as temporal modalities, which are pairs of symbols: the first element of the pair is A or E and the second belongs to {X, F, G, U}; their meanings are given below.

First letter:
– A: for All path choices
– E: for at least one path choice (Exists)
Second letter:
– X: neXt state
– F: some Future state
– G: all future states (Globally)
– U: Until
[Figure: schematic computation trees illustrating EX(ϕ), AX(ϕ), EF(ϕ), AF(ϕ), EG(ϕ), AG(ϕ), E[ϕUψ] and A[ϕUψ].]
Fig. 8. Semantics of CTL formulas. Dashed arrows (resp. arrows with empty head) point to states where ϕ (resp. ψ) is satisfied.
Figure 8 illustrates each temporal modality:
– EX(ϕ) is true at the current state if there exists a successor state where ϕ is true.
– AX(ϕ) is true at the current state if ϕ is true in all successor states.
– EF(ϕ) is true at the current state if there exists a path leading to a state where ϕ is true.
– AF(ϕ) is true at the current state if all paths lead to a state where ϕ is true.
– EG(ϕ) is true at the current state if there exists a path starting from the current state all of whose states satisfy the formula ϕ.
– AG(ϕ) is true at the current state if all states of all paths starting from the current state satisfy the formula ϕ.
– E[ϕUψ] is true at the current state if there exists a path starting from the current state leading to a state where ψ is true, passing only through states satisfying ϕ.
– A[ϕUψ] is true at the current state if all paths starting from the current state lead to a state where ψ is true, passing only through states satisfying ϕ.

For example, AX(intra ≥ 1) means that in all next states accessible from the current state in the asynchronous state graph, the concentration level of intra is greater than or equal to 1. Note that this formula is false in the asynchronous state graph of Figure 7 if the initial state is (1, 1) or (0, 1), and it is true for all other initial states. The formula EG(g = 0) means that there exists at least one path starting from the current state along which the concentration of g is constantly equal to 0. In Figure 7 no state satisfies this formula because, from each state, all paths eventually reach a state where the concentration of g is equal to 1.
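These modalities can be evaluated on a finite state graph by standard fixpoint computations. The sketch below is ours, not the authors' tool: it takes the set S of states and a successor map succ (for instance the asynchronous graph built earlier, in which every state has at least one successor) and computes satisfaction sets; EF, AF and AG are derived from EU and EG by the usual dualities.

    def sat_EX(S, succ, p):
        """States with at least one successor in the set p."""
        return {s for s in S if any(t in p for t in succ[s])}

    def sat_EU(S, succ, p, q):
        """Least fixpoint for E[p U q]."""
        Z = set(q)
        while True:
            new = {s for s in p if s not in Z and any(t in Z for t in succ[s])}
            if not new:
                return Z
            Z |= new

    def sat_EG(S, succ, p):
        """Greatest fixpoint for EG(p): p holds along some infinite path."""
        Z = set(p)
        while True:
            keep = {s for s in Z if any(t in Z for t in succ[s])}
            if keep == Z:
                return Z
            Z = keep

    # Derived: EF p = E[True U p]; AF p = not EG(not p); AG p = not EF(not p).
    def sat_EF(S, succ, p): return sat_EU(S, succ, S, p)
    def sat_AF(S, succ, p): return S - sat_EG(S, succ, S - p)
    def sat_AG(S, succ, p): return S - sat_EF(S, succ, S - p)

On the graph of Figure 7, taking p = {states with g = 0} makes sat_EG return the empty set, in agreement with the discussion above.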
We say that a model or a state graph satisfies a CTL formula if each state satisfies the formula.
3.2 CTL to Encode Biological Properties
CTL formulas are useful to express temporal properties of the biological system. Once such properties have been elaborated, a model of the biological system is acceptable only if its state graph satisfies the CTL formulas; otherwise, it is not considered any more. The first temporal property focuses on the functionality of the entering of lactose into the cell: when external lactose is constantly present in the environment, then after a sufficiently long time, intra-cellular lactose will increase and will at least cross its first threshold. This first property can be written in CTL by the formula ϕ1 as follows:

ϕ1 ≡ AG(extra = 1) =⇒ AF(intra > 0)

This CTL formula is not satisfied by the state graph of Figure 9 (where only the plane extra = 1 is drawn) because from the state (0, 0) it is not possible to increase the abstract level of intra. Let us remark that the state graph of Figure 7 does satisfy it, because each path leads to a state where intra is present.
[Figure: the parameter values kintra,{extra} = 0, kintra,{extra,intra} = 2, kintra,{extra,g} = 0, kintra,{extra,intra,g} = 2, kg,∅ = 0, kg,{intra} = 1, together with the resulting synchronous state graph (left) and asynchronous state graph (right) over intra ∈ {0, 1, 2} and g ∈ {0, 1}.]
Fig. 9. From other values of parameters to the asynchronous state graph
The second temporal property focuses on the production of galactosidase: when external lactose is constantly present in the environment, then after a sufficiently long time, β-galactosidase will be sufficiently produced to degrade lactose, and it will then stay present forever. The translation of this property into CTL is:

ϕ2 ≡ AG(extra = 1) =⇒ AF(AG(g = 1))

This CTL formula is not satisfied by the state graphs of Figures 7 and 9: in the first case, each path leads to a state where g = 0, whereas in the second case, from the state (0, 0), it is not possible to increase the abstract level of g. Similarly, when external lactose is constantly absent from the environment, after a sufficiently long time, β-galactosidase will disappear. The CTL formula expressing this property is:

ϕ3 ≡ AG(extra = 0) =⇒ AF(g = 0)
The fourth formula we consider states that when the environment is rich in lactose, the degradation of intra-cellular lactose to produce carbon is not sufficient to entirely consume the intra-cellular lactose. In other words, the permease lets lactose enter rapidly enough to balance the consumption of intra-cellular lactose. To express this property, we focus on the case where extra-cellular lactose, intra-cellular lactose and galactosidase are all present. In such a configuration, the intra-cellular lactose will never reach the basal level equal to 0. The CTL formula coding for this property is then written as follows:

ϕ4 ≡ (AG(extra = 1) ∧ (intra > 0) ∧ (g = 1)) =⇒ AG(intra > 0)
This CTL formula is not satisfied by the state graph of Figure 7 because from each state there is a path that leads to the total degradation of intra. The state graph of Figure 9 does not satisfy it either, because each path starting from a state where intra = 1 leads to a total degradation of intra. The two last temporal properties focus on the functionality of the lactose pathway, even when the environment is no longer rich in lactose. On one hand, when intra-cellular lactose is present at level 1 but extra-cellular lactose is absent, the pathway leads to a state where intra-cellular lactose has been entirely consumed, without passing through a state where intra is at its highest level (the only source of intra-cellular lactose is the extra-cellular one). Moreover, when this state is reached, there is no way to increase the concentration of intra-cellular lactose: its level then remains at 0.

ϕ5 ≡ AG(extra = 0) ∧ (intra = 1) =⇒ A[(intra = 1) U AG(intra = 0)]

On the other hand, when intra-cellular lactose is present at level 2 but extra-cellular lactose is absent, the pathway leads to a state where intra-cellular lactose has decreased to level 1.

ϕ6 ≡ AG(extra = 0) ∧ (intra = 2) =⇒ AF(intra = 1)

These two previous CTL formulas cannot be checked in Figures 7 and 9 because the formulas concern the dynamics of the system when extra is absent, whereas the figures focus on the dynamics when extra is present. The formal language CTL has been developed in a computer science framework, and is therefore not dedicated to gene regulatory networks. For example, it is not possible to express in CTL that the dynamic system presents n different stable states. Nevertheless, if we know a frontier between two stable behaviours, it becomes possible to express it in CTL. Let us consider a system where, if variable a is at a level less than 2, the system converges to one stable state, and if variable a is at a level greater than or equal to 2, the system converges to another stable state. This property can be translated into the formula:
∧
((a >= 2) ⇒ AG(a >= 2))
Even if in some cases the translation of a property is tricky, in practice, CTL is sufficient to express the majority of biological properties useful for gene regulatory networks.
Let us now emphasize that the CTL language makes the link between the biological experiments and the models that are supposed to represent the behaviours of the studied biological system. Indeed,
– a CTL formula can be confronted with a model: the traces of the dynamics of the model verify the temporal properties expressed through the CTL formula;
– a CTL formula can also be confronted with traces observed through a wet experiment: either the wet experiment is a realisation of the temporal property coded by the CTL formula, or it accounts for a counter-example.
So, the modelling activity has to focus on the manner of selecting models that satisfy the CTL formulas representing dynamic knowledge about the system, extracted from experiments.
4 Computer Aided Elaboration of Formal Models
4.1 The Landscape
The subject of this article is to present our computer aided method to accompany the process of discovery in biology, by using formal modelling in order to make valuable predictions. According to this point of view, the “ultimate model”, which would perfectly mimic the in vivo behaviour, is not our object of interest. It may be surprising for computer scientists, but this model is in fact rarely a subject of interest for biologists. Indeed the “ultimate model” would be untractable. The majority of valuable results, for a researcher in biology, comes from well chosen wet experiments, and contributions to biology “are” wet experiments. The theoretical models are only intermediate objects, which reflect intermediate hypotheses that facilitate a good choice of experiments. Consequently, a computer aided process of discovery requires formally managing these models. It implies a formal expression of the sensible knowledge about the biological function of interest. It also implies a formal expression of the set of (possibly successive) biological hypotheses that motivate the biological research. So, our method manages automatically the set of all possible models, and we take advantage of this in order to guide a sensible choice of wet experiments. There are two kinds of knowledge:
– Structural knowledge, which inventories the set of relevant genes as well as the possible gene interactions. This knowledge can come from static analysis of gene sequences or protein sequences; it can come from dynamic data, e.g. transcriptomic data, via machine learning techniques; it can also come from the literature. This kind of knowledge can be formalized by one or several putative regulatory graphs.
– Behavioural knowledge, which reflects the dynamic properties of the biological system, such as the response to a given stress, some possible stationary states, known oscillations, etc. This kind of knowledge can be formalized by a set of temporal formulas.
They give rise to several formal objects of different nature:
– The set M of all the structurally possible regulatory networks (for each putative regulatory graph, we consider all possible values of all the parameters). So, M can be seen as the set of all possible models according to the terminology of formal logic, since each regulatory network defines a unique state graph. For example, once the decision to make permease implicit is taken, the possible regulatory graphs are drawn in Figure 6, where all possible threshold distributions are considered. Remember that this gives rise to 53512 different parameterizations (Section 2.3).
– The set Φ of the CTL formulas that formalize the dynamic properties. For example, according to Section 3.2, the set Φ can contain the 6 formulas ϕ1 to ϕ6.
– Moreover, as already mentioned, the biological research is usually motivated by a biological hypothesis, which can be formalized via a set of CTL formulas H. For example, let us consider the following hypothesis: “If extra-cellular lactose is constantly present then the positive circuit on intra-cellular lactose is functional.” It would mean that when extra-cellular lactose is constantly present, there is a multi-stationarity on intra, separated by its auto-induction threshold 2 (according to the notion of characteristic state [26]). It can be formalized with H = {ψ1, ψ2} as follows:

ψ1 ≡ AG(extra = 1) =⇒ (intra = 2 =⇒ AG(intra = 2))
ψ2 ≡ AG(extra = 1) =⇒ (intra < 2 =⇒ AG(intra < 2))

Of course, if “the ultimate model” were known and properly defined (i.e. a regulatory network with known values for all its parameters), it would satisfy exactly the set of behavioural properties that are true in vivo, and thus M would be reduced to a singleton (the ultimate model), Φ would be useless and H would be decided by simply checking whether H is satisfied by M. The difficulty comes from the uncertainty of the model structure and parameters, the incompleteness of the behavioural knowledge and the complexity of the systems, which makes intuitive reasoning almost useless when studying hypotheses. Fortunately, once the formalization step is performed, formal logic and formal models allow us to test hypotheses, to check consistency, to elaborate more precise models incrementally, and to suggest new biological experiments. The set of potential models M and the set of properties Φ ∪ H being given, two obvious scientific questions naturally arise:
1. Is it possible that Φ ∪ H and M are consistent? In other words: does there exist at least one model of M that satisfies all the formulas in Φ ∪ H? In the remainder, this question will be referred to as the consistency question of knowledge and hypotheses.
2. And if so, is Φ ∪ H true in vivo? As a matter of fact, the existence of a mathematical model satisfying the hypotheses is not sufficient. We must verify that the model reflecting the real in vivo behaviour belongs to the set of models that satisfy Φ ∪ H. This implies proposing experiments in order to validate or refute H (assuming that the knowledge Φ is validated). For both questions, we can benefit from computer aided proofs, and computer optimized validation schemas can be proposed. More precisely, a CTL property can be confronted with traces, and traces can be either generated by wet experiments or extracted from a state graph. Consequently a logic such as CTL establishes a bridge between in vivo experiments and mathematical models.
4.2 Consistency
In practice, when actually working with researchers in biology, there is an obvious method to check consistency:
1. Draw all the sensible regulatory graphs according to biological knowledge, with all the sensible, possible threshold allocations. This formalizes the structural knowledge.
2. From the discussions with the biologists, express in CTL the known behavioural properties as well as the considered biological hypotheses. This defines Φ and H.
3. Then, automatically generate, for each possible regulatory graph, all the possible values for all Thomas' parameters: we get all the possible regulatory networks. For all of them, generate the corresponding state graph: this defines M. Our software platform SMBioNet handles this automatically.
4. Check each of these models against Φ ∪ H. SMBioNet intensively uses the model checker NuSMV [5] to perform this step automatically.
If no model survives the fourth step, then reconsider the hypotheses and perhaps extend the model schemas. On the contrary, if at least one model survives, then the biological hypotheses are consistent. Even better: the possible parameter sets kv,ω have been exhaustively identified. If we consider for example the set of models M characterized by Figure 6-A, there are 19 parameter settings leading to a dynamics compatible with the set of properties Φ ∪ H proposed above. Among the 8 + 2 = 10 parameters that govern the dynamics, 6 of them are completely identified (i.e., shared by the 19 parameter settings): Kintra = 0, Kintra,g = 0, Kintra,extra = 1, Kintra,{extra,g} = 1, Kg = 0 and Kg,intra = 1. The 4 other parameters are those where intra is a resource of itself. With respect to the classical ODE simulation method, where some possible parameters are identified by trial and error, this method has the obvious advantage of computing the exhaustive set of possible models according to the current biological knowledge. It also has the crucial advantage of facilitating the refutation of models in a systematic manner.
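Steps 3 and 4 amount to an exhaustive loop over parameterizations. The following schematic sketch shows the shape of that loop; build_state_graph (implementing Definition 3), check (a CTL checker such as the one sketched in Section 3.1), VARS, BOUNDS, REGULATORS, PHI and H are all assumed names, and SMBioNet itself delegates step 4 to NuSMV rather than using such naive code.

    from itertools import chain, combinations, product

    def powerset(xs):
        xs = list(xs)
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    def all_parameterizations(variables, bounds, regulators):
        """Every family K: one value in [0, b_v] per pair (v, resource set)."""
        keys = [(v, frozenset(w))
                for v in variables for w in powerset(regulators[v])]
        for values in product(*[range(bounds[v] + 1) for (v, _) in keys]):
            yield dict(zip(keys, values))

    consistent = [K for K in all_parameterizations(VARS, BOUNDS, REGULATORS)
                  if all(check(phi, build_state_graph(K)) for phi in PHI + H)]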
The four steps described before, as such, replace a “brute force simulation method” by a “brute force refutation method” based on formal logic. In fact, the method is considerably more sophisticated, in order to avoid the brute force enumeration of all the possible models. For example, in SMBioNet, when several regulatory networks share the same state graph, only one of them is considered; even better, it is not necessary to generate common state graphs, as they can be identified a priori. In [9,17], logic programming and constraint solving are integrated in this method and they almost entirely avoid the enumeration of models in order to establish consistency. In [23] model checking is replaced by proof techniques based on products of automata. Moreover, in practice, the use of multiplexes considerably reduces the number of different networks to be considered. Anyway, all these clever approaches considerably improve the algorithmic treatment, but the global method remains the same.
4.3 Selection of Biological Experiments
Once the first question (consistency) is positively answered, the second question (validation) has to be addressed: we aim at proposing “wet” experiment plans in order to validate or refute H (assuming that the knowledge Φ is validated). Here, we will address this question from a point of view entirely based on formal logic.

The Global Shape. Our framework is based on CTL, whose atoms allow the comparison of the discrete expression level of a gene with a given integer value. So, a regulatory graph being given, we can consider the set CTLsyntax of all the formulas that can be written about the regulatory graph, according to the CTL language. Notice that this set is not restricted to valid formulas: for example, if ϕ belongs to CTLsyntax then its negation ¬ϕ also belongs to CTLsyntax.
– The set of hypotheses H is a subset of CTLsyntax. In Figure 10, we delimit the set CTLsyntax with a bold black line and H is drawn in a circle.
– Unfortunately, there usually does not exist a single feasible wet experiment that can decide whether H is valid in vivo (except for trivial hypotheses, which do not deserve a research campaign). A given wet experiment reveals a small set of CTL properties, which are usually elementary properties. So, feasible wet experiments define a subset of CTLsyntax: the set of all properties that can be decided, without any ambiguity, from a unique feasible wet experiment. Such properties are often called observable properties and we denote by Obs this subset of CTLsyntax. In Figure 10, Obs is represented by the vertically hatched set.
– H is usually disjoint from Obs; however, we can consider the set ThΦ(H) of all the consequences of H (assuming the knowledge Φ). In Figure 10, ThΦ(H) is represented by the horizontally hatched set.
– Let E = ThΦ(H) ∩ Obs be the intersection of ThΦ(H) and Obs. It denotes the set of all the consequences of the hypotheses that can be verified experimentally. If ψ is a formula belonging to E then there exists a wet experiment e after which the validity of ψ is decided without any ambiguity:
On the Use of Temporal Formal Logic to Model Gene Regulatory Networks
133
• If the experiment e “fails” then ψ is false in vivo and the hypothesis H is refuted.
• If on the contrary the experiment e “succeeds” then ψ is true in vivo... Of course this usually does not imply H.
In Figure 10, the intersection E defines the set of relevant experiments with respect to the hypothesis H.
[Figure: a Venn-style diagram inside the set CTLsyntax showing H, its consequences ThΦ(H), the observable formulas Obs, and their intersection E = ThΦ(H) ∩ Obs.]
Fig. 10. Sets of CTL formulas involved in the computer aided selection of biological experiments
Observability and Refutability. According to Popper [20], we should only consider hypotheses H for which we can propose experiments able to refute H: if H is false in vivo, there must exist a wet experiment e that fails. In other words: ¬H =⇒ (∃ψ ∈ E | ¬ψ), which is equivalent to: E =⇒ H. Consequently, refutability depends on the “power” of Obs with respect to H, because E = ThΦ(H) ∩ Obs. If Obs is “big enough” then it increases the capability of E to refute H. If E does not imply H (assuming Φ), it may mean that experimental capabilities in biology are insufficient, so that H is out of the scope of current know-how and current knowledge. It may also be possible that the wet laboratory does not have enough funding, or that the experimental cost would be disproportionate with regard to the problem under consideration. We have shown so far that, for the modelling process, properties (Φ and H) are as important as models (M). Refutability issues prove that “experimental observability” (Obs) constitutes the third support of the process. Often, observable formulas are of the form (ρ =⇒ AF(ω)) or (ρ =⇒ EF(ω)) where ρ characterizes some initial states that biologists can impose on a population of cells at the beginning of the experiment, and ω is deducible without any ambiguity from what can be observed at the end of the experiment. We use AF when all repeated experiments give the same result, and EF when we suspect that some additional conditions are imposed by the chosen experimental protocol during the experiments. According to our small running example, we may consider that only external lactose can be controlled, and that the flux of entering lactose can be roughly
estimated. So, ρ can be of the form extra = 0 or extra = 1, possibly prefixed by a modality such as AG. Moreover, if the flux is considered “high”, it denotes the presence of many permease proteins, and consequently implies that intra has reached the threshold 2 (according to Figure 6-A). So, we may imagine for example that ω can be of the form intra = 2 or its negation intra < 2, possibly prefixed by modalities such as AG or AU. This defines Obs. We will see later on that this observability is a rough underestimation, and how formal proofs can help improve it.

Selection of Experimental Schemas. Unfortunately, E is infinite in general, so the art of choosing “good” wet experiments can be formalized by heuristics to select a finite (small) subset of formulas in E that has “good” chances of refuting H if one of the corresponding experiments fails. Classical testing frameworks from computer science [3,12] aim at selecting such subsets. However, the subsets selected by the corresponding software testing tools are always huge, because running lots of tests on a computer costs almost nothing. Nevertheless, the main idea of these frameworks can still be suitably applied to regulatory networks. Tests are selected incrementally and completeness to the limit is the main preoccupation: if H is not valid then the incremental selection process must be able to provide a counter-example after a certain number of iterations. It formally means that each possible reason for H to be false is tested after a finite (possibly large) amount of selection time. Let us illustrate this completeness criterion on a simple case: according to our example, H is made of 2 formulas ψ1 and ψ2. Spending a lot of money to refute only ψ1 would be a bad idea: this strategy would be incomplete, because if H is false because ψ2 is false, then this strategy will never refute H. Thus, one must try to refute both formulas.

Refutation of ψ1: AG(extra = 1) =⇒ (intra = 2 =⇒ AG(intra = 2)). It is well known that the truth table of “=⇒” is always true when the precondition is false. Consequently, any wet experiment that does not ensure AG(extra = 1) has no chance to refute the hypothesis. So, Popper tells us that any experiment associated with ψ1 must constantly have external lactose (and remember that, from the context, glucose is always absent). For the same reason, one must start with a population of bacteria with intra = 2 as initial state... and unfortunately the precondition ρ ≡ AG(extra = 1) ∧ (intra = 2) is not reachable according to our description of observable formulas. Moreover, our knowledge (ϕ1 to ϕ6) never concludes on the atom intra = 2, so CTL cannot propose a sufficient condition to reach this initial state. Let us postpone this problem for a short time.

Refutation of ψ2: AG(extra = 1) =⇒ (intra < 2 =⇒ AG(intra < 2)). The same reasoning applies: one must start with a population of bacteria with intra < 2 as initial state, and ensure AG(extra = 1). Here, the formal definition
σ² > 0. The posterior Normal Inverse Gamma joint density of the parameters (β, σ²), under the prior denoted by NIG(0, V, a, b), is given by

p(β, σ² | y) ∝ (σ²)^(−(a* + p/2 + 1)) × exp( −(1/(2σ²)) [ (β − m*)′ (V*)^(−1) (β − m*) + 2b* ] )

with
m* = (V^(−1) + X′X)^(−1) X′Y
V* = (V^(−1) + X′X)^(−1)
γ = Y′Y − (m*)′ (V*)^(−1) m*
a* = a + rn/2,
b* = b + γ/2
where a, b > 0 and V is a positive definite matrix. Throughout this paper we assume that X = 1n ⊗ B, where B is a known matrix, and that X′X = nB′B is full rank. The design or basis function matrix B encodes the type of basis used for the clustering: linear splines in (9), wavelets in (15) or Fourier in (8). The latter is the most appropriate choice in the context of a study of daily rhythms as in Section 4. The Bayes factor associated with this model can be calculated from its marginal likelihood L(y); see for example (7) and (13). Thus

L(y) = (1 / π^(nr/2)) · (b^a / (b*)^(a*)) · (|V*|^(1/2) / |V|^(1/2)) · (Γ(a*) / Γ(a))    (1)
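A direct transcription of (1) on the log scale is straightforward; the sketch below is ours and keeps the linear algebra naive (no Cholesky factorizations or other numerical safeguards that a production implementation would use).

    import numpy as np
    from scipy.special import gammaln

    def log_marginal_likelihood(Y, X, V, a, b):
        """log L(y) of equation (1) for the NIG(0, V, a, b) regression model;
        Y stacks the n*r scalar observations, X is the (n*r) x p design matrix."""
        nr = len(Y)
        Vinv = np.linalg.inv(V)
        Vstar = np.linalg.inv(Vinv + X.T @ X)
        mstar = Vstar @ (X.T @ Y)
        gamma = Y @ Y - mstar @ (X.T @ Y)     # since (V*)^{-1} m* = X'Y
        astar, bstar = a + nr / 2.0, b + gamma / 2.0
        _, logdet_Vstar = np.linalg.slogdet(Vstar)
        _, logdet_V = np.linalg.slogdet(V)
        return (-nr / 2.0 * np.log(np.pi)
                + a * np.log(b) - astar * np.log(bstar)
                + 0.5 * (logdet_Vstar - logdet_V)
                + gammaln(astar) - gammaln(a))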
Unlike for univariate data, within the class of problems illustrated in Section 4 there is a myriad of different possible shapes of expression over time for each gene. Consequently, Bayes factors associated with different gene combinations are highly discriminative and informative. Let C denote a partition belonging to the space of partitions C, on a space Ω of cardinality n, and c a cluster of such a partition. (9) assume that each observation is exchangeable within its containing cluster. The Normal Inverse-Gamma conjugate Bayesian linear regression model for each observation in a cluster c takes the form

Y(c) = X(c) β(c) + ε(c)

Here β(c) = (β1(c), . . . , βp(c))′ is the vector of parameters with p ≤ r, X(c) is the design matrix of size nc·r × p, and ε(c) ∼ N(0, σc² I) where nc is the number of observations in cluster c and I is the identity matrix of size r·nc × r·nc. A partition C divides the n observations into N clusters of sizes {n1, . . . , nN}, with n = Σ_{i=1}^N ni. Assuming the parameters of different clusters are independent then, because the likelihood separates, it is straightforward to check (16) that the log marginal likelihood score Σ(C) for any partition C with clusters c ∈ C is given by
Σ(C) = log p(C) + Σ_{c∈C} log p_c(y)    (2)
Here the prior p(C) is often chosen from the class of cohesion priors over the partition space (14), which assign weights to different models in a plausible and convenient way: see e.g. (16). An essential property of the search for MAP models (dramatically increasing the efficiency of the partition search) is that with the right family of priors the search is local. That is, if C+ and C− differ only in the sense that the cluster c+ ∈ C+ is split into two clusters c1−, c2− ∈ C−, then the log marginal likelihood score is a linear function only of the posterior cluster probabilities on c+, c1− and c2−.

2.1 Choosing an Appropriate Prior over Partitions

Although there are many possible choices for a prior over partitions, an appropriate choice in this scenario is the Crowley partition prior p(C) (5; 12; 3) for a partition C:

p(C) = (Γ(λ) λ^N / Γ(n + λ)) ∏_{i=1}^N Γ(ni)    (3)
where λ > 0 is the parameter of the partition prior, N is the number of clusters and n is the total number of observations, with ni the number of observations in cluster ci. This prior is consistent in the sense of (12); the authors argue that this property is extremely desirable for any partition process. Conveniently, if we use a prior from this family then the score in (2) decomposes. Thus

Σ(C) = log p(N, n1, . . . , nN | y)
     = log p(N, n1, . . . , nN) + Σ_{i=1}^N log p(y_i)
     = log Γ(λ) − log Γ(n + λ) + Σ_{i=1}^N Si

where Si = log p(y_i) + log Γ(ni) + log λ. Thus, the score Σ(C) is decomposable into the sum of the scores Si over individual clusters plus a constant term. This is especially useful for weighted MAX-SAT, which needs the score of an object to be expressible as a sum of component scores. The choice of the Crowley prior in (3) ensures that the score of a partition is expressible as a linear combination of scores associated with individual sets within the partition. It is this property that enables us to find a straightforward encoding of the MAP search as a weighted MAX-SAT problem. Note that a particular example of a Crowley prior is the Multinomial-Dirichlet prior used by (9), where λ is set so that λ ∈ (1/n, 1/2).
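In code, the decomposition reads as follows; this is our sketch, with the per-cluster marginal likelihood log p(y_i) assumed to come from equation (1) applied to the stacked data of cluster i.

    import numpy as np
    from scipy.special import gammaln

    def cluster_score(log_p_yi, n_i, lam):
        """S_i = log p(y_i) + log Gamma(n_i) + log lambda."""
        return log_p_yi + gammaln(n_i) + np.log(lam)

    def partition_score(cluster_scores, n, lam):
        """Sigma(C) = log Gamma(lambda) - log Gamma(n + lambda) + sum_i S_i."""
        return gammaln(lam) - gammaln(n + lam) + sum(cluster_scores)

Because the leading term does not depend on the partition, two partitions of the same data can be compared through their sums of Si alone.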
2.2 Searching the Partition Space

The simplest search method using the local property is agglomerative hierarchical clustering (AHC). It starts with all the observations in separate clusters, our original C0, and evaluates the score of this partition. Each cluster is then compared with all the other clusters, and the two clusters which increase the log likelihood in (2) by the most are combined to produce a new partition C1. We now substitute C1 for C0 and repeat this procedure to obtain the next partition C2. We continue in this way until we have evaluated the log marginal score Σ(Ci) for each partition {Ci, 1 ≤ i ≤ n}. We then choose the partition which maximizes the score Σ(Ci). A drawback of this method is that the set of partitions searched is an extremely small subset of the set of all partitions. The number of partitions of a set of n elements grows quickly with n. For example, there are 5.1 × 10^13 ways to partition 20 elements, and AHC evaluates only 1331 of them! Despite searching only a small number of partitions, AHC is surprisingly powerful and often finds good partitions, especially when used for time-course profile clustering as in Section 4. It is also very fast. However, one drawback is that the final choice of optimal partition is completely dependent on the early combinations of elements into clusters. This initial part of the combination process is sensitive and can make poor initial choices, especially in the presence of outliers or poor choices of hyperparameters when used with Bayes factor scores (see Section 4), in a way carefully described in (16). Analogous instabilities in search algorithms over similar model spaces have prompted some authors to develop algorithms that devote time to early refinement of the initial choices in the search (4) or to propose alternative stochastic searches (10). The latter method appears very promising but is difficult to implement within our framework due to the size of the datasets. We propose an enhancement of the widely used AHC with weighted MAX-SAT. This is simple to use in this context provided a prior such as (3), which admits a decomposable score, is used over the model space. Weighted MAX-SAT is able to explore many more partitions and different regions of the partition space, as we will demonstrate in Section 4, and is not nearly as sensitive to the instabilities that AHC, used on its own, is prone to exhibit.
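The AHC loop just described is compact enough to sketch; score(c) below stands for the cluster score Si of Section 2.1, with clusters as frozen sets of observation indices (our names). Since Σ(C) equals a partition-independent constant plus the sum of the Si, only that sum is tracked.

    def ahc(n, score):
        """Greedy agglomeration; returns the best-scoring partition visited."""
        partition = [frozenset([i]) for i in range(n)]
        best_sum = sum(score(c) for c in partition)
        best_partition = list(partition)
        while len(partition) > 1:
            # merge the pair of clusters giving the largest score gain
            gain, c1, c2 = max(
                ((score(a | b) - score(a) - score(b), a, b)
                 for i, a in enumerate(partition)
                 for b in partition[i + 1:]),
                key=lambda t: t[0])
            partition = [c for c in partition if c not in (c1, c2)] + [c1 | c2]
            current = sum(score(c) for c in partition)
            if current > best_sum:
                best_sum, best_partition = current, list(partition)
        return best_partition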
3 Encoding the Clustering Algorithm

(6) showed that for the class of Bayesian networks a decomposition of the marginal likelihood score allowed weighted MAX-SAT algorithms to be used. The decomposition was in terms of the child-parent configurations p(xi | Pa(xi)) associated with each random variable xi in the Bayesian network. Here our partition space under the Crowley prior exhibits an analogous decomposition into cluster scores.

3.1 Weighted MAX-SAT Encoding

For each considered cluster ci, a propositional atom, also called ci, is created. In what follows no distinction is made between clusters and the propositional atoms representing them. Propositional atoms are just binary variables with two values: TRUE and
FALSE. A partition is represented by setting all of its clusters to TRUE and all other clusters to FALSE. However, most truth-value assignments for the ci do not correspond to a valid partition, and so such assignments must be ruled out by constraints represented by logical clauses. To rule out the inclusion of overlapping clusters we assert clauses of the form

¬ci ∨ ¬cj    (4)

for all non-disjoint pairs of clusters ci, cj (¬ represents negation). Each such clause is logically equivalent to ¬(ci ∧ cj): both clusters cannot be included in a partition. In general, it is also necessary to state that each data point must be included in some cluster of the partition. Let {cy1, cy2, . . . , cyi(y)} be the set of all clusters containing data point y. For each y a single clause of the form

cy1 ∨ cy2 ∨ · · · ∨ cyi(y)    (5)
is created. The ‘hard’ clauses in (4) and (5) suffice to rule out non-partitions; it remains to ensure that each partition has the right score. This can be done by exploiting the decomposability of the partition score into cluster scores and using ‘soft’ clauses to represent cluster scores. If Si, the score for cluster ci, is positive, the following weighted clause is asserted:

Si : ci    (6)

Such a clause intuitively says: “We want ci to be true (i.e. to be one of the clusters in the partition) and this preference has weight Si.” If a cluster cj has a negative score Sj then this weighted clause is asserted:

−Sj : ¬cj    (7)
which states a preference for cj not to be included in the partition. Given an input composed of the clauses in (4)–(7), the task of a weighted MAX-SAT solver is to find a truth assignment to the ci which respects all hard clauses and maximizes the sum of the weights of satisfied soft clauses. Such an assignment will encode the highest scoring partition constructed from the given clusters. Note that if a given cluster ci can be partitioned into clusters ci1, ci2, . . . , cij(i) where Si < Si1 + Si2 + · · · + Sij(i), then, due to the decomposability of the partition score, ci cannot be a member of any optimal partition: any partition containing ci can be improved by replacing ci with ci1, ci2, . . . , cij(i). Removing such clusters prior to the logical encoding reduces the problem considerably and can be done reasonably quickly: for example, one particular collection of 1023 clusters which would have generated 495,285 clauses was reduced to 166 clusters with 13,158 clauses using this approach. The filtering process took 25 seconds using a Python script. This cluster reduction technique was used in addition to those mentioned in the sections immediately following.
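The clause generation (4)–(7) is mechanical; the sketch below (our code, not the authors' scripts) emits (weight, clause) pairs with 1-based atom indices, using a large constant for hard clauses as is common in weighted MAX-SAT input formats. Real solvers typically expect positive integer weights, so the real-valued scores would in practice need scaling and rounding.

    def encode(clusters, scores, points, hard=10**9):
        """clusters: list of frozensets of points; scores[i] = S_i."""
        clauses = []
        for i, ci in enumerate(clusters):            # (4): no overlapping pair
            for j in range(i + 1, len(clusters)):
                if ci & clusters[j]:
                    clauses.append((hard, [-(i + 1), -(j + 1)]))
        for y in points:                             # (5): every point covered
            clauses.append((hard,
                            [i + 1 for i, c in enumerate(clusters) if y in c]))
        for i, s in enumerate(scores):               # (6) and (7): soft scores
            if s > 0:
                clauses.append((s, [i + 1]))
            elif s < 0:
                clauses.append((-s, [-(i + 1)]))
        return clauses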
3.2 Reducing the Number of Cluster Scores

To use weighted MAX-SAT algorithms effectively in this context, the challenge in even moderately sized partition spaces is to identify promising clusters that might be components of an optimal partition. The method of (6), evaluating the scores only of subsets of less than a certain size, is not ideal in this context, since in our applications many good clusters appear to have a high cardinality. However, there are more promising techniques formulated in other contexts to address this issue. One of these, which we use in the illustrative example, is outlined below; others are presented in Section 5.

Reduction by iterative augmentation. A simple way to reduce the number of potential cluster scores for weighted MAX-SAT is to evaluate all the possible clusters containing a single observation and to iteratively augment the size of the plausible clusters only if their score increases too, thanks to the decomposability of our score function. We will focus our discussion in this paper on the iterative augmentation algorithm described below (a sketch in code follows the steps).

Step 1. Compute the cluster score for all n observations as if each belonged to a different cluster. Save these scores as input for weighted MAX-SAT. Set k ← 0 and c ← ∅.
Step 2. Set k ← k + 1, j ← k + 1 and c ← {k}. Exit the algorithm when k = n.
Step 3. Add element j to cluster c to form the candidate cluster c′ and compute its score. If Sc′ > Sc + Sj, then
– save the score for cluster c′;
– if j = n, go to Step 2;
– set c ← c′ and j ← j + 1;
– go to Step 3;
else
– if j = n, go to Step 2;
– set j ← j + 1;
– go to Step 3.

The main advantage of this algorithm is that it evaluates the actual cluster scores, never approximating them by pairwise dissimilarities or in any other way. Furthermore, this method does not put any restriction on the maximum size of the potential clusters.
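A compact rendering of the augmentation loop (ours; score() is assumed as before, and on failure the scan simply continues with the next element j, as in the steps above):

    def augment(n, score):
        """Return a dict of candidate clusters and their scores."""
        singles = {k: score(frozenset([k])) for k in range(n)}       # Step 1
        saved = {frozenset([k]): s for k, s in singles.items()}
        for k in range(n):                                           # Step 2
            c, s_c = frozenset([k]), singles[k]
            for j in range(k + 1, n):                                # Step 3
                c2 = c | {j}
                s_c2 = score(c2)
                if s_c2 > s_c + singles[j]:
                    saved[c2] = s_c2          # keep j and grow the cluster
                    c, s_c = c2, s_c2
                # otherwise drop j and try the next element
        return saved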
Hybrid AHC algorithm. Even though this algorithm performs extremely well when the number of clustered units n < 100, it slows down quickly as the number of observational vectors increases. However, this deficiency disappears if we use it in conjunction with the popular AHC search, to refine clusters of fewer than 100 units. When used to compare partitions of profiles as described in Section 2, AHC performs extremely well when the combined clusters are large. So, to improve its performance, we use weighted MAX-SAT to reduce the dependence on poor initialization. By running a mixture of AHC together with weighted MAX-SAT we are able to reduce this dependence whilst retaining the speed of AHC and its efficacy with large clusters. AHC is used to initialize a candidate partition. Then weighted MAX-SAT is used as a ‘split’ move to refine these clusters and find a new and improved partition from which to start a new AHC run. The hybrid algorithm is described below.

Step 1. Initialize by running AHC to find the best scoring partition C1 of this search.
Step 2. (Splitting step) Take each cluster c in C1. Score promising subsets of c and run a weighted MAX-SAT solver to find the highest scoring partition of c. Note that, because our clusters are usually several orders of magnitude smaller than the whole set, this step will be feasible at least for interesting clusters.
Step 3. Substitute the best sub-clusters for each cluster c in C1 to form the next partition C2.
Step 4. If C1 = C2 (i.e. if the best sub-cluster for each cluster in C1 is the cluster itself) then stop.
Step 5. (Combining step) If this is not the case, then by the linearity of the score C2 must be higher scoring than C1. Now take C2 and, beginning with this starting partition, test combinations of clusters in C2 using AHC. (Note that we could alternatively use weighted MAX-SAT here as well.) This step may combine together spuriously clustered observations that initially appeared in different clusters of C1 and were thrown out of those clusters in the first weighted MAX-SAT step. Find the optimal partition C3 in this way.
Step 6. If C3 = C2 stop, otherwise go to Step 2.

This hybrid algorithm obviously performs at least as well as AHC and is able to undo any early erroneous combination made by AHC. The shortcomings of AHC, discussed in (16), are overcome by checking each cluster with weighted MAX-SAT to identify outliers. Note that the method is fast because weighted MAX-SAT is only run on subsets of small cardinality. We note that, at least in the applications we have encountered, most clusters of interest appear to contain fewer than a hundred units.
4 A Simple Example

We will illustrate the implementation of weighted MAX-SAT for clustering problems in comparison with, and in conjunction with, the widely used AHC. Here we demonstrate that weighted MAX-SAT can be used to cluster time-course gene expression data. The cluster scores are computed in C++, along the lines of the algorithm by (9), including the modifications suggested by (16) and (1). All runs of weighted MAX-SAT were conducted using the C implementation available from the UBCSAT home page http://www.satlib.org/ubcsat. UBCSAT (17) is an implementation and experimentation environment for Stochastic Local Search (SLS) algorithms for SAT and MAX-SAT. We have used their implementation of WalkSat in this paper.

4.1 Data

Our algorithm will be illustrated by an example on a recent microarray experiment on the plant model organism Arabidopsis thaliana. This experiment was designed to detect genes whose expression levels, and hence functionality, might be connected with circadian rhythms. The aim is to identify the genes (of the order of 1,000) which may be connected with the circadian clock of the plant. A full analysis and exposition of this data, together with a discussion of its biological significance, is given in (8).
We will illustrate our algorithms on genes selected from this experiment. The gene expression of n = 22,810 genes was measured at r = 13 time points over two days by Affymetrix microarrays. Constant white light was shone on the plants for 26 hours before the first microarray was taken, with samples every four hours. The light remained on for the rest of the time course. Thus, there are two cycles of data for each Arabidopsis microarray chip. Subjective dawn occurs at about the 24th and 48th hours; this is when the plant has been trained to expect light after 12 hours of darkness.

4.2 Hybrid AHC Using Weighted MAX-SAT
Although our clustering algorithms apply to a huge space of over 22,000 gene profiles, to illustrate the efficacy of our hybrid method it is sufficient to show results on a small subset of the genes: here a proxy for two clusters. Thus we will illustrate how our hybrid algorithm can outperform AHC and how it rectifies partitions containing genes clustered spuriously in an initial step. In the example below we have therefore selected 15 circadian genes from the dataset above and contaminated these with 3 outliers that we generated artificially. We set the parameters v = 10, a = 0.001, b = 0.001 and λ = 0.5 and ran AHC, which obtained the partition formed by the 2 clusters shown in Figure 1. AHC is partially successful: the 15 circadian genes have been clustered together, and so have the 3 outliers. The latter cluster is a typical example of misclassification in the sense of (16), in that it is rather coarse with a relatively high associated variance. The score for this partition is Σ(CAHC) = 64.89565.
Fig. 1. Clusters obtained on the 18 genes of Arabidopsis thaliana using AHC (Σ(CAHC) = 64.89565): Cluster 1 (3 genes) and Cluster 2 (15 genes), each plotted against time. The y-axis is the log of gene expression; note the different y-axis scale for the two clusters.
Fig. 2. Clusters obtained on the 3 outliers of Arabidopsis thaliana using AHC (1 cluster, S1 = −156.706) and weighted MAX-SAT (3 clusters of one gene each, S1 = −145.571)
Following the hybrid AHC algorithm, we then ran MAX-SAT on both clusters obtained by AHC. The clusters obtained are shown in Figures 2 and 3. Both clusters obtained by AHC have been split up by MAX-SAT. The score of the partition formed by these 5 clusters, including the constants, is now Σ(CMAX-SAT) = 79.43005. This score is the log of the marginal likelihood; taking the appropriate exponential, in terms of Bayes factors this represents a decisive improvement for our model. Note that the increase in the log marginal likelihood is supported also by the visual display. The outliers are very different from one another and from the real data, and it seems reasonable that each one would generate a better cluster on its own (note the different scale of the y-axis). The other 15 genes have a more similar shape and it seems visually reasonable to cluster them together, as AHC does initially, but MAX-SAT is able to identify a more subtle difference between the 2 shapes contained in that cluster. It was not necessary in our case to run AHC again to combine clusters, given the nature of our data. A single iteration of the loop described in our hybrid algorithm identified the useful refinement of the original partition. This example shows how, as discussed in (16), AHC can be unstable, especially when dealing with outliers at an early stage in the clustering. Weighted MAX-SAT is helpful to refine the partition and obtain a higher scoring one.
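To make the size of this improvement concrete: the gain in log marginal likelihood is Σ(CMAX-SAT) − Σ(CAHC) = 79.43005 − 64.89565 ≈ 14.53, so the Bayes factor in favour of the refined partition is e^14.53 ≈ 2 × 10^6, far beyond the conventional threshold (a Bayes factor of around 100) for decisive evidence.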
Fig. 3. Clusters obtained on the 15 genes of Arabidopsis thaliana using AHC (1 cluster, S2 = 255.973) and weighted MAX-SAT (2 clusters, S2 = 259.372): Cluster 4 (4 genes) and Cluster 5 (11 genes)
It is clear that in larger examples involving thousands of genes the improvements above add up over all moderate-sized clusters of an initial partition, simply by using weighted MAX-SAT over each cluster in the partition, as described in our algorithm and illustrated above.
5 Further Work on Cluster Scores for Large Clusters

In the approach taken in this paper clusters are explicitly represented as propositional atoms in the weighted MAX-SAT encoding, and so it is important to reduce the number of clusters considered as much as possible. The hybrid method with iterative augmentation that we described in Section 3.2 works very efficiently for splitting clusters with cardinality smaller than 100. However, it slows down dramatically for greater cardinalities. It would be useful to generalize the approach so that it can also be employed to split up larger clusters. The main challenge here is to identify good candidate sets. Two methods that we are currently investigating are outlined below.

Reducing cluster scores using cliques. One promising method for identifying candidate clusters is to use a graphical approach based on pairwise proximity between the clustered units. (2), a well-known and highly cited paper, proposes the CAST algorithm to identify the clique graph which is closest to the graph obtained from the proximity matrix. A graph is called a clique graph if it is a disjoint union of complete graphs. The disjoint cliques obtained by the CAST algorithm define the partition. We suggest using an approach similar to (2), enhanced by the use of weighted MAX-SAT and a fully Bayesian model.
We focus on maximal cliques, instead of clique graphs as in (2), to identify possible clusters to feed into weighted MAX-SAT. A maximal clique is a set of vertices that induces a complete subgraph, and that is not a subset of the vertices of any larger complete subgraph. The idea is to create an undirected graph based on the adjacency matrix obtained by scoring each pair of observations as a possible cluster, and then use the maximal cliques of this graph to find plausible clusters. It is reasonable to assume that a group of elements is really close and should belong to the same cluster when it forms a clique. This considerably reduces the number of clusters that need to be evaluated as input for weighted MAX-SAT, which will then identify the highest scoring partition. The first step is to calculate the proximity between observations i and j (i, j = 1, ..., n), such as D = {dij} with dij = Sij − (Si + Sj), which gives a matrix of adjacencies A = {aij}, where

    aij = 1 if dij > K, and aij = 0 otherwise,
from which we can draw a graph (Sij is the score for the cluster of the 2 elements i and j). Each vertex represents an observation, and two vertices are connected by an edge according to the thresholded matrix A. The adjacency matrix defines an undirected graph. The maximal cliques, the intersections between maximal cliques and the unions of maximal cliques with common elements define the potential cluster scores for weighted MAX-SAT (a code sketch is given at the end of this section).

Although such methods are deficient in the sense that they use only pairwise relationships within putative clusters, they identify potentially high scoring clusters quickly. Of course, it does not matter whether some of these clusters turn out to be low scoring within this candidate set, because each is subsequently fully scored for weighted MAX-SAT and its deficiency identified. This is in contrast to the method of (2), which is completely based on pairwise dissimilarities. So the only difficulty with this approach is induced by those clusters which are actually high scoring but nevertheless are not identified as promising. Other advantages of this method are that all the scores that are calculated are used as weights in the weighted MAX-SAT, and that it does not induce any artificial constraint on cluster cardinalities.

Reducing cluster scores by approximating. An alternative to the method described above is to represent the equivalence relation given by a partition directly: for each distinct pair of data points yi, yj, an atom ai,j would be created to mean that these two data points are in the same cluster. Only O(n²) such atoms are needed. Hard clauses (O(n³) of them) expressing the transitivity of the equivalence relation would have to be added. With this approach it might be possible to indirectly include information on cluster scores by approximating cluster scores by a quadratic function of the data points in each cluster. A second-order Taylor approximation is an obvious choice. Such an approach would be improved by using a different approximating function for each cluster size.
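Returning to the clique-based candidate generation, the following is a minimal Python sketch of the construction just described; the pair_score and single_score callbacks (returning Sij and Si) and the threshold K are illustrative assumptions rather than part of the authors' implementation.

```python
import itertools
import networkx as nx

def clique_candidates(n, pair_score, single_score, K):
    # Build the undirected graph defined by the adjacency rule
    # a_ij = 1 iff d_ij = S_ij - (S_i + S_j) > K; one vertex per observation.
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i, j in itertools.combinations(range(n), 2):
        d_ij = pair_score(i, j) - (single_score(i) + single_score(j))
        if d_ij > K:
            G.add_edge(i, j)
    # The maximal cliques define candidate clusters, to be fully scored and
    # then fed, with their scores, into the weighted MAX-SAT solver.
    return [frozenset(c) for c in nx.find_cliques(G)]
```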
6 Discussion

WalkSat appears to be a promising algorithm for enhancing partition search. It looks especially useful for embellishing other methods such as AHC, exploring regions around the AHC optimal partition to find close partitions with better explanatory power. We demonstrated above that this technique can enhance performance on small subsets of the data, and on large datasets too in conjunction with AHC. Although we have not tested this algorithm in the following regard, it can also be used as a useful exhaustive local check of a MAP partition found by numerical search (10). Also, note that weighted MAX-SAT can be used not just for MAP identification, but also, following the adaptation suggested by (6), in model averaging, using it to identify all good models. There are many embellishments of the types of methods described above that will potentially further improve our hybrid search algorithm. However, in this paper we have demonstrated that, in circumstances where the Crowley priors are appropriate, weighted MAX-SAT solvers can provide a very helpful addition to the toolbox of methods for MAP search over a partition space.
References
[1] Anderson, P.E., Smith, J.Q., Edwards, K.D., Millar, A.J.: Guided conjugate Bayesian clustering for uncovering rhythmically expressed genes. CRiSM Working Paper (2006)
[2] Ben-Dor, A., Shamir, R., Yakhini, Z.: Clustering gene expression patterns. Journal of Computational Biology 6(3-4), 281–297 (1999)
[3] Booth, J.G., Casella, G., Hobert, J.P.: Clustering using objective functions and stochastic search. J. Royal Statist. Soc.: Series B 70(1), 119–139 (2008)
[4] Chipman, H.A., George, E.I., McCulloch, R.E.: Bayesian treed models. Machine Learning 48(1-3), 299–320 (2002)
[5] Crowley, E.M.: Product partition models for normal means. Journal of the American Statistical Association 92(437), 192–198 (1997)
[6] Cussens, J.: Bayesian network learning by compiling to weighted MAX-SAT. In: Proceedings of the 24th Conference on Uncertainty in Artificial Intelligence (2008)
[7] Denison, D.G.T., Holmes, C.C., Mallick, B.K., Smith, A.F.M. (eds.): Bayesian Methods for Nonlinear Classification and Regression. Wiley Series in Probability and Statistics. John Wiley and Sons, Chichester (2002)
[8] Edwards, K.D., Anderson, P.E., Hall, A., Salathia, N.S., Locke, J.C.W., Lynn, J.R., Straume, M., Smith, J.Q., Millar, A.J.: FLOWERING LOCUS C Mediates Natural Variation in the High-Temperature Response of the Arabidopsis Circadian Clock. The Plant Cell 18, 639–650 (2006)
[9] Heard, N.A., Holmes, C.C., Stephens, D.A.: A quantitative study of gene regulation involved in the immune response of anopheline mosquitoes: An application of Bayesian hierarchical clustering of curves. Journal of the American Statistical Association 101(473), 18–29 (2006)
[10] Lau, J.W., Green, P.J.: Bayesian Model-Based Clustering Procedures. Journal of Computational and Graphical Statistics 16(3), 526 (2007)
[11] Liverani, S., Anderson, P.E., Edwards, K.D., Millar, A.J., Smith, J.Q.: Efficient Utility-based Clustering over High Dimensional Partition Spaces. Journal of Bayesian Analysis 4(3), 539–572 (2009)
[12] McCullagh, P., Yang, J.: Stochastic classification models. In: Proceedings of the International Congress of Mathematicians, vol. III, pp. 669–686 (2006)
[13] O'Hagan, A., Forster, J.: Bayesian Inference: Kendall's Advanced Theory of Statistics, 2nd edn., Arnold (2004)
[14] Quintana, F.A., Iglesias, P.L.: Bayesian clustering and product partition models. J. Royal Statist. Soc.: Series B 65(2), 557–574 (2003)
[15] Ray, S., Mallick, B.: Functional clustering by Bayesian wavelet methods. J. Royal Statist. Soc.: Series B 68(2), 305–332 (2006)
[16] Smith, J.Q., Anderson, P.E., Liverani, S.: Separation measures and the geometry of Bayes factor selection for classification. J. Royal Statist. Soc.: Series B 70(5), 957–980 (2006)
[17] Tompkins, D.A.D., Hoos, H.H.: UBCSAT: An implementation and experimentation environment for SLS algorithms for SAT and MAX-SAT. In: Hoos, H.H., Mitchell, D.G. (eds.) Theory and Applications of Satisfiability Testing: Revised Selected Papers of the Seventh International Conference, pp. 306–320. Springer, Heidelberg (2005)
A Novel Approach for Biclustering Gene Expression Data Using Modular Singular Value Decomposition

V.N. Manjunath Aradhya¹,², Francesco Masulli¹,³, and Stefano Rovetta¹

¹ Dept. of Computer and Information Sciences, University of Genova, Via Dodecaneso 35, 16146 Genova, Italy
{aradhya,masulli,ste}@disi.unige.it
² Dept. of ISE, Dayananda Sagar College of Engg, Bangalore, India - 560078
³ Sbarro Institute for Cancer Research and Molecular Medicine, Center for Biotechnology, Temple University, BioLife Science Bldg., 1900 N 12th Street, Philadelphia, PA 19122, USA

Abstract. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis used in many fields, including machine learning, data mining, pattern recognition, image analysis and bioinformatics. Recently, biclustering (or co-clustering), performing simultaneous clustering on the row and column dimensions of the data matrix, has been shown to be remarkably effective in a variety of applications. In this paper we propose a novel approach to biclustering gene expression data based on Modular Singular Value Decomposition (Mod-SVD). Instead of applying SVD directly to the data matrix, the proposed approach computes the SVD in a modular fashion. Experiments conducted on synthetic and real datasets demonstrate the effectiveness of the algorithm on gene expression data.
1 Introduction
Clustering is the unsupervised classification of patterns (observations, data items, or feature vectors) into groups (clusters). The clustering problem has been addressed in many contexts and by researchers in many disciplines; this reflects its broad appeal and usefulness as one of the steps in exploratory data analysis [18]. In spite of the mature statistical literature on clustering, DNA microarray data have triggered the development of multiple new methods. The clusters produced by these methods reflect the global patterns of expression data, but in most cases an interesting cellular process may involve only a subset of genes, co-expressed only under a subset of conditions. Discovering such local expression patterns may be the key to uncovering many genetic pathways that are not apparent otherwise. Therefore, it is highly desirable to move beyond the clustering paradigm, and to develop approaches capable of discovering local patterns in microarray data [6,9]. In recent developments, biclustering is one of the promising innovations in the field of clustering. Hartigan's [14] so-called "direct clustering" is the inspiration for the introduction of the term biclustering to gene expression analysis by
Cheng and Church [6]. Biclustering, also called co-clustering, is the problem of simultaneously clustering rows and columns of a data matrix. Unlike clustering, which seeks similar rows or columns, co-clustering seeks "blocks" (or co-clusters) of rows and columns that are inter-related. In recent years, biclustering has received a lot of attention in several practical applications, such as simultaneous clustering of documents and words in text mining [16], genes and experimental conditions in bioinformatics [6,19], and tokens and contexts in natural language processing [27], among others.

Many approaches to biclustering gene expression data have been proposed to date (and the problem is known to be NP-complete). Existing biclustering methods include δ-biclusters [6,32], gene shaving [15], the flexible overlapped biclustering algorithm (FLOC) [33], the order preserving sub-matrix (OPSM) [3], interrelated two-way clustering (ITWC) [29], coupled two-way clustering (CTWC) [13], spectral biclustering [19], the statistical algorithmic model (SAMBA) [28], the iterative signature algorithm [17], a fast divide-and-conquer algorithm (Bimax) [9], and maximum similarity biclustering (MSBE) [22]. A detailed survey on biclustering algorithms for biological data analysis can be found in [23]; the paper presents a comprehensive survey of the models, methods and applications in the field of biclustering algorithms. Another interesting paper on the comparison and evaluation of biclustering methods for gene expression data is [9]; it compares five prominent biclustering methods, together with hierarchical clustering and a baseline biclustering algorithm, with respect to their capability of identifying groups of (locally) co-expressed genes.

Different approaches to biclustering gene expression data, based on the Hough transform, possibilistic approaches, fuzzy methods and genetic algorithms, can also be found in the literature. A geometric biclustering algorithm based on the Hough Transform (HT) for the analysis of large scale microarray data is presented in [35]. The HT is performed in a two-dimensional (2-D) space of column pairs, and coherent columns are then combined iteratively to form larger and larger biclusters. This reduces the computational complexity considerably and makes it possible to analyze large-scale microarray data. An HT-based algorithm has also been developed to analyze three-color microarray data, which involve three experimental conditions [36]. However, the original HT-based algorithm becomes ineffective, in terms of both computing time and storage space, as the number of conditions increases. Another approach based on a geometric interpretation of the biclustering problem is described in [12]. An implementation of the geometric framework using the Fast Hough transform for hyperplane detection can be used to discover biologically significant subsets of genes under subsets of conditions for microarray data analysis.

Work on a possibilistic approach for biclustering microarray data is presented in [5]. This approach obtains potentially overlapping biclusters with the Possibilistic Spectral Biclustering algorithm (PSB), based on fuzzy technology and spectral clustering. The method is tested on S. cerevisiae cell cycle expression data and on a human cancer dataset. An approach to the biclustering problem using the Possibilistic Clustering paradigm is described in [11]; this method finds one
bicluster at a time, assigning a membership to the bicluster for each gene and for each condition. Possibilistic clustering is tested on the Yeast database, obtaining fast convergence and good quality solutions. Genetic Algorithms (GA) have also been used in [24] to maximize homogeneity in a single bicluster. Given the complexity of the biclustering problem, the method adopts a global search method, namely genetic algorithms. A multicriterion approach is also used to optimize two fitness functions implementing the criteria of bicluster homogeneity (Mean Squared Residue) and size [7]. A comparison of fuzzy approaches to biclustering is given in [10], where fuzzy central clustering approaches to biclustering are shown to have very interesting properties, such as speed in finding a solution, stability, and the capability of obtaining large and homogeneous biclusters.

SVD based methods have also been used to obtain biclusters in gene expression data, as well as in many other potential applications [8,21]. Applying SVD directly to the data may yield biclusters, but obtaining a large number of biclusters functionally enriched with GO categories is still a challenging problem. Hence, in this paper we propose a method based on modular SVD for biclustering gene expression data. In this way, local features of genes and conditions can be extracted efficiently in order to obtain better biclusters.

The organization of the paper is as follows: in Sect. 2, we explain the proposed Modular SVD based method. In Sect. 3, we perform experiments on synthetic and real datasets. Finally, conclusions are drawn at the end.
2 Proposed Method
In this section we describe our proposed method, which is based on modular (data subsets) SVD. The SVD is one of the most important and powerful tools used in numerical signal processing. It is employed in a variety of signal processing applications, such as spectrum analysis, filter design, system identification, etc. SVD based methods have also been used to obtain biclusters in gene expression data, as well as in many other potential applications [8,21]. Applying SVD directly to the data may yield biclusters, but obtaining efficient biclusters is still a challenging problem. The standard SVD based method may not be very effective under different conditions, since it considers the global information of genes and conditions and represents them with a set of weights. Hence, in this work we attempt to overcome the aforementioned problem by partitioning the gene expression data into several smaller subsets and then applying SVD to each of the sub-data separately. The three main steps involved in our method, named M-SVD Biclustering, are:
1. The original whole data set, denoted by a matrix, is partitioned into a set of equally sized sub-data in a non-overlapping way.
2. SVD is performed on each of these data subsets.
3. Finally, a single global feature is synthesized by concatenating the features from each data subset.
In this work, we partitioned the data along rows in a non-overlapping way, as shown in Figure 1. The size of each partition is around 20% of the original matrix.
Fig. 1. Procedure of applying SVD on partitioned data
Each step of the algorithm is explained in detail as follows.

Data Partition: Let us consider an m × n matrix A, which contains m genes and n conditions. This matrix A is partitioned into K d-dimensional sub-matrices of similar sizes in a non-overlapping way, A = (A1, A2, ..., AK), with Ak being a sub-datum of A. Figure 1 shows the partition procedure for a given data matrix. Note that a partition of A can be obtained in many different ways, e.g., selecting groups of contiguous rows or columns, or randomly sampling some rows or columns. In this work, we selected groups of contiguous rows for the experiments.

Apply SVD on the K sub-data: According to the second step, conventional SVD is applied to each of the K sub-data. The SVD provides a factorization for all matrices, even matrices that are not square or have repeated eigenvalues. In general, the theory of SVD states that any matrix A of size m × n can be factorized into a product of unitary matrices and a diagonal matrix as follows [20]:

    A = U Σ V^T,    (1)

where U ∈ R^{m×m} is unitary, V ∈ R^{n×n} is unitary, and Σ ∈ R^{m×n} has the form Σ = diag(λ1, λ2, ..., λp), where p = min(m, n). The diagonal
elements of Σ are called the singular values of A and are usually ordered in descending order. The SVD yields the eigenvectors of AA^T in U and of A^T A in V. This decomposition may be represented as shown in Figure 2.
Fig. 2. Illustration of SVD method
The "diagonal" of Σ contains the singular values of A. They are real and non-negative numbers. The upper part (green strip) contains the positive singular values. There are r positive singular values, where r is the rank of A. The lower part of the diagonal (gray strip) contains the (n − r), or "vanishing", singular values.

The SVD is known to be more robust than the usual eigenvectors of the covariance matrix. This is because the robustness is determined by the directional vectors rather than by mere scalar quantities such as magnitudes (the singular values stored in Σ). Since the U and V matrices are inherently orthogonal, these directions are encoded in the U and V matrices. This is unlike the eigenvectors, which need not be orthogonal. Hence, a small perturbation such as noise has very little effect on the orthogonal properties encoded in the U and V matrices. This could be the main reason for the robust behavior of the SVD.

Finally, for each of the data partitions, we would expect the eigenvectors corresponding to the largest eigenvalue to provide the optimal clusters. However, we also observed that eigenvectors with smaller eigenvalues could yield clusters. Instead of clustering each eigenvector individually, we perform clustering by applying k-means to the data projected onto the best three or four eigenvectors. In general, the procedure can be described as follows:
– Apply SVD on each partitioned data block.
– Project the data onto the assigned number of eigenvectors.
– Apply the k-means algorithm to the projected data using the desired number of clusters.
– Finally, find the biclusters for the specified number of clusters.
Steps to obtain biclusters:
– [U, S, V] = svd(partition_data)
– cluster_index = kmeans(projected (U & V), desired number of clusters)
– cluster_data = cluster_index(x, y)

More formally, the proposed method is presented in the form of the algorithm shown below.

Algorithm: Modular SVD
– Input: gene expression data
– Output: heterogeneity H and bicluster size N
– Steps:
1. Acquire the gene expression matrix and generate K d-dimensional sub-data in a non-overlapping way, reshaped into a K × n matrix A = (A1, A2, ..., AK), with Ak being a sub-datum of A.
2. Apply the standard SVD method to each sub-datum obtained in Step 1.
3. Perform the final clustering step by applying k-means to the data projected onto the best three or four eigenvectors.
4. Repeat this procedure for all the partitions present in the gene expression data.
5. Finally, compute the heterogeneity H and size N from the resulting biclusters obtained from each partition matrix.
– Algorithm ends
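As a concrete illustration of this flow, the following is a minimal Python sketch of M-SVD biclustering, assuming contiguous, non-overlapping row blocks of roughly 20% of the rows each, as in the text; only the gene (row) side is shown (the condition side, via V, is analogous), and all parameter names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def msvd_row_clusters(A, K=5, n_components=3, n_clusters=4):
    """Cluster the rows of A block-by-block after a per-block SVD projection."""
    labels = {}
    for rows in np.array_split(np.arange(A.shape[0]), K):
        sub = A[rows, :]                                # one sub-datum A_k
        U, s, _ = np.linalg.svd(sub, full_matrices=False)
        proj = U[:, :n_components] * s[:n_components]   # project onto the top
        km = KMeans(n_clusters=n_clusters, n_init=10)   # three or four eigenvectors
        km.fit(proj)
        for r, lab in zip(rows, km.labels_):
            labels[int(r)] = int(lab)                   # cluster index per gene
    return labels
```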
3 Experimental Results and Comparative Study
In this section we experimentally evaluate the proposed method on synthetic and standard datasets. The proposed algorithm has been coded in the R language, on a Pentium IV 2 GHz with 756 MB of RAM under the Windows platform. Following [6], we are interested in the largest biclusters N from DNA microarray data that do not exceed an assigned homogeneity constraint. To this aim we can utilize the definition of biclustering average heterogeneity H. The size N of a bicluster is usually defined as the number of cells of the gene expression matrix X belonging to it, that is, the product of the cardinalities ng = |g| and nc = |c| (here g and c refer to genes and conditions, respectively):

    N = ng · nc,    (2)

where ng and nc are the number of rows and the number of columns of X belonging to the bicluster, respectively.
We can define H and G as the sum-squared residue and the mean-squared residue, respectively, two related quantities that measure bicluster heterogeneity:

    H = Σ_{i∈g} Σ_{j∈c} d_ij²,    (3)

    G = (1/N) Σ_{i∈g} Σ_{j∈c} d_ij² = H/N,    (4)

where d_ij = x_ij + x_IJ − x_iJ − x_Ij, and x_IJ, x_iJ and x_Ij are the bicluster mean, the row mean and the column mean of X for the selected genes and conditions, respectively.
3.1 Results on Synthetic Dataset
We compared our results with a standard spectral method [19] using a synthetic and a standard real dataset. To obtain the synthetic dataset, we generated matrices with random values of size 100 × 10 (rows × columns). Table 1 shows the heterogeneity and bicluster size obtained using the standard spectral method and the proposed method. From the results we can notice that the spectral method failed to find a bicluster, whereas the proposed method successfully extracted a bicluster from the data.

We also tested our method on another synthetic dataset. To this aim we generated 10 × 10 matrices with random numbers, in which 3 similar biclusters were embedded, with dimensions ranging from 3-5 rows and 5-7 columns. Heterogeneity H and bicluster size N are tabulated in Table 2. The heterogeneity obtained by the two methods is competitive; however, in terms of bicluster size, the proposed method achieved a larger size than the standard spectral method. Figure 3 shows an example of biclusters extracted using the proposed method for the synthetic dataset. From the above two results, it is ascertained that the proposed M-SVD based method performs better than the standard spectral method in both heterogeneity and bicluster size.

Table 1. Heterogeneity and size N for the synthetic dataset of size 100 × 10

    Methods        Heterogeneity (H)   Size (N)
    Spectral [19]         —               —
    M-SVD-BC            0.90              70

Table 2. Heterogeneity (H) and size N for the synthetic dataset of size 10 × 10

    Methods        Heterogeneity (H)   Size (N)
    Spectral [19]        7.1              30
    M-SVD-BC            7.21              51
Fig. 3. (a) A synthetic dataset with multiple biclusters of different patterns; (b-d) the biclusters extracted
3.2 Results on Real Dataset
For the experiments on real data, we tested our proposed method on the standard Bicat Yeast, Syntren E. coli and Yeast datasets. In the Bicat Yeast dataset [2], the data structure holds information about the expression levels of 419 probesets over 70 conditions, following the Affymetrix probeset notation. Affymetrix data files are normally available in DAT, CEL, CHP and EXP formats. Data contained in CDF files can also be used; these hold the information about which probes belong to which probe set. For more information on file formats refer to [1]. Results pertaining to this dataset are reported in Table 3.

Another important experiment concerns the performance index, defined as the ratio N/G: the larger the ratio, the better the performance. Figure 4 shows the relationship between the two quality criteria obtained by the proposed and the spectral [19] methods, respectively. The first column in the graph represents the bicluster size (N) for a varying number of principal components (PCs). The second column shows the value of the performance index N/G.

Table 3. H and N for the standard Bicat Yeast dataset

    Methods        Heterogeneity (H)   Size (N)
    Spectral [19]       0.721            680
    M-SVD-BC            0.789           2840
Another data structure, with information about the expression levels of 200 genes over 20 conditions from a transcription regulatory network, is also used in our experiments [26]. A detailed description of Syntren can be found in [4]. The heterogeneity and bicluster size results are tabulated in Table 4. The heterogeneity obtained by the spectral method is high compared to the proposed method; the proposed method obtained a lower heterogeneity and achieved a larger bicluster size than the spectral method.
Fig. 4. Performance comparison of biclustering techniques: size (N) and ratio N/G

Table 4. H and N for the standard Syntren E. coli dataset

    Methods        Heterogeneity (H)   Size (N)
    Spectral [19]       19.95             16
    M-SVD-BC             7.07            196
We also applied our algorithm to the Yeast dataset, which is a genomic database composed of 2884 genes and 17 conditions [30]. The original microarray gene expression data can be obtained from http://arep.med.harvard.edu/biclustering/yeast.matrix. Results are expressed as both the heterogeneity H and the bicluster size N. Table 5 shows the H and N obtained by well-known algorithms on the Yeast dataset. Here we compared our method with Deterministic Biclustering with Frequent pattern mining [34], Flexible Overlapped Clusters [33], Cheng and Church [6], single-objective [7] and multi-objective [25] Genetic Algorithms, Fuzzy Co-clustering with Ruspini's condition [31], and Spectral [19]. From the results it is ascertained that the proposed modular SVD extracts a larger bicluster size compared to the other well-known techniques.
Table 5. H and N for the standard Yeast dataset

    Methods                   Heterogeneity (H)   Size (N)
    DBF [34]                        115             4000
    FLOC [33]                       188             2000
    Cheng and Church [6]            204             4485
    Single-objective GA [7]         52.9            1408
    Multi-objective GA [25]         235            14828
    FCR [31]                       973.9           15174
    Spectral [19]                   201            12320
    Proposed Method                 259            18845
From the above observations and results it is clear that applying SVD in a modular fashion yields better performance than the standard approaches.
4 Conclusion
In this paper, a novel approach for biclustering gene expression data based on Modular SVD is proposed. Instead of applying SVD directly to the data, the proposed method uses a modular approach to extract biclusters. The standard SVD based method may not be very effective under different conditions, since it considers the global information of genes and conditions and represents them with a set of weights. By applying SVD in a modular way, local features of genes and conditions can be extracted efficiently in order to obtain better biclusters. The efficiency of the proposed method was tested on synthetic as well as real datasets, demonstrating the effectiveness of the algorithm when compared with well-known existing ones.
References
1. http://www.affymetrix.com/analysis/index.affix
2. Barkow, S., Bleuler, S., Prelic, A., Zimmermann, P., Zitzler, E.: BicAT: a biclustering analysis toolbox. Bioinformatics 19, 1282–1283 (2006)
3. Ben-Dor, A., Chor, B., Karp, R., Yakhini, Z.: Discovering local structure in gene expression data: the order-preserving sub-matrix problem. In: Proceedings of the Sixth Annual International Conference on Computational Biology, pp. 49–57. ACM Press, New York (2002)
4. Van den Bulcke, T., Van Leemput, K., Naudts, B., van Remortel, P., Ma, H., Verschoren, A., De Moor, B., Marchal, K.: SynTReN: a generator of synthetic gene expression data for design and analysis of structure learning algorithms. BMC Bioinformatics 7, 1–16 (2006)
5. Cano, C., Adarve, L., López, J., Blanco, A.: Possibilistic approach for biclustering microarray data. Computers in Biology and Medicine 37, 1426–1436 (2007)
6. Cheng, Y., Church, G.M.: Biclustering of expression data. In: Proceedings of the Intl. Conf. on Intelligent Systems and Molecular Biology, pp. 93–103 (2000)
7. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. on Evolutionary Computation 6, 182–197 (2002)
8. Dhillon, I.S.: Co-clustering documents and words using bipartite spectral graph partitioning. In: Proceedings of the 7th ACM SIGKDD, pp. 269–274 (2001)
9. Prelic, A., et al.: A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics 22, 1122–1129 (2006)
10. Filippone, M., Masulli, F., Rovetta, S., Zini, L.: Comparing fuzzy approaches to biclustering. In: Proceedings of the International Meeting on Computational Intelligence for Bioinformatics and Biostatistics, CIBB (2008)
11. Filippone, M., Masulli, F., Rovetta, S.: Possibilistic approach to biclustering: An application to oligonucleotide microarray data analysis. In: Proceedings of Computational Methods in Systems Biology, pp. 312–322 (2006)
12. Gan, X., Liew, A.W.C., Yan, H.: Discovering biclusters in gene expression data based on high dimensional linear geometries. BMC Bioinformatics 9, 209–223 (2008)
13. Getz, G., Levine, E., Domany, E.: Coupled two-way clustering analysis of gene microarray data. In: Proceedings of the National Academy of Sciences, pp. 12079–12084 (2000)
14. Hartigan, J.A.: Direct clustering of a data matrix. Journal of the American Statistical Association 67, 123–129 (1972)
15. Hastie, T., et al.: 'Gene shaving' as a method for identifying distinct sets of genes with similar expression patterns. Genome Biology 1, 0003.1–0003.21 (2000)
16. Dhillon, I., Mallela, S., Modha, D.: Information-theoretic co-clustering. In: Proceedings of the 9th International Conference on Knowledge Discovery and Data Mining (KDD), pp. 89–98 (2003)
17. Ihmels, J., Bergmann, S., Barkai, N.: Defining transcription modules using large-scale gene expression data. Bioinformatics 20, 1993–2003 (2004)
18. Jain, A.K., Murty, M.N., Flynn, P.J.: Data clustering: A review. ACM Computing Surveys 31, 264–323 (1999)
19. Kluger, Y., Basri, R., Chang, J.T., Gerstein, M.: Spectral biclustering of microarray data: Coclustering genes and conditions. Genome Research 13, 703–716 (2003)
20. Lay, D.C.: Linear Algebra and its Applications. Addison-Wesley, Reading (2002)
21. Li, Z., Lu, X., Shi, W.: Process variation dimension reduction based on SVD. In: Proceedings of the Intl. Symposium on Circuits and Systems, pp. 672–675 (2003)
22. Liu, X., Wang, L.: Computing the maximum similarity bi-clusters of gene expression data. Bioinformatics 23, 50–56 (2007)
23. Madeira, S.C., Oliveira, A.L.: Biclustering algorithms for biological data analysis: A survey. IEEE & ACM Trans. on Computational Biology and Bioinformatics 1, 24–45 (2004)
24. Mitra, S., Banka, H.: Multi-objective evolutionary biclustering of gene expression data. Pattern Recognition 39, 2464–2477 (2006)
25. Mitra, S., Banka, H.: Multi-objective evolutionary biclustering of gene expression data. Pattern Recognition 39, 2464–2477 (2006)
26. Shen-Orr, S.S., et al.: Network motifs in the transcriptional regulation network of Escherichia coli. Nature Genetics 31, 64–68 (2002)
27. Rohwer, R., Freitag, D.: Towards full automation of lexicon construction. In: HLT-NAACL 2004: Workshop on Computational Lexical Semantics, pp. 9–16 (2004)
28. Tanay, A., Sharan, R., Shamir, R.: Discovering statistically significant biclusters in gene expression data. Bioinformatics 18, 136–144 (2002)
29. Tang, C., Zhang, L., Zhang, A., Ramanathan, M.: Interrelated two-way clustering: an unsupervised approach for gene expression data analysis. In: Proceedings of the Second Annual IEEE International Symposium on Bioinformatics and Bioengineering, BIBE, pp. 41–48 (2001)
30. Tavazoie, S., Hughes, J.D., Campbell, M.J., Cho, R.J., Church, G.M.: Systematic determination of genetic network architecture. Nature Genetics 22, 281–285 (1999)
31. Tjhi, W.C., Chen, L.: A partitioning based algorithm to fuzzy co-cluster documents and words. Pattern Recognition Letters 27, 151–159 (2006)
32. Yang, J., Wang, W., Wang, H., Yu, P.: δ-clusters: capturing subspace correlation in a large data set. In: Proceedings of the 18th IEEE International Conference on Data Engineering, pp. 517–528 (2002)
33. Yang, J., Wang, W., Wang, H., Yu, P.: Enhanced biclustering on expression data. In: Proceedings of the Third IEEE Conference on Bioinformatics and Bioengineering, pp. 321–327 (2003)
34. Zhang, Z., Teo, A., Ooi, B.: Mining deterministic biclusters in gene expression data. In: Proceedings of the 4th IEEE Symposium on Bioinformatics and Bioengineering, p. 283 (2004)
35. Zhao, H., Liew, A.W.C., Xie, X., Yan, H.: A new geometric biclustering algorithm based on the Hough transform for analysis of large scale microarray data. Journal of Theoretical Biology 251, 264–274 (2008)
36. Zhao, H., Yan, H.: Hough feature, a novel method for assessing drug effects in three-color cDNA microarray experiments. BMC Bioinformatics 8, 256 (2007)
Using Computational Intelligence to Develop Intelligent Clinical Decision Support Systems

Alexandru G. Floares¹,²

¹ SAIA - Solutions of Artificial Intelligence Applications, Cluj-Napoca, Romania
[email protected]
² Department of Artificial Intelligence, Cancer Institute Cluj-Napoca, Romania
[email protected]

Abstract. Clinical Decision Support Systems have the potential to optimize medical decisions, improve medical care, and reduce costs. An effective strategy to reach these goals is to transform conventional Clinical Decision Support Systems into Intelligent Clinical Decision Support Systems, using knowledge discovery in data and computational intelligence tools. In this paper we used genetic programming and decision trees. Adaptive Intelligent Clinical Decision Support Systems also have the capability of self-modifying their rule sets, through supervised learning from patient data. Intelligent and Adaptive Intelligent Clinical Decision Support Systems represent an essential step toward clinical research automation too, and a stronger foundation for evidence-based medicine. We propose a methodology and related concepts, and analyze the advantages of transforming conventional Clinical Decision Support Systems into intelligent, adaptive ones. These are illustrated with a number of our results in liver diseases and prostate cancer, some of them showing the best published performance.
1 Introduction
Using IT to replace painful, invasive, or costly procedures, to optimize various medical decisions, and to improve medical care while reducing costs, represents a major goal of Biomedical Informatics. Using Clinical Decision Support Systems (CDSS), which are computer systems designed to impact clinician decision making about individual patients at the point in time that these decisions are made [1], on a large scale is a major step toward these goals. Transforming conventional CDSS into Intelligent Clinical Decision Support Systems (i-CDSS), using knowledge discovery in data and computational intelligence, could mark a revolutionary step in the evolution of evidence-based medicine. Adaptive i-CDSS also have the capability of self-modifying their rule sets through learning from patient data. Intelligent and adaptive CDSS are a significant step toward clinical research automation too. We are performing a series of investigations on constructing i-CDSS for several liver diseases, prostate and thyroid cancer, and chromosomal disorders (e.g., Down syndrome) during pregnancy. Here, we propose a methodology and related concepts, and analyze the advantages of transforming conventional CDSS
into intelligent and evolving CDSS. These are briefly illustrated with some of our results in liver diseases, some of them showing the best published performance.
2 Proposed Methods and Concepts
We are developing a set of methods for transforming conventional CDSS into i-CDSS in liver diseases, especially chronic hepatitis C and B, prostate and thyroid cancer, and chromosomal disorders (e.g., Down syndrome) during pregnancy. Some of these methods and concepts are mature enough to be presented, but they are still under development.

2.1 Non-invasive i-CDSS for Fibrosis and Necroinflammation Assessment in Chronic Hepatitis B and C
Chronic hepatitis B and C are major diseases of mankind and a serious global public health problem. Persons with these chronic diseases are at high risk of death from cirrhosis and liver cancer. Liver biopsy is the gold standard for grading the severity of disease, staging the degree of fibrosis and grading the necroinflammation. The most used scoring systems are METAVIR A (A for activity) or Ishak NI (NI for necroinflammatory), and METAVIR F or Ishak F for the fibrosis stage (F). By assigning scores for severity, the grading and staging of hepatitis are used as a CDSS for patient management.

The first step of the proposed strategy consists in identifying the problems of the conventional CDSS that are potentially solvable with the aid of computational intelligence. We note that the investigated CDSS are based on clinical practice guidelines, adapted to our clinical studies; they were built and implemented by our interdisciplinary team. We analyzed the clinical workflows used in monitoring chronic hepatitis B and C patients. Liver biopsy is invasive, painful, and relatively costly; complications severe enough to require hospitalization can occur in approximately 4% of patients [2]. In a review of over 68,000 patients recovering from liver biopsy, 96% experienced adverse symptoms during the first 24 hours of recovery. Hemorrhage was the most common symptom, but infections also occurred. Side effects of the biopsies included pain, tenderness, internal bleeding, pneumothorax, and, rarely, death [3].

The second step consists in searching the literature for potential solutions and critically evaluating them. There are two main non-invasive diagnosis techniques of interest [4]. FibroScan® is a type of ultrasound machine that uses transient elastography to measure liver stiffness. The device reports a value that is measured in kilopascals (kPa). FibroTest, for assessing fibrosis, and ActiTest, for assessing necroinflammatory activity, are available through BioPredictive (www.biopredictive.com). These tests use algorithms to combine the results of serum tests of beta 2-macroglobulin, haptoglobin, apolipoprotein A1, total bilirubin, gamma glutamyltranspeptidase (GGT), and alanine aminotransferase (ALT). The results of these diagnosis techniques are not directly interpretable by a pathologist, but can be extrapolated to a fibrosis and necroinflammation score.
FibroTest and FibroScan® have reasonably good utility for the identification of cirrhosis, but lesser accuracy for earlier stages. It is considered that refinements are necessary before these tests can replace liver biopsy [4], and this motivates us to continue our investigation.

The third step consists in investigating the possibility of building i-CDSS capable of solving the problems of the conventional CDSS. To this goal, we used a knowledge discovery in data approach, based on computational intelligence. The existing solutions have two main weak points which could be addressed: the accuracy, which might be improved, and the fact that the results are not expressed in the familiar pathologist scoring systems. We extracted and integrated information from various non-invasive data sources, e.g., imaging, clinical, and laboratory data, and built i-CDSS capable of predicting the biopsy results, fibrosis stage and necroinflammation grade, with an acceptable accuracy. The meaning of an acceptable accuracy is a matter of consensus. Probably, in this context, taking into account the opinions of the hepatologists on our team and the performance of the best commercial systems (see above), the majority of physicians will agree that the prediction accuracy should be at least 80%.
2.2 i-CDSS for Optimizing Interferon Treatment and Reducing Error Costs in Chronic Hepatitis B and C
Another major conventional CDSS that will be investigated and transformed into an adaptive i-CDSS, using the same three-step approach, is related to treatment. Chronic hepatitis B and C are treated with drugs called Interferon or Lamivudine, which can help some patients. This treatment decision is based on several patient selection criteria. For example, the Romanian Ministry of Health's criteria for selecting the patients with chronic hepatitis C who will benefit from Interferon treatment are (an illustrative encoding of such a rule set is sketched below):
1. Chronic infection with the hepatitis C virus (HCV): antibodies against HCV (anti-HCV) are present for at least 3 months;
   (a) the hepatitis B surface antigen (HBsAg) is present for at least 6 months, or
   (b) the hepatitis B e antigen (HBeAg) is present for at least 10 weeks.
2. The cytolytic syndrome: the transaminases level is increased or normal.
3. Pathology (biopsy): Ishak NI ≥ 4 and Ishak F ≤ 3.
4. The virus is replicating: the transaminases level is increased or normal, anti-HCV are present, and RNA-HCV ≥ 10^5 copies/milliliter.
For hepatitis B there is a similar set of selection rules. A careful analysis of these conventional CDSS identifies two problems:
1. Invasiveness: the patient selection criteria include fibrosis and necroinflammation assessed by liver biopsy, an invasive medical procedure.
2. Cost of wrong decisions: patient selection mistakes are very costly, because Interferon or Lamivudine therapy costs thousands of dollars.
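For illustration only, such a fixed rule set can be encoded as a simple boolean predicate; the field names and the patient record structure below are assumptions made for the example, not part of the cited guidelines (the cytolytic-syndrome criterion, satisfied for both increased and normal transaminase levels, is folded into a single flag).

```python
def eligible_for_interferon(patient: dict) -> bool:
    """Sketch of the conventional (non-adaptive) HCV selection rules."""
    chronic_hcv = patient["anti_hcv_months"] >= 3                     # criterion 1
    cytolytic = patient["transaminases_increased_or_normal"]          # criterion 2
    pathology = patient["ishak_ni"] >= 4 and patient["ishak_f"] <= 3  # criterion 3
    replicating = (cytolytic and chronic_hcv
                   and patient["rna_hcv_copies_per_ml"] >= 1e5)       # criterion 4
    return chronic_hcv and cytolytic and pathology and replicating
```

An adaptive i-CDSS would replace the hand-set thresholds in such a predicate with values learned from treatment outcomes, as described below.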
Developing intelligent CDSS, based on non-invasive medical investigations and optimized selection criteria, could be of great benefit to the patients and could also save money. To this goal, one should investigate whether it is possible:
1. To build i-CDSS capable of predicting the biopsy results, fibrosis stage and necroinflammation grade, with an accuracy of at least 80%.
2. To integrate the i-CDSS predicting the biopsy results with the other selection criteria in an i-CDSS for Interferon treatment.
3. To make the Interferon treatment i-CDSS an evolving one, capable of optimizing the treatment decisions by self-modifying through learning.
Evolving i-CDSS can minimize the costs due to wrong patient selection and maximize the benefit of the treatment. They can optimize the selection rule sets by finding the relevant selection criteria and their proper cutoff values. For this, the results of the Interferon treatment must be clearly defined as numerical or categorical attributes and registered in a database for each treated patient. Then, intelligent agents are employed to learn to predict the treatment results. They must be capable of expressing the extracted information in the form of rules, using non-invasive clinical, laboratory and imaging attributes as inputs. Using feature selection [5], one finds the relevant patient selection criteria. Thus, the i-CDSS starts with the largely accepted patient selection criteria, but these evolve. It is worth mentioning that the evolved selection criteria could be different from, and usually better than, those initially proposed by physicians. However, they should always be evaluated by the experts. In the supervised learning process, the intelligent agents also discover the proper cutoff values of the relevant selection criteria. Again, these are usually better than those proposed by experts, but they should always be evaluated by them. In our opinion, learning and adapting capabilities are of fundamental importance for evidence-based medicine.

The third step of our approach is also the key step and the most complex one. The i-CDSS are the result of a data mining predictive modeling strategy, which is now patent pending, consisting mainly in:
1. Extracting and integrating information from various medical data sources, after a laborious preprocessing: (a) cleaning features and patients, (b) various treatments of missing data, (c) ranking features, (d) selecting features, (e) balancing data.
2. Testing various classifiers or predictive modeling algorithms.
3. Testing various ensemble methods for combining classifiers.
The following variables were removed:
1. Variables that have more than 70% missing values.
2. Categorical variables that have a single category accounting for more than 90% of cases.
3. Continuous variables that have a very small standard deviation (almost constant).
4. Continuous variables that have a coefficient of variation CV ≤ 0.1 (CV = standard deviation / mean).
5. Categorical variables that have a number of categories greater than 95% of the number of cases.
A code sketch of these screening rules is given after the following list. For modeling, we first tested the fibrosis and necroinflammation prediction accuracy of various types of computational intelligence agents:
1. Neural Networks of various types and architectures,
2. Decision trees: C5.0 and Classification and Regression Trees,
3. Support Vector Machines, with various kernels,
4. Bayesian Networks,
5. Genetic Programming based agents.
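As a hedged illustration of the variable-screening rules above, the following pandas sketch applies the five criteria; the function name is hypothetical and the numeric thresholds simply follow the text.

```python
import pandas as pd

def screen_variables(df: pd.DataFrame) -> pd.DataFrame:
    """Drop variables according to the five screening rules in the text."""
    keep, n = [], len(df)
    for col in df.columns:
        s = df[col]
        if s.isna().mean() > 0.70:                             # rule 1
            continue
        if s.dtype == object or str(s.dtype) == "category":
            if s.value_counts(normalize=True).iloc[0] > 0.90:  # rule 2
                continue
            if s.nunique() > 0.95 * n:                         # rule 5
                continue
        else:
            mean, std = s.mean(), s.std()
            if std < 1e-8:                                     # rule 3: near-constant
                continue
            if mean != 0 and std / abs(mean) <= 0.1:           # rule 4: CV <= 0.1
                continue
        keep.append(col)
    return df[keep]
```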
Usually, physicians prefer white-box algorithms for supporting their clinical decisions. Of the above algorithms, decision trees, Bayesian Networks and Genetic Programming are white-box. Decision trees can be produced using Genetic Programming too. Genetic Programming has the unique capability of automatically producing mathematical models. Unfortunately, physicians are not very familiar with mathematical models, and they will be inclined to favor decision trees. Mathematical models are preferred when they are more accurate than the other white-box algorithms and also simple. Simplicity can be partially controlled by properly restricting the function set to the simplest one reaching the desired performance.

As decision trees, we chose the C5.0 algorithm, the latest version of the C4.5 algorithm [6], with 10-fold cross-validation. As the ensemble method, we used Freund and Schapire's boosting [7] to improve the predictive power of the C5.0 classifier learning systems: a set of C5.0 classifiers is produced and combined by voting, with the weights of the training cases adjusted at each round. We suggest that boosting should always be tried when peak predictive accuracy is required, especially when the unboosted classifiers are already quite accurate (an illustrative sketch is given just before Table 1).

We also used a linear version of steady-state genetic programming proposed by Banzhaf (see [8] for a detailed introduction and the literature cited there). In linear genetic programming the individuals are computer programs represented as a sequence of instructions from an imperative programming language or machine language. Nordin introduced the use of machine code in this context (cited in [8]). The major preparatory steps for GP consist in determining:
1. the set of terminals (see Table 1),
2. the set of functions (see Table 1),
3. the fitness measure,
4. the parameters for the run (see Table 1),
5. the method for designating a result, and
6. the criterion for terminating a run.
The function set, also called the instruction set in linear GP, can be composed of standard arithmetic or programming operations, standard mathematical functions, logical functions, or domain-specific functions. The terminals are the attributes and parameters.
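Since C5.0 is a commercial system, the following scikit-learn sketch uses AdaBoost (Freund and Schapire's boosting) over ordinary decision trees as an analogous stand-in, with 10-fold cross-validation as in the text; it is an assumption-laden illustration, not the authors' pipeline.

```python
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def boosted_tree_cv_accuracy(X, y):
    """Mean 10-fold CV accuracy of a boosted decision-tree ensemble."""
    clf = AdaBoostClassifier(DecisionTreeClassifier(max_depth=5),
                             n_estimators=50)
    return cross_val_score(clf, X, y, cv=10, scoring="accuracy").mean()
```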
Table 1. Genetic Programming Parameters

    Parameter                                 Setting
    Population size                           500
    Mutation frequency                        95%
    Block mutation rate                       30%
    Instruction mutation rate                 30%
    Instruction data mutation rate            40%
    Crossover frequency                       50%
    Homologous crossover                      95%
    Program size                              80-128
    Demes:
      Crossover between demes                 0%
      Number of demes                         10
      Migration rate                          1%
    Dynamic subset selection:
      Target subset size                      50
      Selection by age                        50%
      Selection by difficulty                 50%
      Stochastic selection                    0%
      Frequency (in generation equivalents)   1
    Function set                              {+, -, *, /}
    Terminal set                              64 = j + k (j constants, k inputs)
Because we are reporting work in progress, we will only mention some of the main features of our experiments; a detailed description is in preparation. The aforementioned criteria for feature selection were passed by about 40 features. Usually, removing the patients with missing values of the selected features is a conservative and robust practice. The problem is that the disproportion between the number of features and the number of records increases. We experimented with 40 features and a small number of patients, due to the cleaning of patients with missing values for some of these features, and the accuracy was about 80%. With 25 features the accuracy significantly increased, but it started to decrease again when the number of features was reduced to 20, 15, and 10, respectively. The rationale for these experiments is that we want to end up with a number of features small enough for the method to be clinically feasible, but large enough for an acceptable accuracy. Both decision trees and genetic programming algorithms performed their own feature selection, but they needed enough features to do it properly. As previously mentioned, we are preparing a detailed manuscript with these results, but here we will just give a short description of the results of applying GP to 20 features. We used a commercial GP software package, Discipulus, by RML Technologies, Inc., and the main parameter settings of the GP algorithm are presented in Table 1. This software package has the facility of building teams of models. The experiments were performed on a Lenovo ThinkStation, Intel Xeon CPU, E5450, 2 processors 3.00GHz, 32 GB RAM, Windows Vista 64 bit OS.
3 Results
We started with a dataset of 700 patients and more than 100 inputs. The accuracy of the first experiments, using the aforementioned algorithms with default settings and without careful data preprocessing, was about 60%. Preprocessing increased the accuracy by 20% to 25% for most of the algorithms. The C5.0 accuracy was one of the highest, about 80%. Parameter tuning and boosting increased the accuracy of some i-CDSS even to 100% [9], [10], [11]. The GP results were similar, and building teams of models increased the accuracy to about 91%-92%. The GP results are preliminary, but for some reasons (not presented here) we believe that this accuracy is more robust against the subjectivity of ultrasound image interpretation. Moreover, they are not only in the acceptable accuracy range, starting from about 80%, but they also outperform the two best methods already in use in clinical practice, FibroTest and FibroScan® [4]. We obtained similar results for prostate cancer, predicting the Gleason score with about 92% accuracy (manuscript in preparation). We also built the i-CDSS for Interferon treatment [12], [13], [14]. This non-invasive i-CDSS is of a special kind, being able to adapt: by attempting to predict the progressively accumulating results of the Interferon treatment, it identifies in time the proper patient selection criteria and their cutoff values from the data. Thus, the rule set of this i-CDSS is evolving.

We tried to develop not only the technical foundation of the intelligent evolving CDSS, but also the related concepts. A central one is i-Biopsy™, which is an i-CDSS capable of predicting, with an acceptable accuracy (e.g., at least 80%), the results usually given by a pathologist examining the tissue samples from biopsies, expressed as scores of a largely accepted scoring system. To do this, it takes as inputs and integrates various routine, non-invasive clinical, imaging and lab data. Also, to distinguish between the scores of the real biopsy and their counterparts predicted by i-Biopsy™, we proposed the general term of i-scores. For example, in the gastroenterological applications, we have:
1. The liver i-Biopsy™ is the i-CDSS corresponding to the real liver biopsy; the i-Metavir F scores are the values predicted by i-Biopsy™ for the Metavir F fibrosis scores, designating exactly the same pathological features.
2. The i-Metavir F scores and the biopsy Metavir F scores could have different values for the same patient at the same moment, depending, for example, on the prediction accuracy.
3. i-Metavir F scores are obtained in a non-invasive, painless, and riskless manner, as opposed to Metavir F scores, assessed by liver biopsy.

For simplicity, we referred only to the Metavir F scores, but these considerations are quite general and can easily be extrapolated to other liver scores, e.g., Ishak F, Metavir A, and Ishak NI. We also developed i-Biopsy™ as a non-invasive i-CDSS counterpart for prostate biopsy, and we proposed the i-Gleason score (work in progress). While we built i-CDSS with accuracy reaching even 100%, the GP results proved to be robust,
showing a constant accuracy of about 92% for different lots of patients and different medical teams. For amniocentesis and thyroid cancer we also have encouraging preliminary results (not shown). We have built the following i-CDSS modules, which can be used for Interferon treatment decision support:

1. A module for liver fibrosis prediction, according to either the Metavir F or the Ishak F scoring system, with and without liver stiffness (Fibroscan®);
2. A module for predicting the grade of necroinflammation (activity), according to the Ishak NI scoring system.
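Discipulus' team-building facility is proprietary; as a loose, hedged analogue of combining a team of models, the sketch below majority-votes the predictions of several independently trained classifiers. It assumes non-negative integer class labels (e.g., fibrosis scores) and scikit-learn style estimators.

```python
import numpy as np

def team_predict(models, X):
    """Majority vote over a team of fitted classifiers; `models` is any
    sequence of objects exposing a .predict(X) method."""
    votes = np.stack([m.predict(X) for m in models]).astype(int)
    # most frequent label in each column, i.e., per sample
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(),
                               axis=0, arr=votes)
```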
4 Discussion
A short digression about the meaning of the diagnosis accuracy of i-CDSS in general, and of i-Biopsy™ in particular, seems necessary, because it has confused many physicians, especially when very high values like 100% are reported. Typically, physicians believe that 100% accuracy is not possible in medicine. The meaning will be made clear by means of examples. Typically, an invasive liver biopsy is performed on the patient, and a pathologist analyzes the tissue samples, assessing fibrosis, necroinflammation, etc., expressed as scores. The pathologist may have access to other medical data of the patient, but usually these are not necessary for the pathological diagnosis. Moreover, in some studies it is required that the pathologist knows nothing about the patient. His or her diagnosis can be more or less correct, or even wrong, for many reasons not discussed here. We have proposed i-CDSS predicting the fibrosis scores resulting from liver biopsy with accuracy reaching even 100% for a number of systems. For the i-CDSS, on the contrary, a number of the clinical, imaging, and lab data of the patient are essential, because they were incorporated in the system: they were used as input features to train it, and they are required for a new, unseen patient, because i-Biopsy™ is in fact a relationship between these inputs and the fibrosis or necroinflammation scores as outputs. The category of i-CDSS discussed here does not deal directly with diagnosis correctness, but with diagnosis prediction accuracy. Without going into details, this is due in part to the supervised nature of the learning methods used to build them. The intelligent agents learned to predict the results of the biopsy given by the pathologist, and the pathologist's diagnosis could be more or less correct. For example, if the pathologist's diagnosis is wrong, i-Biopsy™ could still be 100% accurate in predicting this wrong diagnosis, but this is rarely the case. In other words, i-Biopsy™ will predict, in a non-invasive and painless way, and without the risks of the biopsy, a diagnosis which could be even 100% identical to the pathologist's diagnosis, were the biopsy performed. While the accuracy and the correctness of the diagnosis are related in a subtle way, they are different matters. i-Biopsy™ uses the information content of several non-invasive investigations to predict the pathologist's diagnosis, without performing the biopsy. The correctness of the diagnosis is a different matter, but typically a good accuracy correlates well with a correct diagnosis. The accuracy of the diagnosis, as well as other performance
measures like the area under the receiver operating characteristic curve (AUROC) for a binary classifier [13], are useful for comparing intelligent systems. From the point of view of accuracy, one of the most important medical criteria, to our knowledge the proposed liver i-Biopsy™ system outperformed the most popular and accurate systems, FibroTest and ActiTest [14], commercialized by the BioPredictive company, and Fibroscan®. The liver i-Biopsy™ is a multi-class classifier, expressing the results in the pathologist's scoring systems, e.g., five classes for Metavir F and seven classes for Ishak F. Multi-class classifiers are more difficult to develop than binary classifiers, whose outputs are not directly related to the fibrosis scores. We also built binary classifiers as decision trees with similar accuracy, and mathematical models (work in progress). Although AUROC is defined only for binary classifiers, loosely speaking a 100% accurate n-class classifier is equivalent to n binary classifiers with AUROC = 1 (maximal). The BioPredictive company analyzed a total of 30 studies, which pooled 6,378 subjects with both FibroTest and biopsy (3,501 with chronic hepatitis C). The mean standardized AUROC was 0.85 (0.82-0.87). The robustness of these results is clearly demonstrated by this cross-validation, while the i-Biopsy™ results still need to be cross-validated. The fact that i-Biopsy™, in its actual setting, relies on routine ultrasound features is both a strong point and a weak one, because of the subjectiveness of ultrasound image interpretation. It is worth noting that in certain circumstances the result of the liver i-Biopsy™ could be superior to that of a real biopsy. When building the i-CDSS, the results of the potentially erroneous biopsies, those not fulfilling some technical requirements, were eliminated from the data set. Thus, the i-Biopsy™ predicted results correspond only to the results of correctly performed biopsies, while a number of the real biopsy results are wrong, because they were not correctly performed. Due to the invasive and unpleasant nature of the biopsy, it is very improbable that a patient will accept repeating a technically incorrect biopsy. Unlike the real biopsy, i-Biopsy™ can be used to evaluate fibrosis evolution, which is of interest in various biomedical and pharmaceutical studies, because, being non-invasive, painless, and without any risk, it can be repeated as many times as needed. Also, in the early stages of liver diseases, the symptoms are often not really troubling for the patient, but the treatment is more effective than in more advanced fibrosis stages. The physician will hesitate to indicate an invasive, painful, and risky liver biopsy, and patients are not as worried about their disease as they are about the pain of the biopsy. However, i-Biopsy™ can be performed, and an early start of the treatment could be much more effective.
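The remark that a 100% accurate n-class classifier behaves like n binary classifiers with AUROC = 1 can be checked empirically with one-vs-rest AUROCs. The following is a hedged sketch with illustrative names, not a procedure taken from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def per_class_auroc(y_true, y_proba, classes):
    """One-vs-rest AUROC per class, e.g., classes = [0, 1, 2, 3, 4]
    for Metavir F; y_proba holds one column of scores per class."""
    y_bin = label_binarize(y_true, classes=classes)   # shape (n, n_classes)
    return {c: roc_auc_score(y_bin[:, i], y_proba[:, i])
            for i, c in enumerate(classes)}
```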
References

1. Berner, E.S. (ed.): Clinical Decision Support Systems. Springer, New York (2007)
2. Lindor, A.: The role of ultrasonography and automatic-needle biopsy in outpatient percutaneous liver biopsy. Hepatology 23, 1079–1083 (1996)
3. Tobkes, A., Nord, H.J.: Liver biopsy: Review of methodology and complications. Digestive Disorders 13, 267–274 (1995)
4. Shaheen, A.A., Wan, A.F., Myers, R.P.: FibroTest and Fibroscan® for the prediction of hepatitis C-related fibrosis: a systematic review of diagnostic test accuracy. Am. J. Gastroenterol. 102(11), 2589–2600 (2007)
5. Guyon, I., et al.: Feature Extraction: Foundations and Applications. Studies in Fuzziness and Soft Computing. Springer, Heidelberg (2006)
6. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco (1993)
7. Freund, Y., Schapire, R.E.: A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences 55(1), 119–139 (1997)
8. Brameier, M., Banzhaf, W.: Linear Genetic Programming. Springer, Heidelberg (2007)
9. Floares, A.G., et al.: Toward Intelligent Virtual Biopsy: Using Artificial Intelligence to Predict Fibrosis Stage in Chronic Hepatitis C Patients without Biopsy. Journal of Hepatology 48(2) (2008)
10. Floares, A.G.: Liver Intelligent Virtual Biopsy and the Intelligent METAVIR and Ishak Fibrosis Scores. In: Proceedings of Computational Intelligence in Bioinformatics and Biostatistics, Vietri Sul Mare, Italy (2008)
11. Floares, A.G., et al.: Intelligent virtual biopsy can predict fibrosis stage in chronic hepatitis C, combining ultrasonographic and laboratory parameters, with 100% accuracy. In: Proceedings of the XXth Congress of the European Federation of Societies for Ultrasound in Medicine and Biology (2008)
12. Floares, A.G.: Artificial Intelligence Support for Interferon Treatment Decision in Chronic Hepatitis B. In: International Conference on Medical Informatics and Biomedical Engineering, WASET - World Academy of Science, Engineering and Technology, Venice, Italy (2008)
13. Floares, A.G.: Intelligent Systems for Interferon Treatment Decision Support in Chronic Hepatitis C Based on i-Biopsy. In: Proceedings of Intelligent Data Analysis in Biomedicine and Pharmacology, Artificial Intelligence in Medicine, Washington DC (2008)
14. Floares, A.G.: Intelligent clinical decision supports for Interferon treatment in chronic hepatitis C and B based on i-Biopsy. In: Proceedings of the International Joint Conference on Neural Networks, Atlanta, Georgia, USA (2009)
Different Methodologies for Patient Stratification Using Survival Data

Ana S. Fernandes², Davide Bacciu³, Ian H. Jarman¹, Terence A. Etchells¹, José M. Fonseca², and Paulo J.G. Lisboa¹
¹ School of Computing and Mathematical Sciences, Liverpool John Moores University, Byrom Street, Liverpool L3 3AF, UK
{T.A.Etchells,I.H.Jarman,P.J.Lisboa}@ljmu.ac.uk
² Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa
{asff,jmf}@uninova.pt
³ Dipartimento di Informatica, Università di Pisa
bacciu@di.unipi.it
Abstract. Clinical characterization of breast cancer patients with respect to risk and profile is an important part of making correct prognostic assessments. This paper first proposes a prognostic index obtained by applying a flexible non-linear time-to-event model, and compares it to a widely used linear survival estimator. This index underpins different stratification methodologies, including informed clustering utilising the principle of learning metrics, regression trees, and recursive application of the log-rank test. The missing data issue was overcome using multiple imputation, applied to a neural network model of survival fitted to a breast cancer data set (n=743). The three methodologies were found to broadly agree, while showing important differences.

Keywords: Prognostic risk index, patient stratification.
1 Introduction

Clinical oncologists are very interested in making prognostic assessments of patients with operable breast cancer, in order to better tailor treatments and to better assess the impact of prognostic factors on survival. It is therefore necessary to identify patients at higher and lower survival risk, as treatment may vary with the survival behaviour. However, this risk is only meaningful if stratified with different thresholds, depending not only on a prognostic index but also on the prognostic factors. Among survival models, the most widely used test statistic for stratifying prognostic indices in the presence of censored data is the log-rank test. However, this statistic finds the different patient risk groups by thresholding only the Prognostic Index (PI), under the assumption that the threshold separates distinct patient populations, while in practice it may cut across a single patient population. It would be desirable to stratify by identifying distinct patient populations directly from the prognostic
factors. However, traditional clustering methods will often return clusters that have little specificity for outcome differences. This paper presents a comparison between three stratification methodologies. The first is a log-rank bootstrap aggregation methodology, which uses the log-rank statistic at its core but carries out bootstrap re-sampling of the population of prognostic indices in order to gain robustness over a maximum-significance search (the log-rank core is sketched below). The second methodology is based on regression trees, applied to the continuous-valued prognostic scores. The third methodology uses informed clustering utilising the principle of learning metrics; more specifically, it estimates a local metric based on the Fisher information matrix of the patients' distribution with respect to the PI. Cluster number estimation is performed in the Fisher-induced affine space, stratifying the patients into groups characterized by significantly different survival. Two different survival models were considered, PLANN-ARD, a neural network for time-to-event data, and Cox proportional hazards, such that each model provides an independent prognostic index. It is important to note that survival models must take account of censorship, which occurs when a patient drops out of the study before the event of interest is observed, or when the event of interest has not taken place by the end of the study. All patients in this study were censored after 5 years of follow-up. Real-world clinical data, especially when routinely acquired, are likely to have missing data. Previous research [3] on this dataset showed the information to be missing at random (MAR); hence, it can be successfully imputed. The application of multiple imputation in combination with neural network models for time-to-event modelling takes account of missing data and censorship within principled frameworks [3]. Section 2 gives a description of the data set used to train the model and defines the predictive variables chosen for the analysis. Sections 3 and 4 present the prognostic models and the methodologies used for patient stratification into different survival groups, followed by the conclusions of the analysis.
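As an illustration of the log-rank machinery at the core of the first methodology, the sketch below scans candidate thresholds on the PI and keeps the cut with the largest log-rank statistic; the bootstrap aggregation layer described above is omitted. It assumes the lifelines library and NumPy arrays for times, event indicators, and PI values, and is not the authors' implementation.

```python
import numpy as np
from lifelines.statistics import logrank_test

def best_pi_threshold(pi, time, event, candidates):
    """Maximum-significance search over candidate PI thresholds."""
    best = None
    for t in candidates:
        low = pi <= t                      # low-risk group under this cut
        res = logrank_test(time[low], time[~low],
                           event_observed_A=event[low],
                           event_observed_B=event[~low])
        if best is None or res.test_statistic > best[1]:
            best = (t, res.test_statistic, res.p_value)
    return best  # (threshold, log-rank statistic, p-value)
```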
2 Data Description and Model Selection

The data set consists of 931 routinely acquired clinical records for female patients recruited by Christie Hospital in Wilmslow, Manchester, UK, during 1990-94. The current study comprises only patients with early or operable breast cancer, filtered using the standard TNM (Tumour, Nodes, Metastasis) staging system: tumour size less than 5 cm, node stage less than 2, and no clinical symptoms of metastatic spread (a hypothetical cohort filter is sketched below). Two of the 931 records in the training data were identified as outliers and removed. Time-to-event was measured in months from surgery, with a follow-up of 5 years and death from any cause as the event of interest. Sixteen explanatory variables, in addition to the outcome variables, were acquired for all patient records. All covariates and their attributes are listed in Table 1. The category "others" of the "Histological type" variable also includes patients with "in situ" tumours. These records should not be included in the data set, because this category of patients has a different disease dynamic; the category was therefore removed. This study focuses only on the lobular and ductal histological types. The final data set comprises 743 subjects.
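The following is a hedged pandas sketch of the cohort filtering described above; the file and column names are illustrative assumptions, not the study's actual fields.

```python
import pandas as pd

df = pd.read_csv("christie_breast_cancer.csv")   # hypothetical file name

# TNM-based operability filter: tumour < 5 cm, node stage < 2, no metastasis
operable = df[(df["tumour_size_cm"] < 5)
              & (df["node_stage"] < 2)
              & (df["metastasis_symptoms"] == 0)]

# keep only the ductal and lobular histological types, dropping "others"
cohort = operable[operable["histology"].isin(["ductal", "lobular"])]
```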
Missing data is a common problem in prediction research. Different causes may lead to missing data, and it is important to consider them carefully, since approaches to handling missing data rely on assumptions about the causes. After analysing the data, the information was considered to be Missing at Random (MAR). Here, missingness is related to known patient characteristics; that is, the probability of a missing value on a predictor is independent of the values of the predictor itself, but depends on the observed values of other variables. Given this, a new attribute may be created to denote missing information, or the missing values can be imputed. The latter has been shown to be effective [1,2] and does not make assumptions about the distribution of "missingness" in the training data, which is essential for inference on future patients. Therefore, the missing covariates were imputed following the method indicated in [1,2], and the imputation was repeated 10 times; a possible realisation is sketched below. This is a conservative choice, as several studies have shown that the required number of repeated imputations can be as low as three for data with up to 20% missing information. Model selection was carried out through Cox regression (proportional hazards) [3], where six predictive variables were identified: age at diagnosis, node stage, histological type, ratio of axillary nodes affected to axillary nodes removed, pathological size (i.e., tumour size in cm), and oestrogen receptor count. All of these variables are binary coded as 1-from-N.
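One possible realisation of the 10-fold multiple imputation under the MAR assumption, offered only as a stand-in for the method of [1,2], uses scikit-learn's IterativeImputer with posterior sampling and varying seeds:

```python
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

def multiple_imputations(X, n_imputations=10):
    """Return n_imputations stochastically imputed copies of X (MAR
    assumed); downstream models are then fitted once per imputed set."""
    return [IterativeImputer(sample_posterior=True, random_state=seed)
            .fit_transform(X) for seed in range(n_imputations)]
```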
3 Prognostic Modeling for Breast Cancer Patients

In clinical practice, and more specifically for breast cancer patients, it is common to define prognostic indices that rank patients by severity of the illness. In order to determine the prognostic index for each patient, it is necessary to define a prognostic model. Two analytical models have been used to fit the data set to a prognostic index: a piecewise linear model, Cox regression (also known as proportional hazards), and a flexible model consisting of Partial Logistic Artificial Neural Networks regularised with Automatic Relevance Determination (PLANN-ARD). Cox regression factorises the dependence on time and covariates, where the hazard rate is modelled for each patient with covariates x_p at time t_k as follows:
\[
\frac{h(\mathbf{x}_p, t_k)}{1 - h(\mathbf{x}_p, t_k)} \;=\; \frac{h_0(t_k)}{1 - h_0(t_k)}\,\exp\!\left(\sum_{i=1}^{N_i} b_i x_i\right) \tag{1}
\]
where h_0 represents the empirical hazard for a reference population and the x_i are the patient variables. Using this model, the prognostic index is defined by the traditional linear index βx. Since in this study the missing data were imputed 10 times, there are 10 different data sets to be used with this model, and the final prognostic index of each patient was therefore determined as the mean of the 10 prognostic indices (a sketch of this averaging is given below).
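A minimal sketch of the averaged linear prognostic index βx of Eq. (1), using the lifelines CoxPHFitter over the 10 imputed data sets; the column names are placeholders. Note that lifelines centres the covariates, so the returned index equals βx only up to a constant shift, which does not affect ranking or thresholding.

```python
import numpy as np
from lifelines import CoxPHFitter

def mean_prognostic_index(imputed_dfs, duration_col="months", event_col="died"):
    """Fit a Cox model on each imputed data set and average the linear
    prognostic indices over the imputations."""
    indices = []
    for df in imputed_dfs:
        cph = CoxPHFitter().fit(df, duration_col=duration_col,
                                event_col=event_col)
        covariates = df.drop(columns=[duration_col, event_col])
        indices.append(cph.predict_log_partial_hazard(covariates))  # ~ beta.x
    return np.mean(indices, axis=0)   # final PI per patient
```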
PLANN-ARD has the structure of a multi-layer perceptron with a single hidden layer and sigmoidal activations in the hidden and output layer nodes. Covariates and discrete monthly time increments are introduced into the network as inputs, and the output is the hazard for each patient at each time step. The objective function is the log-likelihood summed over the observed status of the patients, with a binary indicator of whether the patient is observed alive or has died; a toy sketch of such a discrete-time hazard network is given after Table 1. Flexible models have the potential to model arbitrary continuous functions and therefore need to be regularised in order to avoid overfitting. For this purpose, we have applied Automatic Relevance Determination.

Table 1. Variable description, existing values, and coding for each category, for the Christie Hospital data set
Variable description           | Values     | Categories
Age category                   | 1;2;3      | 20-39; 40-59; 60+
Histological type              | 1;2;3      | Invasive ductal; Invasive lobular/lobular in situ; In situ/mixed/medullary/mucoid/papillary/tubular/other mixed in situ
Menopausal status              | 1;2;3; 9   | Pre-Menopausal; Peri-Menopausal; Post-Menopausal; Missing
Histological grade             | 1;2;3; 9   | Well differentiated; Moderately differentiated; Poorly differentiated; Missing
Nodes involved                 | 1;2;3;4; 9 | 0; 1-3; 4+; 98 (too many to count); Missing
Nodes removed                  | 1;2;3;4; 9 | 0-9; 10-19; 20+; 98 (too many to count); Missing
Percentage of nodes involved   | 1;2;3;4; 9 | 0-20%; 21-40%; 41-60%; 61+%; Missing
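As promised above, a toy, NumPy-only sketch of the PLANN idea, not the authors' implementation and with the ARD regularisation omitted for brevity: a single-hidden-layer network with sigmoid units maps a (covariates, month index) row to a discrete-time hazard, and training minimises the censoring-aware negative log-likelihood.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hazard(params, xt):
    """xt: array whose rows are [covariates..., time index]; returns
    the discrete-time hazard h(x, t_k) in (0, 1)."""
    W1, b1, w2, b2 = params
    return sigmoid(sigmoid(xt @ W1 + b1) @ w2 + b2)

def neg_log_likelihood(params, xt, d):
    """d = 1 if the event occurred in that (patient, month) row, else 0;
    censored patients simply contribute no rows beyond their follow-up."""
    h = hazard(params, xt)
    return -np.sum(d * np.log(h) + (1.0 - d) * np.log(1.0 - h))
```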