R Programming for Bioinformatics
Chapman & Hall/CRC
Computer Science and Data Analysis Series

The interface between the computer and statistical sciences is increasing, as each discipline seeks to harness the power and resources of the other. This series aims to foster the integration between the computer sciences and statistical, numerical, and probabilistic methods by publishing a broad range of reference works, textbooks, and handbooks.

SERIES EDITORS
David Blei, Princeton University
David Madigan, Rutgers University
Marina Meila, University of Washington
Fionn Murtagh, Royal Holloway, University of London

Proposals for the series should be sent directly to one of the series editors above, or submitted to:

Chapman & Hall/CRC
4th Floor, Albert House
1-4 Singer Street
London EC2A 4BQ
UK
Published Titles

Bayesian Artificial Intelligence, Kevin B. Korb and Ann E. Nicholson
Computational Statistics Handbook with MATLAB®, Second Edition, Wendy L. Martinez and Angel R. Martinez
Pattern Recognition Algorithms for Data Mining, Sankar K. Pal and Pabitra Mitra
Exploratory Data Analysis with MATLAB®, Wendy L. Martinez and Angel R. Martinez
Clustering for Data Mining: A Data Recovery Approach, Boris Mirkin
Correspondence Analysis and Data Coding with Java and R, Fionn Murtagh
Design and Modeling for Computer Experiments, Kai-Tai Fang, Runze Li, and Agus Sudjianto
Introduction to Machine Learning and Bioinformatics, Sushmita Mitra, Sujay Datta, Theodore Perkins, and George Michailidis
R Graphics, Paul Murrell
R Programming for Bioinformatics, Robert Gentleman
Semisupervised Learning for Computational Linguistics, Steven Abney
Statistical Computing with R, Maria L. Rizzo
R Programming for Bioinformatics
Robert Gentleman Fred Hutchinson Cancer Research Center Seattle, Washington, U.S.A.
Chapman & Hall/CRC, Taylor & Francis Group, 6000 Broken Sound Parkway NW, Suite 300, Boca Raton, FL 33487-2742

© 2009 by Taylor & Francis Group, LLC. Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business. No claim to original U.S. Government works. Printed in the United States of America on acid-free paper. 10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-1-4200-6367-7 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged, please write and let us know so we may rectify it in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Gentleman, Robert, 1959-
R programming for bioinformatics / Robert Gentleman.
p. cm. -- (Chapman & Hall/CRC computer science and data analysis series)
Includes bibliographical references and index.
ISBN 978-1-4200-6367-7
1. Bioinformatics. 2. R (Computer program language) I. Title. II. Series.
QH324.2.G46 2008
572.80285'5133--dc22    2008011352

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
To Tanja, Sophie and Katja
Contents
1 Introducing R
  1.1 Introduction
  1.2 Motivation
  1.3 A note on the text
  1.4 Acknowledgments
2 R Language Fundamentals
  2.1 Introduction
    2.1.1 A brief introduction to R
    2.1.2 Attributes
    2.1.3 A very brief introduction to OOP in R
    2.1.4 Some special values
    2.1.5 Types of objects
    2.1.6 Sequence generating and vector subsetting
    2.1.7 Types of functions
  2.2 Data structures
    2.2.1 Atomic vectors
    2.2.2 Numerical computing
    2.2.3 Factors
    2.2.4 Lists, environments and data frames
  2.3 Managing your R session
    2.3.1 Finding out more about an object
  2.4 Language basics
    2.4.1 Operators
  2.5 Subscripting and subsetting
    2.5.1 Vector and matrix subsetting
  2.6 Vectorized computations
    2.6.1 The recycling rule
  2.7 Replacement functions
  2.8 Functional programming
  2.9 Writing functions
  2.10 Flow control
    2.10.1 Conditionals
  2.11 Exception handling
  2.12 Evaluation
    2.12.1 Standard evaluation
    2.12.2 Non-standard evaluation
    2.12.3 Function evaluation
    2.12.4 Indirect function invocation
    2.12.5 Evaluation on exit
    2.12.6 Other topics
    2.12.7 Name spaces
  2.13 Lexical scope
    2.13.1 Likelihoods
    2.13.2 Function optimization
  2.14 Graphics
3 Object-Oriented Programming in R
  3.1 Introduction
  3.2 The basics of OOP
    3.2.1 Inheritance
    3.2.2 Dispatch
    3.2.3 Abstract data types
    3.2.4 Self-describing data
  3.3 S3 OOP
    3.3.1 Implicit classes
    3.3.2 Expression data example
    3.3.3 S3 generic functions and methods
    3.3.4 Details of dispatch
    3.3.5 Group generics
    3.3.6 S3 replacement methods
  3.4 S4 OOP
    3.4.1 Classes
    3.4.2 Types of classes
    3.4.3 Attributes
    3.4.4 Class unions
    3.4.5 Accessor functions
    3.4.6 Using S3 classes with S4 classes
    3.4.7 S4 generic functions and methods
    3.4.8 The syntax of method declaration
    3.4.9 The semantics of method invocation
    3.4.10 Replacement methods
    3.4.11 Finding methods
    3.4.12 Advanced topics
  3.5 Using classes and methods in packages
  3.6 Documentation
    3.6.1 Finding documentation
    3.6.2 Writing documentation
  3.7 Debugging
  3.8 Managing S3 and S4 together
    3.8.1 Getting and setting the class attribute
    3.8.2 Mixing S3 and S4 methods
  3.9 Navigating the class and method hierarchy
4 Input and Output in R
  4.1 Introduction
  4.2 Basic file handling
    4.2.1 Viewing files
    4.2.2 File manipulation
    4.2.3 Working with R's binary format
  4.3 Connections
    4.3.1 Text connections
    4.3.2 Interprocess communications
    4.3.3 Seek
  4.4 File input and output
    4.4.1 Reading rectangular data
    4.4.2 Writing data
    4.4.3 Debian Control Format (DCF)
    4.4.4 FASTA Format
  4.5 Source and sink: capturing R output
  4.6 Tools for accessing files on the Internet
5 Working with Character Data
  5.1 Introduction
  5.2 Builtin capabilities
    5.2.1 Modifying text
    5.2.2 Sorting and comparing
    5.2.3 Matching a set of alternatives
    5.2.4 Formatting text and numbers
    5.2.5 Special characters and escaping
    5.2.6 Parsing and deparsing
    5.2.7 Plotting with text
    5.2.8 Locale and font encoding
  5.3 Regular expressions
    5.3.1 Regular expression basics
    5.3.2 Matching
    5.3.3 Using regular expressions
    5.3.4 Globbing and regular expressions
  5.4 Prefixes, suffixes and substrings
  5.5 Biological sequences
    5.5.1 Encoding genomes
  5.6 Matching patterns
    5.6.1 Matching single query sequences
    5.6.2 Matching many query sequences
    5.6.3 Palindromes and paired matches
    5.6.4 Alignments
6 Foreign Language Interfaces
  6.1 Introduction
    6.1.1 Overview
    6.1.2 The C programming language
  6.2 Calling C and FORTRAN from R
    6.2.1 .C and .Fortran
    6.2.2 Using .Call and .External
  6.3 Writing C code to interface with R
    6.3.1 Registering routines
    6.3.2 Dealing with special values
    6.3.3 Single precision
    6.3.4 Matrices and arrays
    6.3.5 Allowing interrupts
    6.3.6 Error handling
    6.3.7 R internals
    6.3.8 S4 OOP in C
    6.3.9 Calling R from C
  6.4 Using the R API
    6.4.1 Header files
    6.4.2 Sorting
    6.4.3 Random numbers
  6.5 Loading libraries
    6.5.1 Inspecting DLLs
  6.6 Advanced topics
    6.6.1 External references and finalizers
    6.6.2 Evaluating R expressions from C
  6.7 Other languages
7 R Packages
  7.1 Package basics
    7.1.1 The search path
    7.1.2 Package information
    7.1.3 Data and demos
    7.1.4 Vignettes
  7.2 Package management
    7.2.1 biocViews
    7.2.2 Managing libraries
  7.3 Package authoring
    7.3.1 The DESCRIPTION file
    7.3.2 R code
    7.3.3 Documentation
    7.3.4 Name spaces
    7.3.5 Finding out about name spaces
  7.4 Initialization
    7.4.1 Event hooks
8 Data Technologies
  8.1 Introduction
    8.1.1 A brief description of GO
  8.2 Using R for data manipulation
    8.2.1 Aggregation and creating tables
    8.2.2 Apply functions
    8.2.3 Efficient apply-like functions
    8.2.4 Combining and reshaping rectangular data
  8.3 Example
  8.4 Database technologies
    8.4.1 DBI
    8.4.2 SQLite
    8.4.3 Using AnnotationDbi
  8.5 XML
    8.5.1 Simple XPath
    8.5.2 The XML package
    8.5.3 Handlers
    8.5.4 Example data
    8.5.5 DOM parsing
    8.5.6 XML event parsing
    8.5.7 Parsing HTML
  8.6 Bioinformatic resources on the WWW
    8.6.1 PubMed
    8.6.2 NCBI
    8.6.3 biomaRt
    8.6.4 Getting data from GEO
    8.6.5 KEGG
9 Debugging and Profiling
  9.1 Introduction
  9.2 The browser function
    9.2.1 A sample browser session
  9.3 Debugging in R
    9.3.1 Runtime debugging
    9.3.2 Warnings and other exceptions
    9.3.3 Interactive debugging
    9.3.4 The debug and undebug functions
    9.3.5 The trace function
  9.4 Debugging C and other foreign code
  9.5 Profiling R code
    9.5.1 Timings
  9.6 Managing memory
    9.6.1 Memory profiling
    9.6.2 Profiling memory allocation
    9.6.3 Tracking a single object
References
Chapter 1 Introducing R
1.1 Introduction
The purpose of this monograph is to provide a reference for scientists and programmers working on problems in bioinformatics and computational biology. It may also appeal to programmers who want to improve their programming skills, or programmers who have been working in bioinformatics and computational biology but are familiar with languages other than R. A reasonable level of programming skill is presumed, as is some familiarity with some of the basic tasks that need to be carried out in bioinformatics. We concentrate on programming tools, and there is no discussion of either graphics or of the multitude of software for fitting models or carrying out machine learning. Reasonable coverage of these topics would result in a much longer monograph, and to some extent they are orthogonal to our purpose.

Bioinformatics blossomed as a scientific discipline in the 1990s, when a number of technological innovations appeared that revolutionized biology. Suddenly, data on the complete genomic sequence of many different organisms were available, microarrays could measure the abundance of tens of thousands of mRNA species, and other arrays and technologies made it possible to study protein interactions and many other cellular processes at the molecular level. Biology moved from a small data discipline to one with large, complex data sets, virtually overnight. Faced with these sudden challenges, scientific programmers grabbed whatever tools were available and made use of them to help address some of the many problems. Perl was perhaps the most widely used, and it remains a dominant player to this date. Other popular programming languages, such as Java and Python, are also used.

R is an implementation of the S language (Becker et al., 1988; Chambers and Hastie, 1992; Chambers, 1998). S has been a favorite tool for statisticians and data analysts since the early 1980s, when John Chambers and colleagues started to release versions of it from Bell Labs. It is now becoming one of the most widely used software tools for bioinformatics, mainly due to its flexibility and its data handling and modeling capabilities. Some of these have been exposed through the Bioconductor Project (Gentleman et al., 2004), but many users simply find it a useful tool for doing analyses. However, our experience is that it is easy to write inefficient programs, and often the basic programming idioms are missed or ignored.

In Chapter 2 we discuss the general properties of the R language and some of the unique aspects of programming in it. In Chapter 3 we discuss object-oriented programming in R. The paradigm is quite different and may take some getting used to, but like all object-oriented systems, mastering these topics is essential to writing good maintainable software. Chapter 4 then discusses methods for getting data in and out and for interacting with databases, and includes a discussion of XML, SOAP and other data mark-up and web-services languages and tools. Chapter 5 discusses different aspects of string handling and manipulation, including many of the standard sequence similarity tools that play a prominent role in computational biology. In Chapter 6 we consider interacting with foreign languages, primarily C, but we also consider FORTRAN, Perl and Python. In Chapter 7 we describe how to write your own software packages that can be used locally or distributed more broadly. Finally, we finish with Chapter 9, which discusses debugging and profiling of R code.

R comes with a substantial amount of documentation. Specifically, there are five manuals: An Introduction to R, The R Language Definition, R Installation and Administration, Writing R Extensions, and R Data Import and Export. We will draw on material in these manuals throughout this monograph, and readers who want more detail or alternative examples should consult them. We will rely most on the Writing R Extensions manual, which we abbreviate to R Extensions. R News is a good source of information on R packages and on aspects of the language, written at an accessible level. Readers are encouraged to browse the back issues for more information on topics that are just touched on in this volume. Venables and Ripley (2000) is another reference for programming in the S language, as is Chambers (2008).
1.2 Motivation
There are many good reasons to prefer R to other languages for scientific computation. The existence of a substantial collection of good statistical algorithms, access to high-quality numerical routines, and integrated data visualization tools are perhaps the most obvious ones. But as we have been trying to show through the Bioconductor Project (www.bioconductor.org), there are many more. Reproducibility is an essential part of any scientific investigation, but to date very little attention has been paid to this topic. Our efforts are R-based (Gentleman, 2005) and make use of the Sweave system (Leisch, 2002). Indeed, as we discuss later, this entire book has been written so that every example is reproducible on the reader's machine. The ability to integrate text and software into a single document greatly facilitates the writing of scientific papers and helps to ensure that all figures, tables and facts are based on the same data and are essentially reproducible by the reader.

A second strong motivation for using R is its ability to interoperate with many other languages. Algorithms that have been written in another language seldom need to be reimplemented for use in R. Typically, one need merely write a small amount of interface code and the routines can be accessed from within R (this is described in Chapter 6). This approach also helps to ensure maximal code reuse. And finally, R supports the creation and use of self-describing data structures. In the Bioconductor Project we have relied heavily on this capability in our design and use of the ExpressionSet class. This data structure is designed to hold the output of a microarray experiment, as well as detailed information on the experimental design, other covariates that are available for the samples that were run, and links to information on the genes that correspond to the spots on the array. While this has been successful in that context, we have reused this data structure, with similar benefits, for other data types such as those that arise in proteomic studies (the PROcess package) and flow cytometry (the flowCore package).
1.3 A note on the text
This monograph was written using the Sweave system (Leisch, 2002), a tool that allows authors to integrate text (using LaTeX) and computer code for the R language. Hence, all examples are reproducible by the reader, and readers can obtain complete source code (but not the text) on a per-chapter basis from the web site for this monograph. There are a number of exercises given, and solutions for some of them are available in the online supplements. The examples themselves are often shown integrated into the text of the chapters. Not all code is displayed; in many cases preliminary computations, loading of libraries and other mundane tasks are not displayed in the text version, though they are included in the R code for the chapters. Any example that relies on simulation or the use of a random number generator will have a call to the set.seed function as a preliminary command. The sole reason for this is to ensure reproducibility of the output on the user's machine. In cases where the code is intended to signal an error, the call is enclosed either in a call to try or, more often, in a call to tryCatch. This is done because any error signaled by R interrupts the Sweave process and causes typesetting to fail. Details on the behavior of try and tryCatch can be found in Section 2.11.

Markup is used to distinguish some entities. For example, functions are marked up like mean, R packages using Biobase, function arguments, myarg, R classes with ExpressionSet, and R objects using x. When R prints a value that corresponds to a vector, some indexing information is provided. In the example below, we print a vector of integers from 1 to 30. The first thing printed is [1], which indicates that the first value on that line is the first value in the vector, and on the second printed line the [18] indicates that the first value in that line corresponds to the 18th element of the vector. The grey background is used for all code examples that were processed by the Sweave system.

> 1:30
 [1]  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17
[18] 18 19 20 21 22 23 24 25 26 27 28 29 30

It is essential that the reader follow along and experiment with some of the examples given, so two basic strategies are advised. First, make use of the help system, either by constructs such as help("[") or the shorthand equivalent, ?"[". Many special functions and symbols need to be quoted. All help pages should have examples, and these can be run using the function example, e.g., example("["). The second basic strategy is to investigate the code itself, and for this purpose get is most useful; for example, try get("mode") and see if you can better understand how it works.
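For instance, a brief sketch of those two strategies in a session; all functions shown are standard R, and the output is omitted here:

> help("[")         # same as ?"["
> example("mean")   # run the examples from the mean help page
> body(get("mean")) # inspect the code behind a function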
1.4 Acknowledgments
Many people have contributed, both directly and indirectly, to the creation of this book. Both the R and Bioconductor development teams have contributed substantially to my understanding, and many members of those projects have provided examples, clarified misunderstandings and provided a rich environment in which to discuss relevant issues. Members of my research group have contributed to many aspects; in particular, J. Gentry, S. DebRoy, H. Pagès, M. Morgan, N. Li, T.-Y. Liu, M. Carlson, P. Aboyoun, D. Sarkar, F. Hahne and S. Falcon have contributed many ideas, examples and helped clarify issues. Wolfgang Huber and Vincent Carey made extensive comments and recommendations. All errors remain my own and I will attempt to remedy those that are found and reported, in a timely fashion.
Chapter 2 R Language Fundamentals
2.1 Introduction
In this chapter we introduce the basic language data types and discuss their capabilities and structures. Then topics such as flow-control, iteration, subsetting and exception handling will be presented. R directly supports two different object-oriented programming (OOP) paradigms, which are discussed in detail in Chapter 3. Many operations in R are vectorized, and understanding and using vectorization is an essential component of becoming a proficient programmer. The R language was primarily designed as a language for data manipulation, modeling and visualization, and many of the data structures reflect this view. However, R is itself a full-fledged programming language, with its own idioms – much like any other programming language. In some ways R can be considered as a functional programming language, although it is not purely functional. R supports a form of lexical scope that provides a useful paradigm for encapsulating computations. R is an implementation of the S language (Becker et al., 1988; Chambers and Hastie, 1992; Chambers, 1998). There is another commercial implementation available from Insightful Corporation, called S-PLUS. The two implementations are quite similar, and much of the material covered here can be used in either. However, there are many R-specific extensions that are used in this monograph and users of R are our intended audience.
2.1.1 A brief introduction to R
We presume a reasonable familiarity with R, but there are a few points that will help to clarify some of the discussion. When R is started, a workspace is created and that workspace is where the user creates and manipulates variables. This workspace is an environment, and an environment is a set of bindings of names, or symbols, to values. The top-level workspace can be accessed through its name, which is .GlobalEnv. Assignment of a value to a variable is generally done with either the = (equals) character, or a special symbol that is the concatenation of less than and minus, <-.

> x = 10
> y = x

The value associated with y is a copy of the value associated with x, and changes to x do not affect y. The semantics of rm(x) are that the association between x and its value is broken and the symbol x is removed from the environment, but nothing is done to the value that x referred to. If this value can be accessed in other ways, it will remain available. We provide an example in Section 2.2.4.3.

Valid variable names, sometimes referred to as syntactic names, are any sequence of letters, digits, the period and the underscore, but they cannot begin with a digit or the underscore. If they begin with a period, the second character cannot be a digit. Variable names that violate these rules must be quoted (see the Quotes manual page) and the preferred quote is the backtick.

> `_foo` = 10
> `10:10` = 20
> ls()
[1] "10:10"    "Rvers"    "_foo"     "basename"
[5] "biocUrls" "repos"    "x"        "y"

2.1.2 Attributes
Attributes can be attached to any R object except NULL, and they are used quite extensively. Attributes are stored, by name, in a list. All attributes can be retrieved using attributes, or any particular attribute can be accessed or modified using the attr function. Attributes can be used by programmers to attach any sort of information they want to any R object. R itself uses attributes for many things: the S3 class system is based largely on attributes, and the dimensions of arrays and the names on vectors are stored as attributes, to name but a few. In the code below, we attach an attribute to x and then show how the printing of x changes to reflect the fact that it has an attribute.

> x = 1:10
> attr(x, "foo") = 11
> x
 [1]  1  2  3  4  5  6  7  8  9 10
attr(,"foo")
[1] 11

2.1.3 A very brief introduction to OOP in R
In order to fully explain some of the concepts in this chapter, the reader will need a little familiarity with the basic ideas in object-oriented programming (OOP), as they are implemented in R. A more comprehensive treatment of these topics is given in Chapter 3. There are two components: one is a class system that is used to define the class of different objects, and the second is the notion of a generic function with methods. R has two OOP systems: one is referred to as S3, and it mainly supports generic functions; the other is referred to as S4, and it has support for classes as well as generic functions, although these are somewhat different from the S3 variants. We will only discuss S3 here. In S3, the class system is very lax, and one creates an object (typically called an instance) from a class by attaching a class attribute to any R object. As a result, no checking is done, or can easily be done, to ensure common structure of different instances of the same class. A generic function is essentially a dispatching mechanism, and in S3 the dispatch is handled by concatenating the name of the generic function with that of the class. An example of a generic function is mean.
> mean
function (x, ...)
UseMethod("mean")
As the example above shows, the body of a generic function is typically a single expression: a call to UseMethod, the mechanism that dispatches to the appropriate method. We can see all the defined methods for this function using the methods command.
> methods("mean") [1] mean.Date mean.POSIXct [4] mean.data.frame mean.default
mean.POSIXlt mean.difftime
The method names all begin with the name mean, then a period. When the function mean is called, R looks at the first argument and determines whether or not that argument has a class attribute. If it does, then R looks for a function whose name starts with mean. and then has the name of the class. If such a method exists, it is used; if one does not exist, then mean.default is used.
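To make the dispatch mechanism concrete, here is a small hedged sketch; the class name myclass and its method are invented for illustration and do not appear in the text:

> mean.myclass = function(x, ...) mean(unclass(x), ...)  # method for hypothetical class
> z = structure(1:10, class = "myclass")                 # instance via class attribute
> mean(z)                                                # dispatches to mean.myclass
[1] 5.5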
2.1.4 Some special values
There are a number of special variables and values in the language, and before embarking on data structures we will introduce these. The value NULL is the null object. It has length zero and disappears when concatenated with any other object. It is the default value for the elements of a list.
> length(NULL)
[1] 0
> c(1, NULL)
[1] 1
> list("a", NULL)
[[1]]
[1] "a"

[[2]]
NULL

Since R has its roots in data analysis, the appropriate handling of missing data items is important. There are special missing data values for all atomic types, and these are commonly referred to by the symbol NA. Similarly, there are special functions for identifying these values, such as is.na, and many modeling routines have special methods for dealing with missing values. It is worth emphasizing that there is a distinct missing value (NA) for each basic type, and these can be accessed through constants such as NA_integer_.

> typeof(NA)
[1] "logical"
> as.character(NA)
[1] NA
> as.integer(NA)
[1] NA
> typeof(as.integer(NA))
[1] "integer"

Note that the character string formed by concatenating the characters N and A is not a missing value.

> is.na("NA")
[1] FALSE

Appropriate representations of values such as infinity and not a number (NaN) are also provided. The accompanying functions is.finite, is.infinite and is.nan can be used to determine whether a particular value is one of these special values. All mathematics functions should deal with these values appropriately, according to the ANSI/IEEE 754 floating-point standard.

> y = 1/0
> y
[1] Inf
> -y
[1] -Inf
> y - y
[1] NaN
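A short sketch of the testing functions just mentioned, with the values each call should return:

> is.finite(c(1, Inf, NA, NaN))
[1]  TRUE FALSE FALSE FALSE
> is.infinite(c(1, Inf, NA, NaN))
[1] FALSE  TRUE FALSE FALSE
> is.nan(c(1, Inf, NA, NaN))
[1] FALSE FALSE FALSE  TRUE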
2.1.5 Types of objects

An important data structure in R is the vector. Vectors are ordered collections of objects, where all elements are of the same type. Vectors can be of any length (including zero), up to some maximum allowable, which is determined by the storage capabilities of the machine being used. Vectors typically represent a form of contiguous storage (character vectors are an exception). R has six basic vector types: logical, integer, real, complex, string (or character) and raw. The type of a vector can be queried using one of the three functions mode, storage.mode or typeof.

> typeof(y)
[1] "double"
> typeof(is.na)
[1] "builtin"
> typeof(mean)
[1] "closure"
> mode(NA)
[1] "logical"
> storage.mode(letters)
[1] "character"

There are also a number of predicate functions that can be used to test whether a value corresponds to one of the basic vector types. The code chunk below demonstrates several of the predicate functions available.

> is.integer(y)
[1] FALSE
> is.character(y)
[1] FALSE
> is.double(y)
[1] TRUE
> is.numeric(y)
[1] TRUE

Exercise 2.1 What does the typeof of is.na tell you? Why is it different from that of mean?
2.1.6 Sequence generating and vector subsetting

It is helpful to discuss a couple of functions and operators before beginning the general discussion, as they will make the exposition easier to follow. Some of these, such as the subsetting operator [, we will return to later for a more complete treatment. The colon, :, indicates a sequence of values, from the number on its left to the number on its right, in steps of 1. We will also need to make some use of the subset operator, [. This operator takes a subset of the vector it is applied to, according to the arguments inside the square brackets.

> 1:3
[1] 1 2 3
> 1.3:3.2
[1] 1.3 2.3
> 6:3
[1] 6 5 4 3
> x = 11:20
> x[4:5]
[1] 14 15

These are just ordinary functions, and one can invoke them as such. The usual infix notation, with the : between the lower and upper bounds of the sequence, may lead one to believe that this is not an ordinary function. But that is not true, and one can also invoke this function using the somewhat more standard notation ":"(2, 4). Quotes are needed around the colon to ensure that it is not interpreted in an infix context by the parser.
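A small sketch of this prefix form, reusing the x defined just above; the outputs shown are what these calls should produce:

> ":"(2, 4)    # the same as 2:4
[1] 2 3 4
> "["(x, 4:5)  # the same as x[4:5]
[1] 14 15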
Exercise 2.2 Find help for the colon operator; what does it do? What is the type of its return value? Use the predicate testing functions to determine the storage mode of the expressions 1:3 and 1.3:4.2.
2.1.7 Types of functions
This section is slightly more detailed and can be skipped. In R there are basically three types of functions: builtins, specials and closures. Users can only create closures (unless they want to modify the internals of R), and these are the easiest functions to understand since they are written in R. The other two types of functions are interfaces that attempt to pass the calculations down to internal (typically C) routines for efficiency reasons. The main difference between the two types of internal functions is whether or not they evaluate their arguments; specials do not. More details on the internals of R are available in the R Language Definition (R Development Core Team, 2007b).
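For example, the three kinds can be distinguished with typeof; a sketch using only standard R functions (sum is a builtin, quote a special, mean a closure):

> typeof(sum)
[1] "builtin"
> typeof(quote)
[1] "special"
> typeof(mean)
[1] "closure"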
2.2 Data structures

2.2.1 Atomic vectors
Atomic vectors are the most basic of all data structures. An atomic vector contains some number of values of the same type; that number could be zero. Atomic vectors can contain integers, doubles, logicals or character strings. Both complex numbers and raw (pure bytes) have atomic representations (see the R documentation for more details on these two types). Character vectors in the S language are vectors of character strings, not vectors of characters. For example, the string "super" would be represented as a character vector of length one, not of length five (for more details on character handling in R, see Chapter 5). A dim attribute can be added to an atomic vector to create a matrix or an array.
> x = c(1, 2, 3, 4)
> x
[1] 1 2 3 4
> dim(x) = c(2, 2)
> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4
> typeof(x)
[1] "double"
> y = letters[1:10]
> y
 [1] "a" "b" "c" "d" "e" "f" "g" "h" "i" "j"
> dim(y) = c(2, 5)
> y
     [,1] [,2] [,3] [,4] [,5]
[1,] "a"  "c"  "e"  "g"  "i"
[2,] "b"  "d"  "f"  "h"  "j"
> typeof(y)
[1] "character"

A logical value is either TRUE, FALSE or NA. The elements of a vector can have names, and a matrix or array can have names for each of its dimensions. If a dim attribute is added to a named vector, the names are discarded, but other attributes are retained (and dim is added as an attribute). Vectors can be created using the function c, which is short for concatenate. Vectors of a particular type can be created using the functions numeric, double, character, integer or logical; all of these functions take a single argument, which is interpreted as the length of the desired vector. The returned vector has initial values appropriate for the type. The function seq can be used to generate patterned sequences of values. There are two variants of seq that can be very efficient: seq_len, which generates a sequence from 1 to the value provided as its argument, and seq_along, which returns a sequence of integers of the same length as its argument. If that argument is of zero length, then a zero length integer vector is returned; otherwise the sequence starts at 1. The different random number generating functions (e.g., rnorm, runif) can be used to generate random vectors. sample can be used to generate a vector sampled from its input. Notice in the following example that the result of typeof(c(1, 3:5)) is "double", whereas typeof(c(1, "c")) is "character". This is because all elements of a vector must have the same type, and R coerces all elements of c(1, "c") to character.

> c(1, 3:5)
[1] 1 3 4 5
> c(1, "c")
[1] "1" "c" > numeric(2) [1] 0 0 > character(2) [1] "" "" > seq(1, 10, by = 2) [1] 1 3 5 7 9 > seq_len(2.2) [1] 1 2 > seq_along(numeric(0)) integer(0) > sample(1:100, 5) [1] 59 89 49 66 10 S regards an array as consisting of a vector containing the array’s elements, together with a dimension (or dim) attribute. A vector can be given dimensions by using the functions matrix (two-dimensional data) or array (any number of dimensions), or by directly attaching them with the dim function. The elements in the underlying vector correspond to the elements of the array. For matrices, the first column is stored first, followed by the second column and so on. Array extents can be named by using the dimnames function or the dimnames argument to matrix or array. Extent names are given as a list, with each list element being a vector of names for the corresponding extent. Exercise 2.3 Create vectors of each of the different primitive types. Create matrices and arrays by attaching dim attributes to those vectors. Look up the help for dimnames and attach dimnames to a matrix with two rows and five columns. 2.2.1.1
Zero length vectors
In some cases the behavior of zero length vectors may seem surprising. In Section 2.6 we discuss vectorized computations in R and describe the rules that apply to zero length vectors for those computations. Here we describe their behavior in other settings. Functions such as sum and prod take as input one or more vectors and produce a value of length one. It is helpful if simple rules, such as sum(c(x, y)) = sum(x) + sum(y), hold. Similarly, for prod we expect prod(c(x, y)) = prod(x) * prod(y). For these to hold, we require that the sum of a zero length vector be zero and that the product of a zero length vector be one.

> sum(numeric())
[1] 0
> prod(numeric())
[1] 1

For other mathematical functions, such as gamma or log, the same logic suggests that these functions should return a zero length result when invoked with an argument of zero length.
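A short sketch of that behavior, using only base R:

> log(numeric())
numeric(0)
> gamma(numeric())
numeric(0)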
2.2.2 Numerical computing
One of the strengths of R is its various numerical computing capabilities. It is important to remember that computers cannot represent all numbers and that machine computation is not identical to computation with real numbers. Readers unaware of the issues should consult a reference on numerical computing, such as Thisted (1988) or Lange (1999), for more complete details, or Goldberg (1991). The issue is also covered in the R FAQ, where the following information is provided: the only numbers that can be represented exactly in R's numeric type are (some) integers and fractions whose denominator is a power of 2. Other numbers have to be rounded to (typically) 53 binary digits accuracy. As a result, two floating point numbers will not reliably be equal unless they have been computed by the same algorithm, and not always even then. A classical example of the problem is given in the code below.

> a = sqrt(2)
> a * a == 2
[1] FALSE
> a * a - 2
[1] 4.440892e-16

The numerical characteristics of the computer that R is running on can be obtained from the variable named .Machine. These values are determined dynamically. The manual page for that variable provides explicit details on the quantities that are presented. The function all.equal compares two objects using a numeric tolerance of .Machine$double.eps^0.5. If you want much greater accuracy than this, you will need to consider error propagation carefully.

Exercise 2.4 What is the largest integer that can be represented on your computer? What happens if you add one to this number? What is the smallest negative integer that can be represented?
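Returning to all.equal, a brief sketch of the tolerant comparison described above; a is the value computed in the preceding example, and the outputs are what these calls should return:

> all.equal(a * a, 2)   # TRUE within numeric tolerance
[1] TRUE
> identical(all.equal(a * a, 3), TRUE)  # all.equal returns a message, not FALSE
[1] FALSE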
2.2.3 Factors
Factors reflect the S language's roots in statistical application. A factor is useful when a potentially large collection of data contains relatively few, discrete levels. Such data are usually referred to as a categorical variable. Examples include variables like sex, e.g., male or female. Some factors have a natural ordering of the levels, e.g., low, medium and high, and these are called ordered factors. While one can often represent factors by integers directly, such practice is not recommended and can lead to hard to detect errors. Factors are generally used, and are treated specially, in different statistical modeling functions such as lm and glm. Factors are not vectors and, in particular, is.vector returns FALSE for a factor. A factor is represented as an object of class factor, which is an integer vector of codes and an attribute with name levels. In the code below, we first set the random seed to ensure that all readers will get the same values if they run the code on their own machines.

> set.seed(123)
> x = sample(letters[1:5], 10, replace = TRUE)
> y = factor(x)
> y
 [1] b d c e e a c e c c
Levels: a b c d e
> attributes(y)
$levels
[1] "a" "b" "c" "d" "e"

$class
[1] "factor"

The creation of factors typically either happens automatically when reading data from disk, e.g., read.table does automatic conversion, or by converting a character vector to a factor through a call to the function factor, unless the option stringsAsFactors has been set to FALSE. When factor is invoked, the following algorithm is used. If no levels argument is provided, then the levels are assigned to the unique values in the first argument, in the order in which they appear. Values provided in the exclude argument are removed from the supplied levels argument. Then if x[i] equals the jth value in the levels argument, the ith element of the result is j. If no match is found for x[i] in levels, then the ith element of the result is set to NA. To obtain the integer values that are used for the encoding, use either as.integer or unclass. If the levels of the factor are themselves numeric, and you want to revert to the original numeric values (which do not need to correspond to the codes), the use of as.numeric(levels(f))[f] is recommended.

Great caution should be used when comparing factors since the interpretation depends on both the codes and the levels attribute. One should only compare factors that have the same sets of levels, in the same order. One scenario where comparison might be reasonable is to compare values between two different subsets of a larger data set, but here still caution is needed. You should ensure that unused levels are not dropped, as this will invalidate any automatic comparisons.

There are two tasks that are often performed on factors. One is to drop unused levels; this can be achieved by a call to factor, since factor(y) will drop any unused levels from y if y is a factor. The second task is to coarsen the levels of a factor, that is, to group two or more of them together into a single new level. The code below demonstrates one method for doing this.

> y = sample(letters[1:5], 20, rep = T)
> v = as.factor(y)
> xx = list(I = c("a", "e"), II = c("b", "c", "d"))
> levels(v) = xx
> v
 [1] I  II II II I  I  II I  II I  I  II II I  II II II
[18] II II I
Levels: I II
Things are quite similar for ordered factors. They can be created either by using the ordered argument to factor or with the function ordered. Factors are instances of S3 classes. Ordinary factors have class factor, and ordered factors have a class vector of length two, with ordered as the additional element. An example of the use of an ordered factor is given below.

> z = ordered(y)
> class(z)
[1] "ordered" "factor"

Using a factor as an argument to the functions matrix or array coerces it to a character vector before creating the matrix.
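As a short sketch of the level-dropping task described earlier in this section; f is a new object, invented for illustration:

> f = factor(c("a", "b"), levels = c("a", "b", "c"))
> levels(f)           # level "c" is present but unused
[1] "a" "b" "c"
> levels(factor(f))   # factor(f) drops the unused level
[1] "a" "b"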
2.2.4 Lists, environments and data frames
In this section we consider three different data structures that are designed to hold quite general objects. These data structures are sometimes called recursive, since they can hold other R objects; the atomic vectors discussed above cannot. There are actually two types of lists in R: pairlists and lists. We will not discuss pairlists in any detail. They exist mainly to support the internal code and workings of R. They are essentially lists as found in Lisp or Scheme (of the car, cdr, cons variety) and are not particularly well adapted for use in most of the problems we will be addressing. Instead we concentrate on the list objects, which are somewhat more vector-like in their implementation and semantics.

2.2.4.1 Lists
Lists can be used to store items that are not all of the same type. The function list can be used to create a list. Lists are also referred to as generic vectors, since they share many of the properties of vectors but the elements are allowed to have different types.

> y = list(a = 1, 17, b = 4:5, c = "a")
> y
$a
[1] 1

[[2]]
[1] 17

$b
[1] 4 5

$c
[1] "a"

> names(y)
[1] "a" ""  "b" "c"

Lists can be of any length, and the elements of a list can be named, or not. Any R object can be an element of a list, including another list, as is shown in the code below. We leave all discussion of subsetting and other operations to Section 2.5.

> l2 = list(mn = mean, var = var)
> l3 = list(l2, y)

Exercise 2.5 Create a list of length 4 and then add a dim attribute to it. What happens?

2.2.4.2 Data frames
A data.frame is a special kind of list. Data frames were created to provide a common structure for storing rectangular data sets and for passing them to different functions for modeling and visualization. In many cases a data set can be thought of as a rectangular structure with rows corresponding to cases and columns corresponding to the different variables that were measured on each of the cases. One might think that a matrix would be the appropriate representation, but that is only true if all of the variables are of the same type, and this is seldom the case. For example, one might have height in centimeters, city of residence, gender and so on. Data frames deal with this situation: they are essentially a list of vectors, with one vector for each variable. It is an error if the vectors are not all of the same length. When constructing a data frame, the default behavior is to transform character input into factors; this behavior can be controlled using the option stringsAsFactors. Data frames can often be treated like matrices, but this is not always true, and some operations are more efficient on data frames while others are less efficient.

Exercise 2.6 Look up the help page for data.frame and use the example code to create a small data frame.
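A minimal sketch of such a structure; the variables here are invented for illustration, and the default character-to-factor conversion described above is visible in the column classes:

> df = data.frame(height = c(180, 165, 172),
+     city = c("Seattle", "Boston", "Seattle"))
> sapply(df, class)
   height      city
"numeric"  "factor"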
2.2.4.3 Environments
An environment is a set of symbol-value pairs, where the value can be any R object, and hence environments are much like lists. Originally environments were used for R's internal evaluation model. They have slowly been exposed as an R version of a hash table, or an associative array. The internal implementation is in fact that of a hash table: the symbol is used to compute the hash index, and the hash index is used to retrieve the value. In the code below, we create an environment, create the symbol-value pair that relates the symbol a to the value 10, and then list the contents of the hash table.

> e1 = new.env(hash = TRUE)
> e1$a = 10
> ls(e1)
[1] "a"
> e1[["a"]]
[1] 10

Environments are different from lists in two important ways, and we will return to this point later in Section 2.5. First, for environments, the values can only be accessed by name; there is no notion of linear order in the hash table. Second, environments, and their contents, are not copied when passed as arguments to a function. Hence they provide one mechanism for pass-by-reference semantics for function arguments, but if used for that, one should be cautious of the potential for problems. Perhaps one of the greatest advantages of the pass-by-value semantics for function calls is that in that paradigm function calls are essentially atomic operations: a failure, or error, part way through a function call cannot corrupt the inputs, whereas when an environment is used, any error part way through a function evaluation could corrupt the inputs. The elements of an environment can be accessed using either the dollar operator, $, or the double square bracket operator. The name of the value desired must be supplied, and unlike lists, partial matching is not used. In order to retrieve multiple values simultaneously from an environment, the mget function should be used.

In many ways environments are special. As noted above, they are not copied when used in function calls. This has at times surprised some users, and here we give a simple example demonstrating that these semantics mean that attributes really cannot be used on environments. In the code below, when e2 is assigned, no copy is made, so both e1 and e2 point to the same internal object. When e2 changes the attribute, it is changed for e1 as well. This is not what happens for most other types.

> e1 = new.env()
> attr(e1, "foo") = 10
> e1
attr(,"foo")
[1] 10
> e2 = e1
> attr(e2, "foo") = 20
> e1
attr(,"foo")
[1] 20

In the next code segment, an environment, e1, is created and has some values assigned into it. Then a function is defined, and that function has some free variables (variables that are not parameters and are not defined in the function). We then make e1 the environment associated with the function, so the free variables will obtain their values from e1. We then change the value of one of the free variables by accessing e1, and that changes the behavior of the function, which demonstrates that no copy of e1 was made.
> e1 = new.env()
> e1$z = 10
> f = function(x) {
+     x + z
+ }
> environment(f) = e1
> f(10)
[1] 20
> e1$z = 20
> f(10)
[1] 30

Next, we demonstrate the semantics of rm in this context. If we remove e1, what should happen to f? If the effect of the command environment(f) = e1 was to make a copy of e1, then rm(e1) should have no effect; but we know that no copy was made, and yet, as we see, removing e1 appears to have no effect.

> rm(e1)
> f(10)
[1] 30
> f
function (x) {
    x + z
}

What rm(e1) does is to remove the binding between the symbol e1 and the internal data structure that contains the data, but that internal data structure is itself left alone. Since it can also be reached as the environment of f, it will remain available.
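Finally, a small sketch of mget, mentioned earlier, which retrieves several bound values at once; e3 is a fresh environment created for illustration:

> e3 = new.env()
> e3$a = 1
> e3$b = 2
> mget(c("a", "b"), envir = e3)
$a
[1] 1

$b
[1] 2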
2.3 Managing your R session
The capabilities and properties of the computer that R is running on can be obtained from a number of builtin variables and functions. The variable R.version$platform is the canonical name of the platform that R was compiled on. The function Sys.info provides similar information. The variable .Platform has information such as the file separator. The function capabilities indicates whether specific optional features have been compiled in, such as whether jpeg graphics can be produced, or whether memory profiling (see Chapter 9) has been enabled.

> capabilities()
    jpeg      png    tcltk      X11     aqua http/ftp
    TRUE     TRUE     TRUE     TRUE     TRUE     TRUE
 sockets   libxml     fifo   cledit    iconv      NLS
    TRUE     TRUE     TRUE    FALSE     TRUE     TRUE
 profmem    cairo
    TRUE    FALSE
A typical session using R involves starting R, loading packages that will provide the necessary tools to perform the analysis you intend and then loading data, and manipulating that data in a variety of ways. For every R session you have a workspace (often referred to as the global environment) where any variables you create will be stored. As an analysis proceeds, it is often essential that you are able to manage your session and see what packages are attached, what variables you have created and often inspect them in some way to find an object you previously created, or to remove large objects that you no longer require. You can find out what packages are on the search path using the search function and much more detailed information can be found using sessionInfo. In the code below, we load a Bioconductor package and then examine the search path. We use ls to list the contents of our workspace, and finally use ls to look at the objects that are stored in the package that is in position 2 on the search path. objects is a synonym for ls and both have an argument all.names that can be used to list all objects; by default, those that begin with a period are not shown.

> library("geneplotter")
> search()
 [1] ".GlobalEnv"             "package:geneplotter"
 [3] "package:annotate"       "package:xtable"
 [5] "package:AnnotationDbi"  "package:RSQLite"
 [7] "package:DBI"            "package:lattice"
 [9] "package:Biobase"        "package:tools"
[11] "package:stats"          "package:graphics"
[13] "package:grDevices"      "package:utils"
[15] "package:datasets"       "package:methods"
[17] "Autoloads"              "package:base"
> ls(2)
 [1] "GetColor"             "Makesense"
 [3] "alongChrom"           "cColor"
 [5] "cPlot"                "cScale"
 [7] "closeHtmlPage"        "dChip.colors"
 [9] "densCols"             "greenred.colors"
[11] "histStack"            "imageMap"
[13] "make.chromOrd"        "multidensity"
[15] "multiecdf"            "openHtmlPage"
[17] "panel.smoothScatter"  "plotChr"
[19] "plotExpressionGraph"  "saveeps"
[21] "savepdf"              "savepng"
[23] "savetiff"             "smoothScatter"
Most of the objects on the search path are packages, and their names carry the prefix package:, but there are also a few special objects. One of these is .GlobalEnv, the global environment. As noted previously, environments are bindings of symbols and values.

Exercise 2.7
What does sessionInfo report? How do you interpret it?
2.3.1 Finding out more about an object
Sometimes it will be helpful to find out about an object. Obvious functions to try are class and typeof. But many find that both str and object.size are more useful.

> class(cars)
[1] "data.frame"
> typeof(cars)
[1] "list"
> str(cars)
'data.frame':   50 obs. of  2 variables:
 $ speed: num  4 4 7 7 8 9 10 10 10 11 ...
 $ dist : num  2 10 4 22 16 10 18 26 34 17 ...
> object.size(cars)
[1] 1248

The functions head and tail are convenience functions that list the first few, or last few, rows of a matrix or data frame.

> head(cars)
  speed dist
1     4    2
2     4   10
3     7    4
4     7   22
5     8   16
6     9   10
> tail(cars)
   speed dist
45    23   54
46    24   70
47    24   92
48    24   93
49    24  120
50    25   85
2.4 Language basics
Programming in R is carried out, primarily, by manipulating and modifying data structures. These different transformations and calculations are carried out using functions and operators. In R, virtually every operation is a function call, and though we separate our discussion into operators and function calls, the distinction is not strong and the two concepts are very similar.

The R evaluator and many functions are written in C, but most R functions are written in R itself. The code for functions can be viewed, and in most cases modified, if so desired. In the code below we show the start of the code for the function colSums; the remainder of its body is omitted here. To view the code for any function, you simply type its name at the prompt and the function will be displayed. Functions can be edited using fix.

> colSums
function (x, na.rm = FALSE, dims = 1)
{
    if (is.data.frame(x))
        x <- as.matrix(x)
    ...
}

Since operators are themselves functions, they can be retrieved by name, using get, and then called directly, just like any other function.

> x = 1:4
> myP = get("+")
> myP
function (e1, e2)  .Primitive("+")
> myP(x, 5)
[1] 6 7 8 9
One class of operators of some interest is the set of operators of the form %any%. Some of these, such as %*%, are part of R but users can define their own using any text string in place of any. The function should be a function of two arguments, although currently this is not checked. In the example below we define a simple operator that pastes together its two arguments.
> "%p%" = function(x, y) paste(x, y, sep = "")
> "hi" %p% "there"
[1] "hithere"
2.5 Subscripting and subsetting
The S language has its roots in the Algol family of languages and has adopted some of the general vector subsetting and subscripting techniques that were available in languages such as APL. This is perhaps the one area where programmers more familiar with other languages fail to make appropriate use of the available functionality. Spending a few hours to completely familiarize yourself with the power of the subsetting functionality will be rewarded by code that runs faster and is easier to read.

There are slight differences between subsetting of vectors, arrays, lists, data.frames and environments that can sometimes catch the unwary, but there are also many commonalities. One thing to keep in mind is that the effect of an NA subscript will depend on the type of NA that is being used.

Subsetting is carried out by three different operators: the single square bracket [, the double square bracket [[, and the dollar, $. We note that each of these three operators is actually a generic function, and users can write methods that extend and override them; see Chapter 3 for more details on object-oriented programming.

One way of describing the behavior of the single bracket operator is that the type of the return value matches the type of the value it is applied to. Thus, a single bracket subset of a list is itself a list. The single bracket operator can be used to extract any number of values. Both [[ and $ extract a single value. There are some differences between these two: $ does not evaluate its second argument while [[ does, and hence one can use expressions. The $ operator uses partial matching when extracting named elements but [ and [[ do not.

> myl = list(a1 = 10, b = 20, c = 30)
> myl[c(2, 3)]
$b
[1] 20

$c
[1] 30
> myl$a
[1] 10
> myl["a"]
$<NA>
NULL

> f = "b"
> myl[[f]]
[1] 20
> myl$f
NULL

Notice that the first subsetting operation does indeed return a list, then that the $ subscript uses partial matching (since there is no element of myl named a) and that [ does not. Finally, we showed that [[ evaluates its second argument while $ does not.
2.5.1 Vector and matrix subsetting
Subsetting plays two roles in the S language. One is an extraction role, where a subset of a vector is identified by a set of supplied indices and the resulting subset is returned as a value; Venables and Ripley (2000) refer to this as indexing. The second purpose is subset assignment, where the goal is to identify a subset of values that should have their values changed; we call this subset assignment.

There are four basic types of subscript indices: positive integers, negative integers, logical vectors and character vectors. These four types cannot be mixed; only one type may be used in any one subscript vector. For matrix and array subscripting, one can use different types of subscripts for the different dimensions. Not all vectors, or recursive objects, support all types of subscripting indices. For example, atomic vectors cannot be subscripted using $, while environments cannot be subscripted using [. Missing values can appear in the index vector and generally cause a missing value to appear in the output.

2.5.1.0.1 Subsetting with positive indices Perhaps the most common form of subsetting is with positive indices. Typically, a vector containing the integer subscripts corresponding to the desired values is used. Thus, to
extract entries one, three and five from a vector, one can use the approach demonstrated in the next code chunk.

> x = 11:20
> x[c(1, 3, 5)]
[1] 11 13 15

The general rules for subsetting with positive indices are:

A subscript consisting of a vector of positive integer values is taken to indicate a set of indexes to be extracted.

A subscript that is larger than the length of the vector being subsetted produces an NA in the returned value.

Subscripts that are zero are ignored and produce no corresponding values in the result.

Subscripts that are NA produce an NA in the result.

If the subscript vector is of length zero, then so is the result.

Some of these rules are demonstrated next.

> x = 1:10
> x[1:3]
[1] 1 2 3
> x[9:11]
[1]  9 10 NA
> x[0:1]
[1] 1
> x[c(1, 2, NA)]
[1]  1  2 NA
Exercise 2.8 Use the seq function to generate a subscript vector that selects those elements of a vector that have even-numbered subscripts.
2.5.1.0.2 Subsetting with negative indices In many cases it is simpler to describe the values that are not wanted than to specify those that are. In this case, users can use negative subscript indices; the general rules are listed below.

A subscript consisting of a vector of negative integer values is taken to indicate the indexes that are not to be extracted.

Subscripts that are zero are ignored and produce no corresponding values in the result.

NA subscripts are not allowed.

A zero length subscript vector produces a zero length answer.

Positive and negative subscripts cannot be mixed.

Exercise 2.9
Use the function seq to generate a sequence of indices so that those elements of a vector with odd-numbered indices can be excluded. Verify this on the builtin letters data. Verify the statement about zero length subscript vectors.

2.5.1.0.3 Subsetting with character indices Character indices can be used to extract elements of named vectors and lists. While technically having a names attribute is not necessary, the only possible result if the vector has no names is NA. There is no way to raise an error or warning with character subscripting of vectors or lists; for vectors NA is returned and for lists NULL is returned. Subsetting of matrices and arrays with character indices is a bit different and is discussed in more detail below.

For named vectors, those elements whose names match one of the names in the subscript are returned. If names are duplicated, then only the value corresponding to the first one is returned. NA is returned for elements of the subscript vector that do not match any name. A character NA subscript returns an NA. If the vector has duplicated names that match a subscript, only the value with the lowest index is returned. One way to extract all elements with the same name is to use %in% to find all occurrences and then subset by position, as demonstrated in the example below.

> x = 1:5
> names(x) = letters[1:5]
> x[c("a", "d")]
a d
1 4
> names(x)[3] = "a"
> x["a"]
a
1
> x[c("a", "a")]
a a
1 1
> names(x) %in% "a"
[1]  TRUE FALSE  TRUE FALSE FALSE
Exercise 2.10
Verify that vectors can have duplicated names and that if a subscript matches a duplicated name, only the first value is returned. What happens with x[NA], and why does that not contradict the claims made here about NA subscripts? Hint: it might help to look back at Section 2.1.4.

Lists subscripted by NA, or where the character supplied does not correspond to the name of any element of the list, return NULL.

2.5.1.0.4 Subsetting with logical indices A logical vector can also be used to subset a vector. Those elements of the vector that correspond to TRUE values in the subscript vector are selected, those that correspond to FALSE values are excluded and those that correspond to NA values are NA. The subscript vector is repeated as many times as necessary and no warning is given if the length of the vector being subscripted is not a multiple of the subscript vector. If the subscript vector is longer than the target, then any entries in the subscript vector that are TRUE or NA generate an NA in the output.
> (letters[1:10])[c(TRUE, FALSE, NA)]
[1] "a" NA  "d" NA  "g" NA  "j"
> (1:5)[rep(NA, 6)]
[1] NA NA NA NA NA NA
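In practice, logical subscripts most often arise from a comparison on the vector itself, which selects the elements for which a condition holds. A small sketch, with values of our own choosing:

> x = c(2, 9, 4, 11)
> x[x > 5]
[1]  9 11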
Exercise 2.11
Use logical subscripts to extract the even-numbered elements of the letters vector.

2.5.1.0.5 Matrix and array subscripts Empty subscripts are most often used for matrix or array subsetting. An empty subscript in any dimension indicates that all entries in that dimension should be selected. We note that x[] is valid syntax regardless of whether x is a list, a matrix, an array or a vector.

> x = matrix(1:9, nc = 3)
> x[, 1]
[1] 1 2 3
> x[1, ]
[1] 1 4 7

One of the peculiarities of matrix and array subscripting is that if the result has only one dimension of length larger than one, and hence is a vector, then the dimension attribute is dropped and the result is returned as a vector. This behavior often causes hard-to-find and hard-to-diagnose bugs. It can be avoided by the use of the drop argument to the subscript function, [. Its use is demonstrated in the code below.

> x[, 1, drop = FALSE]
     [,1]
[1,]    1
[2,]    2
[3,]    3
> x[1, , drop = FALSE]
     [,1] [,2] [,3]
[1,]    1    4    7
Since arrays and matrices can be treated as vectors, and indeed that is how they are stored, it is important to know the relationship between the vector indices and the array indices. Arrays and matrices in S are stored in column major order. This is the form of storage used by FORTRAN and not that used by C. Thus the first, or left-most, index moves the fastest and the last,
or right-most, index the slowest, so that a matrix is filled column by column (the row index changes fastest). The function matrix has an option named byrow that allows the matrix to be filled row by row, rather than column by column.

Exercise 2.12
Let x be a vector of length 10 that has a dimension attribute so that it is a matrix with 2 columns and 5 rows. What is the matrix location of the 7th element of x? That is, which row and column is it in? Alternatively, which element of x is in the second row, first column?

Finally, an array may be indexed by a matrix. If the array has k dimensions, then the index matrix must be of dimension l by k, and column j of the index matrix must contain integers between 1 and the extent of the jth dimension of the array. Each row of the index matrix is interpreted as identifying a single element of the array; thus the subscripting operation returns l values. A simple example is given below. If the matrix of subscripts is either a character matrix or a matrix of logical values, then it is treated as if it were a vector and the dimensioning information is ignored.

> x = array(1:27, dim = c(3, 3, 3))
> y = matrix(c(1, 2, 3, 2, 2, 2, 3, 2, 1), byrow = TRUE,
+     ncol = 3)
> x[y]
[1] 22 14  6
Character subscripting of matrices is carried out on the row and column names, if present. It is an error to use character subscripts if the row and column names are not present. Attaching a dim attribute to a vector removes the names attribute if there was one. If a dimnames attribute is present, but one or more of the supplied character subscripts is not present, a subscript out of bounds error is signaled, which is quite different from the way vectors are treated. Arrays are treated similarly, but with respect to the names on each of the dimensions. For data.frames the effects are different: any character subscript for a row that is not a row name returns a vector of NAs, while any subscript of a column with a name that is not a column name raises an error.

Exercise 2.13
Verify the claims made for character subsetting of matrices and data.frames.

Arrays and matrices can always be subscripted singly, in which case they are treated as vectors and the dimension information is disregarded (as are the dimnames). Analogously, if a data.frame is subscripted with a single subscript, it is interpreted as list subscripting and the appropriate column is selected.
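A brief sketch of these last two behaviors, using an example matrix of our own: character subscripts operate on the dimnames, while a single subscript treats the matrix as a vector stored in column major order.

> m = matrix(1:6, nrow = 2,
+     dimnames = list(c("r1", "r2"), c("c1", "c2", "c3")))
> m["r2", "c3"]
[1] 6
> m[5]
[1] 5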
2.5.1.0.6 Subset assignments Subset expressions can appear on the left side of an assignment. If the subset is specified using positive indices, then the given subset is assigned the values on the right, recycling the values if necessary. Zero subscripts and NA subscripts are ignored.

> x[1:3] = 10
> x
, , 1

     [,1] [,2] [,3]
[1,]   10    4    7
[2,]   10    5    8
[3,]   10    6    9

, , 2

     [,1] [,2] [,3]
[1,]   10   13   16
[2,]   11   14   17
[3,]   12   15   18

, , 3

     [,1] [,2] [,3]
[1,]   19   22   25
[2,]   20   23   26
[3,]   21   24   27
Negative subscripts can appear on the left side of an assignment. In this case the given subset is assigned the values on the right side of the assignment, recycling the values if necessary. Zero subscripts are ignored and NA subscripts are not permitted.

> x = 1:10
> x[-(2:4)] = 10
> x
 [1] 10  2  3  4 10 10 10 10 10 10
For character subscripts, the selected subset is assigned the values from the right side of the assignment, recycling if necessary. Missing values (character
NA) create a new element of the vector, even if there is already an element with the name NA. Note that this is quite different from the effect of a logical NA, which has no effect on the vector.
Exercise 2.14
Verify the claims made about negative subscript assignments. Create a named vector, x, and set one of the names to NA. What happens if you execute x[NA]=20 and why does that not contradict the statements made above? What happens if you use x[as.character(NA)]=20?

In some cases leaving all dimensions out can be useful. For example, x[] selects all elements of the vector x and does not change any of its attributes.

> x = matrix(1:10, nc = 2)
> x[] = sort(x)
2.5.1.0.7 Subsetting factors There is a special method for the single bracket subscript operator on factors. For this method the drop argument indicates whether or not any unused levels should be dropped from the return value. The [[ operator can be applied to factors and returns a factor of length one containing the selected element.
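A small sketch of the factor method and its drop argument, with example values of our own:

> f = factor(c("a", "b", "c"))
> f[1:2]
[1] a b
Levels: a b c
> f[1:2, drop = TRUE]
[1] a b
Levels: a b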
2.6 Vectorized computations
By vectorized computations we mean any computation, by the application of a function call, or an operator (such as addition), that when applied to a vector automatically operates directly on all elements of the vector. For example, in the code below, we add 3 to all elements of a simple vector.

> x = 11:15
> x + 3
[1] 14 15 16 17 18

There was no need to make use of a for loop to iterate over the elements of x. Many R functions and most R operators are vectorized. In the code below nchar is invoked with a vector of month names and the result is a vector with
the number of characters in the name of each month; there is no need for an explicit loop.
> nchar(month.name)
 [1] 7 8 5 5 3 4 4 6 9 7 8 8
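For contrast, the sketch below computes the same quantity with an explicit loop; the vectorized call is both shorter and typically much faster.

> res = integer(length(month.name))
> for (i in seq_along(month.name)) {
+     res[i] = nchar(month.name[i])
+ }
> identical(res, nchar(month.name))
[1] TRUE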
2.6.1 The recycling rule
Since vectorized computations occur on most arithmetic operators, we may encounter the problem of adding together two vectors of different lengths, and some rule is needed to describe what will happen in that situation. This is often referred to as the recycling rule. The recycling rule states that the shorter vector is replicated enough times to have at least the length of the longer vector, and the operator is then applied elementwise to the longer vector and the corresponding elements of the recycled vector. If the length of the longer vector is not a multiple of the length of the shorter one, a warning will be signaled.

> 1:10 + 1:3
 [1]  2  4  6  5  7  9  8 10 12 11
When a binary operation is applied to two or more matrices and arrays, they must be conformable, which means that they must have the same dimensions. If only one operand is a matrix or array and the other operands are vectors, then the matrix or array is treated as a vector. The result has the dimensions of the matrix or array. Dimension names, and other attributes, are transferred to the output. In general the attributes of the longest element, when considered as a vector, are retained for the output. If two elements of the operation are of the same length, then the attributes of the first one, when the statement is parsed left to right, are retained. Any vector or array computation where the recycling rule applies and one of the elements has length zero returns a zero length result. Some examples of these behaviors are given below.

> 1:3 + numeric()
numeric(0)
> 1:3 + NULL
numeric(0)
> x = matrix(1:10, nc = 2)
> x + (1:2)
     [,1] [,2]
[1,]    2    8
[2,]    4    8
[3,]    4   10
[4,]    6   10
[5,]    6   12

2.7 Replacement functions
In R it is sometimes helpful to appear to directly change a variable. We saw some examples of this with subassignment; e.g., x[2] = 10 gives the impression of having changed the value of x. Similarly, changing the names on a vector can be handled using names(x) = newVal. Some reflection on the fact that R is a pass-by-value language and that all operations are function calls means that, in principle, such an operation is not possible. That is, it is not possible to change x, since the function operates on a copy of x. Following Venables and Ripley (2000), any assignment where the left-hand side is not a simple identifier will be described as a replacement function. These functions achieve their objective by rewriting the call in such a way that the named variable (x in the examples above) is explicitly reassigned. We show how these two commands would be rewritten, below. You can make these calls directly, if you wish, and the replacement functions are documented and can be inspected, just like any other functions.

> x = 1:4
> x = `[<-`(x, 2, 10)
> x
[1]  1 10  3  4
And for the names example:

> x = `names<-`(x, letters[1:4])
> names(x)
[1] "a" "b" "c" "d"
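Users can define their own replacement functions by creating a function whose name ends in <-; its last formal argument must be named value, and it receives the right-hand side of the assignment. The sketch below, with a function name of our own invention, replaces the first element of a vector.

> "first<-" = function(x, value) {
+     x[1] = value
+     x
+ }
> z = 1:3
> first(z) = 99
> z
[1] 99  2  3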
file.rename renames the file specified by its first argument to the name given as its second argument. Symbolic links can be created using the file.symlink function. Lastly, one can create and manipulate directories themselves. The function dir.create will create a directory, which at that point can be manipulated like any other file. Note, however, that if the directory is not empty, file.remove will not remove it and will return FALSE. To remove directories that contain files, one must use the unlink function. unlink also works on plain files, but file.remove is probably more intuitive and slightly less dangerous. Note that incautious use of unlink can irretrievably remove important files.
In the example below, we demonstrate the use of some of these functions. We do most of the reading and writing in R's temporary directory.

> newDir = file.path(tempdir(), "newDir")
> newDir
[1] "/tmp/RtmpHPmzRh/newDir"
> newFile = file.path(newDir, "blah")
> newFile
[1] "/tmp/RtmpHPmzRh/newDir/blah"
> dir.create(newDir)
> file.create(newFile)
[1] TRUE
> file.exists(newDir, newFile)
[1] TRUE TRUE
> unlink(newDir, recursive = TRUE)
Setting the recursive argument of unlink to TRUE is needed to remove non-empty directories. If this argument has its default value, FALSE, then the command fails to remove a non-empty directory, just as file.remove does. Unix users will recognize this as the equivalent of typing rm -r from the command line, so be careful! You can remove files and directories that you did not intend to, and they generally cannot easily be retrieved or restored.

The function file.choose prompts the user, interactively, to select a file, and then returns that file's name as a character vector. On Windows, users are presented with a standard file selection dialogue; on Unix-like operating systems, they are expected to type the name at the command line.
4.2.3 Working with R's binary format
R objects can be saved in a standard binary format, which is related to XDR (Eisler, 2006), and is platform independent. An arbitrary number of R objects can be saved into a single file using the save command. They can be reloaded into R using the load command. These files can be copied to any other computer and loaded into R without any translation. When an archive has been loaded, the return value of load is the names of all objects that were loaded.
Both save and load allow the caller to specify a specific environment in which to find the bindings or in which to store the restored bindings.
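A minimal sketch of this round trip, using a file name of our own choosing:

> x = 1:10
> y = letters[1:3]
> fname = file.path(tempdir(), "xy.Rda")
> save(x, y, file = fname)
> rm(x, y)
> loaded = load(fname)
> loaded
[1] "x" "y"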
4.3 Connections
As indicated above, all data input and output can be performed via connections. Connections are basically an extension of the notion of a file and provide a richer set of tools for reading and writing data. Connections provide an abstraction of an input data source. Using connections allows a function to work in essentially the same way for data obtained from a local file, an R character vector, or a file on the Internet.

Connections are implemented using the S3 class system and the base class is connection, which different types of connections extend. There are both summary and print methods for connections. The most commonly used connection is a file, which can be opened for reading or writing data. The set of possible values that can be specified for the open argument is detailed in the manual page. Other types of connections are the FIFO, pipe and socket. These are all described in some detail below. Connections can be used to read from zipped files, using one of gzfile, bzfile or unz, depending on what tool was used to compress the file. These connections can be supplied to readLines or read.delim, which then simply read directly from the compressed files.

Of some general interest is the function showConnections, which will show all connections and their status. With the default settings, only user-created open connections are displayed. This can be helpful in ensuring that a connection is open and ready, or for finding connections that have been opened and forgotten.

> showConnections(all = TRUE)
  description class      mode text   isopen   can read can write
0 "stdin"     "terminal" "r"  "text" "opened" "yes"    "no"
1 "stdout"    "terminal" "w"  "text" "opened" "no"     "yes"
2 "stderr"    "terminal" "w"  "text" "opened" "no"     "yes"
3 "RIO.tex"   "file"     "w+" "text" "opened" "yes"    "yes"
4 ""          "file"     "w+" "text" "opened" "yes"    "yes"
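As an illustration of the compressed-file connections mentioned above, the sketch below reads a gzip-compressed text file; the file name is our own and is assumed to exist.

con = gzfile("mydata.txt.gz", open = "r")
lines = readLines(con)
close(con)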
Some connections support the notion of pushing character strings back onto the connection. One might presume that pushBack can only push back text that has already been read, which would be similar to rewinding a file, but this is not true: any character vector can be pushed back onto a connection that supports pushing back.

Not all operating systems support all connections. In order to determine whether your system has support for sockets, pipes or URLs, the capabilities function can be used.

> capabilities()
    jpeg      png    tcltk      X11     aqua http/ftp  sockets
    TRUE     TRUE     TRUE     TRUE     TRUE     TRUE     TRUE
  libxml     fifo   cledit    iconv      NLS  profmem    cairo
    TRUE     TRUE    FALSE     TRUE     TRUE     TRUE    FALSE

4.3.1 Text connections
A text connection is essentially a device for reading from, or writing to, an R character vector. The code below is taken from the manual page for textConnection and it demonstrates some of the basic operations that can be carried out on a textConnection that is being used for input. The connection can be used as input for any of the input functions, such as readLines and scan, but it also supports pushing data onto the connection.

> zz = textConnection(LETTERS)
> readLines(zz, 2)
[1] "A" "B"
> showConnections(all = TRUE)
  description class            mode text   isopen   can read can write
0 "stdin"     "terminal"       "r"  "text" "opened" "yes"    "no"
1 "stdout"    "terminal"       "w"  "text" "opened" "no"     "yes"
2 "stderr"    "terminal"       "w"  "text" "opened" "no"     "yes"
3 "RIO.tex"   "file"           "w+" "text" "opened" "yes"    "yes"
4 ""          "file"           "w+" "text" "opened" "yes"    "yes"
5 "LETTERS"   "textConnection" "r"  "text" "opened" "yes"    "no"
> scan(zz, "", 4)
[1] "C" "D" "E" "F"
> pushBack(c("aa", "bb"), zz)
> scan(zz, "", 4)
[1] "aa" "bb" "G"  "H"
> close(zz)

One can also write to a textConnection, and the effect is to create a character vector with the specified name; but you must be sure to use open="w" so that it is open for writing. You almost surely want to set local=TRUE; otherwise, the text connection is created in the top-level workspace. Since R's input and output can be redirected to a connection, this allows users to capture function output and store it in a computable form. In the code below, we create a text connection that can be written to, then carry out some computations and use sink to divert the output of the commands to the text connection. Since we did not set local=TRUE, creating the text connection creates a global variable named foo. We did set split=TRUE so that the output of the commands would be shown in the terminal and collected into the text connection. Other text can be written directly to the text connection using cat or other similar functions.

> savedOut = textConnection("foo", "w")
> sink(savedOut, split = TRUE)
> print(1:10)
 [1]  1  2  3  4  5  6  7  8  9 10
> cat("That was my first command \n")
That was my first command
> letters[1:4]
[1] "a" "b" "c" "d"
> sink()
> close(savedOut)
> cat(foo, sep = "\n")
Another alternative for capturing the output of commands is the suggestively named capture.output. Unlike using sink, the commands for which the output is wanted are passed explicitly to capture.output.
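A brief sketch: here the printed summary of the builtin cars data frame is captured as a character vector, one element per line of printed output.

out = capture.output(summary(cars))
length(out)   # the number of lines the printing produced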
4.3.2 Interprocess communications
Being able to pass data from one process to another can lead to substantial benefits and should often be considered as an alternative to reimplementation or other, more drastic solutions. One of the more popular methods of sharing data between processes has been the use of intermediate files; one process writes a file and the other reads it. However, if the mechanics are left to the programmer, this procedure is fraught with danger and often fails in rather peculiar ways. Fortunately, there are a wide number of programmatic solutions that allow software to handle most of the organizational details, thereby freeing the programmer to concentrate on the conceptual details. Some of the different connections and mechanisms for interprocess communication (IPC) have implementations as R connections, and we discuss those here. We also make some more general comments, and it is likely that future versions of R will include more refined IPC tools. A very sophisticated and detailed discussion of many of the concepts mentioned here is given in Stevens and Rago (2005), particularly Chapters 15 and 17 of that reference.

4.3.2.1 Socket connections
You can use the capabilities function to determine whether sockets, and hence socketConnections, are supported by your version of R. If they are, then the discussion in this section will be relevant. If they are not supported, then you will not be able to use them.

Sockets are a mechanism that can be used to support interprocess communications. Each of the two processes establishes a connection to a socket, which is merely one end of the intercommunication process. One process is typically the server and the other the client. In the example below, we demonstrate how to establish a socket connection between two running R processes. For simplicity we presume that they are both running on the same computer, but that is not necessary; in the general case, the processes can be on different computers. Furthermore, there is no requirement that both ends be R processes.

The default for socket connections is to be in non-blocking mode. That means that they will return as soon as possible. On input, they return with the available input, possibly nothing; and on output, they return regardless of whether the write succeeded.

The first R process sets up a socket connection on a named port in server mode. The port number is not important, but you need to select one that is high enough not to conflict with a port that is in use.

serverCon = socketConnection(port = 6543, server = TRUE)
writeLines(LETTERS, serverCon)
close(serverCon)
Then, the second R process opens a connection to the same port but, this time, in client mode. Since the client connection is non-blocking, we must poll until we have complete input. The call to Sys.sleep ensures that some time elapses between calls to readLines and allows other processes to be run.

clientCon = socketConnection(port = 6543)
readLines(clientCon)
while (isIncomplete(clientCon)) {
    Sys.sleep(1)
    readLines(clientCon)
}
close(clientCon)

Unfortunately, connections are not exposed at the C level, so there is no opportunity for accessing them directly at that level.

4.3.2.2 Pipes
A pipe is a shell command where the standard input can be written from R and the standard output can be read from R. A call to pipe creates a connection that can be opened by writing to it, or by reading from it. The pipe can be used as a connection for any function that reads and writes from connections. In the example below, the system command cal is used to get a calendar.

> p1 = pipe("cal 1 2006")
> p1
 description        class         mode         text
"cal 1 2006"       "pipe"          "r"       "text"
      opened     can read    can write
    "closed"        "yes"        "yes"
> readLines(p1)
[1] "    January 2006"      " S  M Tu  W Th  F  S"
[3] " 1  2  3  4  5  6  7"  " 8  9 10 11 12 13 14"
[5] "15 16 17 18 19 20 21"  "22 23 24 25 26 27 28"
[7] "29 30 31"              ""
It is reasonably simple to extend this to provide a function that returns the calendar for either the current month, or any other month or year. The function is provided in RBioinf, and the code is shown below.
> library("RBioinf")
> Rcal
function (month, year)
{
    pD = function(x) pipe(paste("date \"+%", x, "\"", sep = ""))
    if (missing(month))
        month = readLines(pD("m"))
    if (missing(year))
        year = readLines(pD("Y"))
    cat(readLines(pipe(paste("cal ", month, year))), sep = "\n")
}

An alternative to the use of pipe is available using the intern argument of system. Notice that the following calls are equivalent; but pipe is more general, and system could easily be written in terms of pipe. Further, there is no real reason why a pipe cannot be bidirectional; Stevens and Rago (2005) refer to these as STREAMS-based pipes, which are opened for both reading and writing, but only unidirectional pipes have been implemented in R. Basically this means that to capture the output of any pipe opened for writing, you will need to redirect the output to a file, or perhaps a socket or FIFO, and then read from that using a separate connection. On OS X, users can read and write from the system clipboard using pipe("pbpaste") and pipe("pbcopy", "w"), respectively.

> ww = system("ls -1", intern = T)
> xx = readLines(pipe("ls -1"))
> all.equal(ww, xx)
[1] TRUE

Another advantage of pipes over calls to system is that one can pass values to the system call via pipe after the subprocess has been started. With calls to system, the entire command must be assembled and sent at one time.

Exercise 4.7
Rewrite Rcal to use system.

Exercise 4.8
The following code establishes a pipe to the system command wc, which counts words, characters and lines. What happens to the output? How would you modify the commands to retrieve the output of wc?
WC = pipe("wc", open = "w")
writeLines(letters, WC)

4.3.2.3 FIFOs
A FIFO is a special kind of file that stores the data that are written to it. FIFO is an acronym for first-in, first-out, and FIFOs are also often referred to as named pipes. The next program to read the FIFO extracts the first record that was written, as the name suggests. Once a record has been read, it is automatically removed. Thus, the FIFO only retains data that have been written but not yet read. FIFOs can thus be used for interprocess communication (as can socketConnections); but since FIFOs are named files, the communication channel is via the file system. Not all platforms support FIFOs, but most Unix-like systems, including OS X, do.

Pipes can only be used to communicate between processes that share a common ancestor that created the pipe. When unrelated processes want to communicate, they must make use of some other mechanism, and often the appropriate tool is a FIFO. Stevens and Rago (2005) give two different uses for FIFOs: first, as a way for shell commands to pass data without creating intermediate temporary files; and second, as a rendezvous point for client-server applications to pass data between clients and servers.
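A minimal sketch of a FIFO used within a single process, on a platform where capabilities()["fifo"] is TRUE; the file name is our own.

ff = fifo(file.path(tempdir(), "myfifo"), open = "w+")
writeLines(c("first", "second"), ff)
readLines(ff, 1)   # returns "first": first in, first out
close(ff)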
4.3.3 Seek
Some connections support direct interactions with the location of the current read and write positions. If the connection supports these interactions, isSeekable will return TRUE, and seek can be used to find the current position and to alter it. In the code chunk below, we create a file, write to it, and then manipulate the reading position using seek. Notice that the part of the file read is repeated. The connection is closed and the file is unlinked at the end of the code chunk.

> fName = file.path(tempdir(), "test1")
> file.create(fName)
[1] TRUE
> sFile = file(fName, open = "r+w")
> cat(1:10, sep = "\n", file = sFile)
> seek(sFile)
[1] 21
> readLines(sFile, 3)
[1] "1" "2" "3"
> seek(sFile, 2)
[1] 6
> readLines(sFile)
[1] "2"  "3"  "4"  "5"  "6"  "7"  "8"  "9"  "10"
> close(sFile)
> unlink(fName)

Thus, using seek, one can treat a large file as random access memory. However, the cost can be quite high, as reading and writing tend to be a bit slow. Other alternatives are to read the data in and use internal tools, or database tools such as those described in Chapter 8.
4.4 File input and output
The most appropriate tool for reading and writing from files will generally depend on the contents of the file, and the purpose to which those contents will be put. The most general low-level reading function is scan. Perhaps two of the most general commands for file input/output are readLines and writeLines. As their names suggest, the former is used to read input and the latter to write output. Both functions take a parameter con, which will take either the name of a file or a connection. The default for this parameter is to read/write from stdin and stdout, respectively. From here on, however, they differ.

readLines has the following formal arguments:

n the (maximal) number of lines to read. Negative values indicate reading to the end of the connection. The default is -1.

ok a logical value indicating whether it is OK to reach the end of the connection before n > 0 lines are read. If not, an error will be generated. The default is TRUE.

warn a logical value indicating whether or not to warn the user if a text file is missing a final end-of-line character.

encoding the encoding that is assumed for the input.

writeLines has the following formal arguments:

text a character vector.
sep a string to be written to the connection after each line of text. The default is the newline character, "\n".

> a = readLines(con = system.file("CONTENTS", package = "base"),
+     n = 2)
> a
[1] "Entry: Arithmetic"
[2] "Aliases: + - * ** / ^ %% %/% Arithmetic"
> writeLines(a)
Entry: Arithmetic
Aliases: + - * ** / ^ %% %/% Arithmetic

A rather frequent question on the R mailing list is how to create files and write to those files within a loop. For example, suppose that there is some interest in carrying out a permutation test and saving the permutations in separate files. In the code below, we show how to do this for a small example with 10 permutations. The files are written into the temporary directory in this example so that they will be removed when R exits. You should choose some other location to write to, but that will depend on your local file system.

> mydir = tempdir()
> for (i in 1:10) {
+     fname = paste("perm", i, sep = "")
+     prm = sample(1:10, replace = FALSE)
+     write(prm, file = file.path(mydir, fname))
+ }

Exercise 4.9
Select a location on your local file system and use the code above to write files in that location. How would you modify the code to write a comma-separated set of numbers? What seed was used to generate the permutations? Can you set a seed so you always get the same permutations?
4.4.1 Reading rectangular data
In many cases the data to be read in are in the form of a rectangular, or nearly rectangular, array. For those cases, there are specialized functions (read.table, read.delim and read.csv) with variants (read.csv2 and read.delim2) that are tailored to European norms for representing numbers.
These functions will take either the name of a file or a connection and attempt to read data from that. There are three primary ways in which they differ: what is considered to be a separator of the data items, the character used to delimit quoted strings, and what character is used for the decimal indicator. The most general of these is read.table and, in fact, the others are merely wrappers to read.table with appropriate values set for the arguments. However, comma-separated values (.csv) occur often enough that it is worthwhile to have the convenience function. Among the more important arguments to read.table are:

as.is by default, character variables are turned into factors; if as.is is set to TRUE, they are left as character values. The transforming of strings into factors can also be controlled using the option stringsAsFactors.

na.strings a vector of strings that are to be interpreted as missing values; any corresponding entries will be converted to NA during processing.

fill if set to TRUE and some rows have unequal lengths, shorter rows are padded.

comment.char a single character indicating the comment character. For any line of the input file, all characters after the comment character are skipped.

sep the field separator.

header a logical value indicating whether or not the first line of the file contains the variable names.
When the data do not appear to be read in correctly, the three most common causes are: the quote character is used in the file for something other than a quotation, and hence the symbols are not matched (for biological data, 3' and 5' are often culprits); the comment character appears in the file, not as a comment; or there are some characters in the file that have an unusual encoding and have caused confusion.

The default behavior of these different routines is to turn character variables (columns) into factors. If this is not desired, and very often it is not, then either the as.is argument should be used or the more general colClasses should be used. colClasses can be used to specify classes for all columns. If a column has the value "NULL" in colClasses, then that column is skipped and not read into R.
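A sketch of colClasses, assuming a comma-separated file named expr.csv whose first column should remain character, whose second column is to be skipped, and whose third column is numeric:

df = read.csv("expr.csv",
    colClasses = c("character", "NULL", "numeric"))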
4.4.2 Writing data
Since R’s roots are firmly in statistical applications where data have a rectangular form, typically with rows corresponding to cases and columns to variables, there are specialized tools for writing rectangular data. Some of these
are aimed at producing a table that is suitable for being imported into a spreadsheet application such as Gnumeric or Microsoft's Excel.

The function write can be used to write fairly arbitrary objects. While it has a number of arguments that are useful for writing out matrices, it does not deal with data frames. For writing out data frames, there are three separate functions: the very general write.table and two specialized interfaces to the same functionality, write.csv and write.csv2.

Another way to write R objects to a file is with the function cat. The transformation of R objects to character representations suitable for printing is different from those carried out by either write or print. By default, cat writes to the standard output connection, but the file argument can be any connection, the name of a file or the special form "|cmd", in which case the output of cat is sent to the system command named. In the code chunk below, we use this feature to send the output of cat to the cal command.

> cat("10 2005", file = "|cal")

Other functions that are of interest include writeBin and readBin for reading and writing binary data, as well as writeChar and readChar for reading and writing character data. Readers are referred to the relevant manual pages and the R Data Import/Export Manual for more details on these functions.
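A small sketch of the binary functions, with a file name of our own:

fn = file.path(tempdir(), "ints.bin")
writeBin(1:5, fn, size = 4)                      # write five 4-byte integers
readBin(fn, what = "integer", n = 5, size = 4)   # read them back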
4.4.3 Debian Control Format (DCF)
Debian Control Format (DCF) is used for some of the package-specific files in R (see Chapter 7); in particular, the DESCRIPTION file in all R packages and the CONTENTS file for installed packages. The functions read.dcf and write.dcf are available in R to read and write files in this format. For a description of DCF, see help("read.dcf").
> x = read.dcf(file = system.file("CONTENTS", package = "base"),
+     fields = c("Entry", "Description"))
> head(x, n = 3)
     Entry
[1,] "Arithmetic"
[2,] "AsIs"
[3,] "Bessel"
     Description
[1,] "Arithmetic Operators"
[2,] "Inhibit Interpretation/Conversion of Objects"
[3,] "Bessel Functions"
> write.dcf(x[1:3, ], file = "")
Entry: Arithmetic
Description: Arithmetic Operators

Entry: AsIs
Description: Inhibit Interpretation/Conversion of Objects

Entry: Bessel
Description: Bessel Functions

read.dcf returns a matrix, while write.dcf takes a matrix and transforms it into DCF formatted output; the empty string "" as the file parameter tells the system to write to the console instead of to a particular file.
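Since every installed package has a DESCRIPTION file in this format, read.dcf also gives a quick way to query package metadata; a brief sketch:

read.dcf(system.file("DESCRIPTION", package = "stats"),
    fields = c("Package", "Version"))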
4.4.4 FASTA Format
Biological sequence data are available in a very wide range of formats. The FASTA format is probably the most widely used, but there are many others. A FASTA file consists of one or more biological sequences. Each sequence is preceded by a single line, beginning with a >, which provides a name and/or a unique identifier for the sequence and often other information. The description line can be followed by one or more comment lines, which are distinguished by a semicolon at the beginning of the line. After the header line and comments, the sequence is represented by one or more lines. Sequences may correspond to protein sequences or DNA sequences and should make use of the IUPAC codes; these can be found in many places, including http://en.wikipedia.org/wiki/Fasta_format. All lines should be shorter than 80 characters. Functions for reading and writing in the FASTA format are provided in the Biostrings package as readFASTA and writeFASTA, respectively.

Exercise 4.10
Modify the function readFASTA in the Biostrings package, or any other FASTA reading function, to (1) transform the data to uppercase, (2) check that only IUPAC symbols are contained in the sequence data, and (3) check the line lengths to see if they are shorter than 80 characters.

Exercise 4.11
There is a file in the Biostrings package, in a folder named extdata, named exFASTA.mfa. Using the system.file and readLines functions, process this file to answer the following questions. How many records are in the file? How long, in number of characters, are the different records? Can you tell if they are DNA or protein sequences that have been encoded?
Compare your approach with that in the readFASTA function. What are the differences? Run a timing comparison to see which is faster (you might want to refer to Section 9.5 for details on how to do that).
4.5 Source and sink: capturing R output
While the standard interactions with R are primarily carried out by users typing commands to the command line and subsequently viewing the outputs that those commands generate, there are many situations where more programmatic interactions are important. Often, users will want to either supply input to R in some other way, or they may want to capture the output of a command into a file or variable so that it can be programmatically manipulated, or simply for future reference.

The main interface for input is source, which reads, parses and evaluates R commands. The input can be a file or a connection. Since the input is parsed, there is generally code rearrangement and, in particular, by default, comments are dropped. The argument keep.source can be used to override this behavior, and there is also a global option, of the same name, that can be used to set the behavior for the entire session. When source is run, it first reads, then parses, and finally evaluates, so no command will be evaluated if there is a syntax error in the file. Users can also carry out the three steps themselves, if they choose. They can first use scan to read in the commands as text; then use parse to parse but not evaluate those commands; and finally use eval to evaluate the set of parsed expressions. Such a strategy allows for much more fine-grained control over the process, although it is seldom needed.

In other cases it will be quite helpful to capture the output of different commands issued to R. One example is the function printWithNumbers discussed in Chapter 9, which provides appropriate line numbers for R functions so that the at argument for trace can be more easily used. To implement this function, we used capture.output, which can be used to capture, as text, the output of a set of provided R expressions.

Alternatively, R's standard output can be diverted using sink. A call to sink will divert all standard output, but not error or warning messages, nor, one presumes, other conditions (see Section 2.11). To divert error and warning messages, set the argument type to "messages". This capability should be used with caution, however, since it will not be easy to determine when errors are being signaled. Note that the requirements for the file argument are different when messages are being diverted than when output is being diverted. To turn off the redirection, simply call sink a second time with a NULL argument. The redirections established by sink form a stack, with new
redirections added to the top, and calls to sink with NULL arguments popping the top redirection off the stack. The function sink.number can be used to determine how many redirections are in use. It is also possible to both capture the output, via a redirection, and to continue to have it displayed on the screen. This is achieved by setting the split argument in the call to sink.
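Returning to the read, parse and evaluate strategy described above for source, a minimal sketch with commands of our own choosing:

> cmds = c("x = 1:3", "sum(x)")   # as if read from a file with scan
> exprs = parse(text = cmds)      # parsed, but not yet evaluated
> eval(exprs[[1]])
> eval(exprs[[2]])
[1] 6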
4.6 Tools for accessing files on the Internet
R functions that can be used to obtain files from the Internet include download.file and url; the latter opens a connection and allows for reading from that connection. The function url.show renders the remote file in the console.

There are R functions for encoding URLs, URLencode and URLdecode, that can be used to encode and decode URL names. URLs have a set of reserved characters, and not all characters are valid. A character that is not valid in a URL must be percent-encoded: represented by a % sign followed by its two-digit hexadecimal value.

The RCurl package provides an extensive interface to libcURL. The libcURL library supports transferring files via a wide variety of protocols including FTP, FTPS, HTTP, HTTPS, GOPHER, TELNET, DICT, FILE and LDAP. libcURL also supports HTTPS certificates, HTTP POST, HTTP PUT, FTP uploading, Kerberos, HTTP form-based upload, proxies, cookies, user+password authentication, file transfer resume, and HTTP proxy tunneling. This package supports a very wide range of interactions with web resources and in particular is very helpful in posting and retrieving data from forms.

Many bioinformatic databases and tools provide forms-based interfaces. These are often used interactively, by basically pointing a browser to the appropriate page and filling in values. However, one can post and retrieve answers programmatically. Alternatively, many provide Biomart interfaces, and the tools described in Section 8.6.3 can be used to obtain the data. RCurl can eliminate manual work ("screen-scraping") with web pages to obtain data that have not been made available through standard web services. For example, when data can be obtained interactively using text input, radio button settings, and check-box selections, code resembling the following can be used to obtain that same data programmatically:

> postForm("http://www.speakeasy.net/main.php",
+     "some_text" = "Duncan", "choice" = "Ho",
+     "radbut" = "eep", "box" = "box1, box2")

The resulting data must be parsed, but the htmlTreeParse function can be very helpful for this. More details on XML and HTML parsing are given in Section 8.5.
The next example and its solution are based on a discussion from the R help mailing list. The Worldwide Protein Data Bank (wwPDB) is an online source for PDB data. The mission of the wwPDB is to maintain a single Protein Data Bank (PDB) Archive of macromolecular structural data that is freely and publicly available to the global community. The web site is at http://www.wwpdb.org/, and data can be downloaded from that site. However, there are an enormous number of files, and one might want to be somewhat selective in downloading. The code below shows how to obtain all the file names; individual files can then be obtained using download.file or command line tools such as wget. The calls to strsplit and gsub split the string on the newline character and remove any \r (carriage return) characters that are present. We could have done that in one step by using a regular expression (Section 5.3), but then strsplit becomes painfully slow.

> library("RCurl")
> url = "ftp://ftp.wwpdb.org/pub/pdb/data/structures/all/pdb/"
> fileNames = getURL(url,
+     .opts = list(customrequest = "NLST *.gz"))
> fileNames = strsplit(fileNames, "\n", fixed = TRUE)[[1]]
> fileNames = gsub("\r", "", fileNames)
> length(fileNames)
[1] 51261

The file names are informative, as they encode PDB identifiers, and given a map to these, say from some genes of interest, perhaps using biomaRt (Section 8.6.3), one can download individual files of interest. In the code below, we download the first file in the list using download.file.

> fileNames[1]
[1] "pdb100d.ent.gz"
> download.file(paste("ftp://ftp.wwpdb.org/pub/pdb/data/",
+     "structures/all/pdb/pdb100d.ent.gz", sep = ""),
+     destfile = "pdb100d.ent.gz")
Chapter 5 Working with Character Data
5.1 Introduction
Working with character data is fundamental to many tasks in computational biology, but it is not that common a problem in statistical applications. The tools that are available in R are more oriented to the processing of data into a form that is suitable for statistical analysis, or to formatting outputs for publication. There is an increased awareness, and corresponding capabilities, for dealing with different languages and file encodings, but we will not do more than briefly touch on this subject. In this chapter we review the builtin capabilities in R, but then turn our attention to some problems that are more fundamental to biological applications.

In biological applications there are a number of different alphabets that are relevant; perhaps the best known of them is the four-letter alphabet that describes DNA, but there are others. The basic problems are exact matching of one or more query sequences in a target sequence, inexact matching of sequences, and the alignment of two or more sequences. There is also substantial interest in text mining applications, but we will not cover that subject here. Our primary focus will be on the methodology provided in the Biostrings package, but there are a number of other packages that have a biological focus, including seqinR, annotate, matchprobes, GeneR and aaMI.

String matching problems exist in many different contexts, and have received a great deal of attention in the literature. Cormen et al. (1990) provide a nice introduction to the methods, while Gusfield (1997) gives a much more in-depth discussion with many biological applications. One can either search for exact matches of one string in another, or for inexact matches. Inexact matching is more difficult and often more computationally expensive.

The chapter is divided into three main sections. First we describe the builtin functions for string handling and manipulation, plus some generic string handling issues. Next we discuss regular expressions and tools such as grep and agrep, and finally we present more detail on the biological problems and present a number of concrete examples.
5.2 Builtin capabilities
Character vectors are one of the basic vector types in R (see Chapter 2 for more details). A character vector consists of zero or more character strings. Only the vector can be easily manipulated at the R level, and most functions are vectorized. The string "howdy" will be stored as a length one character vector whose first element has five characters. In the code below, we construct a character vector of length three. Then we use nchar to ask how many characters there are in each of the strings. The function nchar returns the length of the elements of its argument. There are three different ways to measure length: bytes, chars and width. These are generally the same, at least in locales with single-byte characters.

> mychar = c("as", "soon", "as possible")
> mychar
[1] "as"          "soon"        "as possible"
> nchar(mychar)
[1]  2  4 11
Like other basic types, a character vector can be of length zero; and in the code below we demonstrate the difference between a character vector of length zero and a character string of length zero. The variable x represents a zero length character vector, while y represents a length one character vector, whose single element is the empty string.

> x = character(0)
> length(x)
[1] 0
> nchar(x)
integer(0)
> y = ""
> length(y)
[1] 1
> nchar(y)
[1] 0
To access substrings of a character vector, use either substr or substring. These two functions are very similar but handle recycling of arguments differently. The first three arguments are the character vector, a set of starting indices and a vector of ending indices. For substr, the length of the returned value is always the length of its first argument (x). For substring, it is the length of the longest of these three supplied arguments; the other arguments are recycled to the appropriate length.

> substr(x, 2, 4)
character(0)
> substr(x, 2, rep(4, 5))
character(0)
> substring(x, 2, rep(4, 5))
character(0)

A biological application of substring is to build a function to translate DNA into the corresponding amino acid sequence. We can use substring to split an input DNA sequence into triples, which are then used to index into the GENETIC_CODE variable, and finally we paste the amino acid sequences together. The GENETIC_CODE variable presumes that the sequence given is the sense strand.

> rD = randDNA(102)
> rDtriples = substring(rD, seq(1, 102, by = 3),
+     seq(3, 102, 3))
> paste(GENETIC_CODE[rDtriples])
 [1] "V" "R" "N" "Y" "P" "S" "K" "A" "L" "C" "*" "Q" "V"
[14] "A" "C" "L" "Q" "*" "S" "N" "M" "D" "D" "L" "Q" "Q"
[27] "L" "S" "N" "L" "V" "C" "L" "H"

Exercise 5.1
Using the code above, create a simple function that maps from DNA to the amino acid sequence.

It is also possible to modify a string, and the replacement versions of substr and substring are used for this purpose. In the example below, we demonstrate some differences between the two functions. These functions are evaluated for their side effects, which are changes to the character strings contained
in their first argument. There are no default values for either the starting position or the ending position in substr. For substring there is a default value for the stopping parameter.
> substring(x, 2, 4) = "abc" > x character(0) > x = c("howdy", "dudey friend") > substr(x, 2, 4) = "def" > x [1] "hdefy"
"ddefy friend"
> substring(x, 2) paste(1:3, "+", 4:5) [1] "1 + 4" "2 + 5" "3 + 4" > paste(1:3, 1:3, 4:6, sep = "+") [1] "1+1+4" "2+2+5" "3+3+6"
In some cases, the desire is to reduce a character vector with multiple character strings to one with a single character string, and the collapse argument can be used to reduce, or collapse, the input vector.

> paste(1:4, collapse = "=")
[1] "1=2=3=4"

The reverse operation, that of splitting a long string into substrings, is performed using the strsplit function. strsplit takes a character string or a regular expression as the splitting criterion and returns a list, each element of which contains the splits for the corresponding element of the input. If the input string is long, be sure to either use Perl regular expressions or set fixed=TRUE, as the standard regular expression code is painfully slow. To split a string into single characters, use the empty string as the splitting criterion. While the help page recommends the use of either character(0) or NULL, these can be problematic if the second argument to strsplit is of length more than one. Compare the two outputs in the example below.

> strsplit(c("ab", "cde", "XYZ"), c("Y", ""))
[[1]]
[1] "ab"

[[2]]
[1] "c" "d" "e"

[[3]]
[1] "X" "Z"

> strsplit(c("ab", "cde", "XYZ"), c("Y", NULL))
[[1]]
[1] "ab"

[[2]]
[1] "cde"

[[3]]
[1] "X" "Z"

It is sometimes important to output text strings so that they look nice on the screen or in a document. There are a number of functions that are available,
and we have produced yet another one that is designed to interact with the Sweave system. Two builtin functions are strtrim, which trims strings to a fixed width, and strwrap, which introduces line breaks into a text string. To trim strings to fit into a particular width, say for text display, use strtrim. The arguments to strtrim are the character vector and a vector of widths. The widths are interpreted as the desired width in a monospaced font. To wrap text use strwrap, which honors a number of arguments including the width, indentation, and a user-supplied prefix.
In the example below, x is a character vector containing the text of the GNU General Public License.

> strwrap(x, 30, prefix = "myidea: ")[1:10]
 [1] "myidea: GNU GENERAL PUBLIC"
 [2] "myidea: LICENSE Version 2,"
 [3] "myidea: June 1991"
 [4] "myidea: "
 [5] "myidea: Copyright (C) 1989,"
 [6] "myidea: 1991 Free Software"
 [7] "myidea: Foundation, Inc. 51"
 [8] "myidea: Franklin St, Fifth"
 [9] "myidea: Floor, Boston, MA"
[10] "myidea: 02110-1301 USA"

> writeLines(strwrap(x, 30, prefix = "myidea: ")[1:5])
myidea: GNU GENERAL PUBLIC
myidea: LICENSE Version 2,
myidea: June 1991
myidea: 
myidea: Copyright (C) 1989,
When using Sweave to author documents, such as this book, the author will often need to ensure that no output text string is wider than the margins. While one might anticipate strwrap would facilitate such requests, it does not. We have written a separate simple function, strbreak, in the Biobase package, to carry out this task.
Exercise 5.3 Compare the function strbreak with strwrap and strtrim. What are the differences in terms of the output generated?
5.2.1 Modifying text
Text can be transformed; calls to toupper and tolower change all characters in the supplied arguments to upper case and lower case, respectively. Non-alphabetic characters are ignored by these two functions. For general translation from one set of characters to another, use chartr. In the code chunk below we present a small function to translate from the DNA representation to the RNA representation. Basically, DNA is represented as a sequence of the letters A, C, T, G, while for RNA, U is substituted for T. We first transform the input to upper case, and then use chartr to transform all instances of T into U. Notice that the function is vectorized, since we have only made use of functions that are themselves vectorized. We use the randDNA function to generate random DNA strings.
> dna2rna = function(inputStr) {
+     if (!is.character(inputStr))
+         stop("need character input")
+     is = toupper(inputStr)
+     chartr("T", "U", is)
+ }
> x = c(randDNA(15), randDNA(12))
> x
[1] "TCATCCATTCGTGGG" "GTTGGTCCATAG"
> dna2rna(x)
[1] "UCAUCCAUUCGUGGG" "GUUGGUCCAUAG"
Exercise 5.4 Write a function for translating from RNA to DNA. Test it and dna2rna on a vector of inputs. The function chartr can translate from one set of values to another. Hence it is simple to write a function that computes the complementary sequence for either DNA or RNA.
> compSeq = function(x) chartr("ACTG", "TGAC", x)
> compSeq(x)
[1] "AGTAGGTAAGCACCC" "CAACCAGGTATC"
Exercise 5.5
Write a function to test whether a sequence is a DNA sequence or an RNA sequence. Modify the function compSeq above to use the test and perform the appropriate translation, depending on the type of input sequence.

Users can also use sub and gsub to perform character conversions, and these functions are described more fully in Section 5.3. One limitation of chartr is that it does strict exchange of characters; for some problems you will want to either remove characters or replace a substring with a longer or shorter substring, which cannot be done with chartr but can be done with sub or gsub. While complement sequences are of some interest in biological applications, reverse complementing is more common as it reflects the act of transcription. Tools for performing this manipulation on DNA and RNA sequences are provided in the matchprobes and Biostrings packages.

Exercise 5.6
Look at the manual page for strsplit to get an idea of how to write a function that reverses the order of characters in the character strings of a character vector. Use this to write a reverseComplement function.
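One possible sketch for Exercise 5.6, using only functions already described (the name reverseComplement mirrors the one provided by Biostrings, but this toy version handles only DNA character vectors):

> reverseComplement = function(x) {
+     comp = chartr("ACTG", "TGAC", toupper(x))
+     sapply(strsplit(comp, NULL), function(ch)
+         paste(rev(ch), collapse = ""))
+ }
> reverseComplement("TCAT")
[1] "ATGA"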
5.2.2 Sorting and comparing
The basis for ordering of character strings is lexicographic order in the current locale, which can be determined by a call to Sys.getlocale. Comparisons are done one character at a time; if one string is shorter than the other and they match up to the length of the shorter string, the longer string will be sorted larger. The comparison operators <, <=, >, >=, == and != can all be applied to character vectors, and hence other functions such as max, min and order can also be used.
> set.seed(123)
> x = sample(letters[1:10], 5)
> x
[1] "c" "h" "d" "g" "f"
> sort(x)
[1] "c" "d" "f" "g" "h"
> x < "m"
[1] TRUE TRUE TRUE TRUE TRUE
5.2.3 Matching a set of alternatives
Searching or matching a set of input character strings in a reference list or table can be performed using one of match, pmatch or charmatch. Each of these has different capabilities, but all work in a more or less similar manner. The first argument is the set of strings that matches are desired for; the second is the table in which to search. The returned value from these functions is a vector of the same length as the first argument that contains the index of the matching value in the second argument, or the value of the nomatch parameter if no match is found. The function %in% is similar to match but returns a vector of logical values, of the same length as its left operand, indicating which elements were found in the right operand. The first argument (left operand in the case of %in%) is converted to a character vector (using as.character) prior to evaluation.

> exT = c("Intron", "Exon", "Example", "Chromosome")
> match("Exon", exT)
[1] 2
> "Example" %in% exT
[1] TRUE

Both pmatch and charmatch perform partial matching. Partial matching is similar to that used for arguments to functions, where matching is done per character, left to right. For both functions, the elements of the first argument are compared to the values in the second argument. First, exact matches are determined. Then, any remaining arguments are tested to see if there is an unambiguous partial match and, if so, that match is used. By default, the elements of the table argument are used only once; for pmatch, this behavior can be changed by setting the duplicates.ok argument to TRUE. These functions do not accept regular expressions. For matching using regular expressions, see the discussion in Section 5.3. The functions differ in how they deal with non-matches versus ambiguous partial matches, but otherwise are very similar. With pmatch, the empty string, "", matches nothing, not even the empty string, while with charmatch it does match the empty string. charmatch reports ambiguous partial matches as 0 and non-matches as NA, while pmatch uses NA for both. In the example below, the first partial match fails because two different values in exT begin with a capital E. The second call identifies the second element since enough characters were supplied to uniquely identify that value. The third example succeeds since there is only one value in exT that begins with a capital I, and the fourth example demonstrates the use of the very similar function charmatch.
> pmatch("E", exT) [1] NA > pmatch("Exo", exT) [1] 2 > pmatch("I", exT) [1] 1 > charmatch("I", exT) [1] 1
Exercise 5.7 Test the claims made above about matching of the empty string; show that with pmatch there is no match, while with charmatch there is. The behavior is a bit different if multiple elements of the input list match a single element of the table, versus when one element of the input list matches multiple elements in the table. In the first example below, even though more characters matched for the second string, it is not used as the match; thus all partial matches are equal, regardless of the quality of the partial match. Using either duplicates.ok=TRUE or charmatch will find all partial matches in the table.
> pmatch(c("I", "Int"), exT) [1]
1 NA
> pmatch(c("I", "Int"), exT, duplicates.ok = TRUE) [1] 1 1 > charmatch(c("I", "Int"), exT) [1] 1 1 If there are multiple exact matches of an input string to the table, then
pmatch returns the index of the first, while charmatch returns 0, indicating
ambiguity.
> pmatch(c("ab"), c("ab", "ab")) [1] 1 > charmatch(c("ab"), c("ab", "ab")) [1] 0
5.2.4 Formatting text and numbers
Formatting text and numbers can be accomplished in a variety of different ways. Formatting character strings or numbers, including interpolation of values into character strings, can be accomplished using paste and sprintf. Formatting of numbers can be achieved using either format or formatC. Use the xtable package for formatting R objects into LaTeX or HTML tables. The function sprintf is an interface to the C routine sprintf, which supports all of the functionality of that routine, with R-style vectorization. The function formatC formats numbers using C-style format specifications. But it does so on a per-number basis; for common formatting of a vector of numbers, you should use format. format is a generic function with a number of specialized methods for different types of inputs, including matrices, factors and dates.
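A brief sketch of the differences; the values interpolated here are arbitrary.

> sprintf("%s: %6.4f", "p-value", 0.00314159)
[1] "p-value: 0.0031"
> format(c(1.5, 10.25))
[1] " 1.50" "10.25"
> formatC(1.5, format = "f", digits = 2)
[1] "1.50"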
5.2.5 Special characters and escaping
A string literal is a notation for representing sets of characters, or strings, within a computer language. In order to specify the extent of the string, a common solution is the use of delimiters. These are usually quotation marks, and in R either single, ', or double, ", quotes can be used to delimit a string. The delimiters are not part of the string, so the problem of how to have a string with either a single or double quote in it arises. In one sense this is easy to solve, since strings delimited with double quotes can contain a single quote, and vice versa, but that does not entirely preclude the need for a mechanism for indicating that a character is to be treated specially. A fairly widely used solution is the use of an escape character. The meaning of the escape character is to convey the intention that the next character be treated specially. In R, the escape character is the backslash, \. Both strings below are valid inputs in R, and they are two distinct literals representing the same string.

> 'I\'m a string'
[1] "I'm a string"
> "I'm a string"
[1] "I'm a string"

The next problem that arises is how to have the escape character appear in a string. But we have essentially solved that problem too: simply escape the escape character.

> s = "I'm a backslash: \\"
> s
[1] "I'm a backslash: \\"

The printed value shows the escape character. That is because the print function shows the string literal that this variable is equal to, in the sense that it could be copied into your R session and be valid. To see the string itself, you can use cat. Notice that there are no quotes and that only one backslash appears in the output.

> cat(s)
I'm a backslash: \

You can print a string without additional quotes around it using the noquote function, but that is not the same as using cat; you will still see the R representation of the string. Notice in the example that there is a double backslash printed, unlike the output of cat.

> noquote(s)
[1] I'm a backslash: \\

Special characters represent non-printing characters, such as new lines and tabs. These control characters are single characters. You can check this using the function nchar. Octal and hexadecimal codes require an escape as well. More details are given in Section 10.3.1 of R Development Core Team (2007b).

> nchar(s)
[1] 18
> strsplit(s, NULL)[[1]]
 [1] "I"  "'"  "m"  " "  "a"  " "  "b"  "a"  "c"  "k"
[11] "s"  "l"  "a"  "s"  "h"  ":"  " "  "\\"
> nchar("\n")
[1] 1
> charToRaw("\n")
[1] 0a

The backslash was not escaped and so it is interpreted with its special meaning in the third line, and R correctly reports that there is a single character. On the fourth line, we convert the character code into raw bytes and see the ASCII representation for the new line character. All would be relatively easy, except that the backslash character sometimes gets used for different things; on Windows, it turns out to be the file separator. Even that is fine, although when creating pathnames in R, you must remember to escape the backslashes, as is done in the example below. Of course, one should use both file.path and system.file to construct file paths and then the correct separator is used.

> fn = "c:\\My Documents\\foo.bar"
> fn
[1] "c:\\My Documents\\foo.bar"

Now, if there is a desire to change the backslashes to forward slashes, that can be handled by a number of different R functions such as either chartr or gsub.

> old = "\\"
> new = "/"
> chartr(old, new, fn)
[1] "c:/My Documents/foo.bar"

With gsub, the solution is slightly more problematic, since the string created in R will be passed to another program that also requires escaping. In the first call to gsub below, we must double each backslash so that the string, when passed to the Perl regular expression library (PCRE), has the backslashes escaped. In the second line, where we state that fixed=TRUE, only one escape is needed.

> gsub("\\\\", new, fn)
[1] "c:/My Documents/foo.bar"
> gsub("\\", new, fn, fixed = TRUE)
[1] "c:/My Documents/foo.bar"
5.2.6 Parsing and deparsing
Parsing is the act of translating a textual description of a set of commands into a representation that is suitable for computation. When you type a set of commands at the console, or read in function definitions from a file, the parser is invoked to translate the textual description into the internal representation. The inverse operation is called deparsing, which turns the internal representation into a text string. In the code below, we first parse a simple function call, show that the parsed value is indeed executable in R, and then deparse it to get back a text representation. The parsed quantity is an expression.

> v1 = parse(text = "mean(1:10)")
> v1
expression(mean(1:10))
> eval(v1)
[1] 5.5
> deparse(v1)
[1] "expression(mean(1:10))"
> deparse(v1[[1]])
[1] "mean(1:10)"

Other functions that are commonly used for printing or displaying data are cat, print and show. In order to control the width of the output string, either strwrap or strtrim can be used.
5.2.7 Plotting with text
When creating a plot, one often wants to add text to the output device. Our treatment is quite cursory since there are other more comprehensive volumes (Murrell, 2005; Venables and Ripley, 2002) that deal with the topic of plotting data and working with the R graphics system. We would like to draw attention to the notion of tool-tips. An implementation of them is in the imageMap function of the geneplotter package, which creates an HTML page and a MAP file that, when rendered in a browser, has user-supplied tool-tips embedded.
5.2.8 Locale and font encoding
String handling is affected by the locale; indeed, what is a valid character, and hence what is a valid identifier in R, is determined by the locale. Locale settings facilitate the use of R with different alphabets, monetary units and times. The locale can be queried and set using Sys.getlocale and Sys.setlocale.
> Sys.getlocale()
[1] "C"

These capabilities have been greatly expanded in recent versions of R, and many users in countries with multi-byte character sets, e.g., UTF-8 encodings, are able to work with those encodings. We will not cover these issues here. Users who want to explore native language support should examine the functions iconv and gettext. The former translates strings from one encoding into another, while the latter describes the tools R uses to translate error and warning messages. Section 1.9 of R Development Core Team (2007c) should also be consulted.
5.3 Regular expressions
Regular expressions have become widely used in applied computing, spawning a POSIX standard as well as a number of books, including Friedl (2002) and Stubblebine (2007). Their uses include validation of input sequences, such
as email addresses and genomic sequences, as well as a variety of search, and optionally replace, problems such as finding words or sentences with specific beginnings or endings. A regular expression is a pattern that describes a set of character strings. In R, there are three different types of regular expressions that you can use: extended regular expressions, basic regular expressions and Perl-like regular expressions. The first two types of regular expressions are implemented using glibc, while the third is implemented using the Perl-compatible regular expressions (PCRE) library. We will present a view of the capabilities in R that is based on the description in the manual page for regular expressions, which you can access via the command ?regex, and that is itself based on the manual pages for GNU grep and the PCRE manual pages. Among the functions that facilitate the use of regular expressions are grep, sub, gsub, regexpr and gregexpr. While agrep provides some similar capabilities, it does not use regular expressions, but rather depends on metrics between strings. The functions strsplit, apropos and browseEnv also allow the use of regular expressions. In the examples below, we mainly use regexpr and gregexpr since they show both the position and the length of the match, and that is pedagogically useful. We do not have the space to cover all possible uses or examples of regular expressions and rather focus on those tasks that seem to recur often in handling biological strings. Readers should consult the R manual pages, any of the many books (Friedl, 2002; Stubblebine, 2007), or online resources dedicated to regular expressions for more details.
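As a small illustration of the difference between regexpr and gregexpr, consider matching the pattern at in a single string; regexpr reports only the first match, while gregexpr reports all of them (some attributes of the results are omitted here).

> regexpr("at", "catatonic")
[1] 2
attr(,"match.length")
[1] 2
> gregexpr("at", "catatonic")[[1]]
[1] 2 4
attr(,"match.length")
[1] 2 2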
5.3.1 Regular expression basics
All letters and digits, as well as many other single characters, are regular expressions that match themselves. Some characters have special meaning and are referred to as meta-characters. Which characters are meta-characters depends on the type of regular expression. The following are meta-characters for extended regular expressions and for Perl regular expressions: . \ | ( ) [ { ^ $ * + ?. For basic regular expressions, the characters ? { | ( ) and + lose their special meaning and will be matched like any other character. Any meta-character that is preceded by a backslash is said to be quoted and will match the character itself; that is, a quoted meta-character is not interpreted as a meta-character. Notice that in the discussion in Section 5.2.5, we referred to essentially the same idea as escaping. There is syntax that indicates that a regular expression is to be repeated some number of times; this is discussed in more detail in Section 5.3.1.3. Regular expressions are constructed analogously to arithmetic expressions by using various operators to combine smaller expressions. Concatenating regular expressions yields a regular expression that matches any string formed by concatenating strings that match the concatenated subexpressions. Of some specific interest is alternation using the | operator, quantifiers that determine
how many times a construct may be applied (see below), and grouping of regular expressions using parentheses, (). For example, the regular expression (foo|bar) matches either the string foo or the string bar. The precedence order of the operations is that repetition is highest, then concatenation and then alternation. Enclosing specific subexpressions in parentheses overrides these precedence rules.
5.3.1.1 Character classes
A character class is a list of characters given between square brackets, [ and ], and it matches any single character in that list. If a caret, ^, is the first character of the list, then the match is to any character not in the list. For example, [AGCT] matches any one of A, G, C or T, while [^123] matches any character that is not a 1, 2 or 3. A range of characters may be specified by giving the first and last characters, separated by a dash, such as [1-9], which represents all single digits between 1 and 9. Character ranges are interpreted in the collation order of the current locale. The following rules apply to meta-characters that are used in a character class: a literal ] can be included by placing it first; a literal ^ can be included by placing it anywhere but first; a literal - must be placed either first or last. Alternation does not work inside character classes because | has its literal meaning. The period . matches any single character except a new line, and is sometimes referred to as the wild card character. Special shorthand notation for different sets of characters is often available; for example, \d represents any decimal digit, \s is shorthand for any space character, and their upper-case versions represent their negation. The symbol \w is a synonym for [[:alnum:]_], the alphanumeric characters plus the underscore, and \W is its negation.

Exercise 5.8
Write a function that takes a character vector as input and checks to see which elements have only nucleotide characters in them.

The set of POSIX character classes is given in Table 5.1. These POSIX character classes only have their special interpretation within a regular expression character class; for example, [[:alpha:]] is the same as [A-Za-z].
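For instance, in the spirit of Exercise 5.8, a character class can be used to check which strings consist solely of nucleotide characters; this sketch uses grepl, which returns a logical vector.

> seqs = c("ACGGTAA", "ACGTxACGT", "gattaca")
> grepl("^[ACGTacgt]+$", seqs)
[1]  TRUE FALSE  TRUE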
5.3.1.2 Anchors, lookaheads and backreferences
An anchor does not match any specific character, but rather matches a position within the text string, such as a word boundary, a place between characters, or the location where a regular expression matches. Anchors are zero-width matches. The symbols \< and \>, respectively, match the empty string at the beginning and end of a word. In the example below, we use gregexpr to show the endings of words. Notice that the length of the match is always zero.
[:alnum:]  alphanumeric characters: [:alpha:] and [:digit:].
[:alpha:]  alphabetic characters: [:lower:] and [:upper:].
[:blank:]  blank characters: space and tab.
[:cntrl:]  control characters. In ASCII, these characters have octal
           codes 000 through 037, and 177 (DEL).
[:digit:]  the digits: 0 1 2 3 4 5 6 7 8 9.
[:graph:]  graphical characters: [:alnum:] and [:punct:].
[:lower:]  lower-case letters in the current locale.
[:print:]  printable characters: [:alnum:], [:punct:] and space.
[:punct:]  punctuation characters: ! " # $ % & ' ( ) * + , - . / :
           ; < = > ? @ [ \ ] ^ _ ` { | } ~.
[:space:]  space characters: tab, newline, vertical tab, form feed,
           carriage return, and space.
[:upper:]  upper-case letters in the current locale.
[:xdigit:] hexadecimal digits: 0 1 2 3 4 5 6 7 8 9 A B C D E F
           a b c d e f.

Table 5.1: Predefined POSIX character classes.
> gregexpr("\\", "my first anchor") [[1]] [1] 3 9 16 attr(,"match.length") [1] 0 0 0 The caret ^ and the dollar sign $ are meta-characters that, respectively, match at the beginning and end of a line. The symbol \b matches the empty string at the edge of a word (either the start or the end); and \B matches the empty string provided it is not at the edge of a word. In the code below, we show that \b is equivalent to both \> and \ gregexpr("\\b", "once upon a time")
Working with Character Data
163
[[1]] [1] 1 5 6 10 11 12 13 17 attr(,"match.length") [1] 0 0 0 0 0 0 0 0 > gregexpr("\\>", "once upon a time") [[1]] [1] 5 10 12 17 attr(,"match.length") [1] 0 0 0 0 > gregexpr("\\= 2.7.0)" attr(,"class") [1] "DependsList" "list"
A comprehensive and graphical overview of the package dependencies can be obtained using the pkgDepTools package. This package parses information from a CRAN-style package repository and uses that to build a dependency graph based on the Depends field, the Suggests field or both. Then tools in the graph, RBGL and Rgraphviz packages can be used to find paths through the graph, locate subgraphs, reverse the order of edges to find the packages that depend on a specified package and many other tasks.
7.2.1 biocViews
All contributors to the Bioconductor Project are asked to choose a set of terms from the biocViews terms that are currently available. These can be found under the Developer link at the Bioconductor web site. The terms are arranged in a hierarchy, and the chosen terms are included in the DESCRIPTION file. Below we show the relevant entry for the limma package, which is one of the longer ones.

biocViews: Microarray, OneChannel, TwoChannel, DataImport,
    QualityControl, Preprocessing, Statistics,
    DifferentialExpression, MultipleComparisons, TimeCourse

These specifications are then used when constructing the web pages used to find and download packages. An interested user can select topics and view only that subset of packages that has the corresponding biocViews term.
7.2.2 Managing libraries
In many situations it makes sense to maintain one or more libraries in addition to the standard library. One case is when there is a system-level R that all users access, but all users are expected to maintain their own sets of add-on packages. The location of the default library can be obtained from the variable .Library, while the current library search path can be accessed, or modified, via the .libPaths function.

> .Library
[1] "/Users/robert/R/R27/library"
> .libPaths()
[1] "/Users/robert/R/R27/library"

The environment variable R_LIBS is used to initialize the library search path when R is started. The value should be a colon-separated list of directories. If the environment variable R_LIBS_USER is defined, then the directories listed there are added after those defined in R_LIBS. It is possible to define version-specific information in R_LIBS_USER, so that different libraries are used for different versions of R. Site-specific libraries should be specified by the environment variable R_LIBS_SITE, which controls the value of the R variable .Library.site. Explicit details are given in the R Extensions manual.
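For example, a per-user library can be placed at the front of the search path for the current session; the directory name here is arbitrary.

> .libPaths(c("~/Rlibs", .libPaths()))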
7.3 Package authoring
Authoring a package requires that all of the different components of a package, many described above, be created and assembled appropriately. One easy way to start is to use the package.skeleton function, which can be used when creating a new package. This function creates a skeleton directory structure, which can then be filled in. A list of functions or data objects that you would like to include in the package can be specified and appropriate entries in the R and data directories will be made, as will stub documentation files. It will also create a Read-and-delete-me file in the directory that details further instructions. The R Extensions manual provides very detailed and explicit information on the requirements of package building. In this section we will concentrate on providing some general guidance and in describing the strategies we have used to create a large number of packages. Perhaps the easiest way to create a package is to examine an existing package and to modify it to
suit your needs. If you have published a paper describing your package, or have a particular way that you want to have your package cited, then you should use the functionality provided by the citation function. If you provide a CITATION file, then it will be accessed by the citation function.
7.3.1 The DESCRIPTION file
Every R package must contain a DESCRIPTION file. The DESCRIPTION file is an ASCII file with field names followed by a colon and then the field values; continuation lines must begin with a space or a tab. The Package, Version, License, Description, Title, Author, and Maintainer fields are mandatory; all other fields are optional. Widely used optional fields are Depends, which lists packages that the package depends on, and Collate, which specifies the order in which to collate the files in the R subdirectory. Packages listed in the Depends field are attached in the order in which they are listed in that field, and prior to attaching the package itself. This ensures that all dependencies are further down the search path than the package being attached. The Imports field should list all packages that will be imported, either explicitly via an imports directive in the NAMESPACE file, or implicitly via a call to the double-colon operator. The Suggests field should contain all packages that are needed for package checking, but are not needed for the package to be attached. Lazy loading (Ripley, 2004) is a method that allows R to load either data or code, essentially on demand. Whether or not your package uses lazy loading for data is controlled by the LazyData field, while lazy loading of code is controlled by the LazyLoad field; use either yes or true to turn lazy loading on, and either no or false to ensure it is not used. If your package contains S4 classes or makes use of the methods package, then you should set LazyLoad to yes so that the effort of constructing classes and generic functions is expended at install time and not every time the package is loaded or attached. If the LazyLoad field in the DESCRIPTION file is set to true, then when the package is installed all code is parsed and a database consisting of two binary files is created: filebase.rdb, which contains the objects, and filebase.rdx, which contains an index.
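The sketch below shows the general layout of a minimal DESCRIPTION file; the package name, version and other field values are invented for illustration.

Package: toyPkg
Version: 0.1.0
Title: A Toy Example Package
Author: A. N. Author
Maintainer: A. N. Author <author@example.org>
Description: Illustrates the layout of a DESCRIPTION file.
License: GPL-2
Depends: R (>= 2.7.0), methods
Suggests: RUnit
LazyLoad: yes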
7.3.2 R code
All R code for a package goes into the R subdirectory. The organization there is entirely up to the author. We have found it useful to place all class definitions in one file and to place all generic function definitions in one file. This makes them easy to find, and it is then relatively easy to determine the class structure and capabilities of a package.
In some cases, some files will need to be loaded before others; classes need to be defined before their extensions or before any method that dispatches on them is defined. To control the order in which the files are collated, for loading into R, use the Collate directive in the DESCRIPTION file. If there is code that is needed on a specific platform, it should be placed in an appropriately named subdirectory of the R directory. The possible names are unix and windows. There should be corresponding subdirectories of the man directory to hold the help pages for functions defined only for a specific platform, or where the argument list or some other features of the function behave in a platform-specific manner. In addition, there are often operations that must occur at the time that the package is loaded into R, or when it is built. There are different functions that can be used, depending on whether or not the package has a name space. These are described in Section 7.4.
7.3.3 Documentation
The importance of good documentation cannot be overemphasized. It is unfortunate that this is often the part of software engineering that is overlooked, left to last, and seldom updated. Fortunately the R package building and checking tools do comparisons between code and documentation and find many errors and omissions. We divide our discussion of documentation into two parts; one has to do with the documentation of functions and their specific capabilities while the other has to do with documenting how to make use of the set of functions that are provided in an R package. Function documentation should concentrate on describing what the inputs are and what outputs are generated by any function, while vignettes should concentrate on describing the sorts of tasks that the code in the package can perform. Vignettes should describe how the functions work together, possibly with code from other packages, to achieve particular analyses or computational objectives. It is reasonable for a package to have a number of vignettes if the code can be used for different purposes. In R, the standard is to have one help page per function, data set, or important variable, although sometimes similar concepts will be discussed on a single help page. These help pages use a well-defined syntax that is similar to that of LATEX and is often referred to as Rd format, since that is the suffix that is used for R documentation files. The Rd format is exhaustively documented in the R Extensions manual. It is often useful to include at least one small data set with your package so that it can be used for the examples. Once the R code has been written, a template help page is easily constructed using the prompt function. The help page files created by prompt require hand editing to tailor them to the capabilities and descriptions of the specific functions. The function promptPackage can be used to provide a template file of documentation for the package. Other specialized prompt functions include promptClass and promptMethods for documenting S4 classes and methods.
\name{channel}
\alias{channel}
\title{Create a new ExpressionSet instance by selecting a specific
  channel}
\description{
  This generic function extracts a specific element from an object,
  returning an instance of the ExpressionSet class.
}
\usage{
channel(object, name, ...)
}
\arguments{
  \item{object}{An S4 object, typically derived from class
    \code{\link{eSet}}.}
  \item{name}{The name of the channel, a (length one) character
    vector.}
  \item{...}{Additional arguments.}
}
\value{
  An instance of class \code{\link{ExpressionSet}}.
}
\author{Biocore}
\examples{
obj
}

The imports of a package's name space, and the packages that import from a given name space, can be examined with getNamespaceImports and getNamespaceUsers.

> library("Biobase")
> getNamespaceImports("Biobase")
$base
[1] TRUE

$tools
[1] TRUE

> getNamespaceUsers("tools")
[1] "Biobase"
7.4 Initialization
Many package developers want to have a message printed when the package is attached. And that is perfectly reasonable for interactive use, but there can be situations where it is particularly problematic. In order to make it easy for others to suppress your start-up message, you should construct it using packageStartupMessage and then suppressPackageStartupMessages can be used to suppress the message if needed. When the code in a package is assembled into a form suitable for use with R, via the package building system, there are some computations that can happen once, at build time, others that must happen at the time the package is installed, and still others that must happen every time the package is attached or loaded. Construction of internal class hierarchies, for S4 classes and methods, where all elements are either in the recommended packages or in the package being built, can be performed at build time. Finding and linking to system libraries must be done at install time, and in many cases again at load time. If the function .First.lib is defined in a package, it is called with arguments libname and pkgname after the package is loaded and attached. While it is
rare to detach packages, there is a corresponding function, .Last.lib, which if defined will be called when a package is detached. When a NAMESPACE file is present, the package can be either loaded or attached. Since there is a difference between loading and attaching, the single initialization function .First.lib is no longer sufficient. A package with a name space can provide two functions: .onLoad and .onAttach. These functions, if defined, should not be exported. Many packages will not need either function, since import directives take the place of calls to require and useDynLib directives can replace direct calls to library.dynam. When a package with a name space is supplied as an argument to the library function, first loadNamespace is invoked and then attachNamespace. If a package with a name space is loaded due to either an import directive or the double-colon operator, then only loadNamespace is called. loadNamespace checks whether the name space is already loaded and registered with the internal registry of loaded name spaces. If so, the loaded name space is returned, and it is not loaded a second time. Otherwise, loadNamespace is called on all imported name spaces, and definitions of exported variables of these packages are copied to the imports frame for the package being loaded. Then either the package code is loaded and run or the binary image is loaded, if it exists. Finally, the .onLoad function is run, if the package defined one.
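A minimal sketch of the two initialization functions is given below; the body of each is arbitrary, and, as discussed above, start-up messages should be produced with packageStartupMessage so that they can be suppressed.

.onLoad = function(libname, pkgname) {
    ## one-time set-up performed when the name space is loaded
}

.onAttach = function(libname, pkgname) {
    packageStartupMessage("Welcome to ", pkgname)
}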
7.4.1 Event hooks
A set of functions is available that can be used to set actions that should be performed before packages are attached or detached, and similarly before name spaces are loaded or unloaded. These functions are getHook, setHook and packageEvent. Among other things, these hooks allow users to have some level of control over what happens when a package is attached or detached.
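For example, a user could register an action to be run whenever a particular package is attached; in this sketch the package name and the action are arbitrary, and the hook signature follows the pattern shown on the setHook manual page.

> setHook(packageEvent("Biobase", "attach"),
+     function(...) message("Biobase has been attached"))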
Chapter 8 Data Technologies
8.1 Introduction
Handling data efficiently and effectively is an essential task in Bioinformatics. In this chapter we present some of the many tools that are available for dealing with data. The R Data Import/Export Manual (R Development Core Team, 2007a) should also be consulted for other topics and in many cases for more details regarding different technologies and their interfaces in R. The solution to many bioinformatic tasks will require some use of web-oriented technologies. Generating requests, posting and reading forms data, as well as locating and using web services are programming tasks that are likely to arise. We begin our discussion by describing a range of tools that have been implemented in R and that can be used to process and transform data. Next we discuss the different interfaces to databases that are available but focus our discussion on SQLite as it is used extensively within the Bioconductor Project. We then discuss capabilities for interacting with data sources in XML. We conclude this chapter by considering the usage of different bioinformatic data sources via web protocols and in particular discuss some resources available from the NCBI and also demonstrate some basic usage of the biomaRt package.
8.1.1 A brief description of GO
GO (The Gene Ontology Consortium, 2000) is a valuable bioinformatic resource that consists of an ontology, or restricted vocabulary, of terms that are ordered as a directed acyclic graph (DAG). We will use GO as the basis for a number of examples in this chapter and hence give a brief treatment. GO is described in more detail in Gentleman et al. (2004) and Hahne et al. (2008). There are three separate components: molecular function (MF), biological process (BP) and cellular component (CC). GO uses its own set of identifiers and for each term, detailed information is available. A separate project (Camon et al., 2004) maps genes, really proteins, to GO terms. There are a number of evidence codes that are used to explain the reason that a gene was mapped to a particular term.
8.2 Using R for data manipulation
We have seen many of the different capabilities of R in Chapter 2. Here, we take a slightly different approach and concern ourselves mainly with data processing, that is, with those tasks that take as input one or more data sets and process them to provide a new, processed output data set that can be used for other purposes. There are many different solutions to most of these tasks, and our goal is to provide some broad coverage of the different capabilities. We will make use of data from one of the metadata packages to demonstrate some of the different computations. Using these data, one is typically interested in counting things, such as “How many probes on a microarray correspond to genes that lie on chromosome 7?”, or in dividing the probes according to chromosomal location, or selecting one probe to represent each distinct Entrez Gene ID.
8.2.1 Aggregation and creating tables
Aggregating data and computing simple summaries are common tasks, and there are specialized, efficient functions for performing many of them. We first load the hgu95av2 metadata package and then extract the information about which chromosome each probe is located on. This is a bit cumbersome and would not be how you should approach this problem in practice, since there are other tools (see Section 8.2.2) that are more appropriate.

> library("hgu95av2")
> chrVec = unlist(as.list(hgu95av2CHR))
> table(chrVec)
chrVec
   1   10   11   12   13   14   15   16   17   18   19 
1234  453  692  698  225  408  349  500  724  171  745 
   2   20   21   22    3    4    5    6    7    8    9 
 807  309  147  350  661  448  537  716  573  419  426 
   X    Y 
 499   41 
> class(chrVec)
[1] "character"

Exercise 8.1
Which chromosome has the most probe sets and which has the fewest?
Next, we might want to know the identities of those genes on the Y chromosome. We can solve this problem in many different ways, but since we might ultimately want to plot values in chromosome coordinates, we will make use of the function split. In the code below, we split the names of chrVec, the probe IDs, according to the values of chrVec, the chromosomes. The return value is a list of length 25, where each element has all the Affymetrix probe IDs for the probes that correspond to genes located on the corresponding chromosome. We then use sapply to check our results, and can compare the answer with that found above using table.
> byChr = split(names(chrVec), chrVec)
> sapply(byChr, length)
   1   10   11   12   13   14   15   16   17   18   19 
1234  453  692  698  225  408  349  500  724  171  745 
   2   20   21   22    3    4    5    6    7    8    9 
 807  309  147  350  661  448  537  716  573  419  426 
   X    Y 
 499   41 
Then we can list all of the probe sets that are found on any given chromosome simply by subsetting byChr appropriately.
> byChr[["Y"]] [1] [4] [7] [10] [13] [16] [19] [22] [25] [28] [31] [34] [37] [40]
"629_at2" "31415_at" "31911_at" "1185_at2" "36553_at2" "35885_at" "41108_at2" "36321_at" "31534_at" "38355_at" "36554_at2" "31411_at" "32428_at" "34172_s_at2"
"39168_at2" "40342_at" "31601_s_at" "32991_f_at" "33593_at" "35929_s_at" "41138_at2" "38182_at" "40030_at" "32864_at" "35073_at2" "34753_at2" "37583_at" "31413_at"
"34215_at2" "32930_f_at" "35930_at" "40436_g_at2" "31412_at" "32677_at" "31414_at" "40097_at" "41214_at" "40435_at2" "34477_at" "35447_s_at2" "33665_s_at2"
apply    matrices, arrays, data.frames
lapply   lists, vectors
sapply   lists, vectors
tapply   atomic objects, typically vectors
by       similar to tapply
eapply   environments
mapply   multiple values
rapply   recursive version of lapply
esApply  ExpressionSets, defined in Biobase

Table 8.1: Different forms of the apply functions.
8.2.2 Apply functions
There are a number of functions, listed in Table 8.1, that can be used to apply a function, iteratively, to a set of inputs. The apply function operates on arrays, matrices or data.frames where one would like to apply a function to each row, or each column; and in the case of arrays, to any other dimension. The notion is easily extended to lists, where lapply and sapply are appropriate, or to ragged arrays, tapply, or to environments, eapply. If the problem requires that a function be applied to two or more inputs, then mapply may be appropriate. When possible, the return value of the apply functions is simplified. These functions are not particularly efficient, and for large matrices more efficient alternatives are discussed in Section 8.2.3. One of the main reasons to prefer the use of an apply-like function over explicitly using a for loop is that it more clearly and succinctly indicates what is happening and hence makes the code somewhat easier to read and understand. The return value from apply will be simplified if possible, in that if all values are vectors of the same length, then a matrix will be returned. The matrix will have one column for each computation and one row for each value returned. Thus, if a matrix has two columns and five rows and a function that returns three values is applied to the rows, the return value will be a matrix with three rows and five columns. The function tapply takes as input an atomic vector and a list of one or more vectors (usually factors) of the same length as the first argument. The first argument is split according to the unique values of the supplied factors, and the specified summary statistic is computed for those values in each group. For tapply, users can specify whether or not to try and simplify the return value using the simplify argument. For lists, lapply will apply a function to each element, and it does not attempt to simplify the return value, which will always be a list of the same length as the list that was operated on. One cannot use an S4 class that is coercible to a list in a call to lapply; that is because S4 methods set on as.list will not be detected when the code is evaluated, so that the user-
defined conversion will not be used. To obtain a simplified result, for example if all return values are the same length, then use sapply. If you want to operate on a subset of the values in the list, leaving others unchanged, then rapply can be used. With rapply you can specify the class of objects to be operated on; the other elements can either be left unchanged, or replaced by a user-supplied default value. To apply a function to every element of an environment, use eapply. The order of the output is not guaranteed, as there is no ordering of values in the environment. By default, objects in the environment whose names begin with a dot are not included. In the Biobase package, a function named esApply is provided to simplify the process of applying a function to either the rows or columns of the expression data contained within the ExpressionSet. It also simplifies the use of phenotypic data, from the pData slot, in the function being applied.
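The difference in the return values of lapply and sapply is easy to see in a small example; the data here are arbitrary.

> lst = list(a = 1:4, b = c(2, 6))
> lapply(lst, mean)
$a
[1] 2.5

$b
[1] 4

> sapply(lst, mean)
  a   b 
2.5 4.0 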
8.2.2.1 An eapply example
The hgu95av2MAP contains the mappings between Affymetrix identifiers and chromosome band locations. For example, in the code below we find the chromosome band that the gene for probe 1001_at (TIE1) maps to.
> library("hgu95av2") > hgu95av2MAP$"1001_at" [1] "1p34-p33"
We can extract all of the map locations for a particular chromosome or part of a chromosome by using regular expressions (Section 5.3) and the apply family of functions. Suppose we want to find all genes that map to the p arm of chromosome 17. Then we know that their map positions will all start with the characters 17p. This is a simple regular expression, ^17p, where the caret, ^, means that the match must occur at the start of the string. We do this in two steps. First we use eapply and grep and ask for grep to return the value that matched.
> myPos = eapply(hgu95av2MAP, function(x) grep("^17p",
+     x, value = TRUE))
> myPos = unlist(myPos)
> length(myPos)
[1] 190
Exercise 8.2 Use the function ppc from Exercise 2.16 to create a new function that can find and return the probes that map to any chromosome (just prepend the caret to the chromosome number) or the chromosome number with a p or a q after it.
8.2.3 Efficient apply-like functions
While the apply family of functions provides a very useful abstraction and a succinct notation, its generality precludes a truly efficient implementation. For this reason there are other, more efficient functions, for tasks that are often performed, provided in R and in some add-on packages. These include rowSums, rowMeans, colSums and colMeans, which compute, per row or column, sums and means for numeric arrays. If the input array is a data.frame, then these functions first attempt to coerce it to a matrix and, if successful, the operations are carried out directly on the matrix. From Biobase, a number of other functions for finding quantiles are implemented based on the function rowQ, which finds specified sample quantiles on a per-row basis for numeric arrays. Based on this function, other often-wanted summaries, rowMin, rowMax, etc., have been implemented. For statistical operations, the functions rowttests, rowFtests and rowpAUCs, all from the genefilter package, provide very efficient tools for computing t-tests, F-tests and various quantities related to receiver operator curves (ROCs) in a row-wise fashion. There are methods for matrices and ExpressionSets.
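As a brief sketch, rowMeans computes the same quantity as the corresponding apply call, typically much faster; the matrix here is arbitrary.

> m = matrix(rnorm(1e+06), ncol = 100)
> all.equal(apply(m, 1, mean), rowMeans(m))
[1] TRUE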
8.2.4 Combining and reshaping rectangular data
Data in the form of a rectangular array, either a matrix or a data.frame, can be combined into new rectangular arrays in many different ways. The most commonly used functions are rbind and cbind. The first joins arrays row-wise; any number of input arrays can be specified, but they must all have the same number of columns. The row names are retained, and the column names are obtained from the first argument (left to right) that has column names. For cbind, the arrays must all have the same number of rows, and otherwise the operation is symmetric to that of rbind; an example of their use is given in the next code chunk.

> x = matrix(1:6, nc = 2, dimnames = list(letters[1:3],
+     LETTERS[1:2]))
> y = matrix(21:26, nc = 2, dimnames = list(letters[6:8],
+     LETTERS[3:4]))
> cbind(x, y)
  A B  C  D
a 1 4 21 24
b 2 5 22 25
c 3 6 23 26
> rbind(x, y)
   A  B
a  1  4
b  2  5
c  3  6
f 21 24
g 22 25
h 23 26

Data matrices with row, or column, names can be merged to form a new combined data set. The operation is similar to the join capability of most databases and is accomplished using the function merge. The function supports merging data frames or matrices on the basis of either shared row or column names, as well as on other values. In some settings it is of interest to reshape an input data matrix. This commonly arises in the analysis of repeated measures and other statistical problems, but a similar issue arises in bioinformatics when dealing with some of the different metadata resources. The function reshape helps to transform a data set from the representation where different measurements on the same individual are represented by different rows, to one where the different measurements are represented by columns. One other useful function is stack, which can be used to concatenate vectors and simultaneously compute a factor that indicates the original input vector each value corresponds to. stack expects either a named list or a data frame as its first argument and, further, that each entry of that list or data frame is itself a vector. It returns a data frame with two columns and as many rows as there are values in the input, where the columns are named values and ind. The function unstack expects an input data frame with two columns and attempts to undo the stacking. If all vectors are the same length, then the return value from unstack is a data frame.
> s1 = list(a = 1:3, b = 11:12, c = letters[1:6])
> ss = stack(s1)
> ss
   values ind
1       1   a
2       2   a
3       3   a
4      11   b
5      12   b
6       a   c
7       b   c
8       c   c
9       d   c
10      e   c
11      f   c

> unsplit(s1, ss[, 2])
 [1] "1"  "2"  "3"  "11" "12" "a"  "b"  "c"  "d"  "e" 
[11] "f" 

8.3 Example
We now provide a somewhat detailed and extended example, using many of the tools and functions described to map data to chromosome bands. This information is available in all Bioconductor metadata packages; the suffix for the appropriate environment is MAP. In our example we will make use of the HG-U95Av2 GeneChip so the appropriate data environment is hgu95av2MAP. In the next code chunk we extract the MAP locations and then carry out a few simple quality assessment procedures; we look to see if many probe sets are mapped to multiple locations and also to see how many probe sets have no MAP location.
> mapP = as.list(hgu95av2MAP)
> mLens = unlist(eapply(hgu95av2MAP, length))

Then we can use table to summarize the results.
> mlt = table(mLens)
> mlt
mLens
    1     2     3 
12438   185     2 
And we see that there are some oddities, in that some probe sets are annotated at several positions. There are several reasons for this. One is that there is a homologous region shared between chromosomes X and Y, and another is that not all gene locations are known precisely. The two probe sets that report three locations correspond to a single gene, ACTN1. In the code below we see that the three reported locations are all relatively near each other and most likely reflect difficulties in mapping.

> len3 = mLens[mLens == 3]
> hgu95av2SYMBOL[[names(len3)[1]]]
[1] "ACTN1"
> hgu95av2MAP[[names(len3)[1]]]
[1] "14q24.1-q24.2" "14q24"         "14q22-q24"    
Exercise 8.3
How many genes are in the homologous region shared by chromosomes X and Y?

In the next example we show that there are 532 probe sets that do not have a map position.

> missingMap = unlist(eapply(hgu95av2MAP,
+     function(x) any(is.na(x))))
> table(missingMap)
missingMap
FALSE  TRUE 
12093   532 

Next, we can see what the distribution of probe sets per map position is. For those probe sets that have multiple map positions, we will simply select the first one listed.

> mapPs = sapply(mapP, function(x) x[1])
> mapPs = mapPs[!is.na(mapPs)]
> mapByPos = split(names(mapPs), mapPs)
> table(sapply(mapByPos, length))

 1  2  3  4  5  6  7  8 11 13 
36 26  9  5  1  1  1  1  1  1 
Exercise 8.4
Which chromosome band has the most probe sets contained in it? How many chromosome bands are from chromosome 2? How many are on the p-arm and how many on the q-arm?
8.4 Database technologies
Relational databases are commonplace and provide a very useful mechanism for storing data in structured tables. A standard mechanism for querying relational databases is the Structured Query Language (SQL). This section is not a tutorial on either of these topics; interested readers should consult some of the very many books and other resources that describe both relational databases and SQL. Rather, we concentrate on describing the R interfaces to relational databases. R software for relational databases is covered in Section 4 of the R Data Import/Export manual (R Development Core Team, 2007a). There is a database special interest group (SIG) with its own mailing list that can be used to discuss topics of interest to those using databases. Databases for which there are existing R packages include SQLite (RSQLite), Postgres (RdbiPgSQL), MySQL (RMySQL) and Oracle (ROracle). In addition, there is an interface to the Open Database Connectivity (ODBC) standard via the RODBC package. Most of these rely on there being an instance of the particular database already installed, RSQLite being an exception. Three of these packages, RSQLite, RMySQL and ROracle, use the DBI interface and hence depend on the DBI package. The Postgres driver has not been updated to the DBI interface. The DBI package provides a common interface to all supported databases, thereby allowing users to use their favorite database and have the R code remain relatively unchanged. Ideally, one should be able to write R code that performs well regardless of which database engine is actually used. There are two basic reasons to want to interact with a database from within R. One is to allow access to large, existing data collections that are stored in a database. For these sorts of interactions, the database typically already exists and a user will need to have an account and access permission to connect to it; the user then typically makes requests for specific subsets of the data. A second common usage is more interactive, where data are transferred back and forth between R and the database. Recently the Bioconductor Project has begun moving annotation data packages into a relational database format, relying primarily on SQLite. In addition, as the size of microarray and other high throughput data sets increases, it will become problematic to retain all data in memory, and database interactions will likely be used to help manage very large data sets. It will seldom be sensible to create large database tables from R. Most databases have specialized import routines that
will substantially reduce the time required to install and create large tables.
8.4.1 DBI
DBI provides a number of classes and generic functions (see Chapter 3 for more details on object-oriented programming) for database interactions. Different packages then support specific implementations of methods for particular databases. Some functions are required, and every package must implement them in order to be DBI compliant; other functions are optional, and packages can choose whether to implement the underlying functionality. In the next code chunk we demonstrate the DBI equivalent of Hello World, using SQLite. In this example, we attach the SQLite package, then initialize a DBI driver and establish a connection. For SQLite, a database can be a file on the local system; in the code below, if the database named test does not exist, it will be created.
> library("RSQLite")
> m = dbDriver("SQLite")
> con = dbConnect(m, dbname = "test")
> data(USArrests)
> dbWriteTable(con, "USArrests", USArrests, overwrite = TRUE)
[1] TRUE
> dbListTables(con)
[1] "USArrests"

One of the important features of the DBI interface is the notion of a result set. The function dbSendQuery submits and executes the SQL statement, but does not extract any records; rather, these are retained on the database side until they are requested via a call to fetch. The result set remains open until it is explicitly cleared via a call to dbClearResult. If you forget to save the result set, it can be obtained by calling dbListResults on the connection object. Result sets allow users to perform queries that may result in very large data sets and still control their transfer to R. As seen in the code below, setting the parameter n to a negative value in a call to fetch retrieves all remaining records.
> rs = dbSendQuery(con, "select * from USArrests")
> d1 = fetch(rs, n = 5)
> d1
   row_names Murder Assault UrbanPop Rape
1    Alabama   13.2     236       58 21.2
2     Alaska   10.0     263       48 44.5
3    Arizona    8.1     294       80 31.0
4   Arkansas    8.8     190       50 19.5
5 California    9.0     276       91 40.6
> dbHasCompleted(rs)
[1] FALSE
> dbListResults(con)
[[1]]
> d2 = fetch(rs, n = -1)
> dbHasCompleted(rs)
[1] TRUE
> dbClearResult(rs)
[1] TRUE

One can circumvent dealing with result sets by using dbGetQuery instead of dbSendQuery. dbGetQuery performs the query and returns all of the selected data in a data frame, dealing with fetching and result sets internally. A call to dbListTables will show all tables in the database, regardless of the type of database; the underlying SQL needed to do this differs between databases, so DBI helps to simplify some aspects of database interaction. We show the SQLite variant below.
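As a minimal sketch of dbGetQuery, using the connection created above, a single call runs the query, fetches everything and clears the result set internally:

> dbGetQuery(con, "select count(*) from USArrests")
  count(*)
1       50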
> dbListTables(con)
[1] "USArrests"
> dbListFields(con, "USArrests")
[1] "row_names" "Murder"    "Assault"   "UrbanPop" 
[5] "Rape"     
Exercise 8.5
Is there a DBI generic function that will retrieve an entire table in a single command? If so, what is its name, and what is its return value?

A SQLite-specific way of listing all tables is given in the example below.

> query = paste("SELECT name FROM sqlite_master WHERE",
+     "type='table' ORDER BY name;")
> rs = dbSendQuery(con, query)
> fetch(rs, n = -1)
       name
1 USArrests

Exercise 8.6
Select all entries from the USArrests database where the murder rate is larger than 10.
8.4.2 SQLite
SQLite is a very lightweight relational database, with a number of advanced features such as the ability to store Binary Large Objects (BLOBs) and to create prepared statements. SQLite stores each database as a file whose format is platform independent, so these files can be moved to other computers and used without modification; SQLite is therefore well suited as a method for storing large data files. In the code below we load the SQLite package, initialize a driver and then open a database that is supplied with the RBioinf package that accompanies this volume. The database contains a number of tables that map between identifiers on the Affymetrix HG-U95Av2 GeneChip and different quantities of interest, such as GO categories or PubMed IDs (which map to published papers that discuss the corresponding genes). We then list the tables in that database.

> library("RSQLite")
> m = dbDriver("SQLite")
> testDB = system.file("extdata/hgu95av2-sqlite.db",
+     package = "RBioinf")
> con = dbConnect(m, dbname = testDB)
> tabs = dbListTables(con)
> tabs
[1] "acc" "go_evi" [4] "go_ont_name" "go_probe" > dbListFields(con, tabs[2])
"go_ont" "pubmed"
242
R Programming for Bioinformatics
Table Name acc
Description map between Affymetrix and Genbank go_evi descriptions of evidence codes go_ont map from GO ID to Ontology go_ont_name long names of GO Ontologies go_probe map from Affymetrix ID to GO, with evidence codes pubmed map from Affymetrix IDs to PubMed IDs
Field Names affy_id, acc_num evi, description go_id, ont ont, ont_name affy_id, go_id, evi affy_id, pm_id
Table 8.2: Description of the tables in the test database supplied with the RBioinf package.
[1] "evi"
"description"
The database has six tables and they are described in Table 8.2. The different tables map between Affymetrix identifiers, GO identifiers and labels, as well one table that maps to PubMed identifiers. Exercise 8.7 For each table in the hgu95av2.db database, determine the type of each field. Exercise 8.8 How many GO evidence codes are there, and what are they? 8.4.2.1
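Returning briefly to the prepared statements mentioned at the start of this section, the following is a minimal sketch using the RSQLite interface of this vintage; the dbGetPreparedQuery function and its bind.data argument are recalled from memory and may differ in other versions. A throwaway in-memory database is used so that the read-only test database is not modified.

> con2 = dbConnect(m, dbname = ":memory:")
> dbGetQuery(con2, "CREATE TABLE t1 (score INTEGER)")
> df = data.frame(score = c(10, 20))
> dbGetPreparedQuery(con2, "INSERT INTO t1 VALUES (:score)",
+     bind.data = df)
> dbGetQuery(con2, "SELECT * FROM t1")
  score
1    10
2    20
> dbDisconnect(con2)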
8.4.2.1 Inner joins
The go_ont table maps GO IDs to the appropriate GO ontology, one of BP, MF or CC. We can extract data from the go_ont_name table to get the more descriptive listing of the ontology for each GO identifier. This requires an inner join, which is demonstrated in the code below. We first use paste to construct the query, which will then be used in the call to dbSendQuery. The inner join is established in the WHERE clause, where we require the two references to be identical. We only fetch and show the first three results.

> query = paste("SELECT go_ont.go_id, go_ont.ont,",
+     "go_ont_name.ont_name FROM go_ont,",
+     "go_ont_name WHERE (go_ont.ont = go_ont_name.ont)")
> rs = dbSendQuery(con, query)
> f3 = fetch(rs, n = 3)
> f3
       go_id ont           ont_name
1 GO:0004497  MF Molecular Function
2 GO:0005489  MF Molecular Function
3 GO:0005506  MF Molecular Function

> dbClearResult(rs)
[1] TRUE

Exercise 8.9
Use an inner join to relate GenBank IDs to GO ontology codes.

8.4.2.2 Self joins
The following compound statement selects all Affymetrix probes annotated at GO ID GO:0005737 with evidence codes IDA and ISS. This uses a self join and demonstrates a common abbreviation syntax for table names.

> query = paste("SELECT g1.*, g2.evi FROM go_probe g1,",
+     "go_probe g2 WHERE (g1.go_id = 'GO:0005737'",
+     "AND g2.go_id = 'GO:0005737')",
+     "AND (g1.affy_id = g2.affy_id)",
+     "AND (g1.evi = 'IDA' AND g2.evi = 'ISS')")
> rs = dbSendQuery(con, query)
> fetch(rs)
     affy_id      go_id evi evi
1   41306_at GO:0005737 IDA ISS
2    1069_at GO:0005737 IDA ISS
3   38704_at GO:0005737 IDA ISS
4 39501_f_at GO:0005737 IDA ISS

8.4.3 Using AnnotationDbi
As of release 2.2 of Bioconductor, most annotation packages have been produced using SQLite and infrastructure in the AnnotationDbi package. This infrastructure provides increased flexibility and makes linking various data sources simpler. The implementation provides access to data objects in the usual way, but it also provides direct access to the tables and a number of more powerful functions. Before presenting examples using this package, we first digress slightly to provide details on some of the concepts that underlie the AnnotationDbi package. First, a bimap consists of two sets of objects, the left objects and
the right objects, where the names are unique within a set. There can be any number of links between the left objects and the right objects, and these can be thought of as edges. The edges can be tagged or named. Both the left objects and the right objects can have named attributes associated with them. In other words, a bimap is a bipartite graph, and it represents the relationships between two sets of identifiers. Bimaps can handle one-to-one relationships, as well as one-to-many and many-to-many relationships. Bimaps are represented by the Bimap class. An example of a bimap would have probe IDs as the left keys, GO IDs as the right keys, and edges, tagged by evidence codes, between the probe IDs and the GO IDs.

We will demonstrate the use of the AnnotationDbi interface using the hgu95av2.db package. Every annotation package provides a function that can be used to access the SQLite connection directly. The name of that function is the concatenation of the basename of the package, hgu95av2 in this case, and the string dbconn, separated by an underscore. Name mangling ensures that multiple databases can be attached at the same time. This function can be used to reopen the connection to the database if needed. We first load the database and establish a connection.

> library("hgu95av2.db")
> mycon = hgu95av2_dbconn()

You can then query the tables in the database directly. In a slight abuse of the idea, you can conceptualize tables in the database as arrays. However, they are really bimaps, and we can use specialized tools to extract information from the bimap. The toTable function displays all of the information in a map: both the left and right values, along with any other attributes that might be attached to those values. The left and right keys can be extracted using Lkeys and Rkeys, respectively.

> colnames(hgu95av2GO)
[1] "probe_id" "go_id"    "Evidence" "Ontology"
> toTable(hgu95av2GO)[1:10, ]
    probe_id      go_id Evidence Ontology
1    1000_at GO:0006468      IDA       BP
2    1000_at GO:0006468      IEA       BP
3    1000_at GO:0007049      IEA       BP
4    1001_at GO:0006468      IEA       BP
5    1001_at GO:0007165      TAS       BP
6    1001_at GO:0007498      TAS       BP
7  1003_s_at GO:0006928      TAS       BP
8  1003_s_at GO:0007165      IEA       BP
9  1003_s_at GO:0007186      TAS       BP
10 1003_s_at GO:0042113      IEA       BP
> Lkeys(hgu95av2GO)[1:10]
 [1] "1000_at"   "1001_at"   "1002_f_at" "1003_s_at"
 [5] "1004_at"   "1005_at"   "1006_at"   "1007_s_at"
 [9] "1008_f_at" "1009_at"  
> Rkeys(hgu95av2GO)[1:10]
 [1] "GO:0008152" "GO:0006953" "GO:0006954" "GO:0019216"
 [5] "GO:0006928" "GO:0007623" "GO:0006412" "GO:0006419"
 [9] "GO:0008033" "GO:0043039"

The links function returns a data frame with one row for each link, or edge, in the bimap that it is applied to. It does not report attribute information.

> links(hgu95av2GO)[1:10, ]
    probe_id      go_id Evidence
1    1000_at GO:0006468      IDA
2    1000_at GO:0006468      IEA
3    1000_at GO:0007049      IEA
4    1001_at GO:0006468      IEA
5    1001_at GO:0007165      TAS
6    1001_at GO:0007498      TAS
7  1003_s_at GO:0006928      TAS
8  1003_s_at GO:0007165      IEA
9  1003_s_at GO:0007186      TAS
10 1003_s_at GO:0042113      IEA
A common programming task is to invert a mapping that typically goes from probes, or genes, to other quantities, such as their symbols. The reversed map then goes from symbol to probe or gene ID. With the old-style annotation packages, this was most easily accomplished using the reverseSplit function from Biobase. With the new database annotation packages the operation is much simpler. The revmap function can be used to reverse most maps. It takes as input an instance of the Bimap class and returns an object that can be queried using keys that correspond to values in the original mapping. In the example below, we reverse the map from probes to symbols and then use the returned object to find all probes associated with the symbol ABL1.
> is(hgu95av2SYMBOL, "Bimap")
[1] TRUE
> rmMAP = revmap(hgu95av2SYMBOL)
> rmMAP$ABL1
[1] "1635_at"   "1636_g_at" "1656_s_at" "2040_s_at"
[5] "2041_i_at" "39730_at" 
The revmap function can also be used on lists, in which case it relies on the reverseSplit function. A simple example is shown below.
> myl = list(a = "w", b = "x", c = "y")
> revmap(myl)
$w
[1] "a"

$x
[1] "b"

$y
[1] "c"
8.4.3.1 Mapping symbols
In this section we address a more advanced topic. The material is based on, and similar to, the presentation in Hahne et al. (2008), but the problem is important and common. We want to map from gene symbols to some other form of identifier, perhaps because the symbols were obtained from a paper or other report, and we would like to see whether we can obtain similar findings using other data sources. Since most other sources do not use symbols for mapping, we must first map the available symbols to some identifier, such as an Entrez Gene ID. The code consists of four functions: three helpers and the main function findEGs, which maps from symbols to Entrez Gene IDs. We need to know about the table structure to write the helper functions, as they are basically R wrappers around SQL statements. The hgu95av2_dbschema function can be used to obtain all the information about the schema.
> queryAlias = function(x) {
+     it = paste("('", paste(x, collapse = "','"), "'",
+         sep = "")
+     paste("select _id, alias_symbol from alias",
+         "where alias_symbol in", it, ");")
+ }
> queryGeneinfo = function(x) {
+     it = paste("('", paste(x, collapse = "','"), "'",
+         sep = "")
+     paste("select _id, symbol from gene_info where",
+         "symbol in", it, ");")
+ }
> queryGenes = function(x) {
+     it = paste("('", paste(x, collapse = "','"), "'",
+         sep = "")
+     paste("select * from genes where _id in", it, ");")
+ }
> findEGs = function(dbcon, symbols) {
+     rs = dbSendQuery(dbcon, queryGeneinfo(symbols))
+     a1 = fetch(rs, n = -1)
+     stillLeft = setdiff(symbols, a1[, 2])
+     if (length(stillLeft) > 0) {
+         rs = dbSendQuery(dbcon, queryAlias(stillLeft))
+         a2 = fetch(rs, n = -1)
+         names(a2) = names(a1)
+         a1 = rbind(a1, a2)
+     }
+     rs = dbSendQuery(dbcon, queryGenes(a1[, 1]))
+     ans = merge(a1, fetch(rs, n = -1))
+     dbClearResult(rs)
+     ans
+ }
The logic is to first check whether each symbol is current and, if not, to then search the alias table for less current symbols. Each of the first two queries within the findEGs function returns the symbol (the second column of a1 and a2) and an identifier that is internal to the SQLite database (the first column). The last query uses those internal IDs to extract the corresponding Entrez Gene IDs.
> findEGs(mycon, c("ALL1", "AF4", "BCR", "ABL"))
   _id symbol gene_id
1   20    ABL      25
2  540    BCR     613
3 3758    AF4    4299
4 3921    ABL    4547

The three columns in the return value are the internal ID, the symbol and the Entrez Gene ID (gene_id).

8.4.3.2 Combining data from different annotation packages
By using a real database to store the annotation data, we can take advantage of its capabilities to combine data from different annotation packages, or indeed from any SQLite database. Being able to select items from multiple tables does rely on there being a common value that can be used to identify those entries that are the same. It is important to realize that the internal IDs used in the AnnotationDbi packages cannot be used to map between packages. In the example here, we join tables from the hgu95av2.db package and the GO.db package, using GO identifiers as the link across the two data packages. We attach the GO database to the HG-U95Av2 database, but could just as well have done it the other way around. In this section we use the term attach to mean attaching via the SQL ATTACH command, not the R function, or concept, of attaching. We rely on some knowledge of where the GO database is located, and its name, together with the system.file function, to construct the path to that database. The hgu95av2.db package is already attached, and we now use the connection to it, mycon, to pass the SQL query that will attach the two databases.

> GOdbloc = system.file("extdata", "GO.sqlite",
+     package = "GO.db")
> attachSql = paste("ATTACH '", GOdbloc, "' as go;",
+     sep = "")
> dbGetQuery(mycon, attachSql)
NULL

Next, we are going to select some data, based on the GO ID, from two tables, one in the HG-U95Av2 database and one in the GO database. We limit the query to ten values. The WHERE clause on the last line of the SQL query is the part of the query that requires the GO identifiers to be the same. The other parts of the query, the first five lines, set up which variables to extract and what to name them.
> sql = paste("SELECT DISTINCT a.go_id AS 'hgu95av2.go_id',",
+     "a._id AS 'hgu95av2._id',",
+     "g.go_id AS 'GO.go_id', g._id AS 'GO._id',",
+     "g.ontology",
+     "FROM go_bp_all AS a, go.go_term AS g",
+     "WHERE a.go_id = g.go_id LIMIT 10;")
> dataOut = dbGetQuery(mycon, sql)
> dataOut
   hgu95av2.go_id hgu95av2._id   GO.go_id GO._id ontology
1      GO:0000002          255 GO:0000002     13       BP
2      GO:0000002         1633 GO:0000002     13       BP
3      GO:0000002         3804 GO:0000002     13       BP
4      GO:0000002         4680 GO:0000002     13       BP
5      GO:0000003           41 GO:0000003     14       BP
6      GO:0000003           43 GO:0000003     14       BP
7      GO:0000003           81 GO:0000003     14       BP
8      GO:0000003           83 GO:0000003     14       BP
9      GO:0000003          104 GO:0000003     14       BP
10     GO:0000003          105 GO:0000003     14       BP
The query combines the go_bp_all table from the HG-U95Av2 database with the go_term table from the GO database, based on the go_id. For illustration purposes, the internal ID (_id) and the go_id from both tables are included in the output. This makes it clear that the go_ids can be used to join these tables but the internal IDs cannot: the internal IDs are suitable for joins within a single database, but not across databases.

8.4.3.3 Metadata about metadata
In order to appropriately combine tables from various databases, users are encouraged to look at the standard schema definitions. The latest schemas are the 1.0 schemas, and these can be found in the inst/DBschemas/schemas_1.0
directory of the AnnotationDbi package. These schemas can also be obtained interactively using the corresponding dbschema function, as shown below. Because all output is simply written to the screen with cat, we use capture.output to collect it and print only the first few tables.

> schema = capture.output(hgu95av2_dbschema())
> head(schema, 18)
 [1] "--"
 [2] "-- HUMANCHIP_DB schema"
 [3] "-- ==================="
 [4] "--"
 [5] ""
 [6] "-- The \"genes\" table is the central table."
 [7] "CREATE TABLE genes ("
 [8] "  _id INTEGER PRIMARY KEY,"
 [9] "  gene_id VARCHAR(10) NOT NULL UNIQUE -- Entrez Gene ID"
[10] ");"
[11] ""
[12] "-- Data linked to the \"genes\" table."
[13] "CREATE TABLE probes ("
[14] "  probe_id VARCHAR(80) PRIMARY KEY, -- manufacturer ID"
[15] "  accession VARCHAR(20) NULL, -- GenBank accession number"
[16] "  _id INTEGER NULL, -- REFERENCES genes"
[17] "  FOREIGN KEY (_id) REFERENCES genes (_id)"
[18] ");"
The above example prints the schema used for the HG-U95Av2 database into your R session. Each database has three tables that describe the contents of that database, as well as where the information contained in the database originated. The metadata table describes the package itself and gives information such as the schema name, schema version, chip name and a manufacturer URL. This schema information is useful for telling users which version of the schema they should consult if they want to make queries that join different databases together, like the compound query described above. The map_metadata table lists the various maps provided by the package and where the data for each map were obtained. And finally, the map_counts table gives the number of values that are contained in each map. A summary of the tables, the number of elements that are mapped, and information on the schema and on the data used to create the package is printed by calling a function that has the same name as the package.
> qcdata = capture.output(hgu95av2())
> head(qcdata, 20)
 [1] "Quality control information for hgu95av2:"
 [2] ""
 [3] ""
 [4] "This package has the following mappings:"
 [5] ""
 [6] "hgu95av2ACCNUM has 12625 mapped keys (of 12625 keys)"
 [7] "hgu95av2ALIAS2PROBE has 36833 mapped keys (of 36833 keys)"
 [8] "hgu95av2CHR has 12117 mapped keys (of 12625 keys)"
 [9] "hgu95av2CHRLENGTHS has 25 mapped keys (of 25 keys)"
[10] "hgu95av2CHRLOC has 11817 mapped keys (of 12625 keys)"
[11] "hgu95av2ENSEMBL has 11156 mapped keys (of 12625 keys)"
[12] "hgu95av2ENSEMBL2PROBE has 8286 mapped keys (of 8286 keys)"
[13] "hgu95av2ENTREZID has 12124 mapped keys (of 12625 keys)"
[14] "hgu95av2ENZYME has 1957 mapped keys (of 12625 keys)"
[15] "hgu95av2ENZYME2PROBE has 709 mapped keys (of 709 keys)"
[16] "hgu95av2GENENAME has 12124 mapped keys (of 12625 keys)"
[17] "hgu95av2GO has 11602 mapped keys (of 12625 keys)"
[18] "hgu95av2GO2ALLPROBES has 8383 mapped keys (of 8383 keys)"
[19] "hgu95av2GO2PROBE has 5898 mapped keys (of 5898 keys)"
[20] "hgu95av2MAP has 12093 mapped keys (of 12625 keys)"
Alternatively, the contents of the map_counts table can be obtained from the MAPCOUNTS object, while the contents of the metadata table can be obtained by calling the appropriate dbInfo function, as demonstrated below.
> hgu95av2MAPCOUNTS
> hgu95av2_dbInfo()
8.4.3.4 Making new data packages with SQLForge
Included in the AnnotationDbi package is a collection of functions that can be used to make new microarray annotation packages. Making a chip annotation package is a two-step process. In simple terms, a file containing the mapping between the chip identifiers and some standard biological identifiers is used, in conjunction with a special intermediate database, to construct a chip-specific database. The second step wraps that chip-specific database into an R package. In more detail, the first step is to construct an SQLite database that conforms to a schema for the organism that the chip is designed for. Conforming to a standard schema is essential, as it allows the new package to integrate with all other annotation packages, such as GO.db and KEGG.db. This database-building step requires two inputs. It requires an input file that maps probe IDs to another known ID, typically a tab-delimited file. If the chip is an Affymetrix chip and you have one of their csv files, you can use that as an input instead. If a tab-delimited file is used, then this file must have two columns, where the first column is the probe ID and the second column is the other ID, and no header should be used; the first line in the file should be the first pair of mappings. The other ID can be an Entrez Gene ID, a RefSeq ID, a GenBank ID, a Unigene ID or a mixture of GenBank and RefSeq IDs. If there is other information in the form of alternate IDs that are also matched to the probe IDs, these can also be included as other, optional, files. The second required input is an intermediate database. This database contains information for all genes in the model organism, drawn from many different biological sources, such as Entrez Gene, KEGG, GO and Uniprot. These databases are provided as Bioconductor packages, with one package for each supported model organism. These packages are very large, and are not required unless you want to make annotation packages for the organism in question. Packages can be downloaded using biocLite, as is shown below for the intermediate package needed to construct annotation for human microarrays.
> source("http://bioconductor.org/biocLite.R")
> biocLite("human.db0")

For demonstration purposes, a file mapping probes on the HG-U95Av2 GeneChip to GenBank IDs is provided in the extdata directory of
AnnotationDbi. In the example below, we first obtain the path to that file and then set up the appropriate metadata. Details on what terms to use for each of the model organisms are given in the vignette for the AnnotationDbi package.

> hgu95av2_IDs = system.file("extdata", "hgu95av2_ID",
+     package = "AnnotationDbi")
> # Then specify some of the metadata for my database
> myMeta = c("DBSCHEMA" = "HUMANCHIP_DB",
+     "ORGANISM" = "Homo sapiens",
+     "SPECIES" = "Human",
+     "MANUFACTURER" = "Affymetrix",
+     "CHIPNAME" = "Affymetrix Human Genome U95 Set Version 2",
+     "MANUFACTURERURL" = "http://www.affymetrix.com")

We then create a temporary directory to hold the database, and construct one.

> tmpout = tempdir()
> popHUMANCHIPDB(affy = FALSE, prefix = "hgu95av2Test",
+     fileName = hgu95av2_IDs, metaDataSrc = myMeta,
+     baseMapType = "gb", outputDir = tmpout,
+     printSchema = TRUE)

In the above example, setting the DBSCHEMA value is especially important as it specifies the schema to be used for the database. The function popHUMANCHIPDB actually populates the database, and its name reflects the schema that it supports. To create a mouse chip package, you would use popMOUSECHIPDB. The second phase of making an annotation data package is wrapping the SQLite database into an AnnotationDbi-compliant source package. We need to specify the schema, PkgTemplate, the version number, Version, as well as other details, in a seed object. Once that has been done, the function makeAnnDbPkg is used to carry out the computations, and its output is a fully formed R package that can be installed and used by anyone.

> seed = new("AnnDbPkgSeed", Package = "hgu95av2Test.db",
+     Version = "1.0.0", PkgTemplate = "HUMANCHIP.DB",
+     AnnObjPrefix = "hgu95av2Test")
> makeAnnDbPkg(seed, file.path(tmpout, "hgu95av2Test.sqlite"),
+     dest_dir = tmpout)
Creating package in /tmp/Rtmpv0RzTT/hgu95av2Test.db
In order to simplify the process, there is a wrapper function that performs both steps: it builds the chip-specific SQLite database and then constructs a complete annotation package. In most cases this is preferable to the two-step procedure previously discussed.
> makeHUMANCHIP_DB(affy = FALSE, prefix = "hgu95av2",
+     fileName = hgu95av2_IDs, baseMapType = "gb",
+     outputDir = tmpout, version = "2.1.0",
+     manufacturer = "Affymetrix",
+     chipName = "Affymetrix Human Genome U95 Set Version 2",
+     manufacturerUrl = "http://www.affymetrix.com")
Functions are available for six of the major model organisms: makeFLYCHIP_DB, makeHUMANCHIP_DB, makeMOUSECHIP_DB, makeRATCHIP_DB, makeYEASTCHIP_DB and makeARABIDOPSISCHIP_DB.
8.5 XML
The eXtensible Markup Language (XML) is a widely used standard for marking up data and text in a structured way. It is an important tool for communication between different clients and servers on the World Wide Web, where servers and clients use XML dialects to negotiate queries on service availability, to submit requests, and to encode the results of requested computations. Readers unfamiliar with XML should consult one of the many references available, such as Skonnard and Gudgin (2001) or Harold and Means (2004). There are many other related tools and paradigms that can be used to interact with XML documents; in particular XSL, a stylesheet language for XML, and XSL Transformations (XSLT, http://www.w3.org/TR/xslt), which can be used to transform XML documents from one form to another. The Sxslt package, available from Omegahat, provides an interface to an XSLT translator. Also of interest is the XPath language (http://www.w3.org/TR/xpath), which was designed for addressing parts of an XML document.
An XML document is tree-like in structure. It consists of a series of elements, which we will sometimes also refer to as nodes. An example is given in Program 8.1. Normally each element has both an opening tag and a closing tag, but in some circumstances these can be collapsed into a single tag. The syntax for an opening tag is to have the name of the element enclosed between a less-than sign, <, and a greater-than sign, >. Following the element name, and before the closing >, there can be any number of named attributes. The end of the element is signaled using a similar syntax, except that here the name of the element is preceded by a forward slash. Between the opening and closing tags there can be other XML elements, plain text, or a few other constructs, but we will only consider the simple case of plain text here and refer the reader to the specialized references already mentioned for more details on the XML format. The most basic value of an XML element is the sub-document rooted at that element. When an element has no XML children, the value is the textual content between the opening and closing tags.

A small extract from one of the files supplied by the IntAct database (Kerrien et al., 2006) is shown in Program 8.1. In this example the first element is named participantList; that element has no attributes, but one child, participant, which has an attribute named id. The participant element itself has a subelement, named interactorRef. The interactorRef element has no attributes, but it does have a value, in this case the number 803.

<participantList>
  <participant id="...">
    <interactorRef>803</interactorRef>
    ...
  </participant>
</participantList>

Program 8.1: An XML snippet.

Typically, but not necessarily, a schema or DTD is used to describe a specific XML format. The schema describes the allowable tags and provides information on what content is allowed. XML documents are strictly hierarchical and essentially tree-like. Each element in the document may have one or more child elements. The schema describes the set of required and allowed child elements for any given element. The only real benefit to using XML over any other form of markup is that good parsers, validators and many other tools have been written in just about every computer language; neither you nor those working with you will need to write that sort of low-level code.

XML name spaces are described in most books on XML as well as at http://www.w3.org/TR/REC-xml-names/. The use of name spaces allows
the reuse of tags in different contexts. A simple example of a name space, taken from the web site named above, is shown in Program 8.2. In this example there are two name spaces, bk and isbn. These can then be used as a prefix on the tag names in the document, e.g., bk:title and isbn:number.

<bk:book xmlns:bk='urn:loc.gov:books'
         xmlns:isbn='urn:ISBN:0-395-36341-6'>
    <bk:title>Cheaper by the Dozen</bk:title>
    <isbn:number>1568491379</isbn:number>
</bk:book>

Program 8.2: An example of a name space in XML.

There are two basic methods for parsing XML documents. One is document object model, or DOM, parsing and the other is event-style, or SAX, parsing. The DOM approach is to read the entire document into memory and to operate on it as a whole. The SAX approach is to parse the document and to act on different entities as the document is parsed. SAX parsing can be much more efficient for large files, as it is seldom necessary to have the entire document in memory at one time.
8.5.1 Simple XPath
Our tutorial on XPath is very brief, and is intended only to make the further examples comprehensible. Readers should consult a more definitive reference if they are going to make extensive use of XPath.

//participant selects all participant elements.

//participant/interactorRef selects all interactorRef elements that are children of a participant element.

//@id selects all attributes that are named id.

//participant[@id] selects all participant elements that have an attribute named id.

There are many more capabilities; elements can be selected on the value of an attribute, the first child of an element, the last child, children with specific properties, and many other criteria can be used.
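A toy illustration of these selectors, using functions from the XML package that is described in the next section; the two-participant document below is invented for the purpose.

> library("XML")
> txt = '<a><participant id="1">
+   <interactorRef>803</interactorRef>
+ </participant><participant/></a>'
> doc = xmlTreeParse(txt, asText = TRUE,
+     useInternalNodes = TRUE)
> length(getNodeSet(doc, "//participant"))
[1] 2
> length(getNodeSet(doc, "//participant[@id]"))
[1] 1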
8.5.2 The XML package
Processing of XML documents in R can be done with the XML package. The package is extensive and provides interfaces to most tools that you will need to efficiently process XML documents. While the XML package has relatively little explicit documentation, it has a very large number of examples that can be studied to get a better idea of its extensive capabilities. These files can be found in the examples subdirectory of the installed package. Given the structure of an XML document, a convenient and flexible way to process the document is to process each element with a function that is specialized to deal with that particular element. Such functions are sometimes referred to as handlers, and this is the approach that the XML package has taken. DOM-style parsing is done by the xmlTreeParse function, while SAX, or stream, style parsing is handled by the xmlEventParse function. In either case, handler functions can be defined and supplied. These handlers are then invoked, when appropriate, and facilitate processing of the document. With xmlTreeParse, the value returned by a handler is placed into the saved version of the XML document; a rather convenient way to remove elements is therefore to provide a handler that returns NULL. The return value from xmlTreeParse is an object of class XMLDocument. The return value from xmlEventParse is the handlers argument that was supplied; it is presumed that this is a closure, and that the handlers have processed the elements and stored the quantities of interest in local variables that can be extracted later.
8.5.3 Handlers
Handlers can be specified for any type of element; when an element is encountered during parsing, the appropriate handler is invoked. Handlers can be specialized to the start of element processing, or they can be run after the element has been processed, but before the next element is read. For DOM processing using xmlTreeParse, either a named list of handlers or a single function can be supplied. If a single function is supplied, then it is called on every element of the underlying DOM tree. For SAX processing with xmlEventParse, the handlers are more extensive. The standard function or handler names are startElement, endElement, comment, getEntity, entityDeclaration, processingInstruction, text, cdata, startDocument and endDocument. In addition, you can provide handler functions for specific tags, such as <myTag>, by giving the handler the name myTag.
8.5.4 Example data
An XML file containing data from the IntAct database (Kerrien et al., 2006) is supplied as part of the RBioinf package. We will use it for our XML parsing examples. More extensive examples, and a reasonably complete solution for dealing with IntAct, are provided in the RIntact package (Chiang et al., 2007). In the code below we find the location of that file, so that it can be parsed.

> Yeastfn = system.file("extdata", "yeast_small-01.xml",
+     package = "RBioinf")

No matter what type of processing you do, you will need to ascertain some basic facts about the XML document being processed. Among these is finding out what name spaces are being used, as this will be needed in order to properly process the document. This information can be obtained using the xmlNamespaceDefinitions function. A default name space has no name, and can be retrieved using the getDefaultNamespace function. We save the default name space in a variable named namespaces and pass that to any function that needs to know the default name space. In the code below we read in the document and then ascertain whether it is using a name space, and if so what it is. This will be important for parsing the document.

> yeastIntAct = xmlTreeParse(Yeastfn)
> nsY = xmlNamespaceDefinitions(xmlRoot(yeastIntAct))
> ns = getDefaultNamespace(xmlRoot(yeastIntAct))
> namespaces = c(ns = ns)
Exercise 8.10
How many name space definitions are there for the XML document that was parsed? What are the URIs for each of them?
8.5.5 DOM parsing
DOM-style parsing retrieves the entire XML document into memory, where it can then be processed in different ways. The simplest way to invoke DOM-style parsing is to use the xmlTreeParse function, which was done above. This results in a large object that is stored as a list. Since we are not interested in all of the contents of this file, we can specify handlers for the elements that are not of interest and drop them. Since the default return value is the set of handlers, you must be sure to ask for the tree to be returned. In the code below we remove all elements named sequence, organism, primaryRef, secondaryRef and names.
> nullf = function(x, ...) NULL
> yeast2 = xmlTreeParse(Yeastfn,
+     handlers = list(sequence = nullf, organism = nullf,
+         primaryRef = nullf, secondaryRef = nullf,
+         names = nullf), asTree = TRUE)

We can easily compare the size of the two documents and see that the second is much smaller.
> object.size(yeastIntAct)
[1] 47253568
> object.size(yeast2)
[1] 11793648

If, instead, it is desirable to obtain the entire tree so that it can later be manipulated and queried, it may be beneficial to set the argument useInternalNodes to TRUE. This causes the document to be stored in a format internal to the XML package, rather than being converted into an R object. This tends to be faster, since no conversion is done, but it also restricts the sort of processing that can be done. However, one can use XPath expressions via the function getNodeSet to process the data. Since the XML file uses a default name space, we must use it when referencing the elements in any call to getNodeSet. We make use of the namespaces object created above. Recall that it has a value named ns, and that is used to replace the ns in "//ns:attributeList" with the appropriate URI. Recall from Section 8.5.1 that this XPath directive selects all elements in the document named attributeList.
> yeast3 = xmlTreeParse(Yeastfn, useInternalNodes = TRUE)
> f1 = getNodeSet(yeast3, "//ns:attributeList", namespaces)
> length(f1)
[1] 10

We see that there are ten elements named attributeList in the document. But our real goal is to find all protein-protein interactions, and we next attempt
to do just that. As in almost all cases of interacting with XML files, we need to know a reasonable amount about how the document is actually constructed to be able to extract the relevant information. We will make use of both the xmlValue function and the xmlAttrs function to extract values from the elements. We first obtain the interaction detection methods; in this case all interactions are two hybrid, which is a method for detecting physical protein interactions (Fields and Song, 1989). We then obtain the name of the organism being studied, Saccharomyces cerevisiae.

> iaM = getNodeSet(yeast3,
+     "//ns:interactionDetectionMethod//ns:fullName",
+     namespaces)
> sapply(iaM, xmlValue)
[1] "two hybrid"
> f4 = getNodeSet(yeast3, "//ns:hostOrganism//ns:fullName",
+     namespaces)
> sapply(f4, xmlValue)
[1] "Saccharomyces cerevisiae"

We can obtain the interactors and the interactions in a similar way. We first use getNodeSet and an XPath specification to obtain the elements of the XML document that contain the information we want, and then use sapply and xmlValue to extract the quantities of interest. In this case we first obtain the interactors and then the interactions they participate in.
> interactors = getNodeSet(yeast3,
+     "//ns:interactorList//ns:interactor", namespaces)
> length(interactors)
[1] 503
> interactions = getNodeSet(yeast3,
+     "//ns:interactionList/ns:interaction", namespaces)
> length(interactions)
[1] 524
There are 503 different interactors that are involved in 524 different interactions. An alternative is to use xpathApply to perform both operations in a single step.
> interactors = xpathApply(yeast3,
+     "//ns:interactorList//ns:interactor",
+     xmlValue, namespaces = namespaces)
A similar but different functionality is provided by xmlApply, which applies a function over the children of the node passed as an argument.
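For example, a minimal sketch using the DOM tree read earlier: xmlName is applied to each child of the root element, yielding the names of the top-level elements (output omitted here).

> root = xmlRoot(yeastIntAct)
> childNames = xmlApply(root, xmlName)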
8.5.6 XML event parsing
We now discuss using the XML package to perform event-based parsing of an XML document. One of the advantages of this approach is that the entire document does not need to be processed and stored in R; rather, it is processed an element at a time. To take full advantage of the event parsing model, we will rely on lexical scope (Section 2.13). In the code below we create a few simple functions for parsing different elements in the XML file. The name of the function is irrelevant and can be whatever you want. The functions should take two arguments; the first will be the name of the element, the second will be the XML attributes. In the code below we define three separate handlers, and in this example they are essentially unrelated to each other. The first one is called entSH; it first prints out the name of the element and then saves the values of two attributes, level and minorVersion. We have it print as a debugging mechanism that allows us to be sure that nodes are being handled. Then we create an environment that will be used to store these values and make it the environment of the function entSH.

> entSH = function(name, attrs, ...) {
+     cat("Starting", name, "\n")
+     level <<- attrs["level"]
+     minorVersion <<- attrs["minorVersion"]
+ }
> e2 = new.env()
> environment(entSH) = e2
In the next code chunk, we create two more handlers, one to extract the taxonomic ID and the other to count the number of interactors. They share an environment, but that is only for expediency; there is no data sharing in the example.

> hOrg = function(name, attrs, ...) {
+     taxid <<- attrs["ncbiTaxId"]
+ }
> e3 = new.env()
> e3$taxid = NULL
> environment(hOrg) = e3
> hInt = function(name, attrs, ...) numInt <<- numInt + 1
> e3$numInt = 0
> environment(hInt) = e3
And now we can use these handlers to parse the data file and then print out the values. The name of the handler function is irrelevant since the link between XML element name and any particular handler function is determined by the name used in the handlers list. So, in the code chunk below, we have installed handlers for three specific types of elements, namely entrySet, hostOrganism and interactor.
> s1 = xmlEventParse(Yeastfn,
+     handlers = list(entrySet = entSH,
+         hostOrganism = hOrg, interactor = hInt))
Starting entrySet 
> environment(s1$entrySet)$level
level 
  "2" 
> environment(s1$hostOrganism)$taxid
ncbiTaxId 
   "4932" 
> environment(s1$interactor)$numInt
[1] 503
8.5.7 Parsing HTML
HTML is another markup language and, if machine generated, can often be parsed automatically; however, many HTML documents do not have closing tags or use non-standard markup, which can make parsing very problematic. The function htmlTreeParse can be used to parse HTML documents, and it can be quite useful for some screen-scraping activities. For example, we can use htmlTreeParse to parse the Bioconductor build reports (see the code example below). The return value is a list of length three, and the actual HTML has been converted to XML in the children sublist. This can now be processed using the standard XML tools discussed previously. In the call to htmlTreeParse, we set useInternalNodes to TRUE so that we will be able to use XPath syntax to extract elements of interest.

> url = paste("http://www.bioconductor.org/checkResults/",
+     "2.1/bioc-LATEST/", sep = "")
> s1 = htmlTreeParse(url, useInternalNodes = TRUE)
> class(s1)
[1] "XMLInternalDocument"

For example, we can extract all of the package names. To do this, we will use the getNodeSet function, together with XPath syntax, to quickly identify the elements we want. By looking at the source code for the check page, we see that each package name is the value of an element of the form <A href="...">spikeLI</A>, so we first look for elements in the tree that are named A and have an href attribute. This is done using XPath syntax in the first line of the code chunk (note that currently it seems that XML translates all element names to lower case, so you must specify them in lower case). We see that there are many more such elements than there are packages, so we will need to do some more work to retrieve the package names.

> f1 = getNodeSet(s1, "//a[@href]")
> length(f1)
[1] 4243

There are two different approaches that can be taken at this point. One is to use the function xmlGetAttr to retrieve the values for the href attributes; these
can then be processed by grep and sub to find those that refer to Bioconductor packages. A second approach is to return to the HTML source, where we notice that the elements we are interested in are always subelements of b elements. In the code below we refine our XPath query to select only those a elements that are direct descendants of b elements.
> f2 = getNodeSet(s1, "//b/a[@href]")
> p2 = sapply(f2, xmlValue)
> length(p2)
[1] 261
> p2[1:10]
 [1] "lamb1"      "wilson2"    "wellington" "pitt"      
 [5] "lemming"    "liverpool"  "A"          "ABarray"   
 [9] "aCGH"       "ACME"      
We can compare our results to the web page and see that this procedure has indeed largely retrieved the package names as desired. While the process requires some manual intervention, using htmlTreeParse and tools provided with the XML package greatly simplifies the process of retrieving values from HTML, when that is necessary.
Exercise 8.11
Carry out the first suggestion above. That is, starting with f1, retrieve the element attributes and then process them via grep and gsub to find the names of the packages. Compare your results with those above.
8.6 Bioinformatic resources on the WWW
Many bioinformatic resources provide support for remote automated retrieval of data. Several different mechanisms are used, including SOAP, responses to queries (often in XML) and the BioMart system. Examples of service providers are the NCBI, the Kyoto Encyclopedia of Genes and Genomes (KEGG) and Ensembl. In this section we discuss and provide some simple examples that make use of these resources. In most cases there are specific R packages that provide and support the required interface.
8.6.1 PubMed
The National Library of Medicine (NLM) provides support for a number of different web services. We have developed a set of tools that can be used to query PubMed. The software is contained in the annotate package, and more details and documentation are provided as part of that package. Some of our earlier work in this area was reported in Gentleman and Gentry (2002). Some functions in annotate that provide support for accessing online data resources are itemized below.

genbank: users specify GenBank identifiers and can request the related links to be rendered in the browser or returned in XML.

pubmed: users specify PubMed identifiers and can request them to be rendered in the browser or returned in XML.

pm.getabst: the abstracts for the specified PubMed IDs will be downloaded for processing.

pm.abstGrep: supports processing downloaded abstracts via grep to find terms contained in the abstract, such as the name of your favorite gene.
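A minimal sketch of how these helpers fit together, assuming the hgu95av2 annotation package and a live internet connection; the exact argument conventions are recalled from memory and may differ between versions of annotate.

> library("annotate")
> absts = pm.getabst("1001_at", "hgu95av2")  # download abstracts
> pm.abstGrep("kinase", absts[[1]])          # search for a term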
8.6.2 NCBI
In this example, we initiate a request to the EInfo utility to provide a list of all the databases that are available through the NCBI system. These can then be queried in turn to determine what their contents are. And indeed, it is possible to build a system in R for querying the NCBI resources that would largely parallel the functionality supplied by the biomaRt package, which is discussed in some detail in Section 8.6.3.

> ezURL = "http://eutils.ncbi.nlm.nih.gov/entrez/eutils/"
> t1 = url(ezURL, open = "r")
> if (isOpen(t1)) {
+     z = xmlTreeParse(paste(ezURL, "einfo.fcgi", sep = ""),
+         isURL = TRUE, handlers = NULL, asTree = TRUE)
+     dbL = xmlChildren(z[[1]]$children$eInfoResult)$DbList
+     dbNames = xmlSApply(dbL, xmlValue)
+     length(dbNames)
+     dbNames[1:5]
+ }
      DbName       DbName       DbName       DbName       DbName 
    "pubmed"    "protein" "nucleotide"    "nuccore"     "nucgss" 
We see that at the time the query was issued, there were 37 databases. The names of five of them are listed, and the others can be retrieved from the dbNames object. Parsing of the XML is handled by fairly standard tools, and in particular we want to draw attention to the apply-like functions. Because XML document objects have complex R representations, the use of XPath and xmlApply will generally simplify the code that needs to be written.
8.6.3 biomaRt
BioMart is a query-oriented data system that is being developed by the European Bioinformatics Institute (EBI) and the Cold Spring Harbor Laboratory (CSHL). The biomaRt package provides an interface to BioMart.

> library("biomaRt")
> head(listMarts())
                      biomart                                 version
1                     ensembl               ENSEMBL 49 GENES (SANGER)
2    compara_mart_homology_49            ENSEMBL 49 HOMOLOGY (SANGER)
3 compara_mart_pairwise_ga_49 ENSEMBL 49 PAIRWISE ALIGNMENTS (SANGER)
4 compara_mart_multiple_ga_49 ENSEMBL 49 MULTIPLE ALIGNMENTS (SANGER)
5                         snp           ENSEMBL 49 VARIATION (SANGER)
6            genomic_features    ENSEMBL 49 GENOMIC FEATURES (SANGER)
Users can then select one of the BioMart databases to query; we will select the ensembl mart. We can then query that mart to find out which data sets it supports, and we will choose to use the human one.

> ensM = useMart("ensembl")
> ensData = head(listDatasets(ensM))
> dim(ensData)
[1] 6 3
> ensMH = useDataset("hsapiens_gene_ensembl", mart = ensM)
If you know both the name of the BioMart server and data set in advance, you can make the whole request in one step.
> ensMH = useMart("ensembl", dataset = "hsapiens_gene_ensembl")
Now we are ready to make data requests. biomaRt supports many more interactions than we will be able to cover, so interested readers should refer to the package vignette for more details and examples. To understand biomaRt's query API, we must understand what the terms filter and attribute mean. A filter defines a restriction on a query; for example, you might restrict results to a subset of genes, filtered by a gene identifier. Attributes define the values we want to retrieve, for instance, GO identifiers or PFAM identifiers for the selected genes. You can get a listing of available filters with listFilters and a listing of the available attributes with listAttributes.
> filterSummary(ensMH)
  category                      group
1  FILTERS                      GENE:
2  FILTERS                EXPRESSION:
3  FILTERS                    REGION:
4  FILTERS             GENE ONTOLOGY:
5  FILTERS                   PROTEIN:
6  FILTERS                       SNP:
7  FILTERS MULTI SPECIES COMPARISONS:

> lfilt = listFilters(ensMH, group = "GENE:")
> nrow(lfilt)
[1] 166
> head(lfilt)
                   name               description
1          affy_hc_g110       Affy hc g 110 ID(s)
2        affy_hc_g110-2       Affy hc g 110 ID(s)
3         affy_hg_focus       Affy hg focus ID(s)
4       affy_hg_focus-2       Affy hg focus ID(s)
5   affy_hg_u133_plus_2 Affy hg u133 plus 2 ID(s)
6 affy_hg_u133_plus_2-2 Affy hg u133 plus 2 ID(s)
We can see that there are several types of filters; there are 166 filters in the GENE group alone. We next query the attributes to see which ones are available.

> head(attributeSummary(ensMH))
  category            group
1 Features        EXTERNAL:
2 Features            GENE:
3 Features      EXPRESSION:
4 Features         PROTEIN:
5 Features  GENOMIC REGION:
6 Homologs AEDES ORTHOLOGS:
> lattr = listAttributes(ensMH, group = "PROTEIN:")
> lattr
                         name                description
1                      family          Ensembl Family ID
2          family_description         Family Description
3                    interpro                Interpro ID
4        interpro_description       Interpro Description
5  interpro_short_description Interpro Short Description
6                        pfam                    PFAM ID
7                      prints                  PRINTS ID
8                     prosite                 Prosite ID
9                  prot_smart                   SMART ID
10              signal_domain              Signal domain
8.6.3.1 A small example
We will begin with a small set of three Entrez Gene IDs: 983 (CDC2), 3581 (IL9R) and 1017 (CDK2). The function getGene can be used to retrieve the corresponding records. Note that it returns one record per Ensembl transcript ID, which is often more than the number of Entrez Gene IDs. In the code below, we use getGene to retrieve gene-level data and print out the symbols for the three genes. Note that the order of the genes in the return value need not be the same as in the request. Also, the getGene interface provides a limited set of values; if you want more detailed information, you will need to use getBM with the attributes and filters described above, or one of the other helper functions in biomaRt, such as getGO.

> entrezID = c("983", "3581", "1017")
> rval = getGene(id = entrezID, type = "entrezgene",
+     mart = ensMH)
> unique(rval$hgnc_symbol)
[1] "CDK2" "IL9R" "CDC2"
Exercise 8.12
What other data were returned by the call to getGene?

In order to obtain other information on the quantities of interest, the getBM function provides a very general interface. In the code below we show how to obtain Interpro domains for the same set of query genes as we used above.
> ensembl = useMart("ensembl",
+     dataset = "hsapiens_gene_ensembl")
> ipro = getBM(attributes = c("entrezgene", "interpro",
+     "interpro_description"), filters = "entrezgene",
+     values = entrezID, mart = ensembl)
> ipro
   entrezgene  interpro                         interpro_description
1        1017 IPR000719                         Protein kinase, core
2        1017 IPR008271 Serine/threonine protein kinase, active site
3        1017 IPR001245                      Tyrosine protein kinase
4        1017 IPR008351                               JNK MAP kinase
5        1017 IPR002290              Serine/threonine protein kinase
6        3581 IPR003531       Short hematopoietin receptor, family 1
7         983 IPR000719                         Protein kinase, core
8         983 IPR001245                      Tyrosine protein kinase
9         983 IPR002290              Serine/threonine protein kinase
10        983 IPR008271 Serine/threonine protein kinase, active site
8.6.4 Getting data from GEO
The Gene Expression Omnibus (GEO) is a repository for gene expression or molecular abundance data. The repository has an online interface where users can select and download data sets of interest. The GEOquery package provides a useful set of interface tools that support downloading of GEO data and their conversion into ExpressionSet and other Bioconductor data structures suitable for analysis. The main function in that package is getGEO, which is invoked with the name of the data set that you would like to download. It may be advantageous to use the destdir argument to store the downloaded file in a permanent location on your local file system as the default location is removed when the R session ends. In the code below, we download a GEO data set and then convert it into an expression set.
> library(GEOquery)
> gds = getGEO("GDS1")
File stored at:
/tmp/Rtmpv0RzTT/GDS1.soft
> eset = GDS2eSet(gds, do.log2 = TRUE)
File stored at:
/tmp/Rtmpv0RzTT/GPL5.soft

The conversion to an ExpressionSet is quite complete, and all reporter and experiment information is copied into the appropriate locations, as is shown in the example below.
> s1 = experimentData(eset)
> abstract(s1)
> s1@pubMedIds
Experiment data
  Experimenter name: 
  Laboratory: 
  Contact information: 
  Title: Testis gene expression profile
  URL: 
  PMIDs: 11116097
  Abstract: A 28 word abstract is available. Use abstract method.
  notes: table_begin
    channel_count: 1
    description: Adult testis gene expression profile and gene
      discovery. Examines testis, whole male minus gonads, ovary
      and whole female minus gonads from adult, 12-24 hours
      post-eclosion, genotype y w[67c1].
    feature_count: 3456
    order: none
    platform: GPL5
    platform_organism: Drosophila melanogaster
    platform_technology_type: spotted DNA/cDNA
    pubmed_id: 11116097
    reference_series: GSE462
    sample_count: 8
    sample_organism: Drosophila melanogaster
    sample_type: RNA
    title: Testis gene expression profile
    type: gene expression array-based
    update_date: Aug 17 2006
    value_type: count
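If a data set will be reused across sessions, the destdir argument mentioned above can be used to cache the download. A minimal sketch, assuming a directory ~/geo already exists:

> gds = getGEO("GDS1", destdir = "~/geo")
> eset = GDS2eSet(gds, do.log2 = TRUE)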
8.6.5 KEGG
The Kyoto Encyclopedia of Genes and Genomes (Kanehisa and Goto, 2000) provides a great deal of biological and bioinformatic information. Much of it can be downloaded and processed locally, but KEGG also provides a web service that uses the Simple Object Access Protocol (SOAP). This protocol uses XML to structure requests and responses for web service interactions. The SOAP protocol includes rules for encapsulating requests and responses (e.g., rules for specifying addresses, selecting methods or specifying error handling actions), and for encoding the complex data types that form parts of requests and responses (e.g., encoding arrays of floating point numbers). SOAP services are provided in R through the SSOAP package, available from the Omegahat project, and the Bioconductor package KEGGSOAP provides an interface to some of the data resources provided by KEGG. In the example below we obtain the genes in the riboflavin metabolism pathway in Saccharomyces cerevisiae. We then compare the online answer with the answer that can be obtained from the data in the KEGG package.

> library("KEGG")
> library("KEGGSOAP")
> KEGGPATHID2NAME$"00740"
[1] "Riboflavin metabolism"
> SoapAns = get.genes.by.pathway("path:sce00740")
> SoapAns
 [1] "sce:YAR071W" "sce:YBL033C" "sce:YBR092C" "sce:YBR093C"
 [5] "sce:YBR153W" "sce:YBR256C" "sce:YDL024C" "sce:YDL045C"
 [9] "sce:YDR236C" "sce:YDR487C" "sce:YHR215W" "sce:YOL143C"
[13] "sce:YPR073C"
Notice that the species abbreviation has been prepended to all gene names. We use gsub to remove the prefix, and then setdiff to check whether there are any differences between the two answers.

> SA = gsub("^sce:", "", SoapAns)
> localAns = KEGGPATHID2EXTID$sce00740
> setdiff(SA, localAns)
character(0)
Chapter 9 Debugging and Profiling
9.1 Introduction
In this chapter we provide some guidance on tools and strategies that should make debugging your code easier and faster. The first task is to identify the source of the error. While it is generally easy to find where the program actually failed, that is not usually the place where the programming error occurred. Some bugs are reproducible; that is, they occur every time a sequence of commands is executed, on all platforms. Others are more elusive; they arise intermittently and perhaps only under some operating systems. One of the first things you should do when faced with a likely bug is to try to ensure its reproducibility. If it is not easily reproduced, then your first step should be to find situations where it is, as only then is there much hope of finding the problem.

One of the best debugging strategies is to write code so that bugs are less likely to arise in the first place. Prefer simple, short functions, each performing a particular task. Such functions are easy to understand, and errors in them are often obvious. Long, convoluted functions tend both to give rise to more bugs and to be more difficult to debug.

This chapter is divided into several sections. First we discuss the browser function, which is the main tool used in debugging code in R; functions such as debug, trace and recover make use of browser as a basic tool. The debugging tools are all intended primarily for interactive use, and most require some form of user input. We then discuss debugging in R, beginning by recommending static code analysis with functions from the codetools package, and then covering some of the basic tools available in R. Next we cover debugging procedures that can be applied to detect problems in underlying compiled code. We conclude by discussing tools and methods for profiling memory use and the execution of R functions.
9.2 The browser function
The browser function is the building block for many R debugging techniques. A call to browser halts evaluation and starts a special interactive session where the user can inspect the current state of the computation and step through the code one command at a time. The browser can be called from inside any function, and there are ways to invoke the browser when an error or other exception is raised.

Once in the browser, users can execute any R command: they can view the local environment by using ls, and they can set new variables or change the values assigned to variables simply by using the standard methods for assigning values to variables. The browser also understands a small set of commands specific to it. A summary of the available browser commands, together with other useful R commands, is given in Table 9.1 and Table 9.2. Of these, perhaps the most important for new users to remember is Q, which causes R to quit the debugger and return control to the command line.

Any user input is first parsed to see if it is consistent with a special debugger instruction and, if so, the debugger instruction is performed. Most of these commands consist of a single letter and are described below. Any local variable with the same name as one of these commands cannot be viewed by simply typing its name, as is standard practice in R, but instead must be wrapped in a call to print.
ls()       list the variables defined inside the function
x          print the value of variable x
print(x)   print the value of variable x; useful when x is one of n, l, Q or cont
where      print the call stack
Q          stop the current execution and return to the top-level R interpreter prompt
Table 9.1: Browser commands with non-modal functionalities.

When the browser is active, the prompt changes to Browse[i]> for some positive integer i. The browser can be invoked while a browser session is already active, in which case the integer is incremented. Such calls to browser are nested, and control returns to the previous level once a session has finished.
Debugger          Initial Mode               Step through Mode
n                 start the step-through     execute the next step in the
                  debugger                   function
c                 continue execution         continue execution; if inside a
                                             loop, execute until the loop ends
cont              same as c                  same as c
carriage return   same as c                  same as n
Table 9.2: Browser commands with modal functionalities.

Currently the browser only provides access to the active function; there is no easy way to investigate the evaluation environments of other functions on the call stack. The browser command where can be used to print out the current call stack. To change evaluation environments, you can make direct calls to recover from inside the debugger, but be warned that the set of selections offered may be confusing, since with this usage many of the active functions relate to the operation of the browser and not to the evaluation of the function you are interested in.
9.2.1 A sample browser session
Here we show a sample browser session. We first modify the function setVNames from the RBioinf package so that it starts with a call to browser.

> setVNames = function(x, nm) {
+     browser()
+     names(x) = nm
+     asSimpleVector(x, "numeric")
+ }

Then, when setVNames is invoked, as shown below, the evaluation of the call browser() halts execution at that point and a prompt for the browser is printed in the console.
> x = 1:10
> x = setVNames(x, letters[1:10])
Browse[1]>

At the browser prompt, the user can type and execute almost any valid R expression, with the exception of the browser commands described in Tables 9.1 and 9.2, which, if used, will have the interpretation described there. Sometimes the user may unintentionally start a large number of nested browser sessions. For example, if the prompt is currently Browse[2]>, then the user is at browser level 2. Typing c at the prompt will generally continue evaluation of that expression until completion, at which point the user is back at browser level 1 and the prompt changes to Browse[1]>. Typing Q will exit from the browser; no further expressions are evaluated and the user is returned to the top-level R interpreter, where the prompt is >.
9.3 Debugging in R
In this section we describe methods that can be used to debug code written in R. As described in the introduction, an important first step is to use static code analysis to try to detect bugs while developing software, rather than at runtime. One tool for carefully investigating your code for unforeseen problems is the codetools package. The tools it provides inspect R functions and packages and ascertain which variables are local and which are global. They can be used to find variables that are never used, or that have no local or global binding and hence are likely to cause errors. In the example below, we define a function, foo, that we use to demonstrate the codetools package by finding all the global variables referenced in the function.

> foo = function(x, y) {
+     x = 10
+     z = 20
+     baz(100)
+ }
> library("codetools")
> findGlobals(foo)
[1] "="   "baz" "{"
findGlobals reports that there are three global symbols in this function: =, { and baz. The symbols x and y are formal arguments, and hence not global symbols. The numbers 10, 20 and 100 are constants, and hence not symbols at all, either local or global. And z is a local variable, since it is defined and assigned a value in the body of foo. In the next code chunk we find the local variables in the body of foo.
> findLocals(body(foo))
[1] "x" "z"

Notice that x is reported as a local variable, even though it is an argument to foo. The reason is that it is assigned to in the body, so that the argument, if supplied, is ignored; and if the argument is not supplied, then x will indeed be local.

The functions that you are likely to use the most are checkUsage and checkUsagePackage. The first checks a single function or closure, while the latter checks all functions within the specified package. In the code below, we run checkUsage on the function foo, defined above. Note that the lack of a definition for baz is detected, as is the fact that z is created but does not seem to be used.

> checkUsage(foo, name = "foo", all = TRUE)
foo: no visible global function definition for baz
foo: parameter x changed by assignment
foo: parameter y may not be used
foo: local variable z assigned but may not be used
Making use of the tools provided in the codetools package can help find a number of problems in your code, and is well worth the effort. The package checking code, R CMD check, uses codetools and reports potential issues.
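To run the same checks over every function in a package, checkUsagePackage takes the package name; for example, checking all of codetools itself (an arbitrary choice here; the output, not shown, will vary with the installed version):

> library("codetools")
> checkUsagePackage("codetools", all = TRUE)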
9.3.1 Runtime debugging
When an error, or other unintended outcome, occurs while the program is running, the first step is to locate the source of the error, and this is often done in two stages: first you locate where R detected the error, and then you look back from that point to determine where the problem actually occurred. One might think that the important thing is to know which line of
which function gave rise to the error. But in many cases the error arises not because of that particular line, but rather because of some earlier manipulation of the data that rendered it incorrect. Hence, it is often helpful to know which functions are active at the time the error was thrown; by active we mean that the body of the function is being evaluated.

In R (and most other computer languages), when a function is invoked, the statements in the body of the function are evaluated sequentially. Since each of those statements typically involves one or more calls to other functions, the set of functions that is being evaluated simultaneously can be quite large. When an error occurs, we would like to see a listing of all active functions, generally referred to as the call stack.

While our emphasis, and that of most users, is on dealing with errors, the methods we describe here can be applied to other types of exceptions, such as warnings, which we discuss in Section 9.3.2. But some tools, such as traceback, are specific to errors. The variable .Traceback stores the call stack for the last uncaught error; errors that are caught using try or tryCatch do not modify .Traceback. By default, traceback prints the value in .Traceback in a somewhat more user-friendly way. Consider the example below, which makes use of functions supplied in the RBioinf package.

> x = convertMode(1:4, list())
Error in asSimpleVector(from, mode(to)) : invalid mode list
> traceback()
3: stop("invalid mode ", mode)
2: asSimpleVector(from, mode(to))
1: convertMode(1:4, list())

Each line starting with a number in the output from traceback represents a new function call. Because of lazy evaluation, the sequence of function calls can sometimes look a little odd. Since line numbers are not given, it is not always clear where the error occurred, but at least the user has some sense of which calls were active, and that can greatly help to narrow down the potential causes of the error.
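To illustrate the point about caught errors, the same failing call can be wrapped in try; in the sketch below, traceback still reports the earlier, uncaught error because the caught one never modified .Traceback.

> res = try(convertMode(1:4, list()), silent = TRUE)
> traceback()
3: stop("invalid mode ", mode)
2: asSimpleVector(from, mode(to))
1: convertMode(1:4, list())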
9.3.2 Warnings and other exceptions
Sometimes, instead of getting an error, we get an unexpected warning. Just like unexpected errors, we want to know where they occurred. There are two
strategies that you can use. First, you can turn all warnings to errors by setting the warn option, as is done in the example below.
> saveopt = options(warn = 2)
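With this option set, a condition that would ordinarily produce only a warning signals an error instead. For instance, the coercion warning from as.integer provides a convenient trigger (a sketch; the exact message format may vary across R versions):

> as.integer("x")
Error in as.integer("x") : (converted from warning) NAs introduced by coercion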
As the example shows, any warning is now reported as an error. Later you can restore the settings using the value that was saved when the option was set.
> options(saveopt)
The second strategy is to use the function withCallingHandlers, which provides a very general mechanism for catching errors, warnings, or other conditions and invoking different R functions to debug them. In the example below, we handle warnings; other exceptions can be included by simply adding handlers for them to the call to withCallingHandlers.
> withCallingHandlers(expression,
+     warning = function(c) recover())
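A concrete, non-interactive variant is sketched below; sqrt(-1) is used here simply because it signals a warning, and muffleWarning is the standard restart for suppressing a warning once it has been handled.

> withCallingHandlers(sqrt(-1),
+     warning = function(w) {
+         cat("caught:", conditionMessage(w), "\n")
+         invokeRestart("muffleWarning")
+     })
caught: NaNs produced 
[1] NaN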
9.3.3 Interactive debugging
There are a number of different ways to invoke the browser. Users can have control transferred to the browser on error, they can have the browser invoked on entry to a specific function, and, more generally, the trace function provides a number of capabilities for monitoring when functions are entered or exited. Both debug and trace interact fairly gracefully with name spaces. They allow the user to debug or trace evaluation within the name space without editing the source code and rebuilding the package, and so are generally the preferred methods of interacting with code in packages with name spaces.

9.3.3.1 Entering the browser on error
By setting the error option, users can request that the browser be invoked when an error is signaled. This can be much simpler than editing the code and placing direct calls to the browser function in it. In the code chunk below, we set the error option to the function recover.
> options(error = recover)

From this point onwards, until you reset the error option, whenever an error is thrown, R will call the function recover with no arguments. When called, the recover function prints a listing of the active calls and asks the user to select one of them. On selection of a particular call, R starts a browser session inside that call. If the user exits the browser session by typing c, she is again asked to select a call. At any time when making the call selection, the user can return to the R interpreter prompt by selecting 0. Here is an example session with recover:

> x = convertMode(1:4, list())
Error in asSimpleVector(from, mode(to)) : invalid mode list

Enter a frame number, or 0 to exit

1: convertMode(1:4, list())
2: asSimpleVector(from, mode(to))

Selection: 2
Called from: eval(expr, envir, enclos)
Browse[1]> ls()
[1] "mode" "x"
Browse[1]> mode
[1] "list"
Browse[1]> x
[1] 1 2 3 4
Browse[1]> c

Enter a frame number, or 0 to exit

1: convertMode(1:4, list())
2: asSimpleVector(from, mode(to))

Selection: 1
Called from: eval(expr, envir, enclos)
Browse[1]> ls()
[1] "from" "to"
Browse[1]> to
list()
Browse[1]> Q
9.3.4 The debug and undebug functions
It is sometimes useful to enter the browser whenever a particular function is invoked. This can be achieved using the debug function. We will again use the setVNames function, which must first be restored to its original state; this can be done by removing the modified copy from your workspace, so that the one in the RBioinf package will again be found.

> rm("setVNames")

Then we execute the code below, testing to see whether we managed to set the names as intended.

> x = matrix(1:4, nrow = 2)
> names(setVNames(x, letters[1:4]))
NULL

We see that the names have not been set. Notice also that there is no error; the program simply is not performing as we would like it to. We suspect the error is in asSimpleVector, so we apply the function debug to it. This function does nothing more than set a flag on the function requesting that the debugger be entered whenever the function is invoked.

> debug(asSimpleVector)

Now any call to asSimpleVector, either directly from the command line or from another function, will start a browser session at the start of the call to asSimpleVector, in step-through debugging mode.
> names(setVNames(x, letters[1:4]))
debugging in: asSimpleVector(x, "numeric")
debug: {
    if (!(mode %in% c("logical", "integer", "numeric", "double",
        "complex", "character")))
        stop("invalid mode ", mode)
    Dim = dim(x)
    nDim = length(Dim)
    Names = names(x)
    if (nDim > 0)
        DimNames = dimnames(x)
    x = as.vector(x, mode)
    names(x) = Names
    if (nDim > 0) {
        dim(x) = Dim
        dimnames(x) = DimNames
    }
    x
}
Browse[1]> where
where 1: asSimpleVector(x, "numeric")
where 2: setVNames(x, letters[1:4])
Browse[1]>
debug: if (!(mode %in% c("logical", "integer", "numeric", "double",
    "complex", "character"))) stop("invalid mode ", mode)
Browse[1]> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4
attr(,"names")
[1] "a" "b" "c" "d"

As we suspected, on entry the parameter x has its names attribute set, so the error must be somewhere inside this function. We continue stepping through the function, examining the value of x as we go.
Browse[1]>
debug: Dim = dim(x)
Browse[1]>
debug: nDim = length(Dim)
Browse[1]> Dim
[1] 2 2
Browse[1]>
debug: Names = names(x)
Browse[1]> nDim
[1] 2
Browse[1]>
debug: if (nDim > 0) DimNames = dimnames(x)
Browse[1]> Names
[1] "a" "b" "c" "d"
Browse[1]>
debug: x = as.vector(x, mode)
Browse[1]>
debug: names(x) = Names
Browse[1]> x
[1] 1 2 3 4
Browse[1]>
debug: if (nDim > 0) {
    dim(x) = Dim
    dimnames(x) = DimNames
}
Browse[1]> x
a b c d
1 2 3 4
So far, the value of x has been correctly restored.

Browse[1]>
debug: dim(x) = Dim
Browse[1]>
debug: dimnames(x) = DimNames
Browse[1]> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4
However, after setting the dimension, the names attribute gets removed. Now we know where the error is: we should set the names attribute after setting the dimension and the dimnames. We first go to the end of the function.

Browse[1]>
debug: dimnames(x) = DimNames
Browse[1]> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4
Browse[1]>
debug: x

Then we verify that setting the names does not disturb the dimension, and quit from the browser.

Browse[1]> names(x) = Names
Browse[1]> x
     [,1] [,2]
[1,]    1    3
[2,]    2    4
attr(,"names")
[1] "a" "b" "c" "d"
Browse[1]> Q

After finishing debugging, we undebug asSimpleVector, so that the debugger will no longer be called on entry to it.

> undebug(asSimpleVector)

There is no easy way to find out which functions are currently being debugged.
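A single function can, however, be queried; in more recent versions of R, the isdebugged function reports whether the debugging flag is currently set on a given function.

> isdebugged(asSimpleVector)
[1] FALSE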
9.3.5 The trace function
The trace function provides all the functionality of the debug function, and it can do some other useful things as well. First of all, it can be used simply to report all calls to a particular function as they are entered and exited.

> trace(asSimpleVector)
> x = list(1:3, 4:5)
> for (i in seq(along = x)) {
+     x[[i]] = asSimpleVector(x[[i]], "complex")
+ }
trace: asSimpleVector(x[[i]], "complex")
trace: asSimpleVector(x[[i]], "complex")
> untrace(asSimpleVector)

Each time the traced function is called, a line starting with trace: and showing the call is printed. Here asSimpleVector was called twice inside the for loop, which is why we see two lines starting with trace:. A call to untrace stops the tracing.

Secondly, trace can be used like debug, but to start the browsing at a particular point inside the function. Suppose we want to start the browser just before we enter the if block that sets the dimension and the dimnames. We can use the function printWithNumbers to print asSimpleVector with appropriate line numbers, which index the places in the function where break points can be set. The function is printed in the code chunk below; a break point can be set at any line
that has a number. When set, the tracer function will be evaluated just prior to the evaluation of the specified line number.
> printWithNumbers(asSimpleVector)
function (x, mode = "logical")
1: {
2:     if (!(mode %in% c("logical", "integer", "numeric", "double",
           "complex", "character")))
           stop("invalid mode ", mode)
3:     Dim
...

> untrace(asSimpleVector)
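The trace call that sets such a break point did not survive in the listing above; a sketch of its usual form, assuming the if block of interest is the step numbered 7, would be:

> trace(asSimpleVector, tracer = browser, at = 7)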
Finally, the trace function can also be used to debug calls to a particular method of an S4 generic function (Section 3.7). To demonstrate this, we turn the subsetAsCharacter function into an S4 generic function.
> setGeneric("subsetAsCharacter")
[1] "subsetAsCharacter"

In addition to creating a generic from the existing subsetAsCharacter function, this command also sets the original function as the default method. We define an additional method for character vectors and simple subscripts.
> setMethod("subsetAsCharacter", signature(x = "character",
+     i = "missing", j = "missing"),
+     function(x, i, j) x)
[1] "subsetAsCharacter"
Now we will use trace to debug the subsetAsCharacter generic only when x is of class "numeric".
> trace("subsetAsCharacter", tracer = browser,
+     signature = c(x = "numeric"))
[1] "subsetAsCharacter"
Note that, in this particular case, there was no specific subsetAsCharacter method with this signature, so the tracing will occur for the default method, but only when the signature matches the one given to trace.
> subsetAsCharacter(1.5, 1:2)
Tracing subsetAsCharacter(1.5, 1:2) on entry
Called from: subsetAsCharacter(1.5, 1:2)
Browse[1]> ls()
[1] "i" "j" "x"
Browse[1]> x
[1] 1.5
Browse[1]> c
[1] "1.5" NA
> subsetAsCharacter(1 + (0+0i), 1:2)
[1] "1+0i" NA
> subsetAsCharacter("x")
[1] "x"
> untrace("subsetAsCharacter")
9.4 Debugging C and other foreign code
Debugging compiled code is quite complex and generally requires some knowledge of programming, of how compiled programs are evaluated, and of other rather esoteric details. In this section we presume a fairly high level of knowledge, and recommend that if you have not used any of the tools described here, or similar tools, you consider consulting a local expert for advice and guidance. URLs are given for the different software discussed, and readers are referred to those locations for complete documentation of the tools presented. The R Extensions Manual also provides more detailed examples and discussions that readers may want to consult.

The most widely used debugger for compiled code is gdb (see http://www.gnu.org/software/gdb). It can be used on Windows (provided you have installed the tools for building and compiling your own version of R and R packages), Unix, Linux and OS X. The ddd (http://www.gnu.org/software/ddd/) graphical interface to gdb can be quite helpful for users not familiar with gdb.

In order to make use of gdb, you must compile R, and all compiled code that you want to inspect, with the appropriate compiler flags. The compiler flags can be set in the file R_HOME/config.site. We suggest turning off all optimization (do not use -O2 or similar) and using the -g flag. While gdb is supposed to be able to deal with optimized compiled code, there are often small glitches, and using no optimization removes this potential source of confusion. If you change these flags, you will need to remake all of R, typically by issuing the make clean directive, followed by make. Any installed packages with source code will need to have that source recompiled with the new flags if you intend to debug them.

R can be invoked with the ddd debugger by using the syntax R -d ddd or, equivalently, R --debugger=ddd. Similar syntax is used for other debuggers, and options can be passed through to the debugger using the --debugger-args option.

Unix-like systems can make use of valgrind (http://valgrind.org) to check for memory leaks and other memory problems. The code given below runs valgrind while evaluating the code in the file someCode.R. Valgrind can make your code run quite slowly, so be patient when using it.
R -d "valgrind --tool=memcheck --leak-check=yes" --vanilla < someCode.R
9.5 Profiling R code
There are often situations where code written in R takes rather a long time to run. In very many cases, the problem can be overcome simply by making use of more appropriate tools in R, by rearranging the code so that the computations are more efficient, or by vectorizing calculations. In some cases, when even after all efforts have been expended the code is still too slow to be viable, rewriting parts of the code in C or some other foreign language (see Chapter 6 for more complete details) may be appropriate. In all cases, however, it is essential that a correct diagnosis of the problem be made first; that is, it is essential to determine which computations are slow and in need of improvement. This is especially important when considering writing code in a compiled language, since the diagnosis can help to greatly reduce the amount of foreign code that is needed, and in some cases can help to identify a particular programming construct that might valuably be added to R itself.

Another tool that is often used is timing comparison: two different implementations are run, and the time taken for each is recorded and reported. While this can be valuable, some caution in interpreting the results is needed. Since R carries out its own memory management, it is possible that one version will incur all of the costs of memory allocation and hence look much slower.

The functions Rprof and summaryRprof can be used to profile R commands and to provide some insight into where the time is being spent. In the next code chunk, we make use of Rprof to profile the computation of the median absolute deviation about the median (or MAD) of a large set of simulated data. The first call to Rprof initiates profiling. Rprof takes three optional arguments: the name of the file to print the results to, a logical argument indicating whether to overwrite or append to the existing file, and the sampling interval, in seconds. Setting the sampling interval too small, below what the operating system supports, will lead to peculiar outputs. We make use of the default settings in our example.

> Rprof()
> mad(runif(1e+07))
[1] 0.371
> Rprof(NULL)

The second call to Rprof, with the argument NULL, turns profiling off. The contents of the file Rprof.out are the active calls, sampled every interval seconds. These can be summarized by a call to summaryRprof, which tabulates them and reports on the time spent in different functions.
> summaryRprof()
$by.self
                 self.time self.pct total.time total.pct
"sort.int"            0.20     35.7       0.24      42.9
"is.na"               0.14     25.0       0.14      25.0
"runif"               0.10     17.9       0.10      17.9
"-"                   0.06     10.7       0.06      10.7
"abs"                 0.04      7.1       0.04       7.1
"list"                0.02      3.6       0.02       3.6
"<Anonymous>"         0.00      0.0       0.56     100.0
"Sweave"              0.00      0.0       0.56     100.0
"doTryCatch"          0.00      0.0       0.56     100.0
"evalFunc"            0.00      0.0       0.56     100.0
"try"                 0.00      0.0       0.56     100.0
"tryCatch"            0.00      0.0       0.56     100.0
"tryCatchList"        0.00      0.0       0.56     100.0
"tryCatchOne"         0.00      0.0       0.56     100.0
"eval.with.vis"       0.00      0.0       0.54      96.4
"mad"                 0.00      0.0       0.54      96.4
"median"              0.00      0.0       0.44      78.6
"median.default"      0.00      0.0       0.34      60.7
"mean"                0.00      0.0       0.24      42.9
"sort"                0.00      0.0       0.24      42.9
"sort.default"        0.00      0.0       0.24      42.9

$by.total
                 total.time total.pct self.time self.pct
"<Anonymous>"          0.56     100.0      0.00       0.0
"Sweave"               0.56     100.0      0.00       0.0
"doTryCatch"           0.56     100.0      0.00       0.0
"evalFunc"             0.56     100.0      0.00       0.0
"try"                  0.56     100.0      0.00       0.0
"tryCatch"             0.56     100.0      0.00       0.0
"tryCatchList"         0.56     100.0      0.00       0.0
"tryCatchOne"          0.56     100.0      0.00       0.0
"eval.with.vis"        0.54      96.4      0.00       0.0
"mad"                  0.54      96.4      0.00       0.0
"median"               0.44      78.6      0.00       0.0
"median.default"       0.34      60.7      0.00       0.0
"sort.int"             0.24      42.9      0.20      35.7
"mean"                 0.24      42.9      0.00       0.0
"sort"                 0.24      42.9      0.00       0.0
"sort.default"         0.24      42.9      0.00       0.0
"is.na"                0.14      25.0      0.14      25.0
"runif"                0.10      17.9      0.10      17.9
"-"                    0.06      10.7      0.06      10.7
"abs"                  0.04       7.1      0.04       7.1
"list"                 0.02       3.6      0.02       3.6
$sampling.time
[1] 0.56

The output has three components: two arrays, the first sorted by self-time and the second sorted by total-time, and the total time spent in the execution of the commands. Given the command, it is no surprise that all of the total-time was spent in the function mad. However, since the self-time for that function is zero, we can conclude that the computational effort was expended elsewhere. Looking at self-time, we see that the bulk of the time was spent in sort.int, runif and is.na. And, since we know that there are no missing values, it does seem that some savings are available, as there is no need to run the is.na function. Although one is able to control checking for NAs in the call to mad, no such fine-grained control is possible with sort. Hence, you must either live with the inefficiency or write your own version of sort that allows the user to turn off checking for missing values.
9.5.1 Timings
The basic tool for timing is system.time. This function returns a vector of length five, of which only three values are normally printed: the user CPU time, the system CPU time, and the elapsed time. Times are reported in seconds; the resolution is system specific, but is typically to 1/100th of a second. In the output shown below, the same R code was run three times, in sequence, in a pristine R session. As you can see, there is about a 5% difference between the system time for the first evaluation and those of the subsequent evaluations. So, when comparing the execution time of different methods, it is prudent to change the order, and to repeat the calculations in different ways, to ensure that the observed effects are real and important.

> system.time(mad(runif(10000000)))
   user  system elapsed
  1.821   0.663   2.488
> system.time(mad(runif(10000000)))
   user  system elapsed
  1.817   0.635   2.455
> system.time(mad(runif(10000000)))
   user  system elapsed
  2.003   0.632   2.638
The optional argument gcFirst is TRUE by default and ensures that R’s garbage collector is run prior to the evaluation of the supplied expression. By running the garbage collector first, it is likely that more consistent timings will be produced.
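As a sketch of the kind of comparison meant here, one might time a hand-rolled MAD against the built-in version, alternating the order across repetitions; the constant 1.4826 reproduces mad's default scaling, and the timings themselves (not shown) will vary by system.

> madByHand = function(x) median(abs(x - median(x))) * 1.4826
> for (i in 1:2) {
+     print(system.time(mad(runif(1e+07))))
+     print(system.time(madByHand(runif(1e+07))))
+ }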
9.6 Managing memory
There are some tools available in R to monitor memory usage. In R, memory is divided into two separate components: memory for atomic vectors (e.g., integers, characters) and memory for language elements. The language elements are the SEXPs described in Chapter 6, while vector storage is contiguous storage for homogeneous elements. Vector storage is further divided into two types: small vectors, currently less than 128 bytes, which are allocated by R (which obtains a large chunk of memory and then parcels it out as needed), and larger vectors, for which memory is obtained directly from the operating system.

R attempts to manage memory effectively and has a generational garbage collector. Explicit details on the functioning of the garbage collector are given in the R Internals manual (R Development Core Team, 2007d). During normal use, the garbage collector runs automatically whenever storage requests exceed the current free memory available. A user can trigger garbage collection with the gc command, which reports the number of Ncells (SEXPs) used and the number of Vcells (vector storage) used, as well as a few other statistics. The function gcinfo can be used to have information printed every time the garbage collector runs.

> gc()
         used (Mb) gc trigger (Mb) max used  (Mb)
Ncells 318611  8.6     597831   16   407500  10.9
Vcells 165564  1.3   29734436  227 35230586 268.8
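The gcinfo switch mentioned above can be exercised as follows; this is a sketch, since the format of the per-collection report varies across R versions, and the output is omitted here.

> gcflag = gcinfo(TRUE)   # report on each collection from now on
> x = runif(1e+07)        # a large allocation, likely to trigger a collection
> gcinfo(gcflag)          # restore the previous setting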
One can also find out how many of the Ncells are allocated to each of the different types of SEXPs using memory.profile. In the example below, we obtain the output of memory.profile and sort it from largest to smallest. The total should be approximately equal to the value for Ncells used reported by gc, but minor discrepancies are likely, reflecting the creation of new objects or the effects of garbage collection.
> ss = memory.profile()
> sort(ss, decreasing = TRUE)
   pairlist    language   character        char      symbol
     176473       48112       41736        9838        7242
    integer        list     logical     promise     closure
       5974        5336        5124        4934        4579
     double     builtin environment          S4 externalptr
       3045        2035        1675        1654         513
    special     weakref     complex  expression        NULL
        224         121           3           2           1
        ...         raw         any    bytecode
          1           1           0           0
> sum(ss)
[1] 318623
9.6.1 Memory profiling
Memory profiling has an adverse effect on performance, even when it is not being used, and hence is implemented as a compile-time option: to use memory profiling, R must have been compiled with it enabled. Readers who want to follow the examples in this section will therefore have to ensure that their version of R was built with memory profiling support.

There are three different strategies that can be used to profile memory usage. First, you can incorporate memory usage information in the output created by Rprof by setting the argument memory.profiling to TRUE; in that case, information about total memory usage is reported for each sampling time. The information can then be summarized in different ways using summaryRprof, whose memory argument has four options: none (the default) excludes memory usage information, while both requests that memory usage information be printed with the other profiling information. Two more advanced options are tseries and stats, which require that a second argument, index, also be specified. The index argument specifies how to summarize the calls on the stack trace.

In the code below, we examine memory usage from performing RMA on the Dilution data. First we load the
necessary packages, then set up profiling and run the code we want to profile.

> library("affy")
> library("affydata")
> data(Dilution)
> Rprof(file = "profRMA", memory.profiling = TRUE)
> r1 = rma(Dilution)
Background correcting
Normalizing
Calculating Expression
> Rprof(NULL)

In the next code segment, we read in the profiling data and display selected parts of it. By setting memory to "tseries", the return value is a data frame with one row for each sampling time, and values that help track the usage of vector storage (both large and small), language elements (nodes), calls to duplicate, and the call stack at the time the data were sampled.

> pS = summaryRprof(file = "profRMA", memory = "tseries")
> names(pS)
[1] "vsize.small"  "vsize.large"  "nodes"
[4] "duplications" "stack:2"
Users can then examine these statistics to help identify potential inefficiencies in their code. For example, we plot the number of calls to duplicate over time (Figure 9.1). What is quite remarkable in this plot is that there are a few spikes of calls to duplicate that run into the thousands. While such duplication may be necessary, it is likely that it is not; tracking down its source and establishing whether it is necessary could greatly speed up the processing time and possibly decrease memory usage.
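A plot along the lines of Figure 9.1 can be produced directly from the time series; this sketch assumes, as appears to be the case for the "tseries" summary, that the sampling times are stored as the row names of the returned data frame.

> plot(as.numeric(rownames(pS)), pS$duplications, type = "h",
+     xlab = "Time", ylab = "Number of calls to duplicate")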
9.6.2 Profiling memory allocation
Another mechanism for memory profiling is provided by the Rprofmem function, which collects and prints information on the call stack whenever a large (as determined by the user) object is allocated. The argument threshold sets the size threshold, in bytes, above which an allocation is recorded. This tool can help to identify inefficiencies that arise from copying large objects, without the user being overwhelmed by the total number of copies.
FIGURE 9.1: Time series view of calls to duplicate during the processing of Affymetrix data. [The figure plots the number of calls to duplicate, from 0 to 80000, against time, from 0.0 to 0.8.]
As observed in Figure 9.1, there are very many calls to duplicate during the evaluation of the rma function, though it is not clear whether the duplicated objects are large or small. In the next code segment, we request that allocations of objects larger than 1e+05 bytes (100,000 bytes) be recorded. Once the computations are completed, we view the first five lines of the output file. The functions being called suggest that a lot of the allocation is being performed to retrieve the probe names. In the example, we needed to trim the output, using strtrim, so that it fits on the page; readers would not normally do that.

> Rprofmem(file = "rma2.out", threshold = 1e+05)
> s2 = rma(Dilution)
Background correcting
Normalizing
Calculating Expression
> Rprofmem(NULL)
> noquote(readLines("rma2.out", n = 5))
[1] new page:".deparseOpts" "deparse" "eval" "match.arg" ".local" "indexProbes" "indexProbes" ".local" "pmindex" "pmindex" ".local" "probeNames" "probeNames" "rma" "eval.with.vis" "doTryCatch" "tryCatchOne" "tryCatchList" "tryCatch" "try" "evalFunc" "<Anonymous>" "Sweave"
[2] new page:"switch" "<Anonymous>" "data" "cleancdfname" "cdfFromLibPath" "switch" "getCdfInfo" ".local" "indexProbes" "indexProbes" ".local" "pmindex" "pmindex" ".local" "probeNames" "probeNames" "rma" "eval.with.vis" "doTryCatch" "tryCatchOne" "tryCatchList" "tryCatch" "try" "evalFunc" "<Anonymous>" "Sweave"
[3] new page:"match" "cleancdfname" "cdfFromLibPath" "switch" "getCdfInfo" ".local" "indexProbes" "indexProbes" ".local" "pmindex" "pmindex" ".local" "probeNames" "probeNames" "rma" "eval.with.vis" "doTryCatch" "tryCatchOne" "tryCatchList" "tryCatch" "try" "evalFunc" "<Anonymous>" "Sweave"
[4] new page:"file.info" ".find.package" "cdfFromLibPath" "switch" "getCdfInfo" ".local" "indexProbes" "indexProbes" ".local" "pmindex" "pmindex" ".local" "probeNames" "probeNames" "rma" "eval.with.vis" "doTryCatch" "tryCatchOne" "tryCatchList" "tryCatch" "try" "evalFunc" "<Anonymous>" "Sweave"
[5] new page:"as.vector" ".local" "indexProbes" "indexProbes" ".local" "pmindex" "pmindex" ".local" "probeNames" "probeNames" "rma" "eval.with.vis" "doTryCatch" "tryCatchOne" "tryCatchList" "tryCatch" "try" "evalFunc" "<Anonymous>" "Sweave"
> length(readLines("rma2.out"))
[1] 6239
Exercise 9.1
Write a function to parse the output of Rprofmem and determine the total amount of memory allocated. Use the names from the call stack to assign the memory allocations to particular functions.
9.6.3 Tracking a single object
The third mechanism provided is to trace a single object and determine when and where it is duplicated. The function tracemem is called with the object to be traced; subsequently, whenever that object (or a natural descendant of it) is duplicated, a message is printed. In the code below, we first trace duplication of Dilution in the call to rma and find that there is none; and there should be none, so that is good. When subsetting an instance of the ExpressionSet class, however, it seems that around four copies are made where none should be, so there are definitely some inefficiencies that could be fixed.
> tracemem(Dilution)
[1] "<0x...>"
> s3 <- rma(Dilution)
Background correcting
Normalizing
Calculating Expression
> tracemem(s3)
[1] "<0x6367894>"
> s2 = s3[1:100, ]
tracemem[0x6367894 -> 0x5759994]: [
tracemem[0x5759994 -> 0x6517df0]: [
tracemem[0x6517df0 -> 0x55de304]: featureData
tracemem[0x55de304 -> 0x560156c]: