BIOMARKERS IN DRUG DEVELOPMENT
A Handbook of Practice, Application, and Strategy

Edited by

MICHAEL R. BLEAVINS, Ph.D., DABT
Michigan Technology and Research Institute
Ann Arbor, Michigan

CLAUDIO CARINI, M.D., Ph.D., FRCPath
Fresenius Biotech of North America
Waltham, Massachusetts

MALLÉ JURIMA-ROMET, Ph.D.
MDS Pharma Services
Montreal, Quebec, Canada

RAMIN RAHBARI, M.S.
Innovative Scientific Management
New York, New York

A JOHN WILEY & SONS, INC., PUBLICATION
Copyright © 2010 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Biomarkers in drug development : a handbook of practice, application, and strategy / [edited by] Michael R. Bleavins . . . [et al.].
p. ; cm.
Includes index.
ISBN 978-0-470-16927-8 (cloth)
1. Biochemical markers. 2. Drug development. I. Bleavins, Michael R.
[DNLM: 1. Biomarkers, Pharmacological. 2. Drug Design. 3. Drug Discovery. QV 744 B6154 2009]
R853.B54B5645 2009
615'.10724—dc22
2009021627

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1
CONTENTS
CONTRIBUTORS

PREFACE

PART I  BIOMARKERS AND THEIR ROLE IN DRUG DEVELOPMENT

1. Biomarkers Are Not New
   Ian Dews

2. Biomarkers: Facing the Challenges at the Crossroads of Research and Health Care
   Gregory J. Downing

3. Enabling Go/No Go Decisions
   J. Fred Pritchard and Mallé Jurima-Romet

PART II  IDENTIFYING NEW BIOMARKERS: TECHNOLOGY APPROACHES

4. Imaging as a Localized Biomarker: Opportunities and Challenges
   Jonathan B. Moody, Philip S. Murphy, and Edward P. Ficaro

5. Protein Biomarker Discovery Using Mass Spectrometry–Based Proteomics
   Joanna M. Hunter and Daniel Chelsky

6. Quantitative Multiplexed Patterning of Immune-Related Biomarkers
   Dominic Eisinger, Ralph McDade, and Thomas Joos

7. Gene Expression Profiles as Preclinical and Clinical Cancer Biomarkers of Prognosis, Drug Response, and Drug Toxicity
   Jason A. Sprowl and Amadeo M. Parissenti

8. Use of High-Throughput Proteomic Arrays for the Discovery of Disease-Associated Molecules
   Douglas M. Molina, W. John W. Morrow, and Xiaowu Liang

PART III  CHARACTERIZATION AND VALIDATION

9. Characterization and Validation of Biomarkers in Drug Development: Regulatory Perspective
   Federico Goodsaid

10. Fit-for-Purpose Method Validation and Assays for Biomarker Characterization to Support Drug Development
    Jean W. Lee, Yuling Wu, and Jin Wang

11. Molecular Biomarkers from a Diagnostic Perspective
    Klaus Lindpaintner

12. Strategies for the Co-Development of Drugs and Diagnostics: FDA Perspective on Diagnostics Regulation
    Francis Kalush and Steven Gutman

13. Importance of Statistics in the Qualification and Application of Biomarkers
    Mary Zacour

PART IV  BIOMARKERS IN DISCOVERY AND PRECLINICAL SAFETY

14. Qualification of Safety Biomarkers for Application to Early Drug Development
    William B. Mattes and Frank D. Sistare

15. Development of Serum Calcium and Phosphorus as Clinical Biomarkers for Drug-Induced Systemic Mineralization: Case Study with a MEK Inhibitor
    Alan P. Brown

16. Biomarkers for the Immunogenicity of Therapeutic Proteins and Its Clinical Consequences
    Claire Cornips and Huub Schellekens

17. New Markers of Kidney Injury
    Sven A. Beushausen

PART V  TRANSLATING FROM PRECLINICAL RESULTS TO CLINICAL AND BACK

18. Translational Medicine—A Paradigm Shift in Modern Drug Discovery and Development: The Role of Biomarkers
    Giora Z. Feuerstein, Salvatore Alesci, Frank L. Walsh, J. Lynn Rutkowski, and Robert R. Ruffolo, Jr.

19. Clinical Validation and Biomarker Translation
    David Lin, Andreas Scherer, Raymond Ng, Robert Balshaw, Shawna Flynn, Paul Keown, Robert McMaster, and Bruce McManus

20. Predicting and Assessing an Inflammatory Disease and Its Complications: Example from Rheumatoid Arthritis
    Christina Trollmo and Lars Klareskog

21. Pharmacokinetic and Pharmacodynamic Biomarker Correlations
    J.F. Marier and Keith Gallicano

22. Validating In Vitro Toxicity Biomarkers Against Clinical Endpoints
    Calvert Louden and Ruth A. Roberts

PART VI  BIOMARKERS IN CLINICAL TRIALS

23. Opportunities and Pitfalls Associated with Early Utilization of Biomarkers: Case Study in Anticoagulant Development
    Kay A. Criswell

24. Integrating Molecular Testing Into Clinical Applications
    Anthony A. Killeen

25. Biomarkers for Lysosomal Storage Disorders
    Ari Zimran, Candida Fratazzi, and Deborah Elstein

26. Value Chain in the Development of Biomarkers for Disease Targets
    Charles W. Richard, III, Arthur O. Tzianabos, and Whaijen Soo

PART VII  LESSONS LEARNED: PRACTICAL ASPECTS OF BIOMARKER IMPLEMENTATION

27. Biomarkers in Pharmaceutical Development: The Essential Role of Project Management and Teamwork
    Lena King, Mallé Jurima-Romet, and Nita Ichhpurani

28. Integrating Academic Laboratories Into Pharmaceutical Development
    Peter A. Ward and Kent J. Johnson

29. Funding Biomarker Research and Development Through the Small Business Innovative Research Program
    James Varani

30. Novel and Traditional Nonclinical Biomarker Utilization in the Estimation of Pharmaceutical Therapeutic Indices
    Bruce D. Car, Brian Gemzik, and William R. Foster

31. Anti-Unicorn Principle: Appropriate Biomarkers Don’t Need to Be Rare or Hard to Find
    Michael R. Bleavins and Ramin Rahbari

32. Biomarker Patent Strategies: Opportunities and Risks
    Cynthia M. Bott and Eric J. Baude

PART VIII  WHERE ARE WE HEADING AND WHAT DO WE REALLY NEED?

33. IT Supporting Biomarker-Enabled Drug Development
    Michael Hehenberger

34. Redefining Disease and Pharmaceutical Targets Through Molecular Definitions and Personalized Medicine
    Craig P. Webb, John F. Thompson, and Bruce H. Littman

35. Ethics of Biomarkers: The Borders of Investigative Research, Informed Consent, and Patient Protection
    Heather Walmsley, Michael Burgess, Jacquelyn Brinkman, Richard Hegele, Janet Wilson-McManus, and Bruce McManus

36. Pathodynamics: Improving Biomarker Selection by Getting More Information from Changes Over Time
    Donald C. Trost

37. Optimizing the Use of Biomarkers for Drug Development: A Clinician’s Perspective
    Alberto Gimona

38. Nanotechnology-Based Biomarker Detection
    Joshua Reineke

INDEX
CONTRIBUTORS
Salvatore Alesci, M.D., Ph.D., Wyeth Research, Collegeville, Pennsylvania
Robert Balshaw, Ph.D., Syreon Corporation, Vancouver, British Columbia, Canada
Eric J. Baude, Ph.D., Brinks Hofer Gilson & Lione, P.C., Ann Arbor, Michigan
Sven A. Beushausen, Ph.D., Pfizer Global Research and Development, Chesterfield, Missouri
Michael R. Bleavins, Ph.D., DABT, Michigan Technology and Research Institute, Ann Arbor, Michigan
Cynthia M. Bott, Ph.D., Honigman Miller Schwartz and Cohn LLP, Ann Arbor, Michigan
Jacquelyn Brinkman, M.Sc., University of British Columbia, Vancouver, British Columbia, Canada
Alan P. Brown, Ph.D., DABT, Pfizer Global Research and Development, Ann Arbor, Michigan
Michael Burgess, Ph.D., University of British Columbia, Vancouver, British Columbia, Canada
Bruce D. Car, B.V.Sc., Ph.D., Bristol-Myers Squibb Co., Princeton, New Jersey
Daniel Chelsky, Ph.D., Caprion Proteomics, Inc., Montreal, Quebec, Canada
Claire Cornips, B.Sc., Utrecht University, Utrecht, The Netherlands
Kay A. Criswell, Ph.D., Pfizer Global Research and Development, Groton, Connecticut
Ian Dews, MRCP, FFPM, Envestia Ltd., Thame, Oxfordshire, UK
Gregory J. Downing, D.O., Ph.D., U.S. Department of Health and Human Services, Washington, DC
Dominic Eisinger, Ph.D., Rules Based Medicine, Inc., Austin, Texas
Deborah Elstein, Ph.D., Gaucher Clinic, Shaare Zedek Medical Center, Jerusalem, Israel
Giora Z. Feuerstein, M.D., Wyeth Research, Collegeville, Pennsylvania
Edward P. Ficaro, Ph.D., INVIA Medical Imaging Solutions, Ann Arbor, Michigan
Shawna Flynn, B.Sc., Syreon Corporation, Vancouver, British Columbia, Canada
William R. Foster, Ph.D., Bristol-Myers Squibb Co., Princeton, New Jersey
Candida Fratazzi, M.D., Altus Pharmaceuticals, Inc., Waltham, Massachusetts
Keith Gallicano, Ph.D., Watson Laboratories, Corona, California
Brian Gemzik, Ph.D., Bristol-Myers Squibb Co., Princeton, New Jersey
Alberto Gimona, M.D., Merck Serono International, Geneva, Switzerland
Federico Goodsaid, Ph.D., Center for Drug Evaluation and Research, U.S. Food and Drug Administration, Silver Spring, Maryland
Steven Gutman, M.D., M.B.A., University of Central Florida, Orlando, Florida; formerly with the U.S. Food and Drug Administration, Rockville, Maryland
Richard Hegele, M.D., Ph.D., University of British Columbia, Vancouver, British Columbia, Canada
Michael Hehenberger, Ph.D., IBM Healthcare & Life Sciences, Somers, New York
Joanna M. Hunter, Ph.D., Caprion Proteomics, Inc., Montreal, Quebec, Canada
Nita Ichhpurani, B.A., PMP, MDS Pharma Services, Mississauga, Ontario, Canada
Kent J. Johnson, M.D., The University of Michigan Medical School, Ann Arbor, Michigan
Thomas Joos, Ph.D., Rules Based Medicine, Inc., Austin, Texas
Mallé Jurima-Romet, Ph.D., MDS Pharma Services, Montreal, Quebec, Canada
Francis Kalush, Ph.D., U.S. Food and Drug Administration, Silver Spring, Maryland; formerly with USFDA, Rockville, Maryland
Paul Keown, M.D., D.Sc., MBA, University of British Columbia, Vancouver, British Columbia, Canada
Anthony A. Killeen, M.D., Ph.D., University of Minnesota, Minneapolis, Minnesota
Lena King, Ph.D., DABT, CanBioPharma Consulting, Inc., Guelph, Ontario, Canada
Lars Klareskog, M.D., Ph.D., Karolinska Institute, Stockholm, Sweden
Jean W. Lee, Ph.D., Amgen, Inc., Thousand Oaks, California
Xiaowu Liang, Ph.D., Antigen Discovery, Inc., Irvine, California
David Lin, B.MLSc., University of British Columbia, Vancouver, British Columbia, Canada
Klaus Lindpaintner, M.D., M.P.H., F. Hoffmann–La Roche AG, Basel, Switzerland
Bruce H. Littman, M.D., Translational Medicine Associates, Stonington, Connecticut
Calvert Louden, Ph.D., Johnson & Johnson Pharmaceuticals, Raritan, New Jersey
J.F. Marier, Ph.D., FCP, Pharsight, A Certara Company, Montreal, Quebec, Canada
William B. Mattes, Ph.D., DABT, The Critical Path Institute, Rockville, Maryland
Ralph McDade, Ph.D., Rules Based Medicine, Inc., Austin, Texas
Bruce McManus, M.D., Ph.D., University of British Columbia, Vancouver, British Columbia, Canada
Robert McMaster, D.Phil., University of British Columbia, Vancouver, British Columbia, Canada
Douglas M. Molina, Ph.D., Antigen Discovery, Inc., Irvine, California
Jonathan B. Moody, Ph.D., INVIA Medical Imaging Solutions, Ann Arbor, Michigan
W. John W. Morrow, Ph.D., Antigen Discovery, Inc., Irvine, California
Philip S. Murphy, Ph.D., GlaxoSmithKline Research and Development, Uxbridge, Middlesex, UK
Raymond Ng, Ph.D., University of British Columbia, Vancouver, British Columbia, Canada
Amadeo M. Parissenti, Ph.D., Laurentian University, Sudbury, Ontario, Canada
J. Fred Pritchard, Ph.D., MDS Pharma Services, Raleigh, North Carolina
Ramin Rahbari, M.S., Innovative Scientific Management, New York, New York
Joshua Reineke, Ph.D., Wayne State University, Detroit, Michigan
Charles W. Richard, III, M.D., Ph.D., Shire Human Genetic Therapies, Cambridge, Massachusetts
Ruth A. Roberts, Ph.D., AstraZeneca Research and Development, Macclesfield, UK
Robert R. Ruffolo, Jr., Ph.D., Wyeth Research, Collegeville, Pennsylvania
J. Lynn Rutkowski, Ph.D., Wyeth Research, Collegeville, Pennsylvania
Huub Schellekens, M.D., Utrecht University, Utrecht, The Netherlands
Andreas Scherer, Ph.D., Spheromics, Kontiolahti, Finland
Frank D. Sistare, Ph.D., Merck Research Laboratories, West Point, Pennsylvania
Whaijen Soo, M.D., Ph.D., Shire Human Genetic Therapies, Cambridge, Massachusetts
Jason A. Sprowl, Ph.D., Laurentian University, Sudbury, Ontario, Canada
John F. Thompson, M.D., Helicos BioSciences, Cambridge, Massachusetts
Christina Trollmo, Ph.D., Karolinska Institute and Roche AB Sweden, Stockholm, Sweden
Donald C. Trost, M.D., Ph.D., Analytic Dynamics, Niantic, Connecticut
Arthur O. Tzianabos, Ph.D., Shire Human Genetic Therapies, Cambridge, Massachusetts
James Varani, Ph.D., The University of Michigan Medical School, Ann Arbor, Michigan
Heather Walmsley, M.A., Lancaster University, Bailrigg, Lancaster, UK
Frank L. Walsh, Ph.D., Wyeth Research, Collegeville, Pennsylvania
Jin Wang, M.S., Amgen, Inc., Thousand Oaks, California
Peter A. Ward, M.D., The University of Michigan Medical School, Ann Arbor, Michigan
Craig P. Webb, Ph.D., Van Andel Research Institute, Grand Rapids, Michigan
Janet Wilson-McManus, M.T., B.Sc., University of British Columbia, Vancouver, British Columbia, Canada
Yuling Wu, Ph.D., Amgen, Inc., Thousand Oaks, California
Mary Zacour, Ph.D., BioZac Consulting, Montreal, Quebec, Canada
Ari Zimran, M.D., Gaucher Clinic, Shaare Zedek Medical Center, Jerusalem, Israel
PREFACE
The impact of biomarker technologies and strategies in pharmaceutical development is still emerging but is already proving to be significant. Biomarker strategy forms the basis for translational medicine and for the current industry and regulatory focus to improve success rates in drug development. The pharmaceutical industry faces greater challenges today than at any time in its history: an ever-increasing expectation of safer, more efficacious, and better understood drugs in the face of escalating costs of drug development and increasing duration of clinical development times; high rates of compound failure in phase II and III clinical trials; remaining blockbuster drugs coming off patent; and many novel but unproven targets emerging from discovery. These factors have pressured pharmaceutical research divisions to look for ways to reduce development costs, make better decisions earlier, reassess traditional testing strategies, and implement new technologies to improve the drug discovery and development processes.

There is consensus that biomarkers are valuable drug development tools that enhance target validation, thereby helping us better understand the mechanisms of action and enabling earlier identification of compounds with the highest potential for efficacy in humans. These important methods are also essential for eliminating compounds with unacceptable safety risks, enabling the concept of “fail fast, fail early,” and providing more accurate or complete information regarding drug performance and disease progression.

At the same time that pharmaceutical scientists are focusing on biomarkers in drug discovery and development, clinical investigators and health care practitioners are using biomarkers increasingly in medical decision making and diagnosis. Similarly, regulatory agencies have recognized the value of biomarkers to guide regulatory decision making about drug safety and efficacy. The magnitude and seriousness of the U.S. Food and Drug Administration (FDA) commitment to biomarkers is reflected in its Critical Path initiative. In recent years, several pharmacogenomic tests have been incorporated into product labels and implemented in clinical practice to improve the risk–benefit ratio for patients receiving certain drug therapies (e.g., 6-mercaptopurine, irinotecan, warfarin). Agencies such as the FDA and the European Medicines Agency have taken a leadership role in encouraging biomarker innovation in the industry and collaboration to identify, evaluate, and qualify novel biomarkers. Moreover, a biomarker strategy facilitates the choice of a critical path to differentiate products in a competitive marketplace.

In recent years, the topic of biomarkers has been featured at many specialized scientific meetings and has received extensive media coverage. We, the coeditors, felt that a book approaching the topic with an emphasis on the practical aspects of biomarker identification and use, as well as their strategic implementation, was missing and was essential to improve the application of these approaches. We each have experience working with biomarkers in drug development, but we recognized that the specialized knowledge of a diverse group of experts was necessary to create the type of comprehensive book that is needed. Therefore, contributions were invited from authors who are renowned experts in their respective fields. The contributors include scientists from academia, research hospitals, biotechnology and pharmaceutical companies, contract research organizations and consulting firms, and the FDA. The result is a book that we believe will appeal broadly to pharmaceutical research scientists, clinical and academic investigators, regulatory scientists, managers, students, and all other professionals engaged in drug development who are interested in furthering their knowledge of biomarkers.

As discussed in Part I, biomarkers are not new: They have been used for hundreds of years to help physicians diagnose and treat disease. What is new is a shift from outcome biomarkers to target and mechanistic biomarkers, the availability of “omics,” imaging, and other technologies that allow collection of large amounts of data at the molecular, tissue, and whole-organism levels, and the use of data-rich biomarker information for “translational research,” from the laboratory bench to the clinic and back.

Part II is dedicated to highlighting several important technologies that affect drug discovery and development, the conduct of clinical trials, and the treatment of patients.

In Part III we’ve invited leaders from industry and regulatory agencies to discuss the qualification of biomarker assays in the fit-for-purpose process, including perspectives on the development of diagnostics. The importance of statistics cannot be overlooked, and this topic is also profiled in this part with a practical overview of concepts, common mistakes, and helpful tips to ensure credible biomarkers that can address their intended uses.

Parts IV to VI present information on concepts and examples of utilizing biomarkers in discovery, preclinical safety assessment, clinical trials, and translational medicine. Examples are drawn from a wide range of target-organ toxicities, therapeutic areas, and product types. We hope that by presenting a wide range of biomarker applications, discussed by knowledgeable and experienced scientists, readers will develop an appreciation of the scope and breadth of biomarker knowledge and find examples that will help them in their own work.

Part VII focuses on “lessons learned” and the practical aspects of implementing biomarkers in drug development programs. Many pharmaceutical companies have created translational research divisions, and increasingly, external partners, including academic and government institutions, contract research organizations, and specialty laboratories, are providing technologies and services to support biomarker programs. This is changing the traditional organizational models within industry and paving the way toward greater collaboration across sectors and even among companies within a competitive industry. Perspectives from contributing authors representing several of these different sectors are presented in this part, as well as a legal perspective on potential intellectual property issues in biomarker development.

The book concludes with Part VIII on future trends, including developments in data integration, the reality of personalized medicine, and the addressing of ethical concerns. The field of biomarkers in drug development is evolving rapidly, and this book presents a snapshot of some exciting new approaches. By utilizing the book as a source of new knowledge, or to reinforce or integrate existing knowledge, we hope that readers will benefit from a greater understanding and appreciation of the strategy and application of biomarkers in drug development and become more effective decision makers and contributors in their own organizations.

Michael R. Bleavins
Claudio Carini
Mallé Jurima-Romet
Ramin Rahbari
PART I

BIOMARKERS AND THEIR ROLE IN DRUG DEVELOPMENT

1
BIOMARKERS ARE NOT NEW

Ian Dews, MRCP, FFPM
Envestia Ltd., Thame, Oxfordshire, UK

INTRODUCTION

The word biomarker in its medical context is a little over 30 years old, having first been used by Karpetsky, Humphrey, and Levy in the April 1977 edition of the Journal of the National Cancer Institute, where they reported that the “serum RNase level … was not a biomarker either for the presence or extent of the plasma cell tumor.” Few new words can have proved so popular—a recent PubMed search lists more than 370,000 publications that use it! Part of this success can no doubt be attributed to the fact that the word gave a long-overdue name to a phenomenon that has been around at least since the seventh century b.c., when Sushruta, the “father of Ayurvedic surgery,” recorded that the urine of patients with diabetes attracted ants because of its sweetness. However, although the origins of biomarkers are indeed ancient, it is fair to point out that the pace of progress over the first 2500 years was somewhat less than frenetic.
UROSCOPY

Because of its easy availability for inspection, urine was for many centuries the focus of attention. The foundation of the “science” of uroscopy is generally attributed to Hippocrates (460–355 b.c.), who hypothesized that urine was a filtrate of the “humors,” taken from the blood and filtered through the kidneys, a reasonably accurate description. One of his more astute observations was that bubbles on the surface of the urine (now known to be due to proteinuria) were a sign of long-term kidney disease.

Galen (a.d. 129–200), the most influential of the ancient Greco-Roman physicians, sought to make uroscopy more specific but in reality added little to the subject beyond the weight of his reputation, which served to hinder further progress in this as in many other areas of medicine. Five hundred years later, Theophilus Protospatharius, another Greek writer, moved things one step nearer to the modern world when he investigated the effects of heating urine and hence established the world’s first medical laboratory test. He discovered that heating urine from patients with symptoms of kidney disease caused cloudiness (in fact, the precipitation of proteins). In the sixteenth century, Paracelsus (1493–1541) in Switzerland used vinegar to bring out the same cloudiness (acid, like heat, will precipitate proteins). Events continued to move both farther north and closer to modernity when in 1695 Frederick Deckers of Leiden in the Netherlands identified this cloudiness as resulting from the presence of albumin. The loop was finally closed when Richard Bright (1789–1858), a physician at Guy’s Hospital in London, made the connection between proteinuria and autopsy findings of abnormal kidneys.

The progress from Hippocrates’ bubbles to Bright disease represents the successful side of uroscopy, but other aspects of the subject now strike us as a mixture of common sense and bizarre superstition. The technique of collecting urine was thought to be of paramount importance for accurate interpretation. In the eleventh century, Ismail of Jurjani insisted on a full 24-hour collection in a vessel that was large and clean (very sensible) and shaped like a bladder, so that the urine would not lose its “form” (not at all sensible). His advice to keep the sample out of the sun and away from heat continues, however, to be wise counsel.

Gilles de Corbeil (1165–1213), physician to King Philip Augustus of France, recorded differences in sediment and color of urine which he related to 20 different bodily conditions. He also invented the matula, or jorden, a glass vessel through which the color, consistency, and clarity of the sample could be assessed. Shaped like a bladder rounded at the bottom and made of thin clear glass, the matula was to be held up in the right (not the left) hand for careful inspection against the light. De Corbeil taught that different areas of the body were represented by the urine in different parts of the matula. These connections, which became ever more complex, were recorded on uroscopy charts that were published only in Latin, thus ensuring that the knowledge, and its well-rewarded use in treating wealthy patients, was confined to appropriately educated men. To further this education, de Corbeil, in his role as a professor at the Medical School of Salerno, set out his own ideas and those of the ancient Greek and Persian writers in a work called Poem on the Judgment of Urines, which was set to music in order that medical students could memorize it more easily. It remained popular for several centuries.
BLOOD PRESSURE

One of the first excursions away from urine in the search for markers of function and disease came in 1555 with the publication of a book called Sphygmicae artis iam mille ducentos annos perditae & desideratae Libri V by a physician from Poznań in Poland named Józef Struś (better known by his Latinized name, Iosephus Struthius). In this 366-page work, Struthius described placing increasing weights on the skin over an artery until the pulse was no longer able to lift the load. The weight needed to achieve this gave a crude measure of what he called “the strength of the pulse” or, as we would call it today, blood pressure.

Early attempts at quantitative measurement of blood pressure had to be conducted in animals rather than human subjects because of the invasiveness of the technique. The first recorded success with these techniques dates from 1733, when the Reverend Stephen Hales, a British clergyman and natural philosopher, inserted a brass pipe into a horse’s artery and connected the pipe to a glass tube. Hales observed the blood rising in the tube and concluded not only that the rise was due to the pressure of the blood in the artery but also that the height of the rise was a measure of that pressure.

By 1847, experimental technique had progressed to the point where it was feasible to measure blood pressure in humans, albeit still invasively. Carl Ludwig inserted brass cannulas directly into an artery and connected them via further brass pipework to a U-shaped manometer. An ivory float on the water in the manometer was arranged to move a quill against a rotating drum, and the instrument was known as a kymograph (“wave-writer” in Greek). Meanwhile, in 1834, Jules Hérisson had described his sphygmomètre, which consisted of a steel cup containing mercury, covered by a thin membrane, with a calibrated glass tube projecting from it. The membrane was placed over the skin covering an artery and the pressure in the artery could be gauged from the movements of the mercury into the glass tube.

Although minor improvements were suggested by a number of authors over the next few years, credit for the invention of the true sphygmomanometer goes to Samuel Siegfried Karl Ritter von Basch, whose original 1881 model used water in both the cuff and the manometer tube. Fifteen years later, Scipione Riva-Rocci introduced an improved version in which an inflatable bag in the cuff was connected to a mercury manometer, but neither of these early machines attracted widespread interest. Only in 1901, when the famous American surgeon Harvey Cushing brought back one of Riva-Rocci’s machines on his return from a trip to Italy, did noninvasive blood pressure measurement really take off.

Sphygmomanometers of the late nineteenth century relied on palpation of the pulse and so could only be used to determine systolic blood pressure. Measurement of diastolic pressure only became possible when Nikolai Korotkoff observed in 1905 that characteristic sounds were made by the constriction of the artery at certain points in the inflation and deflation of the cuff. The greater accuracy allowed by auscultation of these Korotkoff sounds opened the way for the massive expansion in blood pressure research that characterized the twentieth century.
IMAGING

To physicians keen to understand the hidden secrets of the human body, few ideas can have been more appealing than the dream of looking through the skin to examine the tissues beneath. The means for achieving this did not appear until a little over a century ago, and then very much by accident. On the evening of November 8, 1895, Wilhelm Roentgen, a German physicist working at the University of Würzburg, noticed that light was coming from fluorescent material in his laboratory and worked out that this was the result of radiation escaping from a shielded gas discharge tube with which he was working. He was fascinated by the ability of this radiation to pass through apparently opaque materials and promptly set about investigating its properties in more detail. While conducting experiments with different thicknesses of tinfoil, he noticed that if the rays passed through his hand, they cast a shadow of the bones. Quick to see the potential medical uses for his new discovery, Roentgen immediately wrote a paper entitled “On a new kind of ray: a preliminary communication” for the Würzburg Physical Medical Society, reprints of which he sent to a number of eminent scientists with whom he was friendly. One of these, Franz Exner of Vienna, was the son of the editor of the Vienna Presse, and hence the news was published quickly, first in that paper and then across Europe. Whereas we are inclined to believe that rapid publication is a feature of the Internet age, the Victorians were no slouches in this matter, and by January 24, 1896 a reprint of the Würzburg paper had appeared in the London Electrician, a major journal able to bring details of the new invention to a much wider technical audience.

The speed of the response was remarkable. Many physics laboratories already had gas discharge tubes, and within a month physicists in a dozen countries were reproducing Roentgen’s findings. Edwin Frost produced an x-ray image of a patient’s fractured wrist for his physician brother, Gilmon Frost, at Dartmouth College in the United States, while at McGill University in Montreal, John Cox used the new rays to locate a bullet in a gunshot victim’s leg. Similar results were obtained in cities as far apart as Copenhagen, Prague, and Rijeka in Croatia. Inevitably, not everyone was initially quite so impressed; The Lancet of February 1, 1896 expressed considerable surprise that the Belgians had decided to bring x-rays into practical use in hospitals throughout the country! Nevertheless, it was soon clear that a major new diagnostic tool had been presented to the medical world, and there was little surprise when Roentgen received a Nobel Prize in Physics in 1901.

Meanwhile, in March 1896, Henri Becquerel, professor of physics at the Muséum National d’Histoire Naturelle in Paris, while investigating Roentgen’s work, wrapped a fluorescent mineral, potassium uranyl sulfate, in photographic plates and black material in preparation for an experiment requiring bright sunlight. However, a period of dull weather intervened, and prior to actually performing the experiment, Becquerel found that the photographic plates were fully exposed. This led him to write: “One must conclude from these experiments that the phosphorescent substance in question emits rays which pass through the opaque paper and reduce silver salts.” Becquerel received a Nobel prize, which he shared with Marie and Pierre Curie, in 1903, but it was to be many years before the use of spontaneous radioactivity reached maturity in medical investigation in such applications as isotope scanning and radioimmunoassay.

The use of a fluoroscopic screen on which to view x-ray pictures was implicit in Roentgen’s original discovery and soon became part of the routine equipment not only of hospitals but even of shoe shops, where large numbers of children’s shoe fittings were carried out in the days before the true dangers of radiation were appreciated. However, the greatest value of the real-time viewing approach only emerged following the introduction of electronic image intensifiers by the Philips company in 1955.

Within months of the introduction of planar x-rays, physicians were asking for a technique that would demonstrate the body in three dimensions. This challenge was taken up by a number of scientists in different countries, but because of the deeply ingrained habit of reviewing only the national, not the international, literature, these workers remained ignorant of each other’s progress for many years. Carl Mayer, a Polish physician, first suggested the idea of tomography in 1914. André-Edmund-Marie Bocage in France, Gustav Grossmann in Germany, and Allesandro Vallebona in Italy all developed the idea further and built their own equipment. George Ziedses des Plantes in the Netherlands pulled all these strands together in the 1930s and is generally considered the founder of conventional tomography. Further progress had to wait for the development of powerful computers, and it was not until 1972 that Godfrey Hounsfield, an engineer at EMI, designed the first computer-assisted tomographic device, the EMI scanner, installed at Atkinson Morley Hospital, London, an achievement for which he received both a Nobel prize and a knighthood.

Parallel with these advances in x-ray imaging were ongoing attempts to make similar use of the spontaneous radioactivity discovered by Becquerel. In 1925, Herrman Blumgart and Otto Yens made the first use of radioactivity as a biomarker when they used bismuth-214 to determine the arm-to-arm circulation time in patients. Sodium-24, the first artificially created biomarker radioisotope, was used by Joseph Hamilton to investigate electrolyte metabolism in 1937. Unlike x-rays, however, radiation from isotopes weak enough to be safe was not powerful enough to create an image merely by letting it fall on a photographic plate. This problem was solved when Hal Anger of the University of California, building on the efficient gamma-ray capture system using large flat crystals of sodium iodide doped with thallium developed by Robert Hofstadter in 1948, constructed the first gamma camera in 1957.

The desire for three-dimensional images that led to tomography with x-rays also influenced radioisotope imaging and drove the development of single-photon-emission computed tomography (SPECT) by David Kuhl and Roy Edwards in 1968. Positron-emission tomography (PET) also builds images by detecting energy given off by decaying radioactive isotopes in the form of positrons that collide with electrons and produce gamma rays that shoot off in nearly opposite directions. The collisions can be located in space by interpreting the paths of the gamma rays, and this information is then converted into a three-dimensional image slice. The first PET camera for human studies was built by Edward Hoffman, Michael Ter-Pogossian, and Michael Phelps in 1973 at Washington University. The first whole-body PET scanner appeared in 1977.

Radiation, whether from x-ray tubes or from radioisotopes, came to be recognized as having dangers both for the patient and for personnel operating the equipment, and efforts were made to discover media that would produce images without these dangers. In the late 1940s, George Ludwig, a junior lieutenant at the Naval Medical Research Institute in Bethesda, Maryland, undertook experiments using industrial ultrasonic flaw-detection equipment in an attempt to determine the acoustic impedance of various tissues, including human gallstones surgically implanted into the gallbladders of dogs. His observations were detailed in a 30-page project report to the Naval Medical Research Institute dated June 16, 1949, now considered the first report of its kind on the diagnostic use of ultrasound. However, a substantial portion of Ludwig’s work was considered classified information by the Navy and was not published in medical journals.

Civilian research into what became the two biggest areas of early ultrasonic diagnosis—cardiology and obstetrics—began in Sweden and Scotland, respectively, both making use of gadgetry initially designed for shipbuilding. In 1953, Inge Edler, a cardiologist at Lund University, collaborated with Carl Hellmuth Hertz, a graduate student in the department of nuclear physics who was familiar with using ultrasonic reflectoscopes for nondestructive materials testing, and together they developed the idea of using this method in medicine. They made the first successful measurement of heart activity on October 29, 1953 using a device borrowed from Kockums, a Malmö shipyard. On December 16 of the same year, the method was used to generate an echo encephalogram. Edler and Hertz published their findings in 1954.
At around the same time, Ian Donald of the Glasgow Royal Maternity Hospital struck up a relationship with boilermakers Babcock & Wilcox in Renfrew, where he used their industrial ultrasound equipment to conduct experiments assessing the ultrasonic characteristics of various in vitro preparations. With fellow obstetrician John MacVicar and medical physicist Tom Brown, Donald refined the equipment to the point where it could be used successfully on live volunteer patients. These findings were reported in The Lancet on June 7, 1958 as “Investigation of abdominal masses by pulsed ultrasound.”

Nuclear magnetic resonance (NMR) in molecules was first described by Isidor Rabi in 1938. His work was followed up eight years later by Felix Bloch and Edward Mills Purcell, who, working independently, noticed that magnetic nuclei such as hydrogen and phosphorus, when placed in a magnetic field of a specific strength, absorb radio-frequency energy, a situation described as being “in resonance.” For the next 20 years NMR found purely physical applications in chemistry and physics, and it was not until 1971 that Raymond Damadian showed that the nuclear magnetic relaxation times of different tissues, especially tumors, differed, thus raising the possibility of using the technique to detect disease. Magnetic resonance imaging (MRI) was first demonstrated on small test tube samples in 1973 by Paul Lauterbur, and in 1975 Richard Ernst proposed using phase and frequency encoding and the Fourier transform, the technique that still forms the basis of MRI. The first commercial nuclear magnetic imaging scanner allowing imaging of the body appeared in 1980 using Ernst’s technique, which allowed a single image to be acquired in approximately 5 minutes. By 1986, the imaging time was reduced to about 5 seconds without sacrificing too much image quality. In the same year, the NMR microscope was developed, which allowed approximately 10-μm resolution on approximately 1-cm samples. In 1993, functional MRI (fMRI) was developed, thus permitting the mapping of function in various regions of the brain.
ELECTROCARDIOGRAPHY

Roentgen’s discovery of x-rays grew out of the detailed investigation of electricity that was a core scientific concern of the nineteenth century, and it is little surprise that investigators also took a keen interest in the electricity generated by the human body itself. Foremost among these was Willem Einthoven. Before his day, although it was known that the body produced electrical currents, the technology was inadequate to measure or record them with any sort of accuracy. Starting in 1901, Einthoven, a professor at the University of Leiden, conducted a series of experiments using a string galvanometer. In his device, electric currents picked up from electrodes on the patient’s skin passed through a thin filament running between very strong electromagnets. The interaction of the electric and magnetic fields caused the filament or “string” to move, and this was detected by using a light to cast a shadow of the moving string onto a moving roll of photographic paper.

It was not, at first, an easy technique. The apparatus weighed 600 lb, including the water circulation system essential for cooling the electromagnets, and was operated by a team of five technicians. Over the next two decades Einthoven gradually refined his machine and used it to establish the electrocardiographic (ECG) features of many different heart conditions, work that was eventually recognized with a Nobel prize in 1924.

As the ECG became a routine part of medical investigations it was realized that a system that gave only a “snapshot” of a few seconds of the heart’s activity could be unhelpful or even misleading in the investigation of intermittent conditions such as arrhythmias. This problem was addressed by Norman Holter, an American biophysicist, who created his first suitcase-sized “ambulatory” monitor as early as 1949, but whose technique is dated in many sources to the major paper that he published on the subject in 1957; other authors cite an even later, 1961 publication.
HEMATOLOGY

The scientific examination of blood in order to learn more about the health of the patient from whom it was taken can be dated to 1674, when Anthony van Leeuwenhoek first observed blood cells through his newly invented microscope. Progress was at first slow, and it was not until 1770 that leucocytes were discovered by William Hewson, an English surgeon, who also observed that red cells were flat rather than spherical, as had earlier been supposed. Association of blood cell counts with clinical illness depended on the development of a technical method by which blood cells could be counted. In 1852, Karl Vierordt at the University of Tübingen developed such a technique, which, although too tedious for routine use, was used by one of his students, H. Welcher, to count red blood cells in a patient with “chlorosis” (an old word for what is probably our modern iron-deficiency anemia). He found, in 1854, that an anemic patient had significantly fewer red blood cells than did a normal person. Platelets, the third major cellular constituent of blood, were identified in 1862 by a German anatomist, Max Schultze. Remarkably, all these discoveries were made without the benefit of cell staining, an aid to microscopic visualization that was not introduced until 1877 in Paul Ehrlich’s doctoral dissertation at the University of Leipzig.

The movement of blood cell studies from the research laboratory to routine support of patient care needed a fast automatic technique for separating and counting cells, which was eventually provided by the Coulter brothers, Wallace and Joseph. In 1953 they patented a machine that detected the change in electrical conductance of a small aperture as fluid containing cells was drawn through. Cells, being nonconducting particles, alter the effective cross section of the conductive channel and so signal both their presence and their size.

An alternative technique, flow cytometry, was also developed in stages between the late 1940s and the early 1970s. Frank Gucker at Northwestern University developed a machine for counting bacteria in a laminar stream of air during World War II and used it to test gas masks, the work subsequently being declassified and published in 1947. Louis Kamentsky at IBM Laboratories and Mack Fulwyler at the Los Alamos National Laboratory experimented with fluidic switching and electrostatic cell detectors, respectively, and both described cell sorters in 1965. The modern approach of detecting cells stained with fluorescent antibodies was developed in 1972 by Leonard Herzenberg and his team at Stanford University, who coined the term fluorescence-activated cell sorter (FACS).
BLOOD AND URINE CHEMISTRY

As with hematology, real progress in measuring the chemical constituents of plasma depended largely on the development of the necessary technology. Until such techniques became available, however, ingenious use was made of bioassays, developed in living organisms or preparations made from them, to detect and in some cases quantify complex molecules. A good example of this is the detection of human chorionic gonadotrophin (hCG) in urine as a test for pregnancy. Selmar Aschheim and Bernhard Zondek in Berlin, who first isolated this hormone in 1928, went on to devise the Aschheim–Zondek pregnancy test, which involved five days of injecting urine from the patient repeatedly into an infantile female mouse which was subsequently killed and dissected. The finding of ovulation in the mouse indicated that the injected urine contained hCG and meant that the patient was pregnant.

In the early 1940s, the mouse test gave way to the frog test, introduced by Lancelot Hogben in England. This was a considerable improvement, in that injection of urine or serum from a pregnant woman into the dorsal lymph sac of the female African clawed frog (Xenopus laevis) resulted in ovulation within 4 to 12 hours. Although this test was known to give a relatively high proportion of false negatives, it was regarded as an outstanding step forward in diagnosis. One story from the 1950s recounts that with regard to the possible pregnancy of a particular patient, “opinions were sought from an experienced general practitioner, an eminent gynecologist, and a frog; only the frog proved to be correct.”

Pregnancy testing, and many other “biomarker” activities, subsequently moved from out-and-out bioassays to the “halfway house” of immunological tests based on antibodies to the test compound generated in a convenient species but then used in an ex vivo laboratory setting, and in 1960 a hemagglutination inhibition test for pregnancy was developed by Leif Wide and Carl Gemzell in Uppsala.

Not all immune reactions can be made to modulate hemagglutination, and a problem with the development of immunoassays was finding a simple way to detect whether the relevant antibody or antigen was present. One answer lay in the use of radiolabeled reagents. Radioimmunoassay was first described in a paper by Rosalyn Sussman Yalow and Solomon Berson published in 1960. Radioactivity is difficult to work with because of its safety concerns, so an alternative was sought. This came with the recognition that certain enzymes could be linked to an appropriate antibody and made to react with chromogenic substrates (such as ABTS or 3,3′,5,5′-tetramethylbenzidine) to give a color change. This linking process was developed independently by Stratis Avrameas and G. B. Pierce. Since it is necessary to remove any unbound antibody or antigen by washing, the antibody or antigen must be fixed to the surface of the container, a technique first published by Wide and Porath in 1966. In 1971, Peter Perlmann and Eva Engvall at Stockholm University, as well as Anton Schuurs and Bauke van Weemen in the Netherlands, independently published papers that synthesized this knowledge into methods to perform enzyme-linked immunosorbent assay (ELISA).

A further step toward physical methods was the development of chromatography. The word was coined in 1903 by the Russian botanist Mikhail Tswett to describe his use of a liquid–solid form of a technique to isolate various plant pigments. His work was not widely accepted at first, partly because it was published in Russian and partly because Arthur Stoll and Richard Willstätter, a much better known Swiss–German research team, were unable to repeat the findings. However, in the late 1930s and early 1940s, Archer Martin and Richard Synge at the Wool Industries Research Association in Leeds devised a form of liquid–liquid chromatography by supporting the stationary phase, in this case water, on silica gel in the form of a packed bed and used it to separate some acetyl amino acids derived from wool. Their 1941 paper included a recommendation that the liquid mobile phase be replaced with a suitable gas that would accelerate the transfer between the two phases and provide more efficient separation: the first mention of the concept of gas chromatography. In fact, their insight went even further, in that they also suggested the use of small particles and high pressures to improve the separation, the starting point for high-performance liquid chromatography (HPLC).

Gas chromatography was the first of these concepts to be taken forward. Erika Cremer, working with Fritz Prior in Germany, developed gas–solid chromatography, while in the UK, Martin himself cooperated with Anthony James in the early work on gas–liquid chromatography published in 1952. Real progress in HPLC began in 1966 with the work of Csaba Horváth at Yale. The popularity of the technique grew rapidly through the 1970s, so that by 1980, this had become the standard laboratory approach to a wide range of analytes. The continuing problem with liquid or gas chromatography was the identification of the molecule eluting from the system, a facet of the techniques that was to be revolutionized by mass spectrometry.
The foundations of mass spectrometry were laid in the Cavendish Laboratories of Cambridge University in the early years of the twentieth century. Francis Aston built the first fully functional mass spectrometer in 1919 using electrostatic and magnetic fields to separate isotope ions by their masses and focus them onto a photographic plate. By the end of the 1930s, mass spectrometry had become an established technique for the separation of atomic ions by mass. The early 1950s saw attempts to apply the technique to small organic molecules, but the mass spectrometers of that era were extremely limited by mass and resolution. Positive theoretical steps were taken, however, with the description of time-of-flight (TOF) analysis by W. C. Wiley and I. H. McLaren and quadrupole analysis by Wolfgang Paul. The next major development was the coupling of gas chromatography to mass spectrometry in 1959 by Roland Gohlke and Fred McLafferty at the Dow Chemical Research Laboratory in Midland, Michigan. This allowed, for the first time, an analysis of mixtures of analytes without laborious separation by hand. This, in turn, was the trigger for the development of modern mass spectrometry of biological molecules. The introduction of liquid chromatography–mass spectrometry (LC-MS) in the early 1970s, together with new ionization techniques developed over the last 25 years (i.e., fast particle desorption, electrospray ionization, and matrix-assisted laser desorption/ionization), has made it possible to analyze almost every class of biological compound, right up into the megadalton range.
FASHIONABLE “OMICS”

In Benet Street, Cambridge, stands a rather ordinary pub which on Saturday, February 28, 1953, enjoyed 15 minutes of fame far beyond Andy Warhol’s wildest dreams. Two young men arrived for lunch and, as James Watson watched, Francis Crick announced to the regulars in the bar that “we have found the secret of life.” The more formal announcement of the structure of DNA appeared in Nature on April 25 in a commendably brief paper of two pages with six references. Watson and Crick shared a Nobel prize with Maurice Wilkins, whose work with Rosalind Franklin at King’s College, London had laid the groundwork. Sadly, Franklin’s early death robbed her of a share of the prize, which is never awarded posthumously.

Over the next two decades a large number of researchers teased out the details of the genetic control of cells, and by 1972 a team at the Laboratory of Molecular Biology of the University of Ghent, led by Walter Fiers, were the first to determine the sequence of a gene (a coat protein from a bacteriophage). The same team followed up in 1976 by publishing the complete RNA nucleotide sequence of the bacteriophage. The first DNA-based genome to be sequenced in its entirety was the 5368-base-pair sequence of bacteriophage Φ-X174, elucidated by Frederick Sanger in 1977. The science of genomics had been born.

Although the rush to sequence the genomes of ever more complex species (including humans in 2001) initially held out considerable hope of yielding new biomarkers, focus gradually shifted to the protein products of the genes. This process is dated by many to the introduction in 1975 by Patrick O’Farrell at the University of Colorado in Boulder of two-dimensional polyacrylamide gel electrophoresis (2-D PAGE). The subject really took off in the 1990s, however, with technical improvements in mass spectrometers combined with computing hardware and software to support the extremely complex analyses involved.

The next “omics” to become fashionable was metabolomics, based on the realization that the quantitative and qualitative pattern of metabolites in body fluids reflects the functional status of an organism. The concept is by no means new, the first paper addressing the idea (but not using the word) having been “Quantitative Analysis of Urine Vapor and Breath by Gas–Liquid Partition Chromatography” by Robinson and Pauling in 1971. The word metabolomics, however, was not coined until the 1990s.
THE FUTURE

Two generalizations may perhaps be drawn from the accelerating history of biomarkers over the last 2700 years. The first is that each new step depends on an interaction between increasing understanding of the biology and technical improvement of the tools, leading to a continuous spiral of innovation. The second is the need for an open but cautious mind. Sushruta’s recognition of the implications of sweet urine has stood the test of time; de Corbeil’s Poem on the Judgment of Urines has not. The ultimate fate of more recent biomarkers will only be revealed by time.
2
BIOMARKERS: FACING THE CHALLENGES AT THE CROSSROADS OF RESEARCH AND HEALTH CARE

Gregory J. Downing, D.O., Ph.D.
U.S. Department of Health and Human Services, Washington, DC
INTRODUCTION Across many segments of the biomedical research enterprise and the health care delivery sectors, the impact of biomarkers has been transformative in many ways: from business and economics to policy and planning of disease management. The pace of basic discovery research progress has been profound worldwide, with the intertwining of innovative technologies and knowledge providing extensive and comprehensive lists of biological factors now known to play integral roles in disease pathways. These discoveries have had a vast impact on the pharmaceutical and biotechnology industries, with tremendous growth in investment in biomarker research reaching into the laboratory technology and services sector. These investments have spawned new biomedical industry sectors, boosted the roles of contract research organizations, supported vast new biomarker discovery programs in large corporate organizations, and prompted the emergence of information management in research. Similarly, growth in academic programs supporting biomarker research has greatly expanded training capacity, bench and clinical research capacity, and infrastructure, while fueling the growth of intellectual property.
By many reports, private-sector applications of biomarkers in toxicity and early efficacy trials have been fruitful in setting decision-making priorities that are introducing greater efficiency into early- to midstage medical product development. Despite the heavy emphasis in private and publicly funded research, however, the reach of biomarkers into clinical practice remains challenging to quantify. The costs of development remain high for many drugs, and the number of new chemical entities reaching the marketplace has remained relatively low compared to prior years and to expectations following the robust research expansion of the 1980s and 1990s. Industry concerns about the sustainability of research and development programs have grown against the backdrop of the clinical challenges that attend biomarker applications in clinical trials. Because evidence development has been relatively slow, the clinical implications of disease markers have taken much longer to discern than many had predicted. There have been many challenges in establishing a translational research infrastructure that serves to verify and validate the clinical value of biomarkers as disease endpoints and their value as independent measures of health conditions. The lack of an equivalent of the clinical trial infrastructure for biomarker validation and diagnostics has slowed progress compared to therapeutic and device development. Evidence development processes and evaluations have only now begun to emerge for biomarkers, and their wide adoption in clinical practice has not yet matured. For some, the enthusiasm and the economic balance sheets have not been squared: the clinical measures that had been hoped for are viewed by some as moderately successful and by others as bottlenecks in the pipelines of therapeutic and diagnostic development.
BRIEF HISTORY OF BIOMARKER RESEARCH, 1998–2008: THE FIRST DECADE During the last decade of the twentieth century, biomedical research underwent one of the most dramatic periods of change in its history. Influenced by a multitude of factors—some scientific, others economic, and still others of policy—new frontiers of science emerged as technology and knowledge converged and diverged, bringing new discoveries and hope to the forefront of medicine and health. These capabilities came about as a generation's worth of science brought to the mainstream of biomedical research the foundation for a molecular basis of disease: recombinant DNA technology. Innovative applications of lasers, novel medical imaging platforms, and other advanced technologies began to yield a remarkable body of knowledge that provided unheralded opportunities for discovery of new approaches to the management of human health and disease. Here we briefly revisit a part of the medical research history that led to the shaping of new directions that are, for now, captured simply by the term
biomarker, a biological indicator of health or disease. Looking backward to the 1980s and 1990s and the larger scheme of health care, many new challenges were being faced. The international challenges and global economic threats posed by human immunodeficiency virus (HIV) and AIDS provided the impetus for one of the first steps in target-designed therapies and the use of viral and immune indicators of disease. For the first time, strategically directed efforts in discovery and clinical research paradigms were coordinated at the international level using clinical measures of disease at the molecular level. Biomarkers first made their impact on discovery and translational research, both privately and publicly funded, through biological measures of viral load, CD4+ T-lymphocyte counts, and other parameters of immune function and viral resistance, which came to be mainstays in research and development. Regulatory authority was put in place to allow "accelerated approval" of medical products using surrogate endpoints for health conditions with grave mortality and morbidity. Simultaneously, clinical cancer therapeutics programs had some initial advances with the use of clinical laboratory tests that aided in the distinction between responders and nonresponders to targeted therapies. The relation of the Her2/neu tyrosine kinase receptor in aggressive breast cancer to response to trastuzumab (Herceptin) [1] and, similarly, the association of imatinib (Gleevec) responsiveness with the presence of the Philadelphia chromosome translocation involving the BCR/ABL genes in chronic myelogenous leukemia [2] represented cases where targeted molecular therapies were based on a biomarker test as a surrogate endpoint for patient clinical response. These represented the entry point of pharmaceutical science moving toward co-development, using diagnostic tests to guide selection of therapy around a biomarker. Diverse changes were occurring throughout the health care innovation pipeline in the 1990s. The rise of the biotechnology industry became an economic success story underpinned by successful products in recombinant DNA technology, monoclonal antibody production, and vaccines. The device manufacturing and commercial laboratory industries became major forces. In the United States, the health care delivery system underwent changes with the widespread adoption of managed care programs, and an effort at health care reform failed. For U.S.-based academic research institutions, it was a time of particular tumult for clinical research programs, often supported through clinical care finances and downsized in response to financial shortfalls. At a time when scientific opportunity in biomedicine was, arguably, reaching its zenith, there were cracks in the enterprise that was responsible for advancing basic biomedical discovery research to the clinic and marketplace. In late 1997, the director of the National Institutes of Health, Harold Varmus, met with biomedical research leaders from academic, industrial, governmental, and clinical research organizations, technology developers, and public advocacy groups to discuss mutual challenges, opportunities, and responsibilities in clinical research. In this setting, some of the first strategic considerations regarding "clinical markers" began to emerge among
stakeholders in clinical research. From a science policy perspective, steps were taken to explore and organize information that brought to light the need for new paradigms in clinical development. Some of these efforts led to the framing of definitions of terms to be used in clinical development, such as biomarkers (a characteristic that is measured and evaluated objectively as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention) and surrogate endpoints (a biomarker that is intended to substitute for a clinical endpoint and is expected to predict clinical benefit or harm, or lack of benefit or harm, based on epidemiologic, therapeutic, pathophysiologic, or other scientific evidence), and to descriptions of the information needs and strategic and tactical approaches needed to apply them in clinical development [3]. A workshop was held to address statistical analysis, methodology, and research design issues in bridging empirical and mechanism-based knowledge in evaluating potential surrogate endpoints [4]. In-depth analyses were conducted to examine information needs, clinical training skills, database issues, regulatory policies, technology applications, and candidate disease conditions and clinical trials that were suitable for exploring biomarker research programs. As a confluence of these organizational activities, in April 1999 an international conference was hosted by the National Institutes of Health (NIH) and the U.S. Food and Drug Administration (FDA) [5]. The leadership focused on innovations in technology applications, such as multiplexed gene analysis using polymerase chain reaction technologies, large-scale gel analysis of proteins, and positron-emission tomography (PET) and magnetic resonance imaging (MRI). A summary analysis was crafted for all candidate markers for a wide variety of disease states, and a framework was formed for multiple disease-based public–private partnerships in biomarker development. A series of research initiatives supported by industry, NIH, and FDA were planned and executed in the ensuing months. New infrastructure for discovery and validation of cancer biomarkers was put in place. Public–private partnerships for biomarker discovery and characterization were initiated in osteoarthritis, Alzheimer disease, and multiple sclerosis. Research activities in toxicology markers for cardiovascular disease and metabolism by renal and hepatic transformation systems were initiated by the FDA. These events did not yield a cross-sector strategic action plan, but did serve as a framework for further engagement across governmental, academic, industrial, and nongovernmental organizations. Among the breakthroughs was the recognition that new statistical analysis methods and clinical research designs would be needed to address multiple variables measured simultaneously and to conduct metaanalyses from various clinical studies to comprehend the effects of a biomarker over time and its role as a reliable surrogate endpoint. Further, it was recognized that there would be needs for data management, informatics, clinical registries, and repositories of biological specimens, imaging files, and common reagents. Over the next several years, swift movement across the research and development enterprise was under way. It is obvious that biomarker research
TABLE 1 Major Scientific Contributions and Research Infrastructure Supporting Biomarker Discovery

Human Genome Project
Mouse models of disease (recombinant DNA technology)
Information management (informatics tools, open-source databases, open-source publishing, biomarker reference services)
Population-based studies and gene–environment interaction studies
Computational biology and biophysics
Medical imaging: structural and functional
High-throughput technologies: in vitro cell-based screening, nanotechnology platforms, molecular separation techniques, robotics, automated microassays, high-resolution optics
Proteomics, metabolomics, epigenomics
Pharmacogenomics
Molecular toxicology
Genome-wide association studies
Molecular pathways, systems biology, and systems engineering
was driven in the 1990s and early years of the twenty-first century by the rapid pace of genome mapping and the fall in the cost of large-scale genomic sequencing technology, driven by the Human Genome Project. A decade later, it is now apparent that biomarker research in the realm of clinical application has acquired a momentum of its own and is self-sustaining. The major schemes for applications of biomarkers can be described in a generalized fashion in four areas: (1) molecular target discovery, (2) early-phase drug development, (3) clinical trials and late-stage therapeutic development, and (4) clinical applications for health status and disease monitoring. The building blocks for biomarker discovery and early-stage validation over the last decade are reflected in Table 1. Notable in the completion of the international Human Genome Project was the vast investment in technology, database development, training, and infrastructure that has since been applied throughout industry toward clinical research applications.
SCIENCE AND TECHNOLOGY ADVANCES IN BIOMARKER RESEARCH In the past decade of biomarker research, far and away the most influential driving force was completion of the Human Genome Project in 2003. The impact of this project on biomarker research has many facets beyond establishment of the reference data for human DNA sequences. This mammoth undertaking, initiated in 1990, yielded the sequences of the nearly 25,000 human genes and made them accessible for further biological study. Beyond this and the other species genomes that have been characterized,
human initiatives to define individual differences in the genome provided some of the earliest large-scale biomarker discovery efforts. The human haplotype map (HapMap) project defined differences in single-nucleotide polymorphisms (SNPs) in various populations around the world to provide insights into the genetic basis of disease and into genes that have relevance for individual differences in health outcomes. A collaboration among 10 pharmaceutical companies and the Wellcome Trust Foundation, known as the SNP Consortium, was formed in 1999 to produce a public resource of SNPs in the human genome [6]. The SNP Consortium used DNA resources from a pool of samples obtained from 24 people representing several racial groups. The initial goal was to discover 300,000 SNPs in two years, but the final results exceeded this: 1.8 million SNPs had been released into the public domain by the end of 2002, when the discovery phase was completed. The SNP Consortium was notable in that it served as a foundation for further cross-industry public–private partnerships, spawned as a wide variety of community-based efforts to hasten the discovery of genomic biomarkers (see below). The next phase of establishing the basic infrastructure to support biomarker discovery, particularly for common chronic diseases, came in 2002 through the International HapMap Project, a collaboration among scientists and funding agencies from Japan, the United Kingdom, Canada, China, Nigeria, and the United States [7]. A haplotype is a set of SNPs on a single chromatid that are associated statistically. This rich resource not only mapped over 3.1 million SNPs, but established additional capacity for identifying specific gene markers in chronic diseases and represented a critical reference set enabling population-based genomic studies that could establish a gene–environment basis for many diseases [8]. Within a short time of completing the description of the human genome, a substantial information base was in place to enable disease–gene discoveries on a larger scale. This approach of referencing populations to the well-described SNP maps is now the major undertaking for defining gene-based biomarkers. In recent years, research groups around the world have rapidly been establishing genome-wide association studies to identify specific gene sets associated with a wide range of chronic diseases. This new era in population-based genetics began with a small-scale study that led to the finding that age-related macular degeneration is associated with a variation in the gene for complement factor H, which produces a protein that regulates inflammation [9]. The first major implication in a common disease was revealed in 2007 through a study of type II diabetes variants [10]. To demonstrate the rapid pace of discovery of disease gene variants: at the time of this writing, within 18 months following that study, there are now 18 disease gene variants associated with defects in insulin secretion [11]. The rapid growth in genome-wide association studies (GWASs) is identifying a large number of multigene variants that are leading to subclassification
of diseases with common phenotype presentations. Among the databases established to give researchers public access to these association studies is dbGaP, the database of genotypes and phenotypes. The database, which was developed and is operated by the National Library of Medicine's National Center for Biotechnology Information, archives and distributes data from studies that have investigated the relationship between phenotype and genotype, such as GWASs. At present, dbGaP contains 36 population-based studies that include genotype and phenotype information. Worldwide, dozens if not hundreds of GWASs are under way for a plethora of health and disease conditions associated with genetic features. Many of these projects are collaborative, involve many countries, and are supported through public–private partnerships. An example is the Genetic Association Information Network (GAIN), which is making genotype–phenotype information publicly available for a variety of studies in mental health disorders, psoriasis, and diabetic nephropathy [12]. For the foreseeable future, substantial large-scale efforts will continue to characterize disease states and catalog genes associated with clinically manifested diseases. As technology and information structures advance, other parameters of genetic modification represent new biomarker discovery opportunities. The use of metabolomics, proteomics, and epigenomics in clinical and translational research is now being actively engaged. A new large-scale project to sequence human cancers, the Cancer Genome Atlas, is focused on applying large-scale biology in the hunt for new tumor genes, drug targets, and regulatory pathways. This project is focused not only on polymorphisms but also on DNA methylation patterns and copy numbers as biomarker parameters [13]. Again, technological advances are providing scientists with novel approaches to inferring sites of DNA methylation at nucleotide-level resolution using a technique known as high-throughput bisulfite sequencing (HTBS). Large-scale initiatives are also under way to bring a structured approach to relating protein biomarkers to disease conditions. Advances in mass spectrometry, protein structure resolution, bioinformatics for archiving protein-based information, and worldwide teams devoted to disease proteomes have solidified in recent years. Although at a more nascent stage of progress in disease characterization, each of these emerging fields is playing a key complementary role alongside biomarker discovery in genetics and genomics. Supporting this growth in biomarker discovery is massive investment worldwide over the last 10 years by public and private financiers that has spawned hundreds of new commercial entities. Private-sector financing for biomarker discovery and development has become a major component of biomedical research and development (R&D) costs in pharmaceutical development. Although detailed budget summaries have not been established for U.S. federal funding of biomarker research, in a recent survey by McKinsey and Co., biomarker R&D expenditures in 2009 were estimated at $5.3 billion, up from $2.2 billion in 2003 [14].
POLICIES AND PARTNERSHIPS Although progress in biomarker R&D has accelerated, the clinical translation of disease biomarkers as endpoints in disease management and as the foundation for diagnostic products has faced more extensive challenges. A broad array of international policy matters over the past decade have moved to facilitate biomarker discovery and validation (Table 2). In the United States, the FDA has taken a series of actions to facilitate applications of biomarkers in drug development and their use in clinical practice for diagnostic and therapeutic monitoring. A voluntary submission process for genomic data from therapeutic development was initiated by the pharmaceutical industry and the FDA in 2002 [15]. This program has yielded many insights into the role of drug-metabolizing enzymes in the clinical pharmacodynamic parameters of biomarkers in drug development. In July 2007, guidelines for the use of multiplexed genetic tests in clinical practice to monitor drug therapy were issued by the FDA [16]. More recently, the FDA has begun providing label requirements indicating those therapeutic agents for which biomarker assessment can be recommended to avoid toxicity and enhance the achievement of therapeutic responses [17]. In 2007, Congress authorized the establishment of a private–public resource to support collaborative research with the FDA. One of the major obstacles to clinical genomic research expressed over the years has been a concern that research participants may be discriminated against in employment and provision of health insurance benefits as a result of the association of genetic disease markers. After many years of deliberation, the U.S. Congress passed legislation known as the Genetic Information Non-discrimination Act of 2008, preventing the use of genetic information to deny employment and health insurance. The past decade has seen many new cross-organizational collaborations and organizations developed to support biomarker development. For example,
TABLE 2 Major International Policy Issues Related to Biomarker Research

Partnerships and collaborations: industry, team science
Expanded clinical research capacity through increases in public and private financing
Open-source publishing, data-release policies
Standards development and harmonization
FDA Critical Path Initiative
Regulatory guidances for medical product development
Biomarkers Consortium
Evidence-based medicine and quality measures of disease
International regulatory harmonization efforts
Public advocacy in medical research
Genetic Information Non-discrimination Act of 2008 (U.S.)
the American Society of Clinical Oncology, the American Association for Cancer Research, and the FDA established collaborations in workshops and research discussions regarding the use of biomarkers for ovarian cancer as surrogate endpoints in clinical trials [18]. The FDA Critical Path Initiative was launched in 2004, with many opportunities described for advancing biomarkers and surrogate endpoints in a broad range of areas for therapeutic development [19,20]. Progress in these areas has augmented industry knowledge of the application of biomarkers in clinical development programs and fostered harmony with international regulatory organizations in the ever-expanding global research environment. This program has been making progress on expanding the toolbox for clinical development; many of its components foster development and application of biomarkers. As an example of international coordination among regulatory bodies, the FDA and the European Medicines Agency (EMEA) recently worked together for the first time to develop a framework allowing submission, in a single application to the two agencies, of the results of seven new biomarker tests that evaluate kidney damage during animal testing of new drugs. The new biomarkers are KIM-1, albumin, total protein, β2-microglobulin, cystatin C, clusterin, and trefoil factor-3, replacing blood urea nitrogen (BUN) and creatinine in assessing acute toxicity [21]. The development of this framework is discussed in more detail by Goodsaid in Chapter 9. In 2007, Congress established legislation that formed the Reagan–Udall Foundation, a not-for-profit corporation to advance the FDA's mission to modernize medical, veterinary, food, food ingredient, and cosmetic product development, accelerate innovation, and enhance product safety. Another important policy step in biomarker development was the establishment of the Critical Path Institute in 2005 to facilitate precompetitive collaborative research among pharmaceutical developers. Working closely with the FDA, these collaborations have focused on toxicology and therapeutic biomarker validation [22]. In 2006, a new collaboration building on public–private partnerships with industry was formed to develop clinical biomarkers: the Biomarkers Consortium, a public–private initiative with industry and government to spur biomarker development and validation projects in cancer, central nervous system, and metabolic disorders in its initial phase [23]. These programs all support information exchange and optimize the potential to apply well-characterized biomarkers to facilitate pharmaceutical and diagnostic development programs. Other policies that are broadening the dissemination of research findings relate to the growing movement toward open-source publishing. In 2003, the Public Library of Science began an open-source publication process that provides instant access to publications [24]. Many scientific journals have moved to make their archives available six to 12 months after publication. In 2008, the National Institutes of Health implemented a policy requiring publications of scientific research with U.S. federal funding to be placed in the public domain within 12 months of publication [25]. All of these policy actions are
favoring biomarker research by accelerating the transfer of knowledge from discovery to development. New commercial management tools have been developed to provide extensive descriptions of biomarkers and their state of development. Such resources can help enhance industry application of well-characterized descriptive information and increase the efficiency of research by avoiding duplication and establishing centralized credentialing of biomarker information [26]. New business models are emerging among industry and patient advocacy organizations to increase the diversity of financing options for early-stage clinical development [27]. Private philanthropy, conducted with key roles for patient groups, is supporting proof-of-concept research and target validation with the expectation that these targeted approaches will lead to commercial interest in therapeutic development. Patient advocacy foundations are supporting translational science in muscular dystrophy, amyotrophic lateral sclerosis, juvenile diabetes, multiple myeloma, and Pompe disease, often in partnership with private companies [28,29].
CHALLENGES AND SETBACKS While progress in biomarker R&D has accelerated, the clinical translation of disease biomarkers as endpoints in disease management and as the foundation for diagnostic products has faced more extensive challenges [30]. For example, we have not observed a large number of surrogate endpoints emerging as clinical trial decision points. Notable exceptions include imaging endpoints, which have grown substantially in number. In most cases of drug development, biomarkers are applied in therapeutic development to stratify patients into subgroups of responders, to aid in pharmacodynamic assessment, and to identify early toxicity indicators so as to avoid late-stage failures. There are difficulties in aligning the biomarker science with clinical outcome parameters to establish clinical value in medical practice decision making. As applied in clinical practice, biomarkers have their most anticipated applications in pharmacotherapeutic decisions in treatment selection and dosing, risk assessment, and stratification of populations for disease preemption and prevention. In the United States, challenges to the marketplace are presented by the lack of extensive experience with pathways for medical product review and of reimbursement systems that establish financial incentives for developing biomarkers as diagnostic assays. Clinical practice guidelines for biomarker application in many diseases are lacking, leaving clinicians uncertain about what roles biomarker assays play in disease management. In addition, few studies have been done to evaluate the cost-effectiveness of including biomarkers and molecular diagnostics in disease management [31]. The lack of these key pieces in a system of modern health care can cripple plans for integrating valuable technologies into clinical practice.
Scientific setbacks have also occurred across the frontier of discovery and development. Among the notable instances was the use of pattern recognition of tandem mass spectrometric measurements of blood specimens from ovarian cancer patients. After enthusiastic support for the application in clinical settings, early successes were erased when technical errors and study design issues led to faulty assumptions about the findings. Across clinical development areas, deficiencies in clinical study design have left initial study findings unconfirmed, often due to overfitting of small sample populations and improper control of selection and design bias [32]. Commercial development of large-scale biology companies has also struggled to identify workable commercial models. Initial enthusiasm early in the decade about private marketing of genomic studies in disease models faltered as public data resources emerged. Corporate models for developing large proteomic databases faltered for lack of distinctive market value, little documented clinical benefit, and wide variability in the quality of clinical biospecimens. Evidence to support the clinical utility of many biomarkers as clinical diagnostics is difficult to establish, as a clinical trial infrastructure has not yet been created to validate candidate biomarkers for clinical practice. An obstacle has been access to well-characterized biospecimens coupled with clinical phenotype information. This has led to calls for centralized approaches to biospecimen collection and archiving to support molecular analysis and biomarker research [33]. Furthermore, the wide variety of tissue collection methods and of DNA and protein preparations for molecular analysis has been at the root of many problems of irreproducibility. Standards development and best practices have been represented as cornerstones in facilitating biomarker validation [34,35]. Similarly, reacting to the lack of reproducibility of findings in some studies, proposals have been made for standards in study design for biomarker validation for risk classification and prediction [36].
LOOKING FORWARD The next decade of biomarker research is promising, with a push toward more clinical applications to be anticipated. Key factors on the horizon that will be integral to clinical adoption are summarized in Table 3. The confluence of basic and translational research has set the stage for personalized medicine, a term of art now used widely to indicate that health care practices can be customized to meet specific biological and patient differences. The term was not part of the lexicon in 1997 but speaks to the consumer-directed aspects of biomedical research. Genomic services offered directly to consumers have emerged with the use of GWASs, although their clinical value and impact are not known. It is clear that the emergence of biomarkers in an ever-changing health care delivery system will in some fashion incorporate the consumer marketplace.
TABLE 3 Looking Ahead: Implementing Biomarkers in Clinical Care

Intellectual property policy
Phenotypic disease characterization
Clinical translation: biomarker validation and verification
Clinical trials stratification based on biological diversity
Surrogate endpoints: managing uncertainty and defining boundaries in medical practice
Data-sharing models
Dynamic forces in industry financing
Co-development of diagnostics and therapeutics
Clinical infrastructure for evidence development and clinical utility of diagnostics
Health information exchange and network services
Consumer genomic information services
Prospects for biomarkers to continue to play a major role in the transformation of pharmaceutical industry research remain high as new technology platforms, bioinformatics infrastructure, and credentialed biomarkers evolve. The emergence of a clearer role for federal regulators and increased attention to appraising the value of genomic-based diagnostics will help provide guideposts for investment and a landscape for clinical application. One can anticipate that genomics in particular will probably provide clinical benefit in chronic diseases and disorders, where multiple biomarker analyses reflect models and pathways of disease. An emerging clinical marketplace is evolving for the development and application of biomarker assays as clinical diagnostics. The pathway for laboratory-developed tests will probably evolve to include FDA oversight of certain tests with the added complexity of multiple variables integrated into index scoring approaches to assist in therapeutic selection. Clinical practice guidelines are beginning to emerge for the inclusion of biomarkers to guide stratification of patients and therapeutic decision making. The early impact of these approaches is now evident in oncology, cardiovascular and infectious diseases, and immune disorders. Advancing clinical biomarkers as a mainstay for improving the safety and quality of health care remains many years away. Clinical evaluation processes for diagnostics and targeted molecular therapy as a systematic approach have not yet been firmly established. Electronic health information, information from health plans, longitudinal data collection, and randomized clinical trials will need integration and coordination for effective implementation in medical decision making. In 2008, the first impact was felt of over-the-counter or electronically available consumer services based on genetic tests. Utilizing public and private genome-wide association databases, private-sector resources combining powerful search engines with family history information and SNP analysis developed consumer services that identify possible health risks. Although
the medical benefit of such services remains undocumented, the successful entry of several services and the growth of online commercial genomic services indicate interest among health-conscious citizens in understanding inherited disease risk. Another noteworthy factor that will probably play an important role in the next decade of clinical biomarker adoption is the development of standards and interoperability specifications for the health care delivery system and consumers. Interoperable health record environments will probably provide more flexibility and mobility of laboratory information and advantages for consumer empowerment in prevention and disease management. One of the most important and sweeping challenges for biomarkers is the development of intellectual property policies that bring opportunity and entrepreneurship into balance with meeting unmet market needs and delivering clinical value. Because single gene mutations or simple protein assays are unlikely, by themselves, to constitute discoveries equivalent to diagnostic tests or new clinical markers, the converging circles of technologies and knowledge will require new approaches to management if their combined value is to be realized in health care. Indeed, the overall importance of this notion is underscored more globally by Alan Greenspan, who noted that "arguably, the single most important economic decision our lawmakers and courts will face in the next twenty-five years is to clarify the rules of intellectual property" [37]. Overall, a decade's worth of work has charted a robust and vibrant course for biomarkers across the biomedical research and development landscape. Clinical applications of biomarkers in medical practice are coming more into focus through diagnostics and molecularly targeted therapies, but a long period may pass before biomarker-based medicine becomes a standard in all areas of health care practice.
REFERENCES

1. Ross JS, Fletcher JA, Linette GP, et al. (2003). The Her-2/neu gene and protein in breast cancer 2003: biomarker and target of therapy. Oncologist, 8:307–325.
2. Deininger M, Druker BJ (2003). Specific targeted therapy of chronic myelogenous leukemia with imatinib. Pharmacol Rev, 55:401–423.
3. Biomarkers Definitions Working Group (2001). Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther, 69:89–95.
4. De Gruttola VG, et al. (2001). Considerations in the evaluation of surrogate endpoints in clinical trials: summary of a National Institutes of Health workshop. Control Clin Trials, 22:485–502.
5. Downing GJ (ed.) (2000). Biomarkers and Surrogate Endpoints: Clinical Research and Applications. Elsevier Science, Amsterdam.
6. Sachidanandam R, Weissman D, Schmidt S, et al. (The International SNP Map Working Group) (2001). A map of human genome sequence variation containing 1.42 million single nucleotide polymorphisms. Nature, 409:928–933.
7. The International HapMap Consortium (2003). The International HapMap Project. Nature, 426:789–796.
8. The International HapMap Consortium (2007). A second generation human haplotype map of over 3.1 million SNPs. Nature, 449:851–861.
9. Klein RJ, Zeiss C, Chew EY, et al. (2005). Complement factor H polymorphism in age-related macular degeneration. Science, 308:385–389.
10. Sladek R, Rocheleau G, Rung J, et al. (2007). A genome-wide association study identifies novel risk loci for type 2 diabetes. Nature, 445:881–885.
11. Perry JR, Frayling TM (2008). New gene variants alter type 2 diabetes risk predominantly through reduced beta-cell function. Curr Opin Clin Nutr Metab Care, 11:371–378.
12. GAIN Collaborative Research Group (2007). New models of collaboration in genome-wide association studies: the Genetic Association Information Network. Nat Genet, 39(9):1045–1051.
13. Collins FS, Barker AD (2007). Mapping the cancer genome: pinpointing the genes involved in cancer will help chart a new course across the complex landscape of human malignancies. Sci Am, 296:50–57.
14. Conway M, McKinsey and Co. (2007). Personalized medicine: deep impact on the health care landscape. http://sequencing.hpcgg.org/PM/presentations/Tue_08_Conway_061120%20Michael%20Conway%20Harvard%20Pers%20Med%20Presentation.pdf (accessed Sept. 5, 2008).
15. Orr MS, Goodsaid F, Amur S, Rudman A, Frueh FW (2007). The experience with voluntary genomic data submissions at the FDA and a vision for the future of the voluntary data submission program. Clin Pharmacol Ther, 81:294–297.
16. FDA (2007). Guidance for Industry and FDA Staff: Pharmacogenetic tests and genetic tests for heritable markers. http://www.fda.gov/cdrh/oivd/guidance/1549.html.
17. Frueh FW, et al. (2008). Pharmacogenomic biomarker information in drug labels approved by the United States Food and Drug Administration: prevalence of related drug use. Pharmacotherapy, 28:992–998.
18. Bast RC, Thigpen JT, Arbuck SG, et al. (2007). Clinical trial endpoints in ovarian cancer: report of an FDA/ASCO/AACR public workshop. Gynecol Oncol, 107(2):173–176.
19. FDA. The critical path to new medical products. http://www.fda.gov/oc/initiatives/criticalpath/report2007.html.
20. FDA (2004). Challenge and opportunity on the critical path to new medical products. http://www.fda.gov/oc/initiatives/criticalpath/whitepaper.html (accessed Aug. 23, 2008).
21. FDA (2008). European Medicines Agency to consider additional test results when assessing new drug safety. http://www.fda.gov/bbs/topics/NEWS/2008/NEW01850.html.
22. Woosley RL, Cossman J (2007). Drug development and the FDA's Critical Path Initiative. Clin Pharmacol Ther, 81:129–133.
23. The Biomarkers Consortium (2008). On the critical path of drug discovery. Clin Pharmacol Ther, 83:361–364.
24. Public Library of Science. http://www.plos.org.
25. National Institutes of Health (2008). Revised policy on enhancing public access to archived publications resulting from NIH-funded research. NOT 08–033. http://grants.nih.gov/grants/guide/notice-files/not-od-08-033.html.
26. Thomson Reuters. BIOMARKERcenter. http://scientific.thomsonreuters.com/products/biomarkercenter/ (accessed Sept. 23, 2008).
27. Kessel M, Frank F (2007). A better prescription for drug-development financing. Nat Biotechnol, 25:859–866.
28. PricewaterhouseCoopers (2007). Personalized medicine: the emerging pharmacogenomics revolution. Global Technology Centre, Health Research Institute, San Jose, CA.
29. Trusheim MR, Berndt ER, Douglas FL (2007). Stratified medicine: strategic and economic implications of combining drugs and clinical biomarkers. Nat Rev Drug Discov, 6(4):287–293.
30. Phillips KA, Van Bebber S, Issa A (2006). Priming the pipeline: a review of the clinical research and policy agenda for diagnostics and biomarker development. Nat Rev Drug Discov, 5(6):463–469.
31. Phillips KA, Van Bebber SL (2004). A systematic review of cost-effectiveness analyses of pharmacogenomic interventions. Pharmacogenomics, 5(8):1139–1149.
32. Ransohoff DW (2005). Lessons from controversy: ovarian cancer screening and serum proteomics. J Natl Cancer Inst, 97:315–319.
33. Ginsburg GS, Burke TW, Febbo T (2008). Centralized biospecimen repositories for genetic and genomic research. JAMA, 299:1359–1361.
34. National Cancer Institute (2007). National Cancer Institute best practices for biospecimen resources. http://biospecimens.cancer.gov/practices/.
35. Thomson Reuters (2008). Establishing the standards for biomarkers research. http://scientific.thomsonreuters.com/pm/biomarkers_white_paper_0308.pdf (accessed Sept. 4, 2008).
36. Pepe MS, Feng Z, Janes H, Bossuyt PM, Potter JD (2008). Pivotal evaluation of the accuracy of a biomarker used for classification or prediction: standards for study design. J Natl Cancer Inst, eprint, Oct. 8.
37. Greenspan A (2007). The Age of Turbulence: Adventures in a New World. Penguin Press, New York.
3 ENABLING GO/NO GO DECISIONS J. Fred Pritchard, Ph.D. MDS Pharma Services, Raleigh, North Carolina
Mallé Jurima-Romet, Ph.D. MDS Pharma Services, Montreal, Quebec, Canada
UNDERSTANDING RISK There is no question that most people who know something about our industry consider developing a drug product to be a risky business. Usually, they mean that the investment of time and money is high while the chance of a successful outcome is low compared to other industries that create new products. Yet the rewards can be great, not only in terms of monetary return on investment (ROI) but also in the social value of contributing an important product to the treatment of human disease. Risk is defined as "the possibility of loss or injury" [1]. Inherent in the concept, therefore, is the mathematical sense of probability of occurrence of something unwanted. Everyday decisions and actions that people take are guided by conscious and unconscious assessments of risk, and we are comfortable with compartmentalized schemes in which we sense that a situation is very high, high, medium, low, or very low risk. We often deal with the concept of relative risk (e.g., in comparison to other options). Some risks can be defined in more absolute terms, such as a population measure based on trend analysis of prior incidence statistics (e.g., the current risk of postmenopausal Caucasian women in the United States being diagnosed with breast cancer).
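The arithmetic behind such absolute, population-based figures is simple compounding of a per-period probability. The sketch below is a minimal illustration in Python; the 0.4% annual incidence rate is a hypothetical value chosen for the example, not an actual breast cancer statistic.

    # Compound a constant annual incidence rate into a cumulative multi-year risk.
    # The annual rate below is purely illustrative, not an actual statistic.

    def cumulative_risk(annual_rate: float, years: int) -> float:
        """Probability of at least one event over `years`, assuming a constant,
        independent per-year probability."""
        return 1.0 - (1.0 - annual_rate) ** years

    annual_rate = 0.004  # hypothetical 0.4% chance of diagnosis per year
    for horizon in (1, 5, 10, 20):
        print(f"{horizon:>2}-year absolute risk: {cumulative_risk(annual_rate, horizon):.1%}")

Real incidence statistics are age- and cohort-dependent, so published lifetime-risk figures are built from age-specific rates rather than a single constant, but the compounding logic is the same.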
These types of population-based risk data, while often much debated in the scientific and popular press, do affect decision making at the individual level. In contrast to an individual's skill at assessing risks, decisions and actions taken during drug development require objective and systematic risk assessment by groups of people. There are many stakeholders involved in the development of a drug product. These include the specialists who do the science required in drug development. Also included are the investors and managers who make decisions about how limited resources will be used. Clinical investigators who administer the drug and the patients who agree to participate in clinical trials are stakeholders, as are the regulators and institutional review boards (IRBs) or ethics committees that approve use of an experimental drug in humans. Each stakeholder has his or her own unique perspective on risk. The prime focus of some is the business risk involved, including how much work and money are invested to progress the drug at each phase of development. On the other hand, IRBs, regulators, investigators, and patients are concerned primarily with the safety risk to the patient. Figure 1 depicts the major factors that contribute to three different perspectives of risk, broadly classified as risk to patient, business risk, and risk of therapeutic failure. Investigators and patients are concerned primarily with safety and efficacy factors associated with a particular therapy. Although regulators are concerned primarily with patient safety, regulations do affect how much development will be required for a particular drug candidate. The business risk can be greatly affected by the investment and development partners involved and the expectations set with owners and managers. Potential competitors in the marketplace affect risk, as does the available pool of money for development. The common factor for all three risk perspectives is the novelty
Figure 1 Major factors affecting three main perspectives of risk in drug development: business risk, risk to patient, and risk of therapeutic failure.
of the product. Drug candidates that address new therapeutic targets, while offering hope of improved efficacy, require more interactions with regulators and investigators, adding to development expense. In addition, because the target is unproven, there is a greater relative risk of therapeutic failure compared to proven pharmacological targets of disease intervention. Therefore, when attempting to express the risks involved in developing a drug, it is important to understand the varying perspectives of each major group of stakeholders. Before risk assessment can be fully appreciated in the context of decision making in drug development, it must be balanced with the perceived benefits of the product. For example, the patient and physician would tolerate a high level of risk to the patient if the potential benefits offered by a novel therapeutic addressed a life-threatening disease for which there is no effective treatment. In a similar way, a high degree of business risk may be tolerated by investors in novel drug candidates if the return on investment, if successful, were very high. Like risk, benefits are in the eye of the beholder. Ideally, on many occasions during drug development, each stakeholder is asked to assess his or her view of risk versus benefit based on current data. This assessment becomes part of the decision-making process that drives drug development in a logical and, hopefully, collaborative way. Effective decision making requires integrating these varying risk–benefit assessments in a balanced way. The decision gate approach is a useful way to integrate these needs into the drug development process.
DECISION GATES Drug development is a process that proceeds through several high-level decision gates, from identification of a potential therapeutic agent through to marketing a new drug product [2]. A decision gate is a good analogy. For a drug candidate to progress further in development, it must meet a set of criteria that have been agreed to by the decision makers before they will open the gate. It is "go/no go" because the future product life of the drug hangs in the balance. Once a new drug candidate has progressed through a decision gate, the organization should be committed to expend even greater resources (money and scientists' time) to do the studies that address the criteria for the next decision gate along the development path. Disciplined planning and decision making are required to leverage the value of the decision gate approach. An example of a decision grid applicable to the early phase of drug development is depicted in Table 1; a sketch of how such a grid can be evaluated follows below. Initially, a clear set of questions at each decision gate needs to be agreed upon and understood by the decision makers. In later stages of drug development these questions and criteria are often presented in a target product profile (TPP), which is, in essence, a summary of a drug development plan described in terms of labeling concepts. The FDA has formalized the agency's expectations for a TPP in a draft guidance document.
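Because a decision gate is, at bottom, a set of pre-agreed criteria that must all be satisfied before further investment, it can be expressed very compactly. The following sketch is hypothetical: the criteria, thresholds, and data package are invented for illustration and are not drawn from any actual decision grid or TPP.

    # A decision gate modeled as named go/no go criteria over a data package.
    # All criteria, thresholds, and values here are hypothetical illustrations.

    gate_criteria = {
        "rodent NOAEL >= 10 mg/kg":   lambda d: d["rodent_noael_mg_kg"] >= 10,
        "dog NOAEL >= 10 mg/kg":      lambda d: d["dog_noael_mg_kg"] >= 10,
        "no genotoxicity signal":     lambda d: not d["genotoxic"],
        "no cardiovascular findings": lambda d: not d["cv_findings"],
    }

    candidate = {  # hypothetical data package for one drug candidate
        "rodent_noael_mg_kg": 25,
        "dog_noael_mg_kg": 8,
        "genotoxic": False,
        "cv_findings": False,
    }

    results = {name: check(candidate) for name, check in gate_criteria.items()}
    for name, passed in results.items():
        print(f"{'PASS' if passed else 'FAIL'}  {name}")
    print("Gate decision:", "GO" if all(results.values()) else "NO GO")

The value of the exercise is less the code than the discipline it enforces: every criterion must be stated unambiguously enough to be checked, which is exactly the agreement the decision makers need before the gate is reached.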
TABLE 1 Example decision grid for early drug development, comparing candidates against go/no go criteria such as rodent and dog NOAELs (e.g., >10 mg/kg versus >5 mg/kg), genotoxicity findings, and cardiovascular effects.

Finally, injury to the mucosa of the gastrointestinal tract occurred at lower doses and exposures in dogs than in monkeys (based on dose-range-finding studies), indicating the dog as the more sensitive nonrodent species. Pivotal one-month oral toxicity studies, including one-month reversal phases, were conducted in beagle dogs and Sprague–Dawley rats to support submission of an investigational new drug (IND) application to the U.S. Food and Drug Administration. A list of toxicology studies of PD0325901 conducted prior to initiation of human testing is presented in Table 1. Upon completion of the first two-week dose-range-finding study in rats, a significant and unique toxicity was observed that involved mineralization of vasculature (Figure 2) and various soft tissues (i.e., ectopic or systemic mineralization), as determined by routine light-microscopic evaluation. In a follow-up study in rats, dysregulation of serum calcium and phosphorus homeostasis and systemic mineralization occurred in a time- and dose-dependent manner. This toxicity was not observed in dogs or monkeys, despite systemic exposures to PD0325901 more than 10-fold higher than those associated with mineralization in rats and pharmacologic inhibition of
phosphorylated MAPK in canine or monkey tissue (demonstrating biochemical activity of PD0325901 at the target protein, i.e., MEK). Various investigative studies were conducted to examine the time course and potential mechanism of systemic mineralization in rats and to identify biomarkers that could be used to monitor for this effect in clinical trials. Next we describe the key studies conducted to investigate this toxicity, the results obtained, and how the nonclinical data were utilized to evaluate the safety risk of the compound, select a safe starting dose for a phase I trial, and provide measures to ensure patient safety during clinical evaluation of PD0325901 in cancer patients.

TABLE 1 Summary of Toxicology Studies Conducted with PD0325901a

Acute and escalating dose
  Single dose in rats
  Single dose in rats, IVb
  Escalating dose in dogs
  Escalating dose in dogs, IV
  Escalating dose in monkeys
Safety pharmacology
  Neurofunctional evaluation in rats
  Neurofunctional evaluation in rats, IV
  Cardiovascular effects in monkeys
  Pulmonary effects in rats
  Purkinje fiber assay
  HERG assay
Nonpivotal repeated-dose studies
  2-Week dose-range finder in rats
  Exploratory 2-week study in rats
  2-Week dose-range finder in dogs
  2-Week dose-range finder in monkeys
Pivotal repeated-dose studies
  One month in rats (plus one-month reversal phase)
  One month in dogs (plus one-month reversal phase)
Pivotal genetic toxicity studies
  Bacterial mutagenicity
  Structural chromosome aberration
  In vivo micronucleus in rats
Special toxicity studies
  Pharmacodynamic and toxicokinetic in rats, oral and IV
  Time course and biomarker development in rats
  Serum chemistry reversibility study in rats
  Investigative study in mice
  Enantiomer (R and S) study in rats
  PD0325901 in combination with pamidronate or Renagel in rats

a All animal studies were conducted by oral gavage unless otherwise indicated.
b IV, intravenous (bolus).
TOXICOLOGY STUDIES
Figure 2 Mineralization of the aorta in a male rat administered PD0325901 at 3 mg/kg in a dose-range-finding study. Arrows indicate mineral in the aorta wall. Hematoxylin and eosin–stained tissue section. (See insert for color reproduction of the figure.)
At the beginning of the toxicology program for PD0325901, a two-week oral dose-range-finding study was conducted in male and female rats in which daily doses of 3, 10, and 30 mg/kg (18, 60, and 180 mg/m2, respectively) were administered. Mortality occurred in males at ≥3 mg/kg and females at ≥10 mg/kg, with toxicity occurring to a greater extent in males at all dose levels. Increased serum levels of phosphorus (13 to 69%) and decreased serum total protein (12 to 33%) and albumin (28 to 58%) were seen at all doses. Light-microscopic evaluation of formalin-fixed and hematoxylin- and eosin-stained tissues was performed. Mineralization occurred in the aorta (Figure 2) and coronary, renal, mesenteric, gastric, and pulmonary vasculature of males at ≥3 mg/kg and of females at ≥10 mg/kg. Parenchymal mineralization with associated degeneration occurred in the gastric mucosa and muscularis, intestines (muscularis, mucosa, submucosa), lung, liver, renal cortical tubules, and/or myocardium at the same doses. Use of the von Kossa histology stain indicated the presence of calcium in the mineralized lesions. Vascular/parenchymal mineralization and degeneration were generally dose related in incidence and severity. PD0325901 produced increased thickness (hypertrophy) of the femoral growth plate (physis) in both sexes at all doses, and degeneration and necrosis of the femoral metaphysis in males at ≥3 mg/kg and females at 30 mg/kg. In addition, skin ulceration, hepatocellular necrosis, decreased crypt goblet cells, reduced hematopoietic elements, and ulcers of the cecum and duodenum were observed.
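The paired mg/kg and mg/m2 doses quoted above and throughout this chapter reflect standard body-surface-area scaling, mg/m2 = mg/kg × km, using the conventional species km factors familiar from FDA starting-dose guidance (rat km = 6). A minimal sketch reproducing the conversions:

    # Body-surface-area dose scaling: dose_mg_m2 = dose_mg_kg * km.
    # km values are the conventional species factors used in dose conversion
    # (e.g., in FDA starting-dose guidance); the rat factor of 6 gives the
    # mg/m2 values quoted in the text.

    KM = {"mouse": 3, "rat": 6, "monkey": 12, "dog": 20, "human": 37}

    def to_mg_per_m2(dose_mg_kg: float, species: str) -> float:
        return dose_mg_kg * KM[species]

    for dose in (3, 10, 30):  # rat doses from the dose-range-finding study
        print(f"{dose} mg/kg (rat) = {to_mg_per_m2(dose, 'rat'):g} mg/m2")
    # -> 18, 60, and 180 mg/m2, matching the parenthetical values above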
Systemic mineralization of the vasculature and soft tissues was the most toxicologically significant finding of this study. At this time, it was not known whether the hyperphosphatemia was due to decreased renal clearance (York and Evans, 1996) and/or related to the mineralization. However, hyperphosphatemia and an elevated serum calcium–phosphorus (Ca × P) product can result in vascular and/or soft tissue mineralization (Spaulding and Walser, 1970; Block, 2000; Giachelli et al., 2001). In addition, morphologic findings similar to those seen in this study are observed in various animal species (e.g., dogs, horses, pigs, rats) with vitamin D toxicosis and altered calcium homeostasis (Grant et al., 1963; Spangler et al., 1979; Harrington and Page, 1983; Long, 1984). Tissue mineralization is observed in the aorta, various arteries, myocardium, gastric mucosa, and renal tubules, along with other soft tissues in these animals. An exploratory two-week oral toxicity study was next conducted in male and female rats to further investigate the toxicities observed in the initial two-week dose-range finder. The objectives of this study were to identify a minimal or no-adverse-effect level and to provide toxicity, toxicokinetic, and pharmacodynamic data to aid in dose selection for future studies. In addition, an attempt was made to assess whether alterations in phosphorus and calcium homeostasis occur and whether changes can be monitored as potential biomarkers of toxicity. Doses tested in this study were 0.3, 1, or 3 mg/kg (1.8, 6, or 18 mg/m2, respectively), and animals were dosed for up to 14 days. Cohorts of animals (5/sex/group) were necropsied on days 4 and 15, and hematology, serum biochemistry, plasma intact parathyroid hormone (PTH), and urinary parameters were evaluated. Urinalysis included measurement of calcium, phosphorus, and creatinine levels. Select tissues were examined microscopically, and samples of liver and lung were evaluated for total and phosphorylated MAPK (pMAPK) levels by Western blot analysis to evaluate the pharmacologic activity of PD0325901 (method described in Brown et al., 2007). Satellite animals were included for plasma drug-level analyses on day 8. In this study, systemic mineralization occurred at ≥0.3 mg/kg in a dose-dependent fashion, was first observed on day 4, and was more severe in males. By day 15, mineralization was generally more pronounced and widespread. Skeletal changes included hypertrophy of the physeal zone in males at ≥1 mg/kg and in females at 3 mg/kg, and necrosis of bony trabeculae and marrow elements with fibroplasia, fibro-osseous proliferation, and/or localized hypocellularity at ≥1 mg/kg in males and 3 mg/kg in females. The minimal plasma PD0325901 AUC(0–24) values associated with toxicity were 121 to 399 ng · h/mL, well below the exposure levels associated with antitumor efficacy in murine models (AUC of 1180 to 1880 ng · h/mL). Pharmacologic inhibition of tissue pMAPK occurred at ≥1 mg/kg and was not observed in the absence of toxicity. The gastric fundic mucosa appeared to be the most sensitive tissue for evaluating systemic mineralization, which probably resulted from alterations in serum calcium and phosphorus homeostasis. This was based on the following observations.
TABLE 2  Mean Clinical Chemistry Changes in Male Rats Administered PD0325901 for Up to 2 Weeks

                                           PD0325901
Parameter                   Day   Control   0.3 mg/kg   1 mg/kg   3 mg/kg
Serum phosphorus (mg/dL)     4     12.90      13.08      14.48     16.24a
                            15     11.30      11.56      12.88     13.62a
Serum calcium (mg/dL)        4     10.58      10.36      10.10     10.16
                            15     10.38      10.36      10.52     10.36
Serum albumin (g/dL)         4      2.74       2.56       2.10a     2.04a
                            15      2.56       2.54       2.36a     1.98a
Plasma PTH (pg/mL)b          4      492        297        114a      155a
                            15     1099        268        457       115a

a p < 0.01 vs. control; n = 5/group.
b Intact parathyroid hormone.
and albumin was decreased 17 to 26% at ≥1 mg/kg (Table 2, male data only). In addition, PTH levels were decreased in a dose-dependent fashion (60 to 77%) at ≥1 mg/kg. On day 15, phosphorus levels were increased 21% in males at 3 mg/kg, and albumin was decreased 8 to 32% at ≥0.3 mg/kg. PTH levels were decreased 77 to 89% at 3 mg/kg. Changes in urinary excretion of calcium and phosphorus were observed in both sexes at ≥1 mg/kg and included increased excretion of phosphorus on day 15. Although increases in excretion of calcium were observed on day 4 in females, males exhibited decreases in urinary calcium. In this study, PD0325901 administration resulted in significantly decreased levels of serum albumin without changes in serum (total) calcium levels (Payne et al., 1979; Meuten et al., 1982; Rosol and Capen, 1997). This indicates that free, non-protein-bound calcium levels were increased. Hyperphosphatemia and hypercalcemia result in an increased Ca × P product, which is associated with induction of vascular mineralization (Block, 2000; Giachelli et al., 2001). The changes observed in urinary excretion of calcium and phosphorus probably reflected the alterations in serum levels.

After completion of the two studies in rats described above, it was concluded that PD0325901 produces significant multiorgan toxicities in rats, with no margin between the plasma drug levels associated with antitumor efficacy or pharmacologic inhibition of pMAPK (as an index of MEK inhibition) and those associated with toxicity. Systemic mineralization was considered the preclinical toxicity of greatest concern, due to the severity of the changes observed and the expectation of irreversibility, and the data suggested that it was related to a dysregulation in serum phosphorus and calcium homeostasis. Furthermore, skeletal lesions were seen in the rat studies that were similar to those reported with vitamin D toxicity and may be related to the calcium–phosphorus dysregulation. In concurrent toxicology studies in dogs and monkeys, neither systemic
mineralization nor skeletal changes were observed, despite higher plasma drug exposures, lethal doses, or pharmacologic inhibition of MEK. Therefore, the following questions were posed regarding PD0325901-induced systemic mineralization: (1) What is a potential mechanism? (2) Is this toxicity relevant to humans or rat-specific? (3) Can this toxicity be monitored clinically?

The ability of an anticancer agent that modulates various signal transduction pathways to produce dysregulation in serum calcium homeostasis is not unprecedented. 8-Chloro-cAMP is an experimental compound that has been shown to modulate various protein kinase signal transduction pathways involved in neoplasia. In preclinical models, this compound produced growth inhibition and increased differentiation in cancer cells (Ally et al., 1989). In a clinical trial, 8-chloro-cAMP was administered to patients with advanced cancer via intravenous infusion and produced a dose-limiting toxicity of reversible hypercalcemia, with serum calcium increased by up to approximately 40% (Saunders et al., 1997). The drug exerted a parathyroid hormone-like effect in these patients, resulting in increased synthesis of 1,25-dihydroxyvitamin D (up to 14 times the baseline value) as a mechanism for the hypercalcemia. Intravenous administration of 8-chloro-cAMP to beagle dogs also resulted in hypercalcemia (serum calcium increased 37 to 46%), indicating similar actions across species (Brown et al., 2000). Experience with this compound was important with respect to designing investigative studies with PD0325901 in which the hormonal control of serum calcium and phosphorus was evaluated.

An investigative study was designed in rats to examine the time course for tissue mineralization in target organs and to determine whether clinical pathology changes occur prior to, or concurrent with, lesion development (Brown et al., 2005a); such clinical pathology parameters might therefore serve as biomarkers for systemic mineralization. Male rats (15/group) were used due to their increased sensitivity to this toxicity compared with females. Oral doses tested were 1, 3, or 10 mg/kg (6, 18, or 60 mg/m2, respectively). Five animals per group were necropsied on days 2, 3, or 4 following 1, 2, or 3 days of treatment, respectively. Clinical laboratory tests conducted at necropsy included serum osteocalcin, urinalysis, and plasma intact PTH, calcitonin, and 1,25-dihydroxyvitamin D. Lung samples were evaluated for inhibition of pMAPK, and microscopic evaluations of the aorta, distal femur with proximal tibia, heart, and stomach were conducted for all animals.

Administration of PD0325901 resulted in inhibition of pMAPK in lung at all doses, demonstrating pharmacologic activity of the drug. On day 2, mineralization of the gastric fundic mucosa and multifocal areas of necrosis of the ossifying zone of the physis were present only at 10 mg/kg. Necrosis of the metaphysis was present at ≥3 mg/kg. Serum phosphorus levels increased 33 to 43% and 1,25-dihydroxyvitamin D increased two- to sevenfold at all doses (Table 3). Osteocalcin increased 14 to 18%, and serum albumin decreased 8 to 14% at ≥3 mg/kg (Table 4). Osteocalcin is a major noncollagenous protein of bone matrix and is synthesized by osteoblasts (Fu and Muller, 1999). Changes in serum osteocalcin can reflect alterations in bone turnover (resorption/
TABLE 3  Mean Serum Phosphorus and Plasma 1,25-Dihydroxyvitamin D in Male Rats Administered PD0325901 for Up to 3 Days of Dosing

                                                  PD0325901
Parameter                         Day  Control   1 mg/kg    3 mg/kg    10 mg/kg
Serum phosphorus (mg/dL)           2    12.06    16.10*     17.22*     16.84* M
                                   3    11.48    12.96*     15.62* M   19.02* M
                                   4    11.34    13.18* M   15.40* M   21.70* M
1,25-Dihydroxyvitamin D (pg/mL)    2     309      856*      1328*      2360* M
                                   3     257      396        776* M    1390* M
                                   4     191      236 M      604* M    1190* M

M, systemic mineralization observed; *, p < 0.01 vs. control; n = 5/group.
TABLE 4  Mean Serum Calcium and Albumin in Male Rats Administered PD0325901 for Up to 3 Days of Dosing

                                        PD0325901
Parameter                Day  Control   1 mg/kg    3 mg/kg    10 mg/kg
Serum calcium (mg/dL)     2    10.42    11.04      11.00      10.66 M
                          3     9.60    10.58      10.64 M    10.58 M
                          4    10.44    10.44 M    10.58 M     7.24** M
Serum albumin (g/dL)      2     3.08     2.92       2.82*      2.66** M
                          3     2.88     2.68       2.62 M     2.34** M
                          4     2.90     2.34** M   2.34** M   1.98** M

M, systemic mineralization observed; *, p < 0.05 vs. control; **, p < 0.01 vs. control; n = 5/group.
formation). Serum osteocalcin appears to reflect the excess of synthesized protein not incorporated into bone matrix, or protein released during bone resorption (Ferreira and Drueke, 2000). The increases in osteocalcin seen in this study may have reflected bone necrosis.

On day 3, mineralization of the gastric fundic mucosa, gastric and cardiac arteries, aorta, and heart was present in all rats at 10 mg/kg. Myocardial necrosis was also seen at 10 mg/kg. Mineralization of the gastric fundic mucosa was present in all rats at 3 mg/kg, and focal, minimal myocyte necrosis was present in one rat at 3 mg/kg. Thickening of the physeal zone of hypertrophying cartilage, and necrosis within the physeal zone of ossification and in the metaphyseal region of the femur and tibia, were seen in all animals at 10 mg/kg. Necrosis within the metaphyseal region was also present at 3 mg/kg. Serum phosphorus increased 13 to 66% at all doses and 1,25-dihydroxyvitamin D increased two- to fourfold at ≥3 mg/kg. Osteocalcin increased 12 to 28% at ≥3 mg/kg, and serum albumin was decreased (7 to 19%) at all doses. Urine calcium increased
fivefold at 10 mg/kg, resulting in a fivefold increase in the urine calcium/creatinine ratio. This increase may have represented an attempt to restore mineral homeostasis in response to the hypercalcemia. In addition, hypercalciuria can occur with vitamin D intoxication (Knutson et al., 1997).

On day 4, mineralization of the gastric fundic mucosa, gastric muscularis, gastric and cardiac arteries, aorta, and heart was present in the majority of animals at ≥3 mg/kg. Myocardial necrosis with accompanying neutrophilic inflammation was also seen in all rats at 10 mg/kg and in one animal at 3 mg/kg. Mineralization of the gastric fundic mucosa was present at 1 mg/kg. Thickening of the physeal zone of hypertrophying cartilage, and necrosis within the physeal zone of ossification and/or in the metaphyseal region of the femur and tibia, were present at ≥3 mg/kg. At 1 mg/kg, thickening of the physeal zone of hypertrophying cartilage and metaphyseal necrosis were observed. Serum phosphorus increased 16 to 91% at all doses and 1,25-dihydroxyvitamin D increased two- to fivefold at ≥3 mg/kg. Osteocalcin increased 14 to 24% at ≥3 mg/kg, and serum albumin decreased 19 to 32% at all doses. At 10 mg/kg, serum calcium was decreased 31% (possibly resulting from the hypercalciuria on day 3) and calcitonin was decreased by 71%. Calcitonin is secreted by the thyroid gland and acts to lower serum calcium levels by inhibiting bone resorption (Rosol and Capen, 1997). The decrease in calcitonin may have resulted from feedback inhibition due to low serum calcium levels at 10 mg/kg on day 4. Urine creatinine, calcium, and phosphorus were increased at 10 mg/kg. This resulted in decreases of 41% and 21% in the calcium/creatinine and phosphorus/creatinine ratios, respectively.

This four-day investigative study in rats yielded several important conclusions that were critical for supporting continued development of PD0325901. In the study, PD0325901 at ≥1 mg/kg resulted in systemic mineralization and skeletal changes in a dose- and time-dependent fashion. These changes were seen after a single dose at 10 mg/kg and after three doses at 1 mg/kg. Elevations in serum phosphorus and plasma 1,25-dihydroxyvitamin D occurred prior to tissue mineralization. Although serum albumin was decreased throughout the study, calcium remained unchanged, consistent with an increase in non-protein-bound calcium. This study set the stage for the proposal of using serum phosphorus and calcium measurements as clinical laboratory tests or biomarkers for PD0325901-induced systemic mineralization. Whereas measurement of plasma 1,25-dihydroxyvitamin D is technically complex and costly, evaluation of serum calcium and phosphorus is rapid and performed routinely in the clinical laboratory, with historical reference ranges readily available. Although the data obtained with urinalysis were consistent with dysregulation of calcium and phosphorus homeostasis, concerns existed as to whether specific and reproducible urinalysis parameters could be developed for monitoring the safety of PD0325901. Based on the data obtained thus far, hyperphosphatemia appeared to be the primary factor eliciting tissue mineralization, and serum phosphorus was proposed as the key analyte for monitoring.
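The inference drawn here, that unchanged total calcium in the face of falling albumin implies increased free calcium, can be illustrated with the albumin adjustment described by Payne et al. (1979), cited above. The sketch below is illustrative only: the 0.8 mg/dL-per-g/dL adjustment factor is the convention for human serum, and its application to the rat values in Table 2 is an assumption made purely to show the direction of the effect.

```python
def albumin_adjusted_calcium(total_ca_mg_dl, albumin_g_dl, ref_albumin_g_dl):
    """Albumin adjustment of total calcium (after Payne et al., 1979):
    add 0.8 mg/dL of calcium for every 1 g/dL that albumin falls below
    the reference value, approximating what total calcium would be at
    normal protein binding. The 0.8 factor is the human-serum convention
    and is assumed here purely for illustration with rat data."""
    return total_ca_mg_dl + 0.8 * (ref_albumin_g_dl - albumin_g_dl)

# Day-15 male rat values from Table 2: total calcium is essentially
# unchanged (10.36 mg/dL at 3 mg/kg vs. 10.38 in controls) while albumin
# falls from 2.56 to 1.98 g/dL. Adjusting against the control albumin:
print(albumin_adjusted_calcium(10.38, 2.56, 2.56))  # 10.38 (control)
print(albumin_adjusted_calcium(10.36, 1.98, 2.56))  # ~10.82 (3 mg/kg)
# Unchanged total calcium with lower albumin implies a higher free fraction.
```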
An investigative study was conducted in male rats to assess the reversibility of serum chemistry changes following a single oral dose of PD0325901 (Brown et al., 2005a). The hypothesis was that serum phosphorus levels would return to control levels in the absence of drug administration. Male rats (10/group) received single oral doses at 1, 3, or 10 mg/kg, with controls receiving vehicle alone. Blood was collected on days 2, 3, 5, and 8 for serum chemistry analysis. Hyperphosphatemia (serum phosphorus increased up to 58%) and minimal increases in calcium occurred at all doses on days 2 and 3. Albumin was decreased at 10 mg/kg. These changes were completely reversible within a week. This study demonstrated that the increases in serum phosphorus and calcium induced by PD0325901 are reversible following cessation of dosing. Although a single dose of 10 mg/kg produces systemic mineralization in rats, withdrawal of dosing results in normalization of serum calcium and phosphorus levels, indicating that the homeostatic mechanisms controlling these electrolytes remain intact. The results of this study were not unexpected. Oral administration to dogs of the vitamin D analogs dihydrotachysterol and Hytakerol (dihydroxyvitamin D2-II) results in hypercalcemia that is reversible following termination of dosing (Chen et al., 1962). Reversal of hypercalcemia and hypercalciuria has also been demonstrated in humans following cessation of dosing of various forms of vitamin D (calciferol, dihydrotachysterol, 1-α-hydroxycholecalciferol, or 1-α,25-dihydroxycholecalciferol) (Kanis and Russell, 1977).

Another investigative study was conducted in male rats to determine whether pamidronate (a bisphosphonate) or Renagel (sevelamer HCl; a phosphorus binder) would inhibit tissue mineralization induced by PD0325901 by preventing hyperphosphatemia. Bisphosphonates inhibit bone resorption and in so doing modulate serum calcium and phosphorus levels. Renagel is a nonabsorbable resin that contains polymers of allylamine hydrochloride, which form ionic and hydrogen bonds with phosphate in the gut. Rats received daily oral doses of PD0325901 at 3 mg/kg for 14 days with or without co-treatment with pamidronate or Renagel. Pamidronate was given intravenously twice, at 1.5 mg/kg one day prior to PD0325901 dosing and on day 6. Renagel was given daily as 5% of the diet beginning one day prior to PD0325901 dosing. Treatment groups consisted of oral vehicle alone, PD0325901 alone, pamidronate alone, Renagel alone, PD0325901 + pamidronate, and PD0325901 + Renagel. PD0325901 plasma AUC(0–24) values were 11.6, 9.17, and 4.34 μg · h/mL in the PD0325901 alone, PD0325901 + pamidronate, and PD0325901 + Renagel groups, respectively. Administration of PD0325901 alone resulted in hyperphosphatemia on days 3 and 15, which was inhibited by co-treatment with pamidronate or Renagel on day 3 only. PD0325901 alone resulted in systemic mineralization and skeletal changes consistent with those seen in previous rat studies. Coadministration with either pamidronate or Renagel protected against systemic mineralization on day 3 only. Bone lesions were decreased with the co-treatments. Inhibition of toxicity with Renagel may have been due in part to decreased systemic drug exposure. However, the inhibition of toxicity
with pamidronate supports the role of calcium–phosphorus dysregulation in PD0325901-induced systemic mineralization, because the inhibition of systemic mineralization observed on day 3 coincided with attenuation of the rise in serum phosphorus in these animals.

A two-week oral dose-range-finding study was conducted in dogs in which the doses tested were 0.2, 0.5, and 1.5 mg/kg (4, 10, and 30 mg/m2, respectively). A two-week oral dose-range-finding study was also conducted in cynomolgus monkeys at doses of 0.5, 3, and 10 mg/kg (6, 36, and 120 mg/m2, respectively). In addition to standard toxicology and toxicokinetic endpoints, inhibition of tissue and peripheral blood mononuclear cell pMAPK was determined to assess the pharmacologic activity of PD0325901 in both studies. PTH and 1,25-dihydroxyvitamin D were evaluated in the monkey study. In both studies, mortality occurred at ≥0.5 mg/kg (dogs) and at 10 mg/kg (monkeys) due to injury to the gastrointestinal tract mucosa, inhibition of pMAPK occurred at all doses, and systemic mineralization was not observed. Increases in serum phosphorus were seen in moribund animals and/or were associated with renal hypoperfusion (resulting from emesis, diarrhea, and dehydration). These elevations in phosphorus were considered secondary to renal effects and were not associated with changes in serum calcium. Toxicologically significant increases in serum phosphorus or calcium were not evident at nonlethal doses in dogs or monkeys. In the two-week monkey study, a dose-related increase in 1,25-dihydroxyvitamin D was observed on day 2 only (after a single dose) at ≥3 mg/kg. This increase did not occur on days 7 or 15 and was not associated with changes in serum phosphorus or calcium, or with systemic mineralization. Therefore, there did not appear to be toxicologic significance to the day 2 increase in 1,25-dihydroxyvitamin D in monkeys.
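The mg/kg-to-mg/m2 conversions quoted in parentheses throughout these studies follow standard body-surface-area scaling. A minimal sketch, assuming the conventional km factors commonly tabulated for interspecies dose conversion (the factor values below are an assumption, not taken from this chapter), reproduces the conversions in the text:

```python
# Commonly tabulated km factors (kg of body weight per m2 of body
# surface area) used to interconvert mg/kg and mg/m2 doses.
KM = {"mouse": 3, "rat": 6, "monkey": 12, "dog": 20, "human": 37}

def mg_per_m2(dose_mg_per_kg, species):
    """Convert a mg/kg dose to mg/m2 via the species km factor."""
    return dose_mg_per_kg * KM[species]

# These reproduce the parenthetical conversions quoted in the text:
assert mg_per_m2(3, "rat") == 18        # rat: 3 mg/kg -> 18 mg/m2
assert mg_per_m2(1.5, "dog") == 30      # dog: 1.5 mg/kg -> 30 mg/m2
assert mg_per_m2(10, "monkey") == 120   # monkey: 10 mg/kg -> 120 mg/m2
```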
DISCUSSION

Mineralization of the vasculature and various soft tissues (systemic mineralization) was observed in toxicology studies in rats in a time- and dose-dependent manner. This change was consistent with calcium–phosphorus deposition within the vascular wall and parenchyma of tissues such as the stomach, kidney, aorta, and heart. The stomach appeared to be the most sensitive tissue, since mineralization of the gastric fundic mucosa occurred prior to the onset of mineralization in other tissues. Male rats were consistently more sensitive to this toxicity than were female rats. In the pivotal one-month toxicity study in rats, the no-effect level for systemic mineralization was 0.1 mg/kg (0.6 mg/m2) in males and 0.3 mg/kg (1.8 mg/m2) in females, which were associated with PD0325901 steady-state plasma AUC(0–24) values of 231 and 805 ng · h/mL, respectively. Systemic mineralization was not observed in dogs or monkeys, despite pharmacologic inhibition of tissue pMAPK levels (>70%), administration of lethal doses, and exposures greater than 10-fold those that induced mineralization in rats (10,600 ng · h/mL in dogs and
up to 15,000 ng · h/mL in monkeys). Systemic mineralization was also not observed in mice, despite administration of PD0325901 at doses up to 50 mg/kg (150 mg/m2).

Systemic mineralization observed in rats following administration of PD0325901 is consistent with vitamin D toxicity due to dysregulation in serum calcium and phosphorus homeostasis (Grant et al., 1963; Rosenblum et al., 1977; Kamio et al., 1979; Mortensen et al., 1996; Morrow, 2001). A proposed hypothesis for the mechanism of this toxicity is depicted in Figure 3. Elevated serum phosphorus levels (hyperphosphatemia) and decreased serum albumin were observed consistently in rats administered PD0325901. Although serum albumin levels are decreased in rats treated with PD0325901, calcium values typically remain unchanged or slightly elevated in these animals, indicating that free, non-protein-bound calcium is increased (Payne et al., 1979; Meuten et al., 1982; Rosol and Capen, 1997). Decreased parathyroid hormone (PTH) levels were observed in the rat studies. PTH plays a central role in the hormonal control of serum calcium and phosphorus. PTH is produced by the parathyroid gland and induces conversion of 25-hydroxyvitamin D (which is produced in the liver) to 1,25-dihydroxyvitamin D (calcitriol) in the kidney. 1,25-Dihydroxyvitamin D elicits increased absorption of calcium from the gastrointestinal tract. In addition, PTH mobilizes calcium and phosphorus from bone by increasing bone resorption, increases renal absorption of calcium, and increases renal excretion of phosphorus (in order to regulate
[Figure 3 (schematic) Hypothesis for the mechanism for systemic mineralization in the rat following PD0325901 administration: PD0325901 → ↑1,25-dihydroxyvitamin D (kidney, from 25-hydroxyvitamin D) → ↑gastrointestinal absorption of Ca and P → ↓PTH and ↑[Ca] × [P] → systemic mineralization.]
serum phosphorus levels). Elevations in serum calcium typically elicit decreased PTH levels as a result of the normal control (negative feedback loop) of this endocrine system (Rosol and Capen, 1997). The decreases in PTH observed in the rats were believed to be due to the elevations in serum calcium (hypercalcemia). Hyperphosphatemia in the presence of normo- or hypercalcemia can result in an increased Ca × P product, which is associated with systemic mineralization (Block, 2000). Hyperphosphatemia was also observed in rats administered PD176067, a reversible and selective inhibitor of fibroblast growth factor receptor tyrosine kinase. In these animals, vascular and soft tissue mineralization also occurs (aorta and other arteries, gastric fundic mucosa, myocardium, renal tubules), probably due to an increased Ca × P product (Brown et al., 2005b).

Administration of PD0325901 to rats resulted in significantly increased levels of plasma 1,25-dihydroxyvitamin D. The mechanism for this action is not known but is not believed to involve a metabolite of PD0325901. 1,25-Dihydroxyvitamin D is the most potent form of vitamin D and the primary metabolite responsible for regulating serum calcium and phosphorus. Vitamin D is converted to 25-hydroxyvitamin D in the liver and then 1-hydroxylated to 1,25-dihydroxyvitamin D in the renal tubules. 1,25-Dihydroxyvitamin D acts by increasing absorption of calcium and phosphorus from the gastrointestinal tract, and can increase calcium and phosphorus reabsorption by the renal tubules. Hyperphosphatemia and increased plasma 1,25-dihydroxyvitamin D levels in rats occurred 1 to 2 days prior to the detection of tissue mineralization at doses ≤3 mg/kg (18 mg/m2).

Administration of PD0325901 to rats resulted in bone lesions that included necrosis of the metaphysis and the ossifying zone of the physis, and thickening of the zone of hypertrophying cartilage of the physis. The expansion of chondrocytes in the physis may be a response to the metaphyseal necrosis and loss of osteoprogenitor cells. These changes are characterized by localized injury to bone that appears to be due to local ischemia and/or necrosis. Skeletal vascular changes may be present in these animals, resulting in disruption of endochondral ossification. Skeletal lesions, including bone necrosis, can result from vitamin D intoxication (Haschek et al., 1978). The skeletal lesions observed in rats administered PD0325901 are similar to those reported with vitamin D toxicity, which provides additional evidence that toxicity occurred via induction of 1,25-dihydroxyvitamin D. Bone lesions similar to those observed in rats were not seen in dogs, monkeys, or mice administered PD0325901.

In summary, PD0325901-induced systemic mineralization in the rat results from a dysregulation in serum phosphorus and calcium homeostasis. This dysregulation appears to result from toxicologically significant elevations in plasma 1,25-dihydroxyvitamin D levels following drug administration. Based on the toxicology data, rats are uniquely sensitive to this toxicity. A summary of the primary target organ toxicities observed in the preclinical studies is presented in Table 5.
TABLE 5  Primary Target Organ Toxicities Observed in Preclinical Studies

                                    Species
Organ System                 Rat    Dog    Monkey
Gastrointestinal tract       ×a     ×      ×
Skin                         ×      ×      ×
Systemic mineralizationb     ×      —      —
Bone                         ×      —      —
Liver                        ×      —      —
Gallbladder                  n/a    —      ×

a ×, toxicity observed.
b Includes vascular (aorta, arteries) and soft tissue mineralization (e.g., stomach, heart, kidneys).
Toxicity to the skin (epidermal lesions) and gastrointestinal tract (primarily ulcers/erosions in the mucosa) was observed across species and may have resulted from inhibition of MEK-related signal transduction pathways in these tissues (Brown et al., 2006). Gastrointestinal tract toxicity is dose-limiting in dogs and monkeys and was anticipated to be the dose-limiting toxicity of PD0325901 in the clinic. Therefore, gastrointestinal tract toxicity may preclude the development of other potential adverse events in humans, including a potential dysregulation in serum phosphorus or calcium.

It is not known whether systemic mineralization is relevant to humans. However, if PD0325901 does induce a dysregulation in serum calcium–phosphorus metabolism in humans, monitoring serum levels would provide an early indication of effects and guide modifications to dosing regimens. To ensure patient safety in the phase I clinical trial with PD0325901, procedures were incorporated into the trial design to monitor for potential dysregulation in serum calcium–phosphorus homeostasis. Measurements of serum calcium, phosphorus, creatinine, albumin, and blood urea nitrogen were performed frequently during the initial treatment cycle (21 days of dosing in a 28-day cycle), with periodic measurement in subsequent cycles, and the serum Ca × P product was calculated. The serum Ca × P product is a clinically useful value for evaluating the risk of tissue and/or vascular mineralization, with the recommendation, based on clinical use of vitamin D analogs such as Rocaltrol and Hectorol, that the value not exceed 70 (Roche Laboratories, 1998; Bone Care International, Inc., 1999; Block, 2000). Serum calcium and phosphorus are readily measured in a clinical setting, with well-established reference ranges available. The trial included a protocol-specific dose-limiting toxicity for a Ca × P product > 70, which required a confirmatory measurement and dose interruption for that patient. In addition, serum vitamin D, PTH, alkaline phosphatase (total and bone), osteocalcin, and urinary C- and N-terminal peptides of collagen 1 (markers of bone resorption) were included for periodic measurement. Criteria considered for exclusion of candidate patients from the clinical trial included a history of
malignancy-associated hypercalcemia, extensive bone metastasis, parathyroid disorder, hyperphosphatemia and renal insufficiency, serum calcium or phosphorus levels >1× the upper limit of normal, and/or concomitant use of calcium supplements and vitamin D in amounts exceeding normal daily allowances.
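To make the monitoring rule above concrete, the sketch below encodes the Ca × P trigger in code. The function names, and the assumption that the confirmatory measurement must also exceed the limit before a dose interruption, are illustrative choices, not the actual trial software or protocol text.

```python
CA_X_P_LIMIT = 70.0  # recommended ceiling for the serum Ca x P product

def ca_x_p(calcium_mg_dl, phosphorus_mg_dl):
    """Serum calcium-phosphorus product, both analytes in mg/dL."""
    return calcium_mg_dl * phosphorus_mg_dl

def dose_limiting(initial_product, confirmatory_product):
    """Flag the protocol-specified dose-limiting toxicity: a Ca x P
    product > 70 that is also confirmed on a repeat measurement
    (assumed interpretation of the confirmatory step)."""
    return (initial_product > CA_X_P_LIMIT
            and confirmatory_product > CA_X_P_LIMIT)

# A patient with calcium 10.2 mg/dL and phosphorus 5.1 mg/dL has a
# product of 52.0, comfortably below the 70 ceiling:
print(ca_x_p(10.2, 5.1))  # 52.02
```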
CALCULATION OF CLINICAL STARTING DOSE

The currently accepted algorithm for calculating a starting dose in clinical trials with oncology drugs is to use one-tenth of the dose that causes severe toxicity (or death) in 10% of rodents (STD10) on a mg/m2 basis, provided that this starting dose (i.e., 1/10 the STD10) does not cause serious, irreversible toxicity in a nonrodent species (in this case, the dog) (DeGeorge et al., 1998). If irreversible toxicities are induced at the proposed starting dose in nonrodents, or if the nonrodent is known to be the more appropriate animal model, the starting dose would generally be one-sixth of the highest dose tested in the nonrodent that does not cause severe, irreversible toxicity.

Calculation of the initial phase I starting dose of PD0325901 was based on the pivotal one-month toxicology studies in rats and dogs. Doses tested in the one-month rat study were 0.1, 0.3, and 1 mg/kg (0.6, 1.8, and 6 mg/m2, respectively), and doses in the one-month dog study were 0.05, 0.1, and 0.3 mg/kg (1, 2, and 6 mg/m2, respectively). Both studies included animals assigned to a one-month reversal phase, in the absence of dosing, to assess the reversibility of any observed toxicities. In addition to standard toxicology and toxicokinetic parameters, these studies included frequent evaluation of serum chemistries and measurement of vitamin D, osteocalcin, PTH, and inhibition of tissue pMAPK levels. In the one-month rat study, no drug-related deaths occurred, and systemic mineralization occurred in multiple tissues in both sexes at 1 mg/kg. Hypocellularity of the metaphyseal region of the distal femur and/or proximal tibia occurred in males at 1 mg/kg. Toxicologic findings at lower doses included skin sores (at ≥0.1 mg/kg) and mineralization of the gastric mucosa in one male at 0.3 mg/kg. The findings at ≤0.3 mg/kg were not considered to represent serious toxicity. In previous two-week dose-range-finding studies in rats, death occurred at 3 mg/kg (18 mg/m2), indicating this to be the minimal lethal dose in rats. Based on these results, the STD10 in rats was determined to be 1 mg/kg (6 mg/m2). In the one-month dog study, doses up to 0.3 mg/kg (6 mg/m2) were well tolerated with minimal clinical toxicity. Primary drug-related toxicity was limited to skin sores in two animals at 0.3 mg/kg. One-tenth the STD10 in rats is 0.6 mg/m2, which is well below a minimally toxic dose (6 mg/m2) in dogs. These data indicate an acceptable phase I starting dose to be 0.6 mg/m2, equivalent to approximately 1 mg in a 60-kg person. The relationships between the primary toxicities in rats and dogs and dose and exposure are presented in Figure 4.
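The arithmetic behind the 0.6 mg/m2 figure can be written out explicitly. This is a minimal sketch of the DeGeorge et al. (1998) rule as described above; the human km factor of 37 kg/m2 used to translate the dose into milligrams for a 60-kg patient is a standard conversion assumed here, not a value given in the text.

```python
def oncology_starting_dose(rat_std10_mg_m2, dog_highest_nonsevere_mg_m2):
    """One-tenth of the rat STD10 (mg/m2 basis), provided it does not
    reach doses causing serious, irreversible toxicity in the nonrodent;
    otherwise, fall back to one-sixth of the highest nonseverely toxic
    nonrodent dose (DeGeorge et al., 1998)."""
    candidate = rat_std10_mg_m2 / 10.0
    if candidate < dog_highest_nonsevere_mg_m2:
        return candidate
    return dog_highest_nonsevere_mg_m2 / 6.0

dose_mg_m2 = oncology_starting_dose(6.0, 6.0)  # rat STD10 = 6 mg/m2 -> 0.6
total_mg = dose_mg_m2 / 37.0 * 60.0            # assumed km = 37 kg/m2 (human)
print(dose_mg_m2, round(total_mg, 1))          # 0.6 mg/m2 ~= 1.0 mg, 60-kg patient
```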
[Figure 4 Relationships between dose and exposure with the primary toxicities of PD0325901 in rats and dogs. Exposure is expressed as PD0325901 plasma AUC(0–24) in ng · hr/mL and dose in mg/m2. Results from nonpivotal (dose-range-finding) studies and the pivotal one-month toxicity studies are presented; toxicities charted include skin lesions, systemic mineralization, bone metaphyseal hypocellularity, hepatocellular necrosis, GI tract lesions, and death, relative to the anticipated phase 1 starting dose of 0.6 mg/m2.]
In the phase I trial of PD0325901 in cancer patients (melanoma, breast, colon, and non-small cell lung cancer), oral doses were escalated from 1 to 30 mg twice daily (BID). Each treatment cycle consisted of 28 days, and three schedules of administration were evaluated: (1) 3 weeks of dosing with one week off, (2) dosing every day, and (3) 5 days of dosing with 2 days off per week (LoRusso et al., 2005; Menon et al., 2005; Tan et al., 2007; LoRusso et al., 2007). Doses ≥2 mg BID suppressed tumor pMAPK (indicating biochemical activity of the drug), and acneiform rash was dose limiting in one patient at 30 mg BID. There were no notable effects on the serum Ca × P product, and the most common toxicities included rash, fatigue, diarrhea, nausea, visual disturbances, and vomiting. Acute neurotoxicity was frequent in patients receiving ≥15 mg BID (all schedules), and several patients developed optic nerve ischemia, optic neuropathy, or retinal vein occlusion (LoRusso et al., 2007). In the phase I trial, there were three partial responses (melanoma) and stable disease in 24 patients (primarily melanoma) (LoRusso et al., 2007). In a pilot phase II study of PD0325901 in heavily pretreated patients with non-
small cell lung cancer, 15 mg BID was given on various schedules over a 28-day cycle (Haura et al., 2007). The main toxicities were reversible visual disturbances, diarrhea, rash, and fatigue. The mean trough concentration of PD0325901 at 15 mg BID was 108 ng/mL (at steady state), and there were no objective responses (Haura et al., 2007).
CONCLUSIONS

Tissue mineralization produced in rats administered the MEK inhibitor PD0325901 provides a case study of how a unique and serious toxicity observed in preclinical safety testing was effectively managed to allow progression of an experimental drug into human clinical trials. A number of key factors were critical in allowing continued development of this compound rather than early termination. PD0325901 represented a novel and targeted therapeutic agent for the treatment of various solid tumors, and the significant unmet medical need posed by cancer permitted a higher tolerance of risk relative to benefit. Phase I oncology trials typically occur in cancer patients with limited treatment options. Therefore, the barriers to entry for novel anticancer agents in the clinic are generally lower than for phase I trials involving healthy volunteers and therapies for non-life-threatening indications. Early in the toxicology program with PD0325901, lesions observed in rats were similar to those seen with vitamin D toxicity, and serum chemistry data indicated changes in phosphorus and calcium. This information provided the basis for the hypotheses proposed regarding the mechanism of vascular and soft tissue mineralization. Because mineralization occurred in rats, rather than only in dogs or monkeys, an animal model suitable for multiple investigative studies was readily available. Despite the apparent species specificity of this toxicity, it was not appropriate to discount the risks to humans because of a "rat-specific" finding. Rather, it was important to generate experimental data that characterized the toxicity and provided a plausible mechanism as a basis for risk management. Studies conducted with PD0325901 examined the dose–response and exposure–response relationships for toxicity and pharmacologic inhibition of MEK, the time course for lesion development, whether the changes observed were reversible, and whether associations could be made between clinical laboratory changes and anatomic lesions. We were able to identify biomarkers for tissue mineralization that were specifically related to the mechanism, were readily available in the clinical setting, were noninvasive, and had acceptable assay variability. It is important that biomarkers proposed for monitoring drug toxicity be scientifically robust and obtainable, and meet the expectations of regulatory agencies. Finally, the data generated during the preclinical safety evaluation of PD0325901 were used to design the phase I–II clinical trial to ensure patient safety. This included selection of a safe starting dose for phase I,
criteria for excluding patients from the trial, and clinical laboratory tests to be included as biomarkers for calcium–phosphorus dysregulation and tissue mineralization. In conclusion, robust data analyses, scientific hypothesis testing, and the ability to conduct investigative work were key factors in developing a biomarker for a serious preclinical toxicity, thereby allowing clinical investigation of a novel drug to occur.

Acknowledgments

Numerous people at the Pfizer Global Research and Development (PGRD), Ann Arbor, Michigan, Laboratories were involved in the studies performed with PD0325901, including the Departments of Cancer Pharmacology and Pharmacokinetics, Dynamics and Metabolism. In particular, the author would like to acknowledge the men and women of Drug Safety Research and Development, PGRD, Ann Arbor, who conducted the toxicology studies with this compound and made significant contributions in the disciplines of anatomic pathology and clinical laboratory testing during evaluation of this compound.

REFERENCES

Ally S, Clair T, Katsaros D, et al. (1989). Inhibition of growth and modulation of gene expression in human lung carcinoma in athymic mice by site-selective 8-Cl-cyclic adenosine monophosphate. Cancer Res, 49:5650–5655.

Block GA (2000). Prevalence and clinical consequences of elevated Ca × P product on hemodialysis patients. Clin Nephrol, 54(4):318–324.

Bone Care International, Inc. (1999). Package insert, Hectorol™ (doxercalciferol) capsules. June 9.

Brown AP, Morrissey RL, Smith AC, Tomaszewski JE, Levine BS (2000). Comparison of 8-chloroadenosine (NSC-354258) and 8-chloro-cyclic-AMP (NSC-614491) toxicity in dogs. Proc Am Assoc Cancer Res, 41:491 (abstract 3132).

Brown AP, Courtney C, Carlson T, Graziano M (2005a). Administration of a MEK inhibitor results in tissue mineralization in the rat due to dysregulation of phosphorus and calcium homeostasis. Toxicologist, 84(S-1):108 (abstract 529).

Brown AP, Courtney CL, King LM, Groom SL, Graziano MJ (2005b). Cartilage dysplasia and tissue mineralization in the rat following administration of a FGF receptor tyrosine kinase inhibitor. Toxicol Pathol, 33(4):449–455.

Brown AP, Reindel JF, Grantham L, et al. (2006). Pharmacologic inhibitors of the MEK-MAP kinase pathway are associated with toxicity to the skin, stomach, intestines, and liver. Proc Am Assoc Cancer Res, 47:308 (abstract 1307).

Brown AP, Carlson TCG, Loi CM, Graziano MJ (2007). Pharmacodynamic and toxicokinetic evaluation of the novel MEK inhibitor, PD0325901, in the rat following oral and intravenous administration. Cancer Chemother Pharmacol, 59:671–679.
Chen PS, Terepka AR, Overslaugh C (1962). Hypercalcemic and hyperphosphatemic actions of dihydrotachysterol, vitamin D2 and Hytakerol (AT-10) in rats and dogs. Endocrinology, 70:815–821.

DeGeorge JJ, Ahn CH, Andrews PA, et al. (1998). Regulatory considerations for preclinical development of anticancer drugs. Cancer Chemother Pharmacol, 41:173–185.

Dent P, Grant S (2001). Pharmacologic interruption of the mitogen-activated extracellular-regulated kinase/mitogen-activated protein kinase signal transduction pathway: potential role in promoting cytotoxic drug action. Clin Cancer Res, 7:775–783.

Ferreira A, Drueke TB (2000). Biological markers in the diagnosis of the different forms of renal osteodystrophy. Am J Med Sci, 320(2):85–89.

Friday BB, Adjei AA (2008). Advances in targeting the Ras/Raf/MEK/Erk mitogen-activated protein kinase cascade with MEK inhibitors for cancer therapy. Clin Cancer Res, 14(2):342–346.

Fu JY, Muller D (1999). Simple, rapid enzyme-linked immunosorbent assay (ELISA) for the determination of rat osteocalcin. Calcif Tissue Int, 64:229–233.

Giachelli CM, Jono S, Shioi A, Nishizawa Y, Mori K, Morii H (2001). Vascular calcification and inorganic phosphate. Am J Kidney Dis, 38(4, Suppl 1):S34–S37.

Grant RA, Gillman T, Hathorn M (1963). Prolonged chemical and histochemical changes associated with widespread calcification of soft tissues following brief calciferol intoxication. Br J Exp Pathol, 44(2):220–232.

Harrington DD, Page EH (1983). Acute vitamin D3 toxicosis in horses: case reports and experimental studies of the comparative toxicity of vitamins D2 and D3. J Am Vet Med Assoc, 182(12):1358–1369.

Haschek WM, Krook L, Kallfelz FA, Pond WG (1978). Vitamin D toxicity, initial site and mode of action. Cornell Vet, 68(3):324–364.

Haura EB, Larson TG, Stella PJ, et al. (2007). A pilot phase II study of PD-0325901, an oral MEK inhibitor, in previously treated patients with advanced non-small cell lung cancer. Presented at the AACR-NCI-EORTC International Conference on Molecular Targets and Cancer Therapy, abstract B110.

Hoshino R, Chatani Y, Yamori T, et al. (1999). Constitutive activation of the 41-/43-kDa mitogen-activated protein kinase pathway in human tumors. Oncogene, 18:813–822.

Jost M, Huggett TM, Kari C, Boise LH, Rodeck U (2001). Epidermal growth factor receptor–dependent control of keratinocyte survival and Bcl-XL expression through a MEK-dependent pathway. J Biol Chem, 276(9):6320–6326.

Kamio A, Taguchi T, Shiraishi M, Shitama K, Fukushima K, Takebayashi S (1979). Vitamin D sclerosis in rats. Acta Pathol Jpn, 29(4):545–562.

Kanis JA, Russell RGG (1977). Rate of reversal of hypercalcaemia and hypercalciuria induced by vitamin D and its 1α-hydroxylated derivatives. Br Med J, 1:78–81.

Knutson JC, LeVan LW, Valliere CR, Bishop CW (1997). Pharmacokinetics and systemic effect on calcium homeostasis of 1α,25-dihydroxyvitamin D2 in rats. Biochem Pharm, 53:829–837.

Long GG (1984). Acute toxicosis in swine associated with excessive dietary intake of vitamin D. J Am Vet Med Assoc, 184(2):164–170.
LoRusso P, Krishnamurthi S, Rinehart JR, et al. (2005). A phase 1–2 clinical study of a second generation oral MEK inhibitor, PD0325901, in patients with advanced cancer. 2005 ASCO Annual Meeting Proceedings. J Clin Oncol, 23(16S), abstract 3011.

LoRusso PA, Krishnamurthi SS, Rinehart JJ, et al. (2007). Clinical aspects of a phase I study of PD-0325901, a selective oral MEK inhibitor, in patients with advanced cancer. Presented at the AACR-NCI-EORTC International Conference on Molecular Targets and Cancer Therapy, abstract B113.

Mansour SJ, Matten WT, Hermann AS, et al. (1994). Transformation of mammalian cells by constitutively active MAP kinase kinase. Science, 265:966–970.

Menon SS, Whitfield LR, Sadis S, et al. (2005). Pharmacokinetics (PK) and pharmacodynamics (PD) of PD0325901, a second generation MEK inhibitor, after multiple oral doses of PD0325901 to advanced cancer patients. 2005 ASCO Annual Meeting Proceedings. J Clin Oncol, 23(16S), abstract 3066.

Meuten DJ, Chew DJ, Capen CC, Kociba GJ (1982). Relationship of serum total calcium to albumin and total protein in dogs. J Am Vet Med Assoc, 180:63–67.

Milella M, Kornblau SM, Estrov Z, et al. (2001). Therapeutic targeting of the MEK/MAPK signal transduction module in acute myeloid leukemia. J Clin Invest, 108(6):851–859.

Morrow C (2001). Cholecalciferol poisoning. Vet Med, 905–911.

Mortensen JT, Lichtenberg J, Binderup L (1996). Toxicity of 1,25-dihydroxyvitamin D3, tacalcitol, and calcipotriol after topical treatment in rats. J Inv Dermatol Symp Proc, 1:60–63.

Payne RB, Carver ME, Morgan DB (1979). Interpretation of serum total calcium: effects of adjustment for albumin concentration on frequency of abnormal values and on detection of change in the individual. J Clin Pathol, 32:56–60.

Rinehart J, Adjei AA, LoRusso PM, et al. (2004). Multicenter phase II study of the oral MEK inhibitor, CI-1040, in patients with advanced non-small-cell lung, breast, colon, and pancreatic cancer. J Clin Oncol, 22(22):4456–4462.

Roche Laboratories (1998). Package insert, Rocaltrol® (calcitriol) capsules and oral solution. Nov. 20.

Rosenblum IY, Black HE, Ferrell JF (1977). The effects of various diphosphonates on a rat model of cardiac calciphylaxis. Calcif Tissue Res, 23:151–159.

Rosol TJ, Capen CC (1997). Calcium-regulating hormones and diseases of abnormal mineral (calcium, phosphorus, magnesium) metabolism. In Clinical Biochemistry of Domestic Animals, 5th ed. Academic Press, San Diego, CA, pp. 619–702.

Saunders MP, Salisbury AJ, O'Byrne KJ, et al. (1997). A novel cyclic adenosine monophosphate analog induces hypercalcemia via production of 1,25-dihydroxyvitamin D in patients with solid tumors. J Clin Endocrinol Metab, 82(12):4044–4048.

Sebolt-Leopold JS, Dudley DT, Herrera R, et al. (1999). Blockade of the MAP kinase pathway suppresses growth of colon tumors in vivo. Nat Med, 5(7):810–816.

Sebolt-Leopold JS (2000). Development of anticancer drugs targeting the MAP kinase pathway. Oncogene, 19:6594–6599.

Sebolt-Leopold JS, Merriman R, Omer C (2004). The biological profile of PD0325901: a second generation analog of CI-1040 with improved pharmaceutical potential. Proc Am Assoc Cancer Res, 45:925 (abstract 4003).
Spangler WL, Gribble DH, Lee TC (1979). Vitamin D intoxication and the pathogenesis of vitamin D nephropathy in the dog. Am J Vet Res, 40:73–83.

Spaulding SW, Walser M (1970). Treatment of experimental hypercalcemia with oral phosphate. J Clin Endocrinol, 31:531–538.

Tan W, DePrimo S, Krishnamurthi SS, et al. (2007). Pharmacokinetic (PK) and pharmacodynamic (PD) results of a phase I study of PD-0325901, a second generation oral MEK inhibitor, in patients with advanced cancer. Presented at the AACR-NCI-EORTC International Conference on Molecular Targets and Cancer Therapy, abstract B109.

Wang D, Boerner SA, Winkler JD, LoRusso PM (2007). Clinical experience of MEK inhibitors in cancer therapy. Biochim Biophys Acta, 1773:1248–1255.

York MJ, Evans GO (1996). Electrolyte and fluid balance. In Evans GO (ed.), Animal Clinical Chemistry: A Primer for Toxicologists. Taylor & Francis, New York, pp. 163–176.
16

BIOMARKERS FOR THE IMMUNOGENICITY OF THERAPEUTIC PROTEINS AND ITS CLINICAL CONSEQUENCES

Claire Cornips, B.Sc., and Huub Schellekens, M.D.
Utrecht University, Utrecht, The Netherlands
INTRODUCTION

Therapeutic proteins such as growth factors, hormones, monoclonal antibodies (mAbs), and others have increased in use dramatically over the past two decades, although their first use dates back more than a century, to when animal antisera were introduced for the treatment and prevention of infections. Therapeutic proteins have always been associated with immunogenicity, although the incidence differs widely [1]. Granulocyte colony-stimulating factor (G-CSF) is the only protein in clinical use that has not been reported to induce antibodies. The first proteins used in medicine around 1900 were of animal origin. As foreign proteins they induced high levels of antibodies in the majority of patients after a single or a few injections. The type of immunological response induced was identical to that seen with vaccines. Most of the therapeutic proteins introduced in recent decades are homologs of human proteins. However, contrary to expectations, these proteins also appear to induce antibodies, in some cases in the majority of patients. Given that these antibodies are directed against auto-antigens, the immunological reaction involves breaking B-cell tolerance.
MECHANISMS OF ANTIBODY INDUCTION

There are two main mechanisms by which antibodies are induced by therapeutic proteins (Table 1). If the proteins are completely of foreign origin, such as asparaginase, streptokinase, and the first-generation mAbs derived from murine cells, or partly foreign, such as chimeric or humanized mAbs, the antibody response is comparable to a vaccination reaction. Often a single injection is sufficient to induce high levels of neutralizing antibodies, which may persist for a considerable length of time. The other mechanism is based on breaking the B-cell tolerance that normally exists to self-antigens, such as human immunoglobulins or products such as the epoetins and the interferons. To break B-cell tolerance, prolonged exposure to the proteins is necessary. In general, it takes months before patients produce antibodies, which are mainly binding antibodies and disappear when treatment is stopped.

To induce a classical immune reaction, a degree of nonself is necessary. The trigger for this type of immunogenicity is the divergence from the comparable human proteins. The triggers for breaking tolerance are quite different. The production of these auto-antibodies may occur when the self-antigens are exposed to the immune system in combination with a T-cell stimulus or danger signal such as bacterial endotoxins, microbial DNA rich in CpG motifs, or denatured proteins. This mechanism explains the immunogenicity of biopharmaceuticals containing impurities. When tolerance is broken by this type of mechanism, the response is often weak, with low levels of antibodies of low affinity. To induce high levels of IgG, the self-antigens should be presented to the immune system in a regular array with a spacing of 50 to 100 Å, a supramolecular structure resembling a viral capsid [2]. Apparently the immune system has been selected to react vigorously to these types of structures, which
TABLE 1  Main Markers of Immunogenicity of Therapeutic Proteins

                           Product                     Preclinical               Treatment    Patients
(Partly) foreign           Level of nonself;           —                         —            Lack of immune
proteins inducing a        presence of T-cell                                                 tolerance;
classical immune           epitopes; biological                                               concomitant therapy
response                   activity of the product
Self-protein breaking      Presence of aggregates;     Induction of an immune    Chronic      Nonimmune
tolerance                  biological activity         response in immune-       treatment    compromised;
                           of the product              tolerant mice                          concomitant therapy
are normally found only in viruses and other microbial agents. The most important factor in the immunogenicity of therapeutic proteins that are human homologs has been the presence of aggregates [3]. Aggregates may present the self-antigens in the repeating form that is such a potent inducer of auto-antibodies.
FACTORS INFLUENCING IMMUNOGENICITY

So the primary factors inducing an antibody response are aggregates in the case of human proteins and the degree of nonself in (partly) nonhuman proteins. There are also cases in which the immune response cannot be explained. There are, however, a number of other factors that may influence the level of the immune response [4]:

• Product characteristics
  1. Biological function
  2. Impurities and contaminants
  3. Product modification
• Treatment characteristics
  1. Length of treatment
  2. Route of administration
  3. Dosage
  4. Concomitant therapy
• Patient characteristics
  1. Genetic background
  2. Type of disease
  3. Age
  4. Gender
• Unknown factors

Product Characteristics

The biological activities of the product influence the immune response. An immune-stimulating therapeutic protein is more likely to induce antibodies than an immune-suppressive protein. Monoclonal antibodies targeted to cell-bound epitopes are more likely to induce an immune response than monoclonal antibodies with a target in solution. The Fc-bound activities of monoclonal antibodies also have an influence. Impurities may influence immunogenicity. The immunogenicity of products such as human growth hormone, insulin, and interferon α-2 has declined over the years due to improved downstream processing and formulation, reducing the level of impurities. There are studies showing the induction of antibodies by
oxidized protein that cross-reacted with the unmodified product [5], and of host cell–derived endotoxin acting as an adjuvant. The probability of an immune response therefore increases with the level of impurities. Product modifications intended to enhance half-life potentially also increase the exposure of the protein to the immune system and may increase immunogenicity. In addition, the modification may reduce biological activity, necessitating more protein for the same biological effect. PEGylation (attachment of polyethylene glycol) is claimed to reduce the immunogenicity of therapeutic proteins by shielding. There is evidence that PEGylation reduces the immunogenicity of nonhuman proteins such as bovine adenosine deaminase and asparaginase. Whether PEGylation also reduces the capacity of human proteins to break B-cell tolerance is less clear. There are reports of high immunogenicity of PEGylated human proteins such as MGDF, but the immunogenicity of the unPEGylated products is unknown.

Treatment Characteristics

Foreign proteins such as streptokinase and asparaginase often induce antibodies after a single injection. Breaking B-cell tolerance by a human protein generally takes more than six months of chronic treatment. The route of administration influences the likelihood of an antibody response independent of the mechanism of induction. The probability of an immune response is highest with subcutaneous administration, lower after intramuscular administration, and lowest with intravenous administration. There are no studies comparing parenteral and nonparenteral routes of administration. However, intranasal and pulmonary administration of therapeutic proteins may induce an immune response.

Patient Characteristics

Gender, age, and ethnic background have all been reported to influence the incidence of antibody responses to specific therapeutic proteins. However, the only patient characteristic that has consistently been identified for a number of different products is the disease from which the patients suffer. Cancer patients are less likely to produce antibodies to a therapeutic protein than other patients. The most widely accepted explanation for this difference is the immune-compromised state of cancer patients, caused both by the disease and by anticancer treatment. Also, the median survival of patients on treatment with therapeutic proteins may be too short for an antibody response to develop. In any case, cancer considerably reduces the probability of an antibody response to a protein. As the experience in cancer patients shows, immune-suppressive therapy reduces the probability of developing an immune response to proteins. In addition, immune-suppressive drugs such as methotrexate are used in conjunction with monoclonal antibodies and other protein drugs to reduce the immune reactions.
PREDICTION OF IMMUNOGENICITY IN ANIMALS

In principle, all therapeutic proteins are immunogenic in conventional laboratory animals, and the predictive value of animal studies depends on the type of protein [6]. The immune reaction in animals to biopharmaceuticals of microbial or plant origin is comparable to that in humans, as such products are comparably foreign to all mammalian species. Animal studies in which a reduction of immunogenicity is evaluated therefore have a high degree of predictability for immunogenicity in humans.

The development of antibodies to biopharmaceuticals homologous to human proteins has been observed regularly in preclinical studies in animals. Because this is considered a normal reaction to a foreign protein, it has led to the generally held assumption that immunogenicity testing in animals, and in some cases even preclinical testing, is irrelevant. However, not all antibodies interfere with the biological activity of a biopharmaceutical, and if there is a biological or clinical effect, animal data may help to identify the possible sequelae of immunogenicity, as has been shown with human epoetin in dogs. In the canine model, human epoetin is immunogenic and induces antibodies that neutralize the native canine epoetin, leading to pure red cell aplasia. This severe complication of antibodies to epoetin was later confirmed in humans. Antibody-positive animals may also provide sera for the development and validation of antibody assays, and the evaluation of an antibody response in animals is important for interpreting the safety and pharmacokinetic data obtained in conventional laboratory animals.

Nonhuman primates have been advocated as better models to predict the immunogenicity of human proteins because of the high sequence homology between the product and the monkey native molecule, to which the animal is immune tolerant. Immunogenicity studies in nonhuman primates have, however, also shown mixed results. Products with a high immunogenicity in monkeys sometimes do not induce antibodies in human patients. The opposite has also been observed, although these studies may have been too limited in length of treatment or number of animals to be truly predictive. A good example of the possible use of monkeys was a study to determine the immunogenicity of different human growth hormone (hGH) preparations using a hyperimmunization protocol, including the use of adjuvant, to provide a worst-case scenario of immunogenicity [7]. The monkeys were treated with cadaver-derived hGH, methionyl-hGH, and natural-sequence hGH; the antibody responses were 81%, 69%, and 5 to 23%, respectively, which reflects the relative immunogenicity of these preparations in human patients. So in this example, rhesus monkeys predicted relative immunogenicity. Also, the immunogenicity of lys-pro biosynthetic human insulin was compared with the immunogenicity of porcine insulin and native-sequence insulin in rhesus monkeys; none of these proved to induce antibodies. With tissue plasminogen activators, the immunogenicity in nonhuman primates was reported by the same group to reflect the immunogenicity in patients. Rhesus monkeys (Macaca mulatta) and cynomolgus monkeys (M. fascicularis) were
used to test the antibody response to EPO/GM-CSF hybrids. Two of the three constructs tested produced high levels of neutralizing antibodies. In the rhesus monkeys these hybrids caused severe anemia. In cynomolgus monkeys no effect on hematological parameters was found, indicating a lack of cross-reactivity of the antibodies with the native cynomolgus erythropoietin. There are also reports of immunogenicity in monkeys of products such as IFN α B/D, diaspirin cross-linked hemoglobin, and IL-3 that did not induce antibodies in patients. So monkeys cannot be used as absolute predictors of immunogenicity in humans, and even the response within the macaque family seems to differ.

Theoretically, the best predictive model for the immunogenicity of human proteins is the transgenic mouse. These animals are immune tolerant of the human protein they express. The caveats are that the wild-type mouse strain used for transgenesis should be able to produce antibodies to the protein, that the transgenic animal should show immune tolerance for the native molecule, and that there may be differences in antigen processing and epitope recognition between mice and humans. Mice transgenic for human insulin showed that the immunogenicity of variant insulin molecules depended on the number of substitutions. In mice transgenic for human tissue plasminogen activator, variants with a single amino acid substitution proved to be immunogenic, showing the discriminatory potential of the model [8]. The transgenic approach also proved useful in finding a reason for the increased immunogenicity caused by a specific formulation: in mice transgenic for interferon α-2, only the batches immunogenic in patients produced antibodies [9]. The presence of aggregates and oxidized proteins proved to be the main cause of immunogenicity in the transgenic animals, as in patients. In mice transgenic for hGH, the immunogenicity of hGH encapsulated in microspheres for sustained release was tested, and no enhanced immunogenicity was observed. We have used the transgenic mouse models to study the product characteristics capable of breaking tolerance; in these models, aggregates were shown to be the major factor. These models, however, have been used by us and others mainly in a yes-or-no fashion to study factors such as aggregates and sequence variations. To find the subtle differences that may exist between products, and before these models can be used to fully predict the immunogenic potential of human therapeutic proteins, much more validation is necessary. With no models yet available with sufficient predictive power, clinical studies are the only sure way to establish the induction of antibodies by therapeutic proteins.
ASSAYS FOR ANTIBODIES

A cellular immune response to therapeutic proteins has never been established. Also, the biological and clinical consequences of an immune response to these products have always been associated with the presence of antibodies. So, by definition, a positive assay for antibodies is the biomarker for the immunogenicity of proteins. The lack of standardized assays and international
reference sera presents a major problem in assessing immunogenicity. It makes comparison of antibody data generated by different laboratories impossible; comparisons of products based on published data (e.g., using information on package inserts) are likewise meaningless. Recently, excellent reviews have been published regarding the development and validation of the various assay formats for antibodies to therapeutic proteins. There are two principles for the testing of antibodies: assays that monitor binding of antibody to the drug (EIA, RIA, BIAcore) and (neutralizing) bioassays. These assays are used in combination: a sensitive binding assay is first used to identify all antibody-containing samples, which is more practical because bioassays are usually more difficult and time consuming. If a native molecule is modified (e.g., pegylated or truncated) to obtain a new product with different pharmacological characteristics, both the "parent" and the new molecule should be used as capture antigens for the antibody assays. A definition of the negative cutoff must be included in the validation process of the binding assay and is often based on a 5% false-positive rate. Such an analytical cutoff per se is not predictive of biological or clinical effect but rather, indicative of the technical limitations of the assay. And because the cutoff is set to include a relatively high number of false positives, all initially positive sera should be confirmed either by another binding assay or by a displacement assay. The confirmed positive samples should then be tested in a bioassay for neutralizing antibodies, which correlate best with a potential in vivo effect in patients because they interfere with receptor binding. A confirmatory step for neutralizing antibodies is not necessary because these antibodies are a subset of binding antibodies. In some cases it may be important to show that the neutralization is caused by antibodies, if the presence of other inhibitory factors, such as a soluble receptor, has to be excluded. Further characterization of the neutralizing antibody response may follow, such as isotype, affinity, and specificity assays. A positive or negative answer from the assays is not sufficient; development of antibodies is a dynamic process, and therefore the course of antibody development (kinetics) must be plotted quantitatively over time. Usually, persisting titers of neutralizing antibodies correlate with some biological effect. An analysis of the biological effect based on incidence alone may also be misleading. Akin to population kinetics in pharmacology, a method that compares relative immunogenicity in patient groups as "mean population antibody titers," taking into account both the incidence and the titers of neutralizing antibodies, has been found more useful than a percentage of seroconversions for the purpose of comparing two products.
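To make the cutoff concept concrete, the sketch below illustrates in Python two common ways, described in published immunogenicity assay validation practice, of deriving a screening cutoff from signals measured in drug-naive (negative) sera: a parametric cutoff of the mean plus 1.645 standard deviations, which flags roughly 5% of a normally distributed negative population, and a distribution-free 95th percentile. This is a minimal sketch rather than any laboratory's actual procedure, and all names and readings are hypothetical.

```python
import statistics

def parametric_cutoff(negatives):
    # mean + 1.645 * SD leaves ~5% of a normal negative population above the cutoff
    return statistics.mean(negatives) + 1.645 * statistics.stdev(negatives)

def nonparametric_cutoff(negatives):
    # empirical 95th percentile; makes no distributional assumption
    ranked = sorted(negatives)
    return ranked[int(round(0.95 * (len(ranked) - 1)))]

# Hypothetical optical-density readings from drug-naive sera:
naive_od = [0.061, 0.055, 0.072, 0.049, 0.058, 0.066, 0.051, 0.080,
            0.057, 0.063, 0.070, 0.054, 0.059, 0.068, 0.052, 0.075]

cutoff = parametric_cutoff(naive_od)
samples = {"patient_1": 0.052, "patient_2": 0.210}
screen_positive = {k: v for k, v in samples.items() if v > cutoff}
```

Because such a cutoff deliberately admits around 5% false positives, every sample it flags must still pass the confirmatory and neutralization steps described above before being reported as antibody positive.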
CONSEQUENCES OF ANTIBODIES TO THERAPEUTIC PROTEINS

In many cases the presence of antibodies is not associated with biological or clinical consequences. The effects that antibodies may induce depend on their
level and affinity and can be the result of the antigen–antibody reaction in general or of the specific interaction. Severe general immune reactions such as anaphylaxis, associated with the use of animal antisera, have become rare because the purity of the products has increased substantially. Delayed-type infusion reactions resembling serum sickness are more common, especially with monoclonal antibodies and other proteins administered in relatively large amounts, which favors the formation of immune complexes. Patients with a slow but steadily increasing antibody titer are reported to show more infusion-like reactions than patients with a short temporary response. The consequences of the specific interaction between antibody and protein drug depend on the affinity of the antibody, which translates into binding and/or neutralizing capacity. Binding antibodies may influence the pharmacokinetic behavior of the product, and both increases and reductions of half-life have been reported, resulting in enhancement or reduction of activity. Persisting levels of neutralizing antibodies in general result in a loss of activity of the protein drug. In some cases the loss of efficacy can easily be monitored by the increase in disease activity. For example, in interferon alpha treatment of hepatitis C, viral activity can be monitored by transaminase activity; loss of efficacy is reflected directly in increased viral activity and a rise in transaminase levels. In the case of interferon beta treatment of multiple sclerosis, the loss of efficacy is much more difficult to measure because the mode of action of the therapeutic protein is not known and the disease progress is unpredictable and difficult to monitor. The reduction of Mx induction, which is specific for interferon activity, has been used successfully to evaluate the biological effect of antibodies to interferon beta. The adverse effects of therapeutic proteins are in general the result of an exaggerated pharmacodynamic effect, so the loss of side effects may also be the result of the induction of antibodies and may be the first sign of immunogenicity. For example, in patients treated with interferon, the loss of flulike symptoms is associated with the appearance of antibodies. Because neutralizing antibodies by definition interfere with the ligand–receptor interaction, they will inhibit the efficacy of all products in the same class, with serious consequences for patients if there is no alternative treatment. The most dramatic side effects occur if the neutralizing antibodies cross-react with an endogenous factor with an essential biological function. This has been described for antibodies induced by epoetin alpha [10] and megakaryocyte growth and differentiation factor (MGDF), which led, respectively, to life-threatening anemia and thrombocytopenia, sometimes lasting for more than a year.

Skin Reactions

Skin reactions are a common side effect of therapeutic proteins, and some of these reactions are associated with an immunogenic response. But can these skin reactions be used as a marker for the immunogenicity of therapeutic
proteins? The hypersensitivity reactions are classified as type I, II, III, and IV reactions. The type I reaction is IgE mediated. Type II reactions are caused by activated T-killer cells, macrophages, and complement activation. The type III hypersensitivity reaction is caused by the deposition of immune complexes [11]. Type IV is T-cell mediated. Type I hypersensitivity, or IgE-mediated allergy, is very rare, and most cases are related to the excipients present in the formulation rather than to the protein drug product. IgE-mediated reactions against human insulin have been reported, although they are less common than with pork or beef insulin [12]. Theoretically, a type II hypersensitivity skin reaction could be a symptom of the immunogenicity of therapeutic proteins: during a type II hypersensitivity reaction, antibodies activate T-killer cells, macrophages, or complement factors to induce an immune response. However, the antibodies induced by most therapeutic proteins are produced as a consequence of breaking B-cell tolerance, and T-cells play only a minor role, if any. TNF inhibitors such as etanercept cause injection-site reactions by a T-cell-mediated delayed-type hypersensitivity reaction. Antibodies against etanercept have been shown not to be correlated with adverse events. Skin reactions probably are a class effect of TNF inhibitors: blockade of TNF can stimulate certain forms of autoimmunity by increasing T-cell reactivity to microbial and self-antigens [13]. The skin reactions that are part of the type III hypersensitivity reaction caused by the local deposition of immune complexes are seen after treatment with monoclonal antibodies, which are used in relatively high and repeated doses. These immune complexes may lead to anaphylactoid reactions and serum sickness–like symptoms. Skin reactions such as urticaria and rashes are common symptoms of this complication [14]. Monoclonal antibodies may also lead to local reactions at the injection site. However, as shown in Table 2, there is no relation between immunogenicity and local skin reactions, so these local reactions cannot be used as an early marker for more serious symptoms of immunogenicity. Some skin reactions seen after treatment with therapeutic proteins are the result of their pharmacodynamics: for example, the epidermal growth factor
receptor (EGFR) inhibitors and beta interferon. EGFR plays a key role in normal skin function; thus, it is very likely that the rash and other skin reactions seen during therapy with EGFR inhibitors are caused by the inhibition of EGFR [14,15]. Interferon beta is known to induce high levels of neutralizing antibodies, but it seems that the adverse effects and skin reactions that occur during the first months of treatment have already disappeared by the time patients develop antibodies. Most probably, these adverse effects are a direct pharmacodynamic effect of interferon. Indeed, patients with neutralizing antibodies have a smaller risk of adverse events such as injection-site reactions than do patients without antibodies [16,17]. So both local skin reactions at the site of injection and generalized reactions can be seen after the use of therapeutic proteins. These skin reactions can have different causes and cannot be used as biomarkers for immunogenicity.

TABLE 2  Relation Between Local Skin Reactions and Immunogenicity of Monoclonal Antibodies

Brand     INN         Incidence of Antibodies (%)  Incidence of Local Skin Reactions (%)
Humira    adalimumab  12                           20
Remicade  infliximab  24                           0
Xolair    omalizumab  0                            45
Mabthera  rituximab   1                            >1

Source: Based on SPCs.
BIOMARKERS FOR THE IMMUNOGENICITY OF THERAPEUTIC PROTEINS

Two classes of biomarkers are used to indicate the clinical consequences of the immunogenicity of therapeutic proteins:

1. General: persisting levels of neutralizing antibodies
2. Specific: loss of activity of the endogenous homolog; increase in a specific disease marker; decrease in an efficacy marker

Biomarkers are also used to indicate the loss of efficacy of therapeutic proteins caused by antibodies:

1. Monoclonal antibodies: increase in side effects
2. Other therapeutic proteins: loss of side effects

Structural properties are the primary factors in the induction of antibodies [18]. Therapeutic proteins of nonhuman origin will induce antibodies in the majority of patients after a limited number of administrations. The degree of nonself, the presence of T-cell epitopes, and the relative lack of immune tolerance are predictors of the antibody response. Human homologs are less likely to be immunogenic; the best structural predictor of breaking tolerance is the presence of aggregates. The only animal model available for this type of immunogenicity is the immune-tolerant transgenic mouse. Induction of antibodies to human proteins usually occurs only after prolonged exposure. Independent of whether the protein is self or nonself, the possible immune-modulating effect of the therapeutic protein, concomitant immunosuppressive therapy, and the immune status of the patients are important predictors of an antibody response.
Antibody formation is by definition the marker for the immunogenicity of therapeutic proteins. The role of cellular immunity is largely unknown and may be absent in the case of breaking B-cell tolerance. The occurrence of clinical consequences is in the majority of cases associated with relatively high and persisting levels of neutralizing antibodies. In some cases the occurrence of neutralizing antibodies is preceded by binding antibodies, which may interfere with the pharmacokinetics of the proteins. As discussed extensively, skin reactions cannot be seen as signs of the immunogenicity of proteins. Often, the loss of efficacy caused by neutralizing antibodies is difficult to assess because the diseases involved are chronic diseases with an unpredictable course and the proteins have only a limited effect. In these cases surrogate markers for efficacy may be monitored. Changes in side effects may also be used as a marker: if the side effects are caused by the pharmacodynamic effect of the protein drug, the loss of side effects is indicative of the development of neutralizing antibodies; if the side effect is the result of immune complexes, their appearance is associated with the induction of antibodies.

REFERENCES

1. Schellekens H (2002). Bioequivalence and the immunogenicity of biopharmaceuticals. Nat Rev Drug Discov, 1(6):457–462.
2. Chackerian B, Lenz P, Lowy DR, Schiller JT (2002). Determinants of autoantibody induction by conjugated papillomavirus virus-like particles. J Immunol, 169:6120–6126.
3. Hermeling S, Schellekens H, Maas C, Gebbink MF, Crommelin DJ, Jiskoot W (2006). Antibody response to aggregated human interferon alpha2b in wild-type and transgenic immune tolerant mice depends on type and level of aggregation. J Pharm Sci, 95(5):1084–1096.
4. Kessler M, Goldsmith D, Schellekens H (2006). Immunogenicity of biopharmaceuticals. Nephrol Dial Transplant, 21(Suppl 5):v9–v12.
5. Hochuli E (1997). Interferon immunogenicity: technical evaluation of interferon-alpha 2a. J Interferon Cytokine Res, 17:S15–S21.
6. Wierda D, Smith HW, Zwickl CM (2001). Immunogenicity of biopharmaceuticals in laboratory animals. Toxicology, 158(1–2):71–74.
7. Zwickl CM, Cocke KS, Tamura RN, et al. (1991). Comparison of the immunogenicity of recombinant and pituitary human growth hormone in rhesus monkeys. Fundam Appl Toxicol, 16(2):275–287.
8. Palleroni AV, et al. (1997). Interferon immunogenicity: preclinical evaluation of interferon-α2a. J Interferon Cytokine Res, 19(Suppl 1):S23–S27.
9. Stewart TA, Hollingshead PG, Pitts SL, et al. (1989). Transgenic mice as a model to test the immunogenicity of proteins altered by site-specific mutagenesis. Mol Biol Med, 6(4):275–281.
10. Casadevall N, Nataf J, Viron B, et al. (2002). Pure red-cell aplasia and antierythropoietin antibodies in patients treated with recombinant erythropoietin. N Engl J Med, 346:469–475.
11. Janeway CA (2005). Immunobiology: The Immune System in Health and Disease, 6th ed. Garland Science, New York, pp. 517–555.
12. Frost N (2005). Antibody-mediated side effects of recombinant proteins. Toxicology, 209(2):155–160.
13. Thielen AM, Kuenzli S, Saurat JH (2005). Cutaneous adverse events of biological therapy for psoriasis: review of the literature. Dermatology, 211(3):209–217.
14. Lacouture ME (2006). Mechanisms of cutaneous toxicities to EGFR inhibitors. Nat Rev Cancer, 6(10):803–812.
15. Robert C, Soria JC, Spatz A, et al. (2005). Cutaneous side-effects of kinase inhibitors and blocking antibodies. Lancet Oncol, 6(7):491–500.
16. Francis GS, Rice GP, Alsop JC (2005). Interferon beta-1a in MS: results following development of neutralizing antibodies in PRISMS. Neurology, 65(1):48–55.
17. Panitch H, Goodin D, Francis G (2005). Benefits of high-dose, high-frequency interferon beta-1a in relapsing–remitting multiple sclerosis are sustained to 16 months: final comparative results of the EVIDENCE trial. J Neurol Sci, 239(1):67–74.
18. Hermeling S, Crommelin DJ, Schellekens H, Jiskoot W (2004). Structure–immunogenicity relationships of therapeutic proteins. Pharm Res, 21(6):897–903.
17
NEW MARKERS OF KIDNEY INJURY

Sven A. Beushausen, Ph.D.
Pfizer Global Research and Development, Chesterfield, Missouri
INTRODUCTION

The current biomarker standards for assessing acute kidney injury (AKI), caused by disease or as a consequence of drug-induced toxicity, are blood urea nitrogen (BUN) and serum creatinine (SC). Retention of either marker in the blood is indicative of a reduced glomerular filtration rate (GFR), which if left untreated could escalate to serious kidney injury through loss of function and, ultimately, death. Although the colorimetric assays developed for SC and BUN are relatively quick, taking seconds as compared to hours for antibody-based analysis platforms such as the enzyme-linked immunosorbent assay (ELISA), they are poor predictors of kidney injury because both suffer from a lack of sensitivity and specificity. For example, SC concentration is greatly influenced by nonrenal factors, including gender, age, muscle mass, race, drugs, and protein intake [1]. Consequently, increases in BUN and SC levels report out injury only after serious kidney damage has occurred. These shortcomings limit their clinical utility for patients who are at risk of developing drug-induced AKI, as well as for patients with established AKI who require frequent monitoring and for whom time to treatment is critical. Renal failure or AKI is often a direct consequence of disease, can result from complications associated with disease or postsurgical trauma such as sepsis,
or can be produced by drug-induced nephrotoxicity. Drug-induced renal injury is of great concern to physicians. Knowledge of the toxicities associated with U.S. Food and Drug Administration (FDA)-approved compounds helps to guide product selection in an effort to manage risk and maximize patient safety. Drug-induced nephrotoxicity is of even greater concern to the pharmaceutical industry, where patient safety is the principal driver in the need to discover safer and more efficacious drugs. Because BUN and SC are such insensitive predictors of early kidney injury, many instances of subtle renal damage caused by drugs may go unrecognized; consequently, reported rates of drug-induced nephrotoxicity are likely to be far lower than the true incidence. Even so, the incidence of acute tubular necrosis or acute interstitial nephritis due to medication has been estimated to be as high as 18.6% [2], and renal injury attributed to treatment with aminoglycosides has been reported to approach 36% [3,4]. Not surprisingly, many common drugs have been associated with site-specific renal injury (Table 1). Fortunately, most instances of drug-induced nephrotoxicity are reversible if discovered early and medication is discontinued. Collectively, the combined shortcomings of BUN and SC as predictors of nephrotoxicity and the propensity for many classes of medicines to cause drug-induced nephrotoxicity underscore the urgent need for the development and qualification of more sensitive and specific biomarkers. The benefits such tools will provide include predictive value and earlier diagnosis of drug-induced kidney injury before changes in renal function or clinical manifestations of AKI are evident. More important, biomarkers of nephrotoxicity with increased sensitivity and specificity will be invaluable to drug development both preclinically and clinically. Preclinically, new biomarkers will aid in the development of safer drugs having fewer liabilities, with an ultimate goal to considerably lower or even eliminate drug-induced nephrotoxicity. Clinically, the biomarkers will be used to monitor potential nephrotoxic effects due to therapeutic intervention or the potential for new drugs to cause renal toxicity in phase I to III clinical trials.

TABLE 1  Common Medications Associated with Acute Renal Injury

Prerenal injury
  Medications: Diuretics, NSAIDs, ACE inhibitors, ciclosporin, tacrolimus, radiocontrast media, interleukin-2, vasodilators (hydralazine, calcium-channel blockers, minoxidil, diazoxide)
  Clinical findings: Benign urine sediment, FENa < 1%, UOsm > 500
  Treatment: Suspend or discontinue medication, volume replacement as clinically indicated

Intrinsic renal injury: vascular effects
  Thrombotic microangiopathy
    Medications: Ciclosporin, tacrolimus, mitomycin C, conjugated estrogens, quinine, 5-fluorouracil, ticlopidine, clopidogrel, interferon, valaciclovir, gemcitabine, bleomycin
    Clinical findings: Fever, microangiopathic hemolytic anemia, thrombocytopenia
    Treatment: Discontinue medication, supportive care, plasmapheresis if indicated
  Cholesterol emboli
    Medications: Heparin, warfarin, streptokinase
    Clinical findings: Fever, microangiopathic hemolytic anemia, thrombocytopenia
    Treatment: Discontinue medication, supportive care, plasmapheresis if indicated

Tubular toxicity
  Medications: Aminoglycosides, radiocontrast media, cisplatin, nedaplatin, methoxyflurane, outdated tetracycline, amphotericin B, cephaloridine, streptozocin, tacrolimus, carbamazepine, mithramycin, quinolones, foscarnet, pentamidine, intravenous gammaglobulin, ifosfamide, zoledronate, cidofovir, adefovir, tenofovir, mannitol, dextran, hydroxyethylstarch
  Clinical findings: FENa > 2%, UOsm < 350, urinary sediment with granular casts and tubular epithelial cells
  Treatment: Drug discontinuation, supportive care

Rhabdomyolysis
  Medications: Lovastatin, ethanol, codeine, barbiturates, diazepam
  Clinical findings: Elevated CPK, ATN urine sediment
  Treatment: Drug discontinuation, supportive care

Severe hemolysis
  Medications: Quinine, quinidine, sulfonamides, hydralazine, triamterene, nitrofurantoin, mephenytoin
  Clinical findings: High LDH, decreased hemoglobin
  Treatment: Drug discontinuation, supportive care

Immune-mediated interstitial inflammation
  Medications: Penicillin, methicillin, ampicillin, rifampin, sulfonamides, thiazides, cimetidine, phenytoin, allopurinol, cephalosporins, cytosine arabinoside, furosemide, interferon, NSAIDs, ciprofloxacin, clarithromycin, telithromycin, rofecoxib, pantoprazole, omeprazole, atazanavir
  Clinical findings: Fever, rash, eosinophilia, urine sediment showing pyuria, white cell casts, eosinophiluria
  Treatment: Discontinue medication, supportive care

Glomerulopathy
  Medications: Gold, penicillamine, captopril, NSAIDs, lithium, mefenamate, fenoprofen, mercury, interferon-α, pamidronate, fenclofenac, tolmetin, foscarnet
  Clinical findings: Edema, moderate to severe proteinuria, red blood cells, red blood cell casts possible
  Treatment: Discontinue medication, supportive care

Obstruction: intratubular (crystalluria and/or renal lithiasis)
  Medications: Aciclovir, methotrexate, sulfanilamide, triamterene, indinavir, foscarnet, ganciclovir
  Clinical findings: Sediment can be benign with severe obstruction; ATN might be observed
  Treatment: Discontinue medication, supportive care

Obstruction: ureteral (secondary to retroperitoneal fibrosis)
  Medications: Methysergide, ergotamine, dihydroergotamine, methyldopa, pindolol, hydralazine, atenolol
  Clinical findings: Benign urine sediment, hydronephrosis on ultrasound
  Treatment: Discontinue medication, decompress ureteral obstruction by intrarenal stenting or percutaneous nephrostomy

Source: Adapted from ref. 2. ACE, angiotensin-converting enzyme; ATN, acute tubular necrosis; CPK, creatinine phosphokinase; FENa, fractional excretion of sodium; LDH, lactate dehydrogenase; NSAIDs, nonsteroidal anti-inflammatory drugs; UOsm, urine osmolality.
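Two of the laboratory values used in Table 1 to separate prerenal injury from tubular toxicity, the fractional excretion of sodium (FENa) and urine osmolality (UOsm), are simple derived quantities. FENa is the ratio of sodium clearance to creatinine clearance expressed as a percentage. The Python sketch below encodes the standard formula together with the thresholds quoted in Table 1; the function and variable names, and the example values, are illustrative only.

```python
def fena_percent(urine_na, serum_na, urine_cr, serum_cr):
    """Fractional excretion of sodium:
    FENa (%) = (urine_Na * serum_Cr) / (serum_Na * urine_Cr) * 100.
    Sodium and creatinine values must each use consistent units."""
    return (urine_na * serum_cr) / (serum_na * urine_cr) * 100.0

fena = fena_percent(urine_na=20, serum_na=140, urine_cr=80, serum_cr=2.0)  # ~0.36%

# Thresholds as given in Table 1:
# FENa < 1% with UOsm > 500 suggests prerenal injury;
# FENa > 2% with UOsm < 350 suggests tubular toxicity.
if fena < 1.0:
    pattern = "consistent with prerenal injury"
elif fena > 2.0:
    pattern = "consistent with tubular toxicity"
else:
    pattern = "indeterminate"
```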
NEW PRECLINICAL BIOMARKERS OF NEPHROTOXICITY

In recent years, two consortia led by the nonprofit organizations ILSI-HESI (International Life Sciences Institute, Health and Environmental Sciences Institute, http://www.hesiglobal.org) and C-Path (Critical Path Institute, http://www.c-path.org) aligned with leaders in academia, industry, and the FDA with a mission to evaluate the potential utility of newly identified biomarkers of nephrotoxicity for use in preclinical safety studies and to develop a process for the acceptance of the new biomarkers in support of safety data accompanying new regulatory submissions. Several criteria for the evaluation and development of new biomarkers of nephrotoxicity were considered, including:
• A preference for noninvasive sample collection.
• New biomarkers developed for preclinical use should optimally translate to the clinic.
• Assays for new biomarkers should be robust, and kits should be readily available for testing.
• Assays should be multiplexed to minimize cost and expedite sample analysis.
• Biomarkers should ideally predict or report site-specific injury.
• Biomarkers must be more sensitive and specific for kidney injury than the existing standards.
• Biomarkers should be predictive (prodromal) of kidney injury in the absence of histopathology.

TABLE 2  Biomarkers of Renal Injury by Region of Specificity, Onset, Platform, and Application

Biomarker         Injury Related to                   Onset  Platforms       Application
β2-Microglobulin  Proximal tubular injury             Early  Luminex, ELISA  Mouse, rat, human, chicken, turkey
Clusterin         Tubular epithelial cells            Early  Luminex, ELISA  Mouse, rat, dog, monkey, human
Cystatin-C        Tubular dysfunction                 Early  Luminex, ELISA  Mouse, rat, human
GSTα              Proximal tubular injury             Early  Luminex, ELISA  Mouse, rat, human
GST Yb1           Distal tubule                       Late   Luminex, ELISA  Rat
KIM-1             General kidney injury and disease   Early  Luminex, ELISA  Zebrafish, mouse, rat, dog, monkey, human
Microalbumin      Proximal tubular injury             Early  Luminex, ELISA  Mouse, rat, dog, monkey, human
Osteopontin       Tubulointerstitial fibrosis         Late   Luminex, ELISA  Mouse, rat, monkey, human
NGAL              Proximal tubular injury             Early  Luminex, ELISA  Mouse, rat, human
RPA-1             Renal papilla and collecting ducts  Early  ELISA           Rat

Source: Adapted from ref. 5.

The preference for noninvasive sample collection made urine the obvious choice of biofluid. Urine has proven to be a fertile substrate for the discovery of promising new biomarkers for the early detection of nephrotoxicity [5]. A number of these markers have been selected for further development and
qualification by the ILSI-HESI and C-Path Nephrotoxicity Working Groups in both preclinical and clinical settings, with the exception of RPA-1 and GST Yb1 (Biotrin), which are markers developed specifically for the analysis of kidney effects in rats (Table 2). The utility and limitations of each marker in the context of early and site-specific detection are discussed below.

β2-Microglobulin

Human β2-microglobulin (β2M) was isolated and characterized in 1968 [6]. β2M was identified as a small 11,815-Da protein found on the surface of human cells expressing the major histocompatibility class I molecule [7]. β2M is shed into the circulation as a monomer, from which it is normally filtered by the glomerulus and subsequently reabsorbed and metabolized within proximal tubular cells [8]. Twenty-five years ago, serum β2M was advocated for use as an index of renal function because of an observed proportional increase in serum β2M levels in response to decreased renal function [9]. It has since been abandoned due to a number of factors complicating the interpretation of the findings. More recently, increased levels of intact urinary β2M have been directly linked to impairment of tubular uptake. Additional work in rats and humans has demonstrated that increased urinary levels of β2M can be used as a marker of proximal tubular function when β2M production and glomerular filtration are normal in a setting of minimal proteinuria [10–13]. Urinary β2M has been shown to be superior to N-acetyl-β-glucosaminidase as a marker in predicting prognosis in idiopathic membranous nephropathy [14]. In this context β2M can be used to monitor, and avoid unnecessary, immunosuppressive therapy following renal transplantation. β2M is being considered for evaluation as an early predictor of proximal tubular injury in preclinical models of drug-induced nephrotoxicity. Although easily detected in urine, there are several factors that may limit its value as a biomarker. For example, β2M is readily degraded by proteolytic enzymes at room temperature and also degrades rapidly in acidic urine at pH 6.0 or below [15]. Therefore, great care must be taken to collect urine in an ice-cold, adequately buffered environment, with the addition of stabilizers to preserve β2M levels during the period of collection and in storage. It is unlikely that β2M will be used as a stand-alone marker to predict or report proximal tubule injury preclinically or in the clinic. Rather, it is likely to be used in conjunction with other proximal tubule markers to support such a finding. A brief survey of commercially available antibodies used to detect β2M indicates that most are species specific (http://www.abcam.com). Instances of cross-reactivity were noted for specific reagents between human and pig, chicken and turkey, and human and other primates. A single monoclonal reagent is reported to have cross-reactivity with bovine, chicken, rabbit, and mouse, and none were listed that specifically recognize dog β2M. Because the rat and dog β2M proteins, from the two most commonly used preclinical species, share only 69.7% and 66.7% amino acid identity with the human protein (http://www.expasy.org), it would be prudent to develop and
characterize antibody reagents specific to each species, as well as cross-reacting antisera to specific amino acid sequences shared by all three proteins.

Clusterin

Clusterin is a highly glycosylated and sulfated secreted glycoprotein first isolated from ram rete testis fluid in 1983 [16]. It was named clusterin because of its ability to elicit clustering of Sertoli cells in vitro [17]. Clusterin is found primarily in the epithelial cells of most organs. Tissues with the highest levels of clusterin include testis, epididymis, liver, stomach, and brain. Metabolic and cell-specific functions assigned to clusterin include sperm maturation, cell transformation, complement regulation, lipid transport, secretion, apoptosis, and metastasis [18]. Clusterin is also known by a number of synonyms as a consequence of having been identified simultaneously in many parallel lines of inquiry. These names include glycoprotein III (GPIII), sulfated glycoprotein-2 (SG-2), apolipoprotein J (apo J), testosterone-repressed prostate message-2 (TRPM-2), complement-associated protein SP-40,40, and complement cytolysis inhibitor protein. Clusterin has been cloned from a number of species, including the rat [19]. The human homolog is 449 amino acids in length, coding for a protein with a molecular weight of 52,495 Da [20]. However, due to extensive posttranslational modification, the protein migrates to an apparent molecular weight of 70 to 80 kDa following sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Amino acid identity between species is moderate: human clusterin shares 70.3%, 76.6%, 71.7%, and 77% identity with the bovine, mouse, pig, and rat homologs, respectively (http://www.expasy.org). Clusterin is a heterodimer comprised of an α and a β subunit, each having an apparent mass of 40 kDa by SDS-PAGE. The subunits result from proteolytic cleavage of the translated polypeptide at amino acid positions 23 and 227, which eliminates the leader sequence and produces the mature 205-amino acid β subunit and the remaining 221-amino acid α subunit. The α and β subunits are held together by five disulfide bonds afforded by cysteine residues clustered within each of the subunits [21]. In addition, each subunit carries three N-linked carbohydrates that are also heavily sulfated, giving rise to the higher apparent molecular weight observed following SDS-PAGE. Considerable evidence suggests that clusterin plays an important role in development. For example, clusterin mRNA expression has been observed at 12.5 days of gestation in mice, where it is present in all germ cell layers [22]. Furthermore, stage-specific variations of the transcript have been observed, as have changes in specific localization during development. Similarly, changes in the developmental expression of clusterin in kidney, lung, and nervous system have also been reported [23]. These observations suggest that clusterin might play a role in tissue remodeling. In the developing murine kidney, clusterin is expressed in the tubular epithelium and is diminished later in development as tubular maturation progresses [24]. Interestingly, clusterin is observed in newly formed tubules
but appears to be absent in glomeruli. Of interest to many investigators of renal function is the reemergence of clusterin expression following induction of a variety of kidney diseases and drug-induced renal injury. Clusterin induction has been observed following ureteral obstruction [25] and ischemia–reperfusion injury [26]. Elevations in the levels of clusterin have also been observed in the peri-infarct region following subtotal nephrectomy [27] and in animal models of hereditary polycystic kidney disease [28]. Marked increases of clusterin released into urine have also been recorded in animal models of aminoglycoside-induced nephrotoxicity [29–31]. Based on these observations, authors have opined that clusterin either serves a protective role by scavenging cell debris or plays a part in tissue remodeling following cellular injury. Collectively, the body of work linking elevated levels of urinary clusterin to kidney damage suggests that measurement of urinary clusterin may be useful as a marker of renal tubular injury. Indeed, an early study comparing urinary levels of clusterin against N-acetyl-β-glucosaminidase (NAG) following chronic administration of gentamicin over a two-month period demonstrated that while the urinary levels of both proteins rose rapidly, peaked, and then declined, clusterin levels remained significantly higher than control values over the duration of the experiment. By contrast, NAG levels dropped to within control values within 10 days of treatment even though evidence of tubulointerstitial disease persisted [30]. More recent work examining the levels of urinary clusterin in the autosomal-dominant polycystic kidney disease (cy/+) rat model, compared to the FHH rat model of focal segmental glomerulosclerosis, following bilateral renal ischemia demonstrated that clusterin levels correlated with the severity of tubular damage, suggesting its use as a marker for differentiating between tubular and glomerular damage [32]. Although the value of clusterin as an early marker of tubular epithelial injury has not yet been established clinically, preclinical findings suggest that it is an ideal candidate for translation to the clinic as an early marker of nephrotoxicity.

Cystatin-C

Cystatin C (Cys-C) is a 13-kDa nonglycosylated protein belonging to the superfamily of cysteine protease inhibitors [33]. Cys-C is produced by all nucleated cells and, unlike SC, is unaffected by muscle mass. Serum Cys-C has been suggested to be closer to the "ideal" biomarker of GFR because, although freely filtered by the glomerulus, it is not secreted. Instead, Cys-C is reabsorbed by tubular epithelial cells, where it is catabolized and is not returned to the bloodstream, obviating the need to measure urinary Cys-C to estimate GFR [34]. Several studies have been designed to examine the usefulness of serum Cys-C as a measure or biomarker of GFR [35]. In one such study, serum Cys-C was shown to be a useful biomarker of acute renal failure and could be detected one to two days prior to the elevation of SC, the accepted clinical diagnostic marker of AKI [36]. Although earlier in detection than
SC, serum Cys-C levels were not predictive of kidney disease and, like SC, reported out kidney injury long after serious damage had occurred. In another study, investigators monitored and compared the levels of serum Cys-C and urinary Cys-C in patients following cardiothoracic surgery with and without complicating AKI [37]. The results clearly demonstrated that while plasma Cys-C was not a useful predictor of AKI, early and persistent increases in urinary Cys-C correlated with the development and severity of AKI. Another interesting but unexplained observation in this study was that women had significantly higher postoperative levels of urinary Cys-C than did men, even though preoperative Cys-C levels were similar. These data have prompted groups like ILSI-HESI and C-Path to examine the utility of urinary Cys-C as a preclinical biomarker of drug-induced renal injury, in the hope that elevated levels of Cys-C can be detected in urine prior to the emergence of overt tubular dysfunction.

Glutathione S-Transferases

The glutathione S-transferases (GSTs) form a family of homo- and heterodimeric detoxifying enzymes [38] identified originally as a group of soluble liver proteins that play a major role in the detoxification of electrophilic compounds [39]. They have since been shown to be products of gene superfamilies [40] and are classified into alpha, mu, pi, and theta subfamilies based on sequence identity and other common properties [41]. Tissue distribution and levels of GST isoform expression have been determined by immunohistochemical localization [42], isoform-specific peptide antibody Western blotting, and mass spectrometry [40]. Analysis of GST subunit diversity and tissue distribution using peptide-specific antisera has shown GSTμ isoforms to be the most widely distributed class of GSTs, with expression evident in brain, pituitary, heart, lung, adrenal gland, kidney, testis, liver, and pancreas, and with the highest levels of GSTμ1 observed in adrenals, testis, and liver. Isoforms of the GSTα subfamily, also known by the synonyms glutathione S-transferase-1, glutathione S-transferase Ya-1, GST Ya1, ligandin, GST 1a-1a, GST B, GST 1-1, and GST A1-1 (http://www.expasy.org/uniprot/P00502), are more limited in distribution, with the highest levels of expression observed in hepatocytes and proximal tubular cells of the kidney [42]. Indeed, proximal tubular GSTα levels have been reported to approximate 2% of total cytosolic protein following exposure to xenobiotics or renal toxins [43]. In the Rowe study [40], GSTα was found to be rather evenly distributed among adrenals, kidney, and pancreas, with the highest levels observed in liver, whereas isoforms of the GSTπ subclass were expressed in brain, pituitary, heart, liver, kidney, and adrenals, with the highest levels of expression observed in kidney. The high levels of expression and differential distribution of GST isoforms make them attractive candidates as biomarkers that could be used to indicate site-specific drug-induced nephrotoxicity. For example, development of a radioimmunoassay to quantify leakage of ligandin (GSTα)
into the urine as a measure of nephrotoxicity in the preclinical rat model was reported as early as 1979 [44]. Subsequent work described the development of a radioimmunoassay for the quantitation of GSTπ in urine [45a], later used as an indicator of distal tubular damage in the human kidney [45b]. Additional work described the development of a multiplexed ELISA for the simultaneous quantitation of GSTα and GSTπ to discriminate between proximal and distal tubular injury, respectively [46]. In terms of sensitivity, a study examining the nephrotoxic effects of the sevoflurane degradation product fluoromethyl-2,2-difluoro-1-(trifluoromethyl) vinyl ether in rats showed urinary GSTα to be the most sensitive marker of mild proximal tubular damage compared to the other urinary markers measured, including protein and glucose [47]. A second study, in which four human volunteers were given sevoflurane, demonstrated abnormalities in urinary glucose, albumin, GSTα, and GSTπ while levels of BUN and SC were unaffected, suggesting that the GSTs were more sensitive markers of site-specific drug-induced nephrotoxicity [48]. Immunohistochemical staining of the rat kidney with antibodies to different GST isoforms has shown that GSTα subunits are expressed selectively in the proximal tubule, whereas GSTμ and GSTπ subunits are localized to the thin loop of Henle and proximal tubules, respectively [38]. An examination of the distribution of the rat GSTμ equivalent, GST Yb1, in the kidney indicates that it is localized to the distal tubules. Simultaneous measurement of urinary GSTα and GST Yb1 has been used to discriminate between drug-induced proximal and distal tubular injury (cited by Kilty et al. [49]). The high levels of GSTs in the kidney and the site-specific localization of the different GST classes, in addition to their increased sensitivity in detecting drug-induced nephrotoxicity in humans, make them ideal candidates for the development and testing of preclinical markers that predict or report early signs of nephrotoxicity to support preclinical safety studies and subsequent compound development.

Kidney Injury Molecule 1

Rat kidney injury molecule 1 (KIM-1) was discovered as part of an effort to identify genes implicated in kidney injury and repair [50] using the polymerase chain reaction (PCR) subtractive hybridization technique of representational difference analysis, originally developed to look at differences in genomic DNA [51] but adapted to examine differences in mRNA expression [52]. Complementary DNA generated from poly(A+) mRNA purified from normal and 48-hour postischemic rat kidneys was amplified to generate driver and tester amplicons, respectively. The amplicons were used as templates to drive the subtractive hybridization process to generate designated differential products, three of which were ultimately gel purified and subcloned into the pUC18 cloning vector. Two of these constructs were used to screen λZapII cDNA libraries constructed from 48-hour postischemic rat kidneys. Isolation and purification of positively hybridizing plaques resulted in the recovery of a
2.5-kb clone that contained sequence information on all three designated differential products. A BLAST search of the NCBI database revealed that the rat KIM-1 sequence had limited (59.8%) amino acid homology to HAVcr-1, identified earlier as the monkey gene coding for the hepatitis A virus receptor protein [53]. The human homolog of KIM-1 was isolated by low-stringency screening of a human embryonic liver λgt10 cDNA library using the same probe that yielded the rat clones [50]. One of the two clones purified from this exercise was shown to code for a 334-amino acid protein sharing 43.8% identity and 59.1% similarity with the rat KIM-1 protein. Comparison to the human HAVcr protein [54] revealed 85.3% identity, demonstrating a clear relationship between the two proteins. Subsequent work has demonstrated that KIM-1 and HAVcr are synonyms for the same protein, also known as T-cell immunoglobulin and mucin domain–containing protein 1 (TIMD-1) and TIM-1. The TIMD proteins are all predicted to be type I membrane proteins that share a characteristic immunoglobulin V, mucin, transmembrane, and cytoplasmic domain structure [55]. It is not clear what the function of KIM-1 (TIMD-1) is, but it is believed that TIMD-1 is involved in the preferential stimulation of Th2 cells within the immune system [56]. In the rat, KIM-1 mRNA expression is highest in liver and barely detected in kidney [50]. KIM-1 mRNA and protein expression are dramatically up-regulated following ischemic injury. Immunohistochemical examination of kidney sections using a rat-specific KIM-1 antibody showed that KIM-1 is localized to regenerating proximal tubule epithelial cells. KIM-1 was proposed as a novel biomarker for human renal proximal tubule injury in a study demonstrating that KIM-1 could be detected in the urine of patients with biopsy-proven acute tubular necrosis [57]. Human KIM-1 occurs as two splice variants that are identical with respect to the extracellular domains but differ at the carboxy termini and are differentially distributed among tissues [58]. Splice variant KIM-1b is 25 amino acids longer than the originally identified KIM-1a and is found predominantly in human kidney. Interestingly, cell lines expressing endogenous KIM-1 or recombinant KIM-1b constitutively shed KIM-1 into the culture medium, and shedding of KIM-1 could be inhibited with metalloprotease inhibitors, suggesting a mechanism for KIM-1 release into the urine following the regeneration of proximal tubule epithelial cells as a consequence of renal injury. Evidence supporting KIM-1's potential as a biomarker for general kidney injury and repair was clearly demonstrated in another paper describing the early detection of urinary KIM-1 protein in a rat model of drug-induced renal injury. In this study, increases in KIM-1 were observed before significant increases in SC levels could be detected following injury with folic acid, and prior to measurable levels of SC in the case of cisplatin-treated rats [59]. In later, more comprehensive studies examining the sensitivity and specificity of KIM-1 as an early biomarker of mechanically [60] or drug-induced [61] renal injury, KIM-1 was detected earlier than any of the routinely used biomarkers of renal injury,
including BUN, SC, urinary NAG, glycosuria, and proteinuria. Certainly, the weight of evidence described above supports the notion that KIM-1 is an excellent biomarker of AKI and drug-induced renal injury. The increasing availability of antibody-based reagents and platforms for rat and human KIM-1 proteins offers convenient and much-needed tools for preclinical safety assessment of drug-induced renal toxicity and for aid in diagnosing or monitoring mild to severe renal injury in the clinic. Further work is required to determine whether KIM-1 is a useful marker of long-term injury and whether it can be used in combination with other markers to determine site-specific kidney injury.

Microalbumin

The examination of proteins excreted into urine provides useful information about renal function (reviewed in [62]). Tamm–Horsfall protein, which originates from renal tubular cells, comprises the largest fraction of protein excreted in normal urine. The appearance of low-molecular-weight urinary proteins normally filtered through the basement membrane of the glomerulus, including insulin, parathormone, lysozyme, trypsinogen, and β2-microglobulin, indicates some form of tubular damage [63]. The detection of higher-molecular-weight (40- to 150-kDa) urinary proteins not normally filtered by the glomerulus, including albumin, transferrin, IgG, caeruloplasmin, α1-acid glycoprotein, and HDL, indicates compromised glomerular function [64]. Albumin is by far the most abundant protein constituent of proteinuria. Although gross increases in urinary albumin measured by the traditional dipstick method, with a reference interval of 150 to 300 mg/L, have been used to indicate impairment of renal function, there are many instances of subclinical increases of urinary albumin within the defined reference interval that are predictive of disease [65–67]. The term microalbuminuria was coined to define this phenomenon, where such increases had value in predicting the onset of nephropathy in insulin-dependent diabetes mellitus [68]. The accepted reference interval defined for microalbuminuria is 30 to 300 mg in 24 hours [69,70]. Because microalbuminuria appears to be a sensitive indicator of renal injury, there is growing interest in the nephrotoxicity biomarker community in evaluating this marker as an early biomarker predictive of drug-induced renal injury. Although microalbuminuria has traditionally been used in preclinical drug development to assess glomerular function, there is growing evidence to suggest that albuminuria is a consequence of impairment of the proximal tubule retrieval pathway [71]. Evidence that microalbuminuria might provide value in diagnosing drug-induced nephrotoxicity was reported in four of 18 patients receiving cisplatin, ifosfamide, and methotrexate to treat osteosarcoma [72]. Because microalbuminuria can be influenced by other factors unrelated to nephrotoxicity, including vigorous exercise, hematuria, urinary tract infection, and dehydration [5], it may have greater predictive value for renal
injury in the context of a panel of markers with increased sensitivity and site specificity. Indeed, further evaluation of microalbumin as an early biomarker of site-specific or general nephrotoxicity is required before qualification for preclinical and clinical use.

Osteopontin

Osteopontin (OPN) is a 44-kDa highly phosphorylated secreted glycoprotein originally isolated from bone [73]. It is an extremely acidic protein, with an isoelectric point of 4.37 (http://www.expasy.org/uniprot/P10451), made even more acidic through phosphorylation on a cluster of up to 28 serine residues [74]. Osteopontin is widely distributed among different tissues, including kidney, lung, liver, bladder, pancreas, and breast [75], as well as macrophages [76], activated T-cells [77], smooth muscle cells [78], and endothelial cells [79]. Evidence has been provided demonstrating that OPN functions as an inhibitor of calcium oxalate crystal formation in cultured murine kidney cortical cells [80]. Immunohistochemical and in situ hybridization examination of the expression and distribution of OPN protein and mRNA in the rat kidney clearly demonstrated that levels are highest in the descending thin loop of Henle and in cells of the papillary surface epithelium [81]. Uropontin, first described as a relative of OPN, was among the first examples of OPN isolated from human urine [82]. Although normally expressed in kidney, OPN expression can be induced under a variety of experimental pathologic conditions [83,84], including tubulointerstitial nephritis [85], cyclosporine-induced nephropathy [86], hydronephrosis as a consequence of unilateral ureteral ligation [87], renal ischemia [88], nephropathy induced by cisplatin, and crescentic glomerulonephritis [89]. Up-regulation of OPN has been reported in a number of animal models of renal injury, including nephrotoxicity induced by puromycin, cyclosporine, streptozotocin, phenylephrine, and gentamicin (reviewed in [90a]). In the rat gentamicin-induced acute tubular necrosis model, OPN levels were highest in regenerating proximal and distal tubules, leading the authors to conclude that OPN is related to the proliferation and regeneration of tubular epithelial cells following tubular damage [90b]. Osteopontin has been proposed as a selective biomarker of breast cancer [91] and as a useful clinical biomarker for the diagnosis of colon cancer [92]; it also shows great promise, but requires further evaluation, as a clinical biomarker for renal injury. Certainly, the high levels of OPN expression following chemically or physically induced renal damage, coupled with the recent availability of antibody-based reagents to examine the levels of mouse, rat, and human urinary OPN, provide ample opportunity to evaluate OPN as an early marker of AKI in the clinic and as a predictive marker of drug-induced nephrotoxicity preclinically. Further work planned by the ILSI-HESI and C-Path groups should broaden our understanding of the utility of OPN in either capacity as an early predictor of renal injury.
Neutrophil Gelatinase–Associated Lipocalin

Neutrophil gelatinase–associated lipocalin (NGAL) was first identified as the small-molecular-weight glycoprotein component of human gelatinase affinity purified from the supernatant of phorbol myristate acetate–stimulated human neutrophils. Human gelatinase purifies as a 135-kDa complex comprised of the 92-kDa gelatinase protein and the smaller 25-kDa NGAL [93]. NGAL has subsequently been shown to exist primarily in monomeric or dimeric form free of gelatinase. A BLAST search of the 178-amino acid NGAL protein yielded a high degree of similarity to the rat α2-microglobulin-related protein and mouse protein 24p3, suggesting that NGAL is a member of the lipocalin family. Lipocalins are characterized by the ability to bind small lipophilic substances and are thought to function as modulators of inflammation [94]. More recent work has shown that NGAL, also known as siderocalin, complexes with iron and iron-binding protein to promote or accelerate recovery from proximal tubular damage (reviewed in [95]). RNA dot blot analysis of 50 human tissues revealed that NGAL expression is highest in trachea and bone tissue and moderate in stomach and lung, with low levels of transcript expression in the remaining tissues examined, including kidney [94]. Because NGAL is a reasonably stable small-molecular-weight protein, it is readily excreted from the kidney and can be detected in urine. NGAL was first proposed as a novel urinary biomarker for the early prediction of acute renal injury in rat and mouse models of acute renal failure induced by bilateral ischemia [96]. Increases in the levels of urinary NGAL were detected in the first hour of postischemic urine collection and were shown to be related to the dose and length of exposure to ischemia. In this study the authors reported NGAL to be more sensitive than either NAG or β2M, underscoring its usefulness as an early predictor of acute renal injury. Furthermore, the authors proposed NGAL to be an earlier predictive marker of acute renal injury than KIM-1, since the latter reports injury within 24 hours of renal injury compared to 1 hour for NGAL. Marked up-regulation of NGAL expression was observed in proximal tubule cells within 3 hours of ischemia-induced damage, suggesting that NGAL might be involved in postdamage reepithelialization. Additional work demonstrated that NGAL expression was induced following mild ischemia in cultured human proximal tubule cells. This paper also addressed the utility of NGAL as an early predictor of drug-induced renal injury by detecting increased levels of NGAL in the urine of cisplatin-treated mice. Adaptation of the NGAL assay to address utility and relevance in a clinical setting showed that both urinary and serum levels of NGAL were sensitive, specific, and highly predictive biomarkers of acute renal injury following cardiac surgery in children [97]. In this particular study, multivariate analysis showed urinary NGAL to be the most powerful predictor in children who developed acute renal injury. Measurable increases in urinary NGAL concentrations were recorded within 2 hours of cardiac bypass surgery, whereas
increases in SC levels were not observed until 1 to 3 days postsurgery. Other examples demonstrating the value of NGAL as a predictive biomarker of early renal injury include the association of NGAL with the severity of renal disease in proteinuric patients [98] and NGAL as an early predictor of renal disease resulting from contrast-induced nephropathy [99]. NGAL has been one of the most thoroughly studied new biomarkers predictive of AKI as a consequence of disease or surgical intervention and, to a lesser extent, drug-induced renal injury. Sensitive and reliable antibody-based kits have been developed for a number of platforms in both humans and rodents (Table 2), and there is considerable interest in examining both the specificity and sensitivity of NGAL for acceptance as a fit-for-purpose predictive biomarker of drug-induced renal injury to support regulatory submissions. Certainly, because NGAL is such an early marker of renal injury, it will have to be assessed as a stand-alone marker as well as in the context of a larger panel of markers that may help define the site and degree of kidney injury.

Renal Papillary Antigen 1

Renal papillary antigen 1 (RPA-1) is an uncharacterized antigen that is highly expressed in the collecting ducts of the rat papilla and can be detected at high levels in rat urine following exposure to compounds that induce renal papillary necrosis [100]. RPA-1 was identified by an IgG1 monoclonal antibody, designated Pap X 5C10, that was generated in mice immunized with pooled fractions of a cleared lysate of homogenized rat papillary tissue following crude DEAE anion-exchange chromatography. Immunohistochemical analysis of rat papillae shows that RPA-1 is localized to the epithelial cells lining the luminal side of the collecting ducts and, to a lesser extent, to cortical collecting ducts. A second publication described the adaptation of three rat papilla-specific monoclonal antibodies, including Pap X 5C10 (PapA1), to an ELISA assay to examine antigen excretion in rat urine following drug-induced injury to the papillae using 2-bromoethanamine, propyleneimine, indomethacin, or ipsapirone [101]. Of the three antibodies evaluated, PapA1 detected the only antigen released into the urine of rats following exposure to each of the toxicants. The authors concluded that changes in the rat renal papilla caused by xenobiotics could be detected early by urinary analysis and monitored during follow-up studies. This study also clearly demonstrated that the Pap X 5C10/PapA1 (RPA-1) antigen had potential for use as a site-specific biomarker predictive of renal papillary necrosis. Indeed, the Pap X 5C10 monoclonal antibody was adapted for commercial use as an RPA-1 ELISA kit marketed specifically to predict or monitor site-specific renal injury in the rat [49]. The specificity and sensitivity of the rat reagent have generated a great deal of interest in developing an equivalent reagent for the detection of human papillary injury. Identification of the RPA-1 antigen remains elusive. Early biochemical characterization of the antigen identified it as a large-
molecular-weight protein (150 to 200 kDa) that could be separated into two molecular-weight species with isoelectric points of 7.2 and 7.3, respectively [100]. However, purification and subsequent protein identification of the antigen proved extremely challenging. A recent attempt at the biochemical purification and identification of the RPA-1 antigen has been equally frustrating, with investigators providing some evidence that the antigen may be a large glycoprotein and suggesting that the carbohydrate moiety is the specific epitope recognized by the Pap X 5C10 monoclonal antibody [102]. This would be consistent with, and help to explain, the observation that the rat reagent does not cross-react with a human antigen in the collecting ducts, as protein glycosylation of related proteins often differs dramatically between species, making identical epitopes unlikely. Nevertheless, continued efforts toward identifying a human RPA-1 antigen will provide investigators with a sorely needed clinical marker for the early detection of drug-induced renal papillary injury.
SUMMARY
A considerable amount of effort has gone into identifying, characterizing, and developing new biomarkers of renal toxicity having greater sensitivity and specificity than the traditional markers, BUN and SC. The issue of sensitivity is a critical one, as the ideal biomarker would detect renal injury before damage is clinically evident or irreversible. Such prodromal biomarkers would provide great predictive value to the pharmaceutical industry in preclinical drug development, where compounds could be modified or development terminated early in response to the nephrotoxicity observed. Even greater value could be realized in the clinic, where early signs of kidney injury resulting from surgical or therapeutic intervention could be addressed immediately, before serious damage to the patient has occurred. Several of the candidate markers described above, including β2M, GSTα, microalbumin, KIM-1, and NGAL, have demonstrated great promise as early predictors of nephrotoxicity. Continued investigation should provide ample data from which to make a determination regarding the utility of these markers for preclinical and, ultimately, clinical use. Biomarker specificity is also of great value because it provides information regarding where and to what extent injury is occurring. For example, increases in levels of SC and BUN inform us that serious kidney injury has occurred but do not reveal the precise nature of that injury, whereas the appearance of increased levels of β2M, GSTα, microalbumin, and NGAL indicates some degree of proximal tubule injury. Similarly, RPA-1 reports injury to the papilla, clusterin indicates damage to tubular epithelial cells, and GSTYb1 is specific to distal tubular damage. Low-level increases of early markers warn the investigator or clinician that subclinical damage to the kidney is occurring and provide the necessary time to alter or terminate a course of treatment or development.
Monitoring toxicity is an important aspect of achieving a positive clinical outcome and increased safety in drug development. Incorporation of many or all of these markers into a panel tied to an appropriate platform allows for the simultaneous assessment of site-specific kidney injury with some understanding of the degree of damage. Several detection kits are commercially available for many of these new biomarkers of nephrotoxicity. For example, Biotrin International provides ELISA kits for the analysis of urinary GSTs and RPA-1, while Rules Based Medicine and Meso Scale Discovery offer panels of kidney biomarkers multiplexed onto antibody-based fluorescence or chemiluminescent platforms, respectively. As interest in new biomarkers of kidney injury continues to develop, so will the technology that supports them. Presently, all of the commercial reagents and kits supporting kidney biomarker detection are antibody-based. The greatest single limitation of such platforms is how well the reagents perform with respect to target identification, nonspecific protein interaction, and species cross-reactivity. Although kits are standardized and come complete with internal controls, kit-to-kit and lab-to-lab variability can be high. Another technology being developed for the purpose of quantifying biomarkers in complex mixtures such as biofluids is mass spectrometry–based multiple reaction monitoring. This technology requires the synthesis and qualification of small peptides specific to a protein biomarker that can be spiked into a sample as an internal standard against which the endogenous peptide can be compared and quantified. This platform is extremely sensitive (femtomolar detection sensitivity), requires very little sample volume, and offers the highest degree of specificity with very short analysis times. Limitations of the platform are related to the selection of an appropriate peptide and the expense of assay development and qualification for use. For example, peptides need to be designed that are isoform specific, able to discriminate between two similar but not identical proteins. Peptide design is also somewhat empirical with respect to finding peptides that will "fly" in the instrument and produce a robust signal at the detector. The choice of peptides available in a particular target may be limited given these design restrictions. Consequently, not all proteins may be amenable to this approach. In conclusion, continued improvement in technology platforms combined with the availability of reagents to detect new biomarkers of nephrotoxicity provides both the clinician and the investigator with a variety of tools to predict and monitor early or acute kidney injury. This will be of tremendous value toward saving lives in the clinic and developing safer, more efficacious drugs without nephrotoxic side effects.
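Referring back to the multiple reaction monitoring approach described above, the quantification step reduces to a simple peak-area ratio against the spiked internal standard. The sketch below is a minimal illustration only; the function name and all numbers are invented, and a real assay would also require calibration curves and quality-control acceptance criteria.

```python
def mrm_concentration(endogenous_peak_area: float,
                      standard_peak_area: float,
                      spiked_standard_fmol: float) -> float:
    """Estimate the endogenous peptide amount from the ratio of its MRM
    peak area to that of a co-eluting stable-isotope-labeled standard
    spiked at a known amount (assumes comparable ionization behavior)."""
    return (endogenous_peak_area / standard_peak_area) * spiked_standard_fmol

# Example: heavy-labeled peptide spiked at 50 fmol; area ratio = 0.42.
amount = mrm_concentration(endogenous_peak_area=21_000.0,
                           standard_peak_area=50_000.0,
                           spiked_standard_fmol=50.0)
print(f"Estimated endogenous peptide: {amount:.1f} fmol")  # 21.0 fmol
```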
REFERENCES
1. Bjornsson TD (1979). Use of serum creatinine concentrations to determine renal function. Clin Pharmacokinet, 4:200–222.
2. Choudhury D, Ziauddin A (2005). Drug-associated renal dysfunction and injury. Nat Clin Pract Nephrol, 2:80–91.
3. Kleinknecht D, Landais P, Goldfarb B (1987). Drug-associated renal failure: a prospective collaborative study of 81 biopsied patients. Adv Exp Med Biol, 212:125–128.
4. Kaloyandes GJ, et al. (2001). Antibiotic and Renal Immunosuppression-Related Renal Failure. Lippincott Williams & Wilkins, Philadelphia.
5. Vaidya VS, Ferguson MA, Bonventre JV (2008). Biomarkers of acute kidney injury. Annu Rev Pharmacol Toxicol, 48:463–493.
6. Berggard I, Bearn AG (1968). Isolation and properties of a low molecular weight β2-globulin occurring in human biological fluid. J Biol Chem, 213:4095–4103.
7. Harris HW, Gill TJ III (1986). Expression of class I transplantation antigens. Transplantation, 42:109–117.
8. Bernier GM, Conrad ME (1969). Catabolism of β2-microglobulin by the rat kidney. Am J Physiol, 217:1350–1362.
9. Shea PH, Mahler JF, Horak E (1981). Prediction of glomerular filtration rate by serum creatinine and β2-microglobulin. Nephron, 29:30–35.
10. Eddy AA, McCullich L, Liu E, Adams J (1991). A relationship between proteinuria and acute tubulointerstitial disease in rats with experimental nephrotic syndrome. Am J Pathol, 138:1111–1123.
11. Holm J, Hemmingsen L, Nielsen NV (1993). Low-molecular-mass proteinuria as a marker of proximal renal tubular dysfunction in normo- and microalbuminuric non-insulin-dependent subjects. Clin Chem, 39:517–519.
12. Kabanda A, Jadoul M, Lauwerys R, Bernard A, van Ypersele de Strihou C (1995). Low molecular weight proteinuria in Chinese herbs nephropathy. Kidney Int, 48:1571–1576.
13. Kabanda A, Vandercam B, Bernard A, Lauwerys R, van Ypersele de Strihou C (1996). Low molecular weight proteinuria in human immunodeficiency virus–infected patients. Am J Kidney Dis, 27:803–808.
14. Hofstra JM, Deegans JK, Willems HL, Wetzels FM (2008). Beta-2-microglobulin is superior to N-acetyl-beta-glucosaminidase in predicting prognosis in idiopathic membranous nephropathy. Nephrol Dial Transplant, 23:2546–2551.
15. Davey PG, Gosling P (1982). Beta-2-microglobulin instability in pathological urine. Clin Chem, 28:1330–1333.
16. Blashuck O, Burdzy K, Fritz IB (1983). Purification and characterization of cell-aggregating factor (clusterin), the major glycoprotein in ram rete testis fluid. J Biol Chem, 12:7714–7720.
17. Fritz IB, Burdzy K, Setchell B, Blashuck O (1983). Ram rete testis fluid contains a protein (clusterin) which influences cell–cell interactions in vitro. Biol Reprod, 28:1173–1188.
18. Rosenberg ME, Silkensen J (1995). Clusterin: physiologic and pathophysiologic considerations. Int J Biochem Cell Biol, 27:633–645.
19. Collard MW, Griswold MD (1987). Biosynthesis and molecular cloning of sulfated glycoprotein 2 secreted by rat Sertoli cells. Biochemistry, 26:3297–3303.
20. Kirszbaum L, Sharpe JA, Murphy B, et al. (1989). Molecular cloning and characterization of the novel, human complement-associated protein, SP40,40: a link between the complement and reproductive systems. EMBO J, 8:711–718. 21. Kirszbaum L, Bozas SE, Walker ID (1992). SP-40,40, a protein involved in the control of the complement pathway, possesses a unique array of disulfide bridges. FEBS Lett, 297:70–76. 22. French LE, Chonn A, Ducrest D, et al. (1993). Murine clusterin: molecular cloning and mRNA localization of a gene associated with epithelial differentiation processes during embryogenesis. J Cell Biol, 122:1119–1130. 23. O’Bryan MK, Cheema SS, Bartlett PF, Murphy BF, Pearse MJ (1993). Clusterin levels increase during neuronal development. J Neurobiol I, 24:6617–6623. 24. Harding MA, Chadwick LJ, Gattone VH II, Calvet JP (1991). The SGP-2 gene is developmentally regulated in the mouse kidney and abnormally expressed in collecting duct cysts in polycystic kidney disease. Dev Biol, 146:483–490. 25. Pearse MJ, O’Bryan M, Fisicaro N, Rogers L, Murphy B, d’Apice AJ (1992). Differential expression of clusterin in inducible models of apoptosis. Int Immunol, 4:1225–1231. 26. Witzgall R, Brown D, Schwarz C, Bonventre JV (1994). Localization of proliferating cell nuclear antigen, vimentin, c-Fos, and clusterin in the post-ischemic kidney: evidence for a heterogeneous genetic response among nephron segments, and a large pool of mitotically active and dedifferentiated cells. J Clin Invest, 93:2175–2188. 27. Correa-Rotter R, Hostetter TM, Manivel JC, Eddy AA, Rosenberg ME (1992). Intrarenal distribution of clusterin following reduction of renal mass. Kidney Int, 41:938–950. 28. Cowley BD Jr, Rupp JC (1995). Abnormal expression of epidermal growth factor and sulfated glycoprotein SGP-2 messenger RNA in a rat model of autosomal dominant polycystic kidney disease. J Am Soc Nephrol, 6:1679–1681. 29. Aulitzky WK, Schlegel PN, Wu D, et al. (1992). Measurement of urinary clusterin as an index of nephrotoxicity. Proc Soc Exp Biol Med, 199:93–96. 30. Eti S, Cheng SY, Marshall A, Reidenberg MM (1993). Urinary clusterin in chronic nephrotoxicity in the rat. Proc Soc Exp Biol Med, 202:487–490. 31. Rosenberg ME, Silkensen J (1995). Clusterin and the kidney. Exp Nephrol, 3:9–14. 32. Hidaka S, Kranzlin B, Gretz N, Witzgall R (2002). Urinary clusterin levels in the rat correlate with the severity of tubular damage and may help to differentiate between glomerular and tubular injuries. Cell Tissue Res, 310:289–296. 33. Abrahamson M, Olafsson I, Palsdottir A, et al. (1990). Structure and expression of the human cystatin C gene. Biochem J, 268:287–294. 34. Grubb A (1992). Diagnostic value of cystatin C and protein HC in biological fluids. Clin Nephrol, 38:S20–S27. 35. Laterza O, Price CP, Scott M (2002). Cystatin C: an improved estimator of glomerular function? Clin Chem, 48:699–707. 36. Herget-Rosenthal S, Marggraf G, Husing J, et al. (2004). Early detection of acute renal failure by serum cystatin C. Kidney Int, 66:1115–1122.
37. Koyner JL, Bennett MR, Worcester EM, et al. (2008). Urinary cystatin C as an early biomarker of acute kidney injury following adult cardiothoracic surgery. Kidney Int, 74:1059–1069.
38. Rozell B, Hansson H-A, Guthenberg M, Tahir G, Mannervik B (1993). Glutathione transferases of classes α, μ and π show selective expression in different regions of rat kidney. Xenobiotica, 23:835–849.
39. Smith GJ, Ohl VS, Litwack G (1977). Ligandin, the glutathione S-transferases, and chemically induced hepatocarcinogenesis: a review. Cancer Res, 37:8–14.
40. Rowe JD, Nieves E, Listowsky I (1997). Subunit diversity and tissue distribution of human glutathione S-transferases: interpretations based on electrospray ionization-MS and peptide sequence–specific antisera. Biochem J, 325:481–486.
41. Mannervik B, Awasthi YC, Board PG, et al. (1992). Nomenclature for human glutathione transferases. Biochem J, 282:305–306.
42. Sundberg AG, Nilsson R, Appelkvist EL, Dallner G (1993). Immunohistochemical localization of alpha and pi class glutathione transferases in normal human tissues. Pharmacol Toxicol, 72:321–331.
43. Beckett GJ, Hayes JD (1993). Glutathione S-transferases: biomedical applications. Adv Clin Chem, 30:281–380.
44. Bass NM, Kirsch RE, Tuff SA, Campbell JA, Saunders JS (1979). Radioimmunoassay measurement of urinary ligandin excretion in nephrotoxin-treated rats. Clin Sci, 56:419–426.
45a. Sundberg AG, Appelkvist EL, Backman L, Dallner G (1994). Quantitation of glutathione transferase-pi in the urine by radioimmunoassay. Nephron, 66:162–169.
45b. Sundberg AG, Appelkvist EL, Backman L, Dallner G (1994). Urinary pi-class glutathione transferase as an indicator of tubular damage in the human kidney. Nephron, 67:308–316.
46. Sundberg AG, Nilsson R, Appelkvist EL, Dallner G (1995). ELISA procedures for the quantitation of glutathione transferases in the urine. Kidney Int, 48:570–575.
47. Kharasch ED, Thorning D, Garton K, Hankins DC, Kilty CG (1997). Role of renal cysteine conjugate β-lyase in the mechanism of compound A nephrotoxicity in rats. Anesthesiology, 86:160–171.
48. Eger EI II, Koblin DD, Bowland T, et al. (1997). Nephrotoxicity of sevoflurane versus desflurane anesthesia in volunteers. Anesth Analg, 84:160–168.
49. Kilty CG, Keenan J, Shaw M (2007). Histologically defined biomarkers in toxicology. Expert Opin Drug Saf, 6:207–215.
50. Ichimura T, Bonventre JV, Bailly V, et al. (1998). Kidney injury molecule-1 (KIM-1), a putative epithelial cell adhesion molecule containing a novel immunoglobulin domain, is up-regulated in renal cells after injury. J Biol Chem, 273:4135–4142.
51. Lisitsyn N, Lisitsyn N, Wigler M (1993). Cloning the differences between two complex genomes. Science, 259:946–951.
52. Hubank M, Schatz DG (1994). Identifying differences in mRNA expression by representational difference analysis of cDNA. Nucleic Acids Res, 22:5640–5648.
53. Kaplan G, Totsuka A, Thompson P, Akatsuka T, Moritsugu Y, Feinstone SM (1996). Identification of a surface glycoprotein on African green monkey kidney cells as a receptor for hepatitis A virus. EMBO J, 15:4282–4296.
54. Feigelstock D, Thompson P, Mattoo P, Zhang Y, Kaplan GG (1998). The human homolog of HAVcr-1 codes for a hepatitis A virus cellular receptor. J Virol, 72:6621–6628.
55. Kuchroo VK, Umetsu DT, DeKruyff RH, Freeman GJ (2003). The TIM gene family: emerging roles in immunity and disease. Nat Rev Immunol, 3:454–462.
56. Meyers JH, Chakravarti S, Schlesinger D, et al. (2005). TIM-4 is the ligand for TIM-1 and the TIM-1-TIM-4 interaction regulates T cell proliferation. Nat Immunol, 6:455–464.
57. Han WK, Bailly V, Abichandani R, Thadhani R, Bonventre JV (2002). Kidney injury molecule-1 (KIM-1): a novel biomarker for human renal proximal tubule injury. Kidney Int, 62:237–244.
58. Bailly V, Zhang Z, Meier W, Cate R, Sanicola M, Bonventre JV (2002). Shedding of kidney injury molecule-1, a putative adhesion protein involved in renal regeneration. J Biol Chem, 277:39739–39748.
59. Ichimura T, Hung CC, Yang SA, Stevens JL, Bonventre JV (2004). Kidney injury molecule-1: a tissue and urinary biomarker for nephrotoxicant-induced renal injury. Am J Renal Physiol, 286:F552–F563.
60. Vaidya VS, Ramirez V, Ichimura T, Bobadilla NA, Bonventre JV (2006). Urinary kidney injury molecule-1: a sensitive quantitative biomarker for early detection of kidney tubular injury. Am J Renal Physiol, 290:F517–F529.
61. Zhou Y, Vaidya VS, Brown RP, et al. (2008). Comparison of kidney injury molecule-1 and other nephrotoxicity biomarkers in urine and kidney following acute exposure to gentamicin, mercury and chromium. Toxicol Sci, 101:159–170.
62. Lydakis C, Lip GYH (1998). Microalbuminuria and cardiovascular risk. Q J Med, 91:381–391.
63. Kaysen JA, Myers BD, Cowser DG, Rabkin R, Felts JM (1985). Mechanisms and consequences of proteinuria. Lab Invest, 54:479–498.
64. Noth R, Krolewski A, Kaysen G, Meyer T, Schambelan M (1989). Diabetic nephropathy: hemodynamic basis and implications for disease management. Ann Intern Med, 110:795–813.
65. Viberti GC (1989). Recent advances in understanding mechanisms and natural history of diabetic disease. Diabetes Care, 11:3–9.
66. Mogensen CE (1987). Microalbuminuria as a predictor of clinical diabetic nephropathy. Kidney Int, 31:673–689.
67. Parving HH, Hommel E, Mathiesen E, et al. (1988). Prevalence of microalbuminuria, arterial hypertension, retinopathy, neuropathy, in patients with insulin-dependent diabetes. Br Med J, 296:156–160.
68. Viberti GC, Hill RD, Jarrett RJ, Argyropoulos A, Mahmud U, Keen H (1982). Microalbuminuria as a predictor of clinical nephropathy in insulin-dependent diabetes mellitus. Lancet, 319:1430–1432.
69. Mogensen CE, Schmitz O (1988). The diabetic kidney: from hyperfiltration and microalbuminuria to end-stage renal failure. Med Clin North Am, 72:466–492.
70. Rowe DJF, Dawnay A, Watts GF (1990). Microalbuminuria in diabetes mellitus: review and recommendations for measurement of albumin in urine. Ann Clin Biochem, 27:297–312.
71. Russo LM, Sandoval RM, McKee M, et al. (2007). The normal kidney filters nephrotic levels of albumin retrieved by proximal tubule cells: retrieval is disrupted in nephrotic states. Kidney Int, 71:504–513.
72. Koch Nogueira PC, Hadj-Assa A, Schell M, Dubourg L, Brunat-Metigny M, Cochat P (1998). Long-term nephrotoxicity of cisplatin, ifosfamide, and methotrexate in osteosarcoma. Pediatr Nephrol, 12:572–575.
73. Prince CW, Oosawa T, Butler WT, et al. (1987). J Biol Chem, 262:2900–2907.
74. Sorensen ES, Hojrup P, Petersen TE (1995). Posttranslational modifications of bovine osteopontin: identification of twenty-eight phosphorylation and three O-glycosylation sites. Protein Sci, 4:2040–2049.
75. Brown LF, Berse B, Van de Water L, et al. (1992). Expression and distribution of osteopontin in human tissues: widespread association with luminal epithelial surfaces. Mol Biol Cell, 3:1169–1180.
76. Singh PR, Patarca R, Schwartz J, Singh P, Cantor H (1990). Definition of a specific interaction between the early T lymphocyte activation 1 (Eta-1) protein and murine macrophages in vitro and its effect upon macrophages in vivo. J Exp Med, 171:1931–1942.
77. Weber GF, Cantor H (1996). The immunology of eta-1/osteopontin. Cytokine Growth Factor Rev, 7:241–248.
78. Giachelli C, Bae N, Lombardi D, Majesky M, Schwartz S (1991). The molecular cloning and characterization of 2B7, a rat mRNA which distinguishes smooth muscle cell phenotypes in vitro and is identical to osteopontin (secreted phosphoprotein I, 2a). Biochem Biophys Res Commun, 177:867–873.
79. Liaw L, Lindner V, Schwartz SM, Chambers AF, Giachelli CM (1995). Osteopontin and beta 3 integrin are coordinately expressed in regenerating endothelium in vivo and stimulate ARG-GLY-ASP-dependent endothelial migration in vitro. Circ Res, 77:665–672.
80. Worcester EM, Blumenthal SS, Beshensky AM, Lewand DL (1992). The calcium oxalate crystal growth inhibitor protein produced by mouse kidney cortical cells in culture is osteopontin. J Bone Miner Res, 7:1029–1036.
81. Kleinman JG, Beshensky A, Worcester EM, Brown D (1995). Expression of osteopontin, a urinary inhibitor of stone mineral crystal growth, in rat kidney. Kidney Int, 47:1585–1596.
82. Shiraga H, Min W, Vandusen WJ, et al. (1992). Inhibition of calcium oxalate growth in vitro by uropontin: another member of the aspartic acid-rich protein superfamily. Proc Natl Acad Sci USA, 89:426–430.
83. Wuthrich RP (1998). The complex role of osteopontin in renal disease. Nephrol Dial Transplant, 13:2448–2450.
84. Rittling SR, Denhardt DT (1999). Osteopontin function in pathology: lessons from osteopontin-deficient mice. Exp Nephrol, 7:103–113.
85. Giachelli CM, Pichler R, Lombardi D, et al. (1994). Osteopontin expression in angiotensin II–induced tubulointerstitial nephritis. Kidney Int, 45:515–524.
86. Pichler RH, Franceschini N, Young BA, et al. (1995). Pathogenesis of cyclosporine nephropathy: roles of angiotensin II and osteopontin. J Am Soc Nephrol, 6:1186–1196.
87. Diamond JR, Kees-Folts D, Ricardo SD, Pruznak A, Eufemio M (1995). Early and persistent up-regulated expression of renal cortical osteopontin in experimental hydronephrosis. Am J Pathol, 146:1455–1466.
88. Kleinman JG, Worcester EM, Beshensky AM, Sheridan AM, Bonventre JV, Brown D (1995). Upregulation of osteopontin expression by ischemia in rat kidney. Ann NY Acad Sci, 760:321–323.
89. Yu XQ, Nikolic-Paterson DJ, Mu W, et al. (1998). A functional role for osteopontin expression in experimental crescentic glomerulonephritis in the rat. Proc Assoc Am Physicians, 110:50–64.
90a. Xie Y, Sakatsume M, Nishi S, Narita I, Arakawa M, Gejyo F (2001). Expression, roles, receptor, and regulation of osteopontin in the kidney. Kidney Int, 60:1645–1657.
90b. Xie Y, Nishi S, Iguchi S, et al. (2001). Expression of osteopontin in gentamicin-induced acute tubular necrosis and its recovery process. Kidney Int, 59:959–974.
91. Mirza M, Shaunessy E, Hurley JK, et al. (2008). Osteopontin-C is a selective marker for breast cancer. Int J Cancer, 122:889–897.
92. Agrawal D, Chen T, Irby R, et al. (2002). Osteopontin identified as lead marker of colon cancer progression, using pooled sample expression profiling. J Natl Cancer Inst, 94:513–521.
93. Kjeldsen L, Johnsen AH, Sengelov H, Borregaard N (1993). Isolation and primary structure of NGAL, a novel protein associated with human neutrophil gelatinase. J Biol Chem, 268:10425–10432.
94. Cowland JB, Borregaard N (1997). Molecular characterization and pattern of tissue expression of the gene for neutrophil gelatinase–associated lipocalin from humans. Genomics, 45:17–23.
95. De Broe M (2006). Neutrophil gelatinase–associated lipocalin in acute renal failure. Kidney Int, 69:647–648.
96. Mishra J, Ma Q, Prada A, et al. (2003). Identification of neutrophil gelatinase–associated lipocalin as a novel early urinary biomarker of renal ischemic injury. J Am Soc Nephrol, 14:2534–2543.
97. Mishra J, Dent C, Tarabishi R, et al. (2005). Neutrophil gelatinase–associated lipocalin (NGAL) as a biomarker for acute renal injury after cardiac surgery. Lancet, 365:1231–1238.
98. Bolignano D, Coppolino G, Campo S, et al. (2007). Urinary neutrophil gelatinase–associated lipocalin (NGAL) is associated with severity of renal disease in proteinuric patients. Nephrol Dial Transplant, 23:414–416.
99. Hirsch R, Dent C, Pfriem H, et al. (2007). NGAL as an early predictive biomarker of contrast-induced nephropathy in children. Pediatr Nephrol, 22:2089–2095.
100. Falkenberg FW, Hildebrand H, Lutte L, et al. (1996). Urinary antigens as markers of papillary toxicity: I. Identification and characterization of rat kidney papillary antigens with monoclonal antibodies. Arch Toxicol, 71:80–92.
101. Hildebrand H, Rinke M, Schluter G, Bomhard E, Falkenberg FW (1999). Urinary antigens as markers of papillary toxicity: II. Application of monoclonal antibodies for the determination of papillary antigens in rat urine. Arch Toxicol, 73:233–245. 102. Price S, Betton G. Personal communication, unpublished data.
PART V TRANSLATING FROM PRECLINICAL RESULTS TO CLINICAL AND BACK
18 TRANSLATIONAL MEDICINE— A PARADIGM SHIFT IN MODERN DRUG DISCOVERY AND DEVELOPMENT: THE ROLE OF BIOMARKERS Giora Z. Feuerstein, M.D., Salvatore Alesci, M.D., Ph.D., Frank L. Walsh, Ph.D., J. Lynn Rutkowski, Ph.D., and Robert R. Ruffolo, Jr., Ph.D. Wyeth Research, Collegeville, Pennsylvania
DRUG TARGETS: HISTORICAL PERSPECTIVES
Drugs are natural or designed substances used deliberately to produce pharmacological effects in humans or animals. Drugs have been part of human civilizations for millennia. However, until the very recent modern era, drugs were introduced to humans by empiricism and largely by serendipitous events, such as encounters with natural products in the search for food or avoidance of hazardous plants and animal products. The emergence of the scientific era in drug discovery evolved alongside the emergence of the physical and chemical sciences at large, first as knowledge to distill, isolate, and enrich the desired substance from its natural environment, followed by deliberate attempts to modify natural substances to better serve human needs and desires. Scientific evolution throughout the past two centuries enabled identification of biologically active substances in humans (e.g., hormones) which were
manipulated chemically to improve potency, duration of action, and exposure, or to mitigate or abrogate undesirable actions. The cumulative knowledge of human, animal, and plant biology and chemistry provided the scientific foundation and technical capability to alter natural substances purposely in order to improve them. Such evolution marked the era of forward pharmacology, in which drug design emanates from primary knowledge of a biological target with clear biological action. The exponential progress in molecular biology since the mid-twentieth century, culminating in the deciphering of the complete human genome in the year 2000, brought the dawn of pharmacogenomics and the reverse pharmacology era. The reverse pharmacology era is defined by the need, first, to clarify the biology and medical relevance of the target so as to qualify it as druggable and pharmaceutically exploitable within a drug discovery and development scheme. The pharmacogenomic era provides vast opportunities for selection of new molecular targets from a gamut of approximately 30,000 primary genes, over 100,000 proteins, and multiples of their translational and metabolomic products. Thus, the permutations with respect to opportunities for pharmacological intervention are unprecedented, vast, and most promising for innovative medicines. The pharmacogenomics era as a source for drug targets also poses unprecedented hurdles in selection, validation, and translation into effective and safe drugs. New technologies continue to drive the efficiency and robustness of mining genomic drug discovery opportunities, but physiological and integrated biology knowledge is lagging. From this perspective, translational medicine and biomarkers research have taken center stage in validation of the molecular target for pharmaceutical exploitation. In this chapter we offer a utilitarian approach to biomarkers and to target selection and validation that is driven by the translational medicine prospects of the target becoming a successful drug target. We offer a classification and analytical process aimed at assessing the risk, innovation, feasibility, and predictability of success in translating novel targets into successful drugs. We provide clear definitions of the types of biomarkers that are the core of translational medicine and biomarkers research in modern pharmaceutical companies.
BIOMARKERS: UTILITARIAN CLASSIFICATION
Biomarkers are the stepping-stones for modern drug discovery and development [1–4]. Biomarkers are defined as biological substances or biophysical parameters that can be monitored objectively and reproducibly and used to predict drug effect or outcome. This broad definition is, however, of little utility to the pharmaceutical process since it carries no qualification for the significance and use of the biomarker. The following classes and definitions of biomarkers are therefore offered:
1. Target validation: biomarkers that assess the relevance of a given target and its potential to become the subject of manipulation that will modify the disease, providing clear therapeutic benefits while securing a sufficient therapeutic index of safety and tolerability.
2. Compound-target interaction biomarkers: biomarkers that define the discrete parameters of the compound (or biological) interaction with the molecular target. Such parameters include binding of the compound to the target, its residency time on the target, the specific site of interaction with the target, and the physical or chemical consequences to the target induced by the compound (or biological).
3. Pharmacodynamic biomarkers: biomarkers that predict the consequence(s) of compound (biological) interaction with the target. Pharmacodynamic biomarkers include events that are desired therapeutically as well as adverse events based on the mechanism of action. They can report on discrete molecular events proximal to the biochemical pathway modified by the manipulated target or on remote consequences such as in vivo or clinical outcomes (morbidity or mortality). Pharmacodynamic biomarkers are diverse and frequently nonobvious. Advanced and sophisticated bioinformatics tools are required for tracking the divergence and convergence of signaling pathways triggered by compound interaction with the target. A subset of pharmacodynamic biomarkers comprises consequences induced by the compound outside its intended mechanism of action. Such pharmacodynamic effects are often termed "off-target" effects, as they are not the direct consequence of the compound interaction with the target. Usually, such pharmacodynamic events are due to unforeseen lack of selectivity or to metabolic transformations that yield metabolites not present (or not detected) in the animals used for safety and metabolic studies prior to launch of the compound into human trials or into human use. These issues are not dealt with in this chapter.
4. Disease biomarkers: biomarkers that correlate statistically with the disease phenotype (syndrome) for which therapeutics are developed. Correlation of levels (in the circulation, other fluids, or tissue) or expression patterns (gene, protein) in peripheral blood cells or tissues should signify disease initiation, progression, regression, remission, or relapse. In addition, the duration of an aberrantly expressed biomarker could also be associated with risk for disease, even if the level of the biomarker does not change over time. Since disease biomarkers are defined by their statistical correlation to features of the disease, it is imperative that the clinical phenotyping be clearly defined. Stratification of all possible phenotypic variables is clearly a prerequisite for accurate assessment of the discrete relationships of the biomarker to the disease. Gender, age, lifestyle, medications, and physiological and biochemical similarities are
often not sufficiently inclusive, resulting in a plethora of disease biomarker claims that are often confusing and futile.
5. Patient selection: biomarkers used for selection of patients for clinical studies, specifically proof-of-concept studies or the confirmatory phase III clinical trials required for drug registration. These biomarkers are important in helping to select patients likely to respond (or, conversely, not to respond) to a particular treatment or a drug's specific mechanism of action, and potentially to predict those patients who may experience adverse effects. Such biomarkers are frequently genetic (single-nucleotide polymorphisms, haplotypes) or pharmacogenomic biomarkers (gene expression), but could be any of the primary pharmacodynamic biomarkers. Biomarkers for patient selection are now mainstream in exploratory clinical trials in oncology, where genotyping of tumors to establish the key oncogenic "driver(s)" is critical for the prediction of potential therapeutic benefits of modern treatments with molecularly targeted drugs. The success of the new era of molecular oncology (as compared to the cytotoxic era) will depend largely on the ability to define these oncogenic signaling pathways via biomarkers such as phosphorylated oncogenes, or the functional state due to mutations that cause gain or loss of function.
6. Adaptive trial design: The objectives of adaptive design trials are to establish an integrated process to plan, design, and implement clinical programs that leverage innovative designs and enable real-time learning. The method is based on simulation-guided clinical drug development. In a first step, the situation is assessed, the path forward and decision criteria are defined, and assumptions are analyzed. Adaptive trials have become an enabling strategy: they integrate competing positions and utilities into a single aligned approach and force much clearer articulation and quantification of the path forward. Once this framework is established, a formal scenario analysis that compares the fingerprints of alternative designs through simulation is conducted. Designs that appear particularly attractive to the program are subjected to more extensive simulation. Decision criteria steer away from doses that are either unsafe or nonefficacious and aim to home in quickly on the most attractive dose range. Response-adaptive dose-ranging studies deploy dynamic termination rules (i.e., as soon as a no-effective-dose scenario is established, the study is recommended for termination). Bayesian approaches are ideally suited to enable ongoing learning and dynamic decision making [5]. The integrator role of adaptive trials is particularly strong in establishing links between regulatory accepted "confirm"-type endpoints and translational medicine's efforts to develop biomarkers. Biomarkers that may enable early decision making need to be read out early to gain greater confidence in basing decisions on them. A biomarker can be of value even if it only
allows a pruning decision. These considerations highlight the importance of borrowing strength from indirect observations and of using mathematical modeling techniques to enhance learning about the research question. For example, in a dose-ranging study it is assumed that there is some relationship between the responses at adjacent doses, and this assumption can be built into the model's algorithm. Both safety and efficacy considerations can be incorporated: ideally, all efforts are integrated, from disease modeling in discovery to PK/PD modeling in early clinical development to safety/risk and business-case modeling in late development [4–7]. The utility of this system is represented in Figures 1 and 2, which suggest a semiquantitative scoring system that helps assess the strength of a program overall and identify areas of weakness in each of the biomarkers needed along the compound (biological) progression path.
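To make the dose-ranging modeling idea above concrete, the following is a minimal, hypothetical sketch, not a validated trial design: responses at adjacent doses are linked through a monotone Emax-type curve, a grid posterior is updated in the Bayesian spirit of reference [5], and a dynamic futility rule flags a no-effective-dose scenario. The model form, parameter ranges, thresholds, and simulated data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
doses = np.array([0.0, 10.0, 30.0, 100.0])   # placebo plus three active arms
sigma = 4.0                                   # assumed known residual SD

def emax_curve(dose, emax, ed50):
    """Monotone Emax dose-response: adjacent doses share one smooth curve."""
    return emax * dose / (ed50 + dose)

# Flat grid prior over the two curve parameters (ranges are invented).
emax_grid = np.linspace(0.0, 8.0, 81)
ed50_grid = np.linspace(1.0, 120.0, 120)
E, D = np.meshgrid(emax_grid, ed50_grid, indexing="ij")
log_post = np.zeros_like(E)

def update(log_post, dose, y):
    """Add one observation's Gaussian log-likelihood to the grid posterior."""
    mu = emax_curve(dose, E, D)
    return log_post - 0.5 * ((y - mu) / sigma) ** 2

# Simulate interim data from an assumed "true" curve (a weak drug here).
true_emax, true_ed50 = 1.2, 25.0
for dose in doses:
    for _ in range(10):                       # 10 subjects per arm so far
        y = emax_curve(dose, true_emax, true_ed50) + rng.normal(0.0, sigma)
        log_post = update(log_post, dose, y)

post = np.exp(log_post - log_post.max())
post /= post.sum()

# Dynamic futility rule: stop if P(top-dose effect >= 2 units) < 10%.
effect_at_top = emax_curve(doses[-1], E, D)
p_relevant = post[effect_at_top >= 2.0].sum()
print(f"P(top-dose effect >= 2.0) = {p_relevant:.2f}")
if p_relevant < 0.10:
    print("Recommend termination: no-effective-dose scenario.")
```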
Figure 1 Criteria for biomarker scoring. Each of the five biomarker types is scored 1 (weak), 2 (moderate), or 3 (strong) against the following criteria:
Target validation BioM: 1 (weak), minimal data, such as human SNP, with target expression not defined; 2 (moderate), good human genetic and genomics data as well as genetically modified animal models, specific expression of target, and known function; 3 (strong), all the above and marketed (phase III) compound POC available.
Disease BioM: 1 (weak), no validated biomarker; 2 (moderate), validated but not surrogate; 3 (strong), surrogate endpoint.
Target/compound interaction BioM: 1 (weak), likely very difficult, no prior art; 2 (moderate), possible but not proven, prior art with reference compound; 3 (strong), ligand available, POC with reference compound, access to target.
Pharmacodynamic BioM: 1 (weak), unclear, no assays for animal models or humans; 2 (moderate), PD BioM identified in experimental models and humans but no validated assays established; 3 (strong), PD BioM identified in experimental models and humans with validated assays established.
Patient selection BioM: 1 (weak), no known genetic/SNP predisposition, no BioM assay; 2 (moderate), known genetic/SNP predisposition; 3 (strong), all of the above, prior art with compound and assay in place.
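As a minimal sketch of how the Figure 1 scores might be tallied for a program, consider the fragment below; the category names follow the figure, but the function, the flagging logic, and the example values are invented for illustration.

```python
LABELS = {1: "WEAK", 2: "MODERATE", 3: "STRONG"}

def assess_program(scores: dict) -> None:
    """Print per-category scores and flag the weakest biomarker area."""
    for category, score in scores.items():
        print(f"{category:35s} {score} ({LABELS[score]})")
    weakest = min(scores, key=scores.get)
    if scores[weakest] < 3:
        print(f"Weakest area to address: {weakest}")
    else:
        print("STRONG across all five biomarker types.")

assess_program({
    "Target validation BioM": 3,
    "Disease BioM": 2,
    "Target/compound interaction BioM": 3,
    "Pharmacodynamic BioM": 1,
    "Patient selection BioM": 2,
})
```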
Figure 2 Type I biomarkers: target validation translational medicine perspectives. Safety is highest for class A targets, with increasing potential for mechanism-of-action adverse effects toward class D.
Class A: target highly specific to the disease (examples: CML, Bcr/Abl; DVT, FV Leiden; MG, anti-AchR Ab).
Class B: target normally present in humans but activated largely in the disease state (example: thrombosis, GPIIb/IIIa and P-selectin).
Class C: target present and functioning in normal physiology but excessively active in the disease state (examples: stroke, glutamate; breast cancer, GFRK).
Class D: target known to be present and functioning indiscriminately in normal or disease states (examples: hypertension, renin, Ca2+ channel (L-type), cholesterol).
CML, chronic myelogenous leukemia; DVT, deep vein thrombosis; FV, factor V; MG, myasthenia gravis; AchR, acetylcholine receptor; GPIIb/IIIa, platelet integrin receptor; GFRK, growth factor receptor kinase; MOA, mechanism of action; AE, adverse effects.
Figure 3 illustrates the continuum of biomarker research, validation, and implementation along the complete time line of drug discovery and development, including life-cycle management (phase IV) and new indications (phase V) when appropriate, as well as the interfaces of translational medicine within the traditional drug discovery and development process, while Figure 4 represents the new model of "learn and confirm," in which biomarkers figure prominently in driving the "learn" paradigm. A program for which a STRONG score is established across all five biomarker specifications provides confidence in the likelihood of success from the biological and medical perspectives and is likely to result in a more promising development outcome. Similarly, it would be prudent to voice concerns regarding programs that score WEAK, especially if low scores are assigned to target validation, pharmacodynamic, and in special cases, target-compound interaction biomarkers (e.g., for central nervous system targets). This scoring system is complementary to other definitions of biomarkers based on particular needs. For example, surrogate biomarkers as defined by the U.S. Food and Drug Administration (FDA) are markers that can be used for drug registration in lieu of more definitive clinical outcome data. Surrogate biomarkers are few and difficult to establish (e.g., blood pressure and cholesterol; Figure 2).
Figure 3 Building translational medicine via biomarker research. The figure depicts target validation (TV), compound-target interaction (CTI), pharmacodynamic (PD), disease (DM), and patient selection/adaptive design (PS/AD) biomarkers being established from the experimental and predevelopment discovery stages, through lead and candidate selection, and carried across the development track and clinical phases 1 to 4.
Figure 4 Translational medicine: biomarker implementation along the pipeline. An early biomarkers team drives strategy and initiatives during the discovery ("learn") stages, while a validation and implementation biomarkers team carries biomarkers through clinical research and development phases 1 to 4 ("confirm") and into life-cycle management. Exp, exploratory phase; Pre-Dev, predevelopment track; CR&D, clinical research and development; LCM, life-cycle management.
PRINCIPLES OF TARGET SELECTION
Two key guiding principles are essential in the early selection process of molecular targets:
1. Modulating the target carries the prospect of unequivocal medical benefit (efficacy) to patients beyond the standard of care.
2. Benefits can be garnered while maintaining a sufficient level of safety that can be realized within the attainable compound exposure.
Such a mission is frequently unachievable, and hence establishing an acceptable therapeutic index is the practical goal for most drug development schemes. Commonly, a therapeutic index is established by calculating the ratio of the maximum tolerated dose (MTD) to the minimum effective dose (MED) in animal efficacy and safety studies.
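Written out, with example doses that are invented purely for illustration, the calculation is:

```latex
\mathrm{TI} \;=\; \frac{\mathrm{MTD}}{\mathrm{MED}},
\qquad \text{e.g.,} \qquad
\mathrm{TI} \;=\; \frac{50\ \mathrm{mg/kg}}{5\ \mathrm{mg/kg}} \;=\; 10,
```

so a larger TI implies a wider window between efficacious and intolerable exposures.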
In this light, targets selected for drug development can be classified with respect to risk assessment based on the following categories (Figure 2):
Class A. The target is present and contributes to the disease process only.
Class B. The target is present physiologically in a nonactive form, but is then activated and contributes to the disease.
Class C. The target functions physiologically but in an augmented, uncontrolled fashion that contributes to the disease.
Class D. The target functions in normal states and indiscriminately in disease (e.g., no difference in target expression, function, or distribution can be identified in disease as compared to the normal physiological state).
Class A: Disease-Specific Target
A disease-specific molecular target should be a molecule that operates only in the disease state and does not participate in physiological (normal) functions. Drug interaction with such targets should provide efficacy with the lowest chance of mechanism-based adverse effects. Examples of such targets arise in genetic disorders, which result in either overactivity or loss of activity of the target. Such is the case in chronic myelogenous leukemia (CML), which results from aberrant recombination of DNA from chromosome 9 into chromosome 22 (Philadelphia chromosome), fusing the Bcr and Abl genes into an overactive tyrosine kinase, which drives oncogenic transformation. To cure the disease, potent and selective inhibitors of this aberrant kinase had to be discovered, a task that took over a decade to accomplish [8]. Such targets have the potential for a high safety profile. It is important to note, however, that this example may not necessarily represent the ultimate approach for this disease, since the activity of the fusion kinase (Bcr/Abl) is driven by the Abl kinase catalytic site, which is preserved in its physiological form. Thus, inhibition of this target by drugs such as Gleevec may still carry the potential for interference in tissues and cells in which the Abl kinase is physiologically active. Another example applicable to this category is a disease such as myasthenia gravis, where specific antibodies that block the acetylcholine receptor cause progressive muscle weakness. Specific neutralizing agents to these antibodies are likely to provide high efficacy in treating the disease, with the likelihood of fewer adverse effects [9], since such antibodies are not physiologically present in human beings. These examples are typical of type 1 class A target validation. The biomarkers that need to be established for this category should focus on validating the specificity of the target to the disease state.
Class B: Target Present Physiologically in a Nonactive Form but Is Activated and Contributes to the Disease
This class of targets has little or no discernible physiological activity in normal states, yet in certain pathophysiological situations the target is presented, activated, and plays a role in a pathophysiological event. An example of such a target in the type 1 class B category is the P-selectin adhesion molecule. This adhesion molecule is normally cryptic within platelets and endothelial cells. Upon activation of these cells, P-selectin is presented on the cell surface and mediates adhesion interaction with its ligand, a mechanism believed to play a role in thrombosis and inflammation. Inhibitors of P-selectin binding to its ligand, the P-selectin glycoprotein ligand (PSGL-1), are expected to provide clinical benefit with a lower likelihood of adverse events. To validate this situation, biomarkers that confirm the preferential role of the activated target in a pathophysiological process, while it maintains little physiological function, are essential. However, one must be aware of potentially serious limitations to this approach: inhibiting a target that is cryptic in the physiological state but activated in pathological conditions may not only provide significant therapeutic benefit but may also expose the patient to other risks, such as loss of host defense from injury. Such is the case of the platelet adhesion integrin molecule GPIIb/IIIa, which serves as the final common pathway for platelet aggregation. Interfering with activated GPIIb/IIIa binding to its ligand (e.g., fibrinogen) provides effective and often lifesaving therapy in patients with acute risk for thrombosis; however, chronic treatment with GPIIb/IIIa antagonists has not been particularly effective in providing benefits, due to the
relatively high frequency of significant adverse effects due to bleeding, since platelet adhesion to matrix proteins is essential to seal bleeding sites in trauma and disease conditions. Thus, biomarkers for this class must establish the full physiological significance of the target in order to assess the therapeutic index of tolerability (benefits as well as risks).
Class C: Target Functions Physiologically but in an Augmented, Uncontrolled Fashion That Contributes to the Disease
This class of targets includes molecules that play an active role in normal physiological processes, some of which may be critical to health. Such is the neurotransmitter glutamate in the central nervous system, which is essential to cognition, memory, thought processes, and the state of arousal. However, in ischemic stroke or asphyxia, glutamate release is uncontrolled and reaches high levels over prolonged periods, which are believed to be neurotoxic and likely contribute to neuronal death following a stroke. Inhibitors of glutamate release, or antagonists of its action at various receptors, are believed to carry the potential for effective treatment of stroke, provided that inhibition of the excess release of the neurotransmitter can be achieved in a timely manner, only to an extent that preserves the physiological need for this transmitter, and over short periods (only that limited period during which excess glutamate is neurotoxic). Such targets may be pharmaceutically exploitable when their manipulation is tuned carefully to the pathophysiological context. Another example of a target in this category is the human growth factor receptor kinase (hGFRK), the target of the inhibitor Herceptin, which in certain cancers (e.g., breast cancer) is constitutively activated and participates in the oncogenic drive. Inhibition of the hGFRK, while clearly of therapeutic value in breast cancer, has also been associated with heart failure, due to the physiological role of GFRK in the cardiac myocyte survival signaling pathway [10]. Thus, the biomarker challenge in modulation of class C targets is to identify biomarkers that allow target activity to be "titrated," inhibiting it only to the extent compatible with maintenance of normal physiological function.
Class D: Target Maintains Physiological Functions in Normal and Disease States
This class of targets encompasses the largest group of molecular targets exploited so far by modern drugs. Many members of this class have yielded highly beneficial therapies. This class consists of molecular targets that are known to have important physiological functions but cannot be differentiated within a disease context; that is, the target is not different in its expression levels (gene, protein) or signaling pathway in normal and disease states. A priori, such targets harbor the greatest risk for mechanism-based adverse effects, as there is no apparent reason to expect that modulation of the target in the disease state will spare the normal physiological function of the target.
Examples of such targets include the coagulation factors (e.g., FIX, FXa, and thrombin), which are critical to maintaining physiological hemostasis; hence, inhibition of these targets carries inherent bleeding liabilities. Similarly, current antiarrhythmic drugs (e.g., amiodarone, lidocaine, dofetilide), while effective in treating life-threatening arrhythmias, all carry significant liability for mechanism-based pro-arrhythmic effects and the potential for sudden death. The biomarker challenge for this class is defining the fine balance needed between efficacy in the disease context and the expected safety limitations. Biomarkers that define the acceptable therapeutic index are key to the successful utility of drugs that modulate such targets. However, targets in this class do not necessarily exhibit a narrow safety margin for clinically meaningful adverse effects. Significant examples are the L-type Ca2+ channel blockers. The L-type Ca2+ channel is an essential conduit of Ca2+ needed for "beat by beat" Ca2+ fluxes that secure precise rhythm and contractility of the heart and skeletal muscle, neuronal excitability, and hormone and neurotransmitter release. Yet L-type Ca2+ channel blockers are important and sufficiently safe drugs that are used to treat hypertension, angina, and cardiac arrhythmias with indisputable medical benefits. However, inherent to the L-type Ca2+ channel blockers are mechanism-based adverse effects associated with rhythm disturbances, hypotension, edema, and other liabilities. Probably the best example of system specificity of physiological targets that provide major medical benefits with a high safety margin is the renin–angiotensin–aldosterone system (RAAS). The RAAS is an important blood pressure, blood volume, and blood flow regulatory system, yet its manipulation by several different pharmacological agents (renin inhibitors, angiotensin I–converting enzyme inhibitors, angiotensin II receptor antagonists) has yielded highly beneficial drugs that reduce the risk of morbidity and mortality from hypertension, heart failure, and renal failure, despite the fact that the system does not demonstrate significant operational selectivity between normal and disease states (especially hypertension). However, mechanism-based hypotension and electrolyte disturbances can limit the therapeutic benefit of these drugs and elicit significant adverse effects when the RAAS is excessively inhibited [11]. The biomarker challenge for these targets is to define the relative or preferential role of the target in its various physiological activities, where minor manipulation in one organ might provide sufficient therapeutic potential while providing a low likelihood of adverse effects that would result from more substantial inhibition of the same target in other organs.
SUMMARY
The analysis and classification offered in this chapter regarding biomarkers in drug discovery and development aim to highlight the need for careful study and analysis of the significance of the target selected for therapeutic intervention as the first crossroad for success or failure in the development of effective
and safe drugs [12]. The analysis and utility of biomarkers along the process of drug discovery and development have become an integral part of the "learn and confirm" paradigm of drug discovery and development in leading pharmaceutical organizations such as Wyeth Research. Such analyses are useful to guide the "learn" phase in the search for biomarkers that can better assess the benefits and risks associated with manipulation of the molecular target. The scope of this chapter does not allow for a detailed review of the "learn and confirm" paradigm, for which readers are directed elsewhere [13,14]. Various technological and strategic activities are needed to establish the biomarker strategies for the various targets described. The need to address these issues via biomarker research, validation, and implementation beginning at the very early stages of the drug discovery and development process is emphasized. In the pharmaceutical setting, this means beginning efforts to identify biomarkers for all five categories listed above. Such efforts could begin even before a tractable compound (biological) is in hand, a time when target validation is a clear focus of the program. As a compound becomes available, compound–target interaction and pharmacodynamic (efficacy and safety) biomarkers, along with strategies for patient selection and adaptive design, must be explored. At the onset of first-in-human studies, all strategies, plans, and biomarker research should be as well worked out as possible. We believe that fundamental changes in the structure, function, and interfaces of pharmaceutical R&D are urgently needed to provide a key role for translational medicine and biomarkers research toward more successful discovery and development of innovative medicines.
REFERENCES
1. Biomarker Definition Working Group (2001). Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther, 69:89–95.
2. Trusheim MR, Berndt ER, Douglas FL (2007). Stratified medicine: strategic and economic implications of combining drugs and clinical biomarkers. Nat Rev Drug Discov, 6:287–293.
3. Feuerstein GZ, Rutkowski JL, Walsh FL, Stiles GL, Ruffolo RR Jr (2007). The role of translational medicine and biomarkers research in drug discovery and development. Am Drug Discov, 2:23–28.
4. FDA (2004). Challenge and opportunity on the critical path to new medical products. http://www.fda.gov/oc/initiatives/criticalpath/whitepaper.pdf.
5. Berry DA (2006). Bayesian clinical trials. Nat Rev Drug Discov, 5:27–36.
6. Gallo P, Chuang-Stein C, Dragalin V, Gaydos B, Krams M, Pinheiro J (2006). Adaptive design in clinical drug development: an executive summary of the PhRMA working group. J Biopharm Stat, 16:275–283.
7. Krams M, Lees KR, Hacke W, Grieve AP, Orgogozo J-M, Ford GA (2003). Acute stroke therapy by inhibition of neutrophils (ASTIN): an adaptive dose–response study of UK-279,276 in acute ischemic stroke. Stroke, 34:2543–2548.
8. Kurzrock R (2007). Studies in target-based treatment. Mol Cancer Ther, 6(9):2385. 9. Hampton T (2007). Trials assess myasthenia gravis therapies. JAMA, 298(1):29–30. 10. Chien KR (2006). Herceptin and the heart: a molecular modifier of cardiac failure. N Engl J Med, 354:789–790. 11. Hershey J, Steiner B, Fischli W, Feuerstein GZ (2005). Renin inhibitors: an antihypertensive strategy on the verge of reality. Drug Dev Today, 2:181–185. 12. Simmons D (2006). What makes a good anti-inflammatory drug target? Drug Discov Dev, 5–6:210–219. 13. Gombar C, Loh E (2007). Learn and confirm. Drug Discov Dev, 10:22–27. 14. Sheiner LB (1997). Learning versus confirming in clinical drug development. Clin Pharmacol Ther, 61:275–291.
19 CLINICAL VALIDATION AND BIOMARKER TRANSLATION David Lin, B.MLSc. University of British Columbia, Vancouver, British Columbia, Canada
Andreas Scherer, Ph.D. Spheromics, Kontiolahti, Finland
Raymond Ng, Ph.D. University of British Columbia, Vancouver, British Columbia, Canada
Robert Balshaw, Ph.D., and Shawna Flynn, B.Sc. Syreon Corporation, Vancouver, British Columbia, Canada
Paul Keown, M.D., D.Sc., MBA, Robert McMaster, D.Phil., and Bruce McManus, M.D., Ph.D. University of British Columbia, Vancouver, British Columbia, Canada
INTRODUCTION
Throughout history, biological and pathogenic processes have been measured to monitor health and disease. The presence of "sweetness" in urine and blood was recognized thousands of years ago as an indication of the disorder now known as diabetes. As a more recent example, the characterization of infections has been achieved by performing cultures for microorganisms, both for identification and to establish sensitivities to antibiotics. Any such measures, however variously quantitated, were forerunners of what are now popularly referred
to as biomarkers. When clinical laboratories became well established after the middle of the twentieth century, many components of body fluids and tissues were assayed. These are all biomarkers of the physiological state of being. Modern biomarkers being discovered or sought are pursued and measured based on the same principles as the first, most rudimentary disease indicators. Hundreds of biomarkers are now used in modern medicine for the prediction, diagnosis, and prognostication of disease, as well as for monitoring preventive or therapeutic interventions. A sizable challenge now lies in selecting the novel biomarkers that are most appropriate for clinical use from among the many new candidates being suggested by various investigators. Biomarkers are often thought of as strictly molecular in nature, but in fact they include a vast range of objectively measurable features. Any characteristic that reflects a normal biological process, a pathogenic process, or a pharmacologic response to an intervention can potentially become a clinically useful biomarker [1]. Thus, there are many different types of biomarkers, including molecular, physiological, or structural features of biological systems. However, how a specific measure reaches customary status has not been so clearly established. The focus on using biomarkers in clinical decision making and diagnosis has expanded, and similarly, biomarkers are playing an ever-increasing role in drug development processes and in regulatory decision making. As drug development costs continue to rise, biomarkers have become increasingly important, as they are a potential means to decrease development time, costs, and late-phase attrition rates in the regulatory approval process for new drugs:
• Reduce clinical trial costs:
  • Decrease late-phase attrition rates.
  • Replace time-consuming clinical endpoints.
• Reduce required clinical trial sample size by way of patient stratification (see the numerical sketch following this list):
  • Identify the population most likely to benefit.
  • Identify the population with a high level of risk for events of interest.
• Provide more robust assays than some conventional clinical endpoints.
• Help improve models for calculating return on investment.
For example, cost and time can be reduced when biomarkers help to segment patient groups early in trials, and some new markers may be more timely than many of the traditional clinical endpoints or outcomes currently used in assessing clinical trials, such as patient survival [2,3]. Biomarkers are also beneficial tools for selecting the best candidates for a trial or for increasing safety through more effective drug monitoring. Ultimately, by reducing the time required to demonstrate a drug's safety and efficacy,
biomarkers may greatly reduce the costs and risks of performing a clinical trial.
BIOMARKER DISCOVERY, DEVELOPMENT, AND TRANSLATION

Biomarker discovery can be performed using animal models, but it is now commonly carried out in humans from the very beginning of biomarker development. Biomarkers analyzed in preclinical animal studies are eventually transferred to human clinical settings. In a controlled laboratory environment the conditions are relatively constant, and the "subjects" (i.e., the animals) are homogeneous and free of complicating co-morbidities. In these laboratory settings there are generally more assay options available, and the possibility of frequent, repeated testing in individual animals or in groups of similar or identical animals allows changes in the measured matrix of analytes to be detected with high precision and sensitivity. Ideally, a candidate biomarker discovered in this fashion would then be transferred into the clinical environment and evaluated further on human samples.

The drawback of this approach is that many of the biomarkers discovered in animal models cannot be translated for use in humans. Animal models often do not accurately reflect human biology [4]. Further, a biomarker candidate may not achieve acceptable performance standards in heterogeneous patients with age, gender, and racial differences. For these reasons it has been suggested that pilot studies of biomarkers be conducted in humans first, in early phase II clinical trials that incorporate the variability of ambient influences, and that the marker then be validated in preclinical studies and, at a later stage, in clinical trials in parallel. This approach expedites the biomarker development process and minimizes the attrition rate of biomarker candidates, since they are developed from the beginning under human clinical conditions. In patients, biomarker candidate discovery is performed preliminarily in an internal primary cohort and is confirmed subsequently in an external secondary cohort. These two discovery phases are typically performed in observational or retrospective cohorts, and less and less often in preclinical studies. Some companies avoid animal models for discovery altogether, so as not to spend time and money on animal-based biomarkers that often lead to a dead end.

It is not always practical to pursue validation of every biomarker candidate identified in the discovery process. It is important to establish parameters for rational, statistically sound, and evidence-based selection and rejection of candidate biomarkers [5–7]. The decision to continue development of a biomarker candidate is based largely on its potential to contribute cost-effectively to disease management [8]. Biomarkers used in the early phases of clinical development may provide more timely proof-of-concept or dose-range information than a real clinical endpoint [9]. Biomarker development should also be driven by clinical need [8]. A clinically useful biomarker
must favorably affect clinical outcomes such as decreasing toxicity or increasing survival [10]. Once cost-benefit ratios are evident and more than one institution has consistently confirmed a biomarker’s ability to perform at the requisite levels of sensitivity and specificity, a biomarker is ready for prospective testing in clinical trials [8]. The U.S. Food and Drug Administration (FDA) pharmacogenomic submission guidelines recommend that transferring genomic biomarker sets from microarrays to other platforms (such as quantitative real-time polymerase chain reaction) be attempted only once it has been demonstrated that the differential expression of such genes is sensitive, specific, and reproducible. Analytical quality assurance is performed continually throughout biomarker and drug development for most biomarkers; however, if a biomarker is to be used as an endpoint in a phase III efficacy study, it should be validated beforehand [11]. Assay and analytical specificity and sensitivity should be established and validated prior to clinical phases such that clinical qualification can be carried out using proven, analytically robust methods [12]. However, as biomarker and drug development are intertwined processes, they may often occur concurrently throughout the different stages of clinical trials.
BIOMARKER VALIDITY AND VALIDATION: THE REGULATORY PERSPECTIVE

The FDA's Guidance for Industry: Pharmacogenomic Data Submission, published in 2005, helped introduce and define exploratory and valid biomarkers. The FDA defines a valid pharmacogenomic biomarker as one that is measured in an analytical test system with well-established performance characteristics and for which there is an established scientific framework or body of evidence that elucidates the toxicological, pharmacological, or clinical significance of the test result. The FDA further classifies valid biomarkers as "probable" and "known" in terms of the level of confidence that they attain through the validation process. Probable valid biomarkers may not yet be widely accepted or validated externally but appear to have predictive value for clinical outcomes, whereas known valid status is achieved by those that have gained acceptance across the breadth of the scientific community. It is important to realize that the different classes of biomarkers reflect their levels of confidence (Figure 1). This can be thought of in a hierarchical manner, with exploratory biomarkers being potential precursors of clinically useful (probable or known) valid biomarkers [13].

Integrating biomarkers into clinical trials for eventual clinical use by identifying the best or most valid biomarker candidates is not a clear-cut process. The term validity, particularly in the field of biomarker research, is a broad concept that has been used to describe everything from the analytical methods to the characteristics of the biomarkers identified [14]. Validity is also used across multiple industries, not only in medical or health disciplines.
[Figure 1 (schematic): the phased strategies of validation, from internal validation (initial cohorts, different techniques and platforms) and external validation (different second cohorts, research team, and technique) through clinical trials (phases I and II), large clinical trials (phase III), and continued surveillance (phase IV). The axes run toward greater sensitivity/specificity, reduced consequence/risk, and greater surrogacy as biomarkers progress from exploratory candidates to probable valid and known valid status, supported by stratified use of bioinformatical and bibliographical tools and of designs, statistical approaches, and modeling, with fit-for-purpose refinement of therapeutic efficacy, dosing, safety and (surrogate) efficacy, and long-term patient well-being.]
Figure 1 Biomarker validation. Biomarker development and validation are driven by intended use or fit-for-purpose (FFP). The principle of FFP validation is that biomarkers with false-positive or false-negative indications pertaining to high patient consequences and risks necessitate many phases of validation. There are five phases of validation that can be executed with the stratified use of various bioinformatical and bibliographical tools as well as different designs, statistical approaches, and modeling: (1) internal validation, (2) external validation, (3) clinical trials (phases I and II; checking for safety and efficacy), (4) large clinical trials (phase III), and (5) continued surveillance. Sensitivity and specificity correlate with the intended purpose of the biomarker, and the level of confidence that a biomarker achieves depends on the phase of validation that has been reached. In ideal cases biomarkers reach surrogate endpoint status and can be used to substitute for a clinical endpoint. This designation requires agreement with regulatory authorities, as the consequences of an ambiguous surrogate endpoint are high.
Therefore, when referring to biomarkers, validation is sometimes termed qualification for clarity. Biomarker qualification has been defined as a graded, fit-for-purpose evidentiary process linking a biomarker with biology and clinical endpoints [15,16]. Traditionally, the validity of clinical biomarkers has been established in a typically lengthy process through consensus and the test of time [17]. Now more than ever, clear guidelines for validation are needed, as technological advances have drastically increased biomarker discovery rates. With the recent explosion of "omics" technologies and advances in the fields of genomics, proteomics, and metabolomics, high-throughput biomarker discovery strategies are now widely used. This has created some unforeseen issues.
Figure 2 FDA biomarker qualification pilot process. (Adapted from ref. 17.) [Schematic of sequential steps: (1) submit request to qualify biomarker for specific use; (2) biomarker qualification review team recruited (clinical and nonclinical); (3) biomarker context assessed and available data submitted in voluntary data submission; (4) qualification study strategy assessed; (5) qualification study results reviewed; (6) biomarker accepted or rejected for suggested use.]
Biomarker candidate discovery now commonly outruns the rate at which the candidates are validated, creating a bottleneck in biomarker (assay) development [18,19]. Great efforts are now being undertaken to accelerate the acceptance of biomarkers from exploratory to valid, as the goal of many research teams and drug companies is to streamline the translation of biomarkers from basic science and discovery to clinical use [12].

Despite the definitions provided by the FDA and the availability of the FDA's Guidance for Industry: Bioanalytical Method Validation in 2001, there is still a lack of sufficient regulatory guidance for biomarker validation. The FDA has designed a qualification process map that sets the foundational framework for establishing validation guidelines (Figure 2). This pilot structure, intended to start qualification processes for biomarkers in drug development, is designed around various FDA centers, whereby the context and qualification of new biomarkers are assessed; ultimately, biomarkers are rejected or accepted for suggested use relative to current biomarkers. This may not be ideal, as it can be problematic to establish new biomarkers accurately against current biomarkers, which are themselves often imperfect relative to a specific endpoint [17]. Nonetheless, this pilot framework will eventually enable more detailed biomarker translation models, which address some of the remaining issues with the current guidelines, to be developed.
There remains a lack of specific guidelines on which validation process(es) are recommended or expected in order to transition effectively from exploratory to valid biomarkers, or from probable valid biomarkers to known valid biomarkers [13,20]. Confusion still exists with regard to the analyses or experiments that need to be performed and the data that are both appropriate and sufficient for biomarker (assay) validation [20]. The confusion and inconsistency in the validation process are due in part to the diverse nature of biomarker research [3,20]. Considering the large variety of novel biomarkers, their applications, and the associated analytical methods, it is unlikely that FDA regulations or other available guidelines will easily be able to address the validation issues associated with all possible research objectives [16,20]. Thus, it is extremely difficult to establish, let alone use, a specific, detailed, universal validation guideline [20].

FIT-FOR-PURPOSE STRATEGY

Despite the lack of universal guidelines or agreement on the specific requirements for biomarker assay development and validation, there is a general consensus in the biomarker research community that the foundation of validation efforts is to ensure that the biomarker(s), or the assay, is "reliable for its intended use" [20]. This principle is now commonly referred to as the fit-for-purpose validation strategy or guideline. This approach embraces the notion that, depending on the intended purpose of the biomarker, the objectives and the processes or types of validation will probably differ [21,22]. Dose selection, early efficacy assessment or candidate selection, and surrogacy development are all examples of clinical purposes for biomarkers, each of which has a differing level of validation requirement. The risk and consequence may differ depending on the purpose, even when the same biomarker is used [4]. Further, the degree of stringency and the phase of the validation, both of which are discussed later in the chapter, should be commensurate with the intended application of the validation data (Figure 1) [22]. In that sense, the term validity should be thought of as a continuum, an evaluation of the degree of validity, rather than an all-or-none state [14,23]. Therefore, biomarker utility is not measured dichotomously and should not be classified as simply good or bad. The level of a biomarker's worth is gauged on a continuous scale, with some being much more valuable than others, depending on what they indicate and how they can be applied [24]. Validation of biomarkers and establishment of their "worth" or "value" is a continually evolving and often iterative process (Figure 1).

Application of the Fit-for-Purpose Strategy

The fit-for-purpose (FFP) strategy is a fluid concept that can be applied to any clinical trial to validate biomarkers of interest. As described earlier, the classification and validation of a biomarker is context-specific, and the validation
criteria required are dependent on the intended use of the biomarker. The concept of fit-for-purpose validation is sensible, as biomarkers may be used to include or exclude patients from treatment and to determine dosing, toxicity, or safety. The consequence or risk of a false-negative or false-positive biomarker indication must be considered, since even the most sensitive and specific assays are not perfect. A biomarker intended as a population screen must be very sensitive and extremely specific [25]; the worked example at the end of this section illustrates why. Any test that is to be used in isolation for decision making requires particularly stringent validation, whereas drug efficacy biomarkers that are typically used in groups or involve subsequent testing may require less stringency, since the consequence of a false indication of efficacy is lower [26].

Biomarker Validation Process

Utilizing the FFP strategy, various questions can be asked at the beginning of any biomarker research and development (R&D) project: For what purpose are the biomarker(s) being identified? Is the clinical validation intended to validate the biomarker for specificity, sensitivity, and reproducibility for diagnostic purposes, or to qualify it as a surrogate endpoint? What business-critical decisions will be made based on the biomarker data? These questions not only help determine the level of confidence required for the biomarker but also help in strategizing the phases of validation. For example, in the case of developing surrogate biomarkers to replace clinical endpoints, candidate biomarkers would, theoretically, evolve over time toward the point of surrogacy as the research project moves through the different phases of validation. For the purposes of this chapter, the validation process from initial biomarker development to postmarket surveillance has been broken down into five major phased strategies (Figure 1). It is important to note that the overall process is a continuous loop driven by the intended purpose of the biomarker data, and that the flow of the overall process may be subject to change depending on the results generated at each phase [22].

In general, biomarker validation is a multifaceted process that includes determining the sensitivity, specificity, and reproducibility of the assay as well as clinical sensitivity and specificity [27]. This applies to both methodologic and clinical validation. Method validation pertains to the process by which the assay and its performance characteristics are assessed. This assessment is performed throughout biomarker translation and is based on several fundamental parameters: accuracy, precision, selectivity, sensitivity, reproducibility, and stability [16,22]. Method validation is not discussed in detail in this chapter.
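The population-screening requirement above can be made concrete with a small worked example. The sketch below (in Python; the sensitivity, specificity, and prevalence values are our illustrative assumptions, not figures from any cited study) applies Bayes' rule to show how prevalence interacts with assay performance:

```python
# Positive predictive value (PPV) of a hypothetical screening biomarker.
# All inputs are illustrative assumptions, not data from this chapter.

def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """P(disease | positive result), by Bayes' rule."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# A test with 99% sensitivity and 95% specificity looks excellent, yet
# its PPV collapses as the screened condition becomes rarer:
for prevalence in (0.10, 0.01, 0.001):
    print(f"prevalence {prevalence:5.3f} -> PPV {ppv(0.99, 0.95, prevalence):.3f}")
# prevalence 0.100 -> PPV 0.688
# prevalence 0.010 -> PPV 0.167
# prevalence 0.001 -> PPV 0.019
```

At a prevalence of 0.1%, fewer than 2% of positive calls are true positives. This is why a biomarker intended as a population screen must pair very high sensitivity with extremely high specificity, and why the consequence of a false indication should drive the stringency of validation.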
PRECLINICAL AND CLINICAL VALIDATION PHASES

Clinical validation is the documented process demonstrating the evidentiary link of a biomarker with a biological process and clinical endpoints [15,16].
Figure 3 Biomarker development and translation. Biomarker translation from discovery to clinical use involves five general steps or phases. Biomarker candidates are initially discovered in a primary (internal) cohort and confirmed in a secondary (external) cohort through clinical observations. The biomarker candidates that best fit their purpose and also satisfy a clinical need then enter prospective phase I and II clinical trials. Once biomarkers have been used in these early clinical trials to demonstrate safety and efficacy, they may be considered "initially validated biomarkers." Following large-scale clinical trials (phase III), and once biomarkers have been used for decision making, they may be considered valid and may be adopted for clinical use. Biomarker assessment continues in postmarket phase IV trials. SOP-driven analytical quality assurance is performed throughout all processes of biomarker translation.
The validation process whereby biomarkers are translated from discovery to clinical use should be customized according to biomarker type, use, variability, and prevalence. However, the general process for validating any biomarker is the same. Before any biomarker can be applied clinically, it is subjected to analytical/method validation and also clinical validation. Biomarker validation can be performed in five general translational steps: preliminary biomarker discovery, biomarker discovery, safety and efficacy clinical trials, large-scale clinical trials, and continued surveillance (Figure 3). Clinical biomarkers are validated in retrospective or prospective analyses and biomarker trials, or drug trials [12]. The validation process should reflect the clinical performance of the biomarker(s), based on existing clinical data, new clinical data, literature review findings, or current clinical knowledge [12]. Moreover, it should be an evidentiary and statistical process that aims to link
a biomarker to specific biological, pathological, pharmacological (i.e., drug effect), or clinical endpoints [22].

Internal Validation

During the biomarker discovery phase, the main focus is to identify biomarkers that distinguish between the treatment and control groups or that correlate with the clinical observation of interest. Prior to this process, and depending on the sample size, plans can be made to allocate the subjects into two individual cohorts for the purpose of internal validation. It is important, however, to distinguish this allocation/splitting process from that used for the internal validation of classifiers:

• Internal validation of candidate biomarkers identified in the discovery cohort:
  • Use of different platforms
  • Use of different statistical methods
• Internal validation of classifiers:
  • Split-sample method
  • Some form of cross-validation method

There are several different approaches to separating the initial pool of patients or samples for internal validation and creating classifiers; the traditional and alternative approaches are outlined here. Traditionally, a discovery and a validation cohort are created (Figure 4). Genomic biomarker candidates may first be identified in the discovery cohort using microarray analysis, for example. One way of identifying a panel of biomarkers from the candidates is by use of classification methods; a minimal sketch of this split-sample workflow follows below. The samples from the discovery data set can be split into a training set, used to find a classifier, and a test set, used for internal validation of the classifier. The classifier is an equation combining the expression values of the candidate markers that best distinguish the treatment from the control group. Once the classifier has been developed, the panel of biomarkers may be validated again in the validation cohort before the external validation phase is carried out. In this sense, the data obtained in the validation cohort serve purely as an additional test set.
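To make the split-sample workflow tangible, the following sketch uses scikit-learn on simulated data. The matrices, labels, and choice of model are our assumptions for illustration only; nothing here is a method prescribed by the chapter or by regulators.

```python
# Minimal sketch of traditional internal validation (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Stand-ins for expression matrices (samples x candidate markers):
X_discovery = rng.normal(size=(120, 50))
y_discovery = rng.integers(0, 2, size=120)  # treatment vs. control labels
X_valcohort = rng.normal(size=(60, 50))     # internal "validation" cohort
y_valcohort = rng.integers(0, 2, size=60)

# Split the discovery cohort into a training set and a test set.
X_train, X_test, y_train, y_test = train_test_split(
    X_discovery, y_discovery, test_size=0.3,
    stratify=y_discovery, random_state=0)

# The "classifier" is an equation combining marker values; here an
# L1-penalized logistic regression, which also selects markers.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(penalty="l1", solver="liblinear"))
clf.fit(X_train, y_train)

print("test-set accuracy:         ", clf.score(X_test, y_test))
print("validation-cohort accuracy:", clf.score(X_valcohort, y_valcohort))

# Cross-validation is the usual alternative to a single split:
print("5-fold CV accuracy:",
      cross_val_score(clf, X_discovery, y_discovery, cv=5).mean())
```

With pure noise such as this, all three estimates should sit near 0.5; a classifier that scores far better on its training data than on the test set or the validation cohort is showing the overfitting signature discussed later in the chapter.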
Figure 4 Traditional internal validation approach. Traditional approaches to internal validation rely on two cohorts: discovery and validation. The discovery cohort is typically separated into training and test sets for the development of classifiers based on biomarker(s) of interest. The classifiers are then tested in a separate cohort internally before external validation.
Although the traditional internal validation model is simple and logical, it may not be the most applicable strategy in the real world, given the complexity of most biomarker research today. There are two potential weaknesses to this approach. First, separating the available samples into discovery and validation cohorts from the outset might unintentionally restrict the use of the samples or data to their arbitrary labels. In reality, during the developmental phase of biomarker research, different statistical analyses are often carried out to identify potentially useful biomarkers; depending on the types of comparisons being made, a sample (or portion of the data) could be used for discovery purposes in one analysis while being used for validation in another. Second, in the case where the classifier fails to show acceptable accuracy in the testing set of the initial discovery cohort, one might consider incorporating additional data and reanalyzing the training set. However, collecting additional patient samples or information may not always be possible; this situation may warrant the reallocation of data from the validation cohort to the discovery cohort training set in order to redevelop the classifier.

An alternative model of internal validation may be particularly useful for smaller sample sizes (Figure 5). As in the traditional approach, the subjects are divided into two separate groups. However, neither cohort is marked strictly as discovery or validation, to minimize potential confusion in later analyses and to maximize the utility of the available data. As an example, a classifier can be generated and internally validated by creating training and testing sets from cohort 1. The same classifier can then be validated again in a separate cohort (cohort 2). Based on the outcome, the R&D team may decide that the classifier is ready for external validation or that a more robust classifier needs to be developed. In the latter case, a new classifier can be created by utilizing cohort 2 as the discovery cohort. Once the new classifier has been validated internally in cohort 2, it can be evaluated again in cohort 1, which is used entirely as another testing set.
[Figure 5 (schematic): classifiers 1, 2, and 3 are developed and tested across cohort 1, cohort 2, and the two cohorts combined, followed by external validation via one of three routes: (1) obtain samples from an external collaborator, or download publicly available data, and test the classifier on-site using the team's own processes and methods (SOPs); (2) send a separate set of samples (cohort 3) to an external site to be analyzed using the collaborator's processes and methods (SOPs); or (3) provide the classifier to the collaborator, who tests it on a truly independent cohort with the collaborator's SOPs at the external site.]
Figure 5 Alternative internal validation approach, which may be particularly useful for smaller sample sizes. As in the traditional approach, the subjects are divided into two separate cohorts. However, in this model a classifier can be developed from either cohort or, whenever necessary, from the two cohorts combined to improve the robustness of the classifier.
Finally, the same decision process is repeated: Is the classifier robust enough to stand up to the scrutiny of external validation? Are additional data required to create a new classifier? In the latter situation, it may be necessary to combine cohorts 1 and 2 to develop larger training and test sets for internal validation. This may, in turn, help create a new classifier that is potentially more robust and applicable to a larger intended patient population. The main advantage of this model is that it provides many possible internal validation approaches, particularly in a project where the sample size is small. This gives flexibility to the overall validation process and allows decisions to be made based on the results generated at each step.

During the development and testing of a biomarker panel, concurrent literature and technical validations may also take place. Depending on the characteristics of the biomarker, various technologies and analytical techniques may be applied. For example, quantitative polymerase chain reaction (qPCR) may be used to validate the microarray data generated from cohort 1. Once the initial findings, such as the differential expression of particular genomic biomarkers, have been confirmed, qPCR can also be used to test the candidate biomarkers in cohort 2.
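One simple way to quantify the cross-platform agreement just described is to correlate fold-change estimates from the two technologies. In the sketch below, the fold-change values for six hypothetical genes are invented for illustration; the approach, not the numbers, is the point.

```python
# Hypothetical cross-platform check: do microarray log2 fold-changes
# for candidate genes track the qPCR estimates? Values are invented.
from scipy.stats import spearmanr

log2fc_microarray = [2.1, -1.4, 0.3, 1.8, -0.9, 2.6]
log2fc_qpcr = [1.8, -1.1, 0.1, 2.2, -0.5, 2.9]  # e.g., from delta-delta Ct

rho, pval = spearmanr(log2fc_microarray, log2fc_qpcr)
print(f"Spearman rho = {rho:.2f}, p = {pval:.4f}")
# A rank-based correlation tolerates the platforms' different dynamic
# ranges; a high rho (about 0.94 here) supports a signal that is not
# an artifact of either platform.
```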
There are a number of potential advantages to cross-platform or cross-technology validations. Relative to high-throughput technologies such as microarray chips, which are typically used for identifying genomic biomarker candidates, the use of low-throughput techniques such as qPCR may help reduce the cost of the validation process. Also, by applying different platforms and analytical methods, biomarkers that are found to be statistically significant across the various cohorts are less likely to be related to or influenced by platform-specific bias. More recently, studies have also suggested that combined data from multiple platforms (i.e., genomics and proteomics) are far superior for assessing potential biomarkers to data generated with one technical approach alone [21].

The internal validation strategy will depend largely on the characteristics of the biomarkers (i.e., genomic, proteomic, metabolomic), the intended use (i.e., prognostic or diagnostic), and the sample size (i.e., traditional or alternative model). Nonetheless, the general recommendation is that the preliminary or developmental studies be large enough that either split-sample validation or some form of cross-validation can be performed to demonstrate the robustness of the internally validated prediction [28,29].

External Validation

The use of internal discovery and validation cohorts has helped studies to develop biomarker panels with impressive accuracy and precision for the predicted outcome [28]. However, internal validation should not be confused with external validation, which is typically performed on a cohort from a different geographical location and is meant to simulate broader clinical application [28]. Internal validation, even with the use of "independent" cohorts, does not guarantee generalizability. In principle, the aim of external validation is to verify the accuracy and reproducibility of a classifier in a truly independent cohort with a similar underlying condition, given a defined clinical context [30]. External validation is a crucial step before a classifier is implemented in a larger clinical setting for patient management [30].

Like internal validation, the design of external validation processes will depend on the intended use of the biomarker. For predictive biomarkers, prospective trials may be necessary, as they are considered the gold standard [28,30]. Moreover, it has been argued that a biomarker is more readily tested in a prospective clinical trial once retrospective studies conducted at external institutions have consistently shown the biomarker's ability to perform at the required levels of sensitivity and specificity [8]. To accelerate the translational process from external validation to phase I and II clinical trials, partnerships and collaborations are often established. In some circumstances, classifiers may need to be tested in intended patients at a collaborator's site or external institution in a prospective manner. In other cases, retrospective studies using patient samples may suffice. There are
several important factors to consider when conducting an external validation, regardless of the directionality of the study (i.e., prospective or retrospective). Many of these factors are also critical in designing clinical trials.
USE OF STATISTICS AND BIOINFORMATICS

Due to the complexity and multifaceted nature of biomarker research, it is not uncommon, especially in recent years, to see different "omics" techniques, bioinformatics, and statistical methods incorporated into biomarker research and development [21]. Thus, there has been a large increase in the number of possible approaches to analytical and clinical validation. As illustrated in Figure 1, the use of bioinformatics and statistics to validate biomarkers can accompany any preclinical or clinical stage. More specifically, they are applied with greater freedom during the initial phases of biomarker development. Ideally, by the time a biomarker enters large clinical trials or is put on the market under continued surveillance, its robustness should already have been tested with a variety of statistical and bioinformatical approaches. Regardless, these computational approaches, although not very stringent, are fundamentally useful tools for ensuring the validity of results before some of the more time-consuming and costly validation methods (i.e., external validation or clinical trials) are used.

Statistical Approaches

High-throughput "omics" technologies such as microarray-based expression profiling generate massive amounts of data, and new statistical methods are continually being developed to deal with this challenge. The availability of a plethora of statistical techniques, when used with proper precautions, has provided a relatively quick and inexpensive way to validate biomarker candidates. A trial-and-error process with different statistical methods is especially common during the early stages of biomarker development. In the exploratory phase of a biomarker project, various computational and mathematical techniques, such as multivariate analysis or machine learning, are often utilized to detect differences between treatment and control groups or between patients with different clinical presentations [18]. Statistically distinctive genomic biomarkers identified during the exploratory phase by one method may subsequently be subjected to a different technique. Similarly, given the same set of samples and expression measurements, permutation can be carried out on the data set prior to repeating an analysis. Congruency between the results generated using different methods may ultimately translate into an increase in biomarker confidence. This is especially useful during the internal and external phases of validation, when greater statistical freedom is exercised for the purpose of identifying or ranking biomarkers. Ideally, by the time a panel of biomarkers is selected for use in a clinical trial, specific algorithms and statistical approaches should be established. It has been suggested in a recent FDA presentation that the expression patterns or algorithms be developed and confirmed in at least two independent data sets (a training set and a test set, respectively) [12].
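The permutation idea mentioned above can be written out in a few lines. A minimal sketch (the measurements and equal group sizes are our assumptions) estimates how often a group difference as large as the observed one would arise if the treatment and control labels carried no information:

```python
# Label-permutation test for one candidate biomarker (simulated data).
import numpy as np

rng = np.random.default_rng(1)
treated = np.array([5.1, 6.3, 5.8, 6.9, 5.5, 6.1])
control = np.array([4.2, 4.9, 5.0, 4.4, 5.2, 4.6])
observed = treated.mean() - control.mean()

pooled = np.concatenate([treated, control])
n_perm = 10_000
exceed = 0
for _ in range(n_perm):
    rng.shuffle(pooled)  # scramble the group labels
    diff = pooled[:treated.size].mean() - pooled[treated.size:].mean()
    if abs(diff) >= abs(observed):
        exceed += 1

print(f"observed difference: {observed:.2f}")
print(f"permutation p-value: {(exceed + 1) / (n_perm + 1):.4f}")
```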
Bioinformatical and Bibliographical Approaches

Potential biomarker candidates should be checked continuously using bioinformatical and bibliographical tools to provide biological and clinical context. This step should be performed in parallel with the stratified statistical approaches. Although bioinformatics originated in the field of genomics, it now plays an important role in connecting and integrating biological and clinical data from a variety of sources and platforms [21]. Biomarkers that fit accepted theory are more likely to be accepted readily by the research community and the public [14]. This is especially important when selecting and transitioning biomarkers from the internal and external validation phases into clinical trials. Numerous bioinformatical tools, many of which are open-source, are available to accelerate this evidentiary process. Gene and protein biomarker candidates can be processed first and then grouped through gene ontology using tools such as FatiGO and AmiGO [31,32]. More sophisticated pathway-oriented programs such as Ingenuity and MetaCore are also potentially useful in helping the R&D team home in on biomarkers belonging to pathways and processes relevant to the clinical endpoints of interest [33,34].

Another advantage of the bioinformatical approach is the ability to link clinical measurements across platforms and/or to identify a unique set of molecular signatures. For example, during external validation on a separate cohort of patients, results might indicate that the biomarkers initially identified as statistically significant were individually unable to predict the clinical endpoint strongly. However, linking expression data across platforms (i.e., genomic and proteomic biomarkers) may help provide a more comprehensive understanding of the biology and establish a stronger correlation between the biomarker and the clinical presentation of the patient [21].
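Underneath many of these ontology tools sits a simple over-representation calculation. The hypergeometric form that such tools typically report is sketched below; all gene counts are invented for illustration.

```python
# Is a pathway over-represented among biomarker candidates? The
# hypergeometric test behind many gene-ontology enrichment tools.
# All counts below are hypothetical.
from scipy.stats import hypergeom

M = 20000  # genes measured on the platform
n = 150    # genes annotated to the pathway of interest
N = 80     # biomarker candidates selected
k = 6      # candidates that fall in the pathway

# P(k or more pathway genes among N random draws from M):
p_enrichment = hypergeom.sf(k - 1, M, n, N)
print(f"enrichment p-value = {p_enrichment:.2e}")
# Under random selection only N*n/M = 0.6 pathway genes are expected,
# so observing 6 yields a very small p-value.
```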
FROM EXTERNAL VALIDATION TO CLINICAL TRIALS: THE IMPORTANCE OF COHORT, TECHNICAL, AND COMPUTATIONAL FACTORS

As biomarker candidates continue to pour out of research laboratories, it has become increasingly evident that validation is much more difficult and complex than discovery. There are a multitude of general and specific considerations and obstacles to address in order to validate biomarker candidates clinically, some of which were discussed earlier and some of which we discuss now.
Translation of a candidate biomarker from the discovery phase into the validation phase of clinical trials imposes huge organizational, monetary, and technical hurdles that need to be considered at the outset. The team needs to meet requirements in sample number and feature distribution (cohort factors), in sample quality and in the accuracy and precision of sample handling and processing (technical factors), and in the analysis (computational factors). As we will see, each of these steps is extremely important and, at the same time, very challenging.

Cohort Factors

During any phase of external or clinical validation of biomarkers, bias can be introduced unintentionally. This is a major concern in the design, conduct, and interpretation of any biomarker study [35,36]. Starting with the population selection process, variations at the biological level can lead to discernible differences in body fluid and tissue compositions and in biomarker measurements [35]. As such, basic criteria such as gender, age, hormonal status, diet, race, disease history, and severity of the underlying condition are all potential sources of variability [35]. Moreover, the patient cohort characteristics in the validation phase of a candidate biomarker must be representative of all patients for whom the biomarker is developed. Reducing bias requires the inclusion of hundreds of patients per treatment or disease arm; a sample-size sketch follows at the end of this subsection. Not every patient is willing to give biosamples for a biomarker study, however, and only 30 to 50% of the patients in a clinical trial may have signed the informed consent for the biomarker analysis, presenting an important concern for statisticians. There is a risk of population bias, since a specific subset of patients may donate biosamples, hence skewing the feature distribution. Another risk factor that needs to be dealt with in some instances is a lack of motivation of clinical centers to continue a study and collect samples. Although it is sometimes possible to compensate for a smaller sample size through the use of different statistical methods, such as the local pooled error method for microarray analysis, the analysis team needs to ensure that center selection introduces no patient selection bias: affluent centers and their patients may have characteristics different from those with other social backgrounds. A lack of available samples or of patient enrollment may ultimately translate into a decrease in biomarker confidence or in the generalizability of the intended biomarker. In reality, collaboration will probably make biomarker validation more robust and economically feasible than working independently [37]. Since the issue of intellectual property (IP) agreements is minimal for the biomarker validation process, as it is not patentable, open interactions among the steering committees of large trials or cohort studies should be encouraged [37,38].
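The "hundreds of patients per arm" figure quoted above can be sanity-checked with a standard power calculation. The sketch below uses statsmodels; the effect size, error rates, and consent rate are our assumptions for illustration.

```python
# Per-arm sample size to detect a modest biomarker difference between
# two groups with a two-sided t-test. Inputs are assumptions.
from statsmodels.stats.power import TTestIndPower

n_per_arm = TTestIndPower().solve_power(
    effect_size=0.2,           # small standardized difference (Cohen's d)
    alpha=0.05,                # two-sided type I error
    power=0.80,                # desired power
    alternative="two-sided",
)
print(f"patients required per arm: {n_per_arm:.0f}")  # about 393

# If only ~40% of trial patients consent to biomarker sampling, the
# trial must enroll roughly n_per_arm / 0.4 patients per arm.
```

A small standardized effect thus already demands several hundred biomarker-evaluable patients per arm, and a 30 to 50% consent rate inflates the required enrollment further.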
Technical Factors

Sample Collection, Preparation, and Processing Even with the establishment of a relatively homogeneous cohort for external validation or clinical trials, results from the intended biomarker assays are valid only if sample integrity is maintained and reproducible from sample collection through analysis [22]. Major components of assay and technical validation are:

• Reference materials
• Quality controls
• Quality assurance
• Accuracy
• Precision
• Sensitivity
• Specificity
Prior to the start of a validation phase, the team needs to decide on the sampling procedure, sample quality parameters, and sample processing. Reproducibility is the key to successful biomarker validation. It is important that standardized operating protocols (SOPs) for sample collection, processing, and storage be established to provide guidance for the centers and the laboratory teams. Nurses and technicians should be trained to minimize the variability in sample collection and handling. Most biomarkers are endogenous macromolecules which can be measured in human biological fluids or tissues [39]. The collection process for these specimens seems straightforward. However, depending on the type of biomarker (genomic, metabolomic, or proteomic) and the collection methods, various factors may need to be taken into account when designing the SOPs [22]. Several examples are given in Table 1. Processing the samples in a core facility reduces the risk of handling bias, since the same personnel would handle all samples in the most reproducible way possible. Once the samples are collected, systematic monitoring of their quality over time should also be established. Random tests can be conducted to ensure the short-term, benchtop, or long-term stability of the samples to uphold the integrity of the biolibrary [38].
TABLE 1 Sample Considerations

Collection (biological fluids or tissues): type of needle; type of collection tube or fixation; location of collection; time of collection; status of patient.
Preparation and processing: dilution; plasma or serum isolation; temperature of processing; reagents used.
Storage: type of storage containers; temperature of storage; duration of storage.
Unfortunately, during external and clinical validations it is likely that the independent collaborators utilize a completely different set of SOPs. Even though this could ultimately contribute to the robustness of the validation process, it may be useful to establish specific guidelines that would help minimize site-to-site variability. There are several ways to achieve this. Providing the laboratories and centers with kits that contain all the chemicals needed for processing the samples can potentially reduce the risk of batch effects. Similarly, as pointed out in Figure 5, it may be feasible to collect a separate set of samples internally (to minimize sample collection variability) but send them to a collaborator's site for external validation using independent processing and analytical SOPs. It is of utmost importance that any deviation from the SOPs be noted. These features can later be accommodated by statisticians for better modeling. Any available information on bias or on deviations from protocols or batch processing is useful in the computational process; excluding it may influence the decision as to whether a biomarker was or was not validated.

Sustained Quality Assurance, Quality Control, and Validation of the Biomarker Tests or Assays

The aforementioned cohort factors (i.e., patient selection) and technical factors (i.e., sample collection, processing, and storage) can all have a significant impact on any of the phased strategies of biomarker validation shown in Figure 1. In reality, however, validation of the biomarker test is just as important as validation of the biomarker itself. To improve the chance of successful translation from external and clinical validation results to patient care, the analytical validity of the test (does the test measure the biomarker of interest correctly and reliably?) should be monitored closely along with the clinical validity of the biomarker (does the biomarker correlate with the clinical presentation?) [8,12]. Furthermore, to sustain quality assurance and quality control between different stages of biomarker development, it may be necessary to carry out multiple analytical or technical validations when more than one platform is used.

Computational Factors

In addition to the statistical and bioinformatical factors described earlier, the team must be aware that a vast number of samples imposes a huge computational burden on software and hardware. This is especially true when high-throughput, high-performance technologies or high-density arrays must be used for the validation process. Failing to consider this issue may ultimately prove costly in both money and time. To deal with the massive amounts of data generated by these technologies, new statistical techniques are continuously being developed. The statistical methods should include those that were used successfully in the discovery phase; if their performance is not as good as in prior analyses, refinement of the algorithms will be necessary ("adaptive statistics").
Other Challenges of Clinical Biomarker Validation There are many additional barriers and concerns for clinical validation of biomarkers:

• Choice of matrix (readily accessible, effect on biomarker concentrations)
• Variability (interindividual and intraindividual)
• Preparing calibration standards
• Implementation of quality control to assure reproducibility
• Limited availability of clinical specimens
• Heterogeneity of biomarkers (isoforms, bound states)
• IP protection (lack of collaboration)
• Lack of clear regulatory guidance

As discussed, the discovery and validation processes involve multicenter studies with large patient cohorts and technical equipment, steered by a vast number of staff members. The enormous costs of these studies can in most cases be covered only by consortia, consisting potentially of academic centers and pharmaceutical companies. The establishment of such consortia often has to overcome legal hurdles and IP issues, which may be a time-consuming process. As mentioned above, establishment of a biomarker may involve the recruitment of hundreds of patients per treatment arm. Both recruitment time and success rate are unpredictable at the beginning of the study phase.

It is during the initial phase that the team needs to determine what really constitutes a good biomarker. Key questions in this decision include: What threshold for decision making needs to be established to call a biomarker "useful"? How do we achieve a "robust" biomarker, one that is not easily influenced by factors such as location and personnel? How economical does the biomarker test need to be for it to be used? Which biomarker matrix should be selected, given that this may affect future validation possibilities? Accessible matrices such as urine or blood, with limited or known concentration variability, are ideal.
BIAS AND VARIABILITY: KEY POINTS REVISITED

To summarize the sections above, variability is a major obstacle in biomarker validation, regardless of the matrix, the type of biomarker, and its use. As noted above, there are two types of variability: intraindividual variability, which is usually related to laboratory techniques, sample timing, drug effects, and within-subject biological processes; and interindividual variability, resulting from different individual responses involving multiple genetic factors [40]. Biological variability may be difficult to assess, but it is important to control for this factor.
TABLE 2 Fundamental Concerns in Biomarker Validation

Overfitting. Concern: false positives and negatives; high sensitivity and specificity found but fail on independent validation sets. Solutions: increase sample size; statistical approaches; receiver operator characteristic curves.
Bias. Concern: misidentification of differences between samples. Solutions: control for confounding factors.
Generalizability. Concern: can results be applied to appropriate clinical populations? Solutions: representative validation cohorts.
A statistical correlation to a clinical endpoint for a candidate biomarker cannot be determined without assessing biological variability, as the overall noise of a sample is the sum of both analytical and biological variability [6]. A biomarker with wide biological variability or time fluctuations that are difficult to control may be rejected [6]. Diurnal variability may require sample pooling or collection at the same time of day [6]. Biomarkers have diverse molecular structures, including possible bound states, which also need to be considered as influences on variability.

There are specific considerations for any biomarker validation study. Overfitting, bias, and generalizability are three of the most fundamental concerns pertaining to clinical biomarker validation (Table 2) [41]. With the introduction of high-throughput discovery strategies, overfitting has become a particular fear. These discovery platforms are designed to measure countless analytes, and there is therefore a high risk of false discovery. When a large number of variables are measured on a small number of observations to produce high sensitivity and specificity, the results may not be reproducible on independent validation sets. Some biomarker candidates may be derived simply from random sample variations, particularly with inadequate sample sizes [26]. A false positive may be thought of as a critical part of a disease process when in fact it is only loosely associated with, or randomly coincident with, disease diagnosis or progression [3]. As mentioned earlier in the chapter, a biomarker may correlate with a disease statistically but not prove to be useful clinically [8,42]. Increasing the sample size and using receiver operator characteristic curves may help overcome this concern of overfitting.

Bias is another major concern during biomarker validation, as there is often potential for misidentifying the cause of the differences in biomarkers between samples. Confounding variables such as age, race, and gender should be controlled for, either through statistical modeling or through validation study design, to limit the effects of bias. Since validation cohorts require suitable diversity for widespread utility, bias may be difficult to avoid entirely through study design.
Similar to bias, most issues pertaining to the generalizability of a biomarker across clinical populations can be addressed through careful consideration of cohort selection. Cohort factors were discussed briefly above. To increase generalizability, the later phases of validation should include more rigorous testing of potential interfering endogenous components by including more diverse populations with less control of these confounding variables [22]. For example, in later-stage clinical trials there should be less control of diet and sample collection and more concomitant medications and co-morbidities [22]. This will allow a biomarker to be used in more clinically diverse situations.
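As a concrete illustration of the overfitting row in Table 2, compare ROC performance on the data used to fit a marker panel against performance on untouched data. The sketch below simulates the classic many-features, few-samples setting; all data are simulated and the model choice is ours.

```python
# Overfitting made visible: AUC on the modeling data versus AUC on an
# independent set, with many noise features and few samples.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
X_model = rng.normal(size=(40, 500))   # 40 patients, 500 "markers"
y_model = rng.integers(0, 2, size=40)  # labels carry no real signal
X_indep = rng.normal(size=(40, 500))
y_indep = rng.integers(0, 2, size=40)

clf = LogisticRegression(max_iter=5000).fit(X_model, y_model)
auc_model = roc_auc_score(y_model, clf.predict_proba(X_model)[:, 1])
auc_indep = roc_auc_score(y_indep, clf.predict_proba(X_indep)[:, 1])
print(f"AUC on modeling data:    {auc_model:.2f}")  # close to 1.0
print(f"AUC on independent data: {auc_indep:.2f}")  # close to 0.5
```

Reporting only the first number reproduces the failure mode in Table 2: high apparent sensitivity and specificity that fail on independent validation sets. The independent AUC near 0.5 is what an honest validation set reveals.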
KEY MESSAGES

Many of the biomarkers in current clinical use have become accepted via debate, consensus, or merely the passage of time [13]. This rather unofficial establishment of biomarkers has been very inefficient. Importantly, biomarkers can no longer become accepted in this way, as they would fail to meet the current regulatory standards of modern medicine. Contemporary biomarkers must be tested in highly regulated human clinical trials [43]. To date, the clinical trial process has not been very efficient, and a typical biomarker life cycle from discovery to clinical use may take decades. For example, the evolution of prostate-specific antigen (PSA) as a biomarker for prostate disease diagnosis and monitoring took 30 years to reach regulatory approval by the FDA [44]. To expand the biomarker repertoire currently used in clinical practice, the acceptance process for new biomarkers needs to become much more efficient and cost-effective.

The general problem of the lack of regulatory guidance is very likely to be addressed formally by regulatory bodies in the near future. Problems such as the availability of samples will also probably improve as collaborations are established and the ethical issues surrounding biobanks are clarified. Even if a biomarker candidate fails during validation, much may be learned during the process about the pathophysiology of the disease and the corresponding drug effects [9]. However, without successful validation and integration of biomarkers into clinical use, much of the research effort, particularly in terms of biomarkers for drug development, can be futile.
REFERENCES

1. Biomarkers Definition Working Group (2001). Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther, 69(3):89–95.
2. Colburn WA (1997). Selecting and validating biologic markers for drug development. J Clin Pharmacol, 37(5):355–362.
3. Colburn WA (2003). Biomarkers in drug discovery and development: from target identification through drug marketing. J Clin Pharmacol, 43(4):329–341.
4. Boguslavsky J (2004). Biomarkers as checkpoints. Drug Discov Dev, Sept.
5. Hunt SM, Thomas MR, Sebastian LT, et al. (2005). Optimal replication and the importance of experimental design for gel-based quantitative proteomics. J Proteome Res, 4(3):809–819.
6. Lee JW, Figeys D, Vasilescu J (2007). Biomarker assay translation from discovery to clinical studies in cancer drug development: quantification of emerging protein biomarkers. Adv Cancer Res, 96:269–298.
7. Listgarten J, Emili A (2005). Statistical and computational methods for comparative proteomic profiling using liquid chromatography–tandem mass spectrometry. Mol Cell Proteom, 4(4):419–434.
8. Bast RC Jr, Lilja H, Urban N, et al. (2005). Translational crossroads for biomarkers. Clin Cancer Res, 11(17):6103–6108.
9. Kuhlmann J (2007). The applications of biomarkers in early clinical drug development to improve decision-making processes. Ernst Schering Res Found Workshop, 59:29–45.
10. Mandrekar SJ (2005). Clinical trial designs for prospective validation of biomarkers. Am J Pharmacogenom, 5(5):317–325.
11. Lachenbruch PA, Rosenberg AS, Bonvini E, Cavaille-Coll MW, Colvin RB (2004). Biomarkers and surrogate endpoints in renal transplantation: present status and considerations for clinical trial design. Am J Transplant, 4(4):451–457.
12. Harper CC (2007). FDA perspectives on development and qualification of biomarkers. In: Rediscovering Biomarkers: Detection, Development and Validation. GTCbio, San Diego, CA.
13. Goodsaid F, Frueh F (2006). Process map proposal for the validation of genomic biomarkers. Pharmacogenomics, 7(5):773–782.
14. Bonassi S, Neri M, Puntoni R (2001). Validation of biomarkers as early predictors of disease. Mutat Res, 480–481:349–358.
15. Wagner JA (2002). Overview of biomarkers and surrogate endpoints in drug development. Dis Markers, 18(2):41–46.
16. Wagner JA, Williams SA, Webster CJ (2007). Biomarkers and surrogate end points for fit-for-purpose development and regulatory evaluation of new drugs. Clin Pharmacol Ther, 81(1):104–107.
17. Goodsaid F, Frueh F (2007). Biomarker qualification pilot process at the US Food and Drug Administration. AAPS J, 9(1):E105–E108.
18. Baker M (2005). In biomarkers we trust? Nat Biotechnol, 23(3):297–304.
19. Benowitz S (2004). Biomarker boom slowed by validation concerns. J Natl Cancer Inst, 96(18):1356–1357.
20. Lee JW, Weiner RS, Sailstad JM, et al. (2005). Method validation and measurement of biomarkers in nonclinical and clinical samples in drug development: a conference report. Pharm Res, 22(4):499–511.
21. Ilyin SE, Belkowski SM, Plata-Salaman CR (2004). Biomarker discovery and validation: technologies and integrative approaches. Trends Biotechnol, 22(8):411–416.
22. Lee JW, Devanarayan V, Barrett YC, et al. (2006). Fit-for-purpose method development and validation for successful biomarker measurement. Pharm Res, 23(2):312–328.
23. Peck RW (2007). Driving earlier clinical attrition: if you want to find the needle, burn down the haystack. Considerations for biomarker development. Drug Discov Today, 12(7–8):289–294.
24. Groopman JD (2005). Validation strategies for biomarkers old and new. AACR Educ Book, 1:81–84.
25. Normolle D, Ruffin MT IV, Brenner D (2005). Design of early validation trials of biomarkers. Cancer Inf, 1(1):25–31.
26. Jarnagin K (2006). ID and validation of biomarkers: a seven-fold path for defining quality and acceptable performance. Genet Eng Biotech News, 26(12).
27. O'Connell CD, Atha DH, Jakupciak JP (2005). Standards for validation of cancer biomarkers. Cancer Biomarkers, 1(4–5):233–239.
28. Simon R (2005). Roadmap for developing and validating therapeutically relevant genomic classifiers. J Clin Oncol, 23(29):7332–7341.
29. Simon R (2005). Development and validation of therapeutically relevant multigene biomarker classifiers. J Natl Cancer Inst, 97(12):866–867.
30. Bleeker SE, Moll HA, Steyerberg EW, et al. (2003). External validation is necessary in prediction research: a clinical example. J Clin Epidemiol, 56(9):826–832.
31. Al-Shahrour F, Diaz-Uriarte R, Dopazo J (2004). FatiGO: a Web tool for finding significant associations of gene ontology terms with groups of genes. Bioinformatics, 20(4):578–580.
32. AmiGO. http://amigo.geneontology.org/cgi-bin/amigo/go.cgi.
33. Ingenuity Pathways Analysis. http://www.ingenuity.com/products/pathways_analysis.html.
34. MetaCore Gene Expression and Pathway Analysis. http://www.genego.com/metacore.php.
35. Moore RE, Kirwan J, Doherty MK, Whitfield PD (2007). Biomarker discovery in animal health and disease: the application of post-genomic technologies. Biomarker Insights, 2:185–196.
36. Ransohoff DF (2005). Bias as a threat to the validity of cancer molecular-marker research. Nat Rev Cancer, 5(2):142–149.
37. McCormick T, Martin K, Hehenberger M (2007). The evolving role of biomarkers: focusing on patients from research to clinical practice. Presented at the IBM (Imaging) Biomarker Summit III, IBM Corporation; Nice, France.
38. Maruvada P, Srivastava S (2006). Joint National Cancer Institute–Food and Drug Administration workshop on research strategies, study designs, and statistical approaches to biomarker validation for cancer diagnosis and detection. Cancer Epidemiol Biomarkers Prev, 15(6):1078–1082.
39. Colburn WA, Lee JW (2003). Biomarkers, validation and pharmacokinetic–pharmacodynamic modelling. Clin Pharmacokinet, 42(12):997–1022.
40. Mayeux R (2004). Biomarkers: potential uses and limitations. NeuroRx, 1(2):182–188.
398
CLINICAL VALIDATION AND BIOMARKER TRANSLATION
41. Early Detection Research Network. Request for Biomarkers. Attachment 2: Concepts and Approach to Clinical Validation of Biomarkers: A Brief Guide. http://edrn.nci.nih.gov/colops/request-for-biomarkers. 42. Katton M (2003). Judging new markers by their ability to improve predictive accuracy. J Natl Cancer Inst, 95:634–635. 43. NCI (2006). Nanotechnology-Based Assays for Validating Protein Biomarkers. NCI Alliance for Nanotechnology in Cancer, Bethesda, MD, Nov.–Dec. 44. Bartsch G, Frauscher F, Horninger W (January 2007). New efforts in the diagnosis of prostate cancer. Presented at the IBM (Imaging) Biomarker Summit III, IBM Corporation; Nice, France, Jan.
20

PREDICTING AND ASSESSING AN INFLAMMATORY DISEASE AND ITS COMPLICATIONS: EXAMPLE FROM RHEUMATOID ARTHRITIS

Christina Trollmo, Ph.D., and Lars Klareskog, M.D., Ph.D.
Karolinska Institute, Stockholm, Sweden
INTRODUCTION

Chronic inflammatory diseases include a number of rheumatic, neurological, dermatological, and gastrointestinal diseases that develop as a result of immune and inflammatory reactions. These perpetuating reactions ultimately cause the clinical symptoms, which are subsequently used to classify the condition as a "disease." Analyzing the disease course longitudinally reveals several distinct steps during disease progression, for which the presence of biomarkers is important, both for identification of disease status and for prediction of disease course and treatment options. However, suitable biomarkers are not always available today. We have chosen here to discuss disease characteristics and potential biomarkers in one common chronic inflammatory disease, rheumatoid arthritis (RA), which affects approximately 0.5 to 1% of the population worldwide. In this chapter we focus on the following factors:

1. Onset of disease, in order to discuss in whom and why the disease occurs, and whether and how onset can be predicted
from biomarkers that are present before the occurrence of clinical symptoms
2. Progression of disease with respect to development of joint destruction, but also with respect to other complications, such as extra-articular manifestations, cardiovascular events, and lymphoma development in this patient group
3. Treatment of the individual patient and of specific symptoms and disease manifestations
4. Selection of patients in clinical trials of new drugs
RHEUMATOID ARTHRITIS DISEASE PROCESS

Rheumatoid arthritis (RA) is a disease defined by seven criteria, of which four must be fulfilled to make the diagnosis (Table 1). These criteria have been useful in harmonizing clinical trials and clinical practice. However, they are not based on what is now known about etiology or pathogenesis, and they are not very helpful in selecting treatment for the individual patient. Hence, there is a need to redefine the diagnosis of RA and related diseases, first to define entities more closely related to distinct etiologies and pathogenetic mechanisms, and then to use such new entities for stratification and selection of patients in clinical trials and clinical practice.
Basic features of the immune and inflammatory process in RA are, on the one hand, processes that can be identified in the peripheral circulation, initially
TABLE 1
Classification Criteria for Rheumatoid Arthritis(a)
1. Morning stiffness in and around joints lasting at least 1 hour before maximal improvement
2. Soft tissue swelling (arthritis) of three or more joint areas observed by a physician
3. Swelling (arthritis) of the proximal interphalangeal, metacarpophalangeal, or wrist joints
4. Symmetric swelling (arthritis)
5. Rheumatoid nodules
6. Presence of rheumatoid factor
7. Radiographic erosions and/or periarticular osteopenia in hand and/or wrist joints

Source: Arnett FC, Edworthy SM, Bloch DA, et al. (American Rheumatism Association) (1988). The American Rheumatism Association 1987 revised criteria for the classification of rheumatoid arthritis. Arthritis Rheum, 31:315–324.
(a) Criteria 1 through 4 must have been present for at least six weeks. Rheumatoid arthritis is defined by the presence of four or more criteria, and no further qualifications (classic, definite, or probable) or list of exclusions are required. These criteria demonstrated 91 to 94% sensitivity and 89% specificity for RA compared with non-RA rheumatic disease control subjects.
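The sensitivity and specificity quoted in the table footnote can be combined with a prevalence estimate to show what such criteria would imply as a population screen. The short Python sketch below is illustrative only and is not part of the original chapter; it applies Bayes' rule using the footnote's 91% sensitivity and 89% specificity together with the 0.5 to 1% worldwide prevalence cited in the Introduction, and shows why criteria that perform well against rheumatic-disease controls would still generate mostly false positives in an unselected population.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: the fraction of positive calls that are true positives."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# Footnote values: 91% sensitivity, 89% specificity (vs. rheumatic controls);
# prevalence values from the Introduction (0.5 to 1% of the population).
for prevalence in (0.005, 0.01):
    ppv = positive_predictive_value(0.91, 0.89, prevalence)
    print(f"prevalence {prevalence:.1%}: PPV = {ppv:.1%}")
# prevalence 0.5%: PPV = 4.0%; prevalence 1.0%: PPV = 7.7%
```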
Figure 1 Inflamed RA joint. In healthy joints a thin synovial membrane lines the joint capsule and the synovial fluid. In the RA joint both the synovial membrane and the synovial fluid are infiltrated by inflammatory cells, leading to tender and swollen joints. The synovial membrane also "grows" over the cartilage, aiding in the process of cartilage and bone destruction.
rheumatoid factors (RFs), and on the other hand, processes in the inflamed tissue, mainly the joints (Figure 1). Rheumatoid factors, identified almost 70 years ago, are part of the diagnostic criteria for RA. Present in some 50 to 60% of incident RA cases and increasing over time with active disease, these autoantibodies are also seen in many non-RA conditions and are thus not very specific for the disease. Rheumatoid factors have never been shown to be pathogenic by themselves, neither in patients nor in experimental models. They are thus seen mainly as biomarkers of importance for diagnosis and for prognosis of a more severe disease course, but not necessarily as directly involved in disease pathogenesis.
Joint inflammation in RA is centered on synovial inflammation, in many cases associated with cartilage destruction and concomitant erosions in bone. This inflammation has been studied in great detail over the years, demonstrating that its features are common to many other types of chronic inflammation in other tissues. No truly pathognomonic features of RA have yet been identified; the most distinctive feature so far is the way the inflammatory cells and molecules attack and destroy bone and cartilage.
Because the major features of RA, synovial joint inflammation and the presence of RF, are typical but by no means unique to the disease, there is an obvious need to define more specific features. The identification of such features would enable a better understanding of the pathogenesis of RA, a more accurate diagnosis based on biomarkers, and more specific treatments.
STUDIES ON ETIOLOGY AND PATHOGENESIS AS A BASIS FOR DEVELOPMENT OF BIOMARKERS FOR DIAGNOSIS AND PROGNOSIS IN RA

Any understanding of a complex, partly genetic disease is based on an understanding of how genes and environment interact in giving rise to the immune reactions that contribute to joint destruction and other inflammatory reactions in RA. In healthy subjects, the major role of the immune system and subsequent inflammatory reactions is to defend us against pathogens, but in RA the immune system has partly changed focus to attack our own tissues, primarily the joints; the disease is thus denoted an autoimmune disease.

Genes

There is strong evidence to support a significant genetic component in the susceptibility to RA. Twin studies clearly demonstrate higher disease concordance in monozygotic twins (12 to 20%, depending on the study) compared to dizygotic twins (4 to 5%) and the general population (0.5 to 1%). Analysis at the gene level demonstrates the strongest genetic association with genes within the HLA region, specifically the HLA-DRB1 gene. Its gene products, the MHC class II molecules, were described in the 1970s to be present on cells in the inflamed joint, allowing antigens (parts of proteins, generally from pathogens, but in autoimmune diseases from self-proteins) to be presented to the immune system and subsequently to trigger inflammatory reactions (Figure 2). Serologic and later genetic typing of the various MHC molecules revealed that a few allotypes of HLA-DRB1 were overrepresented in RA patients. A closer analysis demonstrated identical amino acid
Figure 2 Inflammatory cells in the RA joint. A number of immune cells have infiltrated the joint, and local production of inflammatory mediators, including cytokines and antibodies, occurs. The synovial fluid, which functions as a cushion during joint movements, is acellular in a healthy joint. Illustrated to the left is the presentation of an antigen by the dendritic cell to a T-cell; the yellow connector is an MHC class II molecule. Cytokines, released from the immune cells, function as signaling molecules between cells. (See insert for color reproduction of the figure.)
sequences in those regions of the MHC class II DRβ1 chain that are in contact with the antigen and with the T-cell receptor on the T-cell mediating the specific immune response. This amino acid sequence is termed the shared epitope. The nature of the specific immune reactions mediated by the MHC class II molecules has, however, been surprisingly difficult to define. Identifying the specific antigens would help to identify the autoimmune trigger and make it possible to interfere specifically to break the autoimmune reactions. In total, the HLA region contributes 30 to 50% of the genetic component of RA in Caucasians. It appears that other MHC class II genes and allotypes may contribute to RA in other ethnic groups, and much is still not known about the contributions of different genes and allotypes within the MHC to susceptibility and disease course in RA.
Only much more recently was a second genetic risk allele identified in populations of European descent. This minor allele of a nonsynonymous single-nucleotide polymorphism (SNP) in the protein tyrosine phosphatase nonreceptor 22 (PTPN22) gene confers the second-largest genetic risk for the development of RA, with an odds ratio of about 1.8. PTPN22 encodes the intracellular protein lymphoid tyrosine phosphatase, which plays a central role in immune responses as an integral part of the T-cell receptor signaling pathway, inhibiting T-cell activation. This variant was first demonstrated in type 1 diabetes but was soon confirmed in a number of autoimmune diseases, including RA, systemic lupus erythematosus (SLE), and Graves disease. However, other autoimmune diseases show no association with this SNP, suggesting that subsets of autoimmune diseases may be defined accordingly.
The recent introduction of genome-wide association studies allowed, for the first time, good coverage of common variations in the human genome. Hundreds of thousands of SNPs in thousands of samples were genotyped and compared. Interestingly, studies focusing on RA again demonstrated the strongest effects for the two well-documented RA susceptibility genes, HLA-DRB1 and PTPN22. Other genes (Table 2) make a more modest contribution to susceptibility. Future studies will have to identify the remaining, probably smaller, genetic effects and determine how all the genetic effects interact with each other as well as with environmental factors in inducing and perpetuating the disease.
TABLE 2 Contribution of Genetic Risk Factors in Rheumatoid Arthritis

Gene          Odds Ratio
HLA           6.4
PTPN22        1.8
6q23          1.2
STAT4         1.2–1.4
TRAF1/C5      1.1–1.4
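For readers unfamiliar with the statistic, the odds ratios in Table 2 are cross-product ratios from case–control allele counts. The Python sketch below uses invented counts chosen only to land near the PTPN22 value of about 1.8; nothing here comes from the actual RA studies.

```python
def odds_ratio(carriers_cases, noncarriers_cases,
               carriers_controls, noncarriers_controls):
    """Cross-product (odds) ratio of a 2x2 case-control table."""
    return ((carriers_cases * noncarriers_controls)
            / (noncarriers_cases * carriers_controls))

# Hypothetical risk-allele carrier counts among 1000 cases and 1000 controls.
or_estimate = odds_ratio(carriers_cases=280, noncarriers_cases=720,
                         carriers_controls=180, noncarriers_controls=820)
print(f"odds ratio = {or_estimate:.2f}")  # ~1.77, close to the PTPN22 entry
```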
Environment

Information on environmental factors important for the development, perpetuation, or course of RA is surprisingly scarce. Smoking is the only conventional environmental factor that has been linked reproducibly to an increased risk of developing RA. Other exposures, such as silica dust and mineral oils, have been reported in a few studies. It has not yet been possible to verify frequently hypothesized stimuli such as microbial infections with the methods used to date. Smoking was initially considered a nonspecific risk factor, of interest mainly from a public health perspective. However, newer studies indicate that smoking is a specific trigger of RA, as discussed in more detail below.
Immunity

Studies on specific immune reactions in RA have been confined almost entirely to those involving autoantibodies. As described above, they were initially restricted to the measurement of rheumatoid factors, but more recently, antibodies specific for citrullinated proteins have been shown to be of great importance. This is discussed in detail below, since their presence covaries with genetic and environmental factors, providing an important tool for subgrouping patients into different entities of RA.
In the process of joint inflammation and cartilage and bone destruction, many different cells and molecules of the immune system participate. Some of them are illustrated in Figure 2. However, even though some of these inflammatory processes have been revealed in detail, a specific trigger has still not been identified.
In recent years, advances within the field of cytokine regulation and cytokine-directed therapy have largely dominated the research field of RA, illustrating how therapeutic progress is possible even though the role of adaptive immunity in the disease is not fully understood. Cytokines are soluble molecules that mediate communication between cells of the immune system but also with other cells of the body, such as the endothelium. Interestingly, the first cytokine that was targeted, tumor necrosis factor (TNF), belongs to the innate immune system. Blocking IL-1, another cytokine belonging to the innate immune system, has not proven to be as effective as TNF blockade for the majority of RA patients. Recent clinical trials blocking a third cytokine in this family, IL-6, show promising results. This cytokine exerts effects within both the innate and the adaptive immune systems. Temporarily eliminating B-cells, which are the producers of antibodies, has also proven a successful therapy. A third alternative, blocking the interaction between antigen-presenting cells and T-cells, has resulted in an approved therapy. Together, these treatment-based data also demonstrate the significant role of the various parts of the innate as well as the adaptive immune system in disease progression.
BETTER DIAGNOSIS BY MEANS OF BIOMARKERS AND GENETICS
Citrulline Immunity

The presence of antibodies that specifically recognize citrullinated antigens, termed antibodies to citrullinated protein antigens (ACPAs), is a strong predictor of developing RA. Approximately 60% of all RA patients carry such antibodies. These antibodies are highly specific for the disease; that is, they are rare in the normal population (100 μM for the other cell lines. CFU assays (tier 2) are then run to qualify and confirm the findings of tier 1. Tier 2 consists of mouse, rat, dog, and human CFU-GM assays to predict species sensitivity, but these assays are tedious and time consuming, enabling only a limited number of compounds to be evaluated. Increased throughput was achieved when a 96-well plate CFU-GM assay was developed whose results correlated well with those of the traditional CFU-GM assay. The data generated from tiers 1 and 2 can trigger frontloading of in vivo studies to evaluate a more comprehensive hematology profile and include use of additional technologies such as flow cytometry (tier 3). In summary, a three-tiered approach using in vitro, ex vivo, and in vivo studies allows rapid, high-throughput assessment of compounds
with potential chemistry-related bone marrow toxicity, so that such compounds can be eliminated in the early preclinical stages of drug development.

Liver Toxicity

The overall goal of developing an in vitro toxicity testing system is for inclusion in a toolbox as an element of investigative toxicology. This should contribute to hypothesis-based research aimed at identification of mode-of-action-based toxicity biomarkers, which can extend to preclinical and clinical phases of drug development. The current preclinical and clinical diagnostic panel used to monitor hepatocellular toxicity has been quite reliable except in cases of human drug-induced liver injury, which occurs with a low incidence and prevalence, ranging from 1 in 10,000 to 1 in 100,000. This is also referred to as idiosyncratic hepatotoxicity, meaning that the adverse events are a consequence of individual and unpredictable patient responses. It is well recognized that there is an urgent need to identify and develop a more successful prediction model coupled with diagnostic and prognostic biomarkers for drug-induced idiosyncratic liver injury in humans. This type of human liver toxicity is multifactorial, including individual-specific responses, and because of this, it is highly unlikely that any preclinical and/or clinical studies can be powered appropriately to unmask this risk. Furthermore, it is difficult to assess the value of our preclinical toxicology studies and their relevance to predicting drug-induced idiosyncratic liver injury in humans, because we lack understanding of the mode of action and pathophysiology of this unique type of toxicity. However, there is extensive literature related to the predictability of hepatotoxicity, and there is good concordance between in vitro and in vivo assessments.
Several different strategies have been employed to detect various forms of hepatotoxicity, including necrosis, apoptosis, and phospholipidosis. The in vitro hepatotoxicity predictive strategy uses hepatic microsomes; immortalized cell lines; primary hepatocyte cell cultures from humans, rodents, and nonrodents; and liver slices. The latter, in particular, enables a better approximation and evaluation of metabolites that may be formed, an assessment of their hepatotoxic potential, and in vitro–in vivo comparisons. For example, using this approach, studies were done to characterize the metabolic profile of troglitazone, an approved drug that has since been withdrawn from the market because of hepatotoxicity and hepatic failure requiring liver transplants. Data from a series of studies using rat and human hepatic microsomes as well as in vivo rat studies suggested that troglitazone hepatotoxicity was caused by several reactive intermediates covalently binding to hepatic proteins that may undergo redox cycling and induce oxidative stress, causing cell damage. This hypothesis was strongly supported by data suggesting that not only is troglitazone an inducer of P450 3A, but this enzyme is also responsible for metabolism of the thiazolidinedione ring, a key structural element of many of the peroxisome proliferator–activated receptor (PPAR) agonists.
Taken together, troglitazone acts as an inducer of enzymes that catalyze its biotransformation to chemically reactive intermediates; this autoinduction of its own metabolism is ultimately detrimental to the cells. Additionally, this combined approach was able to show that these intermediates could be formed in humans, thus identifying a potential etiology for troglitazone-induced hepatotoxicity. Utilization of liver slice technology, co-cultures, and whole liver perfusion can be quite useful, but these are low-throughput assays that are not always suitable for screening. The advantage is that toxicity to the biliary and other hepatic cellular constituents can be assessed. Prediction of hepatotoxic potential using in vitro cell culture systems can also employ the well-recognized preclinical and clinical biomarkers of hepatocellular damage, such as ALT (alanine aminotransferase), AST (aspartate aminotransferase), GLDH (glutamate dehydrogenase), MDH (malate dehydrogenase), PNP (purine nucleoside phosphorylase), and PON-1 (paraoxonase-1). Other enzymes, such as ALP (alkaline phosphatase), GGT (γ-glutamyl transferase), and 5′-nucleotidase (5′NT), can also be used when toxicity to the biliary tree is suspected. Cytotoxicity (necrosis and apoptosis) in vitro is the primary endpoint, and as such, biomarkers for clinical monitoring of potential hepatotoxicity should include necrosis as well as apoptosis. Recent evidence suggests that this may be possible by measuring the soluble pool of cytokeratin 18 (CK18) and the caspase-cleaved CK18 fragments in conjunction with the traditional markers. The in vitro surrogate system can also provide valuable information on the suitability of the appropriate biomarker and on potential mechanisms of action related to toxicity, such as mitochondrial dysfunction, metabolic pathways, and/or CYP450 induction. In summary, in vitro hepatocyte cell culture systems and tissues derived from in vivo studies can identify the appropriate biomarker for monitoring preclinical and clinical hepatotoxicity, and these data can also contribute to hypothesis-driven mode-of-action investigative toxicity studies. This will include molecular profiling, metabonomics, transcriptomics, and proteomics, all of which are useful tools that could aid in identification of novel biomarkers of hepatotoxicity.

Phospholipidosis

Phospholipidosis (PLD) is characterized by concentric-layered multilamellar intracellular lysosomal inclusion bodies that are often composed of complex phospholipids, parent drug, and/or metabolites. Accumulation will often occur in several different cell types of the hepatobiliary, immune, and nervous systems. Typically, hepatocytes, biliary epithelial cells, macrophages of lymph nodes and pulmonary alveoli, and ganglia and nonganglia neuronal cell bodies of the central nervous system can be affected. The evidence to date suggests that phospholipidosis is a structure-related toxicity of cationic amphiphilic compounds irrespective of pharmacologic action. The finding of
phospholipidosis may also be associated with inflammation, severe organ damage, and possibly impairment of immune function. Although drugs are marketed that cause phospholipidosis preclinically and clinically, this is an undesirable profile for potential new candidate drugs, and as such, this liability should be identified and avoided early in the drug discovery process. Therefore, high-throughput in vitro predictive screens can add value, particularly if phospholipidosis potency can be ranked and supported by in vivo data. Several methodologies have been evaluated to assess neutral lipid and phospholipid content as an index of phospholipidosis in cells growing in culture. However, accumulation of the fluorescent-tagged phospholipid NBD-PE as a result of cytotoxicity induces false-positive results, particularly at high concentrations. Recently, a high-throughput, validated, predictive, sensitive, and selective multichannel fluorescence-based in vitro PLD assay was developed to reduce this false-positive limitation caused by cytotoxicity. This assay uses I-13.35 adherent mouse spleen macrophages cultured in 96-well plates with fluorescent-tagged phospholipids. Cells with an intact nucleus were differentiated from dead cells using ethidium staining and gating that rejects dead cells. Using this improved technique, 26 of 28 known phospholipidogenic compounds were correctly identified. These findings aided application of this methodology to other techniques, such as flow cytometry, which may be used in preclinical toxicology studies and clinical trials. For example, flow cytometric analysis coupled with Nile Red staining was used to detect neutral lipids and phospholipids in a monocyte cell line, U937. This methodology was also applied in in vivo toxicology studies, raising the possibility that preclinical toxicology and clinical assessment of phospholipidosis could be done using peripheral blood cells and flow cytometry.
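The 26-of-28 detection rate quoted above corresponds to a screen sensitivity of about 93%, and the uncertainty attached to such a small validation set is worth keeping in mind. The Python sketch below is our illustration, not part of the original study: it computes the point estimate and a 95% Wilson score interval.

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half_width, center + half_width

# The multichannel assay identified 26 of 28 known phospholipidogenic compounds.
sensitivity = 26 / 28
low, high = wilson_interval(26, 28)
print(f"sensitivity = {sensitivity:.1%} (95% CI {low:.1%} to {high:.1%})")
# ~92.9%, with an interval of roughly 77% to 98%
```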
CONCLUSIONS

Overall, there is a compelling need for in vitro toxicity biomarkers for clinical endpoints. Toxicity biomarkers in the in vitro context are defined as quantitative measurable characteristics that serve as indicators of a pathologic process or related biochemical or molecular events. Conceptually, bridging biomarkers of toxicity should include not only the traditional parameters of biofluid and physiological measurements, but also measurable endpoints that could serve as indicators of potential adverse events preclinically and clinically even though they have been derived from in vitro, ex vivo, and in vivo studies. The use and application of this combination will undoubtedly improve the toxicologist's ability to identify human hazards so that the appropriate risk assessment and management strategy can be developed. In developing in vitro assays to identify biomarkers with potential clinical application and utility, a clear understanding and determination of what that biomarker will assess must be defined. For example, exaggerated pharmacologic action of a molecular target may be associated with an undesirable effect resulting in toxicity. In
such cases, single or multiple markers and/or assays may be required for use in screens during the early drug discovery phase, with continued assessment in preclinical and clinical development. If successful, these data can be invaluable in deriving the therapeutic index of a drug. In the case of a specific target organ toxicity, the ultimate goal is to identify biomarkers of toxicity that can be used, in a high-throughput manner, as a general screen reflecting cellular damage regardless of mode of action. Therefore, it is imperative that in vitro assays and the appropriate platforms be developed to identify relevant toxicity biomarkers that will be useful during preclinical and clinical development.
PART VI BIOMARKERS IN CLINICAL TRIALS
23

OPPORTUNITIES AND PITFALLS ASSOCIATED WITH EARLY UTILIZATION OF BIOMARKERS: CASE STUDY IN ANTICOAGULANT DEVELOPMENT

Kay A. Criswell, Ph.D.
Pfizer Global Research and Development, Groton, Connecticut
INTRODUCTION

The cost of developing new drugs continues to rise and demands an ever-increasing percentage of total research and development (R&D) expenditures [1]. Recent studies have shown that there is less than a 10% success rate between the first human trials and the launch of a new product [1]. When coupled with an attrition rate of over 70% in phase II and the loss of nearly a third of all new compounds in phase III trials [1–3], the need for reliable, early decision-making capability is evident. All pharmaceutical companies are focused on ways to reduce the cost of discovering and developing new drugs. In general, three areas have received the greatest attention: (1) improving target validation, (2) selecting candidates with a greater chance of success, and (3) identifying those compounds that will fail earlier in their development. Inadequate efficacy and poor safety margins are the two main reasons for compound attrition. Therefore, a strategy for developing safety
and efficacy biomarkers is key to early prediction of compound success or failure. Utilization of biomarkers in preclinical testing and ex vivo human testing may provide valuable information for compound progression and success in the clinic, but it is not without problems.
The study of markers of disease processes is not new. It has probably existed for centuries, as evidenced by such obsolete practices as tasting urine to determine the diabetic status of a patient. Utilization of biomarkers for disease diagnosis and prognosis has escalated dramatically during this century with the improved diagnostic power, reliability, and speed of routine clinical pathology, biochemical, and genomic data. However, the use of biomarkers to study the specific effects of a drug on marker activity has a more recent history. One of the earliest documented cases may be the early trials of lamotrigine, which effectively utilized the electroencephalograph (EEG) as a biomarker and demonstrated decreased epileptiform activity [4]. Twenty years after the lamotrigine study, the urgency for relevant and early biomarkers of safety and efficacy during drug development has been recognized, and it is not uncommon for each new drug development program to be accompanied by a biomarker request. Not only has the frequency of biomarker development exploded, but the timing of biomarker utilization has been pushed earlier in development. There is an ever-increasing demand to test and screen potential candidates before good laboratory practices (GLP) studies and commitment to human trials.
Two problems inherently complicate the linkage of early biomarkers to clinical outcomes: (1) truly understanding the disease process and the therapeutic intervention process, and (2) species-to-species differences that may alter the translation and application of the biomarker. Understanding the disease process appears obvious, but as already noted, understanding the target is still a key area for decreasing attrition. Regardless of the expanding body of knowledge surrounding disease processes, infectious diseases may be the lone area where the disease process is fully understood. Successful therapeutic intervention and drug sensitivity can be predicted accurately with well-characterized biomarkers of causal organism growth and survival [5]. For biomarkers to be truly effective in drug development, they need to reflect a highly specific biochemical or molecular change that occurs in the disease process, is altered by the therapeutic intervention, and occurs prior to any downstream effects on clinical endpoints. This is a lofty goal, as most disease processes affect multiple pathways, and feedback mechanisms further complicate the overall biology. The further impact of nontranslatable, species-specific characteristics makes the task of providing biomarkers early in development daunting. There are, however, diseases and biological pathways for which the essential components are fairly broadly agreed upon. Additionally, certain areas of drug development target activation or inhibition of a highly specific molecule or receptor within those pathways. These conditions provide a unique opportunity for biomarker success when biomarker work is conducted early in the drug discovery process.
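The attrition figures above compound multiplicatively across phases. As a back-of-the-envelope illustration (the per-phase survival rates below are rounded from the ranges cited, and the phase I figure is our assumption, not a number from the text), a short Python calculation reproduces an overall success rate of roughly one in ten:

```python
# Approximate per-phase survival probabilities: phase I is an assumed 60%,
# phase II reflects the >70% attrition cited above, and phase III reflects
# the loss of nearly a third of compounds.
phase_survival = {"phase I": 0.60, "phase II": 0.30, "phase III": 0.67}

cumulative = 1.0
for phase, survival in phase_survival.items():
    cumulative *= survival
    print(f"surviving through {phase}: {cumulative:.1%}")
# ~12% survive all three phases, consistent with the <10% first-in-human-to-
# launch figure once approval-stage failures are included.
```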
The coagulation pathway is an example of a well-characterized model, and specific inhibitors of this pathway are being explored actively as new candidates for anticoagulant therapy. One well-documented area is the utilization, and the challenges, of early implementation of biomarkers to predict the clinical outcomes of factor Xa (FXa) inhibitor compounds. Venous and arterial thromboembolic disorders have a substantial impact on human health and morbidity [6]. Although these conditions have been treated for many years with the vitamin K antagonist warfarin (coumadin) or with intravenous or subcutaneous heparin administration, the opportunity to provide therapeutic intervention with a safer profile, fewer side effects, less intra- and intersubject variability, a better route of administration, and less need for continuous coagulation monitoring is attractive. Novel coagulation therapeutics have generated tremendous interest due to the medical need and the opportunity to improve on current treatment strategies. FXa occupies a pivotal position within the coagulation cascade and is a highly attractive target for novel coagulation intervention. This enzyme links the intrinsic and extrinsic coagulation pathways and catalyzes the rate-limiting step in thrombin formation [7]. Changes within the intrinsic coagulation pathway are routinely monitored by a clotting assay called the activated partial thromboplastin time (aPTT), whereas changes in the extrinsic pathway are assessed by increases or decreases in another clotting assay called the prothrombin time (PT) [7] (Figure 1). Both assays are assessed spectrophotometrically or mechanically and are expressed
Figure 1 Intrinsic and extrinsic coagulation pathways.
as the time to clot formation after addition of an exogenous clot-activating agent. The aPTT is used routinely to monitor the safety and level of anticoagulation of heparin therapy, and coumadin is monitored via the PT assay. Safety profiles for these compounds have been established over years of use, so that fold increases over the predose aPTT or PT can be used to distinguish therapeutically effective doses of these drugs from levels associated with inadequate or excessive anticoagulation. Additionally, aPTT and PT are routine and highly standardized clinical assays. Testing reagents are standardized, and instrumentation has undergone rigorous scrutiny to pass U.S. Food and Drug Administration (FDA) requirements because of its routine use in clinical diagnostics. The reliability of aPTT and PT as biomarkers of anticoagulant safety is also fairly well established. Since activation of factor X is required for completion of both the intrinsic and extrinsic pathways, it appears logical that inhibition of factor X activation should prolong both PT and aPTT. It would therefore be anticipated that novel anticoagulants such as FXa inhibitors could readily be assessed early with the aPTT and PT assays. Furthermore, if the new compound does not require metabolic activation, ex vivo incubation of human plasma with the compound of interest followed by aPTT and PT assessment may provide a reliable method to rapidly assess efficacy.
Despite their reliability and acceptability, PT and aPTT do not always reflect the anticoagulant activity of novel compounds. Rivaroxaban is an oral direct inhibitor of activated factor X [8]. In a rat venous model of the inferior vena cava, rivaroxaban produced dose-dependent inhibition of FXa and an increase in PT [7]. An inhibition of 32% was associated with a 1.8-fold increase in PT, and nearly 100% inhibition produced a 3.2-fold increase in PT. However, in a rabbit model 92% inhibition of FX was associated with only a modest 1.2-fold increase in PT, demonstrating that species-specific sensitivity is one of the problems associated with monitoring the newer anticoagulants with standard coagulation parameters [8]. Even coumadin provides a good example of species-specific effects, being a useful therapeutic for humans but a lethal pest-control agent for rodents [9]. Despite this challenge, utilization of coagulation testing for in vivo and ex vivo screening is well documented in the development of FXa inhibitors [8,10–12].
Rarely is a single biomarker considered definitive for evaluation of safety or efficacy. The drug development approach for otamixaban and DU-176b incorporated a series of clotting parameter assays together with assays to measure effects on thrombus formation [10–12]. Although clotting parameters would be the preferred biomarker, given the ability to monitor in plasma, the use of fairly simple but reproducible assays, the cost, and the ability to monitor a clinical population easily, it has not been established that clotting assays and thrombus formation assays are interchangeable. For otamixaban, in vitro coagulation parameters were assessed for their ability to produce a doubling of PT and aPTT. This testing allowed a rank ordering of anticoagulant effects by species of rabbit > human > monkey >
rat > dog [10]. Additionally, aPTT appeared to be the more sensitive biomarker in all species, with aPTT doubling occurring at drug concentrations less than half those required for PT doubling. Multiple pharmacological models of thrombosis in rats, dogs, and pigs were also conducted with otamixaban. In rats, thrombus mass was reduced by nearly 95%, with a corresponding increase in aPTT of 2.5-fold and in PT of 1.6-fold [10]. In contrast, intravenous administration of 1, 5, or 15 μg/mL otamixaban in the pig model effectively eliminated the coronary flow reductions related to this stenosis model at the middle and high doses. PT was also prolonged at the middle and high doses, but aPTT was prolonged only at the high dose. Although pigs were not listed as assessed in the species-specificity model, these results suggest that the clotting parameter of choice may vary by species and may not correlate well with thrombosis assays. Furthermore, clinical trial outcomes showed that at the anticipated antithrombotic and therapeutic concentration of 100 ng/mL otamixaban, neither PT nor aPTT changed appreciably. In contrast, alternative clotting parameters, such as the HepTest clotting time and the Russell viper venom clotting time, showed substantial prolongation, again suggesting that alternatives to standard PT and aPTT may be preferable [10].
Further work with the oral FXa inhibitor DU-176b provides additional evidence that selection of the right biomarker and appropriate correlation to functional assays are critical. This study was conducted in 12 healthy male volunteers [12]. The antithrombotic effect of DU-176b was assessed by measuring the difference in size of acutely formed, platelet-rich thrombus pre- and postdrug administration, using a Badimon perfusion chamber model under low and high shear force. Subjects received a single 60-mg dose of DU-176b, and pharmacokinetic and pharmacodynamic assessments were conducted at 1.5, 5, and 12 hours postdosing. Pharmacodynamic assessments included PT, international normalized ratio (INR), aPTT, thrombin generation, and anti-factor Xa activity. Drug levels were also assessed. Badimon chamber results demonstrated a strong antithrombotic effect at 1.5 hours with a progressive return toward baseline by 12 hours. All of the pharmacodynamic endpoints showed significant change from pretreatment, suggesting that any of the parameters might be an effective biomarker of DU-176b safety and/or efficacy.
However, a close statistical look at these data raises some questions. A comparison of drug concentration to anti-factor Xa activity and clotting parameters showed the strongest correlation with anti-factor Xa activity (r2 = 0.85), similar correlations with PT and INR (r2 = 0.795 and 0.78, respectively), but a fairly weak correlation with aPTT (r2 = 0.40). This suggests that although otamixaban and DU-176b are both FXa inhibitors, arbitrary selection of PT or aPTT as a better predictor of drug concentration is problematic. Furthermore, when the antithrombotic effects of DU-176b assessed by Badimon chamber were compared to those obtained by clotting parameters, the correlation was even more challenging. Prothrombin time showed a correlation of r2 = 0.51 at both high and low shear, and the correlation with aPTT was only r2 = 0.39 and 0.24 [12]. This suggests that although aPTT is used for monitoring of heparin therapy and PT
is utilized for clinical safety monitoring of coumadin, their routine use, by themselves, for monitoring factor Xa inhibitors is insufficient.
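The r2 values quoted for DU-176b are squared Pearson correlation coefficients between measured drug concentration and each candidate readout. A minimal Python sketch of that comparison follows; the paired data are invented for illustration and do not come from the DU-176b study.

```python
def r_squared(xs, ys):
    """Squared Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    syy = sum((y - mean_y) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy)

# Hypothetical paired observations: plasma drug concentration (ng/mL)
# versus two candidate clotting readouts (seconds).
conc = [0, 25, 50, 100, 200, 400]
pt   = [12.0, 13.1, 14.5, 17.2, 22.8, 35.0]  # tracks concentration closely
aptt = [30.0, 31.5, 30.8, 34.0, 33.2, 38.5]  # noisier readout
print(f"PT   vs concentration: r2 = {r_squared(conc, pt):.2f}")
print(f"aPTT vs concentration: r2 = {r_squared(conc, aptt):.2f}")
```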
CASE STUDY DATA WITH A DEVELOPMENTAL FXa INHIBITOR

Beyond the published literature, data and personal observations collected during the development of another FXa inhibitor at Pfizer Global Research & Development are now provided to complete this case study approach to biomarker utilization during anticoagulant development. Development of this particular FXa inhibitor ultimately was discontinued, as the compound required intravenous administration and as such lacked marketability compared to oral FXa inhibitors. However, the lessons learned provide further documentation of the species-specific and interpretational complications that arise when accepted coagulation biomarkers are used to monitor the efficacy and safety of FXa inhibitors.

Dose Selection for Ex Vivo Experiments

In designing ex vivo experiments to evaluate potential biomarkers, selection of the appropriate drug concentration is critical. Furthermore, when experiments are conducted in multiple species, selection of the same drug concentration for all species is typically not ideal, due to species-specific drug sensitivity. Factor X concentrations vary by species, and the level of FXa inhibition is also variable. Therefore, the concentrations of FXa inhibitor utilized in this particular ex vivo evaluation were selected to achieve a range of FXa inhibition that was modest to nearly complete in all species examined. Pharmacology studies predicted that this FXa inhibitor would result in species-specific factor Xa sensitivity in the order human > dog > rat. Interestingly, this species specificity was not identical to that observed with otamixaban [10], demonstrating that extrapolating biomarker data even between compounds in the same class may be misleading. For this developmental FXa inhibitor, human plasma was spiked to obtain final drug concentrations of 0, 0.2, 0.6, 1.2, 1.8, and 6.0 μg/mL. Drug concentrations of 0, 0.4, 2.0, 8.0, and 15.0 μg/mL were selected for dog assessments, and 0, 1.0, 4.0, 12.0, and 24.0 μg/mL were used for ex vivo assessments in rats, to achieve a range of FXa inhibition comparable to that observed in human samples.
Thromboplastin is the reagent that induces clot formation in the PT assay. There is ample documentation that the type and sensitivity of the thromboplastin is a critical factor in the effective and safe monitoring of coumadin administration [13–17]. To minimize this variability in PT assays, a calibration system was adopted by the World Health Organization (WHO) in 1982. This system converts the PT ratio observed with any thromboplastin into an international normalized ratio (INR). This value is calculated as follows: INR = (observed PT ratio)^c, where the PT ratio is subject PT/control PT and c is the power value
representing the International Sensitivity Index (ISI) of the particular thromboplastin [18]. This system has proven to be an effective means of monitoring human oral anticoagulant therapy with coumadin and has been implemented almost universally. It allows individuals to be monitored at multiple clinics using varying reagents and instrumentation, while still achieving an accurate assessment of true anticoagulation. However, there is little or no information regarding selection of thromboplastin reagents or use of the INR for monitoring of FXa inhibitors. Typically, the higher the ISI value, the less sensitive the reagent, and the longer the PT produced. The most commonly used thromboplastin reagents for PT evaluation are either rabbit brain thromboplastin (of variable ISI values, depending on manufacturer and product) or human recombinant thromboplastin, typically with an ISI of approximately 1.0. Use of the INR is accepted as a more relevant biomarker of anticoagulant efficacy than absolute increases in PT alone, at least for coumadin therapy [13]. To more fully evaluate the effect of this FXa inhibitor on INR, PT was evaluated using rabbit brain thromboplastins with ISI values of 1.24, 1.55, and 2.21 and a human recombinant thromboplastin (0.98 ISI). Although both human recombinant thromboplastin and rabbit thromboplastin are considered acceptable reagents for the conduct of PT testing, it was unclear whether these reagents would produce similar results in the presence of an FXa inhibitor, or whether the sensitivity of the thromboplastin itself would affect results.

Effect on Absolute Prothrombin Time

Prothrombin time data obtained using rabbit brain thromboplastins with the three increasing ISI values during these ex vivo studies are presented in Table 1. The source and sensitivity of the thromboplastin used in the assay affected the absolute PT value in all species, clearly demonstrating the need to standardize this reagent in preclinical assessment and to be cognizant of this impact in clinical trials or postmarketing, when reagents are less likely to be standardized. As anticipated, addition of the FXa inhibitor to plasma under ex vivo conditions increased the PT in a dose-dependent manner. This increase in PT was observed regardless of the ISI value (sensitivity) of the thromboplastin used and occurred in all species (Table 1). Although the absolute time to clot formation generally increased with increasing ISI, this was not true for all assessments. Table 2 summarizes the maximum change in PT and the range of variability when rabbit brain thromboplastins of varying ISI values were compared to human recombinant thromboplastin. Again, in general, the higher the PT value, the larger the deviation between reagent types. For example, although there was a 2.1-second difference between human and rabbit thromboplastin in untreated human plasma, the difference increased to 5.5, 9.2, 12.0, 13.6, and 38.2 seconds in samples containing 0.2, 0.6, 1.2, 1.8, or 6.0 μg/mL FXa inhibitor, respectively.
TABLE 1 In Vitro Effect of an Experimental Factor Xa Inhibitor on Absolute Prothrombin Time Using a Rabbit Brain Thromboplastin

Concentration of                International Sensitivity Index(a)
Factor Xa Inhibitor
(μg/mL)              0.98            1.24            1.55            2.21

PT (s)—Human Plasma
0                11.4 ± 0.11     13.4 ± 0.17*    12.9 ± 0.13*    10.9 ± 0.14
0.2              18.3 ± 0.37     23.8 ± 0.50*    23.4 ± 0.44*    16.8 ± 0.43*
0.6              30.8 ± 0.76     37.3 ± 0.86*    39.9 ± 0.70*    27.0 ± 0.93*
1.2              47.0 ± 1.98     51.4 ± 1.51*    58.9 ± 1.08*    38.2 ± 1.39*
1.8              61.2 ± 2.04     61.8 ± 1.58     74.8 ± 1.54*    48.3 ± 2.08*
6.0             133.0 ± 3.82    111.4 ± 3.17*   152.3 ± 3.08*    94.8 ± 4.12*

PT (s)—Dog Plasma
0                 7.8 ± 0.14      8.4 ± 0.07*     7.1 ± 0.07      6.7 ± 0.06*
0.4              11.0 ± 0.26     11.0 ± 0.13     10.5 ± 0.19      9.2 ± 0.15*
2.0              17.6 ± 0.42     16.6 ± 0.23*    17.9 ± 0.41     15.0 ± 0.31*
8.0              32.0 ± 0.98     27.6 ± 0.49*    33.6 ± 0.92     27.1 ± 0.63*
15.0             45.4 ± 1.52     36.5 ± 0.73*    46.7 ± 1.35     36.8 ± 0.88*

PT (s)—Rat Plasma
0                 9.1 ± 0.05     15.1 ± 0.07*    17.1 ± 0.15*    13.1 ± 0.07*
1.0              13.3 ± 0.06     20.7 ± 0.13*    31.3 ± 0.27*    23.2 ± 0.19*
4.0              20.1 ± 0.29     30.8 ± 0.23*    51.8 ± 0.52*    36.9 ± 0.52*
12.0             31.1 ± 0.72     46.2 ± 0.49*    83.6 ± 0.84*    56.2 ± 0.84*
24.0             42.7 ± 1.09     60.6 ± 0.73*   109.4 ± 0.79*    75.4 ± 1.60*

(a) Mean value ± SEM for 10 individual subjects.
*Significantly different from the 0.98 ISI thromboplastin mean at the 5% level by t-test, separately by increasing ISI value for individual rabbit thromboplastins.
Dogs were much less sensitive to the type of thromboplastin used and showed smaller maximum changes in PT values. In contrast, rat PT values were highly dependent on the source of thromboplastin, and samples tested with rabbit brain thromboplastin showed markedly longer times than with human recombinant thromboplastin. Rats showed this high level of thromboplastin dependence even in untreated control samples. Variability of PT in FXa inhibitor-treated human and dog plasma was similar to that observed in controls and did not change appreciably with increasing concentration of drug (Table 2). FXa inhibitor-treated rat plasma showed an approximately twofold increase in variability compared to control.

Effect on PT/Control Ratio and INR

Generating a PT/control ratio, by dividing the number of absolute seconds in the treated sample by the number in the control (untreated) sample, provides a second method of assessing PT. If the ISI of the thromboplastin used is close to 1.0, the INR should be similar to the PT/control ratio (Table 3).
TABLE 2 Comparison of Human Recombinant Thromboplastin and Rabbit Brain Thromboplastin on Prothrombin Time in Plasma Samples Containing Increasing Concentrations of Factor Xa Inhibitor(a)

           Intended Drug
           Concentration      Maximum Change     Range of
Species    (μg/mL)            in PT(b)           Variability(c) (%)

Human      0                   2.1               −4 to +18
           0.20                5.5               −8 to +30
           0.60                9.2               −12 to +30
           1.20               12.0               −19 to +25
           1.80               13.6               −21 to +22
           6.00               38.2               −29 to +14
Dog        0                   1.1               −15 to +7
           0.40                1.8               −7 to −5
           2.00                2.5               −14 to +2
           8.00                4.9               −15 to +5
           15.00               8.9               −20 to +3
Rat        0                   8.0               +43 to +88
           1.00               18.0               +56 to +136
           4.00               31.7               +53 to +158
           12.00              52.5               +48 to +168
           24.00              66.7               +42 to +156

(a) Samples spiked with a factor Xa inhibitor in vitro.
(b) Maximum change in prothrombin time compared to 0.98 ISI human recombinant thromboplastin.
(c) Variability of three increasing ISI levels of rabbit brain thromboplastin compared to human recombinant.
The PT/control ratio could be used effectively to normalize thromboplastin differences in untreated human, dog, or rat samples. At predicted efficacious concentrations of FXa inhibitor, the PT/control ratio effectively normalized reagent differences. However, at high concentrations of FXa inhibitor, particularly in the rat, this method lacked the ability to normalize results effectively.
Table 4 shows the corresponding INR values obtained in human, dog, and rat plasma when assessed with rabbit brain thromboplastins of increasing ISI. As anticipated, the PT/control ratio and INR were similar when the ISI was approximately 1. In contrast to the modest differences in PT when expressed as either absolute seconds or as a ratio compared to the control value, the INR showed dramatic increases (Table 4). The magnitude of the INR value rose consistently with increasing ISI value and was marked. At the highest dose tested, the INR ranged from 11.1 with the 0.98 ISI reagent to 121.9 with the 2.21 ISI reagent in human samples, 5.6 to 43.4 in dogs, and 4.6 to 48.1 in rats. Assessment of PT in human, dog, or rat plasma containing this developmental FXa inhibitor was affected by the ISI of the thromboplastin selected for the
TABLE 3 In Vitro Effect of an Experimental Factor Xa Inhibitor on Prothrombin Time/Control Ratio Using a Rabbit Brain Thromboplastin

Concentration of                International Sensitivity Index(a)
Factor Xa Inhibitor
(μg/mL)              0.98            1.24            1.55            2.21

PT/Control Ratio (:1)—Human Plasma
0                 1.0 ± 0.00      1.0 ± 0.00      1.0 ± 0.00      1.0 ± 0.00
0.2               1.6 ± 0.02      1.8 ± 0.12      1.8 ± 0.21      1.5 ± 0.02
0.6               2.7 ± 0.05      2.8 ± 0.04      2.9 ± 0.03      2.5 ± 0.06
1.2               4.1 ± 0.15      3.8 ± 0.08      4.6 ± 0.06*     3.5 ± 0.09
1.8               5.4 ± 0.15      5.6 ± 0.07      5.8 ± 0.08      5.4 ± 0.15
6.0              11.7 ± 0.26      8.3 ± 0.17*    11.9 ± 0.16      8.7 ± 0.29*

PT/Control Ratio (:1)—Dog Plasma
0                 1.0 ± 0.00      1.0 ± 0.00      1.0 ± 0.00      1.0 ± 0.00
0.4               1.4 ± 0.01      1.3 ± 0.01      1.5 ± 0.02      1.4 ± 0.02
2.0               2.2 ± 0.02      2.0 ± 0.02      2.3 ± 0.04      2.3 ± 0.04
8.0               4.1 ± 0.07      3.3 ± 0.05*     4.3 ± 0.10      4.1 ± 0.08
15.0              5.8 ± 0.14      4.4 ± 0.08*     6.5 ± 0.16*     5.5 ± 0.11

PT/Control Ratio (:1)—Rat Plasma
0                 1.0 ± 0.00      1.0 ± 0.00      1.0 ± 0.00      1.0 ± 0.00
1.0               1.5 ± 0.02      1.4 ± 0.02      1.6 ± 0.02      1.6 ± 0.01
4.0               2.2 ± 0.03      2.0 ± 0.04      3.0 ± 0.02*     2.8 ± 0.03*
12.0              3.4 ± 0.08      3.1 ± 0.07      4.9 ± 0.06*     4.3 ± 0.05*
24.0              5.5 ± 0.17      7.3 ± 0.24*    15.3 ± 0.21*    11.3 ± 0.28*

(a) Mean value ± SEM for 10 individual subjects.
*Significantly different from the 0.98 ISI thromboplastin mean at the 5% level by t-test, separately by increasing ISI value for individual rabbit thromboplastins.
CASE STUDY DATA WITH A DEVELOPMENTAL FXa INHIBITOR
455
TABLE 4 In Vitro Effect of an Experimental Factor Xa Inhibitor on International Normalized Ratio Using a Rabbit Brain Thromboplastin

Concentration of                International Sensitivity Index(a)
Factor Xa Inhibitor
(μg/mL)              0.98            1.24            1.55            2.21

International Normalized Ratio (:1)—Human Plasma
0                 1.0 ± 0.10      1.0 ± 0.02      1.0 ± 0.02      1.0 ± 0.03
0.2               1.6 ± 0.04      2.0 ± 0.06*     2.6 ± 0.07*     2.6 ± 0.16*
0.6               2.7 ± 0.07      3.6 ± 0.10*     5.7 ± 0.16*     7.5 ± 0.57*
1.2               4.0 ± 0.16      5.3 ± 0.19*    10.5 ± 0.29*    16.2 ± 1.31*
1.8               5.2 ± 0.17      6.7 ± 0.21*    15.3 ± 0.47*    27.4 ± 2.61*
6.0              11.1 ± 0.31     13.8 ± 0.49*    46.0 ± 1.41*   121.9 ± 11.54*

International Normalized Ratio (:1)—Dog Plasma
0                 1.0 ± 0.03      1.0 ± 0.01      0.9 ± 0.02      1.0 ± 0.03
0.4               1.4 ± 0.04      1.4 ± 0.01      1.7 ± 0.05*     2.0 ± 0.08*
2.0               2.2 ± 0.05      2.3 ± 0.04      3.8 ± 0.14*     6.0 ± 0.28*
8.0               4.0 ± 0.12      4.4 ± 0.10*    10.1 ± 0.43*    22.1 ± 1.19*
15.0              5.6 ± 0.18      6.2 ± 0.16*    16.7 ± 0.74*    43.4 ± 2.31*

International Normalized Ratio (:1)—Rat Plasma
0                 1.0 ± 0.03      1.0 ± 0.00      1.0 ± 0.02      1.0 ± 0.01
1.0               1.5 ± 0.02      1.5 ± 0.02      2.6 ± 0.03*     3.5 ± 0.07*
4.0               2.2 ± 0.03      2.4 ± 0.06*     5.6 ± 0.11*     9.9 ± 0.31*
12.0              3.3 ± 0.07      4.0 ± 0.13*    11.7 ± 0.34*    25.1 ± 0.82*
24.0              4.6 ± 0.12      5.6 ± 0.22*    17.8 ± 0.20*    48.1 ± 2.27*

(a) Mean value ± SEM for 10 individual subjects.
*Significantly different from the 0.98 ISI thromboplastin mean at the 5% level by t-test, separately by increasing ISI value for individual rabbit thromboplastins.
closely approximate the PT/control ratio and give a true estimate of the anticoagulated state. Table 5 indicates the maximum change in PT/control ratio and INR using thromboplastins with increasing ISI values (1.24 to 2.21). Changes in the PT/control ratio were modest at drug concentrations that produced increases of fourfold or less, the maximum targeted therapeutic PT value for clinical trials. The mean PT/control ratio in human samples increased maximally from 2.7 to 3.1 at twice the therapeutic dose (0.6 μg/mL). Absolute PT and PT ratios compared to baseline values were only modestly different using thromboplastin from various manufacturers, sources (human recombinant versus rabbit), and ISI. This finding indicates that absolute PT or PT/control ratio were more effective biomarkers of FXa inhibitor concentration than was INR.
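The INR inflation just described follows directly from the WHO formula, INR = (observed PT ratio)^ISI. The Python sketch below is our illustration, using the measured PT ratios for human plasma at 6.0 μg/mL from Table 3; it reproduces the Table 4 pattern, in which raising a similar PT ratio to progressively larger ISI powers manufactures an enormous INR without any change in the underlying anticoagulation.

```python
def inr(pt_ratio, isi):
    """WHO calibration: INR = (observed PT ratio) ** ISI."""
    return pt_ratio ** isi

# PT ratios measured in human plasma at 6.0 ug/mL of the FXa inhibitor,
# paired with the ISI of the thromboplastin used (Table 3).
measurements = [(0.98, 11.7), (1.24, 8.3), (1.55, 11.9), (2.21, 8.7)]
for isi, ratio in measurements:
    print(f"ISI {isi:4}: PT ratio {ratio:5} -> INR {inr(ratio, isi):6.1f}")
# The computed INRs (~11.1, 13.8, 46.5, 119) track Table 4 closely; with the
# 2.21-ISI reagent, a PT ratio of 8.7 becomes an INR above 100.
```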
TABLE 5 Comparison of PT/Control Ratio and International Normalized Ratio in Plasma Samples Containing Increasing Concentrations of Factor Xa Inhibitor(a)

           Intended Drug    PT/Control                 Maximum Change
           Concentration    Ratio          INR         in PT/Control   Maximum Change
Species    (μg/mL)          (0.98 ISI)     (0.98 ISI)  Ratio(b)        in INR(b)

Human      0                 1.00           0.99        0                0.02
           0.2               1.61           1.58        0.22             1.03
           0.6               2.70           2.65        0.38             4.88
           1.2               4.13           4.00        0.65            12.24
           1.8               5.37           5.19        0.97            22.21
           6.0              11.67          11.10        3.39           110.84
Dog        0                 1.00           1.00        0                0.08
           0.4               1.42           1.41        0.10             0.61
           2.0               2.24           2.22        0.27             3.77
           8.0               4.11           4.00        0.81            18.06
           15.0              5.82           5.60        1.46            37.76
Rat        0                 1.00           1.00        0                0.01
           1.0               1.45           1.45        0.38             2.08
           4.0               2.20           2.19        0.82             7.7
           12.0              3.42           3.34        1.48            21.74
           24.0              5.51           4.55        9.82            43.54

(a) Samples spiked with a factor Xa inhibitor in vitro.
(b) Results obtained by selecting the maximum result obtained with rabbit brain thromboplastin and subtracting from it the result obtained with human recombinant thromboplastin.
PURSUING BIOMARKERS BEYOND PT, INR, AND aPTT

Values obtained with aPTT under ex vivo conditions were less sensitive than PT to FXa inhibitor-induced elevations and often underestimated drug concentration (data not shown). Beyond PT, INR, and aPTT, the most commonly used assay to evaluate FXa inhibitors is probably the anti-factor Xa assay (anti-FXa). It seems logical that a parameter named the anti-factor Xa assay should be the ideal biomarker for an FXa inhibitor. Additionally, this assay is used routinely in clinical settings to monitor the safety of heparin, a substance that also inhibits FXa [20]. However, this assay is little more than a surrogate marker for drug concentration. A standard curve is prepared using the administered heparin (or other FXa inhibitor), and the chromogenic assay allows determination of the drug concentration in the plasma samples via residual FXa activity [21]. For heparin, the anti-FXa assay appears relevant. Years of use have allowed the development of a strong correlation between the number of international units of heparin determined via the assay and clinical safety. Reference ranges have been defined for the assay and provide a rapid
estimation of subtherapeutic, therapeutic, or supratherapeutic levels of heparin administration [22]. Still, variability in the anti-FXa assay has been reported and is attributable to a number of factors, including instrumentation, assay technique, specificity of the commercially available kits, heparin preparations used in generating the standard curve, and approaches to data fitting [21]. In contrast, this experience does not exist for anti-FXa values obtained during FXa inhibitor administration. Just as the PT and INR may not be as beneficial for predicting FXa inhibitor effects as they are for coumadin, it should not be assumed that the anti-FXa assays have equivalent predictivity for heparin and other FXa inhibitors. For the Pfizer developmental FXa inhibitor, the anti-FXa assay offered little more than the PT as a monitor of drug concentration. An additional assay, the factor X clotting (FX:C) assay, was also evaluated. This assay is conducted using genetically engineered factor-deficient plasma spiked with serial dilutions of purified human factor X [23,24]. Concentrations of factor X in plasma are then determined by extrapolation from the standard curve. Since factor X must be converted to factor Xa for clot formation to occur, a functional clotting assay for factor X can also be used to assess the effects of factor Xa inhibitors. The FX:C assay provides several features that may make it a valuable biomarker for monitoring factor Xa inhibitor therapy: (1) the assay provides a rapid, reliable assessment of drug concentration and the percent inhibition of FXa achieved during inhibitor administration; (2) the assay can be performed on a high-throughput automated platform that is available in most hospital-based coagulation laboratories; and (3) individual factor X concentrations range from 60 to 150% between subjects [25]. This fairly high level of baseline intersubject variability suggests that a standard dose of drug may have a substantially different impact on total factor X inhibition in different subjects. The FX:C assay defines baseline factor X activity and thereby allows continued dosing to achieve a targeted factor X concentration [4]. Literature is available concerning factor X concentrations and bleeding history in patients with either inherited or acquired factor X deficiency, so minimally there is some understanding that correlates reductions in FX:C values with bleeding potential [26–28]. By determining the actual concentration of functional factor X remaining, physicians may have increased confidence in the administration of factor Xa inhibitors. As with all the other coagulation biomarkers used for monitoring FXa inhibition, it was not immediately clear whether the FX:C assay was applicable in multiple species. Ex vivo experiments allowed this evaluation. To provide effective anticoagulant activity, a 30% reduction in FX:C activity was predicted to be the minimal requirement for this compound. The FXa inhibitor concentrations in the ex vivo experiments were therefore selected to bracket a range of factor X inhibition predicted to extend from approximately 30% to 100%. Table 6 shows the intended concentrations of this FXa inhibitor in each species, the resulting FX:C activity, and the percent inhibition achieved.
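As an illustration of the FX:C arithmetic just described, the following minimal sketch interpolates percent factor X activity from a measured clotting time and derives percent inhibition against a species-specific control, as in footnote c of Table 6. The calibration numbers are hypothetical (they are not the study's data), and a log-log linear relationship between clotting time and factor activity is assumed, which is a common but not universal calibration model.

import math

# Hypothetical calibration points: (factor X activity in %, clotting time in s).
# Clotting time shortens as activity rises; behavior is roughly log-log linear.
calibration = [(100.0, 35.0), (50.0, 45.0), (25.0, 58.0), (12.5, 75.0)]

# Fit log(activity) = intercept + slope * log(time) by least squares.
xs = [math.log(t) for _, t in calibration]
ys = [math.log(act) for act, _ in calibration]
n = len(calibration)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def fxc_activity(clotting_time_s):
    """Interpolate percent factor X activity from a measured clotting time (s)."""
    return math.exp(intercept + slope * math.log(clotting_time_s))

def percent_inhibition(sample_activity, control_activity):
    """Percent FXa inhibition relative to the species-specific control."""
    return 100.0 * (1.0 - sample_activity / control_activity)

# Example: a spiked rat sample clotting at 60 s, against the 84.8% rat control.
activity = fxc_activity(60.0)
print("FX:C activity: %.1f%%, inhibition: %.1f%%"
      % (activity, percent_inhibition(activity, 84.8)))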
TABLE 6  Factor X Activity and Percent Inhibition in Plasma Samples Containing Increasing Concentrations of a Factor Xa Inhibitor(a)

Species   Intended Drug Concentration (μg/mL)   FX:C Activity(b) (%)   Percent Inhibition(c)
Human     0                                     106.1 ± 1.9            NA
          0.2                                   64.3 ± 1.6             39.4
          0.6                                   32.2 ± 1.0             69.7
          1.2                                   16.5 ± 0.7             84.4
          1.8                                   10.0 ± 0.5             90.6
          6.0                                   2.3 ± 0.2              97.8
Dog       0                                     143.0 ± 4.5            NA
          0.4                                   112.9 ± 8.6            21.0
          2.0                                   42.4 ± 4.3             70.3
          8.0                                   11.1 ± 1.4             92.2
          15.0                                  5.4 ± 0.8              96.2
Rat       0                                     84.8 ± 2.8             NA
          1.0                                   52.6 ± 1.8             38.0
          4.0                                   26.2 ± 0.9             69.1
          12.0                                  11.8 ± 0.6             86.0
          24.0                                  6.7 ± 0.3              92.1

(a) Samples spiked with a factor Xa inhibitor in vitro.
(b) Mean ± SD of 10 samples per concentration.
(c) Calculated from the species-specific control value. NA, not applicable.
These drug concentrations induced factor Xa inhibition of approximately 20% to >90%, showing that the targeted range could be predicted and achieved in all species. These ex vivo experiments demonstrated that the predicted efficacious concentration of 0.2 to 0.3 μg/mL achieved the required 30 to 40% inhibition of FXa, providing early confidence in the dose selection process for phase I human trials. Additionally, these early ex vivo studies confirmed species-specific differences: the drug concentrations required to produce similar levels of FXa inhibition were markedly different across species. The FX:C assay was used effectively in preclinical rat and dog studies with this developmental FXa inhibitor. Knowledge of the species-specific drug concentration needed to produce the required 30% inhibition of FXa drove the selection of the low dose, whereas nearly complete inhibition of FXa drove the selection of the high dose. The FX:C assay helped determine the drug concentration required for complete inhibition of factor Xa in these species and the relative bleeding risk associated with a range of factor X concentrations. Prior knowledge of the impact of this drug on FXa inhibition through fairly simple clotting assessments helped eliminate undue risks of over-anticoagulation in preclinical studies, and there was no loss of animals due to excessive hemorrhage. It also addressed questions of whether dosing had been pushed to high enough levels when only minimal bleeding was observed at the highest dose. Since nearly 100% inhibition was achieved during the study, using higher doses was
not indicated, and the lack of bleeding under conditions of complete FXa inhibition in rats and dogs suggested a strong safety profile. Inclusion of these biomarkers in preclinical studies provided greater confidence for selection of target stopping criteria for the first-in-human trial. The FX:C assay was translated and used as part of the first-in-human clinical trial with this compound, where it provided data consistent with the in vitro modeling, suggesting that it is predictive of drug concentration.
CONCLUSIONS

One of the goals for new anticoagulant therapies is a superior safety profile compared to marketed anticoagulants, thereby minimizing or eliminating the need for clinical monitoring. Although clinical monitoring with standardized coagulation assays may appear to be a simple solution to monitoring the safety and efficacy of anticoagulants, there are inherent issues that make the elimination of clinical monitoring highly desirable. The obvious factors of cost and labor are minor in comparison to the problems associated with lack of patient compliance, delayed time to achieve therapeutic benefit, and the high degree of variability in the assay itself due to instrumentation, reagents, technique, and the inherent variability among subjects. Phase I studies using biomarkers are also generally cheaper than phase II clinical endpoint studies. Additionally, new anticoagulants pose a relatively undetermined safety risk, due to the possibility of excessive bleeding. Therefore, biomarkers will continue to be essential until safety profiles can be established for this newer generation of anticoagulants. Although PT and INR are an effective reflection of the safety and efficacy of coumadins, they are less than ideal as biomarkers of new FXa inhibitor drugs. Anti-FXa activity has tracked drug concentration for some inhibitors but has been variable for others [12; current case study]. FX:C clotting activity may be another alternative but remains largely unexplored. Regardless of the hope for the development of safer anticoagulants that are monitoring-free, the reality is that development of these drugs requires extensive patient monitoring to ensure safety. Compared to heparin and coumadin, which are monitored fairly effectively with aPTT and PT, respectively, development of the new FXa inhibitors is typically accompanied by a laundry list of probable biomarkers. This process is likely to continue until safety is firmly established through prolonged use and clinical experience with these agents. It seems likely that most of these coagulation assays could just as easily be a bioassay of drug concentration as an indicator of pharmacologic response. In Stern's evaluation of biomarkers of an antithrombin agent, he concluded that "not all biomarkers are created equal" [29]. He suggested that "If a proposed biomarker measurement requires a drug and its molecular target to be combined in the same assay, it may be more a pharmacokinetic than a
pharmacodynamic assessment. Also, such assays should not be assumed to demonstrate an in vivo effect” [29]. As such, these biomarkers face a lofty hurdle to replace such pharmacodynamic endpoints as the Badimon chamber. So what does this mean for the early use of biomarkers in the development of new anticoagulants? It suggests that the greatest benefit for early utilization of coagulation biomarkers remains in allowing optimal selection of compounds, attrition of the right compounds, and the opportunity to provide an early ex vivo assessment against marketed competitors. It also demonstrates that efforts expended in understanding species-specific and reagent differences are critical in performing those early experiments.
REFERENCES

1. CMR International (2006). 2006/7 Pharmaceutical R&D Factbook. CMR International, pp. 22–35.
2. Kola I, Landis J (2004). Can the pharmaceutical industry reduce attrition rates? Nat Rev Drug Discov, 3:711–715.
3. DiMasi JA (2001). Risks in new drug development: approval success rates for investigational drugs. Clin Pharmacol Ther, 69:297–307.
4. Jawad S, Oxley J, Yuen WC, Richens A (1986). The effect of lamotrigine, a novel anticonvulsant, on interictal spikes in patients with epilepsy. Br J Clin Pharmacol, 22:191–193.
5. Smith MB, Woods GL (2001). In vitro testing of antimicrobial agents. In Davey FR, Herman CJ, McPherson RA, Pincus MR, Threatte G, Woods GL (eds.), Henry's Clinical Diagnosis and Management by Laboratory Methods, 20th ed. W.B. Saunders, Philadelphia, pp. 1119–1143.
6. Anderson FA, Wheeler HB, Goldberg RJ, et al. (1991). A population-based perspective of the hospital incidence and case-fatality rates of deep vein thrombosis and pulmonary embolism. Arch Intern Med, 151:933–938.
7. Colman RW, Clowes AW, George JN, Hirsh J, Marder VJ (2001). Overview of hemostasis. In Colman RW, Hirsh J, Marder VJ, Clowes AW, George JN (eds.), Hemostasis and Thrombosis: Basic Principles and Clinical Practice. Lippincott Williams & Wilkins, Philadelphia, pp. 3–16.
8. Kakar P, Watson T, Lip GYH (2007). Drug evaluation: rivaroxaban, an oral, direct inhibitor of activated factor X. Curr Opin Invest Drugs, 8(3):256–265.
9. Guertin KR, Choi YM (2007). The discovery of the factor Xa inhibitor otamixaban: from lead identification to clinical development. Curr Med Chem, 14:2471–2481.
10. Zafar MU, Vorchheimer DA, Gaztanaga J, et al. (2007). Antithrombotic effect of factor Xa inhibition with DU-176b: phase-1 study of an oral, direct factor Xa inhibitor using an ex-vivo flow chamber. Thromb Haemost, 98:883–888.
11. Hylek EM (2007). Drug evaluation: DU-176b, an oral, direct factor Xa antagonist. Curr Opin Invest Drugs, 8(9):778–783.
12. Crowther MA, Ginsberg JS, Hirsh J (2001). Practical aspects of anticoagulant therapy. In Colman RW, Hirsh J, Marder VJ, Clowes AW, George JN (eds.), Hemostasis and Thrombosis: Basic Principles and Clinical Practice. Lippincott Williams & Wilkins, Philadelphia, pp. 1497–1516.
13. Zucker S, Cathey MH, Sox PJ, Hall EC (1970). Standardization of laboratory tests for controlling anticoagulant therapy. Am J Clin Pathol, 52:348–354.
14. Poller L (1987). Progress in standardization in anticoagulant control. Hematol Rev, 1:225–228.
15. Bailey EL, Harper TA, Pinkerton PH (1971). The "therapeutic range" of the one-stage prothrombin time in the control of anticoagulant therapy: the effect of different thromboplastin preparations. CMAJ, 105:307–318.
16. Kirkwood TB (1983). Calibration of reference thromboplastin and standardization of the prothrombin time ratio. Thromb Haemost, 49:238–244.
17. Jeske W, Messmore HL, Fareed J (1998). Pharmacology of heparin and oral anticoagulants. In Loscalzo J, Schafer AI (eds.), Thrombosis and Hemorrhage, 2nd ed. Williams & Wilkins, Baltimore, pp. 257–283.
18. Crowther MA, Ginsberg JS, Hirsh J (2001). Practical aspects of anticoagulant therapy. In Colman RW, Hirsh J, Marder VJ, Clowes AW, George JN (eds.), Hemostasis and Thrombosis: Basic Principles and Clinical Practice. Lippincott Williams & Wilkins, Philadelphia, pp. 1497–1516.
19. Stirling Y (1995). Warfarin-induced changes in procoagulant and anticoagulant proteins. Blood Coagul Fibrinolysis, 6:361–373.
20. Levine MN, Hirsh J, Gent M (1994). A randomized trial comparing activated thromboplastin time with heparin assay in patients with acute venous thromboembolism requiring large daily doses of heparin. Arch Intern Med, 154:49–56.
21. Kitchen S, Theaker J, Preston FE (2000). Monitoring unfractionated heparin therapy: relationship between eight anti-Xa assays and a protamine titration assay. Blood Coagul Fibrinolysis, 11:55–60.
22. Fifth ACCP Consensus Conference on Antithrombotic Therapy (1998). Chest, 119(Suppl):1S–769S.
23. Bauer KA, Kass BL, ten Cate H (1989). Detection of factor X activation in humans. Blood, 74:2007–2015.
24. Bauer KA, Weitz JI (2001). Laboratory markers of coagulation and fibrinolysis. In Colman RW, Hirsh J, Marder VJ, Clowes AW, George JN (eds.), Hemostasis and Thrombosis: Basic Principles and Clinical Practice. Lippincott Williams & Wilkins, Philadelphia, pp. 1113–1129.
25. Fair DS, Edgington TS (1985). Heterogeneity of hereditary and acquired factor X deficiency by combined immunochemical and functional analyses. Br J Haematol, 59:235–242.
26. Herrmann FH, Auerswald G, Ruiz-Saez A, et al. (2006). Factor X deficiency: clinical manifestation of 102 subjects from Europe and Latin America with mutations in the factor 10 gene. Haemophilia, 12:479–489.
27. Choufani EB, Sanchorawala V, Ernst T, et al. (2001). Acquired factor X deficiency in patients with amyloid light-chain amyloidosis: incidence, bleeding manifestations, and response to high-dose chemotherapy. Blood, 97:1885–1887.
28. Mumford AD, O'Donnell J, Gillmore JD, Manning RA, Hawkins PN, Laffan M (2000). Bleeding symptoms and coagulation abnormalities in 337 patients with AL-amyloidosis. Br J Haematol, 110:454–460.
29. Stern R, Chanoine F, Criswell K (2003). Are coagulation times biomarkers? Data from a phase I study of the oral thrombin inhibitor LB-30057 (CI-1028). J Clin Pharmacol, 43:118–121.
24

INTEGRATING MOLECULAR TESTING INTO CLINICAL APPLICATIONS

Anthony A. Killeen, M.D., Ph.D.
University of Minnesota, Minneapolis, Minnesota
INTRODUCTION

The clinical laboratory plays a critical role in modern health care. It is commonly estimated that approximately 70% of all diagnoses depend to some extent on a laboratory finding. The clinical laboratory has various roles in the diagnosis and treatment of disease, including determining disease risks, screening for disease, establishing a diagnosis, monitoring disease progression, and monitoring response to therapy. Not surprisingly, the market is large: a 2004 report by S.G. Cowen on the global in vitro diagnostics (IVD) industry estimated it to be $26 billion in size, and molecular diagnostics was identified as being among the more rapidly growing areas. Today, molecular testing is used in many areas of the clinical laboratory, including microbiology and virology, analysis of solid and hematologic tumors, inherited disorders, tissue typing, and identity testing (e.g., paternity testing and forensic testing). This growth has occurred rapidly, over roughly the last 20 years. In this chapter we examine the principal issues surrounding the integration of molecular testing into the clinical laboratory environment.
CLINICAL LABORATORY REGULATION

The clinical laboratory environment in the United States is one of the most extensively regulated areas of medical practice and comes under the federal Clinical Laboratory Improvement Amendments (CLIA) of 1988 (http://www.cms.hhs.gov/clia/). Any implementation of molecular diagnostics is therefore governed by the provisions of the CLIA. The history of the CLIA dates back to the 1980s, when public and congressional concern was raised by reports of serious errors being made in clinical laboratories. In response to these concerns, legislation was introduced with the intention of improving laboratory testing. These regulations cover most aspects of laboratory practice. Any laboratory testing that is performed in the United States for clinical purposes such as diagnosis, monitoring, deciding appropriate treatment, and establishing prognosis must be performed in a CLIA-certified laboratory. These regulations, however, do not apply to purely research studies or to early research and development work for molecular or other testing in a non-CLIA-certified environment, but as soon as testing that has genuine clinical utility is made available, it must be performed in a certified environment. The initial application for a CLIA certificate is usually made to the state office of the Centers for Medicare and Medicaid Services (CMS). A successful application will result in a certificate of registration, which allows a laboratory to perform clinical testing pending its first formal inspection. Depending on whether the laboratory is certified by CMS or by an accrediting organization, a successful inspection will result in a grant of either a certificate of compliance or a certificate of accreditation (Figure 1).
Figure 1 Distribution of CLIA certificates by type in non-CLIA-exempt states in 2007: accreditation, 16,142; compliance, 19,695; provider-performed microscopy (PPM), 39,014; waiver, 122,992. (Data from the CLIA database, http://www.cms.hhs.gov/CLIA/.)
These are essentially equivalent for the purposes of offering clinical testing. Accrediting organizations function as surrogates for CMS in the laboratory accreditation process and must be approved by CMS to accredit clinical laboratories. The accrediting organizations are the College of American Pathologists (CAP), the Council on Laboratory Accreditation (COLA), the Joint Commission, the American Association of Blood Banks (AABB), the American Society for Histocompatibility and Immunogenetics (ASHI), and the American Association of Bioanalysts (AAB). Some of these, such as the ASHI, accredit laboratories that perform only limited types of testing. Others, such as the CAP, accredit laboratories for all types of clinical testing, including molecular diagnostic testing. Clinical tests are categorized for the purposes of the CLIA into several levels of complexity. This categorization is the function of the U.S. Food and Drug Administration (FDA). The type of CLIA certificate that a laboratory requires parallels the complexity of its test menu. The lowest level of test complexity is the waived category. Tests in this category are typically simple methods with little likelihood of error or of serious adverse consequences for patients if performed incorrectly. Commonly, such tests are performed in physician office laboratories. It should be noted that the term waived applies to a test, not to the need for the laboratory to have a CLIA certificate to perform any clinical testing. The next-highest level is the moderate-complexity test, including a category known as provider-performed microscopy. The highest level is the high-complexity test, which is applicable to most molecular tests. Laboratories that perform high-complexity testing must have a certificate to perform this type of testing. When the CLIA was written 20 years ago, there was relatively little molecular testing, and as a result, molecular diagnostics does not have specific requirements in the regulations as do most areas of clinical laboratory practice, such as clinical chemistry, microbiology, and hematology. Nevertheless, the general requirements of the CLIA can be adapted to molecular testing. Accrediting organizations such as the CAP do have specific requirements for laboratories that perform molecular diagnostic testing. These are available in their laboratory inspection checklists [1]. Whereas the FDA is responsible for categorizing tests, CMS is responsible for the oversight of the CLIA program, including granting certificates, approving accrediting organizations, approving proficiency testing (PT) programs, inspections, and enforcement actions. The CLIA is a federal law and applies to all clinical testing performed in the United States and in foreign laboratories that are certified under the CLIA. There are provisions in the CLIA under which individual states can substitute their own laboratory oversight programs if it is determined that such programs are at least as stringent as the federal program. Currently, such programs exist only in New York and Washington. These are known as "CLIA-exempt" states, although CMS reserves the authority to inspect any aspect of laboratory performance in these states. The CLIA includes the
following areas of laboratory practice: proficiency testing; preanalytic, analytic, and postanalytic testing; and personnel requirements.

Proficiency Testing

Proficiency testing (PT) is one external measure by which a laboratory's performance can be judged. In a PT program, laboratories are sent samples for analysis and return their results to the PT program organizers. The correct result (or range of results) for these programs is determined by the organizers based on a comparison of participant results with results obtained by reference laboratories (accuracy-based grading), or by comparison with other laboratories that use the same analytical methods (peer-group grading). Ideally, all PT programs would use accuracy-based grading, but there are significant practical limitations to this approach. One of the major limitations is the PT material itself. For many analytes it is not possible to obtain the necessary range of concentrations to test low, normal, and high concentrations using real human samples. This necessitates the use of artificial samples that have been spiked with the analyte or from which the analyte has been removed (or at least its concentration has been lowered). Such artificial samples may behave unexpectedly when tested using some analytical equipment and give higher or lower values than would be obtained in a native specimen containing the same concentration of the analyte. This is known as the matrix effect. Other limitations may require peer-group grading; for example, recombinant proteins may not be detected equally in different manufacturers' immunoassays, making accuracy-based grading impossible. Enzyme concentrations may be determined by different manufacturers using different concentrations of cofactors, different temperatures, and different substrates, thus giving rise to such intermethod disagreement that accuracy-based grading is impossible. Molecular testing poses certain challenges to PT programs. It may not be possible to obtain real human specimens such as blood from subjects known to carry mutations of interest because of the quantities required for a large PT program. This necessitates the use of cell lines or even DNA aliquots for PT programs in genetics. Such samples cannot test all phases of the analytical process, including extraction of DNA from whole blood (the normal procedure for genetic testing). The same concern applies to molecular testing for infectious diseases such as HIV-1. For these reasons, it is not uncommon that PT samples do not fully mimic patient samples. Under the CLIA, laboratories are required to enroll in PT programs for a group of analytes specified in Subpart I of the regulations. These analytes were chosen based on clinical laboratory testing patterns that existed in 1988, and the list has not been updated since then. As a result, many newer tests, including molecular tests, are not on this list. For tests not on this list of "regulated" analytes, laboratories must verify the accuracy of their methods by some other means at least twice a year. This could include comparison of results with
those obtained by a different method, sample exchange with another laboratory, or even correlation of results with patients' clinical status. If formal PT programs exist, laboratories should consider enrolling in these. Several of the accrediting organizations do have requirements for participation in PT programs where these exist, including PT programs for molecular testing.

Preanalytic Testing

The CLIA has requirements that cover the preanalytic phase of testing. These include the use of requisition forms with correct identification of the patient, the patient's age and gender, the test to be performed, the date and time of sample collection, the name of the ordering provider or the person to whom results should be reported, the type of specimen (e.g., blood), and any other additional information needed to produce a result. All of these are critical pieces of information that should be provided to the laboratory. Many so-called "laboratory errors" actually arise at the time of sample collection, and specimen misidentification is one of the most common types of error in the testing process. In addition to the patient's age and gender, orders for molecular genetic testing should include relevant information about the suspected diagnosis, clinical findings, and especially the family history. Many experienced clinical geneticists and genetic counselors will include a pedigree diagram on a requisition form for tests for inherited disorders. This practice is highly desirable and provides much useful information to the laboratory. As an example of the importance of this information, current practice guidelines in obstetrics and gynecology in the United States encourage the offering of prenatal testing to expectant Caucasian mothers to determine if they are carriers of mutations for cystic fibrosis. A recommended panel of mutations to be tested by clinical laboratories covers approximately 80 to 85% of all mutations in this population. In general, a negative screening test for these mutations reduces the risk of being a cystic fibrosis carrier from 1 in 30 to 1 in 141, and the laboratory would report these figures or, if a mutation were identified, would report the specific mutation. However, these figures are based on the assumption that there is no family history of the disorder in the patient's family. If there is such a history, the risk of being a carrier (both before and after testing) is substantially higher. It is therefore essential that the ordering physician inform the laboratory if there is a family history. The arithmetic behind this risk reduction is sketched below.
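The following minimal sketch shows the Bayesian arithmetic underlying the carrier-risk figures quoted above. The prior and detection rates are the approximate published values from the text; the exact residual risk reported by a laboratory depends on the specific mutation panel, ethnicity, and family history.

def residual_carrier_risk(prior, detection_rate):
    """Posterior carrier probability after a negative mutation-panel screen."""
    undetected_carrier = prior * (1.0 - detection_rate)  # carrier whose mutation is not on the panel
    non_carrier = 1.0 - prior                            # true non-carrier (screen is also negative)
    return undetected_carrier / (undetected_carrier + non_carrier)

prior = 1.0 / 30.0  # a priori Caucasian carrier risk, assuming no family history
for detection in (0.80, 0.85):
    risk = residual_carrier_risk(prior, detection)
    print("detection %.0f%%: residual risk ~ 1 in %.0f" % (detection * 100, 1.0 / risk))
# Prints ~1 in 146 at 80% detection and ~1 in 194 at 85%; the quoted 1 in 141
# corresponds to a panel detection rate just under 80%.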
Analytic Testing

The CLIA has detailed requirements for the analytic phase of the testing process. These include the procedure manual, which is a step-by-step set of instructions on how the test should be performed, the process for method calibration, the procedures for preparation of reagents, the use of controls, establishment of the reference range, reporting procedures, and analytical parameters such as sensitivity and specificity. There are no specific CLIA requirements that are unique to molecular testing, and therefore the molecular diagnostics laboratory has to adapt requirements from related areas such as clinical chemistry and microbiology. Some of the accrediting organizations have checklists that include specific requirements for molecular testing. These can provide useful guidance on procedures even for a laboratory that is not accredited by one of these organizations.

Postanalytic Testing

Postanalytic testing refers to the steps involved in reporting results to the ordering physician in a timely manner. The patient's name and identification information must be on the report, as should the name and address of the performing laboratory. In addition to the result, the report should include the reference interval and any relevant interpretive comments. The laboratory should be able to provide information on test validation and known interferences at the request of an ordering physician. Results must be released only to authorized persons. Although certain elements of the postanalytic phase of testing can be controlled by the laboratory, there are also critical elements that are beyond its control, notably the correct interpretation of the result by the ordering physician. Molecular diagnostics (and genetics in general) is an area in which many physicians and other providers never had formal training in medical school. Concern has been expressed about the need to improve genetics education for health care professionals. Where there is a gap in provider knowledge, the laboratory should be able to offer expert consultation on the interpretation of its results to primary care providers [2]. This requires time, patience, and good communication skills on the part of the laboratory director and senior staff. Although such activity may be reimbursable under some health plans, the primary incentives for providing this kind of consultation are good patient care and customer satisfaction.

Personnel Qualifications

Under the CLIA, requirements exist for laboratory personnel qualifications and/or experience. Perhaps the most important qualification requirements apply to the laboratory director. The director of a high-complexity laboratory such as a clinical molecular testing laboratory must hold a license in the state in which he or she works (if the state issues such licenses) and be a physician or osteopathic physician with board certification in pathology. Alternatively, the laboratory director can be a physician with at least one year of training in laboratory practice during residency, or a physician with at least two years of experience supervising or directing a clinical laboratory. A doctoral scientist holding a degree in a chemical, physical, biological, or clinical laboratory
science field with board certification from an approved board may also serve as the laboratory director. There are also provisions that allow for grandfathering of persons who were serving as laboratory directors at the time of implementation of the CLIA. Currently, there are no specific CLIA-required qualifications for the director of a molecular diagnostics laboratory. There are, however, board examinations in this field or similar fields that are offered by the American Board of Pathology, the American Board of Medical Genetics, the American Board of Clinical Chemistry, the American Board of Medical Microbiology, and the American Board of Bioanalysts. It is possible that individual states may begin to require specific qualifications in molecular diagnostics in the future or even that changes to the CLIA may require such qualifications. Other personnel whose qualifications are described in the CLIA for high-complexity laboratories are the technical supervisor, clinical consultant, general supervisor, cytology supervisor, cytotechnologist, and testing personnel.
GENETIC TESTING AND PRIVACY

For many years there has been concern about the use of genetic information to discriminate against people with genetic diseases or those who are at risk of manifesting genetic disease at some time in the future. Although there are very few reported examples of such discrimination, the possibility of such misuse of genetic information by employers or insurance companies has received considerable attention from both the public and legislative bodies [3]. A comprehensive analysis of applicable laws is beyond the scope of this chapter, but certain principles that apply to the clinical laboratory are worth mentioning. It is generally assumed, of course, that all clinical laboratory testing is performed with the consent of the patient. However, written consent is a legal requirement for genetic testing in some jurisdictions. The laboratory is generally not in a position to collect informed consent from patients, so it is usually obtained by some other health care worker, such as the ordering physician or genetic counselor. The laboratory director should be aware of applicable laws in this matter and determine, with legal advice if necessary, what testing is covered in his or her jurisdiction and ensure that appropriate consent is obtained. Genetic testing in its broadest meaning can cover more than just nucleic acid testing. For example, some laboratory methods for measuring glycohemoglobin, a test used for following diabetes control, can indicate the presence of genetic variants of hemoglobin, such as sickle-cell hemoglobin. Histopathologic examination of certain tumors can be strongly suggestive of an inherited disorder. Serum protein electrophoresis can reveal α-1 antitrypsin deficiency, an inherited disorder. The laboratory should consider how it reports such findings, which may contain genetic information that is unanticipated by both the ordering physician and the patient.
The most significant federal legislation in this area is the Genetic Information Nondiscrimination Act of 2008. This act offers protection against the use of genetic information as a basis for discrimination in employment and health insurance decisions. Under the provisions of this law, people who are healthy may not be discriminated against on the basis of any genetic predisposition to developing disease in the future. Health care insurers (but not life insurers or long-term care insurers) and employers may not require prospective clients or employees to undergo genetic testing or take any adverse action based on knowledge of a genetic trait. One benefit of this legislation is that people may feel less trepidation about undergoing genetic testing, because the fear that such information could be used by an employer or insurance company to discriminate against them is now addressed in law.
TESTING IN RESEARCH LABORATORIES

As research laboratories report new molecular findings in inherited and acquired diseases, it is not uncommon for clinical laboratories to receive requests to send patient samples to research laboratories for testing. This is an area in which the clinical laboratory must be careful to avoid noncompliance with CLIA regulations. One of the requirements of the CLIA is that certified laboratories must not send samples for patient testing to a non-CLIA-certified laboratory. This rule applies even if the research laboratory is the only one in the world to offer a particular test. Such samples should not be handled by a CLIA-certified laboratory, and the ordering physician should find some other means of arranging for testing if it is considered necessary. For example, it may be possible for testing to be performed under a research protocol. In this case the local institutional review board may be able to offer useful guidance on the creation and implementation of an appropriate protocol. There are good reasons to be cautious about performing clinical testing in a research setting. The standards that one expects in a CLIA-certified laboratory are designed to promote quality and improve the accuracy of patient testing. Laboratories that do not follow these extensive requirements may not have all of the necessary protocols and procedures in place to offer the same quality of test result. Research laboratories are often part of academic institutions that may or may not carry malpractice insurance coverage in the event that a reported test result is erroneous.
MOLECULAR TESTING FROM RESEARCH TO CLINICAL APPLICATION

The usual progression of molecular testing begins with gene and mutation discovery, typically in a research laboratory setting. Publication of these early
findings in the peer-reviewed literature is the normal means of disseminating new information about a gene of clinical interest and the variations that can cause disease. It is important to document at least the most common disease-causing mutations and benign polymorphisms. It is usual to file patent applications to establish intellectual property claims, particularly if the test may have wide clinical applicability. After a disease-causing mutation has been discovered, diagnostic testing on patients (as opposed to research subjects who have consented) requires performance in a laboratory that holds a CLIA certificate, which for molecular testing almost certainly means a certificate that allows for high-complexity testing. Because research laboratories are usually not set up to perform clinical testing or to meet the stringent criteria for clinical laboratory operations, the rights to perform clinical testing are usually sold or licensed to a clinical laboratory that has the capability of offering such testing. The question of how many laboratories should be licensed is an important one. In general, it is often problematic for clinical providers when only one laboratory has a license to perform a clinical test: there is no way to verify a result in an independent laboratory, there is no competition that might lead to better test pricing for patients, there is little that can be done if laboratory performance in areas such as turnaround time is suboptimal, and research may be inhibited [4]. For these reasons, licensing a test to multiple laboratories is generally preferable to an exclusive license to one laboratory. It has also been argued that patenting tests may inhibit their availability in clinical laboratories [5]. What should a molecular diagnostics laboratory be able to offer to meet clinical needs for molecular testing? First, the quality of the test result must be of a very high standard; that is, the results must be reliable. Of course, all laboratories strive for this goal, which is implicit in the numerous regulations that govern laboratory testing. It is achieved by careful attention to the preanalytic, analytic, and postanalytic factors mentioned above and by the hiring of qualified and skilled personnel. The laboratory should offer turnaround times appropriate to the clinical needs of a specific test; these will vary from one test to another. For example, testing for some infectious diseases is likely to require a faster turnaround time than is testing for a genetic predisposition to a chronic disease. Information should be readily available on the requirements for specimen type and the needs for special handling. The laboratory should be able to offer interpretations and consultations to ordering physicians regarding results of patient testing. If the genetic test result is a risk factor for future development of disease or for carrier status (e.g., cystic fibrosis carrier screening in pregnancy), the laboratory should be able to recalculate such risks if additional family history is provided at a later time. Many laboratories have a formal relationship with a genetic counselor who can interact with both patients and other health care workers and provide a variety of very useful services. As clinical testing becomes more widespread, there can be significant changes to the knowledge and thinking about the relationship between disease
and underlying genetic mutation. This is illustrated by the hereditary hemochromatosis gene, HFE. Discovery of this gene and the common mutations, C282Y and H63D, led to the view that the homozygous states, especially homozygosity for C282Y, would lead to chronic iron overload and hemochromatosis [6]. That view is no longer correct in light of more recent population studies of the penetrance of these mutations. Approximately one-third of patients who are homozygous for C282Y do not have elevated ferritin levels and appear not to be at risk of iron overload [7]. The reason for the variability of penetrance is probably related to dietary iron, blood loss, and other genetic factors that have yet to be determined. It is important for the laboratory director to be aware of such changing perspectives in thinking about diseases and to educate others, making them aware of important developments, so that rational ordering patterns are encouraged.
REIMBURSEMENT FOR MOLECULAR TESTING

In common with all areas of medical practice, reimbursement for molecular testing at the federal level (Medicare) is based on the Current Procedural Terminology (CPT) coding system. State providers such as Medicaid and private insurance companies generally follow the same process. Under CPT coding, a charge and its payment are based on the number of individual items of service provided. Each step in a typical molecular assay, ranging from extraction of DNA to performance of a polymerase chain reaction to gel electrophoresis and final result interpretation, has a unique CPT code and an associated reimbursement based (in the case of Medicare) on the published fee schedule. Therefore, the Medicare reimbursement rate is calculable and is based on the individual steps in an assay (a toy sketch of this arithmetic appears at the end of this section). Private insurance companies may reimburse at a higher rate than federal payers. The CPT codes are updated annually by the American Medical Association, which retains copyright on the codes. Because of the rapid advances in molecular testing, it is not uncommon for laboratories to use methods that are not listed in the CPT guide. In this case, it may be necessary to consult billing experts on choosing the appropriate fee codes. Not uncommonly, genetic test prices from commercial laboratories are well above those that can be justified from published fee schedules. Although this may be perfectly legal, it can lead to significant problems for patients whose insurance companies (including Medicare) may not cover the full cost of the testing. In this situation the patient may have to pay out of pocket for part or all of the cost of the test if it is decided that the testing is essential. This situation can also pose a financial risk for hospitals and clinics if they refer a sample for testing to a reference laboratory and thereby incur the charges for the test. One possible option is to notify the patient and ordering physician that such tests are unlikely to be covered by insurance and determine how they propose to pay for testing. For Medicare patients, an advance beneficiary
notice (ABN) may be used to formally notify a patient that the test is considered to be a noncovered service [8]. These types of situations should be discussed with hospital management.
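As a toy illustration of how a stacked CPT reimbursement is calculable from the individual steps of an assay, the sketch below sums fee-schedule amounts per billed step. The step names and dollar figures are invented placeholders, not real CPT codes or fees; actual codes and fees come from the AMA CPT guide and the published Medicare fee schedule.

fee_schedule = {
    # Hypothetical step codes and allowed amounts (USD).
    "EXTRACT": 24.00,    # nucleic acid extraction
    "AMPLIFY": 48.50,    # PCR amplification, billed per target
    "SEPARATE": 30.25,   # gel or capillary electrophoresis
    "INTERPRET": 45.00,  # professional interpretation and report
}

def expected_reimbursement(billed_steps):
    """Sum the fee-schedule amounts for each billed step of a stacked assay."""
    return sum(fee_schedule[code] for code in billed_steps)

# An assay billed as extraction + two amplifications + separation + interpretation:
assay = ["EXTRACT", "AMPLIFY", "AMPLIFY", "SEPARATE", "INTERPRET"]
print("Expected reimbursement: $%.2f" % expected_reimbursement(assay))  # $196.25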
SUMMARY

Molecular testing is firmly established in clinical laboratories for a wide variety of disorders. According to published reports, molecular diagnostics is, and will continue to be, one of the fastest-growing areas of clinical testing. In the United States, the clinical laboratory operates under the regulations of the Clinical Laboratory Improvement Amendments of 1988, which provide the framework for producing high-quality results. The clinical laboratory differs significantly from the research laboratory both in practice and from a regulatory point of view. Careful attention should be paid to issues such as patient privacy and reimbursement for molecular testing.
REFERENCES

1. College of American Pathologists (2008). http://www.cap.org (accessed Sept. 18, 2008).
2. Harvey EK, Fogel CE, Peyrot M, Christensen KD, Terry SF, McInerney JD (2007). Providers' knowledge of genetics: a survey of 5915 individuals and families with genetic conditions. Genet Med, 9:259–267.
3. Harmon A (2008). Insurance fears lead many to shun DNA tests. New York Times, Feb. 24.
4. Cho MK, Illangasekare S, Weaver MA, Leonard DG, Merz JF (2003). Effects of patents and licenses on the provision of clinical genetic testing services. J Mol Diagn, 5:3–8.
5. Merz JF, Kriss AG, Leonard DG, Cho MK (2002). Diagnostic testing fails the test. Nature, 415:577–579.
6. Feder JN, Gnirke A, Thomas W, et al. (1996). A novel MHC class I–like gene is mutated in patients with hereditary haemochromatosis. Nat Genet, 13(4):399–408.
7. Olynyk JK, Trinder D, Ramm GA, Britton RS, Bacon BR (2008). Hereditary hemochromatosis in the post-HFE era. Hepatology, 48:991–1001.
8. Carter D (2003). Obtaining advance beneficiary notices for Medicare physician providers. J Med Pract Manage, 19:10–18.
25

BIOMARKERS FOR LYSOSOMAL STORAGE DISORDERS

Ari Zimran, M.D.
Gaucher Clinic, Shaare Zedek Medical Center, Jerusalem, Israel
Candida Fratazzi, M.D.
Altus Pharmaceuticals, Inc., Waltham, Massachusetts
Deborah Elstein, Ph.D.
Gaucher Clinic, Shaare Zedek Medical Center, Jerusalem, Israel
INTRODUCTION

During the past few decades, because of advances in molecular technology and improved understanding of lysosomal diseases, efforts have been made to identify appropriate prognostic and predictive factors in many of these diseases, despite the fact that each disease is a rare disorder. Indeed, what may be seen as a commonality among these diseases in terms of biochemistry or molecular underpinnings further defines the biochemical and molecular elements that make each disease unique. Thus, the situation today reflects the partial knowledge we have about this conglomerate of diseases: in the main, very few biomarkers are available to assist the clinician in knowing a priori which patients will suffer from more severe manifestations or will benefit the most from specific therapies. In this context one must mention the infantile (neurological) forms that are rapidly
progressive, and for these it might not be ethically acceptable to offer therapies that have not yet withstood the test of time. A biomarker should be technically feasible in many hands and easy to measure (using readily available technology, so that results are universal and standardized); useful, with a consistent relative magnitude between experimental and control subjects, or treated and untreated; reliable, precise, and accurate clinically, not just statistically; and classifiable as strongly predictive or prognostic. In recruiting patients with lysosomal disorders for clinical trials, the use of biomarkers is a double-edged sword: whereas biomarkers may meet all the criteria above, they must also be clearly related to disease burden in the vast majority of patients, be capable of detection at both ends of the spectrum, from very mild to severe, and be equally reactive to specific therapy within the same range. If these prerequisites cannot all be met, the use of the biomarker may be unjustified clinically. The purpose of this chapter is to review the literature and practice of biomarkers in lysosomal storage diseases and use current practices to discuss guidelines for the use of biomarkers in upcoming clinical trials.
IDENTIFICATION OF SPECIFIC LYSOSOMAL STORAGE DISEASES

The rarer a disease, the more likely it is that biomarkers are unavailable. For some of the lysosomal diseases, of which there are more than 50, there are actually no universally recognized biomarkers other than specific protein (enzyme) or substrate markers. Thus, for four mucopolysaccharidosis disorders (MPS I, MPS II, MPS III, and MPS VI) and for Pompe disease, Gaucher disease, and Fabry disease there are protein markers, either enzymes or macrophage biomarkers; and for seven diseases (MPS I, MPS II, MPS IIIA, MPS IIIB, MPS IVA, MPS VII, and Fabry disease) there are substrate markers [1]. Urinary heparan sulfates can be used to differentiate MPS IIIC (Sanfilippo C syndrome) and MPS II (Hunter disease) [2], keratan sulfate to identify MPS IV and possibly other MPS disorders [3], and antibodies against gangliosides (i.e., anti-GM2 and anti-GM3) based on animal modeling. Antibodies, monoclonal and/or polyclonal, have been generated against some of the diseases' enzymes or substrates as well. Immunohistochemical techniques, which can also be used to identify proteins [4], although not quantitative, may prove to have considerable potential as predictive markers if teamed with other techniques, such as mass spectrometry or various forms of chromatography. In the first decade of the 2000s, before his untimely death, Nestor Chamoles produced filter papers for dried blood spots to identify various enzymes whose deficiencies implicate a lysosomal disorder: α-l-iduronidase (MPS I), α-galactosidase (Fabry disease), β-d-galactosidase (GM1 gangliosidosis), and others [5–13]. Eventually, the technology was streamlined into a multiplex assay for identification of the enzymes deficient in Fabry, Gaucher, Hurler, Krabbe, Niemann–
Pick A/B, and Pompe diseases [14], because of the recognition that simple and reliable diagnostic markers for patient identification were needed given the imminent and/or probable availability of specific therapies in these diseases. This methodological advance highlights the importance of using an easily obtainable patient sample (in this case, in small quantities) that can be transported without damage or special handling and that gives highly reproducible results. For the majority of the enzymes above, the filter paper system is reliable; however, in Pompe disease, for example, measured α-glucosidase activity may be a composite of other activities, making this specific assay less reliable. In its stead, and in similar cases where nonspecific (substrate) activities cannot be totally suppressed, other assays, such as immunocapture, can be used [15]. There is also proof of principle for the use of diagnostic biomarkers from amniotic fluid in lysosomal disorders, with the express purpose of distinguishing normal from affected and even correlation with specific storage material in some of the disorders [16]. Within the past few years the list of diseases that can be profiled on the basis of various proteins, oligosaccharides, and glycolipids has grown to include six MPS disorders and at least eight other diseases. Of note is a urinary measure of oligosaccharides (glycosaminoglycan derivatives) which has met the criteria of sensitivity and specificity in identifying persons with MPS disorders and, based on unique profiles, can differentiate among subtypes (all but MPS IIIB and MPS IIIC) [17]. Generally, to be useful as true biomarkers, some correlation with clinical disease expression (i.e., predictive or prognostic value) must be proven. A urinary diagnostic test that may be an appropriate marker of both disease progression and response to therapy is urinary globotriaosylceramide (Gb3) in Fabry disease [18], although residual enzyme activity in the blood is a poorer marker of disease status. Similarly, in Gaucher disease, the most common lysosomal storage disorder, which has a range of clinical expression from virtually asymptomatic octogenarians to lethal neonatal forms, residual activity is a poor predictor of disease severity [19]. Therefore, in other diseases, a combination of analyses of enzyme activity with genotype or molecular phenotypes has been recommended [20] [e.g., in MPS II (Hunter disease)] to improve predictability. In summary, within the past few decades various specific assays have been developed that identify patients with enzyme deficiencies and can even quantify residual enzyme activity (relative to normal controls) based on the kinetics model of Conzelmann and Sandhoff [21] of a correlation between lipid accumulation and deficient enzyme activity [22], but residual activity is not always correlated with clinical status. Alternatively, improper or "derailed" processing of the enzyme may be indicative of disease severity, since these enzymes undergo trafficking from endosome to Golgi to lysosome. This thinking has been applied in estimates of lysosomal-associated membrane proteins (LAMP-1 and LAMP-2) with the expectation of uncovering a processing defect common to all lysosomal disorders that would also be predictive of disease severity [23], but this was not proven [24]. It has also
been suggested that mutant enzyme variants may be retained in the endoplasmic reticulum and that this may be one of the factors that determine disease severity [25]. Accumulation of lipid in the endosomes or lysosomes was shown to characterize variants of type C Niemann–Pick disease because of the presence of cholesterol [26], thereby making this a good marker, but again, this was true only for a highly specific variant of this rare disorder. It should be noted that one ramification of these and similar findings is that response to therapy, even if the modality is identical for more than one lysosomal disease, may not be uniquely or sensitively monitored using non-disease-specific markers.
IDENTIFICATION OF CLINICAL MARKERS

Clinical markers with predictive or prognostic value would be attractive if they were quantifiable and if assessment were noninvasive. It was hoped that the use of animal models would be illustrative of the human conditions. In a recent study of murine MPS I, although it was clearly shown that thickened aortic valves and abnormal cardiac function can be monitored from the preclinical stage, the authors note that "murine MPS I is not identical to human MPS I" [27]; each has unique clinical features. Nonetheless, a more recent study in murine MPS I employing proteomic analysis of heparin cofactor II–thrombin (HCII-T), a complex formed by the serine protease inhibitor heparin cofactor II, showed highly elevated serum levels in mice and in human patients that were correlated with disease severity and responsive to therapy [28]. HCII-T, therefore, may indeed meet the requirements of a good biomarker for MPS I, especially since it implicates a specific pathophysiology. A second good example in the MPSs is the use of accumulation of a disaccharide (HNS-UA), a marker of heparan sulfate storage in disease-specific sites of MPS IIIA [29], because the rate of accumulation is commensurate with disease severity at these sites and is appropriately reactive to disease-specific therapy. Along these lines, therefore, it is commendable to find disease-specific parameters that lend themselves to quantification and to test their correlation with clinical severity and responsiveness to therapy. Biopsies and bone marrow aspirations or repeat radiological workups to stage severity should not be condoned if there is a better option, even, some might say, if that option does not exactly meet our criteria of a "good" biomarker. In Gaucher disease, because of concern about Gaucher-related skeletal involvement as a source of considerable morbidity, one study showed a reduction in osteoblast and osteoclast bone markers [30], but there was no correlation with the incidence of bone pathology [31]. Biomarkers in this sense therefore might be misleading. Another example, in Fabry disease, showed no correlation of plasma concentrations of endothelial markers or homocysteine with response to therapy, although endothelial and leucocyte activation are good measures of renal and cardiovascular involvement in Fabry disease [32].
INFLAMMATORY MARKERS AS SECONDARY BIOMARKERS

Among the most prevalent hypothetical constructs used in lysosomal storage disorders is that of inflammation as a mediator, either causative of or consequent to lipid storage. A pathway common to all MPS disorders (originally described for MPS VI and MPS VII) has been developed based on inflammatory reactivity of connective tissues correlated with metalloproteinases in chondrocytes [33], but it does not provide a quantitative measure of severity or responsiveness to therapy. In GM1 gangliosidosis, where neuronal apoptosis and abnormalities of the central nervous system are known to be secondary to storage, inflammatory cerebrospinal fluid markers correlated with clinical course but were not responsive to therapeutic interventions [34]. However, in mouse models of the gangliosidoses that showed disease progression with increased inflammatory cells in the microglia, the difference between the GM1 gangliosidosis and GM2 gangliosidosis (Sandhoff, Tay–Sachs, and late-onset Tay–Sachs disease) models was the timing of the onset of clinical signs [35], which is not always taken into consideration. Thus, while inflammation may be postulated to be either a primary or a secondary index of disease activity, not all markers meet the criteria of sensitivity or clinical relevance. Similarly, in an early study of Gaucher disease using macrophage-derived inflammatory markers, some cytokines correlated with disease severity and clinical parameters, but the results were equivocal for many markers [36]. In a knockout mouse model of types A and B Niemann–Pick disease, the macrophage inflammatory protein MIP-1α was elevated in disease-specific sites and declined with therapy [37], but this marker cannot be disease-specific. On a global level, however, mouse models of the various lysosomal disorders have recently shown a connection between lipid storage in the endosome or lysosome and invariant natural killer T (iNKT)-cell function, indicative of thymic involvement, although these findings would conflict with the theory of elaboration of inflammatory markers in lysosomal storage disorders [38].
MACROPHAGE SURROGATE BIOMARKERS

Of the many avenues attempted in the various common and less common lysosomal storage diseases, none has yielded a completely satisfactory biomarker. This is a distinct disadvantage when the alternative may be invasive procedures that are more dangerous than the status of the patient merits. A class of biomarkers, surrogate in the sense that they measure plasma levels of products secreted by lipid-laden macrophages, has been incorporated into the evaluation initially of Gaucher disease and now also of Fabry disease and type B Niemann–Pick disease. Examples of this class are chitotriosidase and C-C chemokine ligand 18 (CCL18; also called pulmonary and activation-regulated chemokine, PARC), which can be measured in plasma and in urine. Chitotriosidase
in Gaucher disease [39] was considered first a specific marker of disease severity and then a measure of response to therapy [40]. Among the methodological issues with chitotriosidase is that it is genetically deficient in about 6% of all persons, so genotyping should be performed. The surrogate marker CCL18/PARC was introduced subsequently [41] because it has the advantage of being present in everyone, although its elevation in patients with Gaucher disease relative to healthy individuals is far less marked than that of chitotriosidase. A further advantage of CCL18/PARC is that its assay is less difficult than the chitotriosidase assay. In male patients with Fabry disease, chitotriosidase levels were found to be significantly elevated but were not correlated with disease severity, although in some cases they may have normalized with therapy [42]. In two siblings with type B Niemann–Pick disease, both markers were elevated, but not commensurately with clinical severity [43]. Recently, urinary levels of chitotriosidase and CCL18/PARC have been measured in Gaucher disease; they do not appear to correlate with plasma levels, although there was correlation after exposure to treatment [44]. Interestingly, despite its indirect relationship to disease-specific parameters, the popularity of chitotriosidase as a putative biomarker has led to its use in testing other nonspecific markers [45]. This should not, of course, be the intention of biomarkers (i.e., that they correlate with each other), because then one is caught up in loops of correlation, none of which is related directly to a disease-specific parameter.
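The genotype caveat lends itself to a concrete illustration. Since carriers of one null CHIT1 allele express substantially reduced enzyme activity, some laboratories adjust measured chitotriosidase activity by genotype before comparing patients and fall back to CCL18/PARC when the enzyme is absent. The sketch below assumes a simple doubling correction for carriers; both the function and the correction factor are illustrative, not a validated clinical algorithm.

```python
def corrected_chitotriosidase(activity, chit1_null_alleles):
    """Genotype-aware chitotriosidase interpretation (illustrative only).

    activity: measured plasma activity (e.g., nmol/mL/h).
    chit1_null_alleles: 0, 1, or 2 copies of the null CHIT1 allele.
    Returns adjusted activity, or None when the marker is uninformative.
    """
    if chit1_null_alleles == 2:
        return None                 # enzyme absent: use CCL18/PARC instead
    if chit1_null_alleles == 1:
        return 2.0 * activity       # assumed carrier correction factor
    return activity

print(corrected_chitotriosidase(6000.0, 1))   # carrier: reported as 12000.0
print(corrected_chitotriosidase(9500.0, 0))   # wild type: unchanged
print(corrected_chitotriosidase(0.0, 2))      # null genotype: None
```

Returning None for null homozygotes, rather than zero, makes the fallback to CCL18/PARC explicit, mirroring the rationale given above for introducing that second marker.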
BIOMARKERS AND CLINICAL TRIALS

Initiation of clinical trials is a costly and time-consuming commitment whose goal is to decrease the time to market of a novel modality that will become a gold standard. This is especially true in rare diseases, where the availability of a single safe and effective therapeutic option may be the only hope of affected individuals. If a pharmaceutical company undertakes the commitment to a clinical trial in an ultrarare disorder, the candidate modality must show tremendous promise to survive rigorous examination at the preclinical stages. By the time a putative therapy achieves phase II or phase III status, patients too will be highly motivated to see a successful treatment brought to market. Thus, on the one hand there is incentive for the company and for patients to get the treatment onto the market; on the other hand, there is awareness that in clinical trials many hopeful candidate therapies do not meet their primary outcome measures, resulting in dismissal of that option. Choosing outcome measures for clinical trials in rare diseases is both a science and an art, because candidate patients are few, there is not always a "dream team" in terms of disease severity, and, as we all know, "stuff happens." Unforeseen and uncontrollable events may prevent a perfectly acceptable drug from getting to market. The current practice is to have secondary as well as primary outcome measures that one can
assess should the primary outcome be equivocal or difficult to interpret once the clinical trial is completed. As implied above, because they are seen as adjuncts in assessing the clinical efficacy of therapy, biomarkers are popular as outcome measures in clinical trials, not least with regulatory agencies. In rare diseases, however, one must be cautious about adopting biomarkers merely because they are more convenient to assess than disease-specific clinical parameters. Importantly, not all biomarkers are equal in their predictive or prognostic value; this is a critical starting point in evaluating whether to include a biomarker as an outcome measure. Similarly, there is a difference, as implied above, between markers that measure disease-specific events and those that measure events putatively related to a clinical event (surrogate markers have a different use and can be confusing in this context). In making clinical decisions, one should not rely on putatively related markers but be guided by clinically relevant parameters that correlate with disease severity. In conclusion, biomarkers are a means of better prediction and follow-up, especially from the perspective of regulatory issues involving diagnostics and novel therapeutic options. This is even more cogent where ancillary or additive therapies are considered to "fine tune" previously achieved therapeutic gains. However, one should differentiate between diagnostic markers and prognostic biomarkers before choosing the latter over the former in making clinical decisions.
REFERENCES

1. Parkinson-Lawrence E, Fuller M, Hopwood JJ, Meikle PJ, Brooks DA (2006). Immunochemistry of lysosomal storage disorders. Clin Chem, 52:1660–1668. 2. Toma L, Dietrich CP, Nader HB (1996). Differences in the nonreducing ends of heparan sulfates excreted by patients with mucopolysaccharidoses revealed by bacterial heparitinases: a new tool for structural studies and differential diagnosis of Sanfilippo’s and Hunter’s syndromes. Lab Invest, 75:771–781. 3. Tomatsu S, Okamura K, Maeda H, et al. (2005). Keratan sulphate levels in mucopolysaccharidoses and mucolipidoses. J Inherit Metab Dis, 28:187–202. 4. Walkley SU (2004). Secondary accumulation of gangliosides in lysosomal storage disorders. Semin Cell Dev Biol, 15:433–444. 5. Chamoles NA, Blanco M, Gaggioli D (2001). Diagnosis of alpha-l-iduronidase deficiency in dried blood spots on filter paper: the possibility of newborn diagnosis. Clin Chem, 47:780–781. 6. Chamoles NA, Blanco M, Gaggioli D (2001). Fabry disease: enzymatic diagnosis in dried blood spots on filter paper. Clin Chim Acta, 308(1–2):195–196. 7. Chamoles NA, Blanco MB, Iorcansky S, Gaggioli D, Specola N, Casentini C (2001). Retrospective diagnosis of GM1 gangliosidosis by use of a newborn-screening card. Clin Chem, 47:2068.
8. Chamoles NA, Blanco MB, Gaggioli D, Casentini C (2001). Hurler-like phenotype: enzymatic diagnosis in dried blood spots on filter paper. Clin Chem, 47:2098–2102. 9. Chamoles NA, Blanco M, Gaggioli D, Casentini C (2002). Gaucher and Niemann–Pick diseases—enzymatic diagnosis in dried blood spots on filter paper: retrospective diagnoses in newborn-screening cards. Clin Chim Acta, 317: 191–197. 10. Chamoles NA, Blanco M, Gaggioli D, Casentini C (2002). Tay–Sachs and Sandhoff diseases: enzymatic diagnosis in dried blood spots on filter paper: retrospective diagnoses in newborn-screening cards. Clin Chim Acta, 318:133–137. 11. Chamoles NA, Niizawa G, Blanco M, Gaggioli D, Casentini C (2004). Glycogen storage disease type II: enzymatic screening in dried blood spots on filter paper. Clin Chim Acta, 347:97–102. 12. Wang D, Eadala B, Sadilek M, et al. (2005). Tandem mass spectrometric analysis of dried blood spots for screening of mucopolysaccharidosis I in newborns. Clin Chem, 51:898–900. 13. Niizawa G, Levin C, Aranda C, Blanco M, Chamoles NA (2005). Retrospective diagnosis of glycogen storage disease type II by use of a newborn-screening card. Clin Chim Acta, 359:205–206. 14. Gelb MH, Turecek F, Scott CR, Chamoles NA (2006). Direct multiplex assay of enzymes in dried blood spots by tandem mass spectrometry for the newborn screening of lysosomal storage disorders. J Inherit Metab Dis, 29:397–404. 15. Umapathysivam K, Hopwood JJ, Meikle PJ (2005). Correlation of acid alphaglucosidase and glycogen content in skin fibroblasts with age of onset in Pompe disease. Clin Chim Acta, 361:191–198. 16. Ramsay SL, Maire I, Bindloss C, et al. (2004). Determination of oligosaccharides and glycolipids in amniotic fluid by electrospray ionisation tandem mass spectrometry: in utero indicators of lysosomal storage diseases. Mol Genet Metab, 83:231–238. 17. Fuller M, Rozaklis T, Ramsay SL, Hopwood JJ, Meikle PJ (2004). Disease-specific markers for the mucopolysaccharidoses. Pediatr Res, 56:733–738. 18. Whitfield PD, Calvin J, Hogg S, et al. (2005). Monitoring enzyme replacement therapy in Fabry disease: role of urine globotriaosylceramide. J Inherit Metab Dis, 28:21–33. 19. Fuller M, Lovejoy M, Hopwood JJ, Meikle PJ (2005). Immunoquantification of beta-glucosidase: diagnosis and prediction of severity in Gaucher disease. Clin Chem, 51:2200–2202. 20. Sukegawa-Hayasaka K, Kato Z, Nakamura H, et al. (2006). Effect of Hunter disease (mucopolysaccharidosis type II) mutations on molecular phenotypes of iduronate-2-sulfatase: enzymatic activity, protein processing and structural analysis. J Inherit Metab Dis, 29:755–761. 21. Conzelmann E, Sandhoff K (1991). Biochemical basis of late-onset neurolipidoses. Dev Neurosci, 13:197–204. 22. Schueler UH, Kolter T, Kaneski CR, Zirzow G, Sandhoff K, Brady RO (2004). Correlation between enzyme activity and substrate storage in a cell culture model system for Gaucher disease. J Inherit Metab Dis, 27:649–658.
23. Zimmer KP, le Coutre P, Aerts HM, et al. (1999). Intracellular transport of acid beta-glucosidase and lysosome-associated membrane proteins is affected in Gaucher’s disease (G202R mutation). J Pathol, 188:407–414. 24. Meikle PJ, Ranieri E, Simonsen H, et al. (2004). Newborn screening for lysosomal storage disorders: clinical evaluation of a two-tier strategy. Pediatrics, 114: 909–916. 25. Ron I, Horowitz M (2005). ER retention and degradation as the molecular basis underlying Gaucher disease heterogeneity. Hum Mol Genet, 14:2387–2398. 26. Sun X, Marks DL, Park WD, et al. (2001). Niemann–Pick C variant detection by altered sphingolipid trafficking and correlation with mutations within a specific domain of NPC1. Am J Hum Genet, 68(6):1361–1372. 27. Braunlin E, Mackey-Bojack S, Panoskaltsis-Mortari A, et al. (2006). Cardiac functional and histopathologic findings in humans and mice with mucopolysaccharidosis type I: implications for assessment of therapeutic interventions in Hurler syndrome. Pediatr Res, 59:27–32. 28. Randall DR, Sinclair GB, Colobong KE, Hetty E, Clarke LA (2006). Heparin cofactor II-thrombin complex in MPS I: a biomarker of MPS disease. Mol Genet Metab, 88:235–243. 29. King B, Savas P, Fuller M, Hopwood J, Hemsley K (2006). Validation of a heparan sulfate–derived disaccharide as a marker of accumulation in murine mucopolysaccharidosis type IIIA. Mol Genet Metab, 87:107–112. 30. Drugan C, Jebeleanu G, Grigorescu-Sido P, Caillaud C, Craciun AM (2002). Biochemical markers of bone turnover as tools in the evaluation of skeletal involvement in patients with type 1 Gaucher disease. Blood Cells Mol Dis, 28:13–20. 31. Ciana G, Addobbati R, Tamaro G, et al. (2005). Gaucher disease and bone: laboratory and skeletal mineral density variations during a long period of enzyme replacement therapy. J Inherit Metab Dis, 28:723–732. 32. Demuth K, Germain DP (2002). Endothelial markers and homocysteine in patients with classic Fabry disease. Acta Paediatr Suppl, 91:57–61. 33. Simonaro CM, D’Angelo M, Haskins ME, Schuchman EH (2005). Joint and bone disease in mucopolysaccharidoses VI and VII: identification of new therapeutic targets and biomarkers using animal models. Pediatr Res, 57:701–707. 34. Satoh H, Yamato O, Asano T, et al. (2007). Cerebrospinal fluid biomarkers showing neurodegeneration in dogs with GM1 gangliosidosis: possible use for assessment of a therapeutic regimen. Brain Res, 1133:200–208. 35. Jeyakumar M, Thomas R, Elliot-Smith E, et al. (2003). Central nervous system inflammation is a hallmark of pathogenesis in mouse models of GM1 and GM2 gangliosidosis. Brain, 126:974–987. 36. Hollak CE, Evers L, Aerts JM, van Oers MH (1997). Elevated levels of M-CSF, sCD14 and IL8 in type 1 Gaucher disease. Blood Cells Mol Dis, 123:201–212. 37. Dhami R, Passini MA, Schuchman EH (2006). Identification of novel biomarkers for Niemann–Pick disease using gene expression analysis of acid sphingomyelinase knockout mice. Mol Ther, 13:556–564. 38. Gadola SD, Silk JD, Jeans A, et al. (2006). Impaired selection of invariant natural killer T cells in diverse mouse models of glycosphingolipid lysosomal storage diseases. J Exp Med, 203:2293–2303.
39. Hollak CE, van Weely S, van Oers MH, Aerts JM (1994). Marked elevation of plasma chitotriosidase activity: a novel hallmark of Gaucher disease. J Clin Invest, 93:1288–1292. 40. Czartoryska B, Tylki-Szymanska A, Gorska D (1998). Serum chitotriosidase activity in Gaucher patients on enzyme replacement therapy (ERT). Clin Biochem, 31:417–420. 41. Boot RG, Verhoek M, de Fost M, et al. (2004). Marked elevation of the chemokine CCL18/PARC in Gaucher disease: a novel surrogate marker for assessing therapeutic intervention. Blood, 103:33–39. 42. Vedder AC, Cox-Brinkman J, Hollak CE, et al. (2006). Plasma chitotriosidase in male Fabry patients: a marker for monitoring lipid-laden macrophages and their correction by enzyme replacement therapy. Mol Genet Metab, 89:239–244. 43. Brinkman J, Wijburg FA, Hollak CE, et al. (2005). Plasma chitotriosidase and CCL18: early biochemical surrogate markers in type B Niemann–Pick disease. J Inherit Metab Dis, 28:13–20. 44. Boot RG, Verhoek M, Langeveld M, et al. (2006). CCL18: a urinary marker of Gaucher cell burden in Gaucher patients. J Inherit Metab Dis, 29:564–571. 45. Moller HJ, de Fost M, Aerts H, Hollak C, Moestrup SK (2004). Plasma level of the macrophage-derived soluble CD163 is increased and positively correlates with severity in Gaucher’s disease. Eur J Haematol, 72:135–139.
26 VALUE CHAIN IN THE DEVELOPMENT OF BIOMARKERS FOR DISEASE TARGETS Charles W. Richard, III, M.D., Ph.D., Arthur O. Tzianabos, Ph.D., and Whaijen Soo, M.D., Ph.D. Shire Human Genetic Therapies, Cambridge, Massachusetts
INTRODUCTION

Biomarkers have only recently come into use as an important tool in the development and clinical testing of therapeutic agents. The value of biomarker development was realized following the failure of drugs to achieve success in late-stage clinical trials in the 1990s. In many cases, small molecules and biologics being tested in clinical trials showed efficacy in preclinical testing as well as in phase I and phase II clinical trials, but failed to meet clinical endpoints once tested in expanded phase III trials. Initially, pharmaceutical and biotechnology companies sought to improve clinical trial design through selection of more refined clinical endpoints. This was augmented by an effort to better understand the mechanism of action of the therapies being tested. These initiatives did improve the success rate of clinical trials in general and yielded an understanding in many cases of why therapies did not work in phase III trials. However, it became clear with time that these measures addressed only part of the overall shortcomings in the drug development process.
As more and more clinical trial results became available and were analyzed, the underlying problem with the drug development process became evident. Clinical trials focused mainly on safety in phase I and early phase II; efficacy data were obtained only in late-stage phase II and in expanded phase III trials. This approach squandered the opportunity to obtain meaningful efficacy data in phase I and early phase II testing: although the number of patients in phase I trials is usually smaller than in phase II trials, the opportunity to obtain data in humans was being lost. This realization led to an effort to correlate changes in biological markers, or biomarkers, with the therapies being tested in humans. The ability to monitor changes in the levels of naturally occurring cytokines, cell surface molecules, signaling cascades, inflammatory mediators, or metabolic products that correlate with amelioration of disease following treatment gave investigators additional information about the effect of these therapies. This information could then be correlated with clinical outcomes to understand much earlier in the drug development timeline whether a compound or biologic is effective.
VALUE OF BIOMARKER DEVELOPMENT USING PRECLINICAL MODELS The utility of biomarkers in clinical development is increased through the use of preclinical animal models of disease that recapitulate the major hallmarks of human disease being targeted for drug development. Therefore, biomarker development really does begin in the preclinical stage of drug development and relies heavily on selection and validation of good preclinical models of disease. The selection of good preclinical models is an often overlooked component of drug development. This is the first step in building the value chain of a good biomarker(s) for use in clinical testing. Disease models should faithfully manifest the major aspects of human disease. This should be reflected in the development of the major and minor clinical signs that occur in humans but also in the pathogenesis of disease that leads to overt clinical signs. For example, the cytokines TNFα and IL-6 have been identified in preclinical animal models as major drivers in the pathogenesis of autoimmune diseases such as rheumatoid arthritis (RA; [1–3]). The finding that increased levels of these cytokines circulate in mice with RA and that these levels correlate with the severity of disease in these animals identified TNFα and IL-6 as potential biomarkers for human disease. Further, demonstration that that these levels decrease on immunosuppressive therapies confirmed their potential usefulness as a biomarker for the testing of novel drugs for RA in the clinic [1–3]. It is often the case that preclinical models for a given disease target do not exist or have been poorly developed. In this situation, efforts need to be directed toward understanding the underlying factors contributing to
disease pathogenesis through the development of animal models that recapitulate the hallmarks of human disease. These basic science exploratory studies can be difficult and time consuming. However, the value of these studies is realized when testing reveals that a therapy can modulate levels of the identified biomarkers and this correlates with a positive effect of the drug on disease endpoints in a clinically relevant animal model. Additional studies that investigate the effect of dose and regimen of a given therapy on modulation of these biomarkers are ultimately the key experiments that will inform on the dosing regimen to be used in human clinical trials. This information is critical, as it is a valuable component in the value chain of drug development. It is now clear that the effort spent on biomarker development in preclinical animal models at the front end of the drug development process facilitates a more informed approach to clinical trial design in humans. This ultimately translates into a higher success rate for therapies tested in the clinic.
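In its simplest form, the correlation analysis described above is a rank correlation between per-animal marker levels and blinded clinical severity scores. A minimal sketch follows; the cytokine values and scores are invented for illustration and do not come from the studies cited.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical mouse arthritis model: serum IL-6 (pg/mL) and blinded
# clinical severity scores (0-16 scale), one value per animal.
il6 = np.array([35, 48, 60, 22, 80, 55, 41, 70, 30, 66], dtype=float)
severity = np.array([3, 5, 7, 2, 11, 6, 4, 9, 3, 8], dtype=float)

rho, p = spearmanr(il6, severity)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```

A strong, significant rank correlation supports carrying the cytokine forward as a candidate biomarker; repeating the same test in treated versus vehicle arms checks whether therapy moves the marker and the disease together.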
STRATEGIES FOR DEVELOPING BIOMARKERS USING PRECLINICAL ANIMAL MODELS

Several strategies can be employed for the development of biomarkers using preclinical animal models. This often involves utilization of existing information about the pathogenesis of a given disease in relevant models. In most cases, however, it requires basic research designed to understand which biomarkers correlate with the development of disease and whether these biomarkers can be modulated by therapy in a meaningful way. When considering animal models for biomarker development, it is important to distinguish between the types of models available. The best-case scenario is to utilize (or develop) a model that mimics the major and minor factors that drive the development of disease in humans. The identification of these factors as the cause of disease in such models allows them to be monitored and correlated with disease severity. It then becomes important to determine whether these factors respond to standard therapy known to reduce disease in humans; if so, new, improved therapeutic agents can be sought through head-to-head testing against standard therapies. Most often, however, therapeutic agents are being developed for a disease for which there is no current treatment and/or no biomarkers of disease pathogenesis. In these situations, basic research is required for biomarker identification, and current approaches vary. A good starting point often involves thorough review of the existing scientific literature to understand which serum or tissue factors are increased or decreased in patients who manifest disease and whether these factors are predictive of disease or disease severity. In addition, it is important to understand whether these factors are a driving force in disease
pathogenesis. If this is the case, these factors could be good candidates as useful biomarkers to monitor the efficacy of potential therapeutic agents. As cited above, TNFα and IL-6 are cytokines that are generally increased in sera taken from patients with RA [4]. These proinflammatory cytokines appear early in the cascade of cytokines that ultimately leads to joint inflammation and the pathogenesis of this disease. The ability of drugs known to have a therapeutic effect on RA to decrease the levels of these cytokines in preclinical animal models demonstrated their value as biomarkers for the testing of new therapies. This highlights the usefulness of identifying serum biomarkers that play a central role in the pathogenesis of disease, as their levels typically increase in a manner that correlates with disease progression. In cases where there is no literature to support biomarker identification, the development of genomic and proteomic techniques has created the opportunity for de novo biomarker discovery. If good animal models are available, genome-wide microarray analysis of cells or tissues obtained during the onset or progression of disease could lead to the identification of genes that are up- or down-regulated. These data need to be confirmed with additional proteomic studies to validate the potential role of the identified gene products, while additional structure–function studies in animal models need to be done to determine whether these gene products correlate with disease progression and/or play a central role in disease pathogenesis. Finally, studies need to be performed to determine whether these biomarkers respond to therapies known to affect the disease in animal models. Once these basic research studies have been performed in animal models, the potential biomarkers identified need to be validated in humans. This can be done in clinical trials in the target patient population using similar molecular techniques. The biomarkers selected need to be amenable to measurement in easy-to-obtain clinical specimens such as serum or peripheral blood cells. Once validated in humans, these biomarkers can serve as a very important tool in the value chain, leading to the testing and evaluation of novel therapeutic agents.
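At its core, the microarray screen just described is a per-gene comparison of diseased versus control animals with a correction for multiple testing. The sketch below runs such a screen on simulated data; the gene count, group sizes, and spiked-in signal are all hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)

# Hypothetical expression matrix: 5000 genes x (6 diseased + 6 control animals),
# log2 intensities; the first 50 genes are truly up-regulated in disease.
n_genes, n_per_group = 5000, 6
control = rng.normal(8.0, 1.0, size=(n_genes, n_per_group))
disease = rng.normal(8.0, 1.0, size=(n_genes, n_per_group))
disease[:50] += 2.0   # spike in a known signal

t, p = ttest_ind(disease, control, axis=1)   # one test per gene (per row)

# Benjamini-Hochberg step-up procedure at a 5% false discovery rate.
order = np.argsort(p)
m = len(p)
passed = p[order] <= (np.arange(1, m + 1) / m) * 0.05
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
hits = sorted(order[:k].tolist())
print(f"{k} genes pass FDR 5%; first candidates for proteomic follow-up: {hits[:10]}")
```

The genes that survive correction are the ones the text says must then be validated at the protein level and tested for correlation with disease progression.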
VALUE OF BIOMARKER DEVELOPMENT AND USE IN CLINICAL TRIALS

The sequencing of the human genome has presented a bewildering array of new targets for drug discovery that are only now being sorted out through more sophisticated systems biology approaches to biological pathway analysis. Most of these novel targets have not been proven pharmacologically in humans, so early readouts of potential clinical effectiveness in human clinical trials through biomarker analysis provide much-needed confidence that the drug affects the intended target in vivo. Moving beyond biological proof of principle, the ideal biomarker is one that can be measured
more easily, more frequently, and more accurately in humans and predicts early response to treatment or is an early indicator of clinical benefit. Since clinical efficacy is often apparent only after extended study in relatively large populations, identifying the most sensitive early biomarker of improvement in pathophysiology, measurable in smaller populations in shorter trials, is an important research goal. Ideally, the search for the most sensitive and robust biomarker readout has been incorporated into earlier proof-of-efficacy animal studies, as it is almost too late for this investigative and experimental work to begin in the period immediately before filing the investigational new drug (IND) application and first-in-human clinical trials. Toward this end, most big pharmaceutical companies have incorporated biomarker teams (either separate-line functions or matrixed teams) into the late discovery research process. Incorporation of potential biomarkers should be considered as part of all traditional phase IIa trials, but is considered increasingly as part of early, nonregistration, exploratory translational medicine studies. Moving from proof of pathway perturbation to using the biomarker to establish the optimal dosing regimen is an important downstream consideration. Finely tuning dosing regimens to clinical outcome measures is rarely satisfactory, so assessing pharmacodynamic endpoints usually requires a sensitive and easily measured biomarker. Optimal biomarkers are those that can be sampled safely upon repeated measurement; they are traditionally thought of as biochemical markers in bodily fluids, especially serum, but can include serial noninvasive imaging studies. In some instances, limited tissue biopsy material can be procured for histochemical and immunohistochemical analysis.
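Anchoring a regimen to a pharmacodynamic endpoint typically means fitting a simple exposure–response model, such as an Emax model, to early biomarker data. The sketch below fits such a model to hypothetical phase I dose–biomarker data; all values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def emax_model(dose, e0, emax, ed50):
    """Simple Emax pharmacodynamic model: E = E0 + Emax * D / (ED50 + D)."""
    return e0 + emax * dose / (ed50 + dose)

# Hypothetical phase I data: dose (mg) vs. mean change in a serum PD marker.
dose = np.array([0, 5, 10, 25, 50, 100, 200], dtype=float)
effect = np.array([1.0, 8.0, 14.0, 25.0, 33.0, 38.0, 40.0])

(e0, em, ed50), _ = curve_fit(emax_model, dose, effect, p0=[1.0, 40.0, 20.0])
print(f"E0 = {e0:.1f}, Emax = {em:.1f}, ED50 = {ed50:.1f} mg")

# Under this model, 80% of Emax is reached at 4 x ED50, a simple anchor for
# phase II regimen selection long before clinical outcomes are available.
print(f"dose for 80% of Emax: ~{4 * ed50:.0f} mg")
```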
USE OF BIOMARKERS FOR PERSONALIZED MEDICINE AND PATIENT STRATIFICATION

Biomarkers in clinical trials are also valuable if they can predict which subjects are most likely to respond. This is most obvious in the selection of patients for targeted chemotherapy in subgroups of cancers, but any biomarker that can stratify patients into groups most likely to respond to treatment is potentially valuable. Examples from marketed products that use biomarkers for cancer responder selection based on target expression or genotype include HER2 (trastuzumab), c-kit (imatinib), epidermal growth factor receptor (EGFR; erlotinib, cetuximab), and the Philadelphia chromosome (imatinib). Biomarkers are also useful if patient subpopulations at risk for toxicity can be identified for exclusion from trials of efficacy. Examples from marketed products include the screening of patients prior to therapy for glucose-6-phosphate dehydrogenase deficiency (G6PD; dapsone), dihydropyrimidine dehydrogenase deficiency (DPD; fluorouracil), and ornithine transcarbamylase deficiency (OTC; valproic acid), since deficiency of these enzymes leads to severe toxicity. The promise of pharmacogenetics and of discovering genetic variants in
DNA that predict efficacy or toxicity has gone largely unfulfilled, but some examples in marketed products do exist. Genetic variations in N-acetyltransferase (NAT; isoniazid), thiopurine methyltransferase (TPMT; 6-MP, azathioprine), UDP-glucuronosyltransferase 1 (UGT1A1; irinotecan), and several liver cytochrome P450 drug-metabolizing enzymes [CYP2D6 (Strattera), CYP2C19, and CYP2C9 (warfarin)] have been proven to cause increased drug exposure, leading to toxicity, and are used for dosage adjustment [5]. Great strides have been made in recent years in automated microarray systems for surveying the entire genome for single-nucleotide polymorphisms and copy-number variation. Large numbers of patients are needed to uncover small genetic effects, so subtle differences in efficacy or rare idiosyncratic toxicologic reactions will seldom be uncovered in limited phase I/II or other exploratory trials. That said, large efforts are under way by many large pharmaceutical companies to bank DNA from larger phase III and postmarketing trials to conduct genome-wide association studies, with increasingly sophisticated statistical genetic analysis that may identify single-nucleotide polymorphisms or copy-number variants that correlate with treatment response or toxicity.
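A genome-wide association scan of banked DNA reduces, conceptually, to a per-SNP case–control test with a multiplicity-adjusted significance threshold, which makes the sample-size problem above concrete. The sketch below uses simulated allele counts; the cohort sizes, allele frequencies, and single spiked-in association are hypothetical.

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(2)

# Hypothetical allelic counts: 1000 case and 1000 control chromosomes per SNP.
n_snps = 10_000
cases_minor = rng.binomial(1000, 0.30, size=n_snps)
ctrls_minor = rng.binomial(1000, 0.30, size=n_snps)
cases_minor[0] = rng.binomial(1000, 0.45)   # one truly associated SNP

alpha = 0.05 / n_snps   # Bonferroni threshold for the whole scan
hits = []
for i in range(n_snps):
    table = [[int(cases_minor[i]), 1000 - int(cases_minor[i])],
             [int(ctrls_minor[i]), 1000 - int(ctrls_minor[i])]]
    _, p, _, _ = chi2_contingency(table)
    if p < alpha:
        hits.append((i, p))
print("genome-wide significant SNPs:", hits)
```

With only modest allele-frequency differences, little or nothing survives the corrected threshold at this sample size, which is why subtle effects are seldom uncovered in small phase I/II cohorts.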
BIOMARKER DEVELOPMENT IN PARTNERSHIPS WITH REGULATORY AGENCIES

The most widespread use of biomarkers in current practice is for internal decision making, although the aspirational goal remains the development of surrogate biomarkers that can substitute for clinically validated endpoints in registration studies. Surrogate biomarkers are typically endpoints in therapeutic intervention trials, although surrogates are sometimes used in natural history or epidemiologic studies. To assist pharmaceutical companies in this effort, the U.S. Food and Drug Administration (FDA) has published a guidance document for pharmacogenomic data submissions that defines the concepts of known valid biomarker, probable valid biomarker, and exploratory pharmacogenomic data for regulatory decision making. The guidance document [6] describes known valid biomarkers as those accepted by the scientific community at large to predict clinical outcome, probable valid biomarkers as those having predictive value that is not yet replicated or widely accepted, and exploratory biomarkers as those found in exploratory hypothesis generation, often in the context of whole-genome genomic, proteomic, or metabolomic analysis. Perhaps the most effective use of exploratory surrogate biomarkers in clinical drug development occurs in the context of the 21 CFR 314 and 601 Accelerated Approval Rule (1992), which allows registration on surrogate or nonultimate clinical endpoints. Fast-track designation is granted by the FDA, at the request of the sponsor, for a specific indication of a specific drug or biological program "to facilitate the development and expedite the review of new drugs that are intended to treat serious or life-threatening conditions and that
demonstrate the potential to address unmet medical needs." Accelerated approval is often based on less well established surrogate endpoints or clinical endpoints. Postmarketing data are then required to verify and describe the drug's clinical benefit and to resolve remaining uncertainty as to the relation of the surrogate endpoint on which approval was based to clinical benefit, or of the observed clinical benefit to ultimate outcomes. In summary, the continued development of biomarkers and true surrogate markers throughout the drug development value chain in the service of drug registration will remain an important activity, both to increase efficiency in the drug development process and to establish at the earliest time point in the pipeline whether a new chemical entity is likely to succeed or fail. Academic, pharmaceutical, and regulatory agency partnerships to develop biomarkers collaboratively within such initiatives as the FDA Critical Path Initiative [7] will help to "modernize the scientific process through which a potential human drug, biological product, or medical device is transformed from a discovery or proof of concept into a medical product" [8].
REFERENCES

1. Miyata S, Ohkubo Y, Mutoh S (2005). A review of the action of tacrolimus (FK506) on experimental models of rheumatoid arthritis. Inflamm Res, 54(1):1–9. 2. Moller B, Villiger PM (2006). Inhibition of IL-1, IL-6, and TNF-alpha in immune-mediated inflammatory diseases. Springer Semin Immunopathol, 27(4):391–408. 3. Rose-John S, Waetzig GH, Scheller J, Grotzinger J, Seegert D (2007). The IL-6/sIL-6R complex as a novel target for therapeutic approaches. Expert Opin Ther Targets, 11(5):613–624. 4. Kremer JM, Davies JM, Rynes RI, et al. (1995). Every-other-week methotrexate in patients with rheumatoid arthritis: a double-blind, placebo-controlled prospective study. Arthritis Rheum, 38(5):601–607. 5. Frueh FF (2006). Qualifications of genomics biomarkers for regulatory decision making. Presented at the Annual DIA EuroMeeting, Paris, Mar. 7, 2006. http://www.fda.gov/Cder/genomics/presentations/DIA_Eur4.pdf. 6. FDA (2005). U.S. FDA Center for Drug Evaluation and Research Guidance for Industry: Pharmacogenomic Data Submissions. http://www.fda.gov/cder/guidance/6400fnl.pdf (accessed Jan. 11, 2007). 7. FDA (2007). U.S. FDA's Critical Path Initiative. http://www.fda.gov/oc/initiatives/criticalpath (accessed Jan. 11, 2007). 8. Buckman S, Huang SM, Murphy S (2007). Medicinal product development and regulatory science for the 21st century: the Critical Path vision and its impact on health care. Clin Pharmacol Ther, 81(2):141–144.
PART VII LESSONS LEARNED: PRACTICAL ASPECTS OF BIOMARKER IMPLEMENTATION
27 BIOMARKERS IN PHARMACEUTICAL DEVELOPMENT: THE ESSENTIAL ROLE OF PROJECT MANAGEMENT AND TEAMWORK Lena King, Ph.D., DABT CanBioPharma Consulting, Inc., Guelph, Ontario, Canada
Mallé Jurima-Romet, Ph.D. MDS Pharma Services, Montreal, Quebec, Canada
Nita Ichhpurani, B.A., PMP MDS Pharma Services, Mississauga, Ontario, Canada
INTRODUCTION: PHARMACEUTICAL PROJECT TEAMS

The research-based pharmaceutical industry is one of the most complex industries in the world. Discovery and development teams constitute a well-established model for managing the complex, integrated activities that guide projects in pharmaceutical development. Organizational models and the composition of these teams vary between companies, depending on the size and business strategy of the company, but they are always multidisciplinary in nature. The discovery team is charged with discovering and developing new leads. This team may include scientists with expertise in disease models,
target identification, high-throughput screening, molecular biology, combinatorial chemistry, medicinal chemistry, and imaging. The development team is generally formed once a decision has been made to fund development of a new pharmaceutical lead for eventual registration. Development teams include preclinical disciplines (pharmacology, pharmacokinetics, and toxicology), pharmaceutical development (pilot and production chemists and/or biopharmaceutical expertise, formulation), regulatory affairs, clinical development, and commercial and marketing expertise. The development team often has a formal project management structure with a project team leader and a project manager. In smaller organizations, a project manager may also serve as the team leader. Project management serves a critical role in supporting and driving forward the drug development process for the drug candidate chosen. Particularly in recent years, the organizational structure of discovery and development teams has been changing to adapt to internal and external demands and to the decreasing productivity and increasing costs associated with pharmaceutical development. To meet these challenges, capitalize on new technologies, and improve the quality of decision making, companies are fostering collaborations between discovery and development scientists. Discovery teams increasingly include scientists with experience in DMPK (drug metabolism and pharmacokinetics), toxicology, clinical development, and project management to streamline or translate the research from discovery into development. Translational research is being proposed as the bridge between the perceived discovery–development silos and is emerging as a cross-functional discipline in its own right. As illustrated in Figure 1, some organizations have created an explicit biomarker or translational research unit that is represented on the development project team. Other organizations have adopted an implicit model in which biomarkers are part of the function of existing discovery and development units. A third option is a hybrid model that partners biomarker work in discovery and development without the creation and funding of a separate biomarker unit [1]. In addition to internal organizational restructuring, partnering between companies and outsourcing some parts of development (or, in the case of virtual companies, all of development) to contract research organizations (CROs) is also becoming more common. These partnerships or alliances cover a wide spectrum of transactions and disciplines. Formalized alliance structures with contracts, governance, and team-specific guidance may not be in place for pharmaceutical development teams. However, it has been suggested that even a drug development team that includes only partners from one company is an implicit alliance, and that a team including external partners is an explicit alliance [2]. For small discovery startup companies, CROs may provide not only the conduct of studies necessary to help new candidates progress through discovery and development but often also the essential development expertise, and will act in implicit partnership with the sponsor. Thus, the concepts and processes developed for alliances, and their success stories, are instructive for drug development teams [3].
[Figure 1 Translational research (TMed) organizational models. Implicit: TMed objectives are owned by pre-existing organizational entities (discovery and clinical within R&D). Explicit: a clearly identifiable organizational structure dedicated to TMed (a biomarker unit spanning biomarker discovery, development, and utilization). Hybrid: responsibilities for TMed are shared between existing and new organizational entities. (Adapted from ref. 1, with permission from Drug Discovery World.)]
In research and development, an alliance provides a venue for access to complementary knowledge, new and different ideas, and processes with shared risk and reward. The following core principles pertain to alliances:

1. Goals and outcomes are shared, with equitable sharing of risk and reward.
2. Participants have equal say in decisions, and each participant should have full management support.
3. Decision criteria must be based on what is best for the project rather than for individual participants.
4. The team operates in a culture of open and honest communication.

A recent survey of formalized research and development (R&D) alliances evaluated the contribution of alliance design (i.e., the number and type of partners, geographic proximity, and the R&D knowledge and capabilities of each partner) and alliance management (governance agreements and processes) to the success of the alliance. The results showed that the alliance could generally
be designed with appropriate and complementary expertise. The number of partners and the presence of competitors among the partners had no overall effect on the success of the alliance. Effective contractual provisions and governance had a positive effect on the measures of alliance success. However, the most pronounced positive predictors of success were the frequency of communication and how ambitious a project was; more ambitious projects were strong predictors of success [4]. The success factors identified for other R&D alliances apply also to successful project teams involved in pharmaceutical development (Table 1). Managing the project within budget and with appropriate resources is a major responsibility. For the pharmaceutical industry, the need for cost containment is providing compelling arguments for introducing high-value decision gates earlier in the development process. As illustrated in Table 2, biomarkers are one of the most important and tangible tools for facilitating translational research, moving data-driven decision making earlier into development, and guiding development to the most appropriate indication and patient subpopulation.
TABLE 1 Success Factors of a Drug Development Project Team

Predictors of Success of R&D Alliances | Successful Drug Development Team
Appropriate number and type of partners with complementary R&D knowledge and capabilities | "The right partners."
Effective contractual provisions and governance | "Good plan and good execution": a well-understood and management-supported plan.
Excellent and transparent communication | Consultative team interactions. The team leader and project manager guide the team to decisions that are "on time, within scope and budget." Trust develops between team members, along with the ability to work efficiently and effectively in a context of imperfect, incomplete, and unexpected information. Solutions are sought without attribution of blame. Innovative thinking and ideas are encouraged.
Ambitious projects | With development time lines spanning decades, these are inherently ambitious projects that require champions to obtain resources and management support.
TABLE 2 Biomarkers in the Pharmaceutical Development Cycle

Discovery/preclinical stage: defining mechanism of action; compound selection; PK/PD modeling; candidate markers for clinical trials; better prediction by animal models through translational research.
Phase I–IIa: demonstrating clinical proof of concept; dose and scheduling optimization; optimization of the patient population; applications in new therapeutic indications.
Phase IIb–III: minimizing trial sizes through accurate inclusion and exclusion; maximizing success rates by early confirmation of efficacy; potential for primary or secondary surrogate endpoints.
Phase IIIb–IV: differentiation of products in the marketplace through superior profiling of response; differentiation in subpopulations (gender, race, genetics); personalized medicine (co-development of a diagnostic).
Although these additional decision gates can be helpful for the team, the inclusion of biomarkers adds complexity to the traditional linear model of drug development, imposing a more iterative process for the project team to manage.
TEAM DYNAMICS: PHARMACEUTICAL PROJECT TEAMS

The development team has members with complementary technical skills. The management of the complex process of pharmaceutical development requires that these highly skilled knowledge workers engage, relate, and commit to a shared goal with defined milestones. These team interactions have to occur in a dynamic environment where (1) studies and experiments continually generate results that may fundamentally change the course of development, (2) management support and priority may be low compared to other projects, (3) team members may be geographically dispersed, and (4) resources for the conduct of studies and other activities often are not controlled directly by the team. At its best, the pharmaceutical development team provides an environment that is mutually supportive and respectful and that enables discussion of controversial issues. It is open to new ideas, agile and constructive in addressing new issues, and has goals and a strategy supported and understood by management. The project leader and the project manager should strive to create an environment as close to this ideal as possible. These are not
features specific to biomarker development, but as mentioned below, including novel biomarkers will add to the complexity of the development project and require additional attention and management due to the increased number of communication channels. Following are some of the general principles of a productive team environment:

1. Include and plan for a project kickoff meeting and face-to-face meetings.
2. Define the roles and responsibilities of each team member.
3. Operate in a spirit of collaboration with a shared vision.
4. Practice active listening.
5. Practice transparent decision making; determine how decisions will be made and the role of each team member in the process.
6. Encourage all team members to engage in debate about strategic issues.
7. Spend time and energy to define objectives.
8. Engage and communicate actively with management.
9. Decide but revisit which communication tools are optimal.
10. Recognize and respect differences.
11. Plan for adversity.
12. Plan for the expected as well as the unexpected.

There are a number of excellent books that discuss team dynamics and team management [5–7]. Pharmaceutical scientific organizations are beginning to offer continuing education courses in program management, and dedicated pharmaceutical training courses are available [8]. However, effective drug development project leaders and managers do not come out of university programs or training centers. The understanding of how all the complex pieces of drug development come together can best be learned through hands-on experience as a team member, team leader, or project manager. Typically, it takes many years of working within the industry to gain sufficient knowledge of the drug development process to be an effective project team leader or manager.
CONSEQUENCES OF BIOMARKERS IN PHARMACEUTICAL DEVELOPMENT STRATEGIES

Biomarkers are not new in pharmaceutical development. The interpretation, clinical significance, and normal variation of established biomarkers are generally well understood and widely accepted (discussed elsewhere in this book); their utility and significance have been evaluated and corroborated in many different research and clinical studies. However, novel biomarkers that are now emerging may be available at only a single or a few
vendors or laboratories. The assays may be technically and scientifically complex, results may be platform dependent, and limited data may be available on their normal variation and biological significance. Modern computational techniques allow for powerful multiplex analysis, binning of multiple parameters, and analysis of multiple biomarkers on an individual animal or patient basis. These capabilities provide exciting opportunities for advancing the science; however, there are few published or marketed tools for choosing, planning, implementing, and evaluating the risk–cost benefit of biomarkers in pharmaceutical development. The risk–cost benefit for the biomarker may also depend on the size of the company, its portfolio, and the financing model. Large pharmaceutical companies' investment decisions about including a novel biomarker strategy may be different from those of startup companies. A larger company may be able to offset the costs of biomarker development and implementation by applying the biomarkers to multiple projects and compounds. A startup company may include a biomarker throughout development despite uncertainty as to its ultimate utility; the company accepts the risk associated with new information emerging during the development process. By contrast, a larger company may require prior assessment of the value of including the biomarker in expediting development and improving development decisions. The integral role of biomarkers in decision making is discussed in Chapter 3 of this book, but this aspect of biomarkers also has implications for project management and teamwork within a drug development team. Following are some of the consequences of employing novel biomarkers or a unique biomarker strategy in pharmaceutical development:

• High levels of investment in infrastructure
• Multiple technological platforms and specialized expertise
• High demands on data management
• Increased complexity in study designs, sample logistics, and study data interpretation
• Uncertainty and ambiguity for strategic decision making
• Confidence and acceptance in translation and interpretation of results may be low
• Lack of precedent for using biomarkers in novel regulatory alternatives such as the exploratory IND
• Ethical issues: for example, tissue banking, privacy, and data integrity
• Evolving regulatory environment with changing requirements and expectations
PROJECT MANAGEMENT

The following systematic tools and processes available for project management [9,10] can be applied to the management of biomarker programs:
• Gantt charts
• Contracts and scope documents
• Meeting minutes
• Communication plans
• RACI (responsible, accountable, consulted, informed) charts
• Lessons-learned tools
• Milestone charts
• Decision trees
• Risk analysis logs
• PERT (program evaluation and review technique) charts
• Work breakdown structures
• Budget tracking
• Lean-sigma tools
Process mapping with the team is useful to ensure that all aspects of biomarker project management are well understood. The level of detail can range from Gantt charts (Figure 2), designed principally to track time lines, to program-wide integrated biomarker strategies; an example of the latter for development of an oncology candidate is illustrated in Figure 3. The development strategy has to include open discussion about the advantages and disadvantages of including the biomarker, recognizing that this often has to occur in the absence of clear and straightforward knowledge of the value of the biomarker across species and in a specific disease or subset of patients. Increasingly, project teams are expected to analyze the risk associated with different activities and to develop contingency plans far in advance of the actual activity occurring. Risks associated with various biomarkers (e.g., timely assay qualification, sampling logistics, patient recruitment, regulatory acceptance) have to be part of this analysis and contingency plan development. There are also numerous stakeholders beyond the development team who influence and may guide the development of the pharmaceutical:

• Sponsor (may include different departments with diverse goals/interests)
• Business analysts
• Investors (small companies) or shareholders
• Regulators
• CROs and/or biomarker labs
• Investigators
• Patients
• Patient support/interest groups

For the project management team, it is important to identify the stakeholders and evaluate their diverse and potentially competing priorities, particularly their perspectives on the benefit–risk profile of the new pharmaceutical. Novel targets with a drug producing an effect on a pharmacodynamic biomarker may be the last ray of hope for patients with serious or life-threatening diseases. Patients, and sometimes the physicians dealing with these diseases, may have a different and often highly personal benefit–risk perspective compared to regulators, investors, and sponsors.
[Figure 2 Sample Gantt chart of a drug development plan incorporating biomarkers.]
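A chart of the kind shown in Figure 2 can be produced with standard plotting tools. Below is a minimal sketch using matplotlib; the task names and dates are hypothetical and are not taken from the figure.

```python
import matplotlib.pyplot as plt

# Hypothetical development tasks: (name, (start_month, duration_months)).
tasks = [
    ("Biomarker assay development",    (0, 6)),
    ("Assay qualification",            (4, 4)),
    ("GLP toxicology with biomarkers", (6, 6)),
    ("Phase I (PK/PD biomarker arm)",  (12, 8)),
    ("Phase IIa proof of concept",     (20, 10)),
]

fig, ax = plt.subplots(figsize=(8, 3))
for row, (name, span) in enumerate(tasks):
    ax.broken_barh([span], (row - 0.3, 0.6))   # one horizontal bar per task
ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _ in tasks])
ax.set_xlabel("Months from project start")
ax.invert_yaxis()   # first task on top, as in a conventional Gantt chart
plt.tight_layout()
plt.savefig("biomarker_gantt.png", dpi=150)
```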
504
Is selected biomarker relevant to in vitro testing
Biomarker discovery effort
Is selected biomarker relevant to in vivo testing?
Information gathering on known markers (literature review, expert opinion gathering, etc.) YES
Is there a biomarker linked to (a) mechanism of action or (b) type of NO cancer targeted?
Associate cancer(s) type to your chosen mechanism of action Breast, colon, prostate, lung, brain
Therapeutic desired effect Cytotoxic or cytostatic? Adjuvant therapy? Side-effect attenuation? Primary tumor growth? Metastasis?
Define specific disease and Mechanism targeted Apoptosis Tumor invasion Angiogenesis Signal transduction Cell replication etc..
YES
NO
Figure 3
YES
Can you conduct another study with appropriate model and/or dose?
Oncology biomarker map.
Preclinical GLP-like validation
Biomarker is tied to mechanism of action Proceed to ADME/Tox studies and/or confirm in second cell line
Tox biomarkers selection ADME/Tox
Clinical GLP-like validation
Clinical Relevance Can preclinical toxicity biomarker(s) applied clinically? Proceed to clinical
Toxicity biomarker(s) Identification
Efficacy biomarker(s) monitoring
YES
Is monitoring of selected efficacy biomarker(s) relevant during Tox study?
Potential causes of observed inefficacy: Is it dose related? Is the cancer cell line appropriate in this animal model?
Can selected biomarker be applied preclinically and/or clinically ?
YES
Can you measure the biomarker? Choose appropriate technology
YES
Which animal model to use: Syngeneic or xenograph?
Animal testing results
Effect on cancer progression Biomarker(s) relevance Revisit disease mechanism
Effect on cancer progression Biomarker(s) relevance
Effect on cancer progression Biomarker(s) relevance Choose other marker
Effect on cancer progression Biomarker(s) relevance
Nonclinical discovery & efficacy (in vitro–in vivo)
Selection and characterization of efficacy biomarker(s)
Concerns about statistical significance, translation of the effect on a pharmacodynamic biomarker into clinical efficacy, and market share may carry little weight with patient advocacy groups in certain situations. Even concerns raised by safety biomarkers can be viewed as too restrictive: at best, perceived to delay access to potentially valuable medicines, and at worst, to stop their development. Sponsors and investors, eager to see hints of efficacy as early as possible, can sometimes become overly confident about positive biomarker results before statistical analysis, normal variability, or relationships to other clinical efficacy markers are available. This may be more common in small emerging companies that rely on venture capital to finance their drug development programs than in larger established pharmaceutical companies. CROs or laboratories performing the assays and/or statistical analyses may be more cautious in their interpretations of biomarker data, sometimes seemingly unnecessarily so, but are motivated by the need to maintain quality standards as well as a neutral position. Consensus and communication problems are more likely to occur when these perspectives are widely disparate. Although it may be difficult at times, it is essential to achieve common ground between stakeholders for effective communication.
CHALLENGES ASSOCIATED WITH DIFFERENT TYPES OF BIOMARKERS

The definition of biomarkers proposed by the National Institutes of Health working group, "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacological responses to a therapeutic intervention" [11], provides a basis for categorizing biomarkers into efficacy, patient stratification, and safety biomarkers. There are different but overlapping challenges for the team, depending on the category of biomarker.

Efficacy Biomarkers

Efficacy biomarkers range from pharmacodynamic (PD) biomarkers, markers quantifying drug–target interaction, and markers reflecting the underlying pathology of the disease to those with established links to clinical outcomes that are accepted as surrogate endpoints for regulatory approval. An effect of a pharmaceutical in development on a biomarker associated with efficacy is guaranteed to generate enthusiasm and momentum in the team. PD biomarkers have a long history in pharmaceutical development and form one of the cornerstones of hypothesis-driven approaches to drug discovery and development. These biomarkers are commonly generated as part of the discovery process. The biomarker may fulfill multiple key criteria in in vitro or animal models: (1) it may be used to characterize the pharmacology
models; (2) it may be used in knock-out or knock-in genetic models to further validate the target; and (3) it may demonstrate a characteristic PD/pharmacokinetic (PK) relationship with the drug under development. The changes reflecting underlying pathology range from largely unknown to those clearly indicative of potential market impact. An example of the latter is atorvastatin administration resulting in decreases in serum triglycerides in normolipidemic subjects in clinical pharmacology studies [12,13]. This is more the exception than the norm. Typically, interpretation of the clinical significance and potential market impact of a biomarker is less certain, particularly if the pharmaceutical is (1) acting by a novel mechanism of action and (2) targeting a chronic progressive disease where disease modification rather than cure is the anticipated outcome. The rationale for including PD biomarkers is generally easy to articulate to management, and particularly for smaller companies, these biomarkers may be essential for attracting investment. While enthusiasm and willingness to include these types of markers are generally not the issue, they are not without significant challenges in implementation and interpretation in the pharmaceutical development paradigm:

• Technical aspects
  • Stability of the biomarkers
  • Technical complexity of the assay
  • Assay robustness, sensitivity, specificity
  • Throughput of the assay
• Biological samples
  • Access to matrices that can or should be assayed
  • Sample collection volume or amount, and timing in relation to dosing
  • Feasibility, cost, and resolution capabilities of imaging modalities for interactions with targets in the central nervous system, testis, poorly vascularized tumors, etc.
• Data interpretation
  • Normal values; inter- and intraindividual variability
  • Values in disease versus healthy conditions
  • Diurnal and environmental effects in animals
  • Effects of diet, lifestyle, and concomitant medications in humans
  • Impact on development of no change or unexpected changes in biomarkers in the continuum from discovery to clinical testing
Patient Stratification Biomarkers

The use of patient stratification biomarkers in pharmaceutical development and medical practice forms the foundation of what has been called personalized, individualized, or stratified therapy. Patient stratification biomarkers focus on patients and/or underlying pathology rather than on the effect of the pharmaceutical on the target. For small-molecule drugs, genotyping for polymorphic drug-metabolizing enzymes responsible for elimination or activation/inactivation of the compound is now an established practice in clinical trials. Results about potential effects attributed to certain genotypes may be reflected in labeling recommendations for dose adjustments and/or precautions about drug–drug interactions [14]. A priori determination of genotype for polymorphic metabolizing enzymes is now included on the label for irinotecan [15] and was recently added to the label for warfarin [16] to guide selection of the dosing regimen. Targeted therapy in oncology is the best-established application of patient stratification biomarkers. The development of Herceptin, the monoclonal antibody trastuzumab, with an indication restricted to breast tumors overexpressing the HER2/neu protein [17], is a clinical and commercial success story for this approach. Oncology indications also include examples of the potential of serum proteomics to classify patients according to the highest potential for clinical benefit. For example, Taguchi et al. [18] used matrix-assisted laser desorption ionization (MALDI) mass spectrometry (MS) analysis to generate an eight-peak MALDI MS algorithm of unidentified proteins to aid in the pretreatment selection of appropriate subgroups of non-small cell lung carcinoma patients for treatment with epidermal growth factor receptor inhibitors (erlotinib or gefitinib). As illustrated by these examples, patient stratification biomarkers encompass a wide range of technologies, including algorithms of unknown proteins. The challenge for the development team is to understand and identify the potential for including patient stratification biomarkers either as part of or as the major thrust of the development process. This is often a major challenge, since the technologies may lie outside the core knowledge areas of the team members, making it difficult to articulate and discuss their value within the team and to communicate it effectively to management. These challenges can be particularly pertinent for some of the "omics" technologies, which can be highly platform dependent and rely on complex statistical methodologies to reduce large data sets to principal components. The results often bear little intuitive relationship to the underlying disease pathology, which may be one reason that these powerful methodologies are not used more commonly. Some considerations for including patient stratification biomarkers are summarized as follows:

• Strategic issues
  • What is the purpose of including the patient stratification biomarker?
  • Will it be helpful in reaching go/no go decisions?
  • Is it required for registration purposes?
  • What will the implications be for marketing and prescribing practices?
• Practical considerations
  • Is the biomarker commercially available and accessible?
  • If a diagnostic biomarker is essential to the development of the pharmaceutical, should co-development be considered?
  • Are there IP and marketing restrictions?
  • What are the implications of the biomarker technology for the conduct of the clinical trial?

Safety Biomarkers

Considerations of safety during development are paramount; not surprisingly, safety is one of the most regulated aspects of pharmaceutical development. Safety biomarkers have spurred interesting and innovative regulatory and industry initiatives and collaborations to develop and qualify novel biomarkers. Examples are the guidance of the U.S. Food and Drug Administration (FDA) for voluntary submissions of genomic data [19] and partnerships among government, academia, and industry for qualification of safety biomarkers [20]. Data qualifying the interpretation and significance of changes in safety biomarkers are needed to guide pharmaceutical development as well as the evaluation of risk to patients or healthy volunteers in clinical trials. The purposes of safety biomarkers in clinical trials can be (1) to exclude patients at risk of developing adverse effects, (2) to increase the sensitivity of adverse event monitoring, and (3) to evaluate the clinical relevance of toxicity observed in preclinical studies. Introducing novel or more uncommon biomarkers into a development project to address any of these aspects will not be embraced universally. There may be concerns not only about the added testing burden but also about the sensitivity and specificity of the biomarker, its relevance, and its relationship to well-established biomarkers. Nevertheless, including novel or uncommon biomarkers may be a condition for the conduct of a clinical trial as mandated by either regulatory bodies or institutional review boards. For example, there may be requirements to include sperm analysis in healthy volunteers or to adapt experimental genotoxicity assays to humans in order to address effects observed in preclinical safety studies on the male reproductive tract or in genotoxicity evaluation. These requirements will directly affect the conduct of the trials, the investigator's comfort level with the assay, and the ability to communicate the significance of baseline values and any changes in the biomarkers to the clinical trial participant. Novel and uncommon biomarkers will also have strategic and practical implications for the overall development program:

• Strategic issues
  • Will including the safety biomarker be a requirement for the entire pharmaceutical development program?
  • Are the efficacy and/or PK properties sufficiently promising to warrant continued development?
  • Can the identified safety concern be managed after approval?
• Practical considerations
  • What are the implications for the clinical trial program, the locations of trials, and links to testing laboratories?
  • Will additional qualification of the biomarker assay be required as development advances and for regulatory approval?
MANAGEMENT OF LOGISTICS, PROCESSES, AND EXPECTATIONS

The logistical aspects of biomarker management are often a major undertaking, particularly when they involve multisite global clinical trials (Figure 4). In clinical trials, specialized or esoteric assays, sometimes involving very few samples, may require processes and systems that are not commonly in place, in contrast to high-throughput analytes with standard operating procedures (SOPs) and established contract service providers. In addition, managing the logistics requires recognition and integration of the different expertise, experience, expectations, culture, and mindset of each discipline within the team. Regulations and guidelines governing the different disciplines, as well as generally accepted practices, have a major impact on the culture and mindset. Transitioning from the less regulated discovery process into development, which is more accustomed to good laboratory practices (GLPs), good manufacturing practices (GMPs), and good clinical practices (GCPs), can be a major cultural shift. The regulatory requirements will vary depending on the purpose of the biomarker. Safety biomarkers will require a high degree of formalized regulatory compliance, whereas there are no requirements for GLP compliance for pharmacodynamic biomarker assays.
Figure 4 Sample logistics. (Schematic: blood, urine, and tissue biopsy samples are processed into matrices such as serum, plasma, white blood cells, and non-coagulated or heparinized blood; stabilizer is added or samples are frozen as required; and aliquots are distributed to laboratories A through G for LC/MS/MS assay of the parent drug and metabolites, LC/MS/MS assay of a pathophysiological substrate and product, target enzyme assays, a stimulated cell assay, a pharmacogenomic assay, clinical chemistry, urinalysis, and future proteomics.)
The question of whether or not to conduct the assay under GLP regulations will need to be considered for all types of biomarkers. Amid the extensive project management coordination required to include novel biomarkers in clinical trials, the long-term vision for program direction can become lost. The long-term view of the impact of the biomarker results on the pharmaceutical product under development, as well as guidance for further discovery efforts, should be considered. Questions about the impact of different outcomes have to be considered from both a strategic and a scientific perspective. For example, in which preclinical and clinical studies should the biomarker be included if normal values and variations are largely unknown? What will be the impact on future development of no change or unexpected effects in toxicology studies or in a first-in-human study? If there are changes to the assay or new technologies become available, should they be included to provide additional functional information about the target? Particularly when limited information is available about normal values and variation, adding parameters may not be of value for the decision-making process. It may be tempting to include a large number of biomarkers simply because they are available. However, the increases in cost, complexity of the studies, and risk of erroneous results should be weighed carefully against the value added at each step of the development process. The evaluation of whether to include a biomarker in a drug development program may not be straightforward. There is no doubt that biomarkers have proven valuable in pharmaceutical development to provide guidance for dose selection in early clinical studies, to enhance understanding of disease mechanisms and pathobiology, and to support decision making and strategic portfolio considerations. Success stories for the use of biomarkers in translating from discovery concept to clinical development have been published. One example is bortezomib, the first proteasome inhibitor, approved for treatment of multiple myeloma. The proteasome is a key component of the ubiquitin–proteasome pathway involved in the catabolism of proteins and peptides as well as in cellular signaling [21]. Ex vivo determination of proteasome inhibition was used in discovery and continued through toxicology and early- and late-stage clinical studies. Although not a clinical endpoint, proteasome inhibition provided valuable information that the drug interacted with the intended target [22]. However, in contrast to well-publicized success stories such as this one, it is more difficult to obtain information and find examples of decisions taken when a PD biomarker did not yield the results expected. Why and where in the continuum of development did the PD marker fail, and what were the consequences of its failure? Strong confidence in the assay and the mode of action of the biomarker, as well as expectations of enhanced effects in patients compared to healthy volunteers, may result in progression of the biomarker despite lack of apparent interaction with the target. Decisions based on biomarkers require making a judgment call taking into account all
the data available. For the development team, the consequences of no effect or an unexpected effect of the drug on the PD marker should be considered and debated openly before the relevant studies are initiated. Questions that should be discussed and understood by the team when including biomarkers are as follows:

• Cost and logistics
  • What are the costs and logistics associated with including efficacy biomarkers?
  • What are the costs associated with not including a biomarker (i.e., progressing a compound without use of a biomarker or panel of biomarkers)?
• Confidence in the biomarker
  • Will the team/management accept go/no go decisions on the basis of the results of the efficacy biomarker?
  • How many patients are required to obtain meaningful results and/or to demonstrate response?
  • What degree of change or lack of progression of disease is considered acceptable?

These questions may appear relatively simple to answer, but it will take courage and conviction from the team to make decisions to discontinue development based, or at least partially based, on results of unproven efficacy biomarkers. There may be pressure from patient groups or specific patients for access to the drug or for continuation of clinical development if the drug is perceived to be beneficial. Management may be reluctant to accept the decision if significant resources have been spent in development.

SUMMARY

The successful launch of a novel pharmaceutical product represents the culmination of years of discovery and development work driven by knowledgeable people passionate about their project and the pharmaceutical. The development process will be challenging, will require perseverance, and cannot be successful without coordination and teamwork. Novel biomarkers, organizational structures with multiple stakeholders, and the need to bring data-driven decision-making strategies earlier into development make the paradigm more complex and place higher demands on team communication and project coordination. Effective program leadership, together with formalized program management and communication tools and processes, facilitates this endeavor. As biomarkers in discovery and development are here to stay, more attention will be paid to best practices for project management and teamwork, as these roles are increasingly recognized to be essential for successful pharmaceutical development.
REFERENCES

1. Hurko O (2006). Understanding the strategic importance of biomarkers for the discovery and early development phases. Drug Discov World, Spring, pp. 63–74.
2. Ahouse J, Fontana D (2007). Negotiating as a cross-functional project manager: lessons from alliance management. Cambridge Healthtech Institute: http://www.healthtech.com/wpapers/WP_pam.asp (accessed Oct. 4, 2007).
3. Bamford JD, Gomes-Casseres B, Robinson M (2002). Mastering Alliance Strategy: A Comprehensive Guide to Design, Management, and Organization. Jossey-Bass, New York.
4. Dyer JH, Powell BC, Sakakibara M, Wang AJ (2006). Determinants of success in R&D alliances. Advanced Technology Program NISTIR 7323. http://www.atp.nist.gov/eao/ir-7323/ir-7323.pdf (accessed Oct. 4, 2007).
5. Means JA, Adams T (2005). Facilitating the Project Lifecycle: The Skills and Tools to Accelerate Progress for Project Managers, Facilitators, and Six Sigma Project Teams. Wiley, Hoboken, NJ.
6. Parker GM (2002). Cross-Functional Teams: Working with Allies, Enemies, and Other Strangers. Wiley, Hoboken, NJ.
7. Wong Z (2007). Human Factors in Project Management: Concepts, Tools, and Techniques for Inspiring Teamwork and Motivation. Wiley, Hoboken, NJ.
8. Tufts Center for the Study of Drug Development. http://csdd.tufts.edu.
9. Atkinson AJ, Daniels CE, Dedrick RL, Grudzinskas CV, Markey SP (2001). Principles of Clinical Pharmacology. Academic Press, San Diego, CA, pp. 351–364.
10. PMI Standards Committee (2004). A Guide to the Project Management Body of Knowledge (PMBOK Guide), 3rd ed. Project Management Institute, Inc., Newtown Square, PA.
11. Biomarkers Definitions Working Group (2001). Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther, 69:89–95.
12. Cilla DD, Gibson DM, Whitfield LR, Sedman AJ (1996). Pharmacodynamic effects and pharmacokinetics of atorvastatin after administration to normocholesterolemic subjects in the morning and evening. J Clin Pharmacol, 36:604–609.
13. Posvar EL, Radulovic LL, Cilla DD, Whitfield LR, Sedman AJ (1996). Tolerance and pharmacokinetics of single-dose atorvastatin, a potent inhibitor of HMG-CoA reductase, in healthy subjects. J Clin Pharmacol, 36:728–731.
14. Huang S-M, Goodsaid F, Rahman A, Frueh F, Lesko LJ (2006). Application of pharmacogenomics in clinical pharmacology. Toxicol Mechanisms Methods, 16:89–99.
15. Pfizer Inc. (2006). Camptosar (irinotecan) label. http://www.pfizer.com/pfizer/download/uspi_camptosar.pdf (accessed Oct. 4, 2007).
16. Bristol-Myers Squibb Company (2007). Coumadin (warfarin) label. http://www.bms.com/cgi-bin/anybin.pl?sql=PI_SEQ=91 (accessed Oct. 4, 2007).
17. Genentech (2006). Herceptin (trastuzumab). http://www.gene.com/gene/products/information/oncology/herceptin/insert.jsp (accessed Oct. 4, 2007).
18. Taguchi F, Solomon B, Gregorc V, et al. (2007). Mass spectrometry to classify non-small-cell lung cancer patients for clinical outcome after treatment with epidermal growth factor receptor tyrosine kinase inhibitors: a multicohort cross-institutional study. J Natl Cancer Inst, 99:838–846.
19. Goodsaid F, Frueh F (2006). Process map for the validation of genomic biomarkers. Pharmacogenomics, 7:773–782.
20. Predictive Safety Testing Consortium. http://www.c-path.org.
21. Glickman MH, Ciechanover A (2002). The ubiquitin–proteasome proteolytic pathway: destruction for the sake of construction. Physiol Rev, 82:373–428.
22. EPAR (2004). Velcade (bortezomib). http://www.emea.europa.eu/humandocs/PDFs/EPAR/velcade/166104en6.pdf (accessed Oct. 4, 2007).
28

INTEGRATING ACADEMIC LABORATORIES INTO PHARMACEUTICAL DEVELOPMENT

Peter A. Ward, M.D., and Kent J. Johnson, M.D.
The University of Michigan Medical School, Ann Arbor, Michigan
INTRODUCTION

Historically, there has been somewhat of an arm's-length relationship between researchers in academic medical centers and pharmaceutical companies. Traditionally, academic researchers often have taken an "ivory tower" approach: that their research is basic in nature and not meant necessarily to relate to the development of new drugs [1]. However, this attitude has undergone a major change over the past three decades. Academicians in California, Boston, and elsewhere have taken an entrepreneurial approach through the formation of biotech firms such as Genentech. However, it was not until fairly recently that most large pharmaceutical companies actively began developing collaborations with basic scientists in academia rather than relying almost exclusively on their internal research and development programs. In the following discussions we look at reasons for this change, ways in which these collaborations can be fostered, and pitfalls associated with this association. Bridging between academia and the pharmaceutical industry has increased steadily over the past 30 or more years, and this trend has accelerated
significantly with time. Movement in this direction has been based on fundamental scientific discoveries at the universities. These institutions generally lack the background, abilities, and resources to transform such discoveries into diagnostic and/or therapeutic modalities. Large pharma has the opposite problem: namely, more limited fundamental discovery research efforts and less emphasis on scientific research into basic disease processes, but extensive experience in drug design and development and in clinical trial design and conduct, and the monetary resources that ultimately result in a successful commercial product. In this chapter, we discuss the history and development of strong ties between academic scientists and the pharmaceutical industry. This includes discussions of the advantages of these collaborations and how they are usually structured. In addition, there are examples of these collaborations at both the basic science discovery stage and later in clinical development. This includes an example of scientists (Arul Chinnaiyan, George Wang, Dan Rhodes) who have been involved in fundamental discoveries regarding antigenic epitopes and autoantibodies in prostatic cancer patients, have taken these observations further with the development of spinoff companies, and have projected plans for the future.
HISTORICAL PERSPECTIVE

Historically, academic researchers have taken a research approach that stressed National Institutes of Health (NIH)-sponsored basic research. These programs have targeted disease characterization or processes but were not directed at a specific therapy for a given disease. In fact, many academic centers still differentiate between NIH research dollars and industry funds when ranking academic departments and consider NIH funding to be the "gold standard" in determining the success and reputation of an investigator. However, several forces are changing this somewhat elitist perspective, as described below. A major factor in this change has been the change in federal government funding levels. NIH dollars are much harder to come by, in particular the R01 primary research award. In fact, NIH funding for grants has diminished over the last few years, and several institutes of the NIH are now funding a much smaller percentage of research grants. Currently, only the top 8% of research applications submitted are funded (NIH Office of Extramural Research). Funding for the NIH has increased only slightly, with the fiscal year 2008 appropriation being increased only 0.9%, to $29.45 billion, compared to $29.3 billion in fiscal year 2007 (S. Forrest, Annual Report on Research and Scholarship, FY 2007 Financial Summary, University of Michigan). This amount has not kept up with inflation and has resulted in less funding for external investigators. For example, in fiscal year 2007 extramurally funded R01 research grants decreased to 27,850 from 28,192 in 2006 (NIH Office of Extramural Research). Furthermore, the total amount of money given out in
2007 dropped to $10.04 billion from $10.12 billion in 2006. The overall impact of the reduction in NIH funding from 1995 to 2007 has been a decline in real dollars available to researchers of approximately 10% (T. Mazzaschi, Research Funding Trends: Surviving an NIH Recession, presented at the 2007 AMSPC Annual Meeting). This cutback in funds has been devastating to many investigators and has even resulted in several laboratory closings. In addition, the reduction in federal funds has had a severe impact on the number of young scientists, particularly physicians, deciding on a research career. There has also been a change in focus by the U.S. government and the NIH. Today, the NIH requires that grants have a "translational component" that provides a direct connection between the research hypothesis and a clinical disease parameter. For example, a researcher studying a specific cytokine in an inflammatory process would include a specific aim in the grant to provide evidence that the cytokine is involved in a human disease. As would be expected, this has resulted in a major change in how basic investigators conduct their research projects. Additionally, it has mandated collaborations with clinicians and provided direct correlations with disease processes. The reduction in available funding, and the closer linkage to translating research findings into disease diagnosis or treatment, has led to renewed interest in academic–pharmaceutical company collaborations. This shift has been encouraged further by the increasing need of biotechnology and small-molecule pharmaceutical companies to address more complex and chronic diseases, incorporate new technologies into their drug development programs, and enhance their molecular understanding of disease. Currently, most academics do not have a clear idea of how to solicit research support from such companies, and thus it is still relatively unusual for investigators to get significant industry funding for preclinical research. Many scientists within the pharmaceutical industry also have limited understanding of operations within universities and how to identify and access basic research programs that might be of benefit to drug development. The industry perspective is described in greater detail below, along with examples of research collaborations that have been successful.
THE BIOTECH EXPERIENCE

Traditionally, biotech companies have largely been created by entrepreneurial academic scientists who want to commercialize their research findings for the development of drugs. Historically, these companies have been funded by venture capital funds or other sources rather than by the pharmaceutical industry. Such funding provides for the early-stage development of compounds but usually does not allow for the high expenses associated with full clinical development. Phase II clinical proof-of-concept and phase III pivotal safety and efficacy clinical trials involve significantly larger numbers of patients,
clinicians, institutions, and amounts of money than even well-funded startup companies have access to through normal funding sources. To further complicate the current model, large pharmaceutical companies are most interested in new molecules that have demonstrated efficacy and safety in humans (phase IIb or early phase III). Given a choice, the companies are substantially less interested in investing in early-stage preclinical development, due to the dramatically higher risk of attrition from toxicity or lack of effect against the targeted disease. Once a compound shows real promise in patients, a number of pharmaceutical companies will be interested in purchasing the rights to the compound or may invest in co-development of a drug. This model has been quite successful in bringing new drugs to market that are not part of a pharmaceutical company's internal portfolio. However, historically, it has not allowed for the consistent support of preclinical research. Academic researchers are often placed in the unenviable situation of having a molecule with high potential for an unmet medical need but insufficient funds to develop the molecule to a stage where outside organizations see sufficient commercial potential to invest. The unfortunate outcome all too often is that the new approach either languishes or the academic institution assigns rights to the compound at a relatively low price.

The Pharmaceutical Approach

Historically, pharmaceutical companies have relied primarily on their internal discovery scientists to develop most of their new drugs. They have also purchased the rights to compounds developed by biotech firms that show promise in clinical trials. Thus, pharmaceutical companies previously have not been major supporters of external preclinical development, including that conducted in academic laboratories. Pharma often has supported targeted programs in academic medical centers, such as postdoctoral fellowships and occasional research laboratories, but this is not a widespread or reliable long-term funding mechanism for most universities. The nature and culture of academic and industrial groups can also inhibit successful interactions. The priorities, working processes, reward systems, and even ways of communicating differ significantly between the two institutional types. Recently, however, there has been greater movement to support preclinical research programs in academic laboratories. Increased costs of drug development, high failure rates of new drug moieties, greater acceptance of outsourcing of many roles, and recognition that the next great drug discovery could come from many fields have increased the pressure on pharmaceutical companies to look beyond their own walls. This shift is further supported by mergers within the industry and the need for new drug pipelines larger than can reasonably be achieved based solely on internal resources. For a company to prosper, it is becoming essential to explore all available sources of new medical treatments. Also, new drugs in late-stage development are subject to intense competition and great cost. This last fact is
resulting in more attention to compounds in early clinical trials (phases I and IIa) or even late-stage preclinical evaluations. Another factor supporting the movement to fund academic laboratories is that most companies now recognize that external agreements and collaborations are often more cost-effective than developing and supporting the same expertise internally. Examples of outreach by companies into academic institutions or private organizations include companies such as Sandoz and Pfizer partnering with the Scripps Research Institute in La Jolla, California. In fact, today, many of the investigational laboratories in pharmaceutical companies have closed. There is a belief that outsourcing these studies provides expertise without the need to maintain large in-house programs.

Advantages of Collaboration

There are a number of benefits associated with pharmaceutical company support for academic laboratories. For the company, the academic laboratory provides cost-effective access to expertise and established research programs that do not have to be duplicated in-house. Since most critical observations in biology, including mechanisms of disease processes, are first elucidated in academic research laboratories, support of this process by companies provides access to valuable basic research that cannot be duplicated in industry without tremendous internal investment. Such collaborations also allow pharmaceutical scientists in-residence stays in academic research laboratories. Furthermore, a pharmaceutical company investing in a faculty member's research gains a deeper understanding of the benefits and limitations of the research than the average person in industry who is looking for licensing opportunities. This often translates into earlier recognition of new therapeutic opportunities and the first right to decide on licensing a new product from the academic laboratory. For academic researchers, this funding provides an important source of research support in addition to the NIH. Industry funding also often provides for the purchase of expensive equipment that would otherwise not be available to the investigator. Through academia–industry collaborations, many university scientists can gain access to specialized instruments or reagents that are difficult or impossible to obtain within their institution. Additionally, the goal of taking an idea to an actual therapy can be realized.

Potential Pitfalls of Collaboration

For a research investigator in a university setting, it is important to keep in mind that several conditions must be met for research collaborations with pharmaceutical companies to be successful. First, it is critical that the agreement be structured as a research collaboration and not as a testing service. Although there are situations where short-term fee-for-service testing may be desirable, that is rarely the case for an individual researcher. Industry has
contracts with outside laboratories, such as toxicology and medical reference laboratories, where data ownership and intellectual property concerns are more efficiently defined and negotiated than within a university. This model generally does not work in an academic research collaboration, because in that situation the investigator is responsible for the analysis of the data and owns the data, usually with the intent of placing the information in the public domain through publication. This is a critical component of successful research collaboration, since ideally the companies are interested in academic expertise in evaluating the data. The quality of the research conducted will also be significantly richer when the interaction is a joint research collaboration. By structuring the agreement to allow both parties to benefit, advance the research, and look for options that neither could achieve alone, academic–industry programs become valuable tools for addressing unmet medical needs. Hand in hand with control over the data is the issue of publishing and intellectual property. When employing a contract laboratory, the company owns all data and intellectual property pertaining thereto. This would not be acceptable to the institution in an academic environment. Most institutions require at least part of the intellectual property that comes from these collaborations, if not all of it. This is often a major sticking point between academic centers and pharmaceutical companies. Ideally, there is an agreement in place whereby the university has rights to patent data that come from the research collaboration, while the company has access to utilize these data in partnership with the institution. Another major issue that goes hand in hand with intellectual property is the right to publish findings in peer-reviewed journals. In universities it is fundamental that the right to publish be part of any agreement made between research investigators and for-profit companies. For many companies this can be a difficult hurdle, since they are concerned about keeping information confidential, at least until a patent is issued. Usually, what is done is to have a clause written into the contract defining when findings can be submitted for publication. This usually means a delay of a few months (but not much longer) before the academicians can submit their report for peer review.
COLLABORATIONS BETWEEN PHARMACEUTICAL COMPANIES AND UNIVERSITIES IN PROMOTING BASIC RESEARCH

Traditionally, pharmaceutical companies have concentrated their external investments on compounds that have completed the early stages of the drug development process and usually already have some patient information. However, increasingly, the pharmaceutical industry is funding collaborations with universities in support of early-stage research. Historically, this role has been dominated by government and venture capital funding. The combined factors of cutbacks in NIH and venture capital funding, and risk aversion to early-stage funding by pharmaceutical companies, are leading to an emphasis
on supporting some early research for promising academic investigators and departments. This support of science at the "grassroots" level also provides companies with research capabilities that would not be possible in internal drug development laboratories and allows these companies to embrace the nimble style of biotechnology companies. A recent example is the collaboration formed between Pfizer and the University of California at San Francisco (UCSF). This collaboration will not only fund narrowly defined research projects but will allow for early proof-of-concept funding for ideas proposed by the academic researchers (Bernadette Tansey, San Francisco Chronicle, June 10, 2008). Thus, this new model of collaboration between academic researchers and a major pharmaceutical company both funds specific projects and utilizes the academic scientists' expertise in developing new ideas and projects. The company can utilize the intellectual capital of these researchers, many of whom are acknowledged leaders in their particular field of study. From the perspective of the university, the individual scientists receive financial support and the ability to see accelerated progression of their discoveries toward medical implementation.

The University of Michigan Experience with Pharma and Joint Research Projects

Like other research institutions, the University of Michigan has had longstanding collaborations with pharmaceutical and biotechnology companies. In general, these companies have provided support for lecture series, specialized symposia, specialty meetings, and consulting. In addition, there has been support for individual investigators. One example is that of the authors of this chapter and the Department of Pathology at the University of Michigan, who have had research collaborations with pharmaceutical companies: primarily Warner-Lambert and subsequently Pfizer, as well as other companies to a lesser extent. These research collaborations have focused on areas that provide value to both institutions. One area of collaboration has been the development by the academic researchers of experimental disease models that can be utilized by pharmaceutical scientists early in the drug discovery process. Examples include models of lung, skin, and gastrointestinal injury. The use of experimental in vitro and in vivo models allows pharmaceutical scientists to evaluate new compounds of interest in models not readily available in their laboratories and to determine activity relative to a particular aspect of a disease or adverse effect. These collaborations have been used extensively to evaluate efficacy and toxicity in the discovery stage of drug development. Another area of collaboration has involved studies comparing animal models with the human diseases that are targeted. The collaborations that these companies have with academic centers such as the University of Michigan allow comparison between the animal findings for molecules of interest and what is seen in the human disease. This is very valuable in determining how
relevant the preclinical animal models and toxicology studies are in predicting what will happen in humans. Examples include models of inflammatory vascular and lung injury, as well as sepsis. Another area of collaboration has been the identification of biomarkers of disease activity. Pathologists have unique training that bridges basic and clinical science dealing with mechanisms of disease, with specific interest in developing tests for clinical laboratories to diagnose specific diseases. In this regard, our laboratories have been able to develop high-throughput technologies such as antibody arrays with support from these companies that would otherwise not be possible because of cost. This support allows the identification of new biomarkers of specific diseases, as well as of toxicity. This information is useful not only to the pharma companies but also to the medical community at large. Examples of these collaborative activities are cited below, where we show that this technology has great potential to identify new markers of disease in humans [2]. The fact that pathologists typically are responsible for running clinical laboratories has also been of value to the pharmaceutical companies, since the companies may have on-site phase I clinics that evaluate human specimens. Members of our department have provided supervision of laboratory operations and support for College of American Pathologists certification. Our clinical expertise has also been utilized on specific issues that arise in phase II and III testing. Finally, and very important, our colleagues in the pharmaceutical industry have provided support for joint postdoctoral programs. This allows for the funding of postdoctoral scientists to work both on basic research studies in the laboratories at the university and on focused projects in a pharmaceutical setting. This has proven very successful, with several high-quality postdoctoral investigators trained in our laboratories going on to positions in industry and academia.
EXAMPLES OF CORPORATE TIES BETWEEN THE UNIVERSITY OF MICHIGAN AND BIOTECH COMPANIES

Below are two examples of companies that are closely aligned with the research efforts of Arul Chinnaiyan at the University of Michigan Medical School. Ultimate commercialization of intellectual property by the two companies is closely linked to Chinnaiyan and his colleagues.

Compendia Bioscience, Inc.

Compendia Bioscience was incorporated in January 2006. Its chief executive officer (CEO) is Dan Rhodes, who worked with Arul Chinnaiyan and obtained his Ph.D. degree through this association. Compendia Bioscience takes advantage of the research database Oncomine, which has been in place for several
years. This database takes published microarray information arising from more than 10,000 microarray studies of tumors and makes it available at no charge to persons in academic settings. Users can search the database and extract genomic sequences that can be used for diagnostic verification, predictions of drug sensitivity, and many other purposes, all of which ultimately bear on diagnostic analysis and clinical decision making in the oncology field. The Oncomine database is also available to the pharmaceutical industry on a charge basis, which involves the payment of annual license fees. Compendia was set up as a commercial entity with SPARK funding (a Michigan Economic Development Corporation entity in Ann Arbor, Michigan). In late 2007, Compendia received small business innovative research (SBIR) funding and funding from the 21st Century Jobs Fund (approximately $1.2 million). The company, which currently has approximately 12 employees, is seen as an important resource to the commercial world, with 14 of the top 20 pharmaceutical companies annually paying license fees to access the database. The fast-track SBIR granted to Compendia is allowing the company to increase its personnel to 17 employees, who are involved in software development, in evaluation of content, and in updating of the Oncomine databases and related matters. In 2008 Compendia is expected to break even financially. Compendia has decided not to become directly involved in drug discovery, personalized medicine, or other areas that depend on genomic information to make diagnoses or develop new drugs. The Compendia model is very different from that of most companies, in that its licensing to pharmaceutical companies involves no intellectual property. In other words, if the pharmaceutical companies discover information in the Oncomine database that allows for development of a new drug, Compendia does not have any direct financial stake or gain. On the other hand, there is an increasing likelihood that Compendia will be involved in consultative activities with pharmaceutical companies to optimize use of its database.

Armune BioScience, Inc.

Armune BioScience, located in Kalamazoo, Michigan, was formed in 2007 to develop and commercialize diagnostic tests for prostate, lung, and breast cancers. Its CEO is Eli Thomssen. Most of the initial work on which the company is based was done by the Chinnaiyan group at the University of Michigan Medical School. In this setting of fundamental proteomic prostate cancer research, several proteins expressed in low- or high-grade prostatic cancers and in metastatic lesions were identified and monoclonal antibodies developed in mice [3,4]. The resulting approaches are proving to be useful prognostic and diagnostic reagents. The well-known prostate-specific antigen (PSA) test has been widely used for the past decade and has a very high sensitivity (but very poor specificity), with as many as 40% of the test results being
interpreted in a manner that does not correlate with the clinical condition of the patient. This has led to great consternation for both patients and physicians as they try to select the optimal clinical intervention. With respect to lung cancer, no commercial tests are available that allow a serological diagnosis of the presence of lung cancer. The leading candidates for studies in this area are peripheral adenocarcinomas, which currently are detected by CT scans or chest x-rays, often with the diagnosis being made only at an advanced clinical stage. Similarly, the accurate diagnosis of breast cancer is chiefly made based on excisional or needle biopsy, with no reliable serological test existing at this time. The technology advanced by Armune BioScience uses phage protein microarray methods employing beads coated with certain antigenic epitopes and the Luminex platform as the readout. In these assays, the coated beads interact with autoantibodies present in the blood of patients with prostate cancer. As reported by Wang et al. [5], the presence of these autoantibodies to prostate cancer cells represents an extremely specific humoral immune response and increasingly appears to be a reliable and early indicator of the presence of prostate cancer. It is hoped that the development of such diagnostic tests will result in high sensitivity (>90%) together with high specificity (>90%). It is expected that development of highly specific, highly sensitive, and reliable serological tests for the diagnosis of prostatic cancer will allow diagnosis at an early stage so that a decision can be made much sooner with respect to whether surgery or radiation therapy should be employed. The technology should also reduce the number of unsuccessful or unnecessary needle biopsies of the prostate. Developing a lung cancer diagnostic test would represent the first of its kind and might allow detection much earlier in high-risk groups such as smokers. Such success could result in a much better cure percentage than the current five-year survival rate.
Figure 6 Partial sample of the pharmacogenomics SDTM domain. (Parent domain rows for study NSCLC10 subjects carry the panel-level fields STUDYID, USUBJID, PGSEQ, PGGRPID, PGREFID, PGTESTCD, PGTEST, PGORRES, PGSTRESC, PGSTRESN, PGMETHCD, and PGASSAY, e.g., a CYP2D6 panel on specimen SPEC001 coded to gene identifier HGNC:2625; child domain rows list the individual genotype results, with tests such as CYP2D6 GENE.g.100C>T and results such as M33388:g.100TG.)
TABLE 1 Mapping of DICOM Imaging Metadata Tags into SDTM Imaging Domain

DICOM Tag | DICOM Attribute Name | DICOM Attribute Description | SDTM Variable Name | CDISC Notes (for Domains) or Description (for General Classes)

(0012,0040) | Clinical Trial Subject ID | The assigned identifier for the clinical trial subject (see C.7.1.3.1.6; will be present if Clinical Trial Subject Reading ID (0012,0042) is absent, may be present otherwise) | Unique subject identifier (USUBJID) | Unique subject identifier within the submission

(0020,0013) | Instance Number | A number that identifies this image (note: this attribute was named an image number in earlier versions of the standard) | Sequence number (IMSEQ) | Sequence number given to ensure uniqueness within a data set for a subject; can be used to join related records

(0008,0018) | SOP Instance UID | Uniquely identifies the SOP instance (see C.12.1.1.1 for further explanation; see also PS 3.4) | Imaging reference ID (IMREFID) | Internal or external imaging identifier; example: UUID for an external imaging data file

(0008,1030) | Study Description | Institution-generated description or classification of the study (component) performed | Test or examination short name (IMTESTCD) | Short name of the measurement, test, or examination described in IMTEST; can be used as a column name when converting a data set from a vertical to a horizontal format
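To make the Table 1 mapping concrete, the following minimal sketch (in Python) reads the four DICOM attributes from a file header and renames them to SDTM-style imaging variables. It assumes the open-source pydicom library and a hypothetical input file name; the dictionary keys follow the general SDTM naming pattern discussed in the text rather than any finalized specification.

import pydicom  # assumed available; parses DICOM files

def dicom_to_im_record(path):
    """Map a DICOM header to a dictionary of SDTM-style imaging variables."""
    ds = pydicom.dcmread(path, stop_before_pixels=True)  # read metadata only
    return {
        "USUBJID": ds.get("ClinicalTrialSubjectID"),   # tag (0012,0040)
        "IMSEQ": ds.get("InstanceNumber"),             # tag (0020,0013)
        "IMREFID": str(ds.get("SOPInstanceUID", "")),  # tag (0008,0018)
        "IMTESTCD": ds.get("StudyDescription"),        # tag (0008,1030)
    }

record = dicom_to_im_record("example_image.dcm")  # hypothetical file name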
• The pharmacogenomic (PG) and pharmacogenomics results (PR) domains will support submission of summarized genomic (genotypic) data.
• A new imaging (IM) domain will include a mapping of the relevant DICOM metadata fields required to summarize an imaging submission.

The PG domain belongs to the findings class and is designed to store panel ordering information. The detailed test-level information (such as summarized genotype/SNP results) is reported in the PR domain. Figure 6 shows what a typical genotype test might look like in terms of data content and use of the HUGO [14] nomenclature. The PG domain supports the hierarchical nature of pharmacogenomic results, where for a given genetic test (such as EGFR, CYP2D6, etc.) from a patient sample (listed in the parent domain), multiple genotypes/SNPs can be reported (listed in the child domain).
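As a concrete sketch of this parent–child structure, the following Python fragment models a PG panel record holding its child genotype findings. The class layout and abbreviated field set are illustrative only, not the official CDISC schema; the example values are taken from Figure 6.

from dataclasses import dataclass, field
from typing import List

@dataclass
class GenotypeResult:        # child domain: one genotype/SNP finding
    prseq: int               # sequence number within the panel
    prtest: str              # test name, e.g., "CYP2D6 GENE.g.100C>T"
    prorres: str             # original result, e.g., "M33388:g.100TG"

@dataclass
class PGPanel:               # parent domain: the ordered panel
    studyid: str
    usubjid: str
    pgseq: int
    pgtestcd: str            # panel test code, here the gene symbol
    pgrefid: str             # specimen reference identifier
    results: List[GenotypeResult] = field(default_factory=list)

# One CYP2D6 panel for one subject, with two child SNP results:
panel = PGPanel(
    studyid="NSCLC10", usubjid="ZB1000-007", pgseq=1,
    pgtestcd="CYP2D6", pgrefid="SPEC001",
    results=[
        GenotypeResult(1, "CYP2D6 GENE.g.100C>T", "M33388:g.100TG"),
        GenotypeResult(2, "CYP2D6 GENE.g.124G>A", "M33388:g.124GC"),
    ],
)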
To support the use of imaging biomarkers, DICOM metadata tags have to be mapped into the fields of the new IM domain. Table 1 illustrates this mechanism.

While the FDA has proposed the SDTM data model for submission data, it is clear that this is only an interchange format for sponsors to submit summary clinical study data to the FDA in a standardized fashion. The FDA identified the need for an additional relational repository model to store the SDTM data sets. The requirement was to design a normalized and extensible relational repository model that would scale up to a huge collection of studies going back into the past and supporting those in the future. Under a CRADA, the FDA and IBM jointly developed this submissions repository, called Janus (named after the two-headed Roman god), which can look backward to support historical retrospective trials and look forward to support prospective trials. The CDISC data classification system of interventions, findings, and events was leveraged in the Janus model, with linkages to the subjects (the patients enrolled in the clinical trial), to facilitate navigation by consolidating data into three major class tables. Benefits resulting from this technique include reduced database maintenance and a simpler data structure that is easier to understand and can support cross-trial analysis scenarios. The ETL (extract–transform–load) process for loading the SDTM domain data sets instantiates the appropriate class table structure in Janus without requiring any structural changes.

DATA INTEGRATION AND MANAGEMENT

As scientific breakthroughs in genomics and proteomics and new technologies such as biomedical and molecular imaging are incorporated into R&D processes, the associated experimental activities are producing ever-increasing volumes of data that have to be integrated and managed. There are two major approaches to solving the challenge of enterprise-wide data access. The creation of data warehouses [15] is an effective way to manage large and complex data that have to be queried, analyzed, and mined in order to generate new knowledge. To build such warehouses, the various data sources have to be extracted, transformed, and loaded (ETL) into repositories built on the principles of relational databases [16]. Warehousing effectively addresses the separation of transactional and analysis/reporting databases and provides a data management architecture that can cope with increased data demands over time. The ETL mechanism provides a means to "clean" the data extracted from the capture databases and thereby ensures data quality. However, data warehouses require significant effort to implement. Alternatively, a virtual, federated model can be employed [17]. Under the federated model, operational databases and other repositories remain intact and independent. Data retrieval and other multiple-database transactions take place at query time, through an integration layer of technology that sits above the operational databases and is often referred to as middleware or a metalayer. Database federation has attractive benefits, an important one being that the individual data sources do not require modification and can continue to function independently. In addition, the architecture of the federated model allows for easy expansion when new data sources become available. Federation requires less effort to implement but may suffer in query performance compared to a centralized data warehouse. Common to both approaches is the need for sorting, cleaning, and assessing the data, making sure they are valid, relevant, and presented in appropriate and compatible formats. The cleaning and validation process eliminates repetitive data stores, links data sets, and classifies and organizes the data to enhance their utility. The two approaches can coexist, suggesting a strategy where stable and mature data types are stored in data warehouses and new, dynamic data sources are kept federated. Genomic data are a good example of the dynamic data type. Since genomics is a relatively new field in biopharmaceutical R&D, organizations use and define data in their own ways. Only as the science behind genomics is better understood can the business definitions be modified to better represent new discoveries. The integration of external (partly unstructured) sources such as GenBank [18], SwissProt [19], and dbSNP [20] can be complicated, especially if evolving system usage does not match actual laboratory practice. Standardized vocabularies (i.e., ontologies) will link these data sources for validation and analysis purposes. External data sources tend to represent the frontier of science, especially since they store ever-evolving genetic biomarkers associated with diseases and best methods of testing. Having a reliable link between genetic testing labs, external data sources for innovations in medical science, and clinical data greatly improves analytical functionality, resulting in more accurate outcome analysis. These links have been designed into the CDISC PG/PR domains to facilitate the analysis and reporting of genetic factors in clinical trial outcomes.
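A minimal sketch of the warehouse route may help: summarized SDTM-style findings are extracted as rows, lightly transformed (cleaned), and loaded into a relational table. Here the sqlite3 module stands in for the repository database, and the single consolidated findings table is an illustration of class-based consolidation, not the actual Janus schema; the rows are made-up examples.

import sqlite3

# Extract step: rows of (domain, studyid, usubjid, testcd, result)
rows = [
    ("PG", "NSCLC10", "ZB1000-007", "CYP2D6", "M33388:g.100TG"),
    ("LB", "NSCLC10", "ZB1000-007", " alt ", "34"),
]

conn = sqlite3.connect(":memory:")  # stand-in for the repository database
conn.execute(
    "CREATE TABLE findings ("
    "domain TEXT, studyid TEXT, usubjid TEXT, testcd TEXT, result TEXT)"
)

# Transform step: normalize whitespace and case before loading, a small
# stand-in for the cleaning and validation described above.
clean = [(d, s, u, t.strip().upper(), r.strip()) for d, s, u, t, r in rows]

# Load step: insert the cleaned rows into the consolidated findings table.
conn.executemany("INSERT INTO findings VALUES (?, ?, ?, ?, ?)", clean)
conn.commit()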
Figure 7 IBM's medical image management solution. (Architecture schematic: stakeholders in discovery, development, imaging sites, CROs, regulatory affairs, and regulatory agencies interact through a portal layer offering search browser, info grid, work area, inbox/actions, preferences/favorites, reports, administration, collaboration, workflow, and process flow features; underlying services include search, assembly, rendition, security, distribution, import/export, view/print, auditing, change control, retention, and e-signature, built around an image repository with application integration; surrounding governance covers storage policy definition, taxonomy definition, compliance interpretation, and a business process management life cycle of process modeling, SOP creation, training, change management, and monitoring.)
As standards continue to evolve, the need for semantic interoperability is becoming increasingly clear. To use standards effectively to exchange information, there must be an agreed-upon data structure, and the stakeholders must share a common definition of the data content itself. The true benefit of standards is attained when two different groups can reach the same conclusions based on access to the same data, because there is a shared understanding of the meaning of the data and the context of their use.

IMAGING BIOMARKER DATA, REGULATORY COMPLIANCE, AND SOA

Under the FDA's strict 21 CFR Part 11 [21] guidelines, new drug submissions must be supported by documentation that is compliant with all regulations. As explained above and illustrated in Figure 3, IBM's SCORE software asset has been designed for this purpose. SCORE's flexibility and modular design make it particularly suitable for the management of imaging biomarker data. The FDA requires reproducibility of imaging findings so that an independent reviewer can reach the same conclusion or derive the same computed measurements as a radiologist included in a submission. As a result, a unified architecture is required for a DICOM-based imaging data management
platform that supports heterogeneous image capture environments and modalities and allows Web-based access to independent reviewers. Automated markups and computations are recommended to promote reproducibility, but manual segmentation or annotations are often needed to compute the imaging findings. A common vocabulary is also needed for the radiological reports that spell out the diagnosis and other detailed findings, as well as for the specification of the imaging protocols. Figure 7 shows how imaging biomarker data and work flows can be managed in a regulated multistakeholder environment. The solution includes a range of capabilities and services:

• Image repository: stores the image content and associated metadata.
• Collaboration layer: provides image life-cycle tasks shared across sponsors, CROs, and investigator sites.
• Image services: provide functionality such as security and auditing.
• Integration layer: provides solutions for integration and interoperability with other applications and systems.
• Image taxonomy definition: develops image data models, including naming, attributes, ontologies, values, and relationships.
• Image storage policy definition: defines and helps to manage policies and systems for image storage and retention.
• Regulatory interpretation: assists interpretation of regulations and guidelines for what is required for compliance.
• Portal: provides a role-based and personalized user interface.

In addition, the solution design incorporates the customized design, implementation, and monitoring of image management processes. It should also be pointed out that the medical image management solution architecture is fully based on principles of service-oriented architecture (SOA [22]). SOA is taking application integration to a new level. To take full advantage of the principles of SOA, it is important to start with a full understanding of the business processes to be supported by an IT solution. Such a solution must be architected to support the documented business processes. The component business modeling (CBM [23]) methodology identifies the basic building blocks of a given business, leading to insights that will help overcome most business challenges. CBM allows analysis from multiple perspectives, and the intersection of those views offers improved insights for decision making. In the case of biomarker-enabled R&D, CBM will break down the transformed processes and identify the respective roles of in-house biopharmaceutical R&D functions and outside partners such as CROs, imaging core labs, investigator sites, genotyping services, and possible academic or biotech research collaborators. After mapping work flows it is then possible to define a five-level service-oriented IT architecture that supports the processes and work flows:
1. Discrete: hard-coded application
2. Partial: cross line of business processes
3. Enterprise: cross-enterprise business processes
4. Partner: a known partner
5. Dynamic Partner: any trusted partner
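To make the image-repository and metadata capabilities described above concrete, the following is a minimal sketch of a DICOM ingest step, not a description of the SCORE product itself. It assumes the open-source pydicom library and a hypothetical incoming_dicom directory; the attributes read (PatientID, Modality, StudyInstanceUID, SOPInstanceUID) are standard DICOM data elements.

# Minimal sketch of DICOM metadata extraction for an image repository index.
# Assumes the pydicom library; the directory name is hypothetical.
from pathlib import Path

import pydicom

def extract_index_record(dicom_path):
    """Read one DICOM file and return the metadata a repository would index."""
    # stop_before_pixels skips the bulk image data; only the header is needed.
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    return {
        "sop_instance_uid": ds.SOPInstanceUID,      # unique identifier of the image
        "study_instance_uid": ds.StudyInstanceUID,  # groups images into a study
        "modality": ds.Modality,                    # e.g., CT, MR, PT
        "patient_id": ds.PatientID,                 # de-identified subject code
        "acquisition_date": ds.get("AcquisitionDate"),
    }

if __name__ == "__main__":
    for path in sorted(Path("incoming_dicom").glob("*.dcm")):
        print(extract_index_record(path))

In a service-oriented design, a routine like this would sit behind the image-services layer, so that sponsors, CROs, and independent reviewers query the indexed metadata rather than the raw files.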
In its most advanced form, SOA will support a complex environment with integrated data sources, integrated applications, and a dynamic network of partners.

CONCLUSIONS

Biomarkers are key drivers of the ongoing health care transformation toward the new paradigm of stratified and personalized medicine. In this chapter we focused on the role of IT in supporting the use of biomarkers in biopharmaceutical R&D. When doing so, we need to keep in mind that the benefits desired for patients and consumers will be realized only if the new biomedical knowledge is translated into stratified and personalized patient care. The biopharmaceutical industry will have to participate not only as a provider of drugs and medical treatments, but also as a contributor to the emerging biomedical knowledge base and to the IT infrastructures needed to enable biomarker-based R&D and clinical care. It is therefore critical to define the necessary interfaces between the respective IT environments and to agree on standards that enable data interchanges. IT standards and architectures must support the integration of new biomarker data with conventional clinical data types and the management of the integrated data in (centralized or federated) data warehouses that can be queried and analyzed. Analysis and mining of biomarker and health care data are mathematically challenging but are necessary to support diagnostic and treatment decisions by providers of personalized care. Finally, service-oriented architectures are required to support the resulting processes and work flows covering the various health care stakeholders.

REFERENCES

1. Trusheim MR, Berndt ER, Douglas FL (2007). Stratified medicine: strategic and economic implications of combining drugs and clinical biomarkers. Nat Rev, 6:287–293.
2. Pharma 2010: The Threshold of Innovation. http://www.ibm.com/industries/healthcare/doc/content/resource/insight/941673105.html?g_type=rhc.
3. DiMasi JA (2002). The value of improving the productivity of the drug development process: faster times and better decisions. PharmacoEconomics, 20(Suppl 3):1–10.
4. DiMasi JA, Hansen RW, Grabowski HG (2003). The price of innovation: new estimates of drug development costs. J Health Econ, 22:151–185.
5. http://www.fda.gov/oc/initiatives/criticalpath/whitepaper.html#execsummary.
6. http://www.fda.gov/cder/genomics/PGX_biomarkers.pdf.
7. Lesko LJ, Atkinson AJ Jr (2001). Use of biomarkers and surrogate endpoints in drug development and regulatory decision making: criteria, validation, strategies. Annu Rev Pharmacol Toxicol, 41:347–366.
8. CDISC. http://www.cdisc.org.
9. HL7. http://www.hl7.org.
10. DICOM. http://medical.nema.org.
11. Janus data model. http://www.fda.gov/oc/datacouncil/ and http://crix.nci.nih.gov/projects/janus/.
12. Hehenberger M, Chatterjee A, Reddy U, Hernandez J, Sprengel J (2007). IT solutions for imaging biomarkers in bio-pharmaceutical R&D. IBM Syst J, 46(1):183–198.
13. SCORE. http://www-03.ibm.com/industries/healthcare/doc/content/bin/HLS00198_USEN_02_LO.pdf.
14. http://www.gene.ucl.ac.uk/nomenclature/.
15. Kimball R, Caserta J (2004). The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data. Wiley, Hoboken, NJ.
16. Codd EF (1981). The significance of the SQL/data system announcement. Computerworld, 15(7):27–30. See also http://www.informatik.uni-trier.de/~ley/db/about/codd.html.
17. Haas L, Schwarz P, Kodali P, Kotlar E, Rice J, Swope W (2001). DiscoveryLink: a system for integrated access to life sciences data. IBM Syst J, 40(2):489–511.
18. GenBank. http://www.ncbi.nlm.nih.gov/Genbank/.
19. SwissProt. http://www.ebi.ac.uk/swissprot/.
20. dbSNP. http://www.ncbi.nlm.nih.gov/SNP/.
21. http://www.fda.gov/ora/compliance_ref/part11/.
22. Carter S (2007). The New Language of Business: SOA & Web 2.0. IBM Press, Armonk, NY.
23. http://www.ibm.com/services/us/gbs/bus/html/bcs_componentmodeling.html.
34

REDEFINING DISEASE AND PHARMACEUTICAL TARGETS THROUGH MOLECULAR DEFINITIONS AND PERSONALIZED MEDICINE

Craig P. Webb, Ph.D.
Van Andel Research Institute, Grand Rapids, Michigan

John F. Thompson, M.D.
Helicos BioSciences, Cambridge, Massachusetts

Bruce H. Littman, M.D.
Translational Medicine Associates, Stonington, Connecticut
INTRODUCTION

A chronic disease is really a phenotype representing the combination of symptom patterns and pathological findings that practicing clinicians have classified together as a disease. Like any other trait, each component of the disease exists because of contributions from genetic and environmental factors and the resultant modifications of biological functions that disturb the normal homeostatic state. Thus, the expression of a single chronic disease can be due to different combinations of genetic and environmental factors.
Yet, when physicians treat patients with a drug, they are actually modulating just a single molecular target, or a small subset of targets, and their downstream pathways. In a given patient, the importance of that target will generally be more or less significant than its average importance in the disease population. Since the drug was approved based on an average clinical response, there will be patients with much better than average responses and patients who do not have a satisfactory clinical response at all.

Two important concepts for personalized medicines are hypothesized in this chapter. The first is that there is a distribution of relative expression of abnormal pathway activity for every element contributing to the pathogenesis of a disease. The second is that regardless of the proportion of subjects in a disease population, when a drug target is aligned with the most relevant abnormalities, it is likely that the therapeutic benefit of the drug for the individual patient where that alignment is found will be much greater than the average benefit for the entire disease population. This means that if the drug target and its downstream pathways are abnormally expressed to a large degree in a patient subpopulation, the therapeutic benefit of a drug targeting that pathway will also be more pronounced than that observed for the entire disease population. These concepts are expressed graphically in Figure 1. The first is illustrated by scenarios 1, 2, and 3, where the three distributions represent the proportion of patients with different degrees of abnormal pathway expression for three different targets or pathways in the same disease population. The second concept is depicted as the solid black curve, where the degree of expression of the abnormal target or pathway is correlated with the potential therapeutic benefit of a drug targeting that pathway. The shaded areas represent the proportion of the disease population with the best clinical response. These same two concepts will also determine the spectrum of relative toxicity (safety outcomes) of drugs across a population in a manner similar to efficacy. Thus, the therapeutic index (benefit/risk) of drugs targeting a specific pathway may become more predictable.

Molecular definitions of disease and personalized medicine therefore have the potential to improve the outcomes of drug treatment, provided that physicians have the appropriate diagnostic tools as well as drugs with targeted and well-understood biological activities. Then, rather than directing treatment toward the mythical average patient, physicians will be able to tailor their prescriptions to the patients who could benefit most and also customize the dose and regimen based on the pharmacogenetic profile of the patient and the absorption, distribution, metabolism, and elimination (ADME) properties of the drug. This should provide patients with safer and more efficacious treatment options.

In this chapter we use three different disease areas to illustrate these principles and the potential of personalized medicine. These concepts are most advanced in oncology, and we start with this example. Here tumors from individual patients are molecularly characterized and drug regimens are selected that target the abnormal pathways driving the neoplastic phenotype. However, as described above, the same principles apply to all chronic diseases, and we illustrate this using type 2 diabetes and rheumatoid arthritis, where the science is just approaching a level that will enable personalized medicine strategies.
Figure 1 Principles supporting personalized medicine strategies. Each panel plots the percent of patients against the target expression level in patients, with the overlaid curve (low to high) indicating the clinical response to drug. Scenarios 1, 2, and 3 are three distributions representing the proportion of patients with different degrees of abnormal pathway expression for three different targets or pathways in the same disease population. The solid black curve shows the correlation between the degree of expression of the abnormal target or pathway and the potential therapeutic benefit of a drug targeting that pathway. The shaded areas represent the proportion of the disease population with increased probability of achieving the best clinical response. (See insert for color reproduction of the figure.)
ONCOLOGY

In the United States alone in 2007, it is estimated that nearly 1.5 million new cancer diagnoses will be reported and that over 500,000 patients will die from the disease [1]. Although early detection coupled with improved debulking procedures has led to some improvement in the survival of patients diagnosed with early-stage disease, the outcome of patients with advanced metastatic disease remains bleak. Metastatic disease will continue to burden society and the health care system due to an aging population, late-onset recurrence of microscopic metastases [2], and the fact that many tumors remain undetectable in their early stages. Long-term treatments for disseminated disease that maximize antitumor efficacy and patient survival while minimizing patient morbidity remain a primary objective in medical oncology, yet with few exceptions continue to be elusive.

The level of interest in the field of individualized molecular-based treatments has been driven by a number of factors, including public demand [3], regulatory agencies [4], and the possible financial incentives associated with biomarker–drug co-development within the pharmaceutical and biotechnology industry [5]. In oncology in particular, the lack of a statistical demonstration of agent efficacy in phase III trials is the primary reason for late-stage drug failures, which in turn drive up the costs of the oncology treatments that do attain market approval [6]. Biomarker strategies that can accurately identify the responsive tumors and/or the patient population most likely to receive benefit provide a clear conduit to rescue failed or failing drugs, and also provide a realistic approach to enrich patient populations in early clinical trials to maximize the probability of demonstrating drug efficacy. The concept was perhaps best illustrated during the approval of trastuzumab, a monoclonal antibody against ERBB2, a receptor that is frequently amplified in breast tumors [7]. The co-development of a biomarker that assesses the mRNA or protein expression of ERBB2 increased the overall response rate from approximately 10% in the overall population to 35 to 50% in the ERBB2-enriched subpopulation [8]. Without a means to enrich for patients with this molecular subset of tumors, trastuzumab may not have gained U.S. Food and Drug Administration (FDA) approval for the breast cancer indication.

Today, there are an unprecedented number of available therapeutic agents that have been designed to target specific molecular entities irrespective of the selected phenotypic indications. Within our current knowledge base, there are more than 1500 drugs that, with varying degrees of specificity, target defined constituents of molecular networks. Coupled with advances in technology and computer sciences, in the postgenomic era we are now presented with an unprecedented opportunity to apply our knowledge and existing and/or emerging resources to revolutionize medicine into a predictive discipline, where integration of clinical and molecular observations is used to maximize therapeutic index. We are now able to measure the molecular components of biological systems at an extraordinary density using standardized molecular profiling technologies, which provide a portrait of the perturbed molecular systems within a diseased tissue.
The lack of efficacy of targeted agents in oncology is, somewhat ironically, likely due in part to their specificity, and biomarker-driven selection of drug combinations will be required to target the Achilles heel of the tumor system. Pathways involved in neoplastic transformation and in vivo tumor growth and progression are complex. Biological systems have evolved to provide the ultimate level of plasticity to allow cells to adapt to or exploit extracellular cues and ensure their long-term survival and/or survival of the organism as a whole [9]. The system is highly responsive to the cellular context, which includes both temporal (time-dependent) and spatial (location-dependent) factors that, through a series of epigenetic events, influence the formation of the observed phenotype. The complexity of a tumor system is exacerbated by the inherent genomic instability of a tumor cell; alterations in DNA repair mechanisms at the onset of tumorigenesis essentially instigate an accelerated microevolutionary process, where the interplay between each tumor cell and its microenvironment provides a constantly shifting context and selective milieu that naturally results in cellular heterogeneity [10,11]. The molecular network of the tumor system represents integration between the subsystems of malignant cells and their host microenvironment, which can include a multitude of proteomic and chemical constituents and other cellular systems contributed by endothelial, stromal, and inflammatory cells [12,13]. Collectively, tumor systems are highly adaptive and naturally exhibit significant variation over time, between locations, and across individuals.

The malignant phenotype results from perturbation of many pathways that regulate the tumor–host interaction and affect fundamental cellular processes such as cell division, apoptosis, invasion, and metabolism. The multistage process of tumor etiology and progression is driven by the progressive accumulation of genetic mutations and epigenetic abnormalities that include programs of gene expression. At first glance, the perturbations in individual components (DNA, RNA, protein, and metabolites) of a tumor system that have been identified in association with the various malignant phenotypes appear to reflect a somewhat stochastic process. However, while recent efforts to sequence the human genome have confirmed the large degree of redundancy within signaling networks, they have also revealed the extent to which tumor cells utilize converging pathways to thrive within their selective environment [14,15]. Indeed, a relatively small number of key intracellular switches have been associated with tumorigenesis in preclinical mouse models and in vitro cell lines. This phenomenon, termed oncogene addiction, suggests that targeted agents against these central signaling relays may prove effective [16]. The classical oncogene target should be expanded to include any molecular aberration that is causative with respect to the etiology and/or progression of the disease at the network level. Additional intervention points within conserved tumor systems are being associated with the various phenotypes of cancer, in part through the use of high-throughput functional screens [17].
While these preserved network hubs represent obvious candidates for targeted therapies, redundancy within molecular pathways also provides a predictable path to drug resistance. Given the genetic instability within a tumor and the robustness of molecular networks, the probability that a cell within the average tumor mass will acquire resistance to a single agent with a selective molecular target during the life span of a tumor population would be expected to be very high [18]. This is well illustrated with the drug imatinib, which was developed to target the ABL kinase, constitutively active in patients with chronic myelogenous leukemia due to the Bcr-Abl gene translocation. Resistant tumors have recently emerged that utilize alternative pathways downstream of the drug target to circumvent the absolute requirement for the ABL kinase [19]. Similar results with other targeted agents used in single or minimal combinational modalities are emerging and would indeed be predicted from analysis of the target network, which demonstrates the level of redundancy and robustness within signaling pathways [20]. Essentially, the plasticity within these networks reduces the dependency on single nodes within pathways. Tools such as systems biology will play a critical role in modeling the pathways to de novo and acquired resistance. Coupled with knowledge of drug–biomarker associations, logical targeted approaches can be developed that minimize the probability that a tumor will develop resistance and/or that target key network nodes/hubs to reverse the resistant phenotype. The target for individualized therapy therefore becomes the perturbed tumor system as a whole, against which targeted therapeutics could be combined to maximize disruption of the networks identified.

An increasing number of technologies are available for assessment of the individual molecular components of cellular systems. Biomarkers can be genetic, genomic, proteomic, or metabolomic in nature, and have been used in various aspects of medicine to predict a phenotype of interest. While these have traditionally been developed as individual biomarkers that can readily be validated as an in vitro diagnostic, multivariate assays have recently been developed that simultaneously assess the levels of different biomarkers and provide a molecular signature in association with a phenotype or context [21]. These signature-based tests require integrated informatics, which can generate a mathematical algorithm that is trained and tested on independent sample sets [22]. Global molecular profiling offers many advantages over custom biomarkers, since a profile that accurately captures the underlying system of the disease can be attained and used as a common input for both the discovery and development of diagnostics. Indeed, a major bottleneck in the field of personalized medicine is the time required to develop a validated biomarker; a genome-scale technology that provides a standardized input of raw data, coupled with computational methods that provide consistent algorithm-based predictive outputs, would ultimately permit the rapid development and testing of new molecular signatures associated with any phenotype of interest.
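As an illustration of the train-and-test discipline that such signature-based tests require, here is a minimal sketch on synthetic data; it is not the algorithm of any commercial assay. It assumes NumPy and scikit-learn, and uses an L1-penalized logistic regression simply as one way to learn a sparse multigene signature and evaluate it on an independent sample set.

# Minimal sketch: derive a multivariate expression signature on a training set
# and test it on independent, held-out samples. All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 500
X = rng.normal(size=(n_samples, n_genes))   # expression matrix (samples x genes)
y = rng.integers(0, 2, size=n_samples)      # phenotype label (e.g., responder)
X[y == 1, :20] += 1.0                       # plant signal in 20 "signature" genes

# Independent training and test sets: the algorithm never sees the test samples.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X_train, y_train)

signature_size = int(np.sum(model.coef_ != 0))  # genes retained by the L1 penalty
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"signature of {signature_size} genes, held-out AUC = {auc:.2f}")

The held-out evaluation is the essential point: a signature that is tuned and scored on the same samples will look far better than it performs on new patients.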
Despite the proliferation of new technologies that enable detection of specific biomarkers, gene expression profiling represents a relatively standardized platform that has been used extensively to create a depth of empirical data sets in association with various phenotypes. The ability to utilize gene expression profiles of human cancers to identify molecular subtypes associated with tumor progression, patient outcome, and response to therapy is increasingly evident [23]. For example, a multiplexed assay that determines the expression of a number of mRNA transcripts from breast carcinomas has been developed as a commercial test to predict the risk of tumor recurrence [21,24]. With respect to the prediction of optimal cytotoxic or targeted therapies, systematic efforts utilizing gene expression signatures to identify compounds that reverse the diseased genotype hold great promise [25]. In vitro cell line gene expression signatures associated with differential drug sensitivity have also been shown to predict tumor response to some agents in the clinic with varying degrees of accuracy [26,27]. These and other empirically based methods for predicting optimal therapeutics based on the overall genomic signature of the tumor will play a pivotal role in future personalized medicine initiatives. Such signature-based methods are currently being evaluated within our predictive therapeutics protocol in conjunction with network analysis to determine their feasibility for broad application.

While the signature-based approaches outlined above represent a systematic approach for the logical selection of treatments based on the gene expression profile of a tumor, considerable experimentation is required to generate the predictive models. A supplementary approach is to utilize advances in network theory, systems biology, and computer modeling to reconstruct the aberrant molecular network predicted from the same input of deregulated genes within the tumor. Although the deregulated expression of a molecular target may be associated with differential sensitivity to a targeted agent, gene or protein expression alone does not necessarily equate to target activity. For example, one of the first molecular events that occurs in some cells following stimulation with an extracellular ligand can be the down-regulation of the activated cell surface receptor (reduced protein expression) and/or reduced receptor mRNA transcription [28]. Nonetheless, successive waves of transcriptional events that occur within a tumor cell represent a hallmark of the upstream chronic signaling cascades on which the tumor more likely depends. Gene expression profiling has been used successfully to identify the activation status of oncogenic pathways [29], demonstrating the feasibility of utilizing standardized gene expression signatures as a surrogate input for the prediction of network activity. Further analysis of the conceptualized network can predict convergence and divergence hubs within the tumor system, some of which can be targeted with existing therapeutics. This approach does not require comparison to an empirical data set, but rather relies on network knowledge and graph theory to construct networks from known interactions between system components. Combinational strategies that target key nodes or hubs within deregulated molecular networks associated with maintenance of the malignant phenotype may maximize therapeutic efficacy and may reduce the probability of a tumor cell utilizing alternative network components to achieve the "resistant" phenotype.
PREDICTIVE THERAPEUTICS PROTOCOL

The general schema for a predictive therapeutics protocol is outlined in Figure 2. The primary objective of the protocol is to evaluate the merits of the various predictive methodologies outlined above while simultaneously providing information back to treating physicians in real time for consideration in the design of a treatment plan. While a full description of the protocol is beyond the scope of this chapter, we have enrolled 50 patients in the first phase, which focused on the development of the critical infrastructure and logistics. From each patient, highly qualified tumor tissue (or isolated tumor cells) is processed using standard operating procedures to create a gene expression profile. The signature is then compared to other well-annotated samples within a large database, and deregulated patterns of gene expression are used in conjunction with a knowledge base of known drug–target interactions to infer treatment strategies.
We also attempt to establish a xenograft after implantation of a section of the fresh tumor into immune-compromised mice. These tumor grafts are expanded through two generations to create a large colony of mice harboring the patient's tumor, and these are then used to statistically evaluate the different predictive methodologies and their corresponding treatment recommendations. While the preclinical component of the protocol does not typically provide useful information to the treating physician, it represents an excellent resource for prioritizing predictive methodologies and for developing a biomarker strategy for novel therapeutics.

At the outset, it is apparent that this multidisciplinary protocol requires several infrastructural components as well as integrated logistics. These include the development of centralized informatics capabilities that permit full integration of clinical and molecular data, drug–biomarker knowledge, predictive modeling, and reporting. Standardized tissue procurement and pathological characterization with attention to quality control are essential to ensure consistency in the raw molecular data that are used to derive treatment predictions. Consistent feedback from the clinical and preclinical treatment outcomes is critical to assess the validity of the predictive methods. Each component of a therapeutic regimen is scored objectively based on the predictive methodology, and the ultimate success of the method is determined by comparing this standardized score with tumor response. It is important to state that at this time, the results obtained from the clinical arm of the protocol remain anecdotal, due to the underpowered nature of the initial proof-of-feasibility experimental design. However, despite representing a nonvalidated method for drug prioritization, any molecular information that can be provided to the treating physician is deemed valuable, especially for late-stage metastatic or refractory patients who have exhausted their standard-of-care options. In this sense, this protocol serves as a rudimentary clearinghouse where patients are placed onto experimental protocols, including off-label protocols, based on the molecular profile of their disease. With the multitude of predictive models now available to suggest optimal combinational strategies based on a standardized gene expression signature from an individual tumor, the preclinical tumor grafts provide an invaluable resource for triaging ineffective methodologies. At this time, we are exploring methods ranging from rudimentary target expression to more sophisticated signature-based methods and network inference. In general, the molecular similarities between the human tumor and the derived tumor grafts are excellent and represent a significant improvement over the classical cell line xenograft models (Figure 3). This implies that the molecular network within the tumor system as a whole is generally maintained in the human and mouse hosts, although some clear exceptions are noted; for example, the expected reduction in markers of human vasculature and inflammatory cells is evident. Although it is too early to claim direct equivalence between the mouse tumor graft and patient tumor with respect to drug efficacy, early data are promising. In a handful of cases tested to date where the mouse and human harboring the same tumor were treated with the same treatment regimen, similar tumor responses have been observed. However, the tumor grafts are used predominantly to test the concept of using derived molecular data (in the mouse system) to predict optimal combinational therapies and not necessarily to define the best treatment strategy for the donating patient.

Figure 2 High-level review of our IRB-approved predictive therapeutics protocol, in which patient tumors are processed using Affymetrix GeneChip technology after the required consenting and pathology clearance, to generate a gene expression signature that is reflective of the underlying biological context. These samples are processed using standardized procedures, to minimize confounding variables that can significantly influence the interpretation of the results. Molecular data are analyzed statistically relative to a wide variety of well-annotated samples within the database, and these intermediate results are applied further to the integrated knowledge base that includes systems biology tools. For example, enriched networks are identified and further refined to categorize significant convergence and/or divergence hubs that represent druggable targets with existing agents with known molecular mechanisms of action. Irrespective of the predictive method employed, each drug is associated with a normalized score for predicted efficacy. A report with these standardized predictions is provided back to the medical oncologist, who determines a treatment selection using all information available to him or her, which may include the molecular evidence. The patient's administered treatment is captured and the tumor response is assessed using standard clinical criteria. In this fashion, the association between the predicted drug score and the tumor response can be determined. In addition, a section of the patient's tumor is implanted directly into immune-compromised mice to establish a series of tumor grafts, which naturally more closely resemble the human disease at the molecular and histological level relative to established cell-line xenograft models. These tumors are expanded in additional mouse cohorts and alternative predictive methods are tested to prioritize those with the most promise. Over time it is hoped that this approach of predictive modeling from standardized data, experimental testing, and model refinement may provide a means to identify optimal therapeutics with a high degree of confidence in a systematic fashion. (The accompanying flowchart traces patient enrollment, eligibility and pathology review, gene expression profiling with a first report at approximately 10 days, tumor implantation into mice, treatment and patient response, then a second report and second treatment; preclinical treatment evaluation spans approximately 6 to 9 months.)
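The association between the standardized drug score and the observed tumor response, which the protocol uses to judge each predictive method, can be quantified with a simple rank correlation. The sketch below uses hypothetical scores and responses and SciPy's Spearman statistic; it stands in for whatever statistics the protocol actually applies.

# Minimal sketch: compare predicted drug-efficacy scores with observed tumor
# responses across treated cases. Values are hypothetical; larger response
# values denote greater tumor regression.
from scipy.stats import spearmanr

predicted_scores = [0.91, 0.75, 0.40, 0.22, 0.65, 0.10]
observed_responses = [0.80, 0.60, 0.30, 0.05, 0.70, 0.20]

rho, p_value = spearmanr(predicted_scores, observed_responses)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A method whose scores rank-order responses well (high rho) would be kept;
# one whose predictions are uncorrelated with outcome would be triaged out.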
To illustrate how the molecular signature of an individual tumor can be modeled to predict target activation and sensitivity to approved drugs, we use a case study from the first phase of our protocol. A 63-year-old male presented with metastatic non-small cell lung carcinoma and, after the necessary consent, enrolled in this research protocol. A biopsy of the tumor was qualified and released by pathology, and the sample was processed within a CLIA/CAP-accredited laboratory that utilizes full-genome Affymetrix GeneChip technology. The standardized gene expression data were compared to a database of other tumors and normal tissues to identify the most significantly deregulated gene transcripts within the patient's tumor. Our informatics solution uses a database of drug–biomarker knowledge that includes the reported molecular targets of more than 1500 drugs, the interaction type, and the effect of the interaction (agonistic or antagonistic). By aligning this knowledge with the deregulated signature within the tumor, potential drugs of interest are quickly identified. Of key importance, each drug is assigned a priority score (weight) based on the indication of the biomarker, which in turn depends on the predictive methodology used. For example, increased expression of a molecular target may indicate the corresponding targeted drug, and this is scored based on the normalized gene expression value. Since target expression does not necessarily equate to target activation status, we have also developed a specific network analysis tool in conjunction with GeneGo (http://www.genego.com), which systematically evaluates topological significance within reconstructed networks. Molecular networks are constructed based on the input provided (in this example, overexpressed genes), and an algorithm compares this to the connectivity map of the global network. The significance of each node within the identified network is calculated based on its probability of providing network connectivity, and this is used to score each drug. In the tumor of this patient, a major divergence point was identified at epidermal growth factor receptor (EGFR) (Figure 4), suggesting that EGFR lies upstream of the transcriptional events observed. In this case, EGFR was also overexpressed at the transcript level relative to other tumors and, collectively, was therefore assigned a high score. Based on this inference, the medical oncologist confirmed EGFR gene amplification in the tumor using traditional FISH analysis. This patient was treated with a combination of erlotinib, cisplatin (reduced expression of the ERCC1 gene), and bevacizumab (inferred constitutive activation of the VEGF–VEGFR pathway).
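Before turning to the clinical outcome, a minimal sketch may help make the two scoring ideas concrete: weighting a drug by the normalized overexpression of its reported targets, and up-weighting targets that act as connectivity hubs in a network reconstructed from known interactions. The genes, interactions, and drug–target entries below are hypothetical, and betweenness centrality is used only as a simple, generic proxy for the topological significance computed by the GeneGo-based tool described in the text.

# Minimal sketch: rank drugs by combining target overexpression with the
# target's topological weight in a reconstructed interaction network.
import networkx as nx

overexpression = {"EGFR": 3.1, "VEGFA": 2.2, "KRAS": 1.5, "MYC": 2.8}  # z-scores

# Hypothetical known interactions among deregulated genes.
network = nx.Graph([
    ("EGFR", "KRAS"), ("KRAS", "MYC"), ("EGFR", "MYC"),
    ("EGFR", "VEGFA"), ("VEGFA", "KDR"),
])

# Betweenness centrality as a stand-in for hub/divergence-point significance.
hub_weight = nx.betweenness_centrality(network)

drug_targets = {                     # hypothetical drug -> antagonized targets
    "erlotinib": ["EGFR"],
    "bevacizumab": ["VEGFA"],
    "hypothetical_MYC_inhibitor": ["MYC"],
}

def drug_score(targets):
    """Sum of target overexpression, scaled up for topologically central nodes."""
    return sum(overexpression.get(t, 0.0) * (1.0 + hub_weight.get(t, 0.0))
               for t in targets)

ranked = sorted(drug_targets, key=lambda d: drug_score(drug_targets[d]), reverse=True)
for drug in ranked:
    print(drug, round(drug_score(drug_targets[drug]), 2))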
The patient exhibited a partial response to this targeted, combinational treatment and is currently maintained on a noncytotoxic regimen of erlotinib and bevacizumab alone (Figure 5). While these results remain anecdotal in nature, a handful of significant tumor responses have been observed in the first phase of the protocol. Tumor regression has also been observed in several mouse models, where the various systematic methods for predicting optimal drug combinations are being evaluated and prioritized. The key to realizing the full promise of personalized medicine in oncology lies in the ability to predict combinations of agents in a systematic and objective fashion, irrespective of context and historical disease classification. The wealth of drugs currently available for combinational treatments necessitates a bold migration away from traditional diagnostics, in which custom biomarkers are developed in parallel with a specific drug, typically in tumor subtypes. In conjunction with the isolation of the postulated tumor stem cell compartment of a cancer [30], standardized technologies that permit the application of genome-scale network-based approaches for the prediction of any drug in various combinations may allow specific targeting of the Achilles heel of the molecular system irrespective of the observed phenotype.
Figure 3 The preclinical arm of our predictive therapeutics protocol allows for the prioritization of methodologies based on their ability to predict optimal combinational designs derived from the networks identified within the tumorgraft system. These tumorgrafts are established directly from the patient's tumor by implantation into immune-compromised mice, and are characterized by both molecular profiling and histopathology. In this particular example, the data were restricted to include only biomarkers that represent known drug targets. In this fashion, the relative distribution of existing targets can be determined across patient tumors and their corresponding tumor grafts. (A) A heat map following unsupervised hierarchical clustering shows how the tumorgrafts in the mouse host closely resemble their donating human tumor at the genomic level even when the analysis was restricted to utilize only known drug targets. Patient tumors and their derived mouse tumorgrafts are coded with the same color and can be seen to co-cluster based on their overall genomic similarity. Probes encoding EGFR are highlighted to show the distribution of expression of this target across the various tumors. (B) The mean correlation coefficient in a direct comparison of human tumors with mouse tumor grafts is approximately 0.93, demonstrating excellent overall similarity at the biomarker level. Some notable exceptions are evident, such as reduced expression of human targets associated with angiogenesis in the murine host. (See insert for color reproduction of the figure.)

Figure 4 Topological network analysis of the overexpressed genes from a non-small cell lung carcinoma identified a potential key input node at the level of EGFR. The results of these analyses are displayed using MetaCore, a systems biology network tool produced by GeneGo (www.genego.com). The significance of each node to confer system connectivity can be inferred after comparison with the global connectivity map and the drug–target knowledge base applied to select corresponding inhibitors. Among other applications, this type of systems approach, which does not depend on prerequisite empirical data sets, can readily be applied for the discovery of new disease targets, prioritization and/or validation of existing targets, and/or the identification of new indications for compounds that have a known or associated molecular mechanism of action. A key aspect is successful identification of the significant convergence or divergence hubs or nodes within the identified networks. (See insert for color reproduction of the figure.)

Figure 5 Anecdotal evidence of a molecularly targeted combinational treatment in a 63-year-old man with metastatic non-small cell lung carcinoma. In this example, the patient's tumors showed a prolonged partial response to erlotinib (overexpression of EGFR and network-based inference of activated EGFR) in combination with cisplatin and bevacizumab, which were also indicated from the molecular profiling data (low ERCC1 gene expression and evidence of constitutive VEGF–VEGFR network signaling, respectively). These agents were combined with docetaxel, an approved second-line treatment for metastatic NSCLC. The levels of the serum marker CA125 (normal level 0–30 U/mL), together with the sum of the maximum dimensions of the target lesions (CT scan, RECIST, mm) and the standard uptake value for the glucose tracer ([18F]DG PET scan; SUV, g/mL, summed from select lesions: right upper lobe, right hilar, right humerus, and right acetabulum), are shown over time from the biopsy of 12/7/06 through 8/6/07. The timing of the respective treatments is also shown.

TYPE 2 DIABETES

Type 2 diabetes, in many ways like cancer, arises from a complex interplay of environmental and genetic factors that lead to a broad spectrum of conditions characterized by chronic high levels of glucose. While glucose levels are the common denominator for diagnosis of the disease, the simplicity of this measurement actually masks a complex set of problems that vary among individuals. Some people are beset primarily with dysfunctional pancreatic beta cells, while others may have more significant issues in muscle or liver tissues. Being able to determine the underlying nature of an individual's diabetes will help best determine the proper course of treatment.
Approaching diabetes from a more molecular point of view provides the opportunity to personalize treatment. Efforts to do this are still in their infancy, but significant progress is being made, with real results becoming apparent. Metabonomics focuses on analyzing a host of small molecules, generally in the urine but potentially in any bodily fluid, and determining differences among disease states. Most work thus far has been in animal models, but progress is being made with human samples. For example, a fingerprint of thiazolidinedione treatment was detected in humans, although no difference was found looking at healthy versus diseased individuals [31]. A review of this field by Griffin and Nichols [32], with a focus on diabetes and related disorders, highlights both the potential of the field and the issues that remain.
Proteomics and transcriptomics have also been studied extensively as potential means of assessing subtypes of diabetes. For both technologies, diabetes presents a challenge, due to the inaccessibility of the relevant tissues in human studies. However, in some cases, such as proteins derived from the kidney and other proteins secreted or leaked into the urine, it is possible to carry out such studies. Susztak and Bottinger provided examples of the advances and issues with these technologies for studying diabetes [33].

Because DNA is much more readily available than the relevant proteins and mRNA in tissues, genetic studies of diabetes have advanced more rapidly. Both type 1 and type 2 diabetes have long been known to have significant genetic components. Type 1 diabetes has generally been most highly associated with genes involved in the immune response, while type 2 has been less easily addressed, due both to its complex nature and to its variable phenotype, such as age and severity of onset. Frequently, genes responsible for MODY (maturity-onset diabetes of the young) have been assumed to play a role in the adult form of the disease, but that connection is weak in some cases. Genetic associations with type 1 diabetes have been strongest with genes in the MHC cluster and, to a lesser extent, with insulin and components of its signaling pathways. Genetic association studies of type 2 diabetes had, until the advent of whole genome scans, focused on genes known to be involved in obesity, lipid handling, and signaling, but many of these associations are poorly replicated. One attempt to replicate associations with 134 single-nucleotide polymorphisms (SNPs) across more than 70 genes found that only 12 SNPs in nine genes could be replicated [34]. This suggests both that there are many genes of weak effect and that there are many genes not yet discovered. Advances in selecting new therapeutic targets and the most appropriate patients for each therapy depend on identifying these genes so that the true complexity and subtypes of the disease can be determined. For both forms of diabetes, whole genome analysis of large cohorts is beginning to make substantial inroads into understanding the etiology of the diseases. The availability of large (multi-thousand) family sets for type 1 diabetes patients and large case–control cohorts for genetic analysis, coupled with cheaper genotyping technology and an in-depth understanding of human genome structure, are allowing previously unsuspected genes to be linked with the disease and setting the stage for a much more detailed molecular understanding of the systems involved and how they may go awry in diabetes. The bottleneck in overall understanding has changed from identifying the appropriate genes to study, which used to be virtually impossible, to functionally characterizing the myriad genes that are now associated with diabetes and placing them in appropriate cellular and molecular pathways. Now that the multitude of genes that lead to diabetes are being uncovered, it will be possible to subdivide the disease into categories and determine whether the different subsets might best be treated by particular therapies.

Any review of novel genes associated with diabetes is certain to be outdated even before it is complete. With the advent of so many whole genome scans, cross-replication of findings across cohorts has recently become a priority among research groups, and many of the new associations have already been confirmed.
Those that have not been confirmed may still be real but suffer difficulties between populations due to differences with respect to disease, risk factors, ancestry, and other confounding variables that make comparisons challenging. Nevertheless, recent reviews can help make sense of the exploding databases [35]. Recent publications of whole genome scans include, in type 1 diabetes, 2000 British cases [36] and 563 European cases/483 European trios [37], and in type 2 diabetes, 694 French cases [38], 1464 Finnish and Swedish cases [39], 1161 Finnish cases [40], and 1399 Icelandic cases [41]. Although not completely concordant, the genes shared among these studies provide strong evidence that many are involved in the genetic basis of diabetes.

The most striking new gene to be associated with type 2 diabetes is TCF7L2, a transcription factor that regulates a number of genes relevant to diabetes [42]. Prior to the first publication on the association of TCF7L2 with diabetes, its role in the regulation of proglucagon had been established [43], but its high level of importance was not apparent. After the initial report, numerous publications emerged that associated it with diabetes, glucose levels, birth weight, and other related phenotypes in many different populations. TCF7L2 is clearly important in diabetes etiology, and its genotype may also be valuable for choosing the best mode of treatment. A retrospective analysis of patients treated with either metformin or a sulfonylurea showed no genetic difference in treatment effect with metformin but a significant genetic effect if treated with a sulfonylurea [44]. Thus, knowing a patient's TCF7L2 genotype may help guide a physician in a drug treatment decision, but it may also affect how aggressively a patient should be managed. For example, should patients with one or two high-risk alleles be started on medication or lifestyle modification sooner than patients at lower risk? Should their glucose levels be managed more aggressively? Knowing which prediabetic patients are more likely to develop the disease based on TCF7L2 genotype may help motivate physicians and patients to take more aggressive prophylactic approaches. The 14% of patients homozygous for the high-risk allele of rs12255372 [42], the most strongly associated SNP, might be willing to adopt more rigorous lifestyle changes than the majority who are at lower risk of disease.

Across all disease areas, pharmacogenetic studies have been plagued by a lack of replication, for a variety of reasons but generally related to study size [45]. Appropriately sized studies on the genetics of response to diabetes medications are now just beginning to emerge, but as in most other areas, replication is frequently lacking. There are two broad classes of drug response studies. In many cases, individuals have variations in enzymes responsible for the metabolism, uptake, or other aspects of drug handling; here a genetic analysis would inform the proper dosage level for a given drug, independent of disease subtype. Alternatively, genetics may predict the subtype of diabetes within the patient population and could determine which treatment, whether lifestyle or drug, would be most efficacious for that particular subtype.
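To show why study size and multiple testing drive the replication problems noted above, here is a minimal sketch of a single-SNP allelic case-control test with hypothetical counts. The chi-square contingency test and Bonferroni adjustment are generic textbook choices, not the specific methods of the cited studies.

# Minimal sketch: allelic association test for one SNP in a case-control
# cohort, with a Bonferroni correction for scanning many SNPs. Counts are
# hypothetical.
from scipy.stats import chi2_contingency

#           [risk allele, other allele]
cases = [620, 1380]      # allele counts among diabetic cases
controls = [520, 1480]   # allele counts among matched controls

chi2, p_value, dof, _ = chi2_contingency([cases, controls])
print(f"chi2 = {chi2:.1f}, nominal p = {p_value:.2e}")

# Testing 134 SNPs, as in the replication attempt cited above, demands an
# adjusted threshold; with Bonferroni, each SNP must clear 0.05 / 134.
print("significant after Bonferroni:", p_value < 0.05 / 134)

With these counts the SNP is nominally significant but fails the corrected threshold, which is one common way an initially reported association fails to replicate.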
The glitazone class of therapeutics acts through peroxisome proliferator-activated receptor (PPAR) gamma, and thus patients with variants in the gene may respond differently to such drugs. One of the first such studies involved extensive resequencing of the gene in 93 Hispanic women [46]. Novel SNPs were identified that were weakly associated with troglitazone response, but the number of subjects was too small to be convincing. In a subsequent study, a much larger population, 3548 individuals at high risk of diabetes, was examined for the P12A and other polymorphisms in PPARγ. This particular amino acid-changing SNP has frequently (though not always) been associated with risk of diabetes. The potential association of P12A with the efficacy of therapeutic intervention (lifestyle, metformin, or troglitazone) on development of diabetes was assessed rather than the risk of diabetes itself. Even though PPARγ is the target of troglitazone, no association with treatment effects was observed [47]. This could be explained by a multitude of possibilities. P12A may have little or no effect on troglitazone action, since it is far from the drug-binding site. Alternatively, the P12A polymorphism may exert its effect early in diabetes progression and hence may not affect progression at later times. Thus, even knowing that a patient is predisposed to diabetes because of variation in a particular gene may not be useful if the knowledge is not used at the appropriate time in disease progression. In contrast to the null troglitazone effect with PPARγ, metformin appeared to substantially benefit patients with the E23K variation in KCNJ11, a gene known to be associated with diabetes. Those with the E23 variant of KCNJ11 were less susceptible to progression when treated with lifestyle changes or metformin, while those with the K23 variant benefited only from lifestyle changes [48]. Further replication is required before recommendations can be made for clinical practice but, if replicated, this information would help guide the most appropriate therapy for particular subgroups of patients.

Other drugs have also been examined for associations with various candidate genes. These studies are often plagued by small numbers of people and/or varying definitions of drug response. In one study with pioglitazone, the population was relatively small (n = 113) and two different measures of drug response were used, resulting in different conclusions [49]. At least one prospective study has been initiated in which diabetic patients have been selected for antioxidant therapy based on genotype [50]. Prospective studies are the gold standard for proving an effect but are not always feasible because of the high cost. Categorizing patients for disease based on genetic or circulating markers will help choose the best therapeutic options once more data are available.

In addition, the choice of the most appropriate dose of a drug can be dependent on variation in ADME genes, as has been shown clearly for drugs such as warfarin [51]. Similarly, the dose of a treatment for diabetes can be affected by variation in genes completely unrelated to the underlying diabetic condition. For example, the OCT-1 gene is not thought to be involved in diabetes but still has an apparent effect on treatment. This gene is important in the uptake of metformin into the liver, where the drug acts on AMPK. When people with normal or variant OCT-1 genes were subjected to an oral glucose tolerance test in the presence or absence of metformin treatment, those with normal OCT-1 were found to clear glucose much more effectively and maintain lower insulin levels [52].
Although this study and the studies discussed above categorize patients as responders and nonresponders, it may be better to refine the analysis and choose the drug dose based on genetics, as is done with warfarin. As long as safety issues are not in question, patients with a nonresponder genotype may simply require a higher dose of medication, a possibility that could be tested in clinical trials. Thus, genes not directly involved in diabetes but affecting drug action can still be important to understand.

A hypothetical example of how a patient's genotype could be used in guiding treatment decisions is shown in Table 1. For simplicity, the minor allele frequency for each SNP is set at 15%, which is approximately what is observed in a prediabetic population for each of these SNPs. For TCF7L2 and KCNJ11, the minor allele is high risk, whereas the minor allele is low risk for PPARγ. With just these three genotypes, the prediabetic population generally considered to be at a similar high risk can be segregated into groups containing over 10% of the population actually at low risk, 65% at moderate risk, 23% at high risk, and 2% at very high risk. Even within these categories, differential treatment paradigms may be warranted based on individual genotypes. As more information accumulates relating to circulating biomarkers and additional genetic markers, these decision trees can be made much more powerful and personalized.

TABLE 1 Patient Genotype as a Potential Guide for Treatment Decisions

TCF7L2   PPARγ    KCNJ11   Diabetes Risk   Lifestyle Modification   Metformin   Percent of Population
aa       PP       KK       Very high       Aggressive               Titrate      1.9
aa       PP       EK/EE    High            Aggressive               Normal      10.8
aa       PA/AA    KK       High            Aggressive               Titrate      0.3
aa       PA/AA    EK/EE    Moderate        Normal                   Normal       1.9
AA/Aa    PP       KK       High            Aggressive               Titrate     10.8
AA/Aa    PP       EK/EE    Moderate        Normal                   Normal      61.4
AA/Aa    PA/AA    KK       Moderate        Normal                   Titrate      1.9
AA/Aa    PA/AA    EK/EE    Lower           Normal                   Optional    10.8
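The percentages in the last column of Table 1 follow from the quoted 15% figure when it is read as the frequency of the genotype group flagged at each locus (TCF7L2 aa, PPARγ PA/AA, KCNJ11 KK) and the three loci are treated as independent. The sketch below, a simplification rather than a population-genetics model, reproduces that column by multiplying the group frequencies.

# Minimal sketch: reproduce the "Percent of Population" column of Table 1.
# Each locus is reduced to two genotype groups; the group carrying the minor
# allele is assigned the 15% frequency quoted in the text, and the loci are
# assumed independent, so row frequencies are simple products.
from itertools import product

loci = [
    [("aa", 0.15), ("AA/Aa", 0.85)],     # TCF7L2 (minor allele = high risk)
    [("PA/AA", 0.15), ("PP", 0.85)],     # PPARgamma (minor allele = low risk)
    [("KK", 0.15), ("EK/EE", 0.85)],     # KCNJ11 (minor allele = high risk)
]

for (g1, p1), (g2, p2), (g3, p3) in product(*loci):
    print(f"{g1:6s} {g2:6s} {g3:6s} {100 * p1 * p2 * p3:5.1f}%")
# Rows with zero, one, two, or three minor-allele groups print 61.4%, 10.8%,
# 1.9%, and 0.3%, respectively, matching Table 1.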
RHEUMATOID ARTHRITIS

Rheumatoid arthritis (RA) is also a complex disease phenotype, defined by clinical criteria [53], with multiple genetic and environmental factors contributing to its pathogenesis and resulting in highly variable degrees of severity and responsiveness to therapy.
Selection of an initial therapeutic regimen for RA patients is currently based on disease duration and disease severity, particularly with respect to the progression or status of joint damage assessed radiographically. Most studies suggest that early aggressive treatment with disease-modifying antirheumatic drugs (DMARDs) helps to delay or prevent joint damage and leads to better long-term outcomes. All DMARDs have significant safety issues, and the cost of treatment with biological DMARDs is high, about $10,000 per year. In the United States the most commonly used DMARDs are methotrexate (MTX), sulfasalazine, hydroxychloroquine, and the biologicals targeting TNFα. Newly diagnosed patients are generally started on a DMARD, and future changes in the therapeutic regimen are based on clinical response. These include the addition of combinations of DMARDs as well as changes in DMARD and the use of corticosteroids. While assessment of clinical status often involves the use of biomarkers (diagnostics) as well as clinical parameters, these therapeutic choices, other than those dictated by specific safety concerns, are currently empirical and are not based on prospective genetic or biomarker factors.

Approximately 46% of RA patients achieve an ACR20 response with low-dose MTX (the most common initial DMARD) [54]. TNFα-targeted therapies are more successful at reducing or even halting the progression of joint damage, but again about 29 to 54% of patients do not achieve a satisfactory clinical (ACR20) response [54]. In addition to these agents, other biologicals are available or in development that have other specific targets. Recombinant human IL-1 receptor antagonist (anakinra) competes with IL-1 for stimulation of IL-1 receptors and has moderate efficacy in RA [54]. A recently approved biologic, abatacept (a fusion protein of CTLA-4 and a modified immunoglobulin Fc region), targets T-lymphocyte activation by blocking co-stimulation of T-lymphocyte CD28 by antigen-presenting cell CD80/86 [55]. Rituximab, an anti-CD20 monoclonal antibody previously used for treating B-cell lymphomas, has also been shown to be effective in RA [56,57]. Other biologicals targeting different cytokine pathways, such as tocilizumab, an anti-IL-6 receptor monoclonal antibody, have also shown efficacy in RA [58,59] and may become available in the future. Thus, there are multiple treatment possibilities for RA patients with distinctly different molecular targets and mechanisms of action, but currently, biomarkers are not used to select those more likely to have superior efficacy in individual patients.

The heterogeneity of RA is not only apparent from the unpredictable clinical response to approved and experimental treatments; it is also confirmed by studies of RA synovial tissue histology and patterns of gene expression within inflamed joints of RA patients [60]. The Online Mendelian Inheritance in Man (OMIM) database listing for RA (http://www.ncbi.nlm.nih.gov/entrez/dispomim.cgi?id=180300) includes references to at least 19 specific genes with significant associations with RA susceptibility, disease severity, or response to therapy. Whole genome-wide scans using single-nucleotide polymorphism (SNP) maps have become very cost-effective, enabling the rapid confirmation of genetic associations with RA [61].
In addition, RNA peripheral blood microarray-based RA studies (transcriptomic studies) are beginning to appear in the literature, and preliminary data from these suggest that patterns of gene expression may predict disease severity and response to specific therapies [62,63]. In this section we describe how biomarker, genomic, and transcriptomic data may be used in the future to help improve clinical outcomes for RA patients.

Some genetic associations with disease are really just associations with markers that tell us that a region of a particular chromosome is associated with RA or some feature of RA. However, with a greater understanding of gene function and the ability to perform whole genome-wide scans, a useful way to classify SNP associations with RA is to infer from this information whether it is likely that the genetic differences will have functional significance and influence T-lymphocyte activation, macrophage function, specific cytokine and inflammatory signaling pathways, and/or generalized inflammatory responses secondary to downstream dysregulation of these pathways. This information, together with an ever-increasing number of targeted therapeutic agents and a greater understanding of biomarkers predicting response to older drugs such as MTX, may lead to a more rational basis for treating individual patients or RA subpopulations. Below we describe a number of genetic and biomarker associations that suggest possible strategies for designing such individualized therapeutic regimens for RA patients. This is not intended to be a complete list of all such reported associations, but rather a selection to illustrate how a path to personalized medicine in RA may be investigated further. As such, we are proposing these hypotheses partly to accelerate this type of clinical research and hopefully to improve outcomes and lower the cost of treatment for RA. We have not found any evidence in the literature that biomarkers like these have actually been used prospectively and systematically to test personalized treatment hypotheses in RA.

In Table 2 we classify a number of known genetic and biomarker associations and speculate as to how this information may lead to therapeutic decisions. Using this information, it is also possible to create hypotheses that are easily testable using samples from randomized controlled clinical trials to achieve prospective scientific confirmation. These hypotheses also illustrate how the practice of personalized medicine in RA may evolve, and its potential benefits.
TABLE 2  Rheumatoid Arthritis Genetic and Biomarker Associations

1. Biomarker/gene: TNFα gene promoter, G-to-A SNP at position -308 [68]
   RA mechanism: Cytokine and severity of disease
   Function/role: TNFα response to stress or inflammatory stimuli: A/A gives the largest TNF response, G/G the lowest [67].
   Clinical correlation: DAS response to a TNF-targeted biologic is best in G/G (81%) vs. 42% in A/A and A/G [67].
   Possible treatment implications: Higher doses of TNF-targeted agents may be needed for A/A and A/G patients.

2. Biomarker/gene: IL-10 gene [72]
   RA mechanism: Cytokine and severity of disease
   Function/role: Anti-inflammatory cytokine.
   Clinical correlation: The G allele of the -2849 A/G SNP is associated with a higher progression rate and more joint damage; promoter SNPs -1082A, -819T, and -592A define a low-IL-10-producer haplotype [73].
   Possible treatment implications: IL-10 promoter genotypes or IL-10 haplotypes may correlate with response to IL-10 treatment.

3. Biomarker/gene: HLA-G, lack of the 14-bp polymorphism [71]
   Function/role: Soluble HLA-G is anti-inflammatory, inhibits NK-cell activity, and is increased by IL-10 [71].
   Clinical correlation: Significant association between a favorable response to MTX and lack of the 14-bp polymorphism of HLA-G, with an odds ratio of 2.46 for methotrexate responsiveness; methotrexate, a folate antagonist used for the treatment of rheumatoid arthritis, induced the production of soluble HLA-G molecules by increasing IL-10 [71].

4. Biomarker/gene: IL-1β and IL-8 mRNA in blood monocytes [78]
   RA mechanism: Myeloid cell (macrophage) activation and joint inflammation
   Function/role: Macrophage activation.
   Clinical correlation: Increased activated macrophages in synovium.
   Possible treatment implications: High levels may indicate better treatment response to anakinra or MTX.

5. Biomarker/gene: PTPN22 (a lymphoid-specific intracellular phosphatase)
   RA mechanism: TCR response
   Function/role: PTPN22 down-regulates T-cell activation mediated by TCR and CD28 co-stimulation [69].
   Clinical correlation: The R620W SNP minor allele 1858T, associated with RA, type 1 diabetes, SLE, and autoimmune thyroiditis, is present in approximately 28% of white patients with RA [69].
   Possible treatment implications: Levels of PTPN22 expression and/or presence of the 1858T SNP may predict a good response to T-cell-targeted therapy such as abatacept or cyclosporine A.

6. Biomarker/gene: PADI4 haplotype (peptidylarginine deiminase)
   RA mechanism: Auto-antigen generation
   Function/role: Posttranslational modification enzyme that converts arginine residues to citrulline [74].
   Clinical correlation: A functional haplotype of PADI4 is associated with susceptibility to rheumatoid arthritis [74] and with levels of antibody to citrullinated peptide.
   Possible treatment implications: Gold inhibits myeloid differentiation (R8) and may therefore reduce PAD; estrogen or estrogen receptor antagonists modulate PADI4 activity.

7. Biomarker/gene: HLA-DRB1 β-chain shared epitope (SE) [77]
   RA mechanism: TCR response and antigen presentation
   Function/role: May be associated with the initial T-cell-driven response to disease-initiation factors or auto-antigens; the CD80 and CD86 candidate genes are also linked to this locus.
   Clinical correlation: SE predisposes to seropositive RA and more severe disease, including extraarticular manifestations.
   Possible treatment implications: Response to MTX + etanercept is best in RA patients with two copies of SE [77]; abatacept blocks T-cell co-stimulation through CD80/CD86.

8. Biomarker/gene: IL-6 promoter, G/C polymorphism at position -174 [64]
   Function/role: C/C has a low IL-6 response to IL-1β; G/G and G/C give higher production of IL-6.
   Clinical correlation: Associated with B-cell neoplasms, Kaposi sarcoma [75,76], and systemic JRA [64].
   Possible treatment implications: The clinical benefit of IL-6-targeted therapies (R4) may be greater in carriers of the G allele, who may also benefit more from B-cell-targeted therapies such as rituximab.

Hypothesis A

If tocilizumab is approved as a DMARD for RA patients, it will likely be on the basis of results similar to those in published phase II and phase III trials. ACR20 response rates in two different trials were 63% for tocilizumab alone and 74% in combination with MTX at 16 weeks [58], and 59% in combination with MTX at 22 weeks [59]. Thus, roughly one-third of patients did not achieve an ACR20 clinical response. Yet this compound has significant safety issues, including a higher risk of infection, and if it becomes an approved drug, it will probably have a high cost of treatment, similar to other biologics. As described in Table 2,
there is a promoter SNP at position -174 of the IL-6 gene that significantly influences the amount of IL-6 produced in response to IL-1 and other inflammatory stimuli [64]. The frequency of the C allele is about 0.4 in healthy subjects. In vitro, C/C cell constructs do not increase IL-6 production in response to IL-1 stimulation, compared to a 3.6-fold increase for G/G cell constructs [64]. Thus, it is likely that this SNP has functional significance. It is reasonable to hypothesize that the therapeutic benefit of tocilizumab will be different in populations that do not increase IL-6 signaling as much as a population with a robust increase in IL-6 during disease flares. If 30 to 40% of RA patients do not achieve a good clinical response with tocilizumab, could these be patients whose disease is not so dependent on this pathway (e.g., C/C genotype), or patients who produce such large amounts of IL-6 (e.g., G/G genotype) that higher doses of tocilizumab would have been needed? These hypotheses are easily tested in the clinic and could lead to a rational personalized medicine treatment regimen that would be more cost-effective and have a better efficacy and safety profile.
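For readers who want the genotype arithmetic made explicit, a minimal sketch follows. It assumes Hardy-Weinberg equilibrium and the C-allele frequency of about 0.4 quoted above; observed genotype frequencies in a particular RA cohort may of course differ.

    # Expected IL-6 -174 genotype frequencies, assuming Hardy-Weinberg
    # equilibrium and a C-allele frequency of ~0.4 [64]. Illustrative only.
    c = 0.4                  # frequency of the low-producer C allele
    g = 1.0 - c              # frequency of the G allele

    genotype_freq = {
        "C/C": c * c,        # low IL-6 response to IL-1 stimulation
        "G/C": 2 * g * c,
        "G/G": g * g,        # robust IL-6 response
    }

    for genotype, freq in genotype_freq.items():
        print(f"{genotype}: {freq:.0%}")
    # Prints: C/C: 16%, G/C: 48%, G/G: 36%

Under these assumptions, roughly one patient in six would carry the low-responding C/C genotype, a fraction comparable in size to the group of tocilizumab nonresponders discussed above.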
Hypothesis B

TNFα-targeted agents are very effective DMARDs in RA. Yet, on average, 40% of patients do not achieve an ACR20 response. Again, these agents cost $8000 to $10,000 per year and have significant safety issues [54]. As noted in Table 2, there is a -308 promoter G-to-A SNP in the TNFα gene with probable functional significance, since it is associated with outcomes in several infectious diseases and with different clinical outcomes in septic shock [65]. The allele frequencies are reported to be 0.77 for allele G and 0.23 for allele A in a Swedish study [66]. In one published study on RA in which the clinical response to infliximab was compared between -308 G/A genotypes, a disease activity score (DAS28) improvement of 1.2 occurred in 81% of G/G patients and in only 42% of A/A and A/G patients. The clinical improvement based on the DAS28 score was about twice as good in the G/G patients as in the A/A and A/G patients [67]. TNFα promoter SNPs, including the -308 SNP, are also associated with clinical outcomes in RA [68]. If these findings were replicated, how could that lead to a personalized medicine approach that improved outcomes and reduced the overall cost of therapy in a population of RA patients? Using the gene frequency and response information above, 84% of responders would be G/G, 4% would be A/A, and 12% would be A/G. Clearly, A/A and A/G patients would be far better off trying a different type of DMARD first, or perhaps they require a different dose or dose regimen. These clinical differences in response to a TNFα-blocking agent could also occur if the amount of TNFα produced during disease flares is much greater in patients with the A allele and the blood level of their TNFα blockers is not high enough to neutralize TNFα activity at these times. In other words, it is possible that the dose of TNFα-targeted agents required for a durable clinical response is really different in these populations. Since the dose and frequency of dosing in the label for these agents are based on the average response in groups given different doses and dosing frequencies, it is quite possible that response rates could be improved using higher doses in patients with the A allele. Patients with the G/G genotype may actually do well with lower doses. If this hypothesis is proven correct, there is an opportunity for improved patient outcomes by using higher doses in a small number of A/A and A/G patients, and for cost savings with improved safety by using lower doses in a much larger number of G/G patients. Clearly, both of these personalized medicine approaches to the use of TNFα-targeted agents can be tested prospectively and could greatly influence patient outcomes.
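The responder arithmetic behind breakdowns like this can be sketched as follows. This is a hedged illustration, assuming Hardy-Weinberg genotype frequencies derived from the Swedish allele frequencies [66] and the genotype-specific response rates from the infliximab study [67]; the resulting split is sensitive to the genotype frequencies assumed, which is why published breakdowns (such as the one quoted above) can differ.

    # Illustrative responder arithmetic for the TNFα -308 G/A SNP.
    # Assumptions: Hardy-Weinberg genotype frequencies from allele
    # frequencies G = 0.77, A = 0.23 [66]; genotype-specific response
    # rates (DAS28 improvement of 1.2 on infliximab) from [67].
    p_g, p_a = 0.77, 0.23
    genotype_freq = {"G/G": p_g ** 2, "A/G": 2 * p_g * p_a, "A/A": p_a ** 2}
    response_rate = {"G/G": 0.81, "A/G": 0.42, "A/A": 0.42}

    # Joint probability of carrying each genotype AND responding
    joint = {g: genotype_freq[g] * response_rate[g] for g in genotype_freq}
    overall = sum(joint.values())

    for g in joint:
        print(f"{g}: {genotype_freq[g]:.1%} of patients, "
              f"{joint[g] / overall:.1%} of responders")
    print(f"Overall response rate: {overall:.1%}")
    # Under these assumptions, G/G patients make up ~59% of the
    # population but ~74% of responders.

Whatever the exact split, the qualitative point is the same: responders are heavily enriched for the G/G genotype, so genotyping before treatment could redirect many A-allele carriers to a different agent or dose.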
Hypothesis C

Abatacept was recently approved as a DMARD in RA for patients who do not respond adequately to an earlier DMARD. It blocks the T-lymphocyte co-stimulation, mediated by the interaction between CD28 and CD80/86 (B7-1 and B7-2), that is needed to fully activate a T-lymphocyte-driven immune response, mimicking natural CTLA-4-mediated down-regulation of immune responses. In controlled trials with MTX background therapy, about 60% of patients achieved an ACR20 response, compared to about 30% on MTX alone [55]. Again, because of high cost and significant safety risks, one asks whether there is a testable personalized medicine hypothesis that could improve the probability of response. PTPN22 is a lymphoid-specific phosphatase that down-regulates T-cell activation mediated by TCR and CD28 co-stimulation. This gene has a very strong association with RA, and in particular there is a SNP associated with RA and other autoimmune diseases, such as type 1 diabetes [69]. This SNP, a 1858C-T transition, results in an Arg620-to-Trp amino acid change that alters the protein's function as a negative regulator of T-cell activation. This allele is present in approximately 17% of white people from the general population and in approximately 28% of white people with RA. Other variants of the PTPN22 gene are also probably associated with RA [70]. The relationship between response to abatacept and PTPN22 genotype has not been investigated, but it is likely that the T-lymphocyte co-stimulation pathway is more active in RA patients with a variant of the PTPN22 gene that results in reduced phosphatase function, such as the 1858T SNP. Should these patients receive a drug such as abatacept as first-line treatment for RA instead of waiting to fail another DMARD? Perhaps an alternative biomarker could be a measure of PTPN22 phosphatase activity or gene expression in lymphoid cells. Regardless of the biomarker used, this clinical question can be answered easily in appropriately designed clinical trials.

Hypothesis D

MTX is commonly used as the first DMARD treatment, and it is used increasingly in combination with biologics. Often, the period of time needed to assess the therapeutic benefit of MTX is prolonged as doses are increased and other drugs are added. For those patients who do not achieve a satisfactory response to MTX, this practice often results in MTX dose escalation into ranges more likely to cause liver damage, pulmonary toxicity, or bone marrow suppression, in addition to allowing further progression of disease and joint damage. Because of its low cost and acceptable safety profile, a strategy that enriches the population of patients started on MTX for those more likely to respond, and provides alternative first treatments for patients less likely to respond, could lead to significant improvements in overall outcomes and treatment costs. Several alternatively spliced HLA-G mRNA isoforms have been described, including a variant of the HLA-G gene with the 14-bp sequence deleted, and a significant association has been reported between a favorable response to MTX and a lack of the 14-bp polymorphism of the HLA-G gene, with an odds ratio of 2.46 for MTX responsiveness [71]. This finding, if confirmed, may enable an enrichment strategy for RA patients more likely to respond to MTX. Interestingly, in vitro, MTX also induces the production of soluble HLA-G molecules by increasing IL-10. Promoter polymorphisms of the IL-10 gene have also been reported to have functional significance and associations with RA [72]. Using samples collected from clinical trials with MTX treatment arms, it would be possible to test the hypothesis that good prospective MTX responders can be identified by evaluating these two biomarkers.
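To give a feel for what an odds ratio of 2.46 could mean in practice, the sketch below converts it into a response probability for patients lacking the 14-bp polymorphism. The 46% baseline is a hypothetical choice, borrowed from the overall low-dose MTX ACR20 rate cited earlier [54]; the actual genotype-specific response rates would have to come from the source study [71].

    # Converting the reported odds ratio for MTX responsiveness (OR = 2.46
    # for lack of the HLA-G 14-bp polymorphism [71]) into an illustrative
    # response probability. The 46% reference rate is a hypothetical
    # stand-in taken from the overall low-dose MTX ACR20 figure [54].

    def apply_odds_ratio(p_reference: float, odds_ratio: float) -> float:
        """Probability in the favorable group, given the reference group's
        probability and the odds ratio between the two groups."""
        odds = p_reference / (1.0 - p_reference)
        favorable_odds = odds * odds_ratio
        return favorable_odds / (1.0 + favorable_odds)

    p_with_14bp = 0.46                                  # hypothetical baseline
    p_without_14bp = apply_odds_ratio(p_with_14bp, 2.46)
    print(f"Expected response without the 14-bp polymorphism: "
          f"{p_without_14bp:.0%}")                      # about 68%

Even under these rough assumptions, the gap (46% versus roughly 68%) suggests how a single genotype could meaningfully shift the expected value of starting a patient on MTX rather than on an alternative first-line agent.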
CONCLUSIONS

The still experimental practice of personalized medicine in cancer patients described here illustrates all of the components necessary to develop effective personalized medicine treatment strategies that are systematic in nature: diagnostics, targeted agents with well-understood mechanisms of action, an understanding of the molecular pathways important in disease progression, and ways of rapidly assessing clinical success. This has been enabled especially by the power of new genomic, biomarker, and informatics technologies. These technologies have also been applied to other chronic disease states where the potential for personalized medicine also exists. Diabetes and rheumatoid arthritis are in many ways like cancer, with genetic and environmental factors contributing to a very heterogeneous spectrum of disease. As the understanding of what drives these chronic disease phenotypes improves and more homogeneous subpopulations can be identified, treatment regimens will become more personalized. This trend will lead to safer and more efficacious treatments being available earlier, and will reduce the burden of disease on individuals and society.
REFERENCES

1. Jemal A, Siegel R, Ward E, Murray T, Xu J, Thun MJ (2007). Cancer statistics, 2007. CA: Cancer J Clin, 57(1):43–66.
2. Vessella RL, Pantel K, Mohla S (2007). Tumor cell dormancy: an NCI workshop report. Cancer Biol Ther, 6(9):1496–1504.
3. Maron BJ, Hauser RG (2007). Perspectives on the failure of pharmaceutical and medical device industries to fully protect public health interests. Am J Cardiol, 100(1):147–151.
4. Goodsaid F, Frueh FW (2007). Implementing the U.S. FDA guidance on pharmacogenomic data submissions. Environ Mol Mutagen, 48(5):354–358.
5. Jain KK (2006). Challenges of drug discovery for personalized medicine. Curr Opin Mol Ther, 8(6):487–492.
6. DiMasi JA, Grabowski HG (2007). Economics of new oncology drug development. J Clin Oncol, 25(2):209–216.
7. O'Donovan N, Crown J (2007). EGFR and HER-2 antagonists in breast cancer. Anticancer Res, 27(3A):1285–1294.
8. Vogel CL, Cobleigh MA, Tripathy D, et al. (2002). Efficacy and safety of trastuzumab as a single agent in first-line treatment of HER2-overexpressing metastatic breast cancer. J Clin Oncol, 20(3):719–726.
9. Huang S (2004). Back to the biology in systems biology: what can we learn from biomolecular networks? Brief Funct Genom Proteom, 2(4):279–297.
10. Wang E, Lenferink A, O'Connor-McCourt M (2007). Cancer systems biology: exploring cancer-associated genes on cellular networks. Cell Mol Life Sci, 64(14):1752–1762.
11. Aranda-Anzaldo A (2001). Cancer development and progression: a non-adaptive process driven by genetic drift. Acta Biotheor, 49(2):89–108.
12. Hanahan D, Weinberg RA (2000). The hallmarks of cancer. Cell, 100(1):57–70.
13. Webb CP, Vande Woude GF (2000). Genes that regulate metastasis and angiogenesis. J Neurooncol, 50(1–2):71–87.
14. Balakrishnan A, Bleeker FE, Lamba S, et al. (2007). Novel somatic and germline mutations in cancer candidate genes in glioblastoma, melanoma, and pancreatic carcinoma. Cancer Res, 67(8):3545–3550.
15. Sjoblom T, Jones S, Wood LD, et al. (2006). The consensus coding sequences of human breast and colorectal cancers. Science, 314(5797):268–274.
16. Weinstein IB, Joe AK (2006). Mechanisms of disease: Oncogene addiction: a rationale for molecular targeting in cancer therapy. Nat Clin Pract, 3(8):448–457.
17. Haney SA (2007). Increasing the robustness and validity of RNAi screens. Pharmacogenomics, 8(8):1037–1049.
18. Michor F, Nowak MA, Iwasa Y (2006). Evolution of resistance to cancer therapy. Curr Pharm Des, 12(3):261–271.
19. Hochhaus A, Erben P, Ernst T, Mueller MC (2007). Resistance to targeted therapy in chronic myelogenous leukemia. Semin Hematol, 44(1 Suppl 1):S15–S24.
20. Bublil EM, Yarden Y (2007). The EGF receptor family: spearheading a merger of signaling and therapeutics. Curr Opin Cell Biol, 19(2):124–134.
21. Kaklamani VG, Gradishar WJ (2006). Gene expression in breast cancer. Curr Treat Options Oncol, 7(2):123–128.
22. Webb CP, Pass HI (2004). Translation research: from accurate diagnosis to appropriate treatment. J Transl Med, 2(1):35.
23. Rhodes DR, Kalyana-Sundaram S, Tomlins SA, et al. (2007). Molecular concepts analysis links tumors, pathways, mechanisms, and drugs. Neoplasia, 9(5):443–454.
24. Miller LD, Liu ET (2007). Expression genomics in breast cancer research: microarrays at the crossroads of biology and medicine. Breast Cancer Res, 9(2):206.
25. Lamb J, Crawford ED, Peck D, et al. (2006). The Connectivity Map: using gene expression signatures to connect small molecules, genes, and disease. Science, 313(5795):1929–1935.
26. Lee JK, Havaleshko DM, Cho H, et al. (2007). A strategy for predicting the chemosensitivity of human cancers and its application to drug discovery. Proc Natl Acad Sci USA, 104(32):13086–13091.
27. Potti A, Dressman HK, Bild A, et al. (2006). Genomic signatures to guide the use of chemotherapeutics. Nat Med, 12(11):1294–1300.
28. Shtiegman K, Kochupurakkal BS, Zwang Y, et al. (2007). Defective ubiquitinylation of EGFR mutants of lung cancer confers prolonged signaling. Oncogene, 26(49):6968–6978.
29. Bild AH, Yao G, Chang JT, et al. (2006). Oncogenic pathway signatures in human cancers as a guide to targeted therapies. Nature, 439(7074):353–357.
30. Schulenburg A, Ulrich-Pur H, Thurnher D, et al. (2006). Neoplastic stem cells: a novel therapeutic target in clinical oncology. Cancer, 107(10):2512–2520.
31. Van Doorn M, Vogels J, Tas A, et al. (2006). Evaluation of metabolite profiles as biomarkers for the pharmacological effects of thiazolidinediones in type 2 diabetes mellitus patients and healthy volunteers. Br J Clin Pharmacol, 63:562–574.
32. Griffin JL, Nichols AW (2006). Metabolomics as a functional genomic tool for understanding lipid dysfunction in diabetes, obesity and related disorders. Pharmacogenomics, 7:1095–1107.
33. Susztak K, Bottinger EP (2006). Diabetic nephropathy: a frontier for personalized medicine. J Am Soc Nephrol, 17:361–367.
34. Willer CJ, Bonnycastle LL, Conneely KN, et al. (2007). Screening of 134 single nucleotide polymorphisms (SNPs) previously associated with type 2 diabetes replicates association with 12 SNPs in nine genes. Diabetes, 56:256–264.
35. Sale MM, Rich SS (2007). Genetic contributions to type 2 diabetes: recent insights. Expert Rev Mol Diagn, 7:207–217.
36. Wellcome Trust Case Control Consortium (2007). Genome-wide association study of 14,000 cases of seven common diseases and 3000 shared controls. Nature, 447:661–678.
37. Hakonarson H, Grant SFA, Bradfield JP, et al. (2007). A genome-wide association study identifies KIAA0350 as a type 1 diabetes gene. Nature, 448:591–594.
38. Sladek R, Rocheleau G, Rung J, et al. (2007). A genome-wide association study identifies novel risk loci for type 2 diabetes. Nature, 445:881–885.
39. Saxena R, Voight BF, Lyssenko V, et al. (Diabetes Genetics Initiative) (2007). Genome-wide association analysis identifies loci for type 2 diabetes and triglyceride levels. Science, 316:1331–1336.
40. Scott LJ, Mohlke KL, Bonnycastle LL, et al. (2007). A genome-wide association study of type 2 diabetes in Finns detects multiple susceptibility variants. Science, 316:1341–1345.
41. Steinthorsdottir V, Thorleifsson G, Reynisdottir I, et al. (2007). A variant in CDKAL1 influences insulin response and risk of type 2 diabetes. Nat Genet, 39:770–775.
42. Grant SFA, Thorleifsson G, Reynisdottir I, et al. (2006). Variant of transcription factor 7-like 2 (TCF7L2) gene confers risk of type 2 diabetes. Nat Genet, 38:320–323.
43. Yi F, Brubaker PL, Jin T (2005). TCF-4 mediates cell type-specific regulation of proglucagon gene expression by β-catenin and glycogen synthase kinase-3β. J Biol Chem, 280:1457–1464.
44. Pearson ER, Donnelly LA, Kimber C, et al. (2007). Variation in TCF7L2 influences therapeutic response to sulfonylureas. Diabetes, 56:2178–2182.
45. Ioannidis JPA, Trikalinos TA, Ntzani EE, Contopoulos-Ioannidis DG (2003). Genetic associations in large versus small studies: an empirical assessment. Lancet, 361:567–571.
46. Wolford JK, Yeatts KA, Dhanjal SK, et al. (2005). Sequence variation in PPARγ may underlie differential response to troglitazone. Diabetes, 54:3319–3325.
47. Florez JC, Jablonski KA, Sun MW, et al. (2007). Effects of the type 2 diabetes-associated PPARγ P12A polymorphism on progression to diabetes and response to troglitazone. J Clin Endocrinol Metab, 92:1502–1509.
48. Florez JC, Jablonski KA, Kahn SE, et al. (2007). Type 2 diabetes-associated missense polymorphisms KCNJ11 E23K and ABCC8 A1369S influence progression to diabetes and response to interventions in the Diabetes Prevention Program. Diabetes, 56:531–536.
49. Wang G, Wang X, Zhang Q, Ma Z (2007). Response to pioglitazone treatment is associated with the lipoprotein lipase S447X variant in subjects with type 2 diabetes mellitus. Int J Clin Pract, 61:552–557.
50. Levy AP (2006). Application of pharmacogenomics in the prevention of diabetic cardiovascular disease: mechanistic basis and the clinical evidence for utilization of the haptoglobin genotype in determining benefit from antioxidant therapy. Pharmacol Ther, 112:501–512.
51. Yin T, Miyata T (2007). Warfarin dose and the pharmacogenomics of CYP2C9 and VKORC1: rationale and perspectives. Thromb Res, 120:1–10.
52. Shu Y, Sheardown SA, Brown C, et al. (2007). Effect of genetic variation in the organic cation transporter 1 (OCT1) on metformin action. J Clin Invest, 117:1422–1431.
53. Arnett FC, Edworthy SM, Bloch DA, et al. (1988). The American Rheumatism Association 1987 revised criteria for the classification of rheumatoid arthritis. Arthritis Rheum, 31:315–324.
54. Olsen NJ, Stein CM (2004). New drugs for rheumatoid arthritis. N Engl J Med, 350:2167–2179.
55. Kremer JM, Dougados M, Emery P, et al. (2005). Treatment of rheumatoid arthritis with the selective costimulation modulator abatacept: twelve-month results of a phase IIb, double-blind, randomized, placebo-controlled trial. Arthritis Rheum, 52:2263–2271.
56. Cohen SB, Emery P, Greenwald MW, et al. (REFLEX Trial Group) (2006). Rituximab for rheumatoid arthritis refractory to anti-tumor necrosis factor therapy: results of a multicenter, randomized, double-blind, placebo-controlled, phase III trial evaluating primary efficacy and safety at twenty-four weeks. Arthritis Rheum, 54:2793–2806.
57. Emery P, Fleischmann R, Filipowicz-Sosnowska A, et al. (DANCER Study Group) (2006). The efficacy and safety of rituximab in patients with active rheumatoid arthritis despite methotrexate treatment: results of a phase IIb randomized, double-blind, placebo-controlled, dose-ranging trial. Arthritis Rheum, 54:1390–1400.
58. Maini RN, Taylor PC, Szechinski J, et al. (CHARISMA Study Group) (2006). Double-blind randomized controlled clinical trial of the interleukin-6 receptor antagonist, tocilizumab, in European patients with rheumatoid arthritis who had an incomplete response to methotrexate. Arthritis Rheum, 54:2817–2829.
59. Smolen AB, Rubbert-Roth A, Alecock E, Alten R, Woodworth T (2007). Tocilizumab, a novel monoclonal antibody targeting IL-6 signalling, significantly reduces disease activity in patients with rheumatoid arthritis. Ann Rheum Dis, 66(Suppl II):87.
60. Glocker MO, Guthke R, Kekow J, Thiesen H-J (2006). Rheumatoid arthritis, a complex multifactorial disease: on the way toward individualized medicine. Med Res Rev, 26:63–87.
61. Docherty SJ, Butcher LM, Schalkwyk LC, Plomin R (2007). Applicability of DNA pools on 500 K SNP microarrays for cost-effective initial screens in genome-wide association studies. BMC Genom, 8:214.
62. Edwards CJ, Feldman JL, Beech J, et al. (2007). Molecular profile of peripheral blood mononuclear cells from patients with rheumatoid arthritis. Mol Med, 13:40–58.
63. Lindberg J, Klint E, Catrina AI, et al. (2006). Effect of infliximab on mRNA expression profiles in synovial tissue of rheumatoid arthritis patients. Arthritis Res Ther, 8:R179.
64. Fishman D, Faulds G, Jeffery R, et al. (1998). The effect of novel polymorphisms in the interleukin-6 (IL-6) gene on IL-6 transcription and plasma IL-6 levels, and an association with systemic-onset juvenile chronic arthritis. J Clin Invest, 102:1369–1376.
65. Mira J-P, Cariou A, Grall F, et al. (1999). Association of TNF2, a TNF-alpha promoter polymorphism, with septic shock susceptibility and mortality: a multicenter study. JAMA, 282:561–568.
66. Rosmond R, Chagnon M, Bouchard C, Bjorntorp P (2001). G-308A polymorphism of the tumor necrosis factor alpha gene promoter and salivary cortisol secretion. J Clin Endocrinol Metab, 86:2178–2180.
67. Mugnier B, Balandraud N, Darque A, Roudier C, Roudier J, Reviron D (2003). Polymorphism at position -308 of the tumor necrosis factor alpha gene influences outcome of infliximab therapy in rheumatoid arthritis. Arthritis Rheum, 48:1849–1852.
68. Fonseca JE, Cavaleiro J, Teles J, et al. (2007). Contribution for new genetic markers of rheumatoid arthritis activity and severity: sequencing of the tumor necrosis factor-alpha gene promoter. Arthritis Res Ther, 9:R37.
69. Begovich AB, Carlton VEH, Honigberg LA, et al. (2004). A missense single-nucleotide polymorphism in a gene encoding a protein tyrosine phosphatase (PTPN22) is associated with rheumatoid arthritis. Am J Hum Genet, 75:330–337.
70. Carlton VEH, Hu X, Chokkalingam AP, et al. (2005). PTPN22 genetic variation: evidence for multiple variants associated with rheumatoid arthritis. Am J Hum Genet, 77:567–581.
71. Rizzo R, Rubini M, Govoni M, et al. (2006). HLA-G 14-bp polymorphism regulates the methotrexate response in rheumatoid arthritis. Pharmacogenet Genom, 16:615–623.
72. Lard LR, van Gaalen FA, Schonkeren JJM, et al. (2003). Association of the -2849 interleukin-10 promoter polymorphism with autoantibody production and joint destruction in rheumatoid arthritis. Arthritis Rheum, 48:1841–1848.
73. Summers AM, Summers CW, Drucker DB, Barson A, Hajeer AH, Hutchinson IV (2000). Association of IL-10 genotype with sudden infant death syndrome. Hum Immunol, 61:1270–1273.
74. Suzuki A, Yamada R, Chang X, et al. (2003). Functional haplotypes of PADI4, encoding citrullinating enzyme peptidylarginine deiminase 4, are associated with rheumatoid arthritis. Nat Genet, 34:395–402.
75. Kawano M, Hirano T, Matsuda T, et al. (1988). Autocrine generation and requirement of BSF-2/IL-6 for human multiple myelomas. Nature, 332:83–85.
76. Foster CB, Lehrnbecher T, Samuels S, et al. (2000). An IL6 promoter polymorphism is associated with a lifetime risk of development of Kaposi sarcoma in men infected with human immunodeficiency virus. Blood, 96:2562–2567.
77. Criswell LA, Lum RF, Turner KN, et al. (2004). The influence of genetic variation in the HLA-DRB1 and LTA-TNF regions on the response to treatment of early rheumatoid arthritis with methotrexate or etanercept. Arthritis Rheum, 50:2750–2756.
78. Schulze-Koops H, Davis LS, Kavanaugh AF, Lipsky PE (1997). Elevated cytokine messenger RNA levels in the peripheral blood of patients with rheumatoid arthritis suggest different degrees of myeloid activation. Arthritis Rheum, 40:639–647.
35

ETHICS OF BIOMARKERS: THE BORDERS OF INVESTIGATIVE RESEARCH, INFORMED CONSENT, AND PATIENT PROTECTION

Heather Walmsley, M.A.
Lancaster University, Bailrigg, UK
Michael Burgess, Ph.D., Jacquelyn Brinkman, M.Sc., Richard Hegele, M.D., Ph.D., Janet Wilson-McManus, M.T., B.Sc., and Bruce McManus, M.D., Ph.D.
University of British Columbia, Vancouver, British Columbia, Canada
INTRODUCTION

In 2000, the Icelandic Parliament (Althingi) authorized an Iceland-based subsidiary of deCODE Genetics to construct a biobank of genetic samples from the Icelandic population [1–4]. The Althingi also granted deCODE (which had a five-year commercial agreement with the Swiss pharmaceutical company Roche Holdings) a 12-year exclusive commercial license to use the country's medical records, in return for an annual 70 million kronur (approximately 1 million USD in 2007). These records were to be gathered, together with lifestyle and extensive genealogical data, into the Icelandic Health Sector Database. The resulting public outcry and academic critique have been well documented [3,5,6]. Several hundred articles appeared in newspapers [7],
many of them referring to the sale of the "genetic heritage" of the nation (see http://www.mannvernd.is/english/home.html for a list of media articles). A grass-roots lobby group, Mannvernd, emerged to fight the project, complaining principally about the use of "presumed consent" and the commercial aspects of the agreement [4]. Despite these critiques, Iceland was one of the first countries to discuss how to structure a biobank at the political level [8]. When a population geneticist from Stanford University announced plans for a human genome diversity project, he received a similar reception. This project aimed to challenge the ethnocentrism of the Human Genome Project by studying 722 diverse "anthropologically unique" human populations [9]. Indigenous activists were unconvinced. Debra Harry worried that "these new 'scientific findings' concerning our origins can be used to challenge aboriginal rights to territory, resources, and self-determination" [10]. The Canada-based Rural Advancement Foundation International (RAFI), now the ETC Group (Action Group on Erosion, Technology and Concentration), characterized the list of 722 as a list of peoples who had suffered most at the hands of Western "progress" and campaigned against this "bio-colonial Vampire Project." The project has since stimulated productive dialogue about the importance of race and ethnicity to health and genetic research. In June 2007, UK Biobank opened its doors to donors in Glasgow, the fourth of about 35 planned donation points [11]. The project hopes to recruit a total of 500,000 volunteers aged between 40 and 69. This biobank is a prospective cohort study hoping to contribute to disease risk prediction through the identification of biomarkers [12]. UK Biobank has recognized the need to build public trust and knowledge. This has led to public engagement, although some critics suggest that public acceptance of this project has been carefully cultivated, with varying success, in a context of controversy and distrust [13,14]. The UK is no stranger to human tissue scandals. In 2001 it became known that the organs of deceased children were routinely kept for research purposes at Alder Hey Hospital in Liverpool and Bristol Royal Infirmary without their parents' knowledge or consent [15]. Public outrage led to a near moratorium on tissue banking and research. An expensive system of accreditation of specimen collections by the newly formed Human Tissue Authority eventually followed [16]. These three examples illustrate the increasingly visible role of large biobanking projects within biomedical research. They publicly announce the complexity of international collaborations, commercial involvement, and public–private partnerships that have become the norm in biomedical research. They also reveal major public concerns with the social and ethical implications of these projects: for privacy, indigenous identity and self-determination, and ownership and control over body parts and medical data for individuals and their families. Traditionally, the interests of patient protection and investigative research have been served jointly by Research Ethics Boards and the guiding principles of biomedical ethics: respect for autonomy, beneficence, nonmaleficence, and
justice. These have been enacted through the process of obtaining informed consent, alongside measures to protect the privacy and confidentiality of research participants and guard against discrimination. They have ensured, to a reasonable degree, the ethical enactment, legitimacy, and public acceptance of research projects. Today, however, the demands of biomedical research, of the informed consent process, and of patient protection, especially privacy, are beginning to jostle against each other uncomfortably. They are engaged in an increasingly public struggle, and there appears to be ever-decreasing space in which to maneuver. If biomarker research is to proceed without unnecessary constraint toward improving patient care in a manner that individuals and society at large deem ethical, radical intervention is needed. This chapter begins by outlining the diversity of social and ethical issues surrounding biomarker-related research and its applications. Focusing on the increasingly central process of banking human biological materials and data, it then traces a recent trend toward large-scale population biobanks. Advances in genomics and computational biology have brought a whole raft of new questions and concerns to the domain of biomedical ethics. The peculiarities of these large biobanks, in the context of divergent legislative frameworks and increasing demands for international networking and collaboration, make such challenges ever starker. Privacy advocates argue that studies using DNA can never promise anonymity to their donors [17,18]. Prospective collections of human DNA and tissues seem doomed either to fail the demands of fully informed consent or to face the crippling financial and administrative burden of seeking repeated consent. Population biobanks are increasingly conceived as national resources [19]. Indigenous and wider publics are now vocal in their concerns about ownership, commercialization, and privacy: essentially, about who uses their DNA, and how. We do not set out here to design new governance frameworks for biobanking, or to suggest the best ethical protocols for biomarker research, although these are sorely needed. The aim of this chapter is to suggest legitimate processes for doing so. In our search we veer outside the realm of ethics as traditionally conceived, into the domain of political science. New theories of deliberative democracy facilitate public participation in policy decision making; they aim for deliberation and communicative action rather than strategic action, and they have much to offer. Our conclusion is that ethics must embrace politics. Those involved in biomarker-related research are essential, as informers and participants in democratic public deliberation.
BIOMARKERS, ETHICS, AND INVESTIGATIVE RESEARCH

What are the ethics of biomarkers? The application of biomarkers to assess the risks of disease, adverse effects of drugs, and organ rejection, and for the development of targeted drugs and treatments, is essential. Yet the search for biomarkers of exposure, effect, and susceptibility to disease, toxic chemicals,
or pharmaceutical drugs raises many diverse ethical questions. Some of the most common debates surround the impact of developing predictive genetic tests as biomarkers for disease and their use in pharmacogenomics. Neuroimaging, for example, promises much for the identification of biomarkers of diseases such as Alzheimer disease, offering earlier prediction capability than is currently available. But this technology may have unintended social or ethical consequences [20]. It could lead to reduced autonomy for patients at an earlier age if they are not allowed to work or drive. New tests may not be distributed equitably if certain health insurance plans refuse to include the test. Physicians may not be adequately prepared to counsel patients and interpret biomarker test results. Most important, the value of early prediction is questionable for a disease that as yet has no effective treatment. Ethical concerns surrounding the use of biomonitoring in the workplace or by insurers have also been voiced within the health sciences literature and the wider media [21–23]. Biomarkers offer some hope for monitoring exposure to toxic chemicals in the workplace and protecting the health of employees. In an environment of high exposure to carcinogens, for example, a test could be developed to identify persons with an increased genetic risk of developing cancer from a specific dose, who could then be excluded from the workplace. This would probably reduce the number of workers at risk of developing cancer. There are, however, concerns about discrimination, as well as about the reliability of such tests for measuring risk [22]. Is it right for an employer to exclude people from an occupation or workplace on genetic grounds rather than reducing carcinogen exposure for all employees? Some high-risk individuals could spend a lifetime working under high exposure to carcinogens and never develop cancer, whereas some low-risk co-workers might. There are also fears that insurance companies could use biomonitoring methods to exclude people from insurance opportunities on the basis of genetic risk* [21,24,25]. Confidentiality, interpretation of biomarker data, and the problem of obtaining genuinely informed consent emerge as the key ethical tension zones identified by occupational health stakeholders involved in one research project in Quebec [21]. The promise of pharmacogenomics and the ethical issues it raises have also been the subject of lengthy debate. The Human Genome Organization (HUGO) Ethics Committee released a statement in 2007 recognizing that "pharmacogenomics has the potential to maximize therapeutic outcomes and
minimize adverse reactions to therapy, and that it is consistent with the traditional goals of public health and medical care to relieve human suffering and save lives" but noting many ethical concerns. These include the implications for developing countries and for those seeking access to therapy for neglected diseases, the impact on health care costs and on research priorities, and the fear that pharmacogenomics could reinforce genetic determinism and lead to stigmatization of individuals and groups [26].

Perhaps the widest range of social and ethical issues emerging from biomarker research, however, surrounds the process of collection, storage, and use of human biological samples and associated data for research purposes: to identify new biomarkers of exposure, effect, and susceptibility to disease and pharmacogenomic products. Many genetic and epidemiological studies require access to samples of annotated human blood, tissue, urine, or DNA and associated medical and lifestyle data. Often, they need large numbers of samples, and repeated sampling over many months or years. Often, the outcomes of the research are uncertain, technological advances in research methodologies are unpredictable, and neither can be anticipated. This discussion focuses on ethical issues relating to the biobanking process. The development of large-scale population databases has rendered the ethics of this technology complex, controversial, and publicly visible. Debates about biobanking also reveal the increasing inadequacy of the old ethics guidelines, frameworks, and protocols that have served us for the last 50 years.

*In the UK, such fears were voiced by a coalition of 46 organizations in a Joint Statement of Concern presented to a House of Commons Cross Party Group on February 14, 2006. The issue has also been the subject of much debate and policy analysis in the United States, given its system of private health insurance. The Genetic Information Nondiscrimination Act was passed in the U.S. House of Representatives on April 25, 2007. See the U.S. National Institutes of Health fact sheet at http://www.genome.gov/page.cfm?pageID=10002328.
POPULATION BIOBANKS AND THE CHALLENGE OF HARMONIZATION

The "banking" of human biological samples for research is not a twenty-first-century phenomenon. Human tissue has been gathered and collected for at least 100 years. According to the U.S. National Bioethics Advisory Commission, by 1999 a total of 282 million unique tissue specimens were being held in the United States [27]. The term biobank, however, is relatively new. It appeared in PubMed for the first time in 1996 [28] and was not common nomenclature until the end of the decade. The sequencing of the human genome, advances in computational biology, and the emergence of new disciplines such as biomarker discovery, pharmacogenomics, and nutrigenomics have sparked unprecedented demand for samples of human blood, tissue, urine, DNA, and data. Three-fourths of the clinical trials that drug companies submit to the U.S. Food and Drug Administration for approval now include a provision for sampling and storing human tissue for future genetic analysis [3]. Biobanking has become deserving of its own name and has gained a dedicated society, the International Society for Biological and Environmental Repositories (ISBER), as well as two recent worldwide congresses: the WorldWide BioBank Summits (organized by IBM Healthcare and Life Sciences) and Biobanking and Biorepositories (organized by Informa Life Sciences).
The collection of human samples and data for research has not just accelerated, it has evolved. Four features differentiate biobanks today from those of 20 years ago: the emergence of large population-level biobanks, increased levels of commercial involvement, the desire for international collaborations requiring samples and data to be shared beyond national borders, and, finally, the prospective nature of many emerging collections. The increased speed and scale of biobanking have contributed to the increasing public and academic concern with the ethical and social implications of this technology. The rules and practices of research and research ethics developed prior to the consolidation of these trends now inhibit the ability to construct biobanks and related research efficiently. They also provide ineffective protection for individuals and populations. Small genetic databases containing a limited number of samples, attached to one research project focused on a specific disease, were once standard. Such collections still exist: clinical collections within hospital pathology departments, and case- or family-based repositories for genetic studies of disease. Larger provincial, national, and international repositories are now increasingly common, as is the networking of existing collections. Provincial examples include the CARTaGENE project in Quebec (Canada). National disease-based biobanks and networks include the Alzheimer's Genebank, sponsored jointly by the U.S. National Institute on Ageing and the Alzheimer's Association. Examples of national or regional population-level biobanks include the Estonian Genome Project (Estonia), Biobank Japan (Japan), the Icelandic Health Sector Database, UK Biobank (UK), Medical Biobank (Sweden), and the Singapore Tissue Network (Singapore). International collaborations include the European GenomEUtwin Project, a study of twins from Denmark, Finland, Italy, the Netherlands, Sweden, the UK, France, Australia, Germany, Lithuania, Poland, and the Russian Federation (http://www.genomeutwin.org/). Levels of commercial involvement vary among these biobanks. The Icelandic Biobank was founded as a public–private partnership between the Icelandic government and deCODE Genetics. UmanGenomics was given exclusive rights to commercialize information derived from Sweden's Medical Biobank. The Singapore Tissue Network, by contrast, is publicly funded and will not be involved in commercialization. Biotechnology companies involved in biobanking include Newfound Genomics, which gathers DNA samples from volunteers across Newfoundland and Labrador. Many of these large population databases are designed as research infrastructures. They do not focus on one specific disease or genetic characteristic, but contain samples from sick and healthy persons, often across several generations. DNA, blood, or other tissues are stored together with health and lifestyle data from medical records, examinations, and questionnaires. These large population databases support research into the complex gene interactions involved in multifactorial diseases and into gene–gene and gene–environment interactions at the population level. There are few clinical benefits to individual donors. Benefits are expected to be long term and often cannot be specified at the time of data and tissue collection. It is a major challenge to the requirement of informed consent that persons donating biological and data samples cannot know the specific future research purposes for which their donations will be used.

This proliferation of biobanks, and the advent of population-wide and transnational biobanking endeavors, has triggered a variety of regulatory responses. Some national biobanks have been created in association with new legislation. Estonia and Lithuania enacted the Human Genes Research Act (2000) and the Human Genome Research Law (2002), respectively, possibly motivated by the inadequacy of existing norms, a belief that genetic data and research require different regulation than traditional medicine, and the need for democratic legitimacy [19]. The UK Human Tissue Act (2004), Sweden's Act on Biobanks (2002), and the Norwegian Act on Biobanks (2003) all pertain to the storage of biological samples [29]. Other national initiatives do not treat genetic data as exceptional. They remain dependent on a network of existing laws. A series of national and international guidelines have also been produced, such as the World Medical Association's Declaration on Ethical Considerations Regarding Health Databases (2002) and guidelines from the U.S. National Bioethics Advisory Commission (1999) and the Council of Europe Committee of Ministers (2006). As with national regulation, however, the norms, systems, and recommendations for collection and processing of samples, informed consent procedures, and even the terminology for degrees of anonymization of data differ substantially between guidelines.

Anonymization terminology illustrates the confusion that can result from such diversity. European documents distinguish five levels of anonymization of samples [30]. Within European documents, anonymized describes samples used without identifiers but that are sometimes coded to enable reestablishing the identity of the donor. In most English Canadian and U.S. texts, however, anonymized means that the sample is irreversibly de-identified. Quebec follows the French system, distinguishing between reversibly and irreversibly anonymized samples. In European documents, coded usually refers to instances where researchers have access to the linking code. But the U.S. Office for Human Research Protection (OHRP) uses the word to refer to situations where the researcher does not have access to the linking code [30]. To add to the confusion, UNESCO has been criticized for creating new terms, such as proportional or reasonable anonymity, that do not correspond to existing categories [19].
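Since the same words denote different degrees of identifiability in different jurisdictions, it may help to see the two underlying mechanisms side by side. The sketch below contrasts reversible coding (a separately held linking key allows re-identification, the usual European sense of "coded") with irreversible de-identification (no linking key is retained, the usual North American sense of "anonymized"). The key-handling details are hypothetical simplifications; real repositories add key escrow, access controls, and audit requirements.

    # Two basic de-identification mechanisms behind the divergent
    # "coded"/"anonymized" terminology. Hypothetical simplification.
    import secrets

    linking_table = {}  # held separately from the research data set

    def reversible_code(donor_id: str) -> str:
        """Reversible coding: a random code replaces the identifier, and
        a separately stored linking table allows re-identification (e.g.,
        to withdraw a sample or return clinically relevant findings)."""
        code = secrets.token_hex(8)
        linking_table[code] = donor_id
        return code

    def irreversible_anonymize(donor_id: str) -> str:
        """Irreversible de-identification: a fresh random label with no
        linking table entry; identity cannot be reestablished from the
        research data, so samples can no longer be withdrawn on request."""
        return secrets.token_hex(8)  # nothing derived from donor_id is kept

    coded = reversible_code("donor-0001")
    print(coded, "->", linking_table[coded])     # re-identifiable via the key
    print(irreversible_anonymize("donor-0002"))  # no route back to the donor

Note that, as discussed below, even the irreversible variant cannot fully anonymize DNA itself, since the sequence remains unique to the donor and shared with relatives.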
Such confusion has led to repeated calls for harmonization of biobank regulations. The Public Population Project in Genomics (P3G) is one attempt: a nonprofit consortium aiming to promote international collaboration and knowledge transfer between researchers in population genomics. With over 30 charter and associate members, P3G declares itself to have "achieved a critical mass to form the principal international body for the harmonization of public population projects in genomics" (http://www.p3g.org). Standardization also has its critics, notably among smaller biobanking initiatives. In 2006, the U.S. National Cancer Institute (NCI) launched guidelines spelling out best practices for the collection, storage, and dissemination of human cancer tissues and related biological specimens. These high-level guidelines are a move toward standardization of practice, following revelations in a 2004 survey of the negative impact of diverse laboratory practices on resource sharing and collaboration [31]. The intention is that NCI funding will eventually depend on compliance. The guidelines were applauded in The Lancet by the directors of major tissue banks, such as Peter Geary of the Canadian Tumor Repository Network. They generated vocal concerns from other researchers and directors of smaller banks, many of which are already financially unsustainable. Burdensome informed consent protocols and the financial costs of the infrastructural adjustments required were the key sources of concern. This is a central problem for biobanking and biomedical ethics: the centrality, the heavy moral weight, and the inadequacy of individual and voluntary informed consent.
INFORMED CONSENT: CENTRALITY AND INADEQUACY OF THE IDEAL

Informed consent is one of the most important doctrines of bioethics. It was introduced in the 1947 Nuremberg Code, following revelations during the Nuremberg trials of Nazi medical experimentation in concentration camps. It developed through inclusion in the United Nations' Universal Declaration of Human Rights in 1948 and the World Medical Association's Declaration of Helsinki in 1964. Informed consent is incorporated in all prominent medical, research, and institutional ethics codes, and is protected by laws worldwide. The purpose of informed consent in research can be viewed as twofold: to minimize harm to research subjects and to protect their autonomous choice. Informed consent requires researchers to ensure that research participants consent voluntarily to participation in research and that they be fully informed of the risks and benefits. The focus of informed consent has slowly shifted: from disclosure by health professionals toward the voluntary consent of the individual based on the person's understanding of the research and expression of their own values and assessments [32]. Simultaneously, health research has shifted from predominantly individual investigator-designed protocols with specific research questions to multiple-investigator, multiple-institution projects that gather many forms of data and samples to understand complex phenomena and test emerging hypotheses. Further, informed consent as a protection for autonomy has become important in arguments about reproductive autonomy. Informed consent has been described as representing the dividing line between "good" genetics and "sinful" eugenics [32].
Unprecedented computational power now makes it possible to network and analyze large amounts of information, making large-scale population biobanks and genetic epidemiology studies more promising than ever before. This research context raises the stakes of research ethics, making it more difficult to achieve individual consent and protect privacy while serving as the basis for strong claims of individualized and population health benefits. Large-scale biobanks and cohorts by their very nature cannot predict the exact uses to which samples will be put ahead of time. The ideal of voluntary participation based on knowledge of the research appears to require new informed consent for every emergent hypothesis that was not part of the original informed consent. The practicality of such an ideal approach is less than clear. Genetic testing can also use samples that were not originally collected for genetic studies. Tissue biopsies collected for clinical diagnosis are now providing information for gene expression studies [33]. The precise nature of future technologies that will extract new information from existing samples cannot be predicted. On the other hand, seeking repeated consent from biobank donors is a costly and cumbersome process for researchers that can impede or even undermine research. Response rates for data collection (e.g., questionnaires) in any large population may vary between 50% and over 90%. The need for renewed consent could therefore reduce participation in a project and introduce selection bias [34]. Repeat consent may also be unnecessarily intrusive to the lives of donors or their next of kin. Other forms of consent have been suggested and implemented for biobanking purposes. These include consent with several options for research use, presumed consent, broad consent, and blanket consent. Many European guidelines, including a memorandum from the Council of Europe Steering Committee on Bioethics, laws in Sweden, Iceland, and Estonia, and the European Society for Human Genetics guidelines, consider broad consent for unknown future uses to be acceptable as long as such future projects gain approval from Research Ethics Boards and people retain the right to withdraw samples at any time [30]. The U.S. Office for Human Research Protection went one step further in 2004, proposing to broaden the definition of nonidentifiable samples, upon which research is allowed under U.S. federal regulations without the requirement of informed consent. The problem is that no informed consent mechanism, narrow or broad, can address all ethical concerns surrounding the biobanking of human DNA and data [35]. Such concerns include the aggregate effects of individual consent upon society as a whole and upon family and community members, given the inherently "shared" nature of genetic material. If people are given full choice as to which diseases their samples can be used to research, and they choose only to donate for well-known diseases such as cancer, rare diseases may be neglected. The discovery that Ashkenazi Jews may have particular mutations predisposing them to breast, ovarian, and colon cancer has generated fears that they could become the target of discrimination [36].
Concerns include irreconcilable trade-offs between donor desires for privacy (best achieved by unlinking samples), control over the manner in which their body parts and personal information are used (samples can be withdrawn from a biobank only if a link exists), and access to clinically relevant information discovered in the course of research. For some individuals and communities, cultural or religious beliefs dictate or restrict the research purposes for which their samples can be used. The Nuu-chah-nulth nations of Vancouver Island became angry in 2000 after discovering that their samples, collected years before for arthritis research, had been used for the entirely different purpose of migration research [37,38]. In some cases, a history of colonialism and abusive research makes a group demand that their samples be used for research that benefits their community directly. Complete anonymization of samples containing human DNA is technically impossible, given both the unique nature of a person's DNA and its shared characteristics. Consequently, in 2003, the Icelandic Supreme Court ruled that the transfer of 18-year-old student Ragnhildur Gudmundsdottir's dead father's health data infringed her privacy rights: "The court said that including the records in the database might allow her to be identified as an individual at risk of any heritable disease her father might be found to have had—even though the data would be made anonymous and encrypted" [39]. Reasonable privacy protection is thus difficult to achieve in a biobanking context, even with the technological capacity to protect privacy through linked or unlinked anonymized samples, since neither approach is free from risk of error. Informed consent cannot provide a basis for participants to evaluate the likelihood of benefit arising from their participation in a biobank when these merits are contested by experts. Critics of UK Biobank, for example, have little faith in the value and power of such prospective cohort studies, compared to traditional case–control studies, for isolating biomarkers and determining genetic risk factors. Supporters argue that the biobank will be a resource from which researchers can compile nested case–control studies. Critics claim that it will only be useful for study of the most common cancers, those that occur with enough frequency among donors. Others claim that even UK Biobank's intended 500,000 participants cannot provide reliable information about the genetic causes of a disease without a study of familial correlations [12]. Informed consent is inadequate as a solution for ensuring that the impacts of biobanking and related research will be beneficial to individuals and society, will uphold the autonomy of the individual, or will facilitate justice. Given its historical importance and the bureaucratic and legal structures that depend on it [40], it is not surprising that informed consent remains central to contemporary discussions of the ethical and social implications of biobanking, biomarkers, and biomedical research. Unfortunately, the substance of such debates centers upon the inadequacy of both ideal and current procedures. As Hoeyer points out in reference to the Medical Biobank run by UmanGenomics in northern Sweden, informed consent offers an illusion of choice without real consideration of the
implications of such choices, “by constructing a diffuse arrangement of donors who can only be semiaccountable agents” [41].
SCIENCE, ETHICS, AND THE CHANGING ROLE OF THE PUBLIC

Novel and innovative norms and models for biobank management have been proposed by bioethics, social science, and legal practitioners and theorists in recent years, in an attempt to deal with some of these issues. Alternative ethical frameworks based on social solidarity, equity, and altruism have been suggested [42,43]. These formed the basis for the recent Human Genome Organization Ethics Committee statement on pharmacogenomics [26]. Onora O'Neill has argued for a two-tiered consent process in which public consent for projects is solicited prior to individual consent for donation of samples [44]. The charitable trust model has also been proposed for biobanking, as a way of recognizing DNA both as a common heritage of humanity and as uniquely individual, with implications for family members. "All information would be placed in a trust for perpetuity and the trustees overseeing the information would act on behalf of the people who had altruistically provided information to the population collection. They would be accountable to individuals but could also act as representatives for the community as a whole" [45]. It is not clear, however, whether such models could ever gain widespread public endorsement and legitimacy without direct public involvement in their design. Appeals to the need for community consultation [45] and scientific citizenship [46] may be more suited to the current mood. There is growing awareness globally, among government, policymakers, regulators, and advocacy groups alike, of the importance of public engagement, particularly in relation to emerging technologies. In the UK, crises over bovine spongiform encephalopathy (BSE), otherwise known as "mad cow disease," and genetically modified (GM) crops have forced the government to proclaim the value of early public participation in decision making [47,48]. A statement by the UK House of Lords Select Committee in 2000 concluded that "today's public expects not merely to know what is going on, but to be consulted; science is beginning to see the wisdom of this and to move out of the laboratory and into the community to engage in dialogue aimed at mutual understanding" [49]. In Canada, the provincial government of British Columbia pioneered a Citizens' Assembly in 2003, charging 160 citizens with the task of evaluating the existing electoral system. A new BC Conversations on Health project aims to improve the health system by engaging in "genuine conversation with British Columbians" during 2007. Indeed, public consultations have become the norm for soliciting public support for new technologies. In the UK these have included Weekends Away for a Bigger Voice, funded by the National Consumer Council in 2001, and the
highly publicized government-funded GM Nation consultation in 2002. In Canada, notable examples include the 1999 Canadian Citizen's Conference on Biotechnology and the 2001 Canadian Public Consultation on Xenotransplantation. In Denmark, more than 20 consensus conferences have been run by the Danish Board of Technology since 1989, on topics as diverse as genetically modified foods, electronic surveillance, and genetic testing [50]. In New Zealand, the government convened a series of public meetings in 2000 as part of its Royal Commission on genetic modification. UK Biobank marketing is careful to assert that the project has "undergone rigorous review and consultation at all levels" (http://www.ukbiobank.ac.uk/about/what.php).

Traditional public consultations have their limitations, however. Past examples of consultations have either been unpublicized or restricted to stakeholder involvement, undermining the claim to be representative of the full range of public interests [8]. Some critics suspect consultations of being a front to placate the public, a means of researching market strategy and speeding product development [51], or a mechanism for engineering consent [13]. GM Nation is one example of a consultation that has been criticized for "capture" by organized stakeholder groups and as misrepresentative of the public it aimed to consult [52].
PROMISING FUTURE DIRECTIONS: PUBLIC CONSULTATION AND DELIBERATIVE DEMOCRACY

The use of theories and practices of deliberative democracy within such public consultations is a more recent and innovative trend. Deliberation stands in opposition to the aggregative market model of representational democracy and the strategic behavior associated with voting. It offers a model of democracy in which free and equal citizens exchange reasons through dialogue and collectively shape and alter their preferences, and it is rapidly gaining in popularity, as evidenced by the growth of nonprofit organizations such as Everyday Democracy (http://www.everyday-democracy.org), AmericaSpeaks (http://www.americaspeaks.org/), and National Issues Forums (http://www.nifi.org/) throughout the United States.

Origin stories of this broad deliberative democracy "movement" are as varied as its incarnations, and practice is not always as closely linked to theory as it could be. But most theorists will acknowledge a debt to the work of either (or both) Habermas and Rawls. Habermas's wider program of discourse ethics provides an overarching rationale for public deliberation [53]. It asserts that publicly binding norms can make a legitimate claim to rationality (and thus legitimacy) only if they emerge from free argument between all parties affected. Claims about what "any reasonable person" would accept as right can only be justified by putting them to the test. This is then a far cry from the heavily critiqued [13,54] model of public consultation as a tool for engendering public trust or engineering acceptance of a new technology.
By asking participants to consider the perspectives of everyone, deliberation orients individuals away from consideration of self-interest and toward consideration of the common good. Pellizzoni characterizes this governance virtue as one of three key virtues of deliberative democracy [55]. The second is civic virtue, whereby the process of deliberation produces more informed, active, responsible, cooperative, and fair citizens. The third is cognitive virtue, the notion that discussion oriented to understanding rather than success enhances the quality of decisions, gives rise to new or unarticulated points of view, and allows common understanding of a complex problem that no single person could understand in its entirety. Deliberative democracy is not devoid of challenges when applied to complex issues of science and technology, rich as they can be in future uncertainties and potential societal impact. But it offers much promise as a contribution to biobanking policy that can provide legitimate challenges to rigidly structured research ethics.
CONCLUSIONS

Biomarker research is greatly advanced by good-quality annotated collections of tissues, or biobanks. Biobanks raise issues that stretch from evaluation of the benefits and risks of research through to the complexity of informed consent for collections whose research purposes and methods cannot be described in advance. This range of ethical and organizational challenges is not managed adequately by the rules, guidelines, and bureaucracies of research ethics. Part of the problem is that current research ethics leaves too much for the individual participant to assess before the relevant information is available. But many other aspects of biobanks have to do with how benefits and risks are defined, achieved, and shared, particularly those that are likely to apply to groups of individuals with inherited risks, or those classified as having risks or as being more amenable to treatment than others. These challenges raise important issues of equity and justice. They also highlight trade-offs between research efficiency and benefits on the one hand, and privacy and individual control over personal information and tissue samples on the other. These issues are not resolvable by appeal to an existing set of rules or an ethical framework to which all reasonable people agree. Inevitably, governance decisions related to biobanks will need to find a way to create legitimate policy and institutions. The political approach of deliberative democracy may hold the most promise for well-informed and representative input into trustworthy governance of biobanks and related research into biomarkers.

Acknowledgments

The authors thank Genome Canada, Genome British Columbia, and the Michael Smith Foundation for Health Research for their essential support.
We also appreciate the support and mutual commitment of the University of British Columbia, the British Columbia Transplant Society, Providence Health Care, and Vancouver Coastal Health, and all participants in the Biomarkers in Transplantation initiative.
REFERENCES

1. Sigurdsson S (2001). Yin-yang genetics, or the HSD deCODE controversy. New Genet Soc, 20(2):103–117.
2. Sigurdsson S (2003). Decoding broken promises. Open Democracy. www.opendemocracy.net/theme-9-genes/article_1024.jsp (accessed June 1, 2004).
3. Abbott A (2003). DNA study deepens rift over Iceland's genetic heritage. Nature, 421:678.
4. Mannvernd, Icelanders for Ethics in Science and Medicine (2004). A landmark decision by the Icelandic Supreme Court: the Icelandic Health Sector Database Act stricken down as unconstitutional.
5. Merz JF, McGee GE, Sankar P (2004). "Iceland Inc."?: On the ethics of commercial population genomics. Soc Sci Med, 58:1201–1209.
6. Potts J (2002). At least give the natives glass beads: an examination of the bargain made between Iceland and deCODE Genetics with implications for global bioprospecting. Va J Law Technol, Fall, p. 40.
7. Pálsson G, Rabinow P (2001). The Icelandic genome debate. Trends Biotechnol, 19:166–171.
8. Burgess M, Tansey J (2009). Democratic deficit and the politics of "informed and inclusive" consultation. In Einseidel E, Parker R (eds.), Hindsight and Foresight on Emerging Technologies. UBC Press, Vancouver, British Columbia, Canada.
9. Morrison Institute for Population and Resource Studies (1999). Human Genome Diversity Project: Alghero Summary Report. http://www.stanford.edu/group/morrinst/hgdp/summary93.html (accessed Aug. 2, 2007).
10. Harry D, Howard S, Shelton BL (2000). Indigenous people, genes and genetics: what indigenous people should know about biocolonialism. Indigenous Peoples Council on Biocolonialism. http://www.ipcb.org/pdf_files/ipgs.pdf.
11. BBC (2007). Volunteers join £61m health study. BBC News, July 16, 2007. http://news.bbc.co.uk/2/hi/uk_news/scotland/glasgow_and_west/6900515.stm (accessed Sept. 24, 2007).
12. Barbour V (2003). UK Biobank: a project in search of a protocol? Lancet, 361:1734–1738.
13. Peterson A (2007). Biobanks "engagements": engendering trust or engineering consent? Genet Soc Policy, 3:31–43.
14. Peterson A (2005). Securing our genetic health: engendering trust in UK Biobank. Sociol Health Illness, 27:271–292.
15. Redfern M, Keeling J, Powell M (2001). The Royal Liverpool Children's Inquiry Report. House of Commons, London.
16. Royal College of Pathologists' Human Tissue Advisory Group (2005). Comments on the Draft Human Tissue Authority Codes of Practice 1 to 5. The Royal College of Pathologists, London, Sept. 28.
17. Lin Z, Owen A, Altman R (2004). Genomic research and human subject privacy. Science, 305:183.
18. Roche P, Annas G (2001). Protecting genetic privacy. Nat Rev Genet, 2:392–396.
19. Cambon-Thomsen A, Sallée C, Rial-Sebbag E, Knoppers BM (2005). Population genetic databases: Is a specific ethical and legal framework necessary? GenEdit, 3:1–13.
20. Illes J, Rosen A, Greicius M, Racine E (2007). Prospects for prediction: ethics analysis of neuroimaging in Alzheimer's disease. Ann NY Acad Sci, 1097:278–295.
21. Caux C, Roy DJ, Guilbert L, Viau C (2007). Anticipating ethical aspects of the use of biomarkers in the workplace: a tool for stakeholders. Soc Sci Med, 65:344–354.
22. Viau C (2005). Biomonitoring in occupational health: scientific, socio-ethical and regulatory issues. Toxicol Appl Pharmacol, 207:S347–S353.
23. The Economist (2007). Genetics, medicine and insurance: Do not ask or do not answer? Aug. 23. http://www.economist.com/science/displaystory.cfm?story_id=9679893 (accessed Aug. 31, 2007).
24. Genewatch UK (2006). Genetic discrimination by insurers and employers: still looming on the horizon. Genewatch UK Report, Feb. 14. http://www.genewatch.org/uploads/f03c6d66a9b354535738483c1c3d49e4/GeneticTestingUpdate2006.pdf (accessed Aug. 31, 2007).
25. Rothenberg K, et al. (1997). Genetic information and the workplace: legislative approaches and policy challenges. Science, 275:1755–1757.
26. Human Genome Organisation Ethics Committee (2007). HUGO Statement on Pharmacogenomics (PGx): Solidarity, Equity and Governance. Genom Soc Policy, 3:44–47.
27. Lewis G (2004). Tissue collection and the pharmaceutical industry: investigating corporate biobanks. In Tutton R, Corrigan O (eds.), Genetic Databases: Socio-ethical Issues in the Collection and Use of DNA. Routledge, London.
28. Loft S, Poulsen HE (1996). Cancer risk and oxidative DNA damage in man. J Mol Med, 74:297–312.
29. Maschke KJ (2005). Navigating an ethical patchwork: human gene banks. Nat Biotechnol, 23:539–545.
30. Elger BS, Caplan AL (2006). Consent and anonymization in research involving biobanks. Eur Mol Biol Rep, 7:661–666.
31. Hede K (2006). New biorepository guidelines raise concerns. J Nat Cancer Inst, 98:952–954.
32. Brekke OA, Thorvald S (2006). Population biobanks: the ethical gravity of informed consent. Biosocieties, 1:385–398.
33. Cambon-Thomsen A (2004). The social and ethical issues of post-genomic human biobanks. Nat Rev Genet, 5:866–873.
34. Hansson MG, Dillner J, Bartram CR, Carlson JA, Helgesson G (2006). Should donors be allowed to give broad consent to future biobank research? Lancet Oncol, Mar, 7.
35. Burgess MM (2001). Beyond consent: ethical and social issues in genetic testing. Nat Rev Genet, 2:147–151.
36. Weijer C, Emanuel E (2000). Protecting communities in biomedical research. Science, 289:1142–1144.
37. Baird L, Henderson H (2001). Nuu-Chah-Nulth Case History. In Glass KC, Kaufert JM (eds.), Continuing the Dialogue: Genetic Research with Aboriginal Individuals and Communities, pp. 30–43. Proceedings of a workshop sponsored by the Canadian Commission for the United Nations Educational, Scientific, and Cultural Organization (UNESCO), Health Canada, and the National Council on Ethics in Human Research, Jan. 26–27, 2001, Vancouver, British Columbia, Canada.
38. Tymchuk M (2000). Bad blood: management and function. Canadian Broadcasting Company, National Radio.
39. Abbott A (2004). Icelandic database shelved as court judges privacy in peril. Nature, 429:118.
40. Faden RR, Beauchamp TL (1986). A History and Theory of Informed Consent. Oxford University Press, New York.
41. Hoeyer K (2004). Ambiguous gifts: public anxiety, informed consent and biobanks. In Tutton R, Corrigan O (eds.), Genetic Databases: Socio-ethical Issues in the Collection and Use of DNA. Routledge, London.
42. Chadwick R, Berg K (2001). Solidarity and equity: new ethical frameworks for genetic databases. Nat Rev Genet, 2:318–321.
43. Lowrance W (2002). Learning from Experience: Privacy and Secondary Use of Data in Health Research. Nuffield Trust, London.
44. O'Neill O (2001). Informed consent and genetic information. Stud History Philos Biol Biomed Sci, 32:689–704.
45. Kaye J (2004). Abandoning informed consent: the case for genetic research in population collections. In Tutton R, Corrigan O (eds.), Genetic Databases: Socio-ethical Issues in the Collection and Use of DNA. Routledge, London.
46. Weldon S (2004). "Public consent" or "scientific citizenship"? What counts as public participation in population-based DNA collections? In Tutton R, Corrigan O (eds.), Genetic Databases: Socio-ethical Issues in the Collection and Use of DNA. Routledge, London.
47. Bauer MW (2002). Arenas, platforms and the biotechnology movement. Sci Commun, 24:144–161.
48. Irwin A (2001). Constructing the scientific citizen: science and democracy in the biosciences. Public Understand Sci, 10:1–18.
49. House of Lords Select Committee on Science and Technology (2000). Science and Society, 3rd Report. HMSO, London.
50. Anderson J (2002). Danish Participatory Models: Scenario Workshops and Consensus Conferences, Towards More Democratic Decision-Making. The Pantaneto Forum 6. http://www.pantaneto.co.uk/issue6/andersenjaeger.htm (accessed Oct. 22, 2007).
51. Myshkja B (2007). Lay expertise: Why involve the public in biobank governance? Genet Soc Policy, 3:1–16.
52. Rowe G, Horlick-Jones T, Walls J, Pidgeon N (2005). Difficulties in evaluating public engagement activities: reflections on an evaluation of the UK GM Nation public debate about transgenic crops. Public Understand Sci, 14:331–352.
53. Habermas J (1996). Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy. MIT Press, Cambridge, MA.
54. Wynne B (2006). Public engagement as a means of restoring public trust in science: Hitting the notes, but missing the music? Commun Genet, 9:211–220.
55. Pellizzoni L (2001). The myth of the best argument: power, deliberation and reason. Br J Sociol, 52:59–86.
36

PATHODYNAMICS: IMPROVING BIOMARKER SELECTION BY GETTING MORE INFORMATION FROM CHANGES OVER TIME

Donald C. Trost, M.D., Ph.D.
Analytic Dynamics, Niantic, Connecticut
INTRODUCTION

The purpose of this chapter is to introduce some approaches to thinking about biological dynamics. Pathodynamics is a term used by the author to describe a quantitative approach to disease that includes how the biological system changes over time. In many ways it is analogous to thermodynamics [1,2] in that it deals with the macroscopic, measurable phenotypic aspects of a biological system rather than the microscopic aspects such as those modeled in mathematical physiology [3]. For the purposes of this chapter, macroscopic will refer to measurements that do not involve the destruction of the biological system being studied but may vary in scale (i.e., cell, tissue, organ, and body). For example, clinical measurements would be macroscopic whether invasive or not, but histopathology, cell lysis, or genotype would be microscopic. Another way to view this is that when the dynamics of the system being studied result from something greater than the sum of the parts, usually from networks, it is macroscopic; otherwise, studying the parts of the system is microscopic. One of the problems in macroscopic biology is that many of the underlying system characteristics are immeasurable. This is why the biological
variation is just as important as, if not more important than, the mean behavior of a system: it represents the system changes that cannot be measured directly.

The term parameter is used rather loosely in biology and other sciences but has a very specific meaning in mathematics. For the purposes here, it will be defined in a statistical manner as a quantity that is a characteristic of the system, is not directly measurable, and can be estimated from experimental data; parameters will be represented by Greek symbols. Quantities being measured will be referred to as variables. The letter y will designate a random, or stochastic, variable, one that has a probability distribution, and the letter x will represent a deterministic, or controlled, variable, one that is measured "exactly," such as gender, temperature, and pH. The letters s, t, and T will refer to time and will be either deterministic or random, depending on the experimental context.

Biomarkers are really mathematical functions of parameters. In modeling, parameters appear as additive coefficients, multiplicative coefficients, and exponents. Sometimes parameters are deterministic, sometimes random, and sometimes functions of time or variables. Probably the most common parameter in biology is the population mean of a random variable. The sample mean is an estimate of this parameter; it is also a statistic, a quantity that is a function of the data. However, under the definitions above, some statistics do not estimate parameters and are known as nonparametric statistics. Nonparametric statistics are generally used for hypothesis testing and contain little or no mechanistic biological information [4–7].

Analogy to Thermodynamics

Thermodynamics is a macroscopic view of physics that describes the flow of energy and the disorder of matter (i.e., entropy) [1,2]. The former is reflected in the first law of thermodynamics, which says that

Δenergy = Δwork + Δheat

which is basically a law of the conservation of energy. The second law has various forms but generally states that entropy changes of a system and its exterior never decrease. Aging and disease (at least some) may be examples of increasing entropy. In chemistry, classical thermodynamics describes the behavior of molecules (particles) and changes in the states of matter. Equilibrium occurs at the state of maximum entropy [i.e., when the temperature is uniform throughout the (closed) system]. This requires constant energy and constant volume. In a system with constant entropy and constant volume, equilibrium occurs at a state of minimum energy. For example, if constant entropy occurs at a constant temperature, equilibrium occurs when the Helmholtz free energy is at its minimum. In all cases, the particles continue to move; this is a dynamics part. For nonequilibrium states, matter and energy will flow to reach an equilibrium state.
This is also a dynamics part. Modern thermodynamics is a generalization of classical thermodynamics that relates state variables for any system in equilibrium [2], such as electromagnetism and fluid dynamics. The open question is whether or not these concepts apply to biological systems.

For this chapter, the pathodynamics concept is that the states of a biological system can be measured and related in a manner similar to thermodynamics. At this time only the simplest relationships are being proposed. The best analogy seems to be viewing a biological system, in particular a warm-blooded (constant-temperature) mammal, as a particle in a high-dimensional space whose states are measured via (clinical) laboratory tests. The dimension of this space will be defined by the information contained in these biomarkers (see below). This particle is in constant motion as long as the system is alive, and although the microscopic behavior of the biology is generally unobservable, the probabilistic microscopic behavior of this particle is observable through its clinical states. The probabilistic macroscopic properties are described by the probability distributions of the particle. These distributions are inherently defined by a single biological system, and how they are related to population distributions for the same species is unknown. Hopefully, there will be some properties that are invariant among individuals so that the system behavior can be studied with large enough sample sizes to get sufficient information about the population of systems.

One way to visualize this concept of pathodynamics is to imagine a single molecule moving in a homogeneous compressible fluid [8]. A drop of this fluid represents the probability distribution and is suspended in another fluid medium. The fluids are slightly miscible, so that there is no surface on the drop, but there is a cohesive (homeostatic) force that pulls the particles toward the center of the drop while the heat in the system tends to diffuse the particles out into the medium. Dynamic equilibrium occurs when the drop is at its smallest volume, as measured by levels of constant probability density. The conservation of mass is related to the total probability for the particle (mass = 1), which is analogous to the mass of the drop. The drop becomes distorted when external forces act on it; they may even fragment it into smaller drops or change it so that holes appear. These external forces are due to factors such as the external environment, disease, and therapies. A goal of pathodynamics is to infer the presence of an external force by observing the motion of the particle and finding the correspondence between the force and known causes.

The Concept of Time

Since dynamics relates to changes over time, some mention of time is appropriate. Everyone has a general concept of time, but in mathematical and physical thinking, time is a little more complicated [9,10]. There are two main classes of time: continuous (analog) and discrete (digital). Although physicists use the reversibility of time, which at least makes the mathematics easier, Prigogine [10] argues that at the microscopic level, the world is probabilistic
and that in probabilistic (stochastic) systems, time can only go forward. Furthermore, he argues that “dynamics is at the root of complexity that [is] essential for self-organization and the emergence of life.” There are many kinds of time. In the continuous category are astronomical time (ordinary time), biological time (aging), psychological time, thermodynamic time, and information time [11]. Discrete time is more difficult to imagine. Whenever digital data are collected, the observations occur only at discrete times, usually equally spaced. Even then, the process being observed runs in continuous time. This is probably the most common case. However, discrete time is really just a counting process and occurs naturally in biology. Cell divisions, heartbeats, and respirations are a few of the commonly observed discrete biological clocks. In this chapter only continuous-time processes are discussed. An example of discrete-time pathodynamics can be found elsewhere [12].
BROWNIAN MOTION

Diffusion

One of the most basic continuous stochastic processes is Brownian motion. The concept of Brownian motion was born when a biologist named Robert Brown observed pollen under a microscope vibrating due to interactions with water molecules [13]. This concept should be familiar to biologists. Brownian motion has been modeled extensively [14,15]. Standard Brownian motion (Bt) is a Gaussian stochastic process with mean and variance

μt = 0        σt² = t

respectively. In the laboratory, Brownian motion is a good model for diffusion (e.g., immunodiffusion), where the concentration of the protein is related to the time of observation and the diffusion coefficient. In an open system where time goes on forever or when the particles have no boundary, the diffusion of a particle is unbounded because the variance goes to infinity. To make it a useful concept in biology, the particle needs to be either in a closed container or stabilized by an opposing force.

Homeostasis: Equilibrium Pathodynamics

In biological systems, diffusion occurs only on a microscopic level and is not usually measurable macroscopically. However, in a probabilistic model of pathodynamics, the particle representing a person's clinical health state can be thought of as a microscopic object (e.g., a molecule in a fluid drop), while the probability distribution is the macroscopic view. The reader should note that
the concepts of microscopic biology as defined in the Introduction and this imaginary microscopic particle of pathodynamics are different uses of the microscopic/macroscopic dichotomy.

The Ornstein–Uhlenbeck (OU) stochastic process [16] is a stationary Gaussian process with conditional mean and variance, given y0,

μt = μ + e^(−αt) (y0 − μ)        σt² = σ²(1 − e^(−2αt))

respectively, where y0 is the baseline value, μ is the equilibrium point (the mean of yt), and σ² is the variance of yt. The autocorrelation between two measurements of y is

Corr[ys, yt] = e^(−α(t−s))

for times t ≥ s. In thermodynamic terms, the average fluctuation of a particle is

dμt = −α(μt − μ) dt

Read "dx" as a small change in x. This ordinary differential equation (ODE) is analogous to the stochastic differential equation (SDE) [14,15] that generates the OU process,

dyt = −α(yt − μ) dt + √(2α) σ dBt

which has a biological variation term that is driven by Brownian motion. In statistical physics, this is called the Langevin equation, or the equation of motion for a Brownian particle [17]. Here a link between thermodynamics and pathodynamics will be attempted. Using the Einstein theory of fluctuations [2,17], the change in entropy is

ΔS = −(yt − μ)² / (2σ²)
Since this quantity is always zero or negative, this says that in the equilibrium state the entropy is decreasing with time, which suggests that there is some organizing force acting on the system. This is the homeostatic force

FH = (yt − μ) / σ²
In statistical terms under near-equilibrium conditions, the drag in the system increases as the autocorrelation increases or as the variance decreases.
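To make the OU model concrete, it can be simulated exactly from the conditional distribution above, since the value at time t + Δt given the value at time t is Gaussian with the stated conditional mean and variance. A minimal sketch in Python/NumPy follows; the parameter values and variable names are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

# Illustrative (assumed) parameter values
mu, sigma, alpha = 100.0, 10.0, 0.05   # equilibrium mean, SD, mean-reversion rate
dt, n_steps = 1.0, 5000                # sampling interval and number of samples

rng = np.random.default_rng(0)
y = np.empty(n_steps)
y[0] = mu
for t in range(1, n_steps):
    # Exact OU transition: Gaussian with the conditional mean and variance above
    cond_mean = mu + np.exp(-alpha * dt) * (y[t - 1] - mu)
    cond_sd = sigma * np.sqrt(1.0 - np.exp(-2.0 * alpha * dt))
    y[t] = rng.normal(cond_mean, cond_sd)

# The lag-1 autocorrelation should approximate exp(-alpha * dt), about 0.951 here
print(np.corrcoef(y[:-1], y[1:])[0, 1])
```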
When the time rate of change of ΔS is equal to the product −JF, where J is the current or flow of the particles in the system, the system is in an equilibrium state. Solving this equation gives

JH = (yt − μ) / 2
which is the entropy current. In homeostasis the average force and the average flow are zero, and the average of the change in entropy is −½. This can all be generalized to multivariate biomarkers by making the measurements and their means into vectors y and μ, respectively, and by making α and σ² into symmetric positive definite matrices A and Σ, respectively. The usual thermodynamic parameters are embedded in these statistical parameters and can be determined as needed for specific biological uses.

To illustrate this physical analog a little further, suppose that a drop of particles were placed in a well in the center of a gel, the external medium, and allowed to diffuse for a fixed time tH. If Fick's law applies, there is a diffusion coefficient D and the Stokes–Einstein relation holds such that

D = kBT / γ
where kB is Boltzmann's constant, T is the absolute temperature of the system, and γ is the viscosity (friction) coefficient. Now suppose that the particles are charged and an electrical field is applied radially inward at tH so that the field strength is proportional to the distance from the center of the sample and is equal to the force of the diffusion. This means that the distribution of the particles is in steady state and that the particles experience no acceleration. This leads to the Langevin equation with the friction coefficient in the system at γ = 1/(ασ²). It turns out that at this equilibrium state σ² = 2DtH, and then by substitution

α = 1 / (2kB tH T)

and

ασ² = D / (kBT)
As long as T is constant, α is constant and inversely proportional to T, which may represent physical temperature or some biological analog but is assumed to be constant as well; and ασ² is proportional to D/T.

Signals of Change: Nonequilibrium Pathodynamics

Changes from equilibrium are the signals of interest in pathodynamics. In the simplest case, the signal can be an observation that occurs outside the dynamic reference interval ("normal limits"). This interval can be constructed by estimating its endpoints at time t using
μ + e^(−α(t−s)) (ys − μ) ± 1.96 σ √(1 − e^(−2α(t−s)))

where s is the time of a previous measurement and would be zero if it is the baseline measurement. In either case, two measurements are required to identify a dynamic signal when the parameters do not change with time. Some care needs to be exercised when estimating this interval [18]. The interval clearly will be shorter than the usual one, μ ± 1.96σ, will be different for each person, and will be in motion unless there is no autocorrelation. A value outside this interval would indicate a statistically significant deviation from homeostasis, with the probability of a false positive being 0.05 for each pair of time points. Simultaneous control of the type I error requires multivariate methods.

Nonequilibrium states might be modeled by allowing one or more parameters to change with time. If μ is changing with time, this is called convection, meaning that the center of gravity of the system is changing, resulting in a flow, or trajectory. If α or σ is changing, the temperature or the diffusion properties are changing. When the particle is accelerating, an acceleration term needs to be added to the Langevin equation. The Fokker–Planck equation [17] provides a way to construct the steady-state probability distributions of the particle along with the transition probability states for the general Langevin equation. It is possible for a new equilibrium distribution to occur after the occurrence of a disease or after a therapeutic intervention, which would indicate a permanent residual effect. In this paradigm, a "cure" would occur only if the particle distribution returned to the healthy normal homeostasis state. The observation of the dynamics of the particle may suggest that an external (pathological) force is acting on the system. Any changes in "thermodynamic" variables may form patterns that lead to diagnostic criteria. The construction of optimal criteria and the selection and measurement of biomarkers are discussed in the next section.
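A minimal sketch of the interval computation (Python/NumPy; the function name and all numerical values are illustrative assumptions of ours):

```python
import numpy as np

def dynamic_reference_interval(y_s, s, t, mu, sigma, alpha, z=1.96):
    """Dynamic reference interval at time t, given the value y_s observed at
    time s, from the conditional mean and variance of the OU process."""
    center = mu + np.exp(-alpha * (t - s)) * (y_s - mu)
    half_width = z * sigma * np.sqrt(1.0 - np.exp(-2.0 * alpha * (t - s)))
    return center - half_width, center + half_width

# Illustrative numbers: population mean 100, SD 10, alpha = 0.05 per day,
# a baseline of 115 at day 0, and the next measurement at day 7.
lo, hi = dynamic_reference_interval(115.0, 0.0, 7.0, mu=100.0, sigma=10.0, alpha=0.05)
print(lo, hi)  # roughly (96.7, 124.5): person-specific, and narrower than 100 ± 19.6
```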
INFORMATION FROM DATA

Parameter Estimation and the Mean Squared Error

The mean squared error (MSE) is just what it says: the average of the squared difference between the estimated parameter and the true parameter. The square root of the MSE is often referred to by engineers as the root mean square (RMS). In general,
MSE(θ̂) = Variance(θ̂) + Bias(θ̂)²
The accent mark over the parameter means that it is estimated from the data; a parameter without the accent mark is a theoretical, or unknown, quantity.
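The decomposition is easy to verify by simulation. The following sketch (Python/NumPy; the shrinkage factor and all numbers are arbitrary choices for illustration) compares the sample mean with a deliberately biased, shrunken version of it:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, n, reps = 5.0, 20, 100_000
samples = rng.normal(mu, 2.0, size=(reps, n))

for name, est in [("sample mean", samples.mean(axis=1)),
                  ("shrunken mean", 0.9 * samples.mean(axis=1))]:
    mse = np.mean((est - mu) ** 2)
    var, bias = est.var(), est.mean() - mu
    print(name, mse, var + bias ** 2)  # the two numbers agree for each estimator
```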
The MSE shows up in statistical estimation theory in the Cramér–Rao inequality [19–21]:
MSE(θ̂) · I(θ) ≥ 1

The second term, I(·), in this inequality is called the Fisher information. The statistician tries to construct an estimator of the unknown parameter θ to make this product as close to 1 as possible. An estimator that makes the product equal to 1 is an unbiased, minimum-variance estimator (UMV) and derives the maximum Fisher information from the data. Unfortunately, these estimators do not always exist for a given situation. The estimation strategy is usually to minimize the MSE when the UMV estimator is not available. There may be situations where a biased estimator will produce an MSE smaller than the unbiased approach.

The maximum likelihood (ML) estimator is probably the most common estimation procedure used today, although it requires that a model for the probability can be written explicitly. It has the nice property that a function of an ML estimate is the ML estimate of the function, allowing the ML parameter estimates to be plugged directly into the function. This property may not be true for other estimation procedures. Since the mathematics is easier when the sample size (n) is large (a euphemism for asymptotic, i.e., when n is "near" infinity), ML estimates are favored because for large samples they are also Gaussian and UMV even when the underlying distribution of the data is not Gaussian. However, an extremely large n may be needed to get these properties, a fact often overlooked in statistical applications. This is particularly true when estimating some parameter other than the mean, such as the standard deviation or the coefficient of variation (CV).

For the analysis of experiments, ordinary least squares (OLS) estimation is used most often. Analysis of variance (ANOVA) and ordinary regression are special cases of OLS estimation. OLS estimation is unbiased if the correct model is chosen. If the residuals (differences between the observations and the model) are Gaussian, independent, and have the same variance, then OLS is equivalent to ML and UMV. When the variance is not constant, such as in experiments where the analytical variation (measurement error) is related to the size of the measurement, various methods, such as a logarithm transformation, need to be used to stabilize the variance; otherwise, OLS does not have these good properties, and signals can be missed or falsely detected.

Fisher Information

Fisher information (I) provides a limit on how precisely a parameter can be known from a single measurement. This is a form of the uncertainty principle seen most often in physics. In other words, what statistic gives you the best characterization of your biomarker? When measurements are independent, such as those measured on different experimental units, the information is
additive, making the total information equal to nI. If you can afford an unlimited number of independent measurements, which is infinite information, the parameter can be known exactly, but this is never the case. For dynamic measurements (i.e., repeated measurements over time on the same experimental unit), although the measurement errors are generally independent, the measurements are not. It is usually cheaper to measure the same subject at multiple times than to make the same number of measurements, one per subject. In addition, information about the time effects within a person cannot be obtained in the latter case. This is key information that is not available when the measurements are not repeated in time. As illustrated in the examples below, this additional information can have major effects on the characteristics of the signal and the ability to detect it.

The Fisher information is generally a theoretical quantity for the lower bound of the MSE that involves a square matrix of the expectations of pairwise second-order partial derivatives of the log-likelihood. For those interested, this theory can be found in many books [19–21] and is not covered here. However, some of the results given below for the OU model are used to illustrate the information gain. For this purpose, an ANOVA model will be used for the means at equally spaced time points μt. In such an experimental design, time changes can be detected using specific orthogonal (independent) contrasts. These can be written as follows:

Constant effect:   (1/√5)(μ0 + μ1 + μ2 + μ3 + μ4)
Linear effect:     (1/√10)(−2μ0 − μ1 + μ3 + 2μ4)
Quadratic effect:  (1/√14)(2μ0 − μ1 − 2μ2 − μ3 + 2μ4)
Cubic effect:      (1/√10)(−μ0 + 2μ1 − 2μ3 + μ4)
Quartic effect:    (1/√70)(μ0 − 4μ1 + 6μ2 − 4μ3 + μ4)

The Fisher information for these sums of means, assuming that μ0 = 0, estimated from an OU process is compared in Figure 1 to the information when the time points are independent. To reduce the complexity in graphing the relationships, the OU information is plotted against λ = α · Δt. If the approximate value of α is known from previous experiments, Δt can be chosen to achieve improved efficiency in detecting the desired time relationship of the experimental response. All efficiency appears to be lost for λ greater than 4, while the information appears to increase exponentially when it is less than 1. The maximum relative efficiencies are 1, 1.5, 2.5, 9, and 69 for the constant, linear, quadratic, cubic, and quartic effects, respectively. Estimating a constant response is not at all efficient using dynamic measures.
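The contrast coefficients above form an orthonormal set, which can be checked, and applied to a vector of sample means, in a few lines. A sketch (Python/NumPy; the sample means are invented for illustration):

```python
import numpy as np

# Orthonormal polynomial contrast coefficients for five equally spaced times
C = np.array([
    [ 1,  1,  1,  1,  1],   # constant
    [-2, -1,  0,  1,  2],   # linear
    [ 2, -1, -2, -1,  2],   # quadratic
    [-1,  2,  0, -2,  1],   # cubic
    [ 1, -4,  6, -4,  1],   # quartic
], dtype=float)
C /= np.sqrt((C ** 2).sum(axis=1, keepdims=True))  # 1/sqrt(5), 1/sqrt(10), ...

print(np.allclose(C @ C.T, np.eye(5)))  # True: the contrasts are orthonormal

mu_hat = np.array([0.0, 1.8, 3.1, 3.9, 4.2])  # invented sample means
print(C @ mu_hat)                             # one estimate per polynomial effect
```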
Figure 1 Fisher information gain from dynamic measurements relative to independent measurements as a function of λ. (See insert for color reproduction of the figure.)
It is not immediately clear why a linear relationship would be less efficient than the others, although it is well known that the best design in this case is to put the observations at the beginning and end, with none in the middle. Since it was assumed that the beginning was zero, only one point is needed to estimate a linear response. For in vivo biomarkers, it would be difficult to imagine a constant or linear response. For those not inclined mathematically, it should be noted that all continuous functions of time can be approximated with a polynomial in time if the degree is sufficiently large. This means that additional contrasts, extensions of those above, can be used to get a better estimate of the time relationship as long as the degree is less than the number of time points. If the change with time is not well modeled by low-degree polynomials, a regression using the specific time-dependent function of interest should be used to save error degrees of freedom. It seems apparent from Figure 1 that the Fisher information of dynamic measurements increases as the degree of polynomial or the autocorrelation increases.

Besides obtaining additional information about the curvature of the mean time effect, the autocorrelation parameter α contains information that is seldom estimated or used. For the OU process, this parameter combined with the variance provides a measure of the biological variation. It is proportional to the homeostatic force needed to maintain the dynamic equilibrium. The Fisher information for α is mathematically independent of the mean and variance.
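One simple way to recover this seldom-used information is to estimate α itself from an equally spaced series: for an OU process the lag-one autocorrelation at spacing Δt is e^(−αΔt), so a moment estimate is α̂ = −log(r1)/Δt. A sketch under that assumption (Python/NumPy; the function name is ours):

```python
import numpy as np

def estimate_alpha(y, dt):
    """Moment estimate of the OU mean-reversion rate from an equally spaced
    series y, using lag-1 autocorrelation r1 = exp(-alpha * dt)."""
    y = np.asarray(y, dtype=float)
    d0, d1 = y[:-1] - y.mean(), y[1:] - y.mean()
    r1 = (d0 * d1).sum() / np.sqrt((d0 ** 2).sum() * (d1 ** 2).sum())
    return -np.log(r1) / dt  # requires r1 > 0, i.e., lambda not too large

# Applied to the series simulated in the earlier OU sketch, estimate_alpha(y, 1.0)
# should come back near the true alpha of 0.05.
```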
Figure 2 Fisher information for α as a function of time between measurements (Δt).
The base 10 logarithm of the α-information as a function of Δt, for various values of α, is shown in Figure 2. The closed circles are the autocorrelation half-life for each α, and the open circles represent a loose upper bound for Δt at λ = 3, which is an autocorrelation of 0.05. Those serious about capturing information about α should seriously consider λ < 1. For a given α, the information approaches 2/α² as Δt goes to zero. Obviously, the larger α is, the less information about the autocorrelation is available in the data, necessitating larger m or smaller Δt to get equivalent information. The time units are arbitrary in the figure but must match the time unit under which α is estimated.

Shannon Information

Shannon information is about how well a signal is communicated through some medium or channel; in this case it is the biological medium. The measurement of variables to estimate parameters that indicate which signal was transmitted is the signal detection process. Pathological states and the body represent a discrete communication system where the disease is the signal transmitted, which may affect any number of biological subsystems that act as the communication channels, and the biomarker is the signal detected in one or more biomarker measurements. The disease is then diagnosed by partitioning the biomarker space into discrete, mutually exclusive regions (R) in a way that minimizes the signal misclassification. Information, or communication, theory is usually applied to electronic systems that can be designed to take
advantage of the optimal properties of the theory. In biology, it is mostly a reverse-engineering task. The signal is the health or disease state of the individual, which is transmitted through various liquid and solid tissues with interacting and redundant pathways. In this paradigm, biomarkers are the signal detection instruments and algorithms. Rules, usually oversimplified, are then constructed to determine which signal was sent based on the biomarker information. An elementary background in information theory is given by Reza [22].

Shannon information is really a measure of uncertainty, or entropy, and describes how well signals can be transmitted through a noisy environment. Probability models are the basis of this type of information, which complements Fisher information rather than competing with it. The general framework for Shannon information follows. As a reminder, Bayes' theorem, where P[·] is a probability measure and P[A|B] = P[event A given event B], states that P[A|B] = P[A and B]/P[B]. In this particular representation, some elemental probabilities will be used: the probability that the signal was sent (e.g., the prevalence or prior probability),

πi = P[Si]

the probability that the signal was received,

qj = P[Dj]

and the probability that a particular diagnosis was made given that the signal was sent,

qij = P[Dj | Si] = ∫_{Rj} dPi(Y)
The last expression is rather ominous, but in words it is the probability that the multivariate biomarker Y, a list (vector, array) of tests, is in the region Rj given that signal Si was sent through a noisy channel, properly modeled by the probability function Pi. This P can be either continuous or discrete or both and is generally a function of unknown parameters and time. It reflects both the biological variability and the analytical variability. Since this function contains the parameters, it is where the Fisher information applies. In its crudest form, qij is just a proportion of counts. Although the latter is simple and tractable for a biologist, it is subject to losing the most information about the underlying signal and does not lend itself readily to the incorporation of time used in pathodynamics. Generally, any categorization of continuous data will result in information loss. Table 1 shows the correspondence between signals (S) and diagnoses, or decisions (D), in terms of the probability structure. If the number of decision
TABLE 1 Noisy Discrete-Signal Multichannel Communication Probability Table

                        Decision
Signal    D1        D2        …    Dl        Total
S1        π1q11     π1q12     …    π1q1l     π1
S2        π2q21     π2q22     …    π2q2l     π2
⋮         ⋮         ⋮              ⋮         ⋮
Sk        πkqk1     πkqk2     …    πkqkl     πk
Total     q1        q2        …    ql        1
classes is not equal to the number of underlying signals, inefficiencies are likely to occur. However, in biology it is not always possible to optimize this, especially if some of the signals are unknown. The mathematical objects under the control of the biologist are Y, P, R, and D. Since the rest of this book is mostly about Y, and in most cases y, a single biomarker, this chapter is mostly about P, R, and D. In other words, the biologist can optimize the qij's only when D is specified. The one exception is when the experiment can be designed so that the πi's are known and individuals can be selected to make them equally likely.

A few calculations from information theory are presented here. For the applications below, S1 will represent the "normal" state and D1 will represent the "normal" classification. The others will represent "abnormal" or pathological states and classifications. Pharmacological or other therapeutic effects will be considered abnormal states except when a cure results (i.e., the subject reverts back to the normal state). The idea of entropy comes from thermodynamics, and it has been shown that thermodynamic entropy and Shannon entropy (information) are related. There are three primary types of Shannon entropy (average uncertainty): the entropy of the source,

H(S) = −∑ πi log2 πi    (sum over i = 1, …, k)

the entropy in the receiver,

H(D) = −∑ qj log2 qj    (sum over j)

and the communication system entropy,

H(S,D) = −∑∑ πi qij log2(πi qij)    (sums over i = 1, …, k and over j)
The base 2 logarithm is used here because the units of information are in bits (binary digits). With an ideal, noise-free communication channel, H(S) = H(D) = H(S,D). This means that qii = 1 and qij = 0 for all i ≠ j.
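These entropies are direct computations from the πi's and qij's of Table 1. A sketch (Python/NumPy; the function name is an assumption of ours):

```python
import numpy as np

def entropies(pi, Q):
    """pi: prior probabilities of the k signals; Q: k-by-l matrix with
    Q[i, j] = P[Dj | Si]. Returns H(S), H(D), and H(S, D) in bits."""
    joint = pi[:, None] * Q            # pi_i * q_ij
    q = joint.sum(axis=0)              # q_j = P[Dj]
    h = lambda p: -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return h(pi), h(q), h(joint.ravel())

# Noise-free two-signal channel: H(S) = H(D) = H(S, D) = 1 bit
print(entropies(np.array([0.5, 0.5]), np.eye(2)))
```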
TABLE 2 Biological Channel Capacity (Bits) by the Number of Possible Signals Transmitted and the Probability of an Abnormal Signal Being Sent

              Probability of an Abnormal Signal (1 − π1)
Signals (k)   0.99   0.95   0.9    0.8    0.7    0.6    0.5
2             0.08   0.29   0.47   0.72   0.88   0.97   1.00
3             1.07   1.24   1.37   1.52   1.58   1.57   1.50
5             2.06   2.19   2.27   2.32   2.28   2.17   2.00
10            3.22   3.30   3.32   3.26   3.10   2.87   2.58
25            4.62   4.64   4.60   4.39   4.09   3.72   3.29
50            5.64   5.62   5.52   5.21   4.81   4.34   3.81
75            6.23   6.19   6.06   5.69   5.23   4.70   4.10
100           6.64   6.58   6.44   6.03   5.52   4.95   4.31

Signals (k)   0.4    0.3    0.2    0.1    0.05   0.01   0.001
2             0.97   0.88   0.72   0.47   0.29   0.08   0.01
3             1.37   1.18   0.92   0.57   0.34   0.09   0.01
5             1.77   1.48   1.12   0.67   0.39   0.10   0.01
10            2.24   1.83   1.36   0.79   0.44   0.11   0.01
25            2.80   2.26   1.64   0.93   0.52   0.13   0.02
50            3.22   2.57   1.84   1.03   0.57   0.14   0.02
75            3.45   2.74   1.96   1.09   0.60   0.14   0.02
100           3.62   2.87   2.05   1.13   0.62   0.15   0.02
A measure of the information transmitted is

I(S,D) = ∑∑ πi qij log2(qij / qj)    (sums over i = 1, …, k and over j)

When qij = qj, the log term is zero; no information about Si is transmitted. The channel capacity (C) is then defined as the maximum of I(S,D) over all possible values of the πi's. For a noise-free system, this maximum occurs when all signals are equally likely (i.e., C = log2 k). In the 2 × 2 case, C = 1 bit; in general, for biological systems, C = I(S,D), because the πi's are fixed by nature. Table 2 shows the maximum amount of information possible for the case where πi = (1 − π1)/(k − 1) for the abnormal signals. If the biologist has some idea of the prevalence of the signal of interest, this table can give some idea of how feasible (or futile) searching for biomarkers might be. The table shows that when several relatively rare signals are being sent via the same channel, very little information is available even in the best of conditions. Table 3 is a slightly different way of looking at the same question: it gives the values in Table 2 divided by log2 k.
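The transmitted information is an equally direct computation, and for a noise-free channel with πi = (1 − π1)/(k − 1) it reproduces the entries of Table 2. A sketch (Python/NumPy, continuing the assumed names of the previous sketch):

```python
import numpy as np

def transmitted_information(pi, Q):
    """I(S, D) in bits for priors pi and channel matrix Q[i, j] = P[Dj | Si]."""
    joint = pi[:, None] * Q
    q = joint.sum(axis=0)
    ratio = np.divide(Q, q, out=np.ones_like(Q), where=joint > 0)
    return np.sum(joint * np.log2(ratio))

# Noise-free channel, k = 3 signals, P[abnormal] = 0.5
k, p_abn = 3, 0.5
pi = np.array([1 - p_abn] + [p_abn / (k - 1)] * (k - 1))
print(transmitted_information(pi, np.eye(k)))  # 1.50 bits, matching Table 2
```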
TABLE 3 Biological Communication Efficiency (%) by the Number of Possible Signals Transmitted and the Probability of an Abnormal Signal Being Sent

              Probability of an Abnormal Signal (1 − π1)
Signals (k)   0.99   0.95   0.9    0.8    0.7    0.6    0.5
2             8.1    28.6   46.9   72.2   88.1   97.1   100.0
3             67.6   78.0   86.4   96.0   99.8   99.1   94.6
5             88.8   94.2   97.7   100.0  98.2   93.5   86.1
10            96.9   99.3   100.0  98.1   93.3   86.5   77.8
25            99.5   100.0  99.0   94.5   88.1   80.1   70.9
50            99.9   99.6   97.8   92.4   85.3   76.9   67.5
75            100.0  99.3   97.2   91.3   83.9   75.4   65.9
100           100.0  99.1   96.9   90.7   83.1   74.5   64.9

Signals (k)   0.4    0.3    0.2    0.1    0.05   0.01   0.001
2             97.1   88.1   72.2   46.9   28.6   8.1    1.1
3             86.5   74.5   58.2   35.9   21.2   5.7    0.8
5             76.3   63.8   48.3   28.8   16.6   4.3    0.6
10            67.4   55.2   40.8   23.7   13.4   3.4    0.4
25            60.4   48.6   35.3   20.0   11.1   2.7    0.3
50            57.0   45.5   32.7   18.3   10.0   2.4    0.3
75            55.5   44.1   31.5   17.5   9.6    2.3    0.3
100           54.5   43.2   30.8   17.0   9.3    2.2    0.3
Similar theory is available when S and D are continuous rather than discrete, but this does not seem relevant for biomarkers, since diseases are treated as discrete entities [23]. Some might argue that hypertension and hypercholesterolemia are continuous diseases, but under the paradigm in this chapter they are just biomarkers, for which some arguments can be made that they are surrogates for particular diseases.
MEASURES OF DIAGNOSTIC PERFORMANCE

Much has been written about this topic. In this section we point out some issues with the current wisdom, and the reader can decide if changes are needed. In addition, the impact of the measurement of time changes on Shannon information is discussed here.

In standard statistical hypothesis testing, there are two types of inference errors: type I (α), where the null hypothesis (H0) is rejected when it is true, and type II (β), where the null hypothesis (H0) is accepted when it is false. With the null hypothesis symbolized by signal 1 (S1) and the alternative hypothesis (H1) by signal 2 (S2), Table 4 shows the error relationship for the decision that signal 1 was detected (D1) or signal 2 was detected (D2).
TABLE 4 Hypothesis-Testing Outcome Probabilities

            Decision
Signal    D1        D2
S1        1 − α     α
S2        β         1 − β
TABLE 5 Two-Channel Signal Detection Probabilities

            Decision
Signal    D1(H0)        D2(H1)        P[Si]
S1        π1(1 − α)     π1α           π1
S2        π2β           π2(1 − β)     π2
P[Dj]     q1            q2            1
In customary frequentist statistical practice, α is chosen to be fixed at 0.05, and a fixed sample size (n) is estimated to attain a power = 1 − β for some specified value in the interval (0.75, 0.95). The reality in biomedical science is that the power is fudged to get a sample size that the scientist can afford. It should really be set at some conventional value like α, so that experiments are comparable and have the same probability of missing the signal. Everything in hypothesis testing is focused on controlling α and letting β float. Unless a biomarker whose characteristics are fully understood is being used to prove efficacy in a phase III trial, this is probably not the best way to evaluate the biomarker.

Table 4 looks very similar to Table 1 but includes only the qij's. Table 5 is the proper setup for evaluating the information. This is a Bayesian-like framework because it requires the prior probability for each hypothesis. If it is assumed that π1 = π2 = 0.5, then Tables 4 and 5 are equivalent, a state of ignorance in many cases. However, most biologists are not ignorant about the underlying system; a bad guess for these probabilities would probably be better for evaluating biomarkers than assuming equality, because it can be very misleading to assume that the maximum information is transmitted if, in fact, it is not (see Tables 2 and 3). Much money can be wasted when numbers are misused.

Diagnostic tests (biomarkers) are usually evaluated in terms of true positives (TPs), true negatives (TNs), false positives (FPs), and false negatives (FNs). Table 6 is a typical setup for this evaluation. Again, these are just the qij's, obtained by counting in most cases.
TABLE 6 Outcomes for a Binary-Decision Diagnostic Test

            Decision
Signal    D1(−)        D2(+)        Total
S1        TN           FP           TN + FP
S2        FN           TP           FN + TP
Total     TN + FN      FP + TP      n

The concepts of biomarker sensitivity and specificity are as follows [5]:

sensitivity = TP / (TP + FN) = 1 − β
specificity = TN / (TN + FP) = 1 − α
It is clear that these are just q22 and q11, respectively. Obviously, a biomarker such as aspartate aminotransferase (AST, SGOT) has information about many diagnoses; therefore, looking at each diagnosis one at a time seems less than optimal, could be grossly misleading, and probably should be avoided when possible. Mathematically, there is no reason to limit the diagnostic categories to two. However, with more than two outcomes, the terms sensitivity and specificity become somewhat meaningless. Perhaps a term such as D-specificity would be appropriate for the generalization, where D is replaced by the particular disease name. The D-specificity is a measure of how well the biomarkers Y detect the specific signal under the set of decision rules R. This is an important metric for the biomarker developer. However, for biomarker application the biomarker's utility must be evaluated in light of the prior probabilities of the signals being sent. A common way, preferred by clinicians, to get at this issue is through the positive and negative predictive values, PPV and NPV, respectively [5]. For the normal–abnormal case, the PPV = P[S2|D2]. Bayes' theorem can be applied to get

PPV = π2(1 − β) / (π1α + π2(1 − β))
Similarly, the NPV = P[S1|D1], which is

NPV = π1(1 − α) / (π1(1 − α) + π2β)
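Both predictive values are one-line computations once π2, α, and β are specified; the following sketch (Python; all numbers are illustrative) shows how strongly the priors matter for a rare signal:

```python
def ppv(pi2, alpha, beta):
    """Positive predictive value P[S2 | D2]; pi1 = 1 - pi2."""
    return pi2 * (1 - beta) / ((1 - pi2) * alpha + pi2 * (1 - beta))

def npv(pi2, alpha, beta):
    """Negative predictive value P[S1 | D1]."""
    return (1 - pi2) * (1 - alpha) / ((1 - pi2) * (1 - alpha) + pi2 * beta)

# A sensitive and specific test (alpha = beta = 0.05) applied to a rare signal:
print(ppv(0.01, 0.05, 0.05))  # about 0.16: most positive calls are false
print(npv(0.01, 0.05, 0.05))  # about 0.9995
```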
In comparing these formulas to Table 5, these are the D-diagonal terms divided by the sum of the corresponding column. Both will have the value 1
in a noise-free system. For more than two outcomes, these can be generalized to D-predictive values (DPV), where DiPV = P[Si|Di] and

DjPV = πj qjj / ∑ πi qij    (sum over i = 1, …, k)
This is the probability that the clinician would use if considering only one diagnosis against all others. More commonly, a clinician is considering a number of diagnoses simultaneously; this is called the differential diagnosis. A list of probabilities would be appropriate for this situation. Since only one D is chosen in the information framework, a probability for each potential signal given D can be created to arrange the differential diagnosis in order of decreasing probability. The D-differential value can be defined as DjDV = P[Si|Dj] and calculated as

DjDV = πi qij / ∑ πi qij    (sum over i = 1, …, k)
which is just the proportion for each cell in the Dj column for each S. These are the numbers that a clinician would use to order the differential diagnosis according to probability. Therefore, the context for the utility of a biomarker really depends on how it will be used.

The ROC (receiver operating characteristic) curve is a common way to evaluate biomarkers. It combines specificity and sensitivity. Figure 3 illustrates how the ROC curve is constructed.

Figure 3 Construction of diagnostic rules for various probability structures. (A) Signal S1 has prior probability 0.5, mean 0, and standard deviation 1; S2 has prior probability 0.5, mean 4, and standard deviation 1. (B) S1 has prior probability 0.5, mean 0, and standard deviation 1; S2 has prior probability 0.5, mean 0.1, and standard deviation 1. (C) S1 has prior probability 0.5, mean 0, and standard deviation 1; S2 has prior probability 0.5, mean 1, and standard deviation 1. (D) S1 has prior probability 0.9, mean 0, and standard deviation 1; S2 has prior probability 0.1, mean 4, and standard deviation 1. (See insert for color reproduction of the figure.)

The panels represent the probability density p(y) for a continuous Gaussian biomarker y. S1 has mean 0 and standard deviation 1 in all cases. For cases A and D, S2 has mean 4 and variance 1; for cases B and C, the means are 0.01 and 2, respectively. The vertical black line represents the partition determined by some optimality rule: the y values to the left of the line represent R1, and the values to the right, R2. If the signal observed is on the left side (D1), the signal is called S1; if on the right side (D2), it is called S2. The total channel noise (N) is given by

N = π1α + π2β

These are the off-diagonal terms in Table 5. One optimization rule is to choose the cut point z so that N is minimized. This leads to the relationship

π1 p1(z) / (π2 p2(z)) = 1

which can be rewritten in a simpler form
log(π1/π2) + log p1(z) − log p2(z) = 0

or, for the unit-variance Gaussian densities used here,

log(π1/π2) − z²/2 + (z − μ)²/2 = 0
The solution of this equation (z) is the point at which the density functions cross. In cases A, B, and D, the black line is placed at this point. For cases A, B, and C, π1 = π2 = 1/2; in case D, π1 = 9/10 and π2 = 1/10.
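For these equal-variance Gaussian cases, the crossing point has the closed form z = μ/2 + log(π1/π2)/μ, and α and β then follow from the normal distribution function. A sketch (Python/SciPy; illustrative only) reproducing cases A and D:

```python
import numpy as np
from scipy.stats import norm

def gaussian_cut_point(pi1, mu2):
    """Cut point z where pi1*p1(z) = pi2*p2(z), for p1 = N(0,1), p2 = N(mu2,1)."""
    return mu2 / 2.0 + np.log(pi1 / (1.0 - pi1)) / mu2

for label, pi1, mu2 in [("case A", 0.5, 4.0), ("case D", 0.9, 4.0)]:
    z = gaussian_cut_point(pi1, mu2)
    a = norm.sf(z)           # alpha = P[y > z | S1]
    b = norm.cdf(z - mu2)    # beta  = P[y < z | S2]
    print(label, round(z, 3), round(a, 4), round(b, 4),
          round(pi1 * a + (1 - pi1) * b, 4))  # total channel noise N
```

Shifting the prior toward S1 (case D) pushes the cut point to the right, trading a smaller α for a larger β, exactly as Figure 3 suggests.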
This approach is known in statistics as discriminant analysis and in computer science as supervised learning. Sometimes classification is done with logistic regression, which is equivalent, since

P[S2 | z] = π2 p2(z) / (π1 p1(z) + π2 p2(z))
          = 1 / (1 + π1 p1(z) / (π2 p2(z)))
          = 1 / (1 + exp[log(π1/π2) + log p1(z) − log p2(z)])
The cut point is the z where P[S2|z] = 1/2, which is the same as above as long as the logistic model has this form. Usually, the first log term is estimated from the data, which will give a result that is not optimal for the real prior probabilities.

The receiver operating characteristic (ROC) curve is constructed by sweeping the black line from right to left and calculating α and β at each point. The results are shown in Figure 4 for all four cases.

Figure 4 ROC curves for cases A to D.

The positively sloped diagonal line represents the case when the distributions are exactly the same. The letters for the cases in Figure 3 are placed at the optimal points, respectively. For equally likely priors, they fall on the negatively sloped diagonal line. A statistical test can be performed to see if the area under the ROC curve
Figure 4 ROC curves for cases A to D.
is significantly greater than ½. Unfortunately, this only indicates that the biomarker can detect one of two signals under ideal conditions. A better test might be one that can detect whether the length of the vector

$$\left( P_1^2(z) + \left[ 1 - P_2(z) \right]^2 \right)^{1/2}$$
from the lower right corner to the optimal point extends significantly beyond the positive diagonal line. It should be noted that this length measure depends both on the data observed and on the priors. If one of the two signals is rare, it would be very hard to show significance. This is not true for the general test of ROC area.

With respect to evaluating a biomarker, Figures 3 and 4 demonstrate some interesting aspects. Case A represents a situation where a single test discriminates well between S1 and S2 but is not noise-free. It would be difficult to do better than this in the real world. What is striking is that case B, which shows essentially no visible separation, actually has an ROC area greater than ½ that could be detected with a sufficiently large n. Typical ROC curves look more like case C, perhaps slightly better. The densities in this case show only modest separation and would not make a very impressive biomarker. However, the ROC curve area test might tend to get people excited about its prospects. If one of the signals is rare, it becomes essentially undetectable, but the ROC area test gives no indication of this.

Case C is a good candidate for adding another test to make it a multivariate biomarker. In higher dimensions, greater separation may occur (i.e., more information might be transmitted). However, the ROC curve cannot handle more than two outcomes, at least not in a visually tractable way, although collapsing cells by summing in Table 1 to the 2 × 2 case might work in some cases. This takes us back to I(S,D) as a measure of the biomarker's utility, since it works for any number of signals and any number of tests. A generalization of the discriminant function would minimize the total noise, obtained by summing the off-diagonal elements of Table 1:

$$N = \sum_{i=1}^{k} \sum_{j=1}^{k} \pi_i q_{ij} - \sum_{i=1}^{k} \pi_i q_{ii}$$
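A minimal sketch of this generalized noise calculation, with a hypothetical three-signal confusion structure (the numbers and function name are illustrative assumptions, not from the chapter):

## Total noise for a k-signal system: q[i, j] = P(decide D_j | S_i),
## prior[i] = prior probability of S_i.  Equivalently, N = 1 - sum_i prior_i q_ii.
total.noise <- function(prior, q) {
  sum(prior * q) - sum(prior * diag(q))
}

q <- matrix(c(0.8, 0.1, 0.1,
              0.2, 0.7, 0.1,
              0.1, 0.2, 0.7), nrow = 3, byrow = TRUE)
prior <- c(0.6, 0.3, 0.1)
total.noise(prior, q)   # probability of any misclassification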
This minimization would probably be difficult using calculus as before and would require a numerical analysis approach too involved to describe here. This minimization would determine the R's. Assuming that the P's are Gaussian and differ only in their means, the partitions (the edges of the R's) are a point, a line, a plane, and flat surfaces of higher dimension as the number of tests increases from one. When the other parameters differ among the P's, the surfaces are curved. Without some involved mathematics, it is difficult to know whether the minimal-noise optimization is equivalent to the maximal-information optimization. It seems entirely feasible that communication systems with the same total noise could have different diagonal elements, one of which might give the most information about all the signals in the system. The answer is unknown to the
author and is probably an open research question for biological systems. Another level of complication, but one that has more real-world relevance, is an optimization that minimizes cost. This is discussed at an elementary level elsewhere [24]. Multiplying each term in Table 1 by its cost and minimizing the total expected cost is a classical decision theory approach [25–27].

The previous discussion of diagnostic performance has said little about time effects. The time variable is contained in P and, in general, will add another dimension for each time point measured. This presents a severe dimensionality problem, both for the biologist and for the analyst, since each measurement of the biomarker on the same person creates a new dimension. If the measurement times are not the same for all cases, the entire optimization process may depend on which times are chosen. The biggest problem with dimensionality is that it usually involves a growing number of parameters. To obtain precise (high-Fisher-information) estimates of all the parameters simultaneously, the sample size requirement grows much faster than the dimension. Here is a place where invariance plays a key role. If some parameters can be shown to be biological constants analogous to physical constants, through data pooling they can be estimated once and reused going forward. If the parameters vary with time, the time function for the parameter needs to be determined and reused similarly. Functions and parameters are very compact objects for storing such reusable knowledge.

GENERAL STRATEGY FOR DEVELOPING AND USING DYNAMIC BIOMARKERS

To summarize, there are many types of information, only two of which were described here. In general, dynamic biomarkers will have more information than static biomarkers, and multivariate biomarkers will have more information than univariate biomarkers. These ideas are foreign to most biologists and will take some time to spread among them. The standard entities used to evaluate biomarkers, such as sensitivity, specificity, and ROC curves, have questionable or limited utility but can easily be modified to fit the Shannon information framework. Some steps for biomarker development and implementation are given here as a guideline; they probably contain significant gaps in knowledge, requiring further study:

1. Choose the signals (S) or the surrogates (D) that are relevant (biology).
2. Choose the best models (P) for the given S (mathematics/statistics) and all available biomarkers (Y) (biology).
3. Choose the best decision rules (R) given P (mathematics/statistics).
4. Choose the best subset, or subspace, of Y (statistics).

The mathematical entities P, R, and Y determine the diagnosis of S. Currently, most of the effort is focused on Y.
MODELING APPROACHES

Signal Types

This section deals with some very specific aspects of the models for P(Y|t). Most people think of signals as being electrical, probably because most of the terminology and its usage come from electrical engineering. However, the mathematics is completely general. Signals can be static or dynamic, meaning something measurable with a constant value or a time-varying value, respectively. Biomarkers are just biological signals. Signals are classified as analog (continuous) or digital (discrete). Birth and death are discrete signals; blood pressure and serum glucose levels are continuous signals. Dynamic signals vary over time. Time can also be classified as continuous or discrete, as described above. Anything that changes in time and space is a dynamic system. This generally implies that space has more than one dimension, but it does not have to be physical space. The space of biomarkers would be an example where each univariate biomarker (signal) defines a spatial dimension. A continuous system is modeled with a set of differential equations where the variables defining the space usually appear in more than one equation. A discrete system is modeled with a set of difference equations.

Mathematical Models of Dynamic Signals

Mathematical models are usually hypothesized prior to the experiment and then verified by the experimental data. Historically, these models have been deterministic differential equations; pharmacokinetics is an example of this. Following are some typical mathematical models (ordinary differential equations) and their solutions:

Linear model: $dx/dt = \beta_1 \;\Rightarrow\; x = \beta_0 + \beta_1 t$
Quadratic model: $dx/dt = \beta_1 + 2\beta_2 t \;\Rightarrow\; x = \beta_0 + \beta_1 t + \beta_2 t^2$
Exponential model: $dx/dt = \beta_1 x \;\Rightarrow\; x = e^{\beta_0 + \beta_1 t}$
Sigmoidal model: $dx/dt = \beta_1 x(\beta_2 - x) \;\Rightarrow\; x = \dfrac{\beta_0 \beta_2}{\beta_0 + (\beta_2 - \beta_0)e^{\beta_1 t}}$
Sine model: $dx/dt = \beta_1 \sqrt{\beta_2^2 - (x - \beta_0)^2} \;\Rightarrow\; x = \beta_0 + \beta_2 \sin \beta_1 t$
In each case, the differential equation is the hypothesized model of the velocity (first time derivative) of x, a dynamic quantity. Solving such models involves finding a method that ends up with x by itself on the left of the equal sign and a
function of time on the right. More complicated types of models are used by mathematical biologists to describe and predict the behavior of microscopic biological processes such as intracellular metabolism or nerve conduction [3].

Statistical Models

Statistical models are models that have at least one random (stochastic) variable. Going forward, mechanistic biological models will probably be a mixture of deterministic and stochastic variables. Most statistical models describe the behavior of the mean of P as a function of other variables, including time. Unless there is a specific experimental design model such as analysis of variance (ANOVA), statistical models tend to be constructed after the behavior of the data is known. A serious loss of information can occur if the wrong model is chosen. Here are some typical statistical models of data varying in time, where T is the total time of observation:

Mean model: $y_t = \mu_t + \varepsilon_t$
ANOVA model: $y_t = \mu_t + \varepsilon$
Straight-line model: $y_t = \beta_0 + \beta_1 t + \varepsilon$
Polynomial model: $y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \cdots + \varepsilon$
Trigonometric model: $y_t = \beta_0 + \beta_1 \cos\dfrac{2\pi t}{T} + \beta_2 \sin\dfrac{2\pi t}{T} + \beta_3 \cos\dfrac{4\pi t}{T} + \beta_4 \sin\dfrac{4\pi t}{T} + \cdots + \varepsilon$
Log-linear model: $y_t = e^{\beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \cdots} + \varepsilon$
Sigmoidal model: $y_t = \dfrac{\beta_0 \beta_2}{\beta_0 + (\beta_2 - \beta_0)e^{\beta_1 t}} + \varepsilon$
Sine model: $y_t = \beta_0 + \beta_2 \sin \beta_1 t + \varepsilon$
OU model: $y_t = e^{-\alpha(t-s)} y_s + \mu\left(1 - e^{-\alpha(t-s)}\right) + \sqrt{2\alpha}\,\sigma \int_s^t e^{-\alpha(t-u)}\, dB_u + \varepsilon$

In all of these models, it is generally assumed that ε is the random measurement error, or residual difference between the mean of the model and the observed y. This error is assumed to be Gaussian with mean zero and variance σ²ε, and each measurement error is statistically independent of all the others. Since the rest of the model is usually deterministic, in most fields of measurement it is the error term that induces y to be a random variable. In biology, however, there are many sources of biological variation as well. In the OU model, the biological Brownian motion B also contributes variation in a complicated way, and mechanistic interpretation is much easier through the SDE. It should be noted that all the parameters in these models are independent of time, except in the first two.
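Because the OU model has an exact Gaussian transition between any two time points, it can be simulated without discretization error. The following sketch assumes the parameterization above, in which the stationary variance is σ² and the conditional variance over a step of length Δt is σ²(1 − e^(−2αΔt)); the function name is hypothetical, not from the chapter.

## Exact simulation of the OU model: given y_s, the next value is Gaussian with
##   mean  e^(-a*dt) * y_s + mu * (1 - e^(-a*dt))
##   var   sigma^2 * (1 - e^(-2*a*dt))
r.ou <- function(m, y0, mu, alpha, sigma, dt = 1) {
  w <- exp(-alpha * dt)
  y <- numeric(m + 1)
  y[1] <- y0
  for (i in 1:m)
    y[i + 1] <- w * y[i] + mu * (1 - w) + sigma * sqrt(1 - w^2) * rnorm(1)
  y
}

set.seed(2)
path <- r.ou(m = 30, y0 = 3, mu = 3, alpha = 0.03, sigma = 0.5)
plot(0:30, path, type = "b", xlab = "Time", ylab = expression(y[t]))

With α = 3 the simulated values are effectively independent; with α = 0.03 the strong autocorrelation produces the slow, lagging paths seen in the simulated figures below.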
The "mean model" is not really a model; it represents simply the calculation of the mean and standard deviation at each time point, a typical biologist's approach. These are descriptive statistics and do not lend themselves to efficient statistical analyses. The next five models are all special cases of the linear model; that is, they are linear in the unknown parameters, not in the variables. ANOVA is a model that parameterizes the mean for each combination of the experimental factors; there are many equivalent ways to parameterize these means. The fundamental difference between the mean model and ANOVA is that for the latter the standard deviations are assumed to be equal, leading to a more precise estimate of the means if the assumption is true. In the representation of ANOVA in the example below, the cell-mean model is used, meaning that a mean is estimated for each combination, and any modeling of those means is done using contrasts of those means (shown above). Given the data and a linear model, a system of equations, one for each measurement, can be solved using linear algebra to obtain ordinary least squares (OLS) estimates of the parameters.

The last three models are nonlinear and require iterative methods to solve for the parameters, which means repeated guessing guided by mathematical techniques such as calculus. This requires a computer and a stopping rule to determine when the guess is close enough. Sometimes it is not possible, or is very difficult, to get the algorithm to find the solution. This is related to the nature of the model and the nature of the data. Statisticians prefer to avoid nonlinear models for this reason. Note that the OU model is nonlinear but can be "linearized" in special cases.

Monte Carlo Data Generation

Most software packages have random number generators. These are actually pseudorandom numbers because all of the algorithms are deterministic [28]. Monte Carlo simulations are just the generation of these numbers to suit a particular purpose. The two most common uses of Monte Carlo methods are the generation of random data to evaluate or display statistical results and the solution of deterministic equations that are too difficult to solve using mathematical methods. Commercial simulators use Monte Carlo methods as well. All of the data used in the examples below were generated by this method. Each of the example experiments used the same data sets for a given autocorrelation and response percentage, starting with the first observation in each group. The underlying probability distribution was the OU model using different means, where the control (no-effect) group (n = 500) had all constant parameters and the experimental group (n = 500) had the same parameters except that the mean followed a quadratic model in time, deviating from the controls after t = 0, with δ assigned randomly as 0 or 1, with probability p of responding to the intervention:

Parameter   Control Group   Experimental Group
μ           β0              β0 + δ(β1t + β2t²)
The values for β0, β1, and β2 were 3, 1/5, and −2/300, respectively; σ was ½; and α was either 3, to simulate independence, or 0.03, to simulate a value similar to those observed previously in liver tests [29,30]. The response proportion was varied among 1.0, 0.5, and 0.1 to show the effect of partial responses, which are common in biology. A measurement error was added with a CV of 10% (i.e., σε = 0.1μ), except that the added noise was based on the actual measured value, not the average measured value, where it was assumed that the raw data were log-Gaussian, like an enzyme [18,31]. The parameters of the quadratic response model were chosen so that the maximum is 3σ from the control group mean, halfway between the ends of the time interval. The first 50 samples in each group of the simulated data are shown in Figures 5 and 6 for α = 3 and α = 0.03, respectively.
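A sketch of this data-generating recipe is given below. It combines the quadratic mean deviation for responders with the exact OU transition shown earlier; the measurement-error step applies a multiplicative log-Gaussian error with roughly a 10% CV, which is one plausible reading of the description above. The function and argument names are hypothetical.

## One simulated group: delta ~ Bernoulli(p) marks responders, whose mean
## follows beta0 + beta1*t + beta2*t^2 after t = 0; nonresponders and controls
## stay at beta0.
sim.group <- function(n, m, p, alpha, beta = c(3, 1/5, -2/300),
                      sigma = 0.5, cv = 0.1) {
  t  <- 0:m
  w  <- exp(-alpha)                       # one time unit between measurements
  mu <- function(d) beta[1] + d * (beta[2] * t + beta[3] * t^2)
  out <- matrix(NA, n, m + 1)
  for (i in 1:n) {
    d  <- rbinom(1, 1, p)                 # responder indicator
    mi <- mu(d)
    y  <- numeric(m + 1)
    y[1] <- rnorm(1, mi[1], sigma)        # start in the stationary distribution
    for (j in 1:m)                        # exact OU transition between times
      y[j + 1] <- mi[j + 1] + w * (y[j] - mi[j]) +
                  sigma * sqrt(1 - w^2) * rnorm(1)
    out[i, ] <- y * exp(rnorm(m + 1, 0, cv))  # ~10% CV multiplicative error
  }
  out
}

set.seed(3)
ctrl <- sim.group(n = 500, m = 30, p = 0,   alpha = 0.03)
expt <- sim.group(n = 500, m = 30, p = 0.5, alpha = 0.03)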
Figure 5 Examples of the simulated data for α = 3 and responses 100%, 50%, and 10%.
Figure 6 Examples of the simulated data for α = 0.03 and responses 100%, 50%, and 10%.
In Figure 5, at 100% response, the quadratic response is very clear and diminishes with decreasing response. No statistician is needed here. In Figure 6 the signals have a very different appearance, even though it is only α that differs. The strong autocorrelation mutes the variation in the signal and causes it to lag in time. There is little visual difference among the different responses. A look at the plots might suggest that there is no signal at all, but the mean signal is exactly the same as in Figure 5. Standard statistical methods currently used in drug development may not find this signal. The examples below are intended to illustrate some of the issues. It will obviously take more sophisticated methods, not used in this chapter, to detect this signal properly under autocorrelation.

The following statistical models were used for the experimental effect, while t was set to zero at all time points on the right-hand side for the control group:
Model 1: $y_t = \mu_t + \varepsilon$
Model 2: $y_t = \gamma y_s + \mu_t + \varepsilon$
Model 3: $y_t = \gamma y_0 + \mu_t + \varepsilon$
Model 4: $y_t - y_0 = \mu_t + \varepsilon$
Model 5: $y_t - y_0 = \gamma(y_s - y_0) + \mu_t + \varepsilon$
Model 6: $y_t = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \varepsilon$
Model 7: $y_t = \gamma y_s + \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \varepsilon$
Model 8: $y_t = \gamma y_0 + \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \varepsilon$
Model 9: $y_t - y_0 = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \varepsilon$
Model 10: $y_t - y_0 = \lambda(y_s - y_0) + \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3 + \beta_4 t^4 + \varepsilon$
These models are compared in the simulated experiments below. Models 1 to 10 used the lm(·) function in R [32], which is OLS. Models 11 to 20 are the same models, respectively, except that the linear mixed-effect model lme(·) in R with restricted maximum likelihood estimation was used, allowing random baselines (intercepts). Model 1 is ANOVA; model 2 is ANOVA covariate-adjusted for the previous observation; model 3 is ANOVA covariate-adjusted for the baseline; model 4 is ANOVA for the change from baseline; and model 5 is ANOVA for the change from baseline, covariate-adjusted for the previous change from baseline. Models 6 to 10 replaced the means with fourth-degree polynomials in time. These models were chosen because they are typical of what a statistician or a biologist might apply to similar data sets to study biomarkers when the underlying time-dependence structure is not known. The focus is on the mean response, which is typical; no attempt was made to extract the biological variation component σ or the analytical variation component σε. The nonconstant variance of the residuals was also ignored.

If there is no measurement error and the biology is known to follow an OU model, model 2 or 7 would be the "best" for estimating α, μ, and σ, although transformations would be needed to get the correct estimates. For example, α = −log γ/(t − s) only if t − s is a constant, which was used below. When γ is not in the model, a value of "na" is shown in the results; when it is negative, "*" is shown. The mean would have to be transformed similarly, but this was not done in the calculation of the bias below, since it is not likely that the analyst would know the proper transformation. If the underlying process is known to be OU and the baseline of both groups has the same distribution as the control group over time, covariance adjustment for the baseline loses information about α and increases the model bias, possibly masking the signal. Modeling the change from baseline underestimates the variance because it assumes that the variance of the baseline distribution is zero. This approach might give less biased results but is likely to have p-values that are too small, meaning that false signals may be detected.
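To make the model list concrete, here is one plausible way to set up a few of these fits with lm(·) and lme(·) (the latter from the nlme package). The data frame dat and its columns are hypothetical names, not the author's code; tg is the treated-group time, set to zero for controls as described above.

library(nlme)

## Hypothetical long-format data: 36 subjects per group, baseline plus
## four follow-up times, roughly matching the in vivo design below.
set.seed(4)
dat <- data.frame(id    = rep(1:72, each = 5),
                  time  = rep(0:4, times = 72),
                  group = rep(c("ctrl", "expt"), each = 5 * 36))
dat$tg <- ifelse(dat$group == "expt", dat$time, 0)
dat$y  <- 3 + 0.2 * dat$tg - (2/300) * dat$tg^2 + rnorm(nrow(dat), 0, 0.5)

## Model 1 (ANOVA on the treated-time means) and model 6 (quartic polynomial):
m1 <- lm(y ~ factor(tg), data = dat)
m6 <- lm(y ~ tg + I(tg^2) + I(tg^3) + I(tg^4), data = dat)

## Models 11 and 16: the same fixed effects with a random intercept per
## subject, fit by restricted maximum likelihood.
m11 <- lme(y ~ factor(tg), random = ~ 1 | id, data = dat, method = "REML")
m16 <- lme(y ~ tg + I(tg^2) + I(tg^3) + I(tg^4), random = ~ 1 | id,
           data = dat, method = "REML")

## Overall test for any time effect in the treated group:
anova(update(m6, . ~ 1), m6)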
Statistical tests for signal detection in the experiments below are based on the differences in parameters between the two groups: the linear effect, quadratic effect (quad), cubic effect, and quartic effect. Hierarchical statistical testing was performed to learn about the mathematical form of the biological response. An overall test for the deviation from baseline was done first, to see if any signal was present. This is the test that counts, since it controls the type I error. The subsequent tests were secondary and were not corrected for multiple testing. Second, a test for a fourth-degree polynomial (poly) was done; if it is significant, it means that at least one coefficient in the polynomial is not zero. Then each degree of the polynomial is tested to determine the functional shape of the curve. In the simulated data, only the true quadratic term is nonzero; any other significant findings are spurious. In all cases presented below, the estimates of σ are too low, resulting in p-values that are too small.

To evaluate the Fisher information for each model, the MSE was calculated using the estimated variance (Var) of the time function integrated over time from 0 to 30 and the integrated bias using the true polynomial. This represents the mean squared area between the true time function and the estimated time function. In most experimental cases the bias can never be known, but it can be reduced using techniques such as bootstrapping [21]. When using OLS, it is assumed that the estimated parameters are unbiased. In a real statistical experiment, the behavior of the particular estimation procedure would be simulated hundreds or thousands of times and the distributions of the results would be studied. Here only a single simulation was done for illustration, and no scientific conclusion should be inferred from this presentation.

EXAMPLE EXPERIMENTS

In Vitro Experiments

This section is intended to explore the behavior of an OU process when only a few experiments are run but the response can be measured at many equally spaced time points. Assay development is not discussed here but can be found in many papers and textbooks, such as that of Burtis and Ashwood [33]. All the experiments that follow assume that the assay has been "optimized" according to the existing standards. One point that needs to be stressed here is the relationship between the true value of the biomarker μ and the variation of the measurement error σε, assuming that no assay bias is present. Many use the concept of the CV = σε/μ being constant. If this relationship really holds, the statistical analyst needs to know that, and approximately what the constant is. Prior to using a biomarker, it is important to define the mathematical relationship between σε and μ so that the most information can be gained by the modeling. It should also be noted that since μ is a function of time, the measurement variance will also be a function of time. Most statistical procedures assume that the measurement variance is constant, including those in these examples. This assumption will tend to hide signals.
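One plausible reading of the integrated-bias part of the MSE evaluation described above, using the true quadratic deviation from the simulations; the normalization over the interval and the function names are illustrative assumptions.

## True mean deviation and a fitted time function f.hat (e.g., from model 6);
## the integrated squared discrepancy over [0, 30] is the bias contribution.
true.f <- function(t) 0.2 * t - (2/300) * t^2
int.bias2 <- function(f.hat) {
  integrate(function(t) (f.hat(t) - true.f(t))^2, lower = 0, upper = 30)$value / 30
}

## Example with a hypothetical fitted curve:
f.hat <- function(t) 0.18 * t - 0.006 * t^2
int.bias2(f.hat)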
In the laboratory, it is relatively easy to obtain measurements at frequent, equally spaced time intervals. If a good biological model is available, this is the place to define the best (maximum Fisher information) mathematical model of the response. If such a model is not known, it should be explored first. This section looks at two simulation experiments comparing the size of α (3 vs. 0.03) with 100% response. Each has n = 5 experimental units per group, and each unit has m = 30 follow-up measurements. These could represent chemical reaction experiments, cell measurements, well measurements, or any similar in vitro biological model for a biomarker. The raw data are shown in Figures 5 and 6, and the statistics are shown in Tables 7a and 7b.

When α is large, the measurements are effectively independent and there should be no relationship to the previous measurements, including the baseline. However, if the baselines do not come from the same distribution, the comparability of the results is called into question. In Table 7a, only models 2, 5, and 7 gave reasonable estimates of α. In the strong autocorrelation case, Table 7b shows that models 2, 5, 7, 10, 12, 15, 17, and 20 gave estimates of α that, although not very precise, were of the correct order of magnitude. In Table 7a, every model detected the signal, the polynomial, and the quadratic effect intended. However, there are many p-values showing significance where there should not be any. This can lead to spurious conclusions about the nature of the biomarker. In the experiment shown in Table 7b, several models gave reasonable indications of the magnitude of the autocorrelation but did not always find the underlying signal. In the first case, all the biases show overestimates of the time effect, while in the second case the biases are in the opposite direction and have a larger magnitude, making the MSE larger (less information). The large negative bias is exacerbated by the fact that the time-model parameter estimates need to be divided by 1 − e^(−α(t−s)), which is always less than 1. This correction was not done because the estimates of α are not good enough to make the correction reasonable to use. Additionally, the analyst would not normally be estimating α, making these results more representative of current practice. When t − s is not constant, software for the estimation procedure is not readily available.

In Vivo Experiments

This section is meant to illustrate experiments of the size used in animal studies but may also apply to early clinical development. Here there are only m = 4 follow-up measurements, but the number per group (n = 36) was scaled up so that approximately the same number of observations is available. This results in comparable degrees of freedom for the model errors. In Tables 8a and 8b, the results are analogous to those in Tables 7a and 7b. However, in Table 8a the estimates of α are generally all bad, leading to the conclusion that autocorrelation is present when it really is not.
TABLE 7a  Statistical Results for α = 3, p = 1, m = 30, and n = 5

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    240       0.59  0.00  0.00  0.08    0.00  0.54   0.29     0.0110  0.2746   0.0864
2      1.29  239       0.57  0.00  0.00  0.06    0.00  0.52   0.44     0.0111  0.1452   0.0322
3      *     239       0.50  0.00  0.00  0.02    0.00  0.67   0.34     0.0080  0.2802   0.0865
4      na    240       0.50  0.00  0.00  0.02    0.00  0.64   0.32     0.0080  0.2795   0.0861
5      6.39  239       0.50  0.00  0.00  0.02    0.00  0.64   0.32     0.0090  0.2787   0.0867
6      na    295       0.57  0.00  0.00  0.00    0.00  0.01   0.02     0.0040  0.2430   0.0630
7      1.58  294       0.56  0.00  0.00  0.00    0.00  0.03   0.04     0.0091  −0.0166  0.0093
8      *     294       0.50  0.00  0.00  0.00    0.01  0.06   0.06     0.0035  0.0381   0.0050
9      na    295       0.50  0.00  0.00  0.00    0.00  0.04   0.05     0.0031  0.0626   0.0070
10     *     294       0.50  0.00  0.00  0.00    0.00  0.04   0.05     0.0070  0.0993   0.0169
11     na    231       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0080  0.2746   0.0834
12     *     230       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0090  0.2767   0.0856
13     *     9         0.50  0.00  0.00  0.00    0.00  0.23   0.02     0.0080  0.2802   0.0865
14     na    231       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0080  0.2795   0.0861
15     *     230       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0090  0.2819   0.0884
16     na    286       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0233  0.2307   0.0766
17     *     285       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0308  0.2905   0.1152
18     *     286       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0040  0.0397   0.0056
19     na    286       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0036  0.0640   0.0077
20     *     286       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0079  0.1103   0.0201

Note: Columns All through Quartic are p-values. na, not available; *, negative estimate.
TABLE 7b  Statistical Results for α = 0.03, p = 1, m = 30, and n = 5

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    240       0.51  0.00  0.00  0.00    0.24  0.77   0.45     0.0085  −0.5978  0.3658
2      0.02  239       0.12  0.23  0.20  0.13    0.27  0.13   0.32     0.0005  −0.2610  0.0686
3      0.61  239       0.44  0.00  0.00  0.00    0.19  0.69   0.41     0.0062  −0.5967  0.3623
4      na    240       0.49  0.00  0.00  0.00    0.25  0.69   0.49     0.0078  −0.5959  0.3629
5      0.01  239       0.12  0.32  0.15  0.08    0.29  0.13   0.34     0.0005  −0.2578  0.0669
6      na    295       0.47  0.00  0.00  0.00    0.03  0.08   0.11     0.0028  −0.2534  0.0670
7      0.02  294       0.12  0.02  0.02  0.04    0.10  0.14   0.15     0.0003  −0.9592  0.9204
8      0.61  294       0.41  0.00  0.00  0.00    0.03  0.07   0.09     0.0021  −0.2905  0.0865
9      na    295       0.45  0.00  0.00  0.00    0.07  0.13   0.16     0.0025  −0.3215  0.1059
10     0.01  294       0.12  0.03  0.03  0.05    0.12  0.16   0.16     0.0003  −0.9659  0.9333
11     na    231       0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0023  −0.5978  0.3596
12     0.07  230       0.12  0.00  0.00  0.00    0.00  0.00   0.00     0.0005  −0.2787  0.0781
13     0.61  9         0.26  0.00  0.00  0.00    0.00  0.12   0.00     0.0023  −0.5967  0.3584
14     na    231       0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0023  −0.5959  0.3573
15     0.08  230       0.12  0.00  0.00  0.22    0.00  0.00   0.00     0.0005  −0.2809  0.0794
16     na    286       0.25  0.00  0.00  0.00    0.00  0.00   0.00     0.0129  −0.5409  0.3055
17     0.10  285       0.12  0.00  0.00  0.00    0.00  0.00   0.00     0.0011  −0.9411  0.8869
18     0.56  286       0.25  0.00  0.00  0.00    0.00  0.00   0.00     0.0122  −0.5327  0.2960
19     na    286       0.25  0.00  0.00  0.00    0.00  0.00   0.00     0.0126  −0.5461  0.3108
20     0.10  286       0.25  0.00  0.00  0.00    0.00  0.00   0.00     0.0012  −0.9470  0.8980

Note: Columns All through Quartic are p-values. na, not available.
TABLE 8a  Statistical Results for α = 3, p = 1, m = 4, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    280       0.66  0.00  0.00  0.05    0.00  0.43   0.34     0.0024  −0.2068  0.0451
2      0.09  279       0.58  0.00  0.00  0.00    0.00  0.47   0.05     0.0021  −0.0452  0.0041
3      0.00  279       0.51  0.00  0.00  0.05    0.00  0.21   0.26     0.0014  −0.2013  0.0420
4      na    280       0.51  0.00  0.00  0.05    0.00  0.21   0.25     0.0014  −0.2013  0.0420
5      0.27  279       0.51  0.00  0.00  1.00    0.00  0.47   0.15     0.0018  −0.1574  0.0266
6      na    283       0.66  0.00  0.00  0.00    0.11  0.23   0.22     0.0054  0.0124   0.0056
7      0.64  282       0.57  0.00  0.00  0.00    0.00  0.00   0.01     0.0062  −0.4316  0.1925
8      0.00  282       0.51  0.00  0.00  0.00    0.06  0.16   0.15     0.0033  −0.0717  0.0085
9      na    283       0.51  0.00  0.00  0.00    0.06  0.16   0.14     0.0033  −0.0718  0.0084
10     0.27  282       0.51  0.00  0.00  0.00    0.02  0.06   0.06     0.0056  −0.1825  0.0389
11     na    209       0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0013  −0.2068  0.0441
12     0.09  208       0.58  0.00  0.00  0.00    0.00  0.00   0.00     0.0021  −0.0452  0.0041
13     0.00  71        0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0013  −0.2013  0.0418
14     na    209       0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0013  −0.2013  0.0418
15     1.92  208       0.51  0.00  0.00  0.97    0.00  0.00   0.00     0.0018  −0.1574  0.0266
16     na    212       0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0124  0.0124   0.0126
17     0.09  211       0.57  0.00  0.00  0.00    0.00  0.00   0.00     0.0062  −0.4316  0.1925
18     0.00  212       0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0042  −0.0717  0.0093
19     na    212       0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0040  −0.0718  0.0092
20     0.27  212       0.49  0.00  0.00  0.00    0.00  0.00   0.00     0.0056  −0.1825  0.0389

Note: Columns All through Quartic are p-values. na, not available.
TABLE 8b  Statistical Results for α = 0.03, p = 1, m = 4, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    280       0.62  0.00  0.00  0.00    0.26  0.49   0.71     0.0021  −0.4556  0.2096
2      0.01  279       0.31  0.00  0.00  0.92    0.00  0.95   0.15     0.0005  −0.3520  0.1244
3      0.02  279       0.43  0.00  0.00  0.00    0.11  0.33   0.59     0.0010  −0.4556  0.2086
4      na    280       0.44  0.00  0.00  0.00    0.11  0.33   0.60     0.0010  −0.4556  0.2086
5      0.02  279       0.31  0.00  0.00  1.00    0.00  0.94   0.16     0.0006  −0.3535  0.1255
6      na    283       0.61  0.00  0.00  0.86    0.70  0.78   0.86     0.0047  −0.6992  0.4935
7      0.01  282       0.31  0.00  0.00  0.93    0.59  0.52   0.55     0.0012  −0.8789  0.7737
8      0.02  282       0.43  0.00  0.00  0.80    0.59  0.69   0.80     0.0023  −0.6991  0.4910
9      na    283       0.44  0.00  0.00  0.80    0.59  0.69   0.80     0.0024  −0.6991  0.4910
10     0.02  282       0.31  0.00  0.00  0.94    0.59  0.53   0.56     0.0013  −0.8757  0.7682
11     na    209       0.27  0.00  0.00  0.00    0.00  0.00   0.00     0.0005  −0.4556  0.2080
12     0.01  208       0.31  0.00  0.00  0.15    0.00  0.37   0.00     0.0005  −0.3520  0.1244
13     0.02  71        0.27  0.00  0.00  0.00    0.00  0.00   0.00     0.0004  −0.4556  0.2080
14     na    209       0.27  0.00  0.00  0.00    0.00  0.00   0.00     0.0004  −0.4556  0.2080
15     0.04  208       0.29  0.00  0.00  0.00    0.00  0.00   0.00     0.0005  −0.3704  0.1377
16     na    212       0.27  0.00  0.00  0.00    0.00  0.00   0.00     0.0155  −0.6992  0.5043
17     0.01  211       0.31  0.00  0.00  0.21    0.00  0.00   0.00     0.0012  −0.8789  0.7737
18     0.02  212       0.27  0.00  0.00  0.00    0.00  0.00   0.00     0.0064  −0.6991  0.4951
19     na    212       0.27  0.00  0.00  0.00    0.00  0.00   0.00     0.0065  −0.6991  0.4952
20     0.05  212       0.27  0.00  0.00  0.91    0.00  0.00   0.00     0.0020  −0.8409  0.7092

Note: Columns All through Quartic are p-values. na, not available.
The quadratic signal is missed in many models in Table 8b. A second design was analyzed in which there was only one follow-up time (m = 1). This is typical of designs that incorporate time but want independent measurements. Comparing Tables 8e to 8h with their counterparts, Tables 8a to 8d, respectively, the signals detected are much weaker, and, of course, there is no information about α, even though estimates were calculated from the regression coefficients for the baseline term.
Clinical Trials

Clinical trials generally have much larger sample sizes, especially phase III trials. In this section the same designs and models are used, but the sample size is increased. In Tables 9a and 9b, the response is 50% in the experimental treatment group and n = 200. In Tables 9c and 9d, the response is only 10% in the experimental group and n = 500. The latter case is more typical of clinical safety data.

With larger sample sizes, the properties of the models would be expected to improve. The p-values may get smaller, and the MSEs, at least the variance component, should be reduced, because more samples should produce more information. Here the bias seems to be unaffected for both response categories. This generally means that if the wrong model is chosen, more measurements will not make it better. Tables 9a and 9b look very similar to those above, but Tables 9c and 9d show some notable features. First, because the models do not estimate the proportion p of responders, the models for the experimental treatment group are a weighted average of 10% quadratic response and 90% no response. This should cause the bias to increase, which it generally does. When autocorrelation is present and the response rate is low (Table 9d), the signal is lost completely (i.e., no information is available). It remains to be seen whether better statistical procedures can find this signal.
DISCUSSION

Overview of Results

Biomarker experiments with repeated measures over time do not by themselves ensure that additional information will be obtained, even though theoretically it is guaranteed. The experimental design has to be correct, the biomathematical model of the time response has to be correct, and the statistical modeling procedure must be an efficient estimator of that model. If any one of these parts is broken, information can be lost or destroyed completely. For efficacy biomarkers, this means wasted money or missed opportunities. For safety biomarkers, this leads to late attrition or a market recall.
TABLE 8c  Statistical Results for α = 3, p = 0.5, m = 4, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    280       0.74  0.00  0.00  0.34    0.00  0.40   0.30     0.0030  −0.2865  0.0851
2      0.10  279       0.66  0.00  0.00  0.08    0.00  0.58   0.09     0.0025  −0.1953  0.0406
3      0.00  279       0.54  0.00  0.00  0.27    0.00  0.22   0.16     0.0016  −0.2847  0.0826
4      na    280       0.54  0.00  0.00  0.27    0.00  0.22   0.16     0.0016  −0.2847  0.0826
5      *     279       0.54  0.00  0.00  0.06    0.00  0.17   0.21     0.0017  −0.3065  0.0957
6      na    283       0.74  0.00  0.00  0.04    0.17  0.21   0.19     0.0068  −0.5039  0.2607
7      0.10  282       0.66  0.00  0.00  0.00    0.01  0.02   0.02     0.0060  −0.7036  0.5010
8      0.00  282       0.54  0.00  0.00  0.01    0.07  0.10   0.08     0.0036  −0.5322  0.2869
9      na    283       0.54  0.00  0.00  0.01    0.07  0.09   0.08     0.0036  −0.5331  0.2878
10     *     282       0.54  0.00  0.00  0.02    0.15  0.17   0.15     0.0043  −0.4872  0.2417
11     na    209       0.56  0.00  0.00  0.00    0.00  0.00   0.00     0.0017  −0.2865  0.0838
12     *     208       0.55  0.00  0.00  0.00    0.00  0.00   0.00     0.0019  −0.2904  0.0862
13     0.00  71        0.54  0.00  0.00  0.00    0.00  0.00   0.00     0.0016  −0.2847  0.0826
14     na    209       0.54  0.00  0.00  0.00    0.00  0.00   0.00     0.0016  −0.2847  0.0826
15     *     208       0.54  0.00  0.00  0.00    0.00  0.00   0.00     0.0017  −0.3101  0.0979
16     na    212       0.55  0.00  0.00  0.00    0.00  0.00   0.00     0.0155  −0.5039  0.2694
17     *     211       0.55  0.00  0.00  0.00    0.00  0.00   0.00     0.0175  −0.4869  0.2546
18     0.00  212       0.54  0.00  0.00  0.00    0.00  0.00   0.00     0.0036  −0.5322  0.2869
19     na    212       0.54  0.00  0.00  0.00    0.00  0.00   0.00     0.0036  −0.5331  0.2878
20     *     212       0.54  0.00  0.00  0.00    0.00  0.00   0.00     0.0046  −0.4772  0.2323

Note: Columns All through Quartic are p-values. na, not available; *, negative estimate.
TABLE 8d  Statistical Results for α = 0.03, p = 0.5, m = 4, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    280       0.61  0.00  0.00  0.01    0.29  0.93   0.78     0.0020  −0.4389  0.1946
2      0.01  279       0.30  0.00  0.00  0.49    0.01  0.09   0.71     0.0005  −0.3479  0.1215
3      0.00  279       0.42  0.00  0.00  0.00    0.07  0.99   0.72     0.0009  −0.4412  0.1956
4      na    280       0.42  0.00  0.00  0.00    0.07  0.99   0.72     0.0009  −0.4413  0.1957
5      0.01  279       0.30  0.00  0.00  0.77    0.01  0.11   0.70     0.0005  −0.3533  0.1253
6      na    283       0.61  0.00  0.00  0.67    0.93  0.95   0.94     0.0045  −0.7176  0.5195
7      0.01  282       0.30  0.00  0.00  0.14    0.39  0.52   0.58     0.0011  −0.8847  0.7838
8      0.00  282       0.42  0.00  0.00  0.40    0.80  0.86   0.86     0.0022  −0.6812  0.4662
9      na    283       0.42  0.00  0.00  0.40    0.80  0.86   0.86     0.0021  −0.6804  0.4651
10     0.02  282       0.30  0.00  0.00  0.14    0.40  0.53   0.59     0.0012  −0.8706  0.7591
11     na    209       0.26  0.00  0.00  0.00    0.00  0.02   0.00     0.0004  −0.4389  0.1930
12     0.01  208       0.30  0.00  0.00  0.00    0.00  0.00   0.00     0.0005  −0.3479  0.1215
13     0.00  71        0.26  0.00  0.00  0.00    0.00  0.91   0.00     0.0004  −0.4412  0.1951
14     na    209       0.26  0.00  0.00  0.00    0.00  0.80   0.00     0.0004  −0.4413  0.1951
15     0.02  208       0.29  0.00  0.00  0.98    0.00  0.00   0.00     0.0005  −0.3573  0.1281
16     na    212       0.26  0.00  0.00  0.00    0.01  0.06   0.02     0.0151  −0.7176  0.5300
17     0.01  211       0.30  0.00  0.00  0.00    0.00  0.00   0.00     0.0011  −0.8847  0.7838
18     0.00  212       0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0059  −0.6812  0.4700
19     na    212       0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0058  −0.6804  0.4688
20     0.03  212       0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0016  −0.8489  0.7222

Note: Columns All through Quartic are p-values. na, not available.
TABLE 8e  Statistical Results for α = 3, p = 1, m = 1, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    64        0.65  0.00  0.00  0.28    0.00  0.27   0.07     0.0090  −0.1718  0.0385
3      0.01  63        0.52  0.00  0.00  0.27    0.00  0.20   0.02     0.0058  −0.1726  0.0355
4      na    64        0.52  0.00  0.00  0.28    0.00  0.20   0.02     0.0057  −0.1726  0.0355
6      na    67        0.65  0.00  0.00  0.08    0.39  0.48   0.46     0.0212  0.0054   0.0212
8      0.02  66        0.55  0.00  0.00  0.07    0.50  0.69   0.70     0.0155  −0.0788  0.0217
9      na    67        0.55  0.00  0.00  0.08    0.54  0.74   0.77     0.0152  −0.0946  0.0242
11     na    64        0.23  0.00  0.00  0.00    0.00  0.00   0.00     0.0090  −0.1718  0.0385
13     0.01  63        0.18  0.00  0.00  0.00    0.00  0.00   0.00     0.0058  −0.1726  0.0355
14     na    64        0.18  0.00  0.00  0.00    0.00  0.00   0.00     0.0057  −0.1726  0.0355
16     na    67        0.23  0.00  0.00  0.00    0.00  0.00   0.00     0.0212  0.0054   0.0212
18     0.02  66        0.19  0.00  0.00  0.00    0.00  0.00   0.00     0.0155  −0.0788  0.0217
19     na    67        0.19  0.00  0.00  0.00    0.00  0.01   0.02     0.0152  −0.0946  0.0242

Note: Columns All through Quartic are p-values. na, not available.
TABLE 8f  Statistical Results for α = 0.03, p = 1, m = 1, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    64        0.59  0.08  0.08  0.05    0.51  0.35   0.67     0.0074  −0.5025  0.2599
3      0.03  63        0.42  0.01  0.01  0.04    0.88  0.33   0.77     0.0038  −0.4510  0.2072
4      na    64        0.43  0.02  0.02  0.07    0.68  0.38   0.83     0.0039  −0.4378  0.1956
6      na    67        0.60  0.16  0.16  0.97    0.71  0.64   0.61     0.0177  −0.6966  0.5030
8      0.03  66        0.42  0.01  0.01  0.87    0.77  0.67   0.64     0.0086  −0.6912  0.4864
9      na    67        0.42  0.02  0.02  0.84    0.81  0.72   0.68     0.0089  −0.6901  0.4851
11     na    64        0.21  0.00  0.00  0.00    0.00  0.00   0.00     0.0074  −0.5025  0.2599
13     0.03  63        0.15  0.00  0.00  0.00    0.24  0.00   0.02     0.0038  −0.4510  0.2072
14     na    64        0.15  0.00  0.00  0.00    0.00  0.00   0.10     0.0039  −0.4378  0.1956
16     na    67        0.21  0.00  0.00  0.80    0.00  0.00   0.00     0.0177  −0.6966  0.5030
18     0.03  66        0.15  0.00  0.00  0.19    0.02  0.00   0.00     0.0086  −0.6912  0.4864
19     na    67        0.15  0.00  0.00  0.10    0.05  0.00   0.00     0.0089  −0.6901  0.4851

Note: Columns All through Quartic are p-values. na, not available.
TABLE 8g  Statistical Results for α = 3, p = 0.5, m = 1, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    64        0.88  0.41  0.41  0.37    0.74  0.81   0.27     0.0167  −0.3866  0.1662
3      *     63        0.58  0.09  0.09  0.97    0.12  0.86   0.09     0.0074  −0.2748  0.0829
4      na    64        0.59  0.10  0.10  0.77    0.18  0.83   0.10     0.0075  −0.2965  0.0954
6      na    67        0.88  0.24  0.24  0.46    0.51  0.48   0.43     0.0384  −0.6445  0.4539
8      *     66        0.59  0.02  0.02  0.53    0.72  0.72   0.67     0.0173  −0.6881  0.4908
9      na    67        0.60  0.03  0.03  0.48    0.64  0.63   0.57     0.0178  −0.6798  0.4799
11     na    64        0.31  0.00  0.00  0.00    0.01  0.05   0.00     0.0167  −0.3866  0.1662
13     *     63        0.20  0.00  0.00  0.75    0.00  0.15   0.00     0.0074  −0.2748  0.0829
14     na    64        0.21  0.00  0.00  0.02    0.00  0.09   0.00     0.0075  −0.2965  0.0954
16     na    67        0.31  0.00  0.00  0.00    0.00  0.00   0.00     0.0384  −0.6445  0.4539
18     *     66        0.21  0.00  0.00  0.00    0.01  0.01   0.00     0.0173  −0.6881  0.4908
19     na    67        0.21  0.00  0.00  0.00    0.00  0.00   0.00     0.0178  −0.6798  0.4799

Note: Columns All through Quartic are p-values. na, not available; *, negative estimate.
TABLE 8h  Statistical Results for α = 0.03, p = 0.5, m = 1, and n = 36

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    64        0.62  0.17  0.17  0.10    0.67  0.88   0.21     0.0082  −0.4546  0.2149
3      0.00  63        0.43  0.01  0.01  0.00    0.84  0.38   0.73     0.0040  −0.4825  0.2368
4      na    64        0.43  0.01  0.01  0.00    0.84  0.38   0.73     0.0039  −0.4823  0.2366
6      na    67        0.61  0.16  0.16  0.08    0.10  0.10   0.11     0.0184  −0.6983  0.5061
8      0.01  66        0.44  0.04  0.04  0.36    0.59  0.68   0.73     0.0096  −0.6946  0.4921
9      na    67        0.44  0.04  0.04  0.40    0.65  0.74   0.80     0.0095  −0.6944  0.4917
11     na    64        0.22  0.00  0.00  0.00    0.00  0.23   0.00     0.0082  −0.4546  0.2149
13     0.00  63        0.15  0.00  0.00  0.00    0.11  0.00   0.01     0.0040  −0.4825  0.2368
14     na    64        0.15  0.00  0.00  0.00    0.11  0.00   0.01     0.0039  −0.4823  0.2366
16     na    67        0.21  0.00  0.00  0.00    0.00  0.00   0.00     0.0184  −0.6983  0.5061
18     0.01  66        0.15  0.00  0.00  0.00    0.00  0.00   0.01     0.0096  −0.6946  0.4921
19     na    67        0.15  0.00  0.00  0.00    0.00  0.01   0.04     0.0095  −0.6944  0.4917

Note: Columns All through Quartic are p-values. na, not available.
TABLE 9a  Statistical Results for α = 3, p = 0.5, m = 4, and n = 200

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    1592      0.72  0.00  0.00  0.47    0.00  0.78   0.64     0.0005  −0.2648  0.0706
2      0.08  1591      0.61  0.00  0.00  0.00    0.00  0.00   0.07     0.0004  −0.1876  0.0355
3      0.00  1591      0.51  0.00  0.00  0.67    0.00  0.87   0.57     0.0003  −0.2626  0.0692
4      na    1592      0.51  0.00  0.00  0.69    0.00  0.88   0.57     0.0003  −0.2625  0.0692
5      *     1591      0.51  0.00  0.00  0.64    0.00  0.90   0.58     0.0003  −0.2630  0.0694
6      na    1595      0.72  0.00  0.00  0.00    0.07  0.23   0.26     0.0012  −0.4804  0.2319
7      0.08  1594      0.61  0.00  0.00  0.00    0.00  0.00   0.00     0.0009  −0.7330  0.5383
8      0.00  1594      0.51  0.00  0.00  0.00    0.02  0.12   0.14     0.0006  −0.5147  0.2655
9      na    1595      0.51  0.00  0.00  0.00    0.02  0.12   0.14     0.0006  −0.5156  0.2665
10     *     1594      0.51  0.00  0.00  0.00    0.02  0.13   0.15     0.0007  −0.5142  0.2651
11     na    1193      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0003  −0.2648  0.0704
12     0.08  1192      0.61  0.00  0.00  0.00    0.00  0.00   0.00     0.0004  −0.1876  0.0355
13     0.00  399       0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0003  −0.2626  0.0692
14     na    1193      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0003  −0.2625  0.0692
15     *     1192      0.51  0.00  0.00  0.00    0.00  0.02   0.00     0.0003  −0.2643  0.0701
16     na    1196      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0028  −0.4804  0.2336
17     0.08  1195      0.61  0.00  0.00  0.00    0.00  0.00   0.00     0.0009  −0.7330  0.5383
18     0.00  1196      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0006  −0.5147  0.2655
19     na    1196      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0006  −0.5156  0.2665
20     *     1196      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0007  −0.5109  0.2617

Note: Columns All through Quartic are p-values. na, not available; *, negative estimate.
TABLE 9b  Statistical Results for α = 0.03, p = 0.5, m = 4, and n = 200

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    1592      0.64  0.00  0.00  0.00    0.43  0.60   0.87     0.0004  −0.4250  0.1810
2      0.01  1591      0.30  0.00  0.00  0.53    0.00  0.10   0.05     0.0001  −0.3517  0.1238
3      0.00  1591      0.41  0.00  0.00  0.00    0.01  0.96   0.61     0.0002  −0.4307  0.1856
4      na    1592      0.41  0.00  0.00  0.00    0.01  0.96   0.61     0.0002  −0.4307  0.1856
5      0.02  1591      0.30  0.00  0.00  0.42    0.00  0.12   0.06     0.0001  −0.3603  0.1299
6      na    1595      0.64  0.00  0.00  0.70    0.90  0.82   0.74     0.0009  −0.8348  0.6977
7      0.01  1594      0.30  0.00  0.00  0.00    0.01  0.02   0.02     0.0002  −0.9035  0.8165
8      0.00  1594      0.41  0.00  0.00  0.06    0.38  0.41   0.37     0.0004  −0.7474  0.5589
9      na    1595      0.41  0.00  0.00  0.06    0.38  0.41   0.37     0.0004  −0.7475  0.5591
10     0.02  1594      0.30  0.00  0.00  0.00    0.01  0.02   0.02     0.0002  −0.8827  0.7793
11     na    1193      0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.4250  0.1807
12     0.01  1192      0.30  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3517  0.1238
13     0.00  399       0.26  0.00  0.00  0.00    0.00  0.20   0.00     0.0001  −0.4307  0.1855
14     na    1193      0.26  0.00  0.00  0.00    0.00  0.02   0.00     0.0001  −0.4307  0.1855
15     0.03  1192      0.30  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3641  0.1326
16     na    1196      0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0030  −0.8348  0.6999
17     0.01  1195      0.30  0.00  0.00  0.00    0.00  0.00   0.00     0.0002  −0.9035  0.8165
18     0.00  1196      0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0010  −0.7474  0.5596
19     na    1196      0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0010  −0.7475  0.5598
20     0.04  1196      0.26  0.00  0.00  0.00    0.00  0.00   0.00     0.0003  −0.8708  0.7586

Note: Columns All through Quartic are p-values. na, not available.
TABLE 9c  Statistical Results for α = 3, p = 0.1, m = 4, and n = 500

Model  α     Error df  σ     All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    3992      0.72  0.00  0.00  0.09    0.00  0.46   0.47     0.0002  −0.3766  0.1420
2      0.08  3991      0.61  0.00  0.00  0.47    0.00  0.92   0.18     0.0001  −0.3597  0.1296
3      0.00  3991      0.50  0.00  0.00  0.10    0.00  0.17   0.26     0.0001  −0.3748  0.1406
4      na    3992      0.50  0.00  0.00  0.10    0.00  0.17   0.26     0.0001  −0.3749  0.1406
5      *     3991      0.50  0.00  0.00  0.08    0.00  0.16   0.27     0.0001  −0.3754  0.1411
6      na    3995      0.72  0.00  0.00  0.88    0.38  0.31   0.32     0.0005  −0.8959  0.8031
7      0.08  3994      0.61  0.00  0.00  0.94    0.34  0.20   0.16     0.0003  −0.9501  0.9030
8      0.00  3994      0.50  0.00  0.00  0.46    0.13  0.10   0.12     0.0002  −0.9236  0.8532
9      na    3995      0.50  0.00  0.00  0.47    0.13  0.10   0.12     0.0002  −0.9231  0.8523
10     *     3994      0.50  0.00  0.00  0.46    0.12  0.10   0.12     0.0002  −0.9217  0.8497
11     na    2993      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3766  0.1419
12     0.08  2992      0.61  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3597  0.1296
13     0.00  999       0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3748  0.1406
14     na    2993      0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3749  0.1406
15     *     2992      0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.3754  0.1411
16     na    2996      0.51  0.00  0.00  0.00    0.00  0.00   0.00     0.0011  −0.8959  0.8037
17     0.08  2995      0.61  0.00  0.00  0.00    0.00  0.00   0.00     0.0003  −0.9501  0.9030
18     0.00  2996      0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0002  −0.9236  0.8532
19     na    2996      0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0002  −0.9231  0.8523
20     *     2996      0.50  0.00  0.00  0.00    0.00  0.00   0.00     0.0002  −0.9217  0.8497

Note: Columns All through Quartic are p-values. na, not available; *, negative estimate.
TABLE 9d  Statistical Results for α = 0.03, p = 0.1, m = 4, and n = 500

Model  α     Error df  σ       All   Poly  Linear  Quad  Cubic  Quartic  Var     Bias     MSE
1      na    3992      0.6365  0.81  0.81  0.53    0.74  0.87   0.98     0.0002  −0.3829  0.1468
2      0.01  3991      0.3033  0.86  0.86  0.83    0.69  0.43   0.87     0.0000  −0.3777  0.1427
3      0.00  3991      0.3978  0.43  0.43  0.33    0.61  0.80   0.97     0.0001  −0.3828  0.1466
4      na    3992      0.3978  0.43  0.43  0.33    0.61  0.80   0.97     0.0001  −0.3828  0.1466
5      0.03  3991      0.299   0.81  0.81  0.99    0.66  0.46   0.89     0.0000  −0.3784  0.1432
6      na    3995      0.6362  0.73  0.73  0.88    0.80  0.79   0.79     0.0004  −0.9788  0.9583
7      0.01  3994      0.3033  0.77  0.77  0.67    0.49  0.42   0.39     0.0001  −0.9938  0.9878
8      0.00  3994      0.3977  0.29  0.29  0.80    0.67  0.66   0.67     0.0001  −0.9796  0.9597
9      na    3995      0.3977  0.29  0.29  0.80    0.67  0.66   0.66     0.0001  −0.9796  0.9597
10     0.03  3994      0.299   0.71  0.71  0.67    0.50  0.44   0.41     0.0001  −0.9919  0.9839
11     na    2993      0.2621  0.00  0.00  0.00    0.00  0.00   0.00     0.0000  −0.3829  0.1466
12     0.01  2992      0.3033  0.00  0.00  0.00    0.00  0.00   0.00     0.0000  −0.3777  0.1427
13     0.00  999       0.2621  0.00  0.00  0.00    0.00  0.00   0.04     0.0000  −0.3828  0.1466
14     na    2993      0.2621  0.00  0.00  0.00    0.00  0.00   0.00     0.0000  −0.3828  0.1466
15     0.03  2992      0.299   0.00  0.00  0.59    0.00  0.00   0.00     0.0000  −0.3784  0.1432
16     na    2996      0.262   0.00  0.00  0.00    0.00  0.00   0.00     0.0012  −0.9788  0.9592
17     0.01  2995      0.3033  0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.9938  0.9878
18     0.00  2996      0.262   0.00  0.00  0.00    0.00  0.00   0.00     0.0004  −0.9796  0.9599
19     na    2996      0.262   0.00  0.00  0.00    0.00  0.00   0.00     0.0004  −0.9796  0.9599
20     0.03  2996      0.262   0.00  0.00  0.00    0.00  0.00   0.00     0.0001  −0.9919  0.9839

Note: Columns All through Quartic are p-values. na, not available.
Translational Aspects of Pathodynamics

Signals are relatively easy to isolate and model under in vitro conditions, due mostly to the ability to control the environment and to the lack of exposure to the biological system network. If the biomarker cannot be characterized and modeled in the laboratory, the chance of getting meaningful information in vivo is slim to none. Once a laboratory model is established, an animal model can be chosen and the pathodynamics work starts all over again until the biomarker can be characterized and modeled in vivo. As all biologists know, just because it works in one species does not mean it will work in another species. Before going into humans, a pathodynamic model for the biomarker should be studied in several species. The similarities and differences in these interspecies results should provide guidance about the applicability in humans.

For translation across species, the pathodynamic models must have some characteristics that are invariant. Without this invariance, all the preclinical work will probably be a waste of time and money. Mathematical physics would not exist without laws of invariance such as conservation of mass and energy. The same will probably hold true in biology. In the context of this chapter, P (probability structure), Y (biomarkers), and D (disease or decision space) have to be invariant in some sense. Having invariance in P is not as strong a requirement as it seems. The simplest case is that the distribution type is the same but the parameters of the distribution model have species-specific variation. A more complicated type of invariance is that the topologies of the interspecies probability spaces are equivalent. This just means that any "physical" distortion of the response distribution does not create or remove any holes. At a greater extreme, a type of invariance would be present if there is a one-for-one matching of probability objects between species (i.e., when a particular probability object is present in one species with disease, or response, D, there is always a probability object, not necessarily similar, in another species that represents D). The bottom line for biologists is that this will probably require more mathematics than most can presently do.

This approach to translation is relatively worthless without laboratory standards. When biomarkers are developed and applied, either the exact assay methods and standards must be applied in every experiment or there needs to be a mathematical transformation that makes them equivalent. Currently, in the clinic, these standards do not exist. Therefore, preclinical experiments using the same methods may work fine, but when the clinical trial is run, variation in methods and sample handling may distort or destroy the information.
Future Needs in Method Development

Biologists need to get involved directly in pathodynamics to get an efficient merger of the biology and the mathematics. It is a rare mathematician or
statistician that has biology training and intuition. Once the biologist gets involved in developing the models, progress will accelerate. Remember, the OU model is basically the simplest case of a pathodynamic model. As illustrated in the examples above, simpler models will have information loss. Therefore, standard experimental design and analysis may not be sufficient. The second issue is whether the OU model is correct. Preliminary research suggests that it is not [34], but only minor modifications may be needed for modeling homeostasis (i.e., dynamic equilibrium). The models for disease or therapeutic effects are mostly unknown. Chronic effects may be directional diffusion or slow convection, while acute effects are likely to generate trajectories such as those of liver injury [31]. The mathematics of statistical physics [17] is likely to be needed.

It seems clear from the examples presented here that the statistical estimation algorithms commonly used and available are not efficient in a Fisher information sense when autocorrelation is present. This has been handled in economic applications for equally spaced measurement times, but biology is not quite so regular, especially in clinical trials, even under the strictest protocols. The communication/information theory and decision theory presented here was only an introduction. Optimal information and decision algorithms need to be developed in the context of pathodynamics. Such algorithms may be synergistic with Fisher information optimization or may have some conflict. How the information will be used should determine the optimization approach.

In this chapter biomarkers have been defined as functions of parameters, as vectors of tests, and as signals. These are just aspects of the same mathematical object called a probability distribution function P(Y). The parameters are an integral part of the probability model, the vector Y represents the measurements that get combined in the model, and change in these measurements with time is the signal.
REFERENCES

1. Klotz IM, Rosenberg RM (1994). Chemical Thermodynamics: Basic Theory and Methods. Wiley, New York.
2. Kondepudi D, Prigogine I (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. Wiley, New York.
3. Keener J, Sneyd J (1998). Mathematical Physiology. Springer-Verlag, New York.
4. Box GEP, Hunter WG, Hunter JS (1978). Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building. Wiley, New York.
5. Woolson RF, Clarke WR (2002). Statistical Methods for the Analysis of Biomedical Data, 2nd ed. Wiley, Hoboken, NJ.
6. Mendenhall W, Sincich T (1995). Statistics for Engineering and the Sciences, 4th ed. Prentice Hall, Upper Saddle River, NJ.
7. Puri ML, Sen PK (1971). Nonparametric Methods in Multivariate Analysis. Wiley, New York.
8. Thompson PA (1972). Compressible-Fluid Dynamics. McGraw-Hill, New York.
9. Hawking SW (1988). A Brief History of Time: From the Big Bang to Black Holes. Bantam Books, New York.
10. Prigogine I (1996). The End of Certainty: Time, Chaos, and the New Laws of Nature. Free Press, New York.
11. Frieden BR (1998). Physics from Fisher Information. Cambridge University Press, Cambridge, UK.
12. Trost DC (2008). A method for constructing and estimating the RR-memory of the QT-interval and its inclusion in a multivariate biomarker for torsades de pointes risk. J Biopharm Stat, 18(4):773–796.
13. Brown R (1828). A brief account of microscopic observations made in the months of June, July, and August, 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Philos Mag, 4:161–173.
14. Karatzas I, Shreve SE (1991). Brownian Motion and Stochastic Calculus, 2nd ed. Springer-Verlag, New York.
15. Øksendal B (1998). Stochastic Differential Equations: An Introduction with Applications, 5th ed. Springer-Verlag, Berlin.
16. Uhlenbeck GE, Ornstein LS (1930). On the theory of the Brownian motion. Phys Rev, 36:823–841.
17. Reichl LE (1998). A Modern Course in Statistical Physics, 2nd ed. Wiley, New York.
18. Trost DC (2006). Multivariate probability-based detection of drug-induced hepatic signals. Toxicol Rev, 25(1):37–54.
19. Hogg RV, Craig A, McKean JW (2004). Introduction to Mathematical Statistics, 6th ed. Prentice Hall, Upper Saddle River, NJ.
20. Bickel PJ, Doksum KA (1977). Mathematical Statistics: Basic Ideas and Selected Topics. Holden-Day, San Francisco.
21. Stuart A, Ord JK (1991). Kendall's Advanced Theory of Statistics, vol. 2, Classical Inference and Relationship, 5th ed. Oxford University Press, New York.
22. Reza FM (1994). An Introduction to Information Theory. Dover, Mineola, NY.
23. Kullback S (1968). Information Theory and Statistics. Dover, Mineola, NY.
24. Williams SA, Slavin DE, Wagner JA, Webster CJ (2006). A cost-effectiveness approach to the qualification and acceptance of biomarkers. Nat Rev Drug Discov, 5:897–902.
25. Wald A (1971). Statistical Decision Functions. Chelsea Publishing, New York.
26. Blackwell DA, Girshick MA (1979). Theory of Games and Statistical Decisions. Dover, Mineola, NY.
27. Chernoff H, Moses LE (1987). Elementary Decision Theory. Dover, Mineola, NY.
28. Knuth DE (1981). The Art of Computer Programming, vol. 2, Seminumerical Algorithms, 2nd ed. Addison-Wesley, Reading, MA.
29. Trost DC (2007). An introduction to pathodynamics from the view of homeostasis and beyond. Presented at the Sixth International Congress on Industrial and Applied Mathematics, Zürich, Switzerland, July 16–20.
30. Rosenkranz GK (2009). Modeling laboratory data from clinical trials. Comput Stat Data An, 53(3):812–819.
31. Trost DC, Freston JW (2008). Vector analysis to detect hepatotoxicity signals in drug development. Drug Inf J, 42(1):27–34.
32. R Development Core Team (2007). R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org.
33. Burtis CA, Ashwood ER (eds.) (1999). Tietz Textbook of Clinical Chemistry, 3rd ed. W.B. Saunders, Philadelphia.
34. Trost DC, Overman EA, Ostroff JH, Xiong W, March PD (in press). A model for liver homeostasis using a modified mean-reverting Ornstein–Uhlenbeck process.
37

OPTIMIZING THE USE OF BIOMARKERS FOR DRUG DEVELOPMENT: A CLINICIAN'S PERSPECTIVE

Alberto Gimona, M.D.
Merck Serono International, Geneva, Switzerland
INTRODUCTION

Drug development is currently facing many challenges, from the ever-increasing costs of developing drugs to the declining number of new drug approvals. Bringing a drug to market requires on average 12 years and $1 billion. Approximately 53% of compounds entering phase II fail, resulting in amortized costs of approximately $0.8 billion per registered drug (DiMasi et al., 2003). The majority of these costs are due to failures encountered during drug development; this represents approximately 75% of the costs for registration (Figure 1). According to the Pharmaceutical Research and Manufacturers of America (PhRMA), the U.S. biopharmaceutical industry spends $49.3 billion per year on drug research and development. According to the 2004 estimate of the U.S. Food and Drug Administration (FDA), only 8% of drugs entering clinical trials had a legitimate chance of reaching the market.

The reasons for drug development failure vary: from 1990 to 1999, the most relevant reason for drug development failure was related to pharmacokinetic (PK) and bioavailability issues,
[Figure 1 Components of the overall costs for one approved drug. Bar chart of cumulative costs ($M) across the development pipeline (basic research, target ID, target validation, screening, optimization, preclinical development, phase I, phase II, phase III, regulatory), showing cumulative total costs of $880M, of which cumulative attrition costs are $655M.]
The reasons for drug development failure vary. From 1990 to 1999, the most relevant reason for failure was related to pharmacokinetic (PK) and bioavailability issues, accounting for approximately 40% of failures; from 2000 to 2008, the leading causes of failure were related to efficacy (approximately 30%), while PK and bioavailability failures diminished to approximately 10% (Figure 2). As shown in Figure 1, late-stage failures tend to be extremely expensive.

It is now well established that tools such as biomarkers are able to predict the efficacy of a compound at an early stage of development. This can drastically increase the efficiency of the development process, resulting in increased productivity. These lessons demonstrate the need to decrease both the time required for drug approval and the rate of late-stage failures. Biomarkers, including imaging biomarkers, can address these needs. In early drug development, biomarkers can be instrumental in improving the decision-making process, introducing the concept of "fail early, fail fast" and thus allowing drug companies to concentrate their resources on the most promising drug candidates. In addition, a biomarker strategy applied during late-stage development can allow a drug candidate to achieve regulatory approval much earlier in cases where resources are limited.

Fundamental differences exist depending on whether the biomarker strategy is applied to early- or late-stage drug development; implementing a biomarker strategy during early development may represent a risk that a company can more readily absorb. Indeed, biomarkers at the early stage are validated through the various phases of drug development and would be used solely to decide whether or not to proceed with the drug candidate in question. Conversely, a biomarker that is validated during the late phase of drug development could be used for drug approval purposes.
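To make the economics of attrition and the "fail early, fail fast" argument concrete, the sketch below computes the expected cost per approved drug under a simple stage-by-stage attrition model. All stage costs and pass rates are illustrative assumptions (only the roughly 53% phase II failure rate echoes a figure quoted above); none of these numbers come from DiMasi et al. (2003).

```python
# Illustrative attrition model: expected R&D cost per approved drug.
# Stage costs (in $M per candidate) and pass probabilities are assumed
# round numbers, chosen only to demonstrate the arithmetic.

def expected_cost_per_approval(stages):
    """Expected spend per starting candidate divided by P(approval)."""
    p_reach = 1.0         # probability a candidate reaches the current stage
    expected_spend = 0.0  # expected cost incurred per starting candidate
    for _name, cost, p_pass in stages:
        expected_spend += p_reach * cost
        p_reach *= p_pass
    return expected_spend / p_reach  # p_reach is now P(approval)

baseline = [
    ("preclinical", 5.0, 0.40),
    ("phase I",    15.0, 0.60),
    ("phase II",   40.0, 0.47),  # ~53% of phase II entrants fail
    ("phase III", 150.0, 0.60),
    ("regulatory", 40.0, 0.90),
]

# Hypothetical biomarker screen added in phase I: it stops about half of
# the candidates that would otherwise fail in phase II (26.5% of phase I
# passers), so fewer candidates pay phase II costs while the overall
# probability of approval stays essentially unchanged (0.735 * 0.64 = 0.47).
with_biomarker = [
    ("preclinical", 5.0, 0.40),
    ("phase I",    17.0, 0.60 * 0.735),  # screen cost added; more attrition here
    ("phase II",   40.0, 0.64),          # survivors fail less often
    ("phase III", 150.0, 0.60),
    ("regulatory", 40.0, 0.90),
]

print(f"baseline:       ${expected_cost_per_approval(baseline):.0f}M per approval")
print(f"with biomarker: ${expected_cost_per_approval(with_biomarker):.0f}M per approval")
```

Even this toy version shows the mechanism: money not spent carrying doomed candidates into the expensive late phases lowers the cost per approval.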
[Figure 2 Comparison of the reasons for drug failure in 1991 vs. 2000. Bar chart of the percentage of NCE projects failing (0% to 50%) by reason for attrition during clinical development: clinical safety, efficacy, PK/bioavailability, formulation, commercial, toxicology, cost of goods, and unknown/other.]
DEFINITION AND CLASSIFICATION OF BIOMARKERS

In 2001, the Biomarkers Definitions Working Group, sponsored by the National Institutes of Health, defined a biomarker as "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention"; a clinical endpoint as "a characteristic or variable that reflects how a patient feels or functions, or how long a patient survives"; and a surrogate endpoint as "a biomarker that is intended to substitute for a clinical endpoint."

There are important differences among biomarkers. Type 0 biomarkers are natural history markers of a disease and tend to correlate longitudinally with known clinical references, such as symptoms. Type I biomarkers capture the effect of an intervention in accordance with the mechanism of action of the drug, even though the mechanism might not be known to be associated with the clinical outcome. Type II biomarkers are considered surrogate markers, since a change in such a marker predicts a clinical benefit (Frank and Hargreaves, 2003).

A hierarchy also exists among clinical endpoints. A clinical endpoint can be defined as an intermediate endpoint, which is a clinical endpoint that is not
the ultimate outcome but is nonetheless of real clinical benefit; the ultimate outcome is a clinical endpoint such as survival, onset of serious morbidity, or symptomatic response that captures the benefits and risks of an intervention (Lesko and Atkinson, 2001).

Classification can also be made according to the specificity of a biomarker with respect to the intended therapeutic response. A linked drug biomarker demonstrates a strict correlation between the pharmacological action of the drug and its effect on the disease. Danhof et al. (2005) have proposed classifying biomarkers in three distinct categories: (1) pharmacological and (2) toxicological markers, both observed in healthy subjects, and (3) pathological biomarkers, observed in subjects affected by disease.

It is worthwhile to review the definition of biomarkers from the 2003 FDA "Guidance for Industry: Exposure–Response Relationships":

• Biomarkers are considered valid surrogates for clinical benefit (e.g., blood pressure, cholesterol, viral load).
• Biomarkers are thought to reflect the pathologic process and at least be candidate surrogates (e.g., brain appearance in Alzheimer's disease, brain infarct size, various radiographic/isotopic function tests).
• Biomarkers reflect drug action but are of uncertain relation to a clinical outcome (e.g., inhibition of ADP-dependent platelet aggregation, ACE inhibition).
• Biomarkers may be more remote from the clinical benefit endpoint (e.g., degree of binding to a receptor or inhibition of an agonist).
Classification Based on Mechanism of Action

The COST B15 working group 2, "Markers of Pharmacological and Toxicological Action," has proposed a conceptually similar classification. Based on the location of the biomarker in the chain of events from underlying subject genotype or phenotype to clinical scales, the following types of biomarkers have been defined:

Type 0: genotype or phenotype
Type 1: concentration
Type 2: target occupancy
Type 3: target activation
Type 4: physiologic measures or laboratory tests
Type 5: disease processes
Type 6: clinical scales
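Where biomarker metadata are handled programmatically, a type hierarchy like this maps naturally onto an ordered enumeration. A minimal sketch in Python follows; the class and member names are illustrative assumptions, not part of the COST B15 proposal.

```python
from enum import IntEnum

class BiomarkerType(IntEnum):
    """COST B15-style biomarker types, ordered along the causal chain
    from underlying genotype/phenotype to clinical scales."""
    GENOTYPE_OR_PHENOTYPE = 0
    CONCENTRATION = 1
    TARGET_OCCUPANCY = 2
    TARGET_ACTIVATION = 3
    PHYSIOLOGIC_OR_LAB_MEASURE = 4
    DISEASE_PROCESS = 5
    CLINICAL_SCALE = 6

# Because the ordering is meaningful, types can be compared directly:
assert BiomarkerType.TARGET_OCCUPANCY < BiomarkerType.CLINICAL_SCALE
```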
This classification is not universally accepted since the type 0 biomarker relating to a subject’s genotype or phenotype can be considered a covariate
rather than a biomarker. Similarly, the type 6 biomarker, such as a clinical scale, can be regarded as a measurement of a clinical endpoint and not a biomarker. These classifications have been proposed in an attempt to reconcile disagreements on the potential role of biomarkers.
Classification Based on Clinical Applications

The paradigm is now shifting from the classical model of clinical care to the development and application of biomarkers in different therapeutic areas according to their clinical application, which can be classified as follows:

• Preventive biomarkers, which identify people at high risk of developing disease
• Diagnostic biomarkers, which identify a disease at the earliest stage, before clinical symptoms occur
• Prognostic biomarkers, which stratify the risk of disease progression in patients undergoing specific therapy
• Predictive biomarkers, which identify patients who respond to specific therapeutic interventions
• Therapeutic biomarkers, which provide a quantifiable measure of response in patients who undergo treatment
Classification According to Measurement Scale

From a mathematical perspective, biomarkers can also be classified on the basis of the measurement scale utilized:

• Graded response, which is a quantifiable biomarker that is causally linked to drug treatment and temporally related to drug exposure (e.g., blood pressure, cholesterol). Usually such endpoints are chosen based on the pharmacodynamic response.
• Challenge response, which is a quantifiable graded response to a standardized exogenous challenge, modified by the administration of the drug (e.g., a challenge test in asthma). Usually, these markers are based on the mechanism of action (MoA) of the drug, and the response is a continuous variable.

Other types of responses can be observed with biomarkers:

• Categorical response, which is usually a "Yes" or "No" response for a clinically relevant outcome based on disease progression, regardless of MoA (e.g., response based on tumor size, incidence of an adverse event); such an event is generally not linked to the MoA of the drug.
• Time to event response, which is usually a clinically relevant outcome regardless of the MoA, such as survival time or time to relapse. It is a censored continuous clinical variable that can be measured only once for each patient.
• Event frequency/rate of response, which is the frequency of clinical events related to drug exposure (e.g., MRI lesions in multiple sclerosis); it is usually a censored continuous variable.

DEVELOPMENT OF BIOMARKERS

The development of biomarkers can be divided, somewhat artificially, into two steps: (1) evaluation/qualification of the candidate biomarker and (2) validation of the biomarker to become a surrogate endpoint.

Evaluation and Qualification of Biomarkers

Many disease biomarkers are well characterized and are used extensively in drug development. However, there is frequently a need to develop new biomarkers, especially in new therapeutic areas and/or when dealing with innovative therapeutic approaches. Development of new biomarkers should start at the preclinical stage, with the intent of having a new biomarker available when the lead candidate enters the human development stage. The objective of biomarker development should be clearly defined, such as the need for markers of disease progression, of the pharmacological effect of the drug, or of therapeutic activity.

In the evaluation phase, the candidate biomarker should be measured against the following attributes that define a biomarker (Lesko and Atkinson, 2001):

• Clinical relevance, which may theoretically reflect a physiologic or pathologic process or activity over a relatively short period of time. Ideally, this effect should be related to the MoA of the drug and to the clinical endpoint. This obviously requires an understanding of the pathophysiology of the disease and of the drug's mechanism of action, taking into consideration the fact that diseases frequently have multiple causal pathways.
• Sensitivity and specificity to treatment effects, defined as the ability to detect the intended measurement or change in the target patient population (see the sketch following this list).
• Reliability, defined as the ability to measure the biomarker analytically with accuracy, precision, robustness, and reproducibility.
• Practicality, defined as noninvasiveness or only modest invasiveness.
• Simplicity, allowing routine utilization without the need for sophisticated equipment or operator skill, extensive time commitment, or high measurement cost.
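The sensitivity and specificity attributes above have standard quantitative counterparts. A minimal sketch of the usual confusion-matrix calculations follows; the function names and example counts are illustrative assumptions, not data from the chapter.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true changes the biomarker detects: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Fraction of non-changes correctly ruled out: TN / (TN + FP)."""
    return tn / (tn + fp)

# Example: biomarker readings on 100 subjects with known reference outcomes.
tp, fn, tn, fp = 40, 10, 45, 5
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.80
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.90
```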
Validation of Biomarkers

The validation of a biomarker is a work in progress that ends when the biomarker is validated as a surrogate endpoint. During the development of a biomarker, aside from the characteristics mentioned above, the investigator should take into account the risk of a false positive or false negative result, which occurs when the value of a specific biomarker does not reflect a positive change in the clinical endpoint. During the validation process, the assay that is used must be highly reliable and reproducible. As for demonstrating the predictive value of the candidate as a surrogate endpoint for the clinical outcome, regulatory guidance does not specify which methodology should be used to validate biomarkers as surrogate endpoints.

It is well recognized that developing a single biomarker into a surrogate endpoint can become rather cumbersome for a pharmaceutical sponsor (Lesko, 2007). To complicate matters further, a biomarker may become a surrogate endpoint for efficacy but not for toxicity. Indeed, there are few biomarkers of toxic effects, such as QTc prolongation predicting torsades de pointes or increases in aminotransferases predicting liver failure. Biomarkers may also be misleading where a treatment produces a short-term beneficial effect but a long-term deleterious effect. As a consequence, the benefit/risk ratio can rarely be evaluated based on a surrogate marker alone; hence biomarkers are used as surrogate endpoints mainly in areas with critical unmet medical needs.

In the process of biomarker validation, the following properties should be evaluated: (1) the feasibility of the surrogate marker in predicting the clinical outcome, and (2) the statistical relationship between the biomarker and the clinical outcome. This should first be demonstrated from the natural history of the disease, then by adequate and well-controlled clinical trials that estimate the clinical benefit obtained by changing the specific surrogate endpoint. It should be noted that during the biomarker validation process it is insufficient to show only that the biomarker correlates with the clinical endpoint; it is also necessary to demonstrate that the treatment effect on the surrogate endpoint captures the treatment effect on the clinical endpoint. In rare cases, a biomarker is elevated to the status of surrogate endpoint based solely on the results obtained with one drug. A meta-analysis of multiple clinical trials with different drugs and different stages of disease may be required to determine the consistency of effects and strengthen the evidence that a change in the biomarker level resulted in an effect on the clinical outcome.
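The requirement just stated, that the surrogate must capture the treatment effect rather than merely correlate with the outcome, has a standard statistical formalization known as the Prentice criterion. It is not cited in this chapter, so the notation below is supplied purely for illustration. Writing \( T \) for the true clinical endpoint, \( S \) for the surrogate, and \( Z \) for treatment assignment, a valid surrogate satisfies

\[ f(T \mid S, Z) = f(T \mid S), \]

that is, once the surrogate is known, treatment carries no further information about the clinical endpoint. In addition, the treatment must affect both the surrogate and the endpoint, and the surrogate must be prognostic for the endpoint:

\[ f(S \mid Z) \neq f(S), \qquad f(T \mid Z) \neq f(T), \qquad f(T \mid S) \neq f(T). \]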
With the FDA Modernization Act of 1997, the FDA gained a legal basis for using surrogate endpoints in both ordinary and accelerated drug approvals. Indeed, the FDA was given explicit authority to approve drugs for the "treatment of a serious or life-threatening condition … upon a determination that a product has an effect on a clinical endpoint or on a surrogate endpoint that is reasonably likely to predict clinical benefit," leading to market access of new drugs and drug products.

The standards for linking a biomarker to a clinical outcome are higher for ordinary approvals than for accelerated approvals. This difference is based on consideration of many factors, including the degree of scientific evidence needed to support biomarker surrogacy, public health needs, the relative risk/benefit ratio, and the availability of alternative treatments. For ordinary approvals there are relatively few approved surrogate endpoints, such as "lower cholesterol and triglycerides" for coronary artery disease, "lower arterial blood pressure" for stroke, heart attacks, and heart failure, "increase cardiac output" for acute heart failure, "reduce HIV-RNA load and enhance CD4+ cells" for AIDS, "lower glycosylated hemoglobin" for diabetes, and "reduced tumor size" in solid tumors.

Oncology is an interesting example of this practice. In oncology, survival is the ultimate clinical outcome. However, approvals in the United States in the field of oncology from 1990 to 2002 highlight that tumor response was the approval basis in 26 of 57 regular approvals, supported by relief of tumor-specific symptoms in 9 of these 26 regular approvals (Table 1). Relief of tumor-specific symptoms provided critical support for approval in 13 of 57 regular approvals; approvals were based on tumor response in 12 of 14 accelerated approvals.

TABLE 1 Summary of Endpoints for Regular Approval of Oncology Drug Marketing Applications, January 1, 1990 to November 1, 2002

Parameter(a)                                      Endpoint
Total                                                   57
Survival                                                18
RR                                                      26
  RR alone                                              10
  RR + decreased tumor-specific symptoms                 9
  RR + TTP                                               7
Decreased tumor-specific symptoms                        4
DFS                                                      2
TTP                                                      1
Recurrence of malignant pleural effusion                 2
Occurrence of breast cancer                              2
Decreased impairment of creatinine clearance             1
Decreased xerostomia                                     1

Source: Johnson et al. (2003).
(a) RR, response rate; TTP, time to progression; DFS, disease-free survival.

In Europe, regulatory awareness is increasing and some initiatives are ongoing, such as the EMEA/CHMP Biomarkers Workshop, which was held in 2006. However, while biomarker development is encouraged during early-stage development, there is significant hesitancy in accepting biomarkers as surrogate endpoints for drug approval. As an example, a review of the
European guidelines reveals that, outside the oncology and musculoskeletal fields, biomarkers in general, and imaging biomarkers in particular, are not considered surrogate endpoints. Imaging biomarker endpoints accepted as primary endpoints for regulatory submissions, and those suggested as endpoints in early development, are listed in Table 2.

Overall, it is very complex to validate a biomarker so that it becomes a surrogate endpoint, but much of the value of developing a biomarker resides in the information that can be obtained during such development. Among these benefits are the possibility of defining the population who may benefit from the drug candidate, the screening of patients for adverse events, the possibility of enriching the population for proof-of-concept studies, the possibility of stratifying patients, the selection of doses for pivotal trials, and the potential for dose adjustments at the patient level.
TABLE 2 Imaging Biomarkers: Review of Accepted Primary or Secondary Endpoints in the CHMP Guidelines

Condition: Imaging accepted as a primary endpoint | Imaging accepted as a secondary endpoint or early in development
Fungal infections: Relevant imaging is part of the clinical outcome | —
Cancer: — | Imaging and functional imaging
Osteoporosis: X-ray | DEXA for BMD
Juvenile idiopathic arthritis: X-ray | —
Psoriatic arthritis: X-ray | —
Osteoarthritis: X-ray | MRI and ultrasound (US)
Ankylosing spondylitis: — | X-ray/MRI/DEXA/US
Crohn disease: — | Endoscopy
Prophylaxis of thromboembolic disease: Ultrasound (detection of DVT) and venography | —
Peripheral arterial obstructive disease: — | Hemodynamic measurements
Ischemic stroke: — | Neuroimaging techniques
Treatment of venous thrombotic disease: Ultrasound (detection of DVT) and venography; angiography for pulmonary embolism | —
Incontinence: — | Urodynamic studies or x-ray videography
Anxiety: — | Functional neuroimaging
Multiple sclerosis: — | MRI
Panic disorders: — | Neuroimaging
Acute stroke: — | MRI
REASONS FOR FAILURE OF A BIOMARKER

There are several examples of biomarker failure. One of the most recent is gefitinib in non-small cell lung cancer (NSCLC): gefitinib was originally approved based on tumor response rather than overall survival, yet in a postmarketing survival trial that included approximately 1700 patients, there was no benefit over placebo on overall survival. Other examples include bone mineral density (BMD) in osteoporosis for fluoride treatment. However, the Cardiac Arrhythmia Suppression Trial (CAST) provides the best-known example of the failure of a biomarker. This study was based on the hypothesis, supported by statistical association and a plausible biological mechanism, that suppression of arrhythmias would prevent sudden death after myocardial infarction. Instead, the study demonstrated worse mortality for patients receiving active treatment than for those receiving placebo.

The theoretical background for biomarker failure is given by Frank and Hargreaves (2003). The reasons biomarkers can lead to erroneous conclusions have been divided into the following five categories (Figure 3):
[Figure 3 Reasons why biomarkers have failed to become surrogate endpoints. Five panels (A to E), each diagramming the relationships among disease, surrogate endpoint, intervention, and true clinical outcome; in panel E, the true clinical outcome cannot be measured at that stage of disease or does not discern a unique treatment benefit.]
1. Changes in the biomarker reflect the effect of treatment, but these changes are irrelevant to the pathophysiology of the disease indicated (false positive).
2. Changes in the biomarker reflect an effect of treatment on an element of the pathophysiology, but this element is clinically irrelevant (false positive).
3. Changes in the biomarker reflect clinically relevant changes in pathophysiology but do not capture the mechanistic effect of the treatment (false negative).
4. Changes in the biomarker reflect one effect of the treatment, but there are other, more relevant effects on outcome that are not captured (false negative or positive).
5. The biomarker may not correlate well with classical clinical assessors, because the biomarker is more sensitive or because the classical assessor is irrelevant to a subset of the patient population, a novel mechanism, or a new indication.

It is important to consider this theoretical framework while developing a biomarker or using biomarkers in decision making. Using more than one biomarker and reviewing the consistency of the data across biomarkers may decrease the risk of reaching an incorrect conclusion. The classical example comes from the osteoporosis field, where BMD alone may provide misleading results, while BMD in combination with other biomarkers of bone metabolism (such as osteocalcin and collagen cross-links) provides a much more robust basis for decision making.
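The value of requiring consistency across biomarkers can be illustrated with a toy calculation; the error rates here are assumed, and the independence assumption rarely holds exactly for real biomarkers. If two biomarkers each give a false positive with probability \( \alpha_1 = \alpha_2 = 0.10 \) and their errors are independent, then requiring both to agree before acting gives

\[ P(\text{both falsely positive}) = \alpha_1 \alpha_2 = 0.10 \times 0.10 = 0.01, \]

a tenfold reduction in the joint false-positive rate. The price is sensitivity, since a true signal must now register on both markers (as with BMD combined with osteocalcin and collagen cross-links above).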
IMAGING AS A BIOMARKER TO OPTIMIZE DRUG DEVELOPMENT

Imaging (e.g., x-ray) has been used in clinical practice for over a century, but mainly for diagnostic purposes; the value of imaging as a biomarker has been recognized only recently. In clinical development, the use of imaging must take into consideration the scope of clinical studies, which are usually multicenter and multinational (at least in confirmatory development), and constraints such as cost, effort, and resources, as well as the need to standardize techniques across centers to maximize the signal-to-noise ratio.

The most common techniques used in clinical development include x-ray imaging, digitalized imaging [including DEXA (dual-energy x-ray absorptiometry)], computed tomography (CT) scans, nuclear imaging such as positron-emission tomography (PET) and single-photon-emission CT (SPECT), ultrasound, magnetic resonance imaging (MRI), MR spectroscopy, and functional MR. Some of these tools are available only in specialized centers and are therefore suited mainly to small studies during early development.
Imaging biomarkers can be used, for example, for the assessment of bioactivity (not only through changes in anatomical shape but also through changes in functional status), for the evaluation of the disposition of drugs, for the measurement of tissue concentrations of drugs, to characterize the number of receptors, the binding efficiency, and the receptor occupancy, and as prognostic indicators as well as for the assessment of molecular specificity.

IMAGING AS A MARKER OF BIOLOGICAL ACTIVITY

Imaging techniques are used to evaluate the biological activity of a drug candidate by performing pre- and posttreatment measurements (and, in many instances, measurements during treatment). These include:

• Oncology: CT scan or MRI for the measurement of the size of solid tumors
• Neurology: MRI for the measurement of multiple sclerosis lesions and for the evaluation and quantification of brain atrophy in Alzheimer disease (e.g., brain boundary shift integral or ROI-based MRI)
• Musculoskeletal diseases: x-ray and DEXA to evaluate vertebral fractures and BMD in osteoporosis; x-ray and MRI for the evaluation of erosions and joint space in rheumatoid arthritis and psoriatic arthritis; x-ray and MRI for the evaluation of joint space and cartilage volume in osteoarthritis; and the same techniques to evaluate spinal changes in spondyloarthritis

In addition to the use of imaging to capture anatomical changes, imaging can be used to perform a functional evaluation of a tissue or organ before and after treatment. Classical examples include:

• Oncology: the evaluation of tumor metabolism with fluorodeoxyglucose (FDG)-PET, which may be helpful as an early predictor of a later anatomical response. Indeed, the reduction in tumor size with imatinib was preceded by decreased tumor glucose uptake by a median of 7 weeks (Stroobants et al., 2003). Another example is the measurement of tumor blood flow and blood volume with CT: after bevacizumab (anti-VEGF antibody) treatment, decreased blood flow and volume in colorectal cancer were observed as early as 12 days after initiation of treatment (Miller et al., 2005).
• Neurology: FDG-PET has been used in Alzheimer disease (AD) to evaluate the regional cerebral metabolic rate, and SPECT has been used to evaluate blood flow in AD. In multiple sclerosis, PET and SPECT have shown a reduction in cerebral metabolism and blood flow. It should be noted, however, that these approaches
have not been tested and qualified to measure the potential effect of a therapeutic intervention (Bakshi et al., 2005).
IMAGING TO EVALUATE DISPOSITION OF DRUGS

An interesting use of imaging is to estimate the concentration of a drug in different tissues or organs, or to evaluate the disposition of a drug into otherwise inaccessible compartments. An example of the first use is the antifungal drug fluconazole. Using PET imaging and [18F]fluconazole, an in vivo pharmacokinetic profile was characterized: concentrations were measured over time in multiple organs, such as the brain, muscle, heart, and bowel. Based on these data, it was possible to test fluconazole only for infections in organs in which concentrations adequate to exert an antifungal effect were achieved (Pien et al., 2005).

Imaging can also be used to characterize the number of receptors, the binding efficiency, and the receptor occupancy. An interesting example is aprepitant, an NK1 receptor antagonist used for the prevention of nausea and vomiting with chemotherapy. Using an 18F ligand with high affinity and specificity for the NK1 receptor, PET was used to image the displacement of this ligand by systemically administered aprepitant, thereby demonstrating the ability of aprepitant to cross the blood–brain barrier.
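Receptor-occupancy studies of the kind just described are typically quantified from the change in a PET binding measure between a baseline scan and a post-dose scan. A standard formulation, supplied here for illustration since the chapter itself does not give the formula, uses the nondisplaceable binding potential \( BP_{ND} \):

\[ \text{Occupancy} = \frac{BP_{ND}^{\text{baseline}} - BP_{ND}^{\text{drug}}}{BP_{ND}^{\text{baseline}}} \times 100\%. \]

A drug such as aprepitant that displaces the radioligand lowers \( BP_{ND} \), and the fractional reduction estimates the proportion of receptors occupied.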
Imaging Biomarkers as Prognostic Indicators and Assessment of Molecular Specificity

As mentioned above, FDG-PET can be used as an early indicator of bioactivity and can therefore help separate responders from nonresponders at earlier time points than anatomical imaging. The case of imatinib is instructive in this sense: FDG-PET identified the biological response a median of 7 weeks earlier than did CT scans. In addition, all patients with metabolic responses were clinical responders later during the trial (Stroobants et al., 2003). Similar examples are available with chemotherapy in breast cancer (Schelling et al., 2000), in NSCLC (Weber et al., 2003), and in gastroesophageal cancer (Weber et al., 2001).

Another interesting development is the use of imaging to assess molecular specificity. In oncology, MR spectroscopy can evaluate the molecular content of tissues and assist diagnosis; it is used extensively in brain, breast, and prostate cancers. In neurology, the brain β-amyloid content can be measured in AD patients using PET techniques (Sorensen, 2006).

DISCUSSION AND CONCLUSIONS

The need for the pharmaceutical industry to increase its productivity and, in particular, the need to concentrate limited resources on the most promising drug candidates have led to the current emphasis on the development and use of biomarkers. Since 2001 and as a result of the Biomarkers Definitions
Working Group, there is clarity on the definitions of biomarkers, surrogate markers, and clinical endpoints. Many classifications exist to guide the development of new biomarkers. The evaluation/qualification and, eventually, the validation of biomarkers is a long and complex process that needs to be supported by many stakeholders, such as the pharmaceutical industry, regulatory authorities, and academic centers; none of these entities alone can succeed easily in such an endeavor. A few biomarkers have reached the status of surrogate endpoint and can be used for registration purposes, notably in the areas of cardiovascular disease, diabetes, rheumatology, and oncology. Imaging can be particularly useful in the early stages of development for the evaluation of bioactivity and drug disposition, which are among the many possibilities.

In the evaluation of biomarkers and in decision making based on biomarkers, investigators and scientists should always consider the possibility that the biomarker may fail to predict the clinical outcome, leading to potential false positives and false negatives. It is reassuring, however, that even when a biomarker fails, significant learning is achieved, which eventually will benefit not only the scientific community in general but the pharmaceutical industry as well.

REFERENCES

Bakshi R, Minagar A, Jaisani Z, Wolinsky JS (2005). Imaging of multiple sclerosis: role in neurotherapeutics. NeuroRx, 2:277–303.
Biomarkers Definitions Working Group (2001). Biomarkers and surrogate endpoints: preferred definitions and conceptual framework. Clin Pharmacol Ther, 69:89–95.
Danhof M, Alvan G, Dahl SG, Kuhlmann J, Paintaud G (2005). Mechanism-based pharmacokinetic–pharmacodynamic modeling: a new classification of biomarkers. Pharm Res, 22:1432–1437.
DiMasi JA, Hansen RW, Grabowski HG (2003). The price of innovation: new estimates of drug development costs. J Health Econ, 22:151–185.
European Medicines Agency (2006). Report on the EMEA/CHMP Biomarkers Workshop. EMEA/522496/2006.
FDA Center for Drug Evaluation and Research (2003). Guidance for Industry: Exposure–response relationships: study design, data analysis, and regulatory applications. http://www.fda.gov/cder/guidance/index.htm.
FDA (2004). The "Critical Path Initiative": innovation-stagnation: challenge and opportunity on the critical path to new medical products. http://www.fda.gov/oc/initiatives/criticalpath/whitepaper.html.
Frank R, Hargreaves R (2003). Clinical biomarkers in drug discovery and development. Nat Rev Drug Discov, 2:566–580.
Johnson JR, Williams G, Pazdur R (2003). End points and United States Food and Drug Administration approval of oncology drugs. J Clin Oncol, 21:1404–1411.
Lesko LJ, Atkinson AJ (2001). Use of biomarkers and surrogate endpoints in drug development and regulatory decision making: criteria, validation, strategies. Annu Rev Pharmacol Toxicol, 41:347–366.
Lesko LJ (2007). Paving the critical path: how can clinical pharmacology help achieve the vision? Clin Pharmacol Ther, 81:170–177.
Miller JC, Pien HH, Sahani D, Sorensen AG, Thrall JH (2005). Imaging angiogenesis: applications and potential for drug development. J Natl Cancer Inst, 97:172–187.
Pien HH, Fischman AJ, Thrall JH, Sorensen AG (2005). Using imaging biomarkers to accelerate drug development and clinical trials. Drug Discov Today, 10:259–266.
Schelling M, Avril N, Nährig J, et al. (2000). Positron emission tomography using 18F-fluorodeoxyglucose for monitoring primary chemotherapy in breast cancer. J Clin Oncol, 18:1689–1695.
Sorensen AG (2006). Magnetic resonance as a cancer imaging biomarker. J Clin Oncol, 24:3274–3281.
Stroobants S, Goeminne J, Seegers M, et al. (2003). 18FDG-positron emission tomography for early prediction of response in advanced soft tissue sarcoma treated with imatinib mesylate (Glivec). Eur J Cancer, 39:2012–2020.
Weber WA, Ott K, Becker K, et al. (2001). Prediction of response to preoperative chemotherapy in adenocarcinomas of the esophagogastric junction by metabolic imaging. J Clin Oncol, 19:3058–3065.
Weber WA, Petersen V, Schmidt B, et al. (2003). Positron emission tomography in non-small-cell lung cancer: prediction of response to chemotherapy by quantitative assessment of glucose use. J Clin Oncol, 21:2651–2657.
38

NANOTECHNOLOGY-BASED BIOMARKER DETECTION

Joshua Reineke, Ph.D.
Wayne State University, Detroit, Michigan
ADVANTAGES OF NANOTECHNOLOGY TO BIOMARKERS

Nanotechnology refers to the fabrication, manipulation, use, and study of phenomena of materials with at least one dimension in the nanoscale (