MATHEMATICS FOR INDUSTRY: CHALLENGES AND FRONTIERS
MATHEMATICS FOR INDUSTRY: CHALLENGES AND FRONTIERS
SIAM PROCEEDINGS SERIES LIST

Bermúdez, Alfredo, Gomez, Dolores, Hazard, Christophe, Joly, Patrick, and Roberts, Jean E., Fifth International Conference on Mathematical and Numerical Aspects of Wave Propagation (2000)

Kosaraju, S. Rao, Bellare, Mihir, Buchsbaum, Adam, Chazelle, Bernard, Graham, Fan Chung, Karp, Richard, Lovász, László, Motwani, Rajeev, Myrvold, Wendy, Pruhs, Kirk, Sinclair, Alistair, Spencer, Joel, Stein, Cliff, Tardos, Eva, Vempala, Santosh, Proceedings of the Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms (2001)

Koelbel, Charles and Meza, Juan, Proceedings of the Tenth SIAM Conference on Parallel Processing for Scientific Computing (2001)

Berry, Michael, Computational Information Retrieval (2001)

Estep, Donald and Tavener, Simon, Collected Lectures on the Preservation of Stability under Discretization (2002)

Achlioptas, Dimitris, Bender, Michael, Chakrabarti, Soumen, Charikar, Moses, Dey, Tamal, Erickson, Jeff, Graham, Ron, Griggs, Jerry, Kenyon, Claire, Krivelevich, Michael, Leonardi, Stefano, Matousek, Jiri, Mihail, Milena, Rajaraman, Rajmohan, Ravi, R., Sahinalp, Cenk, Seidel, Raimund, Vigoda, Eric, Woeginger, Gerhard, and Zwick, Uri, Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms (2003)

Ladner, Richard E., Proceedings of the Fifth Workshop on Algorithm Engineering and Experiments (2003)

Barbara, Daniel and Kamath, Chandrika, Proceedings of the Third SIAM International Conference on Data Mining (2003)

Olshevsky, Vadim, Fast Algorithms for Structured Matrices: Theory and Applications (2003)

Munro, Ian, Albers, Susanne, Arge, Lars, Brodal, Gerth, Buchsbaum, Adam, Cowen, Lenore, Farach-Colton, Martin, Frieze, Alan, Goldberg, Andrew, Hershberger, John, Jerrum, Mark, Johnson, David, Kosaraju, Rao, Lopez-Ortiz, Alejandro, Mosca, Michele, Muthukrishnan, S., Rote, Gunter, Ruskey, Frank, Spinrad, Jeremy, Stein, Cliff, and Suri, Subhash, Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms (2004)

Arge, Lars, Italiano, Giuseppe F., and Sedgewick, Robert, Proceedings of the Sixth Workshop on Algorithm Engineering and Experiments and the First Workshop on Analytic Algorithmics and Combinatorics (2004)

Hill, James M. and Moore, Ross, Applied Mathematics Entering the 21st Century: Invited Talks from the ICIAM 2003 Congress (2004)

Berry, Michael W., Dayal, Umeshwar, Kamath, Chandrika, and Skillicorn, David, Proceedings of the Fourth SIAM International Conference on Data Mining (2004)

Azar, Yossi, Buchsbaum, Adam, Chazelle, Bernard, Cole, Richard, Fleischer, Lisa, Golin, Mordecai, Goodrich, Michael, Grossi, Roberto, Guha, Sudipto, Halldorsson, Magnus M., Indyk, Piotr, Italiano, Giuseppe F., Kaplan, Haim, Myrvold, Wendy, Pruhs, Kirk, Randall, Dana, Rao, Satish, Shepherd, Bruce, Torng, Eric, Vempala, Santosh, Venkatasubramanian, Suresh, Vu, Van, and Wormald, Nick, Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (2005)

Kargupta, Hillol, Srivastava, Jaideep, Kamath, Chandrika, and Goodman, Arnold, Proceedings of the Fifth SIAM International Conference on Data Mining (2005)

Demetrescu, Camil, Sedgewick, Robert, and Tamassia, Roberto, Proceedings of the Seventh Workshop on Algorithm Engineering and Experiments and the Second Workshop on Analytic Algorithmics and Combinatorics (2005)

Ferguson, David R. and Peters, Thomas J., Mathematics for Industry: Challenges and Frontiers. A Process View: Practice and Theory (2005)
MATHEMATICS FOR INDUSTRY: CHALLENGES AND FRONTIERS A PROCESS VIEW: PRACTICE AND THEORY
Edited by
David R. Ferguson The Boeing Company (retired) Seattle, Washington
Thomas J. Peters University of Connecticut Storrs, Connecticut
Society for Industrial and Applied Mathematics Philadelphia
MATHEMATICS FOR INDUSTRY: CHALLENGES AND FRONTIERS A PROCESS VIEW: PRACTICE AND THEORY
Proceedings of the SIAM Conference on Mathematics for Industry: Challenges and Frontiers, Toronto, Ontario, October 13-15, 2003. Copyright © 2005 by the Society for Industrial and Applied Mathematics. 10 9 8 7 6 5 4 3 2 1 All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688. Library of Congress Control Number: 2005931140 ISBN 0-89871-598-9
SIAM is a registered trademark.
CONTENTS

Introduction 1

Part I: Industrial Problems 3

Paradigm-Shifting Capabilities for Army Transformation, John A. Parmentola 5

Computational Simulation in Aerospace Design, Raymond R. Cosner and David R. Ferguson 24

LEAPS and Product Modeling, R. Ames 45

3D Modelling of Material Property Variation for Computer Aided Design and Manufacture, M. J. Pratt 58

Part II: Mathematical Responses to Industrial Problems 85

A Framework for Validation of Computer Models, James C. Cavendish 87

Numerical Investigation of the Validity of the Quasi-Static Approximation in the Modelling of Catalytic Converters, Brian J. McCartin and Paul D. Young 100

A Framework Linking Military Missions and Means, P. J. Tanenbaum and W. P. Yeakel 117

Computational Topology for Geometric Design and Molecular Design, Edward L. F. Moore and Thomas J. Peters 125

Discretize then Optimize, John T. Betts and Stephen L. Campbell 140

Transferring Analyses onto Different Meshes, D. A. Field 158

Bivariate Quadratic B-splines Used as Basis Functions for Collocation, Benjamin Dembart, Daniel Gonsor, and Marian Neamtu 178

Part III: The Process of Mathematical Modeling, Practice & Education 199

A Complex Systems Approach to Understanding the HIV/AIDS Epidemic, Carl P. Simon and James S. Koopman 200

A Combined Industrial/Academic Perspective on Fiber and Film Process Modeling, C. David Carlson, Jr. and Christopher L. Cox 222

Helpful Hints for Establishing Professional Science Master's Programs, Charles R. MacCluer and Leon H. Seitelman 242

Author Index 249
Introduction

This collection of papers is a novel publication for SIAM. It is, first and foremost, a forum for leading industrial and government scientists and engineers to describe their work and what they hope to accomplish over the next decade. The goal is to focus the mathematical community on research that will both advance mathematics and provide immediate benefits to industry. So, this is not a book on mathematics in the strictest sense. It is not organized around a single or even multiple mathematical disciplines. The papers contain few, if any, statements and proofs of theorems. Rather, they focus on describing the needs of industry and government and on highlighting mathematics that may play a role in providing solutions.

The papers were elicited primarily from the conference, Mathematics for Industry: Challenges and Frontiers, sponsored by SIAM in October 2003. That conference itself was different from other SIAM conferences in that it was a forum for scientific experts from industry and government to meet with members of the mathematical community to discuss the visions they had for their industries and the obstacles they saw to reaching those goals. The conference was the result of over a decade of conversations among academics and industrialists as to why SIAM was not having a greater impact on industry and why there weren't more industrial scientists participating in SIAM. There are probably many answers to those questions but one struck us as particularly relevant - the traditional mathematical format expected by SIAM audiences is not a comfortable means for communicating the engineering concerns of industry and government. Thus, it is difficult for the two communities to talk effectively with each other and to look to each other for help and solutions.
It occurred to us that a partial solution to this problem might be to hold a conference where invited representatives from industry and government, senior scientists, engineers, and managers, would be asked to talk simply about their industrial directions and perceived obstacles without the restriction that their talks fit a narrow mathematical model. It would then be left to the mathematicians to extract relevant issues where mathematical research could be helpful. In June 2001 a group of industrial and academic mathematicians met at a workshop at the Fields Institute in Toronto to consider organizing such a conference [1]. The rough outline of the conference emerged from the workshop along with another interesting idea. The workshop participants began to realize that there was another recurrent theme, namely that the traditional role of mathematicians as valuable, but narrowly focused, contributors to engineering projects was quickly being replaced by the notion that mathematics is itself a fundamental technology needed to advance many of the concepts of industrial interest. One particular example that came out of the workshop was that of relying on virtual prototyping methods and then the modeling and simulation technologies needed to validate virtual prototypes: an area where mathematicians naturally lead. This led to considering that a new mathematical discipline may be emerging whose research is centered on mathematics in industry and is based upon a broad synthesis of using existing mathematics to solve industrial problems while discovering further extensions and unifications of the mathematics that are essential to practice - thus enriching both the application of interest and the discipline of mathematics.

A corollary to this realization was that there is a need for a different kind of industrial-academic collaboration. In the past, industrialists often seemed to believe that the right mathematics for their problems existed in some academic closet and all they had to do was find it, while academics treated industrial problems only as a source for test cases and funding, with no intrinsic value for the development of mathematical theories and methods. In our minds a change was needed. For the benefit of both, industrialists needed to make a serious attempt to communicate to the mathematical community what they were trying to do, while the academic community needed to understand that the solution processes for industrial problems serve as a rich nurturing ground for new mathematics and demand close collaboration of industrialists and academics. All of this discussion on the relevance of SIAM to industry and the desire for a different kind of industrial-academic collaboration resulted first in the Mathematics for Industry: Challenges and Frontiers conference and, eventually, in this book.

[1] See http://www.fields.utoronto.ca/programs/cim/00-01/productdev/
The book itself has three parts: Industrial Problems, in which leading government and industry scientists and engineers discuss their industries and the problems they face; Mathematical Responses to Industrial Problems, where illustrative examples of mathematical responses to industrial problems are presented; and finally, The Process of Mathematical Modeling, Practice & Education, where the actual process of addressing important problems by mathematical methods is examined and illustrated and the role of education is discussed.

Summary and Acknowledgements
This book covers a broad range of mathematical problems that arise in industry. It contains some of the novel mathematical methods developed to attack the complexities of industrial problems. The intent is to bring both the beauty and sophistication of such mathematics to the attention of the broader mathematical community. This should simultaneously stimulate new intellectual directions in mathematics and accelerate timely technology transfer to industry. The editors thank all the authors for their contributions. The editors speak on behalf of all the members of the conference organizing committee in acknowledging, with appreciation, the financial support of SIAM.
Part I: Industrial Problems

In Part I, leading industrial and government scientists, engineers, and mathematicians present problems relevant to their industries. These are not articles boasting triumphant solutions. Rather, through these essays, the authors challenge the mathematical research community to help with some of the big issues facing industrial and government scientists. They present their problems in order to provide context for developing new mathematical theories and discovering novel applications for traditional mathematics. They leave it to the mathematical research community to figure out just what those theories and applications should be. The problems presented also provide a novel glimpse into how industrial and government practice actually works and the need to reach out to other disciplines for ideas and solutions. There are four articles in this section. The first, Paradigm-Shifting Capabilities for Army Transformation, by John Parmentola, Director for Research and Laboratory Management, U.S. Army, describes fundamental challenges facing the Army as it transforms into a lighter and more flexible force, the Future Combat System (FCS), all the while retaining needed military capabilities. The FCS is constructed around the Brigade Combat Team, envisioned to be a self-organizing, self-configuring, and self-healing network of over 3,000 platforms maintaining secure communications while moving. Among the challenges that FCS will need to meet are: reducing equipment weight, improving soldier protection, developing better armor, and providing secure and mobile command and control. Parmentola outlines how the military is looking to technologies such as High Performance Computing, miniaturization, new materials, nanotechnologies, and network science to help meet those challenges. He concludes with a call for Army partnerships with academia, industry, and U.S. allies to realize the needed advances and changes.
The next two articles focus on computer simulation and the verification and validation of virtual prototypes. In Issues in Relying upon Computational Simulation for Aerospace Design, Raymond Cosner and David Ferguson, both from Boeing, describe changes that intense competition is bringing to design in the aerospace industry. They argue that computational simulation will become an even more pervasive tool in the design of complex systems. But challenges to achieving the full potential of simulation remain, among which are issues of fidelity and accuracy, throughput and cost, and confidence in computed simulations. Using their experiences in computational fluid dynamics and geometry as representative technologies for computational simulation, they describe how things stand now in industry and identify some major barriers to advancement. Robert Ames of the Naval Surface Weapons Center, Carderock Division focuses on the entire ship design, manufacture, and assembly processes and the requirements for Smart Product Models for virtual prototyping and operational simulation to support testing, evaluation, and acquisition. His article, LEAPS and Product Modeling, examines the challenges of bridging modeling solutions with simulation federations while addressing validation, verification and analysis issues of large acquisition programs. In his paper he describes an integrated architecture to support such complete simulations and the geometry, topology, and application frameworks needed to implement it. The final article of Part I looks at manufacturing and focuses on the emerging technology of layered manufacturing. In 3D Modelling of Material Property Variation for Computer Aided Design and Manufacture, Michael Pratt of LMR Systems surveys the state of layered manufacturing and the challenges remaining. Layered manufacturing methods are ideal for interfacing directly with CAD design systems and can be used to build very complicated artifacts with intricate geometry and internal voids. However, there are a number of issues yet to be settled.
Paradigm-Shifting Capabilities for Army Transformation

John A. Parmentola*

Abstract

The tragic events of 9/11 and our nation's response through the war in Afghanistan and the current conflict in Iraq reflect major changes in our national security strategy. Our current adversary, global terrorism, has no boundaries. We no longer know where, when and how U.S. vital interests will be threatened. As a part of the Joint Team, the Army is undergoing Transformation to ensure that it is relevant and ready to defend U.S. interests anywhere on the globe. Through Transformation, the Army is focusing on advancing capabilities of the Current Force and planning for the exploitation of future technology trends to ensure that the Army continues to be the most relevant, ready and powerful land force on earth well into the future. This article will describe some of the ways the Army is meeting these future challenges through its strategic investments in research and technology for the Future Force. These investments are likely to result in paradigm-shifting capabilities which will save Soldiers' lives while ensuring that future conflicts are won quickly and decisively.
1  Transformation to a New Force
The U.S. Department of Defense (DoD) has embarked on an extraordinary process of change called Transformation, that is, the creation of a highly responsive, networked, joint force capable of making swift decisions at all levels and maintaining overwhelming superiority in any battlespace. In support of this process, the Army is developing the Future Combat System (FCS), a major element of its Future Force, which will be smaller, lighter, faster, more lethal, survivable and smarter than its predecessor. Transformation will require that the Army make significant reductions in the size and weight of major warfighting systems, while at the same time ensuring that U.S. troops have unmatched lethal force and survivability. It also means that the Army and other military services (as well as coalition forces) will be interdependent.

To better appreciate the process and nature of Transformation, it is important to understand the motivation behind the concept. From a historical perspective, the Army's Current Force was largely designed during a period when the U.S. had a single superpower adversary contained in a geographic battlespace that was thoroughly analyzed and well understood. That adversary's tactics, techniques and procedures were also well understood, and large-scale U.S. materiel and forces were pre-positioned. The Cold War was characterized by a fragile and dangerous detente between two competing superpowers, the U.S. and the former Soviet Union. This situation is in stark contrast to our current security environment, in which we are a nation at war against an elusive enemy called terrorism that can strike anywhere at any time through unanticipated means. We have experienced this emerging form of warfare through attacks by terrorists on our homeland and across the globe. In this era of threat uncertainty, we cannot be sure where, when and how our adversaries will threaten our citizens, our allies and U.S. interests. Consequently, our military forces must be more responsive and more deployable, and our Soldiers multi-skilled and able to transition quickly from direct ground combat action to stability and support operations. Unlike the threat-based approach of the Cold War, the Army is transforming to a capabilities-based, modular, flexible and rapidly employable force as part of a Joint Team, and our current warfighting strategy now reflects this new approach.

* Director for Research and Laboratory Management, U.S. Army.
Important Transformational characteristics of new information-age operations are speed of planning, speed of decision making, speed of physical movement, speed with which physical barriers are overcome, and effects-based precision fires, as demonstrated in the mountains of Afghanistan and the urban and desert scenarios of Iraq. Operation Enduring Freedom (OEF) in Afghanistan, initiated less than a month after the attacks of September 11, was carried out with great speed and precision. In Operation Iraqi Freedom (OIF), we went from direct ground combat action to stability and support operations in a matter of three weeks. A recent addition to the Current Force is the Stryker Brigade Combat Team, a smaller, lighter and faster combat system that embodies several of the desired Transformational warfighting capabilities of the Future Force (Figure 1).
2  Brigade Combat Teams
To meet the goals of Transformation, the Army is developing Brigade Combat Teams (BCTs). The FCS-equipped BCT comprises about 2550 Soldiers and FCS equipment that together amount to approximately 3300 platforms (Figure 2). The challenge is to ensure that when the 3300 platforms in the BCT hit the ground, all are networked and ready to fight. The key characteristics of the network are the ability to self-organize, self-configure, self-heal and provide assured and invulnerable communications while the BCT is moving, which means everything is moving together - command-and-control, information, logistics, etc. When links break, other platforms will be made available to restore them. All communications will be secure through the use of various techniques such as encryption, low probability of detection and low probability of intercept, as well as directional antennas, so as to limit the ability of the adversary to detect and intercept signals. Of utmost importance on the battlefield is the knowledge of where everyone is - where I am, where my buddy is, and where the enemy is. For the BCT, each of its 3300 platforms senses a portion of the battlespace and then transmits the most salient aspects of that portion back to a command-and-control vehicle that is assembling a common operating picture (COP) of the battlefield. As the BCT moves, it is constantly updating and reconfiguring the 3300 platforms to fill out this COP. The ultimate goal is a COP that minimizes latency so as to enable our Soldiers to execute complex operations with great speed and precision to devastate any adversary.

Figure 1. A Stryker Brigade Combat Team on patrol in Iraq. Source: U.S. Army Photo
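The self-healing, link-restoring behavior described above can be illustrated in miniature. The Python sketch below is purely hypothetical (the class, its methods, and the undirected-graph model are invented for exposition and correspond to no actual FCS software): it models platforms as nodes, detects a loss of reachability after a link drops, and restores connectivity through a spare relay platform.

```python
from collections import defaultdict, deque

class TacticalNetwork:
    """Toy model of a self-healing communications network."""

    def __init__(self):
        # Undirected adjacency: links[a] is the set of platforms a can reach directly.
        self.links = defaultdict(set)

    def add_link(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def drop_link(self, a, b):
        self.links[a].discard(b)
        self.links[b].discard(a)

    def reachable_from(self, start):
        # Breadth-first search over the current link set.
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for peer in self.links[node]:
                if peer not in seen:
                    seen.add(peer)
                    queue.append(peer)
        return seen

    def self_heal(self, a, b, relays):
        # If a and b lost contact, bridge them through the first spare relay;
        # returns the relay used, or None if healing was unnecessary or impossible.
        if b in self.reachable_from(a):
            return None
        for relay in relays:
            self.add_link(a, relay)
            self.add_link(relay, b)
            return relay
        return None
```

For example, after dropping the only path between a command vehicle and a platform, calling `self_heal` with a spare UAV as the relay makes the platform reachable again; a real network would of course choose relays by radio range, load, and security constraints rather than list order.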
3  Future Combat Systems (FCS)
FCS is a paradigm shift in land combat capability that will be as significant as the introduction of the tank and the helicopter. FCS comprises a family of advanced, networked air-based and ground-based maneuver, maneuver support, and sustainment elements that will include both manned and unmanned platforms.
Figure 2. Elements of the Future Combat Systems (FCS) Brigade Combat Team.

For rapid and efficient deployability using C-130s, the maximum weight of each ground platform will not exceed 20 tons. The FCS system-of-systems family currently consists of 18 vehicles that range from combat vehicles to command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) command centers and medical vehicles. In addition, there are unmanned aerial vehicles (UAVs) that include larger platforms for the more robust sensor and communication packages, as well as smaller UAVs that carry small payloads for local reconnaissance missions. FCS also includes remote launch systems capable of carrying rockets and mortars, a robotic mule to aid the Soldier with respect to sustainment and logistics, as well as numerous types of unattended ground sensors for unprecedented situational awareness.
4  Transformation Challenges
To meet the Army goals for "strategic responsiveness", that is, the ability to deploy a brigade combat team in 96 hours, a division in 120 hours, five divisions in 30 days, and to fight immediately upon arrival, the Army must overcome a number of technical challenges. These include: reducing the weight of soldier equipment while improving soldier protection; making lightweight combat systems survivable;
and ensuring that command-and-control centers are mobile and much more capable (Figure 3).
Figure 3. Army Transformation from the Current to the Future Force.
4.1  The Weight of Equipment
Today, soldiers must carry as much as 100 pounds of equipment, which has a dramatic effect on their agility and endurance (Figure 4). The Army goal is to reduce the effective fighting load to 40 pounds, while improving protection against threats from the enemy and the environment. As a first step, the Army is developing robotic "mules" that can follow soldiers into battle and carry a good part of the load.

4.2  Improved Soldier Protection
The Army is also pursuing novel ways to use nanomaterials to protect against ballistic projectiles and chemical and biological attacks and to enable the soldier ensemble to perform triage through active-control materials and diagnostic sensors. An immediate challenge is to protect against injuries to the extremities, the most prevalent injuries on the battlefield (Figure 5). The Army Research Laboratory, in collaboration with the Army Center of Excellence in Materials at the University of Delaware, has developed a new Kevlar-based garment by applying shear thickening liquids to the material. These substances are composed of nanoparticles of silica suspended in a liquid, such as polyethylene glycol. When a high-speed projectile impacts these liquids, the nanoparticles are compressed into a rigid mass that resists penetration. At slow speeds, the nanoparticles are able to move around the projectile, offering little or no resistance. The result is a garment with normal flexibility that is completely stab resistant. The garment is currently being assessed to determine its effectiveness with respect to other types of injuries to the extremities.

Figure 4. Impact of the Soldier's load on performance. Source: Dr. James Sampson, Natick Soldier Center

Recently, the Army Institute for Soldier Nanotechnology at the Massachusetts Institute of Technology (MIT) discovered a novel, active-control material, dubbed "exomuscle", that might be used as a prosthesis to help soldiers handle and lift heavy objects. Exomuscle might also be embedded in the soldier ensemble, along with physiological monitoring and diagnostic sensors. The soldier's uniform could then act as a tourniquet to limit blood loss or perform CPR, as needed on the battlefield.

4.3  Stronger, Lighter Weight Armor
Figure 5. In Operation Iraqi Freedom (OIF), the majority of injuries/casualties are due to lack of protection to the extremities.

Currently, the most advanced combat system is the Abrams tank, which weighs more than 70 tons and can be deployed only by C-5 aircraft (two per aircraft) using special runways, C-17 aircraft (one per aircraft), or ship and rail. The Abrams tank has a remarkable record of limiting casualties (only three in combat since its deployment nearly 20 years ago). To meet the new deployment goals, however, the Army must use C-130-like intratheater cargo aircraft to transport troops and equipment. Traditional approaches to survivability have relied heavily on armor, which has driven up the weight of ground combat systems. Because of the weight limits of FCS, the Army must develop a new survivability paradigm that relies on speed, agility, situational understanding, active protection systems, lighter weight armor, signature management, robotic systems, indirect precision fire, terrain masking, and various forms of deception rather than heavy armor. Realizing this new paradigm will require sophisticated research tools. For example, suppose for each of the 10 parameters listed above there are 10 points to explore. This means there are 10 billion points representing varying degrees of survivability. So where in this 10-dimensional volume are the acceptable levels of survivability for light combat systems in desert terrain, rugged terrain, urban terrain, and jungle terrain, taking into account the environmental conditions associated with them? Analyzing this complex 10-dimensional volume experimentally is both unaffordable and impractical. Therefore, we must rely on modeling and simulation. Fortunately, with focused research, emerging technological developments, and advances in high-performance computing (HPC), it is anticipated that the Army will be able to conduct trade-off analyses to help resolve this critical issue (Figure 6).

Figure 6. High performance computing trends in computation and affordability. Source: Dr. Rich Linderman, RADC

Armor will undoubtedly remain an important aspect of survivability, and many innovative approaches are under development, including advanced lighter weight composite armors and ceramic armor that can sustain greater loading for longer periods of time, thus increasing its ability to dissipate energy. At the Army Research Laboratory, scientists and engineers have modified, at the nanoscale, the surface roughness of the glass fibers that go into composite ceramic armor. By fine-tuning this roughness to better match the characteristics of the epoxy resin that holds these fibers in place, it has been possible to increase the frictional forces during debonding when a kinetic energy penetrator impacts the armor. This has enabled the armor to sustain greater loading for longer periods of time, hence increasing its ability to dissipate energy. These novel materials have enabled engineers to trade levels of protection for reductions in armor weight.
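The scale of the survivability trade space discussed above (10 parameters, 10 levels each) can be checked with a short computation. The Python sketch below is purely illustrative - the survivability scoring function is an invented stand-in for the physics-based simulations the article envisions - but it confirms the 10-billion-point count and shows how Monte Carlo sampling can estimate the acceptable fraction of designs without an exhaustive sweep.

```python
import random

# Ten design parameters (speed, agility, armor weight, ...), each
# discretized to ten levels: an exhaustive sweep needs 10**10 points.
NUM_PARAMS, LEVELS = 10, 10
TOTAL_POINTS = LEVELS ** NUM_PARAMS  # the "10 billion points" in the text

def survivability(design):
    # Arbitrary stand-in score in [0, 1]; a real study would run a
    # physics-based simulation for each candidate design here.
    return sum(level / (LEVELS - 1) for level in design) / NUM_PARAMS

def estimate_acceptable_fraction(threshold=0.7, samples=100_000, seed=42):
    # Monte Carlo estimate of the volume of the acceptable region,
    # visiting only `samples` of the 10 billion candidate designs.
    rng = random.Random(seed)
    hits = sum(
        survivability([rng.randrange(LEVELS) for _ in range(NUM_PARAMS)]) >= threshold
        for _ in range(samples)
    )
    return hits / samples

if __name__ == "__main__":
    print(f"Exhaustive sweep size: {TOTAL_POINTS:,}")
    print(f"Estimated acceptable fraction: {estimate_acceptable_fraction():.4f}")
```

The point of the sketch is the method, not the numbers: sampling (or smarter designs of experiments) trades an unaffordable exhaustive search for a statistically controlled estimate, which is exactly the role modeling, simulation, and HPC are expected to play.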
4.4  Mobile, More Capable Command-and-Control Centers
Another challenge is making command-and-control centers mobile and capable of maintaining the momentum of the fighting force. Currently, these centers are massive and relatively immobile, moving at less than 10 miles per hour - facilities on the scale of the air traffic control center at a major airport. One of DoD's top five goals is network-centric warfare, the central element in fully realizing Transformation in this century. The network must include individual soldiers on point, operations centers in the theater of operation, and the home station, which can be anywhere in the world. Communications and the network are the key foundational elements of FCS and the Future Force.
5  Trends in Science and Technology
Because the Army strategy for Transformation is strongly dependent on the continuous infusion of new technologies, trends in technology are continually monitored and assessed to determine their applicability to meeting the Army's needs. Certain trends are expected to persist well into this century. These trends include: time compression; miniaturization; and the understanding and control of increasingly complex systems.

5.1  Time Compression
Time compression involves the conveyance of information at the speed of light, and, more importantly, the ubiquitous availability of HPC that can process information very rapidly. Knowledge management, data processing, data interpretation, information routing, and link restoration for assured communications will be essential to situational awareness. Real-time, multisensor, data-fusion processing will be possible with embedded HPC capabilities. This technology will also be important for autonomous unmanned systems and reliable autonomous seekers for smart munitions. Advances in silicon-based HPC are likely to be overtaken by rapid developments in molecular electronics, and possibly DNA and quantum computing, with speeds that will make current supercomputers seem like ordinary pocket calculators. According to futurist and inventor Dr. Ray Kurzweil, we can expect a steady, exponential progress in computing power. At that rate of advance, we could have embedded HPC with remarkable speeds within the next decade. If Dr. Kurzweil is correct, computing ability will exceed the ability of all human brains on the planet by 2050 (Figure 7).
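Dr. Kurzweil's projection of "steady, exponential progress" can be made quantitative under an explicit assumption. The short Python sketch below is an illustration, not a claim about actual hardware roadmaps: it assumes a hypothetical 18-month performance-doubling period, in the spirit of Moore's law, and computes the resulting compound growth factor, which works out to roughly a hundredfold per decade.

```python
def growth_factor(years, doubling_period_years=1.5):
    # Compound doubling: performance multiplies by 2 once per doubling
    # period, so after t years the factor is 2**(t / period).
    return 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    # Under the assumed 18-month doubling period, project a few horizons.
    for horizon in (10, 20, 30):
        print(f"{horizon} years -> about {growth_factor(horizon):,.0f}x")
```

Whether the doubling period holds (and whether silicon, molecular electronics, or quantum devices deliver it) is exactly the uncertainty the text flags; the arithmetic only shows how quickly any sustained doubling compounds.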
5.2 Miniaturization
Space continues to be "compactified", as more and more functions are performed by devices that take up smaller and smaller spaces. Golf-ball-size systems on the horizon include advances in microelectromechanical systems (MEMS). These systems will improve sensor systems and lead to low-cost inertial-navigation systems, diagnostics, prognostics, microcontrol systems, and so forth. Miniaturization will also improve logistics. Maintenance of warfighting systems on the battlefield will be managed in real time through predictive capabilities involving sophisticated prognostic and diagnostic systems, all connected and communicating on the FCS mobile wireless network. Further advances in miniaturization will result in inexpensive, self-contained, disposable sensors, such as Smart Dust (Figure 8). These small, inexpensive sensors will be dispersed by soldiers on the battlefield in handfuls over an area where they will self-organize and self-configure to suit the particular situation.
Figure 7. Over time, exponential growth in computing power will exceed that of all human brains. Source: Dr. Ray Kurzweil, Kurzweil Technologies

Miniaturization will also have a major impact on flexible display technology: conformal displays that can be placed on a soldier's face plate or wrapped around the arm. The Army's Flexible Display Center at Arizona State University leads the field in research in this area. Within this decade, we expect to realize a wireless device contained in a six-inch-long, one-inch-diameter tube (Figure 9). Anticipated advances in miniaturization, computer memory, computational speed, and speech recognition should lead to a compact device capable of video recording, speech recognition, embedded mission-rehearsal exercises, stored illustrative manuals, wireless communications, and real-time situational awareness through a flexible display, all in a form that will easily fit into the pocket of a soldier. We will also be working on the development of very small complex machines, such as nanobots that can perform microsurgery, prostheses that can enhance soldier capabilities, and machines that can go into places that are dangerous to humans. The development of micro unmanned aerial vehicles (UAVs) the size of a human hand, or even smaller, is within our grasp (Figure 10). Micro UAVs will enable soldiers to gather information about threats and provide both lethal and nonlethal capabilities, while keeping soldiers out of harm's way. Our inspiration for this system is the common bumblebee (Figure 11). This small creature, with a body weight that is essentially all nectar, has a horizontal thrust of five times its weight and is capable of flying at a speed of 50 km per hour with a range of 16 km. Recently, researchers have discovered that the bumblebee
Figure 8. Smart dust, a miniaturized, inexpensive, self-contained disposable sensor. Source: Dr. Kenneth Pister, University of California at Berkeley
navigates by balancing the information flow from its left and right optical systems. Our current challenge is to understand the control system that enables this small creature to land precisely, with zero velocity, under turbulent conditions. Achieving this capability in a micro UAV will require extensive research on small-scale instabilities at low Reynolds numbers, the development of lightweight, durable materials, and sophisticated control systems that can work in turbulent environments. We will also have to develop highly efficient active-control materials and low-noise propulsion systems with compact power and energy sources that can operate reliably for extended periods of time. Through biotechnology, we have a real opportunity to take advantage of four billion years of evolution. Biotechnology could lead to the engineering and manufacturing of new materials for sensors and other electronic devices for ultra-rapid, ultra-smart information processing for targeting and threat avoidance. Dr. Angela Belcher of MIT has tapped into the biological self-assembly capabilities of phages
Figure 9. Wireless flexible display for use by the Soldier on the battlefield. Source: Eric Forsythe and David Morton, Army Research Laboratory
Figure 10. Micro-unmanned aerial vehicles (UAVs). Source: M. J. Tarascio and I. Chopra, University of Maryland

(viruses that infect bacteria) that could potentially enable precise, functioning electrical circuits with nanometer-scale dimensions. By allowing genetically engineered phages to self-replicate in bacteria cultures over several generations (Figure 12), Dr. Belcher has identified and isolated the phages that can bind with particular semiconductor crystals with high affinity and high specificity. These phages can then self-assemble on a substrate into a network, forming exquisitely precise arrays. The ultimate goal of this research is to replace the arduous fabrication of electronic, magnetic, and optical materials with genetically engineered microbes that can self-assemble exquisitely precise nanoscale materials based on codes implanted in their DNA. By exploiting living organisms as sensors, we are making advances in detection and identification. After all, why invent a sensor when evolution has already done it for you? The U.S. Army Medical Research and Materiel Command has developed
Figure 11. The bumblebee, a highly agile and efficient navigator. Source: M. V. Srinivansan, M. Poteser, and K. Kral, Australian National University

a technique for using the common freshwater bluegill sunfish to monitor water quality in several towns around the country. The system successfully detected, in real time, a diesel fuel spill from a leaking fuel line at a New York City reservoir. Fortunately, the reservoir intake was off line at the time of the incident, and no contaminated water reached consumers. A Belgian research organization, APOPO, has developed a way to detect land mines using giant African pouched rats. In Tanzania, these rats have been trained to detect land mines with extraordinarily high detection rates. Research is ongoing on the detection of explosives by parasitic wasps, the early diagnosis of pulmonary tuberculosis by rats, and the detection of certain types of cancers in humans by dogs.

5.3 Control of Increasingly Complex Systems
Our understanding and control of increasingly complex human-engineered and biologically evolved systems continues to improve. Besides creating new materials from the atom up and managing these new configurations through breakthroughs in nanotechnology and biotechnology as described above, we are improving our control of the communications network to support the Future Force. The FCS network
Figure 12. Self-assembly characteristics of genetically modified phages. Source: Dr. Angela M. Belcher, MIT

will be a network of humans collaborating through a system of C4ISR (command, control, communications, computers, intelligence, surveillance, and reconnaissance) technologies. Humans process sensory information and respond through an ad hoc communication network, which affects network performance and, in turn, feeds back into human behavior. For the network to meet the Army's goals, we need a better understanding of the best way for humans to behave and collaborate on such a network. Although multi-hop mesh networks hold out the promise of self-organizing, self-configuring, self-healing, and higher-bandwidth performance, considerable research is still needed to understand network performance under a wide range of conditions and to optimize protocols for military operations. We especially need to identify network instabilities to ensure that the network remains invulnerable to attack.
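The self-healing property attributed to multi-hop mesh networks can be illustrated with a toy connectivity check: when every node has redundant links, the surviving nodes can remain mutually reachable after a node fails. The topology and node names below are invented for illustration and have no relation to any actual military network design.

```python
# Toy sketch of mesh-network redundancy: breadth-first search checks
# whether all surviving nodes remain mutually reachable after a failure.
from collections import deque

def connected(adjacency, failed=frozenset()):
    """True if all surviving nodes are mutually reachable (BFS)."""
    nodes = [n for n in adjacency if n not in failed]
    if not nodes:
        return True
    seen, queue = {nodes[0]}, deque([nodes[0]])
    while queue:
        for nbr in adjacency[queue.popleft()]:
            if nbr not in failed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) == len(nodes)

# A small mesh in which every node has at least two neighbors.
mesh = {
    "A": ["B", "C"], "B": ["A", "C", "D"],
    "C": ["A", "B", "E"], "D": ["B", "E"], "E": ["C", "D"],
}
print(connected(mesh))                # the full mesh is connected
print(connected(mesh, failed={"B"}))  # still connected: traffic reroutes
```

A single-path (tree) topology would fail this test after losing any interior node, which is the essential contrast the mesh advocates are drawing.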
6 Network Science
The network is the centerpiece of network-centric warfare and the Army's transformation to the Future Force. There are networks in all aspects of our daily lives
and throughout the environment, such as the Internet (we are still trying to understand how it works), power grids (a common operating picture might have helped avoid the Northeast blackout of 2003), and transportation (cars, trains, and airplanes). There are also social networks composed of people and organizations. Studies of social networks focus on understanding how interactions among individuals give rise to organizational behaviors. Social insects, such as bees, ants, wasps, and other swarming insects, also operate as networks. There are networks in ecosystems as well as in cellular (the human brain) and molecular (e.g., metabolic) systems. We are learning how information is processed throughout the prefrontal cortex of the brain and where various types of events occur in this region of the brain (Figure 13). There are about 100 billion neurons in the brain, approximately half of them in the cerebellum. Modeling and simulation at the University of Pennsylvania has resulted in a depiction of the dynamic activity of approximately 10,000 neurons in the cerebellum. Although this is only a small fraction of the total, we continue to advance our understanding of how neuronal networks function and affect human behavior. One goal is to understand how the brain and cognition work, to learn about the software of the brain and its application to artificial intelligence. This knowledge will significantly affect virtual reality, robotics, human-factors behavioral science, and smart munitions, all of which are likely to be important to the Army's transformation to the Future Force. However, we currently lack a fundamental understanding of how networks operate in general.
Figure 13. Information flow between regions of the prefrontal cortex. Source: Adapted from Laura Helmuth, Science, Vol. 302, p. 1133

The network of network-centric warfare will be a system of connections between humans organized and interacting through a system of technologies. This network will be a highly nonlinear sense-and-response system about which little is known and for which there are few research tools to enable us to predict performance. Although the main focus is on C4ISR technologies and associated concepts, they are only part of the picture. At a very basic level, the rules or principles governing the behavior of this complex system are not well understood. Consequently, we have neither a language appropriate for describing its dynamics nor a systematic mathematical formalism for making predictions of network performance that can be compared with experimental data. We will need a multidisciplinary approach to advance our knowledge. This network is an example of an entire class of complex systems that exhibit network behavior. Therefore, rather than focusing research narrowly on the network of network-centric warfare, there may be an opportunity to advance knowledge and develop synergies along a broader front that will improve many complex systems and processes that exhibit network behavior. This new front could be called "network science", and progress in network science could have significant impacts on many fields, including economics and sociology. Research in network science could address a number of intriguing and important questions. Do seemingly diverse systems that exhibit network behavior share the same or similar underlying rules and principles? Is there a common language that can give us insight into the behaviors of these systems? Is there a general mathematical formalism for the systematic study of these systems? What should the Army focus on in the near term (0-10 years), midterm (10-20 years), and long term (beyond 20 years) to advance Future Force capabilities?
7 Conclusions
The Army faces formidable technical challenges on its path to Transformation. We are already seeing the emergence of a paradigm shift in capabilities that will save the lives of soldiers and lead to a smaller, lighter, faster, and smarter force. The Army's partnerships with the other Services and government agencies, academia, industry, and U.S. allies are essential to advancing the science and engineering needed to realize the vision of the Future Force. Our investments in science and technology will enable us to overcome the many technical challenges associated with Transformation and, more importantly, to ensure that when our soldiers are called upon to defend freedom and liberty anywhere in the world, they come home safe and victorious.
8 Acknowledgement
The author is deeply indebted to Irena D. Szkrybalo for her creative comments and careful editing of the original transcript of the author's SIAM conference briefing. She also made several important suggestions, which significantly improved this paper. A version of this paper was presented at the National Academy of Engineering Symposium on Biotechnology in February 2004.
References

[1] P.A. ANQUETIL, H.-H. YU, J.D. MADDEN, P.G. MADDEN, T.M. SWAGER, AND I.W. HUNTER, Thiophene-based conducting polymer molecular actuators, in Smart Structures and Materials 2002: Electroactive Polymer Actuators and Devices, Proceedings of SPIE, Vol. 4695, edited by Y. Bar-Cohen. Bellingham, Wash.: International Society for Optical Engineering (SPIE), 424-434.
[2] G. BUGLIARELLO, Bioengineering: An Engineering View. San Francisco: San Francisco Press, Inc., 1968.
[3] P. FAIRLEY, 2003. Germs that build circuits. IEEE Spectrum Online. Available online at: http://www.spectrum.ieee.org/WEBONLY/publicfeature/nov03/1103bio.html.
[4] L. HELMUTH, 2003. Brain model puts most sophisticated regions front and center. Science 302(5648): 1133.
[5] W.E. HOWARD, 2004. Better displays with organic films. Scientific American 290(2): 76-81.
[6] R.E. JENSEN AND S.H. MCKNIGHT, 2004. Inorganic-organic fiber sizings for enhanced impact energy absorption in glass reinforced composites. Submitted to Composites Science and Technology.
[7] R. KURZWEIL, 1999. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. East Rutherford, N.J.: Viking Penguin Group.
[8] R. KURZWEIL, 2003. Future Technology Vision Document. Report to U.S. Army Leadership and Army Science Assessment Group.
[9] R. KURZWEIL, 2003. Societal Implications of Nanotechnology. Testimony to the Committee on Science, U.S. House of Representatives.
[10] S.-W. LEE, S.K. LEE, AND A.M. BELCHER, 2003. Virus-based alignment of inorganic, organic, and biological nanosized materials. Advanced Materials 15(9): 689-692.
[11] Y.S. LEE, R.G. EGRES, AND N.J. WAGNER, 2003. The ballistic impact characteristics of Kevlar woven fabrics impregnated with a colloidal shear thickening fluid. Journal of Materials Science 38(13): 2825-2833.
[12] S. LLOYD, 1995. Quantum-mechanical computers. Scientific American 273(4): 140-145.
Scent Detection Workshop, 2004. Unpublished papers delivered at the Scent Detection Workshop, Sokoine University for Agriculture, Morogoro, Tanzania, July 27-30, 2004.
[13] T.R. SHEDD, W.H. VAN DER SCHALIE, M.W. WIDDER, D.T. BURTON, AND E.P. BURROWS, 2001. Long-term operation of an automated fish biomonitoring system for continuous effluent acute toxicity surveillance. Bulletin of Environmental Contamination and Toxicology 66(3): 3928.
[14] M.V. SRINIVANSAN, M. POTESER, AND K. KRAL, 1999. Motion detection in insect orientation and navigation. Vision Research 39(16): 2749-2766.
[15] P. STONE, 2003. Cebrowski Sketches the Face of Transformation. Available online at: http://www.defenselink.mil/news/Dec2003/nl2292003_200312291.html.
[16] M.J. TARASCIO AND I. CHOPRA, 2003. Design and development of a thrust augmented entomopter: an advanced flapping wing micro hovering air vehicle. Pp. 86-103 in Proceedings of the 59th Annual Forum of the American Helicopter Society. Alexandria, Va.: AHS International.
[17] T.P. TRAPPENBERG, 2002. Fundamentals of Computational Neuroscience. Oxford, U.K.: Oxford University Press.
[18] UNIVERSITY OF PENNSYLVANIA, 2004. Modeling and Simulation of the Cerebellum. Available online at: www.neuroengineering.upenn.edu/finkel/.
[19] U.S. ARMY, 2004. Department of Army Executive Summary. LTC R.K. Martin, U.S. Army.
[20] U.S. ARMY, 2004. The 2004 Army Posture Statement. Available online at: www.army.mil/aps/04/index.html.
[21] U.S. ARMY, 2004. The Way Ahead: Our Army at War, Relevant and Ready. Available online at: www.army.mil/thewayahead/relevant.html.
[22] U.S. ARMY, OFFICE OF FORCE TRANSFORMATION, 2004. Top Five Goals of the Director, Force Transformation. Available online at: www.oft.osd.mil/top_five_goals.cfm, July 16, 2004.
[23] U.S. DEPARTMENT OF DEFENSE, 2004. Transformation. Available online at: www.defenselink.mil/Transformation.
[24] B. WARNEKE, M. LAST, B. LEIBOWITZ, AND K. PISTER, 2001. Smart dust: communicating with a cubic-millimeter computer. IEEE Computer 34(1): 44-51.
[25] H.-H. YU AND T.M. SWAGER, In press. Molecular actuators: designing actuating materials at the molecular level. IEEE Journal of Oceanic Engineering 29(2).
Computational Simulation in Aerospace Design

Raymond R. Cosner* and David R. Ferguson†

Abstract

Driven by pressures to improve both cost and cycle time in the design process, computational simulation has become a pervasive tool in the engineering development of complex systems such as commercial and military aircraft and space launch vehicles. Fueled by the tremendous rate of advance in engineering computing power, acceptance of computational design tools, and of the results obtained from them, has accelerated in the last few years. We anticipate that this pace of improvement will continue to accelerate. Based on this view of enabling technologies, we present a vision for the future of aerospace design, ten to twenty years from now. We then identify some of the major barriers that will hinder the technical community in realizing that vision, and discuss the advances needed to overcome those barriers.
1 Introduction
Modeling and simulation are tools that allow companies to design new products both more quickly and of higher quality than before. Computational simulations such as Computational Fluid Dynamics (CFD) are key to modern modeling and simulation efforts. However, even as these tools continue to gain acceptance as a primary tool of aerospace product design, they have yet to realize their full potential in the design process. In our view, to achieve its full potential, the entire aero CFD process must become much more highly automated across the variety of product types, geometry representations, and user skill levels encountered in a typical industrial environment. Despite considerable improvements and successes, especially in academic institutions and government labs (see, for example, the work described in [9]), this kind of productivity has yet to arrive in the typical industrial setting.

A number of factors are accelerating the acceptance of these tools:

• Improving maturity and capability of the computational tools and the supporting infrastructure,
• Clear benefits gained from computational simulation in previous applications, which encourage design teams to be more aggressive in the next application, and
• Customer and market pressure dictating ever-faster design cycles, often to a degree impossible without computational simulation.

Other factors continue to limit acceptance:

• Insufficient accuracy in various applications
• Throughput and cost issues, including
  - Lack of versatility of the tools in addressing complex engineering challenges over widely varying product classes,
  - Inability of the tools to meet the data generation rates required to support a large engineering project,
  - Computational failures requiring human intervention, and
  - Total cost of the simulation-based process in a large engineering project.
• Lack of confidence in computational simulation.

This article discusses these factors, with emphasis on those which limit acceptance of computational simulation in the aerospace design process. The authors' experiences are mainly in geometry and Computational Fluid Dynamics (CFD), and the article is illustrated with examples from that domain. However, the key issues transcend individual analysis domains.

*Senior Technical Fellow, Director for Technology, Boeing Integrated Defense Systems, St. Louis, MO.
†Boeing Technical Fellow, retired.
2 Factors Favoring Computational Simulation
Computational simulation has made great strides in the last decade. Tools are becoming more useful in that they are beginning to produce the required accuracy at an acceptable cost over a wide range of the design space. There have been clear successes in engineering development made possible by effective use of computational simulation, and these successes have motivated others to adopt computational simulation in their design processes. Several factors are promoting an increased reliance on computational simulation in the design process. Increased market pressure and globalization are both
making traditional experimentation-based design more difficult while, at the same time, computing itself is becoming more capable. The aerospace market is intensely competitive at all levels, from major manufacturers such as Boeing down to smaller manufacturers, creating an imperative to use design processes that define practical aerospace products quickly and at low cost. But aerospace product development is a long process, typically spanning a decade or more from initial concept to the point where the product enters service. The long time span reflects the need to develop a vehicle that can fully meet a large, aggressive, complex set of requirements.

The conflicting needs of quick design cycles and complex requirements have led portions of the US Department of Defense to adopt a spiral development process aimed at accelerating the introduction of new capabilities into service. This is done by defining a sequence of products with increasing capabilities. The initial product from design spiral one might have 70% of the ultimate capabilities; the products of subsequent spirals will have greater capabilities. This approach is intended to bring products into service quickly with the capabilities that are ready for use, without waiting for the remaining capabilities to reach acceptable maturity. Further, the spiral development approach provides the opportunity to gain operational experience with the earlier products, leading to a more capable product later. One key to the success of the spiral development process lies in the ability to execute a single design cycle very quickly: a few months, or perhaps a year, instead of multiple years in a traditional process. This means that a traditional experimental design approach, such as one based on wind tunnel testing, may be inappropriate, and so spiral development will encourage a trend toward increased reliance on computational simulation to obtain key data.
Globalization and mergers are also driving increased reliance on computational simulation. Suppliers from around the world contribute to the development of new aerospace vehicles. Further, a design team is now likely dispersed over many more geographic locations than previously. Both of these factors, a global supplier base and a geographically dispersed organization, mean that it is impractical to rely on a single physical source of design data, e.g., a wind tunnel, based in a specific location. It is imperative that designers, no matter where they are located, have access to tools in which they have a high degree of confidence in the data generated, and which are suitable to a wide variety of applications and products rather than tuned to specific products. This again points to an increased role for computational simulations, which can generate high-quality data and are not tied to a specific location. Computational tools and their required computing infrastructure are also becoming substantially more capable each year, and as they do, there is a tendency to employ them more in the design process. Based on these observations, it seems reasonable to anticipate that computational simulation will be the primary data source for aerospace product development in the next few years. The key advantage of computational simulation will lie in its extension to multidiscipline analysis and optimization, for which there is simply no affordable competing capability from any other technology or design process. However, even as tools become more useful and better adapted to today's markets, they still fall
short of having the full impact we want and need. To get to that full impact, several challenges must be met. We turn now to some of those challenges.
3 Challenges
To realize this vision of the future, computational simulation must demonstrate its readiness for a larger role in three key areas.

Fidelity and Accuracy. "Is the data good enough?" Computational analysis must demonstrate that it can achieve the quality of data required. Data must be at an acceptable level of fidelity, meaning that it is complete in its representation of geometry, flight conditions, and the physics of the flowfields. At acceptable fidelity, analysis must demonstrate that it reliably achieves the level of accuracy required to meet engineering needs as the primary data source.

Throughput and Cost. "Does CFD provide data quickly enough and at an affordable cost?" Engineering design is based on large amounts of data used to assess the effects of each vehicle parameter that can be altered by the design team. Vehicle performance must be estimated at a range of flight conditions spanning the complete flight envelope, including unusual or emergency conditions. It must also span the parametric range of the vehicle state, e.g., deflection of each control surface, landing gear up or down, various bays or cavities open or closed, and so on. These requirements mean that a large number of cases must be analyzed, and the entire cycle of analysis will be repeated at various points in the design process when significant changes are made to the baseline configuration. Thus, the analysis process must be capable of sustaining a high throughput of data at an acceptable cost in analysis man-hours, computing resources, and other metrics.

Confidence. "Can we rely on computed data in making critical decisions? Will we know if something is wrong?" Unless the program team has sufficient confidence in the simulation data to make key design decisions based solely on the computed data, the computational simulation process will add only modest value to the design program.
If the design team finds it necessary to hedge its bets by acquiring independent data from a different source, such as wind tunnel testing, it is apt to question why the computational data were needed at all. One key need, therefore, is the ability to state the statistical accuracy of the data based on the actual characteristics of individual elements of data, rather than on a universal statement that all the data are accurate to, say, ±1%. For example, we need to get to the point where we can say that a specific element of data is, with 95% confidence, accurate within ±1%, while another data element, due to a specific cause, is less accurate, say ±5%. Each of these factors, accuracy, throughput, and confidence, will be considered in turn.
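A per-element accuracy statement of the kind described above can be sketched as a simple confidence-interval calculation. The error samples below (CFD minus test, in percent) are invented for illustration, and a normal-theory interval is only one of several ways such bounds are constructed in practice.

```python
# Hypothetical sketch: turning repeated CFD-vs-test comparisons into a
# statement like "with 95% confidence, predictions of this quantity are
# within ±x%". The sample values are invented for illustration.
import statistics

errors_pct = [0.4, -0.6, 0.9, -0.2, 0.5, -0.8, 0.3, 0.7]

mean = statistics.mean(errors_pct)
# Two-sided 95% half-width assuming roughly normal errors (z ≈ 1.96);
# a t-interval would be more defensible for a sample this small.
half_width = 1.96 * statistics.stdev(errors_pct) / len(errors_pct) ** 0.5
print(f"mean error {mean:+.2f}%, 95% half-width ±{half_width:.2f}%")
```

The important point is that the bound is derived from the observed behavior of that specific quantity, not asserted uniformly for all computed data.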
3.1 Accuracy
Throughout most of the history of CFD, the research focus has been on achieving ever-improving accuracy over a range of conditions. This improvement has come through research on numerical algorithms, methods for modeling common physics such as turbulent boundary layers and shear layers, and methods for modeling special areas of physics such as combustion or multi-phase flow. In our experience, for aerospace design applications, useful accuracy often means at most ±1% uncertainty in force and moment predictions, consistently. To achieve this level of accuracy, four factors must be present:

• It must be possible to employ dense grids where critical to the analysis without incurring penalties by adding high density where it is not needed.
• The analysis must converge to tight criteria. Shortfalls in accuracy are generally blamed on deficiencies in turbulence models. We were amazed how much better our turbulence models performed once we developed the practice of running on dense grids with tight convergence criteria.
• Skilled users must be available. The most significant factor in the quality of CFD predictions is the user's ability to make optimal decisions on user-controlled modeling parameters such as grid construction, selection of algorithms and physical models among the available options, and management of the solution from initialization to convergence.
• Engineers must anticipate the key physical interactions likely to exist in a new analysis flowfield, and realistically assess the modeling requirements and the accuracy that can be expected, before the engineering program commits to using CFD analysis to provide the required data. This requires experience and judgment in both physical and computational fluid dynamics.

There are several areas of common flow physics where accuracy remains troublesome for engineering CFD analysis.
Where these physical interactions are important to the engineering problem, it is more difficult to achieve the level of accuracy required to meet engineering design needs. Two of the most critical areas are:

• Boundary Layer Transition. Current transition models are based on gross correlations of limited experimental data to modeling parameters such as momentum-thickness Reynolds number. These models generally are not adequate for high-accuracy predictions in complex (e.g., realistic) 3-D flowfields. Improved transition models are needed which take into account specific localized details of the flowfield, and which add only a modest increment to the total computing cost of the analysis.
• Heat Transfer. Heat transfer (or adiabatic wall temperature) continues to be a difficult parameter to predict with high accuracy, for several reasons. First, fluid temperature or heat transfer rate is a difficult quantity to measure experimentally, much more difficult to measure than fluid velocity. So, there is only a small amount of detailed experimental data
available to guide development of models and solution methods. Second, the asymptotic temperature profile near the wall is nonlinear, whereas the asymptotic velocity profile (either laminar or turbulent) is linear. A much higher grid density is required to capture accurately the nonlinear temperature profile adjacent to the wall, and it is this near-wall profile that determines the wall temperature or heat transfer rate. Third, the wall temperature or heat transfer rate is very strongly affected by the state of the boundary layer: laminar, transitional, or turbulent. The transport properties of air are well known for laminar flow, but they are modeled approximately by turbulence models for turbulent flow. We will not recite the limitations of turbulence models here. The transport properties are even less well known for transitional flows, where the fluid is changing from a laminar state to a fully turbulent state. So, transitional and turbulent flow prediction is subject to the familiar limitations of current models for those regions of a flowfield. There is an additional issue. Models for transition and turbulence predict (with recognized deficiencies) the local eddy or kinematic viscosity of the fluid. For thermal predictions, the effective heat transfer coefficient must also be determined. This coefficient can be obtained from the kinematic viscosity using a proportionality constant, the Prandtl number. The Prandtl number for laminar flow is a known property of the fluid, but for turbulent or transitional flow it is subject to significant uncertainty. Generally, a constant value of 0.9 is assumed, but this is a very weak assumption. Thus, calculation of temperature or heat transfer in transitional or turbulent flow is subject to all the uncertainties of predicting mean velocity in those flow regimes, plus the uncertainty associated with the turbulent Prandtl number.
And, the grid requirements are more stringent due to the nonlinear asymptotic behavior of the temperature profile. All in all, it is a very difficult calculation to execute with high accuracy.
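As a concrete illustration of the last point, a minimal sketch of how the assumed turbulent Prandtl number propagates directly into a predicted turbulent heat flux. All numerical values (density, eddy viscosity, near-wall gradient) are invented for illustration and do not come from this paper:

```python
# Sketch: sensitivity of a predicted turbulent heat flux to the assumed
# turbulent Prandtl number Pr_t. All numbers are illustrative only.

def turbulent_heat_flux(rho, cp, nu_t, dT_dy, pr_t):
    """Turbulent heat flux q_t = rho * cp * (nu_t / Pr_t) * dT/dy.

    The eddy (kinematic) viscosity nu_t comes from a turbulence model;
    the turbulent thermal diffusivity is alpha_t = nu_t / Pr_t.
    """
    alpha_t = nu_t / pr_t          # turbulent thermal diffusivity [m^2/s]
    return rho * cp * alpha_t * dT_dy

# Illustrative near-wall values (assumed, roughly air at ambient conditions)
rho, cp = 1.2, 1005.0              # density [kg/m^3], specific heat [J/(kg K)]
nu_t = 1.0e-4                      # modeled eddy viscosity [m^2/s]
dT_dy = 5000.0                     # near-wall temperature gradient [K/m]

baseline = turbulent_heat_flux(rho, cp, nu_t, dT_dy, pr_t=0.9)
for pr_t in (0.85, 0.90, 0.95):
    q = turbulent_heat_flux(rho, cp, nu_t, dT_dy, pr_t)
    print(f"Pr_t = {pr_t:.2f}: q_t = {q:9.1f} W/m^2 "
          f"({100.0 * (q / baseline - 1.0):+.1f}% vs Pr_t = 0.90)")
```

The flux scales as 1/Pr_t, so the commonly assumed uncertainty in Pr_t maps directly into a comparable percentage uncertainty in the computed heat transfer, on top of all the uncertainties in the eddy viscosity itself.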
Also important, but beyond the scope of this paper, are analyses involving unsteady flows, chemistry, and turbulence, as these lie outside our particular areas of expertise. However, we suggest that turbulence and chemistry modeling are areas where basic research is needed to capture the physics of interest and to develop numerical algorithms which capture unsteady flow physics to the required accuracy, remembering all the while that levels of accuracy that were acceptable ten years ago are not considered sufficient today. As accuracy increases in steady-state applications, we become less comfortable with inaccuracies that exist in unsteady calculations. As experience is gained using CFD for systems with chemical reactions (combustion, for example), we see more clearly areas where further improvements would lead to significant gains in the engineering product development process. Once we can regularly achieve satisfactory results at a given level in some metric, the engineering community begins calling for a higher level of performance (accuracy, cost, etc.).
3.2 Throughput and Cost
The three primary sources of aerodynamic data for design are computational analysis, wind tunnel testing, and flight testing. Of course, the product must already exist before data can be obtained from flight testing. So, the incumbent source of aerodynamic data for design (prior to building the vehicle) is the wind tunnel. The wind tunnel has been the primary data source since before the flights of the Wright brothers, and a strong and well-developed body of practice has evolved based on wind tunnel testing. While wind tunnel testing is subject to various limitations and deficiencies, these issues are well understood throughout the aerospace community. Thus, there is high confidence that in most situations the uncertainties in wind tunnel data are known and understood, and that situations where unacceptable uncertainty exists can be identified. Based on this degree of understanding of data uncertainty, critical design decisions based on data from wind tunnel testing can be, and routinely are, made. Consider the factors involved in using CFD data as a direct replacement for wind tunnel data. Significant time and money are needed to initiate a wind tunnel test, but once testing is underway the marginal cost per data point (a single combination of model geometry and test conditions) is minimal. Thus, while CFD analysis may enjoy a cost and cycle time advantage in situations where only a small number of data points are required, the advantage vanishes as the number of needed data points increases.1 Eventually, a crossover point is reached where wind tunnel testing becomes the faster and cheaper source of data. The crossover point, in the number of cases for which analysis is faster or cheaper, is currently a few hundred cases (data points), and it is rising every year. There are many segments of the design process, such as analysis of components rather than the whole vehicle, where data is required in quantities for which analysis is the faster and cheaper data acquisition path.
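The crossover behavior described above follows from two linear cost curves: a large fixed cost with a small marginal cost for the wind tunnel, and a small fixed cost with a larger all-in marginal cost for CFD. A toy calculation makes this concrete; all dollar figures are hypothetical placeholders except the wind tunnel per-point cost, which the text later puts in the $30-$150 range:

```python
# Sketch: wind tunnel vs CFD total cost as the number of data points grows.
# Dollar figures are illustrative assumptions, not data from the paper.

def wind_tunnel_cost(n_points, setup=500_000.0, per_point=100.0):
    # Large fixed cost to initiate a test; small marginal cost per point.
    return setup + per_point * n_points

def cfd_cost(n_points, setup=20_000.0, per_point=2_000.0):
    # Small startup cost; higher all-in cost per computed case (labor,
    # computing, licenses, facilities, and overhead).
    return setup + per_point * n_points

# Find the first n where the wind tunnel becomes the cheaper source.
n = 1
while cfd_cost(n) <= wind_tunnel_cost(n):
    n += 1
print(f"crossover at roughly {n} data points")
```

With these assumed numbers the crossover lands in the "few hundred cases" range the text cites; lowering the CFD cost per case pushes the crossover point higher, which is why it "is rising every year."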
For example, propulsion system design (inlets and nozzles) was one of the earliest areas where high-end Navier-Stokes CFD gained a prominent role. For this segment of design, testing is particularly expensive and computational simulation is very cost effective. Data requirements in an airplane development cycle may change as the design goes through the cycle. Generally, data is required in smaller quantities in the initial stages. As the design gains maturity, large parametric databases are required to support trade studies and to verify design performance in a variety of measures throughout the flight envelope. In the detailed (final) design stage, databases of 30,000 points or more are common, and oftentimes over half a million points will be needed. As noted above, a wind tunnel is very efficient at generating large databases. Test data can be acquired to fill a large database at a cost in the general range of $30 to $150 per point, for a full airplane geometry including all movable control surfaces, protuberances, etc.2
1 In this context, a data point consists of a set of data for one combination of vehicle geometry and flight condition. This single data point may consist of thousands of discrete measured values in a test, or tens of millions of calculated values in an analysis. One CFD analysis run generates one data point in this terminology, though that data point may consist of millions of discrete predicted values on a grid.
2 An important factor in assessing the cost of a wind tunnel test is whether a suitable wind tunnel model is available. A wind tunnel model is a precision scientific instrument, with extensive sophisticated instrumentation. The cost of designing and building a model from scratch can get into multiple millions of dollars. Therefore, if a suitable model is available already (or an existing model can be easily modified to meet the new need), then the cost of testing will be at the low end of the range mentioned above and the desired data can be acquired within 1-2 months. If a new model must be built, then the cost will be at the high end of the range, and it could easily take a year to obtain the needed data. While the accuracy of test data depends on many factors, as a general statement most data can be obtained with uncertainties of ±1%.
To compete with testing for the task of building large databases, CFD analysis must at least be competitive with testing; that is, execute 30,000 cases in less than 12 months at a cost of less than $150 per case. And the CFD cost per case must include all elements of cost (as does the wind tunnel cost): labor, computing facilities, software licenses and support, facilities (buildings and floor space), and overhead. This is a demanding requirement, and CFD analysis is not yet competitive in the role of building large databases, although progress is being made on a number of fronts. See, for example, [9], which describes a distributed parallel computing environment and a software system capable of generating large databases for parametric studies. One area where industry continues to struggle despite advances in technology, and where improvements will lead to significant gains in throughput, is geometry. The geometry representation of a vehicle is the most basic representation of all and is the starting point for all downstream applications, especially the engineering analysis applications of fluid dynamics, electromagnetics, and structural analysis. It also forms the basis for other important applications such as numerically controlled machining and even the development of installation and maintenance guides. Most would assume that in today's world the geometry representation is brought forward directly from the CAD system used by the designer into the analysis package and that all the information is present to enable the application. Sadly, this is not so at present. At Boeing we estimate that upwards of 30% of engineering time for analysis is in fact spent on preparing the designer's geometry for analysis. This is a tremendous drain on resources, and our vision for the future is to eliminate as much of this activity as possible. Geometry preparation consists of several tasks, any or all of which may be present in any one application. These tasks include importing the geometry from a source such as a CAD system; modifying the geometry by simplifying it if needed and eliminating unnecessary or intractable features; adding pieces of geometry needed to enable the analysis; going through a quality assessment process and, most obnoxious, repairing the geometry; and lastly exporting the geometry to the next user. The distribution of effort over the various tasks for the typical CFD process3 is captured in the following table.
3 This table is, of course, only qualitative. These estimates will vary according to the particular problem. They are, however, representative of a typical CFD process.
Average CFD Process

    Activity                Average % of total CFD effort
    Geometry Preparation    25%
    Mesh Generation         30%
    Flow Solution           25%
    Post Processing         20%

What this table shows is that, after the designer's work is completed, geometry preparation still constitutes a significant portion of the CFD task and, indeed, of other analyses such as computational electromagnetics. The situation is such that in many instances geometry preparation issues make an analysis infeasible or too expensive to perform [5]. Our goal is to remove upwards of 50% of CFD processing cost by significantly reducing geometry preparation, with the ultimate goal of having geometry move directly from design to automatic mesh generation procedures. The effect of this would be to reduce CFD time by at least 50%, but, even more, it would give the designer an on-demand capability for flow analysis. This in turn would significantly increase opportunities for analysis in the design loop. For example, being able to move directly from design through mesh generation is one of the key enablers for multi-disciplinary design and optimization. Achieving these goals will require answers to several key questions and issues. Among these are:
• Is it possible to have a master geometry model that is immediately usable by all disciplines? Or will every master geometric model have to be modified to meet the requirements of specific downstream applications? If the answer is yes, a master geometric model is possible, then it is imperative that the geometry and analysis research communities begin work on understanding what such a master model might look like and what, if any, advances in analysis are needed to make use of it. If the answer is no, there can never be a master geometric model, then research must be undertaken to understand how at least to limit the amount of individual modification needed to enable all relevant analyses.
In either case, research is needed to understand what exactly should be in a master or core geometry model and how analysis codes should be configured to take full advantage of it.
• How can the transfer of geometry between different parts of the engineering process, with their varying requirements for geometry quality, fidelity of details, included features, and mathematical representations, be improved? Often it is suggested that the answer is to impose a single CAD system with a single requirement for quality, fidelity, etc. from the beginning. However, such a simple answer does not seem feasible [7]. At best, it assumes that the final requirements for accuracy, fidelity, etc. are known at the beginning of what can be a very long design process. At worst, it imposes a degree of fidelity on all components of the process that may simply be too burdensome for the overall process. Do the initial design concepts really have to have the accuracy and fidelity of detail design?
• How should geometry from many sources be handled, considering that there are multiple sources within any company and that suppliers and partners use different CAD systems? This will remain a key issue whether or not a master geometry model can be made available. Our experience within a large corporation certainly suggests that uniformity of geometry sources within the same corporation may be a practical impossibility; uniformity across a broad spectrum of customers, partners, and vendors is a certain impossibility. Thus, even if we can obtain a seamless transfer of geometry from design, through mesh generation, and into analysis within a single corporation, the need for handling geometry from multiple sources will continue.
We have discussed some of the overarching issues to be addressed when considering how geometry, meshing, and analysis systems should work together. We now turn our attention to some of the specific problems that arise with geometry and cause meshing, and in particular automatic meshing, to be very difficult and often to fail altogether. Geometry problems include gaps where surface patches fail to meet correctly, twisted patches where normals are reversed, missing geometry pieces, overlapped surface boundaries, and an excessive number of surface patches being used to define a specific geometry. The sources of these problems can be in the CAD system itself, in the design process, or in the underlying mathematics of the process. For example, gaps in the geometry, although not likely, could be caused by the designer not doing a proper job. Gaps are more likely caused by a failure in a mathematical algorithm designed to compute the curve of intersection between two surfaces. The algorithm could fail by simply not having a tight enough tolerance,4 thus allowing gaps that are too large; it could fail to find all components of an intersection; or it could simply fail to return an intersection at all.
Surface overlaps, on the other hand, are usually the result of the designer not paying careful enough attention to the details. Thus, the gap problem often can be traced directly to the mathematics of the CAD system, while the overlap and missing geometry problems are more likely to be process problems. Excessive numbers of patches used to define geometry undoubtedly result from the designer needing very precise control but not having a rich enough assortment of tools within the CAD system to obtain that control, the only recourse being to continue to refine the geometry into finer and finer pieces. Difficulties in finding and fixing geometry problems continue to vex designers and engineers and remain a major cause of wasted engineering labor and time in completing the CFD process. Although some tools exist for detecting and repairing flaws in geometry, most critical flaws still are detected visually and repaired in an ad hoc manner. Improving and perhaps automating CFD-specific geometry diagnostics and repair tools would significantly reduce cycle time. Even with the advent of some promising tools, see [2], repairing geometry continues to be difficult.
4 Caution: The temptation here is to attempt to set a tolerance that is tight enough to handle all geometries and all applications. We do not know how to do this [6]. The accuracy requirements on surface intersections vary by orders of magnitude: from very loose, simply to satisfy graphical display requirements, to moderately tight for CFD applications, to extremely tight for some electromagnetic applications.
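The tolerance issue can be made concrete with a toy gap check between two surface-patch boundary curves. The curves, the size of the defect, and the tolerance values below are all invented for illustration; a real CAD check would sample trimmed NURBS boundaries, not analytic functions:

```python
# Sketch: measuring the gap between two patch boundary curves and judging
# it against application-dependent tolerances. All values are illustrative.
import math

def boundary_a(t):
    # Shared edge of patch A, parameterized on [0, 1].
    return (t, math.sin(t), 0.0)

def boundary_b(t):
    # The same edge as seen by patch B, with a small modeling defect.
    return (t, math.sin(t) + 1.0e-5 * t, 0.0)

def max_gap(curve1, curve2, samples=101):
    """Largest pointwise distance between two parameterized curves."""
    gap = 0.0
    for i in range(samples):
        t = i / (samples - 1)
        gap = max(gap, math.dist(curve1(t), curve2(t)))
    return gap

gap = max_gap(boundary_a, boundary_b)
for name, tol in (("display", 1.0e-3), ("CFD", 1.0e-6)):
    status = "OK" if gap <= tol else "GAP"
    print(f"{name:8s} tolerance {tol:.0e}: {status} (max gap {gap:.2e})")
```

The same geometry passes at a display-grade tolerance and fails at an analysis-grade one, which is exactly why a single universal tolerance is so hard to set.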
There is a huge process bottleneck in the availability of people with the skills to do a geometry task, and this is compounded by the fact that effective repair often requires proficiency in two complex skill areas, e.g., the CAD system and CFD. Given the complexity and persistence of these problems, we don't believe we should expect immediately a tool with all the required functionality that would remove all problems. Rather, borrowing from the spiral development process in design, we suggest that a similar approach could work here and that there are a number of areas where results could lead to immediate improvements. We mention a few. The first phase,5 spiral one, would be to develop metrics for geometry quality appropriate to each user group and then develop tools to evaluate quality against the metrics. Too often the required quality is a vague notion of what works, developed by users of specific software for specific applications. Thus, it is almost impossible for a newcomer to understand what makes one instance of geometry acceptable and another not. This means it is impossible to automate these processes. Such metrics and tools should be generated both for internally developed geometry and for geometry received from outside the organization. Further, the metrics should be communicated to all providers of geometry and most especially to the CAD vendors. The second spiral, building on results from the first, would begin work on developing robust geometry and analysis tools. The sensitivity to levels of precision should be eliminated if at all possible. This may require substantial changes to both the geometry and analysis tools in use today. A background level of geometry quality should be identified. This would also help with the question of core or master models. The next phase, spiral three, would examine the question of how healed or fixed geometry gets absorbed back into a CAD system.
This involves obtaining a deep understanding of the flow of geometry data through the entire product life cycle, including the need to provide feedback upstream through the process on geometry problems, to address design changes, to preserve geometry fitness, and to understand tradeoffs between process cost/time and geometry quality.

3.3 Confidence
We've covered increasing accuracy and throughput and reducing cost as important topics in promoting the role of computational simulation in engineering design. We now turn to perhaps a more fundamental question: Can we rely on computed data in making critical decisions? Will we know if something is wrong? This is the crux of the matter in gaining acceptance of computational simulation in the engineering process. One way of viewing this issue is to think in terms of having a useful understanding of the level of uncertainty in each piece of data produced by the computational simulation process. This uncertainty estimate must be sound: if the estimated uncertainty is in fact too low, then the users are exposed to a potentially dangerous failure. If the estimated uncertainty is too high, it may unnecessarily discourage users from gaining the value in the data.
5 There would actually be no need to hold off on future spirals until the first ones are done. Rather, there are possibilities for concurrent development. We divide them into cycles only as a convenience for understanding the work.
The difficulty in gaining confidence in fluid flow results is compounded by the fact that both physical and computational fluid dynamics exhibit highly nonlinear behavior: a small change in the conditions defining the flowfield in question (physical or computational) can produce important changes in the flowfield, and the response of the physics and of the computational simulation to a change in flow conditions generally differs. We assert, without offering proof, that it is impossible to run enough validation cases to establish complete confidence throughout a needed design space when the physics are nonlinear. In the rest of the paper, we will discuss both today's essentially a priori methodology, with possible improvements, and a new a posteriori methodology that may result in dramatic increases in confidence.

Today's Validation Process
Today, validation is an a priori process, described as follows; a significant body of literature exists on practices for verification and validation of computational simulations [1]. Recall that
• Verification is the process of establishing the degree to which the computational simulation provides accurate solutions of the selected mathematical model.
• Validation is the process of establishing the degree to which the computational simulation provides accurate representations of a selected physical flowfield, or a defined range of flowfields.
In today's practice, validation is the primary method used to establish confidence in computational predictions. The general approach is:
1. Select a high-confidence set of experimental data, as "close" as possible to the analysis application one desires to validate. Due to the issue of differing nonlinear response (physics vs. computation), determining the best experimental data close to the desired validated analysis objective can be a very subjective and judgmental process.
2. Conduct repeated analyses of the validation problem(s), adjusting the computational model to determine the operational procedures required to attain the best possible accuracy and cost. This is a trial-and-error process.
3. When satisfied that the optimal combination of accuracy and cost has been identified, document the key lessons, expected accuracy, and expected cost from the validation results. The key lessons will include user-controlled modeling parameters that were adjusted (optimized) to achieve the best results, such as grid topology, density, and structure; flow solver options; guidance on simplifying the representation of the physical flowfield while maintaining acceptable accuracy and cost; etc.
4. Apply these lessons from the validation process in new applications. The new applications by definition are different in some significant way from the
existing data used to guide the validation. Therefore, the user must exercise judgment in applying the knowledge from the validation process in the new application. At several points in this sequence, the user must apply judgment to achieve good results. Thus, one can see that the skill, experience, and insight of the user are critical factors in achieving satisfactory results in a "validated" application. There is an important consequence of this process. The person performing the validation typically repeats the analysis a number of times, in the process of learning the optimal combination of modeling techniques and user-controllable modeling parameters. Without having such intent, the person conducting the validation has in fact identified the best-case outcome of the analysis methodology, in the validated application. Thus, the validation process actually winds up validating the best accuracy that the method can obtain rather than validating a level of accuracy that can be expected of most users in most situations. For engineering applications, it would be far better to conduct validation based on an approach which sets expectations at a level which can be achieved most of the time in a first-case solution, rather than an expectation that can only be achieved rarely in the first analysis. This iterative approach to validation has an additional implication. The use of an iterative trial-and-error process for setting expectations on CFD accuracy leads to reliance on highly tuned analysis models that often are very sensitive to specific users. Such a process is non-robust. The preferred analysis process is robust in that the quality of results is relatively insensitive to usage practices, over a meaningful range of practice. With a robust process, the natural range in variations due to differences in user practices should have little impact on the quality of results. In such a process, the rate of first-try success (acceptable quality) is high. 
A non-robust process, on the other hand, is one that requires careful adjustment of user-controlled parameters to produce acceptable quality. Today's CFD process exhibits several non-robust characteristics, particularly with regard to geometry and grid issues. This non-robustness ultimately is related to the selection of algorithms used for CFD analysis. Algorithm development has long been directed by considerations of spatial and temporal order of accuracy, suitability for high-rate parallel processing, and other issues. The ability of the algorithm to provide acceptable results over a range of grid characteristics has generally not been a factor in the development and implementation of algorithms in a computational code. The lack of robustness is illustrated by the iterative nature of most validation exercises. In turn, the expectations set by publishing validation data based on careful tuning of the computational analysis tend to perpetuate highly tuned, non-robust processes. It is a chicken-and-egg problem. To summarize: the primary approach to CFD quality assurance today is validation. However, the validation process has several weaknesses and falls well short of providing a complete solution to the problem of assuring quality of CFD data in the engineering design process. Key areas of concern are:
• Validation cases almost always are simpler than the real applications, simpler in terms of both the geometry of the problem and the flowfield physics exhibited. High-quality experimental data, essential for value in validation, typically exist only for simplified geometries. Frequently, validation also is based on integrated flowfield parameters such as drag. If validation is performed on that basis, without additional data describing the details of the fluid interactions in the validation cases, then there is a real risk of false optimism in the validation case due to offsetting errors, and those errors won't be offsetting in a new application in the subsequent design process.
• CFD data quality is strongly dependent on small details of the modeling approach. For example, accuracy can be strongly affected by small details of the computational grid and its interaction with the numerical algorithms. Fluid dynamics (physical or computational) is a domain where small, localized features of a flowfield can have far-reaching impact, e.g., formation of a vortex, or details of a shock wave boundary layer interaction. Thus, a localized modeling deficiency can destroy all value in the overall solution.
• As discussed above, it is very difficult to model the real application while exactly preserving the operational approach that was successfully validated. Success in this extension from the validated outcome is heavily dependent on the judgment of the user, which in turn is based on the user's skill, experience, and insight.
To mitigate the impact of these issues, a design team relying on CFD predictions often adds an additional step after the analysis results have been provided: a careful review and assessment of those results by independent experts, that is, by knowledgeable people who had no role in producing the CFD predictions. To succeed in this role, the reviewing expert must be skilled in the pertinent aspects of both physical and computational fluid dynamics. People with skills in both areas can be hard to find.
In any event, adding a process of human inspection greatly reduces the throughput of the CFD analysis process, thus reducing the value of the CFD analysis even if no deficiencies are found in the CFD data. For all these reasons, it does not appear that the current approach to quality assurance in computational simulation data can be retained as we seek to increase substantially the throughput of the computational simulation process. A new approach is needed.

Issues that Degrade Quality
There are many factors that contribute to uncertainty in CFD predictions, and in other computational simulation disciplines. Each factor must be addressed to improve the overall analysis quality. Thus, there cannot be a single silver-bullet approach to improving confidence in CFD simulation data; each factor affecting uncertainty must be dealt with appropriately. It can be helpful to consider the contributors to uncertainty in two categories: those factors which are inherent in our tools and processes, and those which can be controlled by the user. Clearly, the approach to improvement is very different in these two categories. Uncertainties due to fundamental limitations in the analysis codes include:
• Algorithm errors: truncation, roundoff, phase error, etc.
• Limitations of the physical models for turbulence, transition, chemical reactions, etc.
Uncertainties due to user-controllable factors in usage of the computational tools include:
• Imperfect grids,
• Inadequate convergence, spatial or temporal,
• An inappropriate turbulence model, locally or globally,
• Omitted physical interactions, e.g., boundary layer transition, and
• Boundary condition simplification or ambiguity.
Many additional examples could be listed in these two categories. As discussed to this point, a priori validation is a valuable tool for improving the quality of CFD predictions. However, even the most careful and complete validation process cannot provide full assurance that acceptable results will be realized when CFD analysis is conducted for a new problem in the engineering design process. Chief among the reasons is that user-controlled factors dominate solution quality, and user practices inevitably differ in small but important details between the validated cases and the real cases analyzed in engineering design activities. This sensitivity to small details of usage practices suggests that the CFD analysis process is not robust and that new processes are needed.

Potential New Approach
We must establish methods for verifying solution quality that take into account the specific details of the usage practices used in producing specific sets of data. This means that a capable quality assurance process must include a posteriori methods of evaluating data quality: inspection methods. At least two different approaches to solution assessment can be identified, and both can be useful.
• Physics-Based Assessment: Test the full solution, or regions of the solution, for conformance to known physical laws. For example: discontinuities across shock waves should be in accord with the Rankine-Hugoniot laws, and the net circulation along a steady-state vortex should be constant. Local solution quality can be assessed based on the degree to which the solution exhibits this expected (required) behavior. An example of a physics-based local assessment is seen in Figure 1, below. This figure illustrates a prototype tool which identifies shock wave locations and assesses the local modeling accuracy based on conformance to the Rankine-Hugoniot shock wave jump conditions.
• Rule-Based Assessment: Test localized subdomains of the solution for their conformance to sound modeling practices. For example, test the actual grid density against the density known to be required for high accuracy, given the flow interactions seen to be present in the selected subdomain. Various metrics that correlate with accuracy are used, e.g., grid spacing normal to the wall.
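Both kinds of local check can be sketched in a few lines. The Rankine-Hugoniot density ratio is standard gas dynamics; everything else (the "computed" jump value, the wall friction velocity, the first-cell heights, and the y+ limit) is an invented illustration, not output from the prototype tool the text describes:

```python
# Sketch: two a posteriori solution-quality checks in the spirit of the
# physics-based and rule-based assessments described above.

GAMMA = 1.4  # ratio of specific heats for air

def rh_density_ratio(mach1):
    """Exact Rankine-Hugoniot density ratio rho2/rho1 for a normal shock."""
    m2 = mach1 * mach1
    return ((GAMMA + 1.0) * m2) / ((GAMMA - 1.0) * m2 + 2.0)

def shock_conformance(mach1, rho1, rho2_computed):
    """Relative deviation of a computed density jump from the R-H value."""
    rho2_exact = rho1 * rh_density_ratio(mach1)
    return abs(rho2_computed - rho2_exact) / rho2_exact

def flag_wall_cells(first_cell_heights, u_tau=0.5, nu=1.5e-5, y_plus_max=1.0):
    """Rule-based check: flag cells whose y+ = y * u_tau / nu is too large."""
    return [(i, y * u_tau / nu)
            for i, y in enumerate(first_cell_heights)
            if y * u_tau / nu > y_plus_max]

# Physics-based: a smeared shock under-predicts the density jump at Mach 2.
err = shock_conformance(mach1=2.0, rho1=1.0, rho2_computed=2.55)
print(f"shock R-H conformance error: {100 * err:.1f}%")

# Rule-based: hypothetical first-cell heights [m] along a wall boundary.
for cell, yp in flag_wall_cells([2.0e-5, 3.0e-5, 8.0e-5, 2.5e-5]):
    print(f"wall cell {cell}: y+ = {yp:.2f} exceeds limit")
```

Each check yields a localized, quantitative uncertainty indicator, which is exactly the raw material the roll-up methods discussed next would consume.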
Figure 1. Physics-Based Assessment of Shock Wave Modeling Accuracy
Both of these approaches (physics-based and rule-based) begin with an assessment of solution quality or uncertainty in localized regions of the solution domain. These localized uncertainty estimates must then be combined to build uncertainty estimates for global parameters of engineering interest, e.g., lift, drag, pitching moment. Several methods can be conceived to do this. Perhaps the most rigorous approach, and also the most computationally expensive, would be to use adjoint methods to combine a set of local uncertainties into global uncertainties. Simpler and more approximate methods could be conceived based on Green's functions or influence matrices. One approach which has received some attention is the calculus of belief functions [8]. This provides a mathematically formal process for combining various component uncertainty assessments into a composite assessment of a global measure of uncertainty. An initial prototype has been developed using a rule-based approach and belief function calculus to establish confidence boundaries
on CFD predictions of store separation trajectories.6 The prototype approach for this application used a set of metrics which correlated with various uncertainties associated with the store separation analysis: for example, uncertainties in each of the three components of net aerodynamic force and net aerodynamic moment on the store. Evaluation of each of these metrics at various points in a solution leads to estimates of the components of uncertainty affecting the motion of the store. These component uncertainties are rolled up to overall uncertainties in the forces and moments on the store using belief function calculus, and a Monte Carlo method is then used to build an envelope of the confidence limits of the store separation trajectory [4]. Another interesting approach to uncertainty estimation was developed by Childs [3]. His approach enables estimates of local truncation error, that is, one type of error that results from using a too-coarse computational grid. Users would prefer to use the coarsest grid consistent with the required levels of accuracy, because solutions will run faster on a coarse grid. The penalty for using too coarse a grid is unacceptable uncertainty in the solution, and in the absence of suitable guidance most users will err on the side of caution and accept the resulting penalties in CFD throughput. Childs used neural networks to identify the truncation error in a CFD solution. The network was trained using CFD data on a high-density, high-quality grid as truth data, and then it was used to assess local errors in other solutions.

Key Elements of New Approach
Three examples have been presented to show the reader that an a posteriori uncertainty estimation approach can be developed for CFD analyses. Mimicking the DoD spiral development process, any such assessment approach would probably be brought into service incrementally. Initial capabilities would be relatively simple, and the process would gain sophistication with experience. The rule-based approach is probably the most likely starting point. The knowledge required to implement this approach would be captured in three stages of research: (1) identify metrics relating to local or global solution quality which can be evaluated by a posteriori evaluation of a CFD dataset; (2) conduct validation studies to quantify the relationship between each metric and the related sources of uncertainty; and (3) build a prototype to combine the discrete uncertainty measures into global uncertainty estimates relevant to the aerospace design process. The second step suggests that an entirely different approach to validation is needed. Current validation processes, as discussed above, generally are based on finding the modeling approach that yields the best accuracy on a (hopefully) representative problem. Then the process relies on the judgment of the user in replicating the validated modeling approach in a new analysis problem. In the proposed new approach, validation instead is focused on quantifying the accuracy obtained in a range of modeling methodologies, with

⁶ A store is any object (weapon, fuel tank, sensor, or other object) that is intentionally separated from an airplane in flight. A key issue is to ensure the store separates cleanly from the aircraft at all flight conditions where it might be released. In particular, it is important to ensure that the store does not follow a trajectory which might cause it to re-contact the airplane, which could result in loss of the airplane and crew. Therefore, it is vitally important to understand the degree to which confidence can be placed in CFD predictions of a safe store separation trajectory.
specific attention to the quantitative uncertainty resulting from non-optimal modeling. One key concept is to build this quantitative understanding of uncertainty prior to the engineering application of the analysis methods. A user cannot be conducting grid-density studies during the live application of CFD in the design process; the required knowledge of uncertainties must be gained before the engineering design analyses are launched. The uncertainty rule base should, of course, cover a number of metrics that relate to solution quality, including (but not limited to) grid density, grid imperfections, turbulence model usage, and spatial and temporal convergence. The uncertainty estimation process must lead to estimates of uncertainty that are quantitative, well founded in validation data, mathematically and statistically valid, and predictive in nature.
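As a sketch of how the belief-function roll-up described earlier might work, the following applies Dempster's rule of combination to two hypothetical evidence sources about solution quality. The frame of discernment {ok, suspect}, the metric names, and the mass values are illustrative assumptions, not the prototype's actual rule base:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts mapping frozensets of
    hypotheses to mass) using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict; sources cannot be combined")
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical evidence about solution quality from two independent metrics.
OK, SUSPECT = frozenset({"ok"}), frozenset({"suspect"})
THETA = OK | SUSPECT  # the full frame (ignorance)

m_grid = {OK: 0.6, THETA: 0.4}                 # grid-quality metric
m_conv = {OK: 0.7, SUSPECT: 0.1, THETA: 0.2}   # convergence metric

belief = dempster_combine(m_grid, m_conv)
```

Combining additional metrics is a repeated application of the same rule; a Monte Carlo step over the combined masses could then build the kind of trajectory confidence envelope described in [4].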
3.4 Statistical Process Control
With such a process for a posteriori assessment of simulation data quality in place, many paths open up for adding value to simulation data. One of the most immediate benefits would be to enable statistical process control (SPC) as a means of assuring data quality in high-rate computational simulation processes. SPC is a method of monitoring, controlling, and, ideally, improving a process through statistical analysis. Its four basic steps are measuring the process, eliminating variances in the process to make it consistent, monitoring the process, and improving the process toward its best target value. This would be SPC in its familiar form, as taught for example in industrial engineering curricula, except that SPC would be applied to datasets coming out of a computational engine rather than widgets coming off an assembly line. SPC, enabled by a posteriori quality assessment tools, is an effective and proven tool for assuring quality in high-rate production processes, and no quantum leap is required to apply these concepts to the production of computational data instead of mechanical components. An automated quality assurance process such as this is essential to achieving the high-rate computational simulations required for engineering product development, since human inspection of the computational data is impractical (as discussed above). For an introduction to SPC see [10].
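The SPC steps above can be sketched for simulation output with a minimal Shewhart individuals chart. The per-case drag-coefficient stream and the 20-point baseline below are invented for illustration; the a posteriori quality metric actually charted would come from the assessment process described earlier:

```python
def control_limits(samples, n_baseline=20):
    """Estimate individuals-chart limits from a baseline run, using the
    moving-range estimate of sigma (MRbar / 1.128, the standard constant)."""
    base = samples[:n_baseline]
    mean = sum(base) / len(base)
    mrbar = sum(abs(b - a) for a, b in zip(base, base[1:])) / (len(base) - 1)
    sigma = mrbar / 1.128
    return mean - 3 * sigma, mean + 3 * sigma

def out_of_control(samples, lcl, ucl):
    """Indices of cases breaching the control limits (candidates for review)."""
    return [i for i, x in enumerate(samples) if not (lcl <= x <= ucl)]

# Hypothetical per-case drag coefficients from a high-rate CFD pipeline.
history = [0.0312, 0.0309, 0.0311, 0.0310, 0.0313, 0.0308, 0.0312, 0.0311,
           0.0310, 0.0309, 0.0312, 0.0310, 0.0311, 0.0313, 0.0309, 0.0310,
           0.0311, 0.0312, 0.0310, 0.0311,   # 20-point baseline
           0.0310, 0.0340]                   # a suspect new case

lcl, ucl = control_limits(history)
flagged = out_of_control(history, lcl, ucl)
```

Only the flagged cases would be routed to a human analyst, which is what makes the approach compatible with simulation rates far beyond manual inspection.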
3.5 Additional Added Value
With valid quality assessment techniques in place, statistical process control is only the first step. Once these assessment methods are in routine use, we will naturally begin building a database of historical data on quality assessments of data emerging from the high-rate analysis process. Analysis of this database will identify key factors that correlate with uncertainty in the computational data. These key factors potentially would include:

• Geometric features that lead to high uncertainty.
• Flowfield conditions or combinations of flowfield features that lead to high uncertainty.
• Modeling selections or modeling practices that lead to high uncertainty.
• Specific codes or code options that lead to high uncertainty.

This information, of course, can be used immediately to guide tool and process improvement studies, and to justify and guide basic research initiatives. Analysis of the historical quality database can also identify the users who are having difficulty achieving high data quality in specific situations; this information can be used to improve training materials and user support capabilities in the organization. The database also will be invaluable in identifying the modeling practices that consistently provide superior data quality.

With a process in place for quality assessment, capable of supporting high-rate computational simulations, the value of CFD data (or other domains of computational simulation) inevitably will increase greatly. This process of collecting and analyzing data will lead to the capability to predict uncertainty in future analyses, based on the characteristics of those analyses. This in turn will multiply the value of the computational data, since it will help ensure that computational simulations are used wherever they are the most effective tool for providing the desired data.

A quantitative understanding of uncertainty in CFD data (and/or other domains of computational simulation data) is needed to enable design teams to rely on the predictions. Without this understanding, engineering programs must err on the side of conservatism by acquiring additional data from independent sources to confirm predictions, by adding safety margins, and by other means. These defensive requirements, undertaken because the uncertainties in the data are not understood, add cost and time to developing new products.
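Mining such a historical database for key factors could start as simply as grouping assessments by a candidate factor and ranking by mean uncertainty. The sketch below assumes hypothetical record fields ("feature", "uncertainty"); no database schema is specified in the text:

```python
from collections import defaultdict

# Hypothetical records from a historical quality-assessment database.
records = [
    {"feature": "wing-body junction", "uncertainty": 0.08},
    {"feature": "wing-body junction", "uncertainty": 0.06},
    {"feature": "clean wing",         "uncertainty": 0.01},
    {"feature": "clean wing",         "uncertainty": 0.02},
    {"feature": "cavity/bay",         "uncertainty": 0.12},
]

def mean_uncertainty_by(records, key):
    """Average the assessed uncertainty for each value of a record field."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["uncertainty"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Rank candidate factors by mean uncertainty, worst first.
ranked = sorted(mean_uncertainty_by(records, "feature").items(),
                key=lambda kv: kv[1], reverse=True)
```

The same grouping applied to modeling practices, codes, or users yields the other correlations listed above.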
Quantitative understanding of the sources of uncertainty also will be a powerful tool to focus our efforts to improve the CFD process. It will make it easier to justify these improvement efforts, by allowing the CFD team to state clearly the benefits in risk, cost, and cycle time that will be obtained through specific improvements. Finally, quality assessment will lead to uncertainty prognostics. With this ability to forecast data uncertainty in various applications, we can optimize the design data flow to achieve the best possible cost or cycle time at a predetermined level of risk that the aerospace product will meet its design objectives. And, we can use this process to understand the tradeoff between risk, cost, and cycle time in planning future product development programs. These benefits will lead to another round of dramatic growth in the value of computational simulation in the aerospace product development process.
4 Summary
Aerospace industries need improved design processes for cost reduction, quality improvement, and breakthrough new vehicle concepts. In our view, computational simulations (CFD and, as another example, computational electromagnetics, among others) will play key roles in these new design processes, but there are significant challenges that must be overcome: challenges related to accuracy of simulation, throughput and cost, and confidence in simulation results. Specific areas where work is needed include:
• Faster CFD analyses with greater accuracy,
• Automated gridding,
• Improved handling of geometry inputs, and
• Better quantitative understanding of uncertainty in CFD data.

Our stretch goal for CFD, which would indicate that the discipline is beginning to have the capabilities discussed in this paper, is the ability to generate 30,000 test cases per year at an average cost of $150 per case, with a computable error of less than 1% for all force and moment predictions, for a full airplane geometry including all control surfaces and protuberances.
Bibliography

[1] AIAA Committee on Standards for CFD, Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, AIAA G-077-1998.

[2] www.transcendata.com/products-cadfix.htm

[3] R. E. Childs, W. E. Faller, G. A. Cedar, J. P. Slotnick, and C.-J. Woan, Error and uncertainty estimation in CFD, NEAR TR 574, Nielsen Engineering and Research, 7 June 2002.

[4] A. W. Gary and L. P. Wesley, Uncertainty management for store separation using the belief function calculus, 9th ASCE EMD/SEI/GI/AD Joint Specialty Conference on Probabilistic Mechanics and Structural Reliability (PMC2004), Albuquerque, NM, July 2004.

[5] R. T. Farouki, Closing the gap between CAD models and downstream applications, SIAM News, 32(5), June 1999.

[6] T. J. Peters, N. F. Stewart, D. R. Ferguson, and P. S. Fussell, Algorithmic tolerances and semantics in data exchange, Proceedings of the ACM Symposium on Computational Geometry, Nice, France, June 4-6, 1997, pp. 403-405.

[7] D. R. Ferguson, L. L. Miriam, and L. H. Seitelman, PDES Inc. Geometric Accuracy Team, Interim Report, July 24, 1996.

[8] W. L. Oberkampf, T. G. Trucano, and C. Hirsch, Verification, Validation, and Predictive Capability in Computational Engineering and Physics, presented at the Foundations for Verification and Validation in the 21st Century Workshop, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, October 22-23, 2002.

[9] S. Rogers, M. Aftosmis, S. Pandya, N. Chaderjian (NASA Ames Research Center), E. Tejnil, and J. Ahmad (Eloret Institute), Automated CFD parameter studies on distributed parallel computers, 16th AIAA Computational Fluid Dynamics Conference, AIAA Paper 2003-4229, June 2003.

[10] http://reliability.sandia.gov/index.html
LEAPS and Product Modeling

R. Ames*

Abstract

The requirements for product model integration via virtual prototyping suggest that the underlying mathematical framework must be robust, extensive, and accurate. The challenges of integrating modeling solutions with simulation federations, while addressing Validation, Verification, and Analysis issues of large acquisition programs, will be discussed.
1 Introduction
Ships are arguably the most complex of inventions. Dan Billingsley of the Naval Sea Systems Command writes: "The typical ship is comprised of hundreds of times as many parts as the typical aircraft, thousands of times as many parts as the typical power plant, ten thousands of times as many parts as the typical vehicle. Indeed, our more complex ships fly aircraft off the roof, have vehicles running around inside, and have a power plant in the basement, all incorporated in a floating city capable of moving at highway speeds around the oceans of the world." The process of ship development is likewise complex, particularly for naval warships. It involves thousands of individuals in hundreds of corporations, governmental bodies, and regulatory bodies operating throughout the world. Each ship is in some ways unique. A ship may have a conception-to-retirement lifespan of 50 years, involving both those not yet born when it was launched and those who will retire before it does. Certainly today's ship will outlive several generations of the information technology applied to its development, construction, and service-life support [3]. The estimated design cost of the US Navy's new destroyer is $3-5B. Detail design will add another $500M. The production costs for the first ship are estimated at $2.7B. These costs rule out building operational prototypes as a means of test, evaluation, and selection. As a result, ship programs are moving toward

* Naval Surface Warfare Center, Carderock, MD
virtual prototyping and operational simulations within an integrated architecture or framework; that is, toward the development of smart product models. Further, with the new emphasis on the war on terror, system acquisition decisions are being framed in light of a larger context. Specifically, DOD Instruction 5000.2R is being reformulated in favor of a broader Joint Capabilities Integration and Development System (JCIDS) 3170.01, which focuses more on system-of-systems requirements and upfront systems engineering [8]. Just as in the development of mechanical products, software development must follow a comprehensive process involving requirements, design, development, testing, and deployment. If a naval ship acquisition program must ensure that operational requirements can be met by a particular design before metal is cut, then one can conclude that a virtual prototype of this product must pass the test of validation and verification. An essential component of any virtual prototyping simulation will be a credible and comprehensive smart product model that encompasses all system dependencies and system performance. Since virtual prototyping simulations operate on smart product models, the requirements on smart product models are extensive and complex. This paper describes software development work being done in support of emerging system-of-systems and smart product model requirements. The ability to automatically conduct analyses in a multitude of disciplines is a cornerstone for validation and verification of smart product models. Consequently, the emphasis here is on developing techniques that support multidisciplinary analysis. Three related software development efforts are described. The first, the Geometry and Engineering Mathematics Library (GEML), provides a mathematical framework for geometry, grid data structures, analysis characterization, and behavior modeling.
The second, Geometry Object Structure (GOBS), is a framework that provides a unique modeling capability for geometry, as well as more complete functionality for discretization, evaluation, modeling of complex solids of different material elements, logical view construction, and bi-directional communication of topology between geometry systems, modelers, and analysts. The third, Leading Edge Architecture for Prototyping Systems (LEAPS), provides the product information representation methodology required to facilitate the work of an integrated product team. Finally, underlying this development is the idea that all shared or integrated product information must follow an ontology of machine-processable data.
2 Approach
The approach assumes that the requirements for simulation include an architecture that is integrated, extensible, supports life-cycle product knowledge, and is physically valid. Unlike arcade games, whose simulations may lack a certain reality when it comes to performance, simulation-based design systems must ensure that the physics of the product is sufficiently accurate to support acquisition decisions. This paper will describe an approach to smart product modeling using new representational methods. It starts with the assumption that all disciplines need smart product model data; that the need for a common representation exists; and
that the smart product model must contain sufficient information to convey design intent and detail for all disciplines. The fundamental premise is that smart product model data is composed of geometric, topological, performance, and process data, and that product and process integration occurs through a common representation. In essence, a common smart product model supports the application of information technology to all aspects of product development, manufacturing, and operation. To understand how the smart product model concept, with its emphasis on integrated design and analysis domains, changes the traditional approach to design software, consider Figure 1, which shows a notional architecture supporting a smart product model, with relationships between functional domains (including design and analysis) as well as utilities and tools common to all users. Traditional tool development has focused on the boxes, not the arrows, with the consequence that the requirements for the individual boxes have dominated. However, the requirements implied by the arrows change the picture considerably. For example, when requirements for design and analysis are added, the architecture becomes like Figure 2, a much more complex concept.
Figure 1. Notional Smart Product Model Architecture

A cursory review of the requirements imposed on a smart product model shows the complex and extensive need for geometry and topology, domain decompositions of product space, performance and property data, materials, and many other data. As product life cycles evolve from concept to manufacturing to maintenance, it is clear that for a smart product model to characterize reality it must be flexible, extensible, and incorporate a superset of all requirements imposed on it. If the domains we call design and analysis use applications such as CAD, FEA, and CFD to fulfill these functions, then the smart product model must be capable of representing
and communicating the required data to these applications. This means, for example, that if a particular application needs discretized data of a certain form, then the smart product model must be capable of generating the data in the required form.
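To make the representing-and-communicating requirement concrete, here is a minimal sketch of a master parametric representation that generates a discretized view on demand. The class and method names are hypothetical illustrations, not the LEAPS or GOBS API:

```python
import math

class ParametricSurface:
    """Master representation: a mapping (u, v) -> (x, y, z)."""

    def __init__(self, f):
        self.f = f

    def tessellate(self, nu, nv):
        """Discretized view: a structured grid of surface points, the kind
        of form a mesh-based analysis application might request."""
        return [[self.f(i / (nu - 1), j / (nv - 1)) for j in range(nv)]
                for i in range(nu)]

# A cylinder-like surface as stand-in geometry for the smart product model.
surf = ParametricSurface(lambda u, v: (math.cos(2 * math.pi * u),
                                       math.sin(2 * math.pi * u),
                                       v))
grid = surf.tessellate(9, 3)  # e.g., a coarse grid for a structural view
```

A CFD view might instead request a finer tessellation or a different decomposition of the same master geometry; the point is that every view derives from one representation rather than from duplicated models.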
Figure 2. Notional Architecture Data Requirements

It is evident that one of the major roadblocks to integration is confusion over the form of representation of geometry. Geometry is the common element in the design and analysis of complex systems: the form of the representation determines the level of effort involved in generating analysis data, provides the means to communicate the design for product review and mockup, and describes the product for manufacturing. But often the form will change as particular applications change. Further, different organizations and disciplines may require different forms or views of the geometric representation. Thus there are conflicting requirements on the geometric form. On the one hand, there is a desire for a single, common form to aid in overall product data integration; on the other hand, there are requirements, both technical and historical, for different forms supporting different disciplines and organizations. It is not surprising, then, that when complex systems are designed, the integration of performance data with geometry becomes critical and problematic. In this paper, data is viewed from two perspectives, Topological Views and Common Views, emphasizing the way designers and engineers view design, performance, process, and product data (see Section 4). Smart product models start with the methods and algorithms used to characterize shape, behavior, and content. These methods are fundamental to the modeling process. Likewise, the use of these mathematical methods and the associations made among them, topological and common views, provide the building blocks for system design. Through proper model topology we provide a framework for multidisciplinary integration necessary for a fully capable design environment. This framework is implemented using three component technologies: GEML, GOBS, and LEAPS. GEML provides the mathematical foundation necessary for characterization of geometry, analysis, and their coupling. GOBS takes this mathematical foundation and builds unique topology models that allow multiple disciplines to access the design without duplication and redundancy. Finally, LEAPS combines the modeling of GOBS with an application framework, providing an environment for designing and assessing complex systems. The following three sections describe these component technologies and the relationships among them.
3 Mathematical Methods: GEML
The Geometry and Engineering Mathematics Library, GEML, provides a mathematical basis for modeling geometry. GEML is a C++ implementation of the DT_NURBS [9] subroutine library. The objective of GEML is to provide a suite of mathematical objects for design and analysis. This object library includes functions to: 1) import, export, modify, and evaluate geometry data; 2) develop, modify, adapt, and grid or mesh data with associated spline functions; and 3) store resultant geometry, grid, and analysis data in a common framework with dependencies maintained. Using these objects, a GEML-based framework can contain and control all information relating to geometry, grids, and analysis. Since analysis is dependent on grids, and grids on geometry, the coupling of these elements in an object-oriented data structure is logical. GEML provides for flexible representation and object dependency. It includes such features as geometry varying with time; mapped geometry (composition of functions); grids defined in the parametric domain; and n-dimensional functions for analysis and behavior modeling. In order to model geometry, grids, and analysis data it is necessary to have a mathematical basis which facilitates representation of all three and allows for both implicit and explicit coupling of the data. In GEML, splines and composition of functions provide the essential building blocks for geometry and analysis description and coupling. The splines used are unrestricted in the number of independent and dependent variables allowed; this permits the modeling and interpolation of any number of geometric and analysis variables, such as Cartesian coordinates, velocities, pressure, and temperature, with any number of independent parameters, such as time, frequency, and iteration. For instance, traditional geometric entities like parametric spline surfaces (x, y, z) = f(u, v) are extended to n-dimensional mappings as, for example, (x, y, z, Vx, Vy, Vz, Cp, temp) = g(u, v, time).
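The n-dimensional mapping idea can be illustrated with a small sketch, assuming hypothetical control data. For simplicity it uses a Bezier curve evaluated by the de Casteljau algorithm rather than GEML's spline machinery; each "point" carries x, y, z, and a pressure coefficient, and a composition of functions reparameterizes the curve without touching the geometry:

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at parameter t. Control points may carry any
    number of dependent variables (here x, y, z, Cp), illustrating splines
    whose range dimension is unrestricted."""
    pts = [list(p) for p in ctrl]
    while len(pts) > 1:
        # Repeated linear interpolation between successive points.
        pts = [[(1 - t) * a + t * b for a, b in zip(p, q)]
               for p, q in zip(pts, pts[1:])]
    return tuple(pts[0])

# Hypothetical control data: position plus a pressure coefficient.
ctrl = [(0.0, 0.0, 0.0, -0.2),
        (1.0, 2.0, 0.0, -0.8),
        (2.0, 0.0, 1.0, -0.1)]

curve = lambda u: de_casteljau(ctrl, u)

# Composition of functions: a new parameter s mapped into u-space.
g = lambda s: s * s
reparam = lambda s: curve(g(s))   # same geometry, new parameterization
```

Interpolating geometric and analysis variables through one evaluator is what lets GEML keep grids and analysis data implicitly coupled to the geometry they came from.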
The removal of dimensional constraints affords tremendous flexibility in the modeling of objects. Composition of functions permits straightforward and timely reparameterization of surface elements along with attached behavior models. We illustrate the concepts of n-dimensional splines and composition of functions as follows. Begin with a surface S parameterized as F(u, v, time), with a single time slice shown in Figure 3. The function F is an example of an n-dimensional spline. Now suppose there is a subsurface S* that is of particular interest. To obtain a parameterization of S*, proceed by finding a function g mapping a second
domain D → B, such that f is continuous, 1-1, has a continuous inverse, and maps onto all of B. Such a function is known as a homeomorphism. The two objects in Figure 1 are also homeomorphic. The important summarizing observation is that neither combinatorial topology nor point-set topology provides sufficient capability for the more subtle topological characteristics that are crucial for today's increasingly sophisticated geometric models. The new frontiers in topological modeling are presented in the next subsection.

¹ Combinatorial topology is often referred to as symbolic topology or adjacency topology in the CAGD literature.
Figure 1. Same combination represents different objects.

2.2 Topological Equivalence
While the relations of combinatorial topology and the equivalence relations defined by homeomorphisms are quite powerful and useful, they fail to distinguish how an object is embedded in three-dimensional space. To expand upon the discussion of Subsection 2.1, consider a very simple closed curve, the circle. It is the special planar case of an unknot, which is pictured in Figure 2(a). The important intuitive generalization from the planar circle to the unknot is that the unknot is any closed curve in ℝ³ formed from certain permissible deformations of the circle, where the types of permissible deformations will be formally described in the following definition of ambient isotopy. A more complex knot, the trefoil, is depicted in Figure 2(b). Its essential distinguishing characteristic is the presence of 3 crossings, as shown by the hidden-surface rendering in Figure 2(b). The knot with four crossings, shown in Figure 2(c), is called the figure-8 knot. It is easy to see that all these knots are homeomorphic to each other, yet it is often important to distinguish among these curves². Indeed, this can be accomplished with additional topological techniques. Such techniques are based on a stronger notion of topological equivalence than homeomorphism, known as an ambient isotopy. The formal definition of an ambient isotopy follows.

Definition. If X and Y are subsets of ℝ³, then X and Y are ambient isotopic if there exists a continuous function H : ℝ³ × [0,1] → ℝ³ such that for each t ∈ [0,1], H(·, t) is a homeomorphism, and
• H(·, 0) is the identity, and
• H(X, 1) = Y.

Consider the unknot, the trefoil knot, and the figure-8 knot in Figures 2(a), 2(b), and 2(c), respectively. None of these knots can be continuously deformed into the others without breaking a strand. Hence, none of these three knots are ambient isotopic, yet they are all homeomorphic. It is worth noting that the existence of an ambient isotopy between two sets can be very hard to detect. Consider the tangled mess in Figure 3, and note that it is ambient isotopic to the circle, meaning it can be 'untangled' without breaking a strand to form the circle³.

² For a listing of standard knot types, the reader is referred to [1, 32].

Figure 2. (a) Unknot. (b) Trefoil knot. (c) Figure-8 knot.

Figure 3. This tangled mess is ambient isotopic to the unknot.

Recently, considering the generation of self-intersections during perturbations of geometric objects has been fundamental for effective algorithms that preserve isotopy class during these perturbations [3, 4, 5]. This recent work on computational ambient isotopy builds upon the considerable computational geometry literature on detection of self-intersections in curves and surfaces [19, 20, 44].
3 Frontiers in Computational Topology for CAMD
This section describes the integration of theory for useful molecular modeling. It is composed of four subsections. Subsection 3.1 describes how the abstract mathematics of knots has been useful in forming conceptual models of molecules. Subsection 3.2 then describes an example of how some of this abstract theory could serve as the theoretical basis for molecular simulation algorithms. Subsection 3.3 points out the difficulties in implementing efficient algorithms for even some relatively simple problems in knot theory, leading to reasonable pessimism about the effectiveness of knot theory for practical implementations. Subsection 3.4 concludes the section with an alternative approach that overcomes this pessimism: the focus shifts to approximations that create practical models, such that these approximations maintain topological equivalence under ambient isotopy.

³ An animation of the knot in Figure 3 can be seen using Robert Scharein's KnotPlot tool [36]. Knot images in Figures 2, 3, and 5 are partially created from KnotPlot.
The authors' specific mathematical contributions are then articulated, suggesting how these approximations can be useful in molecular simulations.

3.1 The Conceptual Role for Knots
For CAGD and computer graphics, static models are often sufficient to represent complex shapes for product design. However, form and function are integrally related in the life sciences, and form is rarely static. Biochemical processes are dominated by dynamic changes which modify function. Existing geometric methods for simulating these dynamics are computationally intensive. While geometry is the correct mathematics for capturing static form and rigid-motion changes, topology is more focused upon how an object deforms in time. Hence, the faithful integration of computational topology and geometry is emerging to model dynamic changes in molecular function within the domain of CAMD, and such simulations have the opportunity to leverage and improve more than two decades of experience with geometric models within CAGD. The same formalisms about topology discussed in Section 2 may be equally well suited for advancing CAMD. Note that contemporary use of computational topology for CAMD by the bio-molecular community is mostly combinatorial, particularly when simulating molecules by traditional 'ball-and-stick' models. Such combinatorial models are composed topologically of vertices and edges, where the vertices correspond to individual atoms and the edges represent the bonds between atoms⁴. Current 'textbook' methods classify protein structure as a hierarchy of four subset structures (see [8, 34]), where the secondary structure contains critical combinatorial topology information. A model of the protein carbonic anhydrase I is shown in Figure 4(a), and the corresponding secondary-structure topology diagram of its peptide chain is shown in Figure 4(b). The N1 and C2 represent the ends of a fragment of the chain. The circles represent alpha and 3₁₀ helices, and the triangles represent beta strands.
(The interested reader can obtain more information on these diagrams from [24], and can easily produce such diagrams on the internet [40] directly from protein structural information contained in the Protein Data Bank (PDB) [31]. The PDB ID for this particular protein is 1hcb. For the reader who wishes to see a more detailed color image of the carbonic anhydrase I protein in Figure 4(a), an online image can be viewed from the PDB.) In addition to the combinatorial approach to macro-molecular modeling, the mathematics and computer science communities have also begun to formulate a topological perspective of bio-molecules. For example, Edelsbrunner and his associates have created new computational topology algorithms to model macro-molecules [9, 10, 21]. The role of knots in understanding molecular dynamics in DNA was discovered nearly two decades ago by Wasserman et al. [41, 42] and Sumners [38]. These researchers applied topology to predict knots in DNA molecules, and confirmed their results with experimentation. They also showed that specific enzymes can

⁴ Geometrically, these are well known as simplicial 1-complexes [6].
Figure 4. (a) Carbonic anhydrase I (1hcb). (b) Topology diagram.

cause a change of knot type, and advocated the use of topology to characterize the role of enzymatic action on DNA structure, since no matter how much a DNA molecule is twisted or distorted, without 'breakage' caused by reaction with specific enzymes, the DNA topology remains invariant. Although DNA is commonly thought of as a long, thin double helix, in reality DNA mostly exists in a supercoiled form, meaning it is twisted and tangled in order to be in a state of minimum energy [37]. Furthermore, two or more strands of supercoiled DNA can also be linked together to form what biologists call catenanes. Note that links are simply a generalization of knots in the mathematical knot theory literature [1, 32]. Such linking and supercoiling of DNA is studied in the life sciences, and presents challenges to the modeling of DNA structure and function [35]. It is interesting to note that knots have been found to occur naturally in the primary structures of proteins. Taylor [39] describes an algorithm which he used to scan 3,440 protein structures in the Protein Data Bank, finding that eight of them contained knots. The protein in Figure 4 is one of these, and contains the trefoil knot. Since knot theory is emerging as useful mathematics for research in the disciplines of molecular modeling and geometric design, some related questions about the efficiency of knot recognition algorithms are presented in the next section.

3.2 Algorithmic Efficiencies for Knots
A knot can be visualized as a closed loop of string. The way in which the strands of a knot are entwined is far more important than the size or shape of the knot, and helps distinguish different types of knots. Examples of knots are shown in previous sections of this paper. The important point to note is that topological equivalence of knots is determined by the notion of ambient isotopy. The interested reader is referred to the excellent text by Adams [1] for the introductory details of knot theory that are not included in this article.
A fundamental question in knot theory is how to determine when two knots are equivalent. An active area of research in knot theory today is finding mathematical methods that determine topological invariants of knots. Such invariants are useful for classifying and distinguishing different types of knots, and are typically given by a polynomial in one or two variables. For example, the well-known Jones polynomial [18] of the simple trefoil knot depicted in Figure 2(b) is given by V(t) = −t⁻⁴ + t⁻³ + t⁻¹. However, knot invariants are typically very hard to compute, and may become intractable as the knot complexity increases. Computation of the Jones polynomial is known to be #P-hard [17]. In addition, even the problem of determining whether an object is knotted is only known to lie in NP [14], which presents challenges for developing practical applications. Hence, these authors have shifted their attention to determining when an object and its approximation are ambient isotopic, even while the knot type of either may not be known. Both effective theory and algorithms for determination of knot equivalence have been developed [3, 4, 5, 33] and should gain increasing prominence in CAGD and CAMD.

3.3 Knot representations
Computations for knots must be performed on some representation of the knot. Typically, knots are represented combinatorially by identifying crossings with their adjacent crossings on an oriented knot projection. An example of an oriented projection of the trefoil knot is shown in Figure 5. Note that each crossing is labeled numerically, and the sign on the crossing indicates whether it is an over crossing or under crossing.
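As a small illustration of how such signed-crossing data can be used computationally, the sketch below computes the writhe of an oriented diagram, the sum of the crossing signs. The encoding of the crossing list is an assumption for illustration, not the labeling scheme of Figure 5.

```python
# Sketch: computing the writhe of an oriented knot diagram from its
# signed crossing data (+1 for a positive crossing, -1 for a negative
# one). The writhe is not a knot invariant, but it is a cheap diagram
# statistic derived directly from the combinatorial representation
# described above. The crossing list is an assumed encoding.

def writhe(crossings):
    """crossings: list of (label, sign) pairs with sign in {+1, -1}."""
    for _, sign in crossings:
        if sign not in (+1, -1):
            raise ValueError("crossing sign must be +1 or -1")
    return sum(sign for _, sign in crossings)

# A standard minimal diagram of the left-handed trefoil has three
# negative crossings.
trefoil = [(1, -1), (2, -1), (3, -1)]
print(writhe(trefoil))  # -3
```

Reversing the orientation of every crossing negates the writhe, so the same helper distinguishes the two mirror trefoil diagrams.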
Figure 5. An oriented projection of the trefoil knot with labeled crossings.

These knot representations are important for algorithms that distinguish between geometric objects in differing isotopy equivalence classes. For example, the intersection between two surfaces could result in a knotted intersection curve, yet current methods for surface intersections do not specify the isotopy class of the intersection set. This is shown in Figure 6, where the intersection of the two surfaces in Figure 6(a) would result in the unknot, and the intersection of the two surfaces in Figure 6(b) would result in the trefoil knot. In these surface intersection examples it is important to produce a knot, but in other instances a knot may be an unwanted artifact of a poor approximation. The next subsection discusses how an approximation algorithm can change the isotopy class of a geometric object, a predicament that has only recently been considered in the literature [2, 3, 5, 22, 33].
Figure 6. Surface-to-surface intersection resulting in (a) the unknot, (b) the trefoil knot.

3.4 Algorithms for isotopically equivalent approximations
Approximations are central to geometric modeling and molecular design for computational efficiency, and problems can arise in preserving isotopic equivalence during an approximation [2, 22, 33]. The upshot of the preceding material is that fundamental knot computations are not likely to be efficient in practice. In particular, explicitly computing the isotopy class of a knot and of its approximant in order to determine isotopy equivalence is unlikely to be tractable. Hence, a more subtle approach is required, one that incorporates specific information about the type of approximation process being undertaken. The goal then becomes to demonstrate that specific approximation processes produce isotopic approximations. Note that this is similar, in spirit, to a standard approach in topology: many theorems identify which topological characteristics (e.g., compactness, connectedness, closure, Euler number) are preserved under specific types of functions (e.g., continuous, homotopic, homeomorphic, diffeomorphic). To motivate that approach, we first show the difficulties that can arise if approximation is done without incorporating specific topological constraints, and then summarize recent work by these authors, as well as others, indicating progress on this problem.

Consider the smooth cubic-spline curve in three dimensions shown in Figure 7(a). Spline curves are often approximated by piecewise linear or piecewise polynomial curves for display and analysis. A good piecewise linear approximation that maintains equivalent topology is shown in Figure 7(b). However, a poor approximation, shown in Figure 7(c), can result in a different knot; this particular example produces the figure-eight knot. Hence, a poor approximation as shown in Figure 7 could be detrimental to many analyses.

Figure 7. (a) Smooth spline. (b) Good approximation. (c) Knotted approximation.

Therefore it is important to consider ambient isotopy as the relevant topological equivalence relation. Recent work [2, 3, 5, 22, 33] develops sufficient conditions for preserving ambient isotopy of manifolds, with quantitative bounds on piecewise linear approximations, which may be useful for practical computational applications. Developing algorithms that preserve the isotopy class of resultant approximants is expected to play an important role in both CAGD and CAMD. Recent results in that regard are now stated.

Approximation of spline curves is fundamental for visualization and simulation, where these approximations are often obtained by subdivision methods. Hence, it becomes important to understand when the control points obtained under repeated subdivision form a PL curve that is ambient isotopic to the original spline. Sufficient conditions are given in the following theorem by the present co-authors; the details of the proof are contained in a pre-print by the present authors [25].

Theorem 1: Let B be a non-self-intersecting C² Bezier curve with regular Bezier parametrization in R³. Then subdivision will produce a control polygon of B that is ambient isotopic to B, provided that nontrivial knots are not introduced during the approximation process. (Note that B may be open or closed.)

The criterion about nontrivial knots is, admittedly, informally expressed above; the interested reader is referred to the full paper [25] for a rigorous formulation of this condition. The role of this non-knotting hypothesis was unexpected. After all, the intuition is that the control polygon can be made arbitrarily close to the curve, so after sufficiently many subdivisions one would naively expect them to have the same topological characteristics. However, the authors do not see how to eliminate the non-knotting hypothesis.
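Theorem 1 concerns the control polygons produced by repeated subdivision. A minimal sketch of one subdivision step for a Bezier curve, via the standard de Casteljau algorithm, is given below; the cubic control points are illustrative assumptions, not data from Figure 7.

```python
# Sketch: one step of de Casteljau subdivision for a Bezier curve in R^3.
# Splitting at parameter u yields two control polygons whose union
# converges to the curve under repeated subdivision; Theorem 1 concerns
# when that PL limit is ambient isotopic to the original curve.

def de_casteljau_split(points, u=0.5):
    """Split a Bezier curve (list of control-point tuples) at parameter u.

    Returns the control points of the left and right halves."""
    left, right = [points[0]], [points[-1]]
    rows = [list(points)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        nxt = [tuple((1 - u) * a + u * b for a, b in zip(p, q))
               for p, q in zip(prev, prev[1:])]
        rows.append(nxt)
        left.append(nxt[0])    # outermost point of the left half
        right.append(nxt[-1])  # outermost point of the right half
    return left, right[::-1]

# Illustrative cubic control points (an assumption, not from the paper).
cubic = [(0.0, 0.0, 0.0), (1.0, 2.0, 0.0), (2.0, -1.0, 1.0), (3.0, 0.0, 0.0)]
L, R = de_casteljau_split(cubic)
print(L[-1])  # (1.5, 0.375, 0.375): the two halves meet at the u = 1/2 point
```

Each half has the same degree as the original curve, and the union of the two new control polygons lies strictly closer to the curve, which is the refinement process the theorem's non-knotting hypothesis constrains.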
This will continue to be investigated, but it has implications for the broad use of subdivision to approximate curves and surfaces. Namely, there is some risk that, if this non-knotting condition is violated, then that specific approximation will not have the same topological characteristics as the original geometric object. The authors know of no geometric design or molecular design system that currently makes this check explicit. Hence, this observation has broad software design implications, since the problem of recognizing when a PL knot is the unknot continues to attract considerable theoretical interest [7, 13, 14], while the development of practical algorithms remains elusive.

Note that subdivision is a recursive algorithm. Theorem 1, as stated, provides the valuable insight that subdivision can be used to create an ambient isotopic approximation. However, the stopping criterion can currently be detected only by specific geometric checks. Since these geometric tests are computationally expensive, ongoing work is exploring whether the number of iterations can be analyzed in advance, changing the algorithm from its current recursive form into a more efficient iterative style.

For surfaces, the ambient isotopy problem is considerably more difficult. However, there are some initial results, to which co-author Peters has contributed [30, 33]. These are specialized to compact connected C² manifolds without boundary, embedded in R³. Such manifolds are broadly assumed within CAGD as the bounding surfaces for design objects containing compact volumes; similar utility is expected within CAMD. Again, the proof relies upon a recursive approach to determine whether the approximation error is within a specific upper bound. However, the resulting surface no longer comes from subdivision, but is a PL surface created from rectangular patches that are parallel to the three co-ordinate planes of a right-handed co-ordinate system in R³. In the statement of the theorem below, this is referred to as a piecewise box approximation.

Theorem 2: Let M be a compact connected C² manifold without boundary, embedded in R³. Then for an appropriately chosen value of ε > 0, there exists a piecewise box approximation of M such that the approximation error is less than ε and the approximation is ambient isotopic to M.
While it is easy to see the applicability of these theorems to CAGD, their usefulness to CAMD remains speculative, but promising. The proposed path to that transition begins with the observation that ball-and-stick models have long been common in molecular modeling. Recently, such models have even been integrated with fast methods for computing their orbitals [11], where coupling with a static model of a benzene molecule is used as an example. However, simulation algorithms must extend beyond these restrictions on two fronts:

1. by replacing the ball-and-stick models with surface and solid models that more fully capture the volumetric and physical properties of macro-molecules, and

2. by replacing static models with dynamic ones.

Arguably, objective 1 could be achieved for static molecular models with the current spline modeling capabilities of existing CAGD systems. However, the creation of accurate molecular models would entail either high-degree splines or multiple intersecting lower-degree splines. Hence, the efficiency demands of objective 2 lead naturally to approximating these models with PL models, including the approximation of the knotted intersection curves discussed in Subsection 3.3. The coiled nature of macro-molecules (proteins, DNA, RNA), some of which are known to contain knots, will provide challenging test cases for algorithms based upon the above theorems to deliver ambient isotopic approximations, even in the static case. Moreover, during simulations it is expected that life scientists will either require that the isotopy class of a molecule be preserved, or will need to be informed when the isotopy class changes as a result of a chemical or biological process. There remain significant challenges to supporting such simulations, but the preceding two theorems are presented as foundations towards that goal.
4 Conclusion
This paper was motivated by the authors' observation that communication about topology is often ambiguous in the literature on geometric and molecular modeling. The importance of guaranteeing topological equivalence during approximation has been stressed. The criterion proposed for topological equivalence is stronger than the traditional usage based upon homeomorphism: ambient isotopy additionally characterizes how an object is embedded in R³, and so permits distinguishing between different types of knots, which are all homeomorphic. The isotopy equivalence class of a design model is thus an important topological aspect of the model. However, computations to determine knot type are known to be intractable, presenting major algorithmic challenges to determining the isotopy equivalence class of a geometric model. Because of this, these authors present sufficient conditions for approximations to be isotopically equivalent to the original model, instead of trying to determine explicitly the isotopy equivalence class of each approximation. This comparative approach holds great promise for both CAGD and CAMD. While CAGD models are typically static, in the life sciences form and function are inextricably related, and form is rarely static. Topology is better suited than geometry to capture the increased emphasis upon dynamic change in macro-molecular simulations. Hence, it is important to be precise about which topological specializations should be modeled for simulation and preserved during approximation for both CAGD and CAMD. Opportunities exist to leverage topological issues in CAGD and CAMD to benefit both domains, and the transition to support CAMD was discussed in detail in the immediately preceding section.
The development of sophisticated algorithms capable of correctly capturing all topological characteristics for simulation is a challenge, and remains on the frontiers of modern computational and mathematical sciences.
Bibliography

[1] C. C. ADAMS, The Knot Book: An Elementary Introduction to the Mathematical Theory of Knots, W. H. Freeman and Company, 2001.

[2] N. AMENTA, T. J. PETERS AND A. RUSSELL, Computational topology: ambient isotopic approximation of 2-manifolds, Theoretical Computer Science, 305, 2003, 3-15.

[3] L.-E. ANDERSSON, S. M. DORNEY, T. J. PETERS, N. F. STEWART, Polyhedral perturbations that preserve topological form, Computer Aided Geometric Design, (12)8, 1995, 785-799.

[4] L.-E. ANDERSSON, T. J. PETERS, N. F. STEWART, Self-intersection of composite curves and surfaces, Computer Aided Geometric Design, (15)5, 1998, 507-527.

[5] L.-E. ANDERSSON, T. J. PETERS, N. F. STEWART, Equivalence of topological form for curvilinear geometric objects, International Journal of Computational Geometry and Applications, (10)6, 2000, 609-622.

[6] R. H. BING, The Geometric Topology of 3-Manifolds, American Mathematical Society, Providence, RI, 1983.

[7] J. S. BIRMAN AND M. D. HIRSCH, A new algorithm for recognizing the unknot, Geometry and Topology, 2, 1998, 175-220.

[8] C. BRANDEN AND J. TOOZE, Introduction to Protein Structure, 2nd edition, Garland, 1999.

[9] T. K. DEY, H. EDELSBRUNNER AND S. GUHA, Computational topology, invited paper in Advances in Discrete and Computational Geometry, B. Chazelle, J. E. Goodman and R. Pollack, eds., Contemporary Mathematics, AMS, Providence, 1998.

[10] H. EDELSBRUNNER, Biological applications of computational topology, Chapter 63 of Handbook of Discrete and Computational Geometry, eds. J. E. Goodman and J. O'Rourke, CRC Press, Boca Raton, Florida, to appear.

[11] G. FANN, G. BEYLKIN, R. J. HARRISON, K. E. JORDAN, Singular operators in multiwavelet bases, IBM J. Res. Dev., 48, 2004, 161-171.
[12] R. T. FAROUKI, Closing the gap between CAD model and downstream application, SIAM News, 32(5), June 1999.

[13] W. HAKEN, Theorie der Normalflächen, Acta Math., 105, 1961, 245-375.

[14] J. HASS, J. C. LAGARIAS AND N. PIPPENGER, The computational complexity of knot and link problems, Journal of the ACM, 46, 1999, 185-211.

[15] M. W. HIRSCH, Differential Topology, Springer, New York, 1976.

[16] C. M. HOFFMANN, Geometric and Solid Modeling, Morgan Kaufmann, 1989.

[17] F. JAEGER, D. L. VERTIGAN AND D. J. A. WELSH, On the computational complexity of the Jones and Tutte polynomials, Proc. Cambridge Philos. Soc., 108, 1990, 35-53.

[18] V. F. R. JONES, A polynomial invariant for knots and links via von Neumann algebras, Bulletin of the American Mathematical Society, 12, 1985, 103-111.

[19] S. KRISHNAN AND D. MANOCHA, An efficient surface intersection algorithm based on lower-dimensional formulation, ACM Transactions on Graphics, (16)1, 1997, 74-106.

[20] D. LASSER, Calculating the self-intersections of Bezier curves, Computers in Industry, (12)3, 1989, 259-268.

[21] J. LIANG, H. EDELSBRUNNER, P. FU, P. V. SUDHAKAR AND S. SUBRAMANIAM, Analytical shape computation of macromolecules: II. Inaccessible cavities in proteins, PROTEINS: Structure, Function, and Genetics, 33, 1998, 18-29.

[22] T. MAEKAWA, N. M. PATRIKALAKIS, T. SAKKALIS AND G. YU, Analysis and applications of pipe surfaces, Computer Aided Geometric Design, 15(5), 1998, 437-458.

[23] M. MANTYLA, Computational topology: a study on topological manipulations and interrogations in computer graphics and geometric modeling, Acta Polytechnica Scandinavica, Mathematics and Computer Science Series 37, Finnish Academy of Technical Sciences, Helsinki, 1983.

[24] I. MICHALOPOULOS, G. M. TORRANCE, D. R. GILBERT AND D. R. WESTHEAD, TOPS: an enhanced database of protein structural topology, Nucleic Acids Research, 32, 2004, 251-254.
[25] E. L. F. MOORE AND T. J. PETERS, Integration of computational topology and curve subdivision, pre-print, www.cse.uconn.edu/~tpeters.

[26] J. R. MUNKRES, Topology, 2nd edition, Prentice Hall, 1999.

[27] M. NEAGU, E. CALCOEN, AND B. LACOLLE, Bezier curves: topological convergence of the control polygon, in Mathematical Methods in CAGD: Oslo 2000, T. Lyche and L. L. Schumaker (eds.), Vanderbilt University Press, Nashville, TN, 2001, 347-354.
[28] Emerging challenges in computational topology, report from the NSF-funded Workshop on Computational Topology, Miami Beach, FL, June 11-12, 1999.

[29] J. O'ROURKE, Computational Geometry in C, Cambridge, 1998.

[30] T. J. PETERS, J. BISCEGLIO, R. R. FERGUSON, C. M. HOFFMANN, T. MAEKAWA, N. M. PATRIKALAKIS, T. SAKKALIS AND N. F. STEWART, Computational topology for regular closed sets (within the I-TANGO project), invited article, Topology Atlas, vol. 9, no. 1, 2004, 12 pp., http://at.yorku.ca/t/a/i/c/50.htm.

[31] Protein Data Bank, http://www.pdb.org/

[32] D. ROLFSEN, Knots and Links, AMS Chelsea Publishing, Providence, 2004.

[33] T. SAKKALIS, T. J. PETERS AND J. BISCEGLIO, Application of ambient isotopy to surface approximation and interval solids, CAD, Solid Modeling Theory and Applications, G. Elber and V. Shapiro (eds.), 36(11), 1089-1100.

[34] T. SCHLICK, Molecular Modeling and Simulation: An Interdisciplinary Guide, Springer, 2002, 61-89.

[35] T. SCHLICK, Modeling superhelical DNA: recent analytical and dynamic approaches, Current Opinion in Structural Biology, (5), 1995, 245-262.

[36] R. SCHAREIN, http://www.pims.math.ca/knotplot/

[37] R. R. SINDEN, DNA Structure and Function, Academic Press, 1994.

[38] D. W. SUMNERS, Lifting the curtain: using topology to probe the hidden action of enzymes, Notices of the AMS, 42(5), May 1995, 528-537.

[39] W. R. TAYLOR, A deeply knotted protein structure and how it might fold, Nature, 406, 2000, 916-919.

[40] Topology of Protein Structure, http://www.tops.leeds.ac.uk/

[41] S. A. WASSERMAN, J. M. DUNCAN AND N. R. COZZARELLI, Discovery of a predicted DNA knot substantiates a model for site-specific recombination, Science, 229, 1985, 171-174.

[42] S. A. WASSERMAN AND N. R. COZZARELLI, Biochemical topology: applications to DNA recombination and replication, Science, 232, 1986, 951-960.

[43] K. J. WEILER, Topological Structures for Geometric Modeling, Ph.D. Thesis, Comput. Syst. Engin., Rensselaer Polytechnic Inst., 1986.
[44] W. CHO, T. MAEKAWA AND N. M. PATRIKALAKIS, Topologically reliable approximation of composite Bezier curves, Computer Aided Geometric Design, (13)6, 1996, 497-582.
Discretize then Optimize

John T. Betts*, Stephen L. Campbell†

1 Introduction

Computational techniques for solving optimal control problems typically require combining a discretization technique with an optimization method. One possibility is to Discretize Then Optimize, that is, first discretize the differential equations and then apply an optimization algorithm to solve the resulting finite-dimensional problem. Conversely, one can Optimize Then Discretize, that is, write the continuous optimality conditions first and then either discretize them or discretize a functional-analytic method for solving the necessary conditions. The goal of this paper is to compare the two alternatives and assess their relative merits.

There are numerous variations on how to solve optimal control problems. We are interested in problems which may be high dimensional and which can have several inequality constraints with complicated switching strategies between them. We are also interested in situations where functions are not given by "simple" formulas but may involve complicated computer implementations. These considerations are typical of many industrial applications and guide some of the discussion that follows. Our intention is not to review the many software packages that are available and in production; additional references are in the bibliography of [2]. Here we focus on the more fundamental issue of the order of the discretization and optimization processes. The genesis of this paper was the earlier technical report [4]. Some of the ideas, including some discussion of the example in Section 2, appear in [5, 6]. In addition to presenting an improved understanding of the problem described in [4], this paper contains results, discussion, and the consideration of issues not found in [5, 6].
* Mathematics and Engineering Analysis, The Boeing Company, P.O. Box 3707, MS 7L-21, Seattle, Washington 98124-2207.
† North Carolina State University, Department of Mathematics, Raleigh, NC 27695-8205. Research supported in part by the National Science Foundation under DMS-0101802, ECS-0114095, DMS-0209695, and DMS-0404842.
2 High Index Partial Differential-Algebraic Equations
Our ultimate goal is to optimize systems described by nonlinear (e.g. Navier-Stokes) partial differential equations subject to inequality constraints. In these problems one often wants the control to be applied on the boundary. As a model problem we will focus on a particular example which can be used to illustrate several points. Heat transfer can be described by the partial differential equation

∂y/∂t = ∂²y/∂x², (1)
where the spatial domain is 0 < x < π and the time domain is 0 < t < 5. Conditions are imposed on three boundaries of this domain:

y(x, 0) = 0, (2)
y(0, t) = u_0(t), (3)
y(π, t) = u_π(t). (4)
The input temperatures u_0(t) and u_π(t) at the ends of the domain are viewed as control variables. In addition, the temperature over the domain is bounded below according to

y(x, t) ≥ g(x, t), (5)

where g(x, t) is a prescribed function with a = .5, b = .2, and c = 1. Finally, we would like to choose the controls u_0(t) and u_π(t) to minimize

∫₀⁵ [ ∫₀^π y²(x, t) dx + q_1 u_0²(t) + q_2 u_π²(t) ] dt. (7)

For our example we set the constants q_1 = q_2 = 10⁻³. This is typical of the situation in practice, where the primary interest is minimizing some function of the state but small weights are added to the controls to help regularize the problem numerically.

One way to solve this problem is to introduce a discretization in the spatial direction. We take a uniform spatial grid, x_k = kδ = kπ/n for k = 0, ..., n. If we denote y(x_k, t) = y_k(t), then we can approximate the partial differential equation (1) by the following system of ordinary differential equations:

dy_k/dt = (y_{k−1} − 2y_k + y_{k+1}) / δ², k = 1, ..., n−1. (8)
This (method of lines) approximation is obtained by using a central difference approximation to ∂²y/∂x² and incorporating the boundary conditions (3) and (4) to replace y_0 = u_0 and y_n = u_π. As a consequence of the discretization, the single constraint (5) is replaced by the set of constraints

y_k(t) ≥ g(x_k, t), k = 1, ..., n−1.

The boundary conditions (2) are imposed by setting y_k(0) = 0. Furthermore, if we use a trapezoidal approximation to the integral in the x-direction, then the objective function (7) becomes

∫₀⁵ [ δ ( (1/2) u_0²(t) + Σ_{k=1}^{n−1} y_k²(t) + (1/2) u_π²(t) ) + q_1 u_0²(t) + q_2 u_π²(t) ] dt.
2.1 State Vector Formulation
Since we will consider several aspects of this problem, it is helpful to introduce a more compact notation. We define a normalized time t = δ²τ and use ′ to denote differentiation with respect to τ, in which case y′ = dy/dτ = δ² dy/dt. Using this notation we can rewrite the differential equations (8) as

y′ = F y + G u,

where the state vector is y^T = (y_1, y_2, ..., y_{n−1}) and the control vector is u^T = (u_0, u_π). The symmetric, tridiagonal, (n−1) × (n−1) matrix F has −2 on the main diagonal and 1 on the two adjoining diagonals. The only nonzero elements in the rectangular (n−1) × 2 matrix G are G_{1,1} = 1 and G_{n−1,2} = 1.
2.2 Properties of LA and WLA
Expressing errors of approximation provides a good start for comparing the properties of LA and WLA. One expression gives the error of approximation by LA in the max or ℓ∞-norm; the error of approximation for WLA in the ℓ∞-norm has a similar expression. The error of approximation of f(x) by g(x) in the L₂-norm uses the constants that appear in Equations (6) and (7). Evaluating the integral in Equation (8) over the subintervals of g partitioned by the subintervals of f produces the square root of a weighted sum of squares of these constants.
Consider the case where M = 2; i.e., {ξ_j} = {ξ_1, ξ_2}. LA produces a singleton value, and WLA yields a corresponding singleton value.
If M = 2 and the vertices in the partition {x_i} are uniformly distributed, then γ_LA = γ_WLA. For partitions of g with more than two subintervals, uniformity in the distribution of the x_i within the subintervals of g can make WLA and LA equivalent.

Sometimes g must preserve certain qualitative characteristics of f. For example, in three dimensions, where each shell finite element in a surface mesh can have a different uniform thickness, g can be constrained to preserve the volume of material. In one dimension the comparable conserved quantity would produce

V = Σ_{i=1}^{N−1} c_i (x_{i+1} − x_i) (12)
  = Σ_{j=1}^{M−1} γ_j (ξ_{j+1} − ξ_j), (13)

for c_i > 0, 1 ≤ i ≤ N−1, and for γ_j > 0, 1 ≤ j ≤ M−1. Without positivity of the c_i and γ_j, Equation (12) has the geometric interpretation of conserving the sum of rectangular areas given by heights |c_i| and widths x_{i+1} − x_i. When the c_i > 0, Equations (12) and (13) can also refer to preserving cumulative distributions. For c_i > 0, 1 ≤ i ≤ N−1, let S_i = Σ_{k=1}^{i} c_k. For γ_j > 0, 1 ≤ j ≤ M−1, and s_j = Σ_{k=1}^{j} γ_k, choosing the γ_j so that the s_j approximate the S_i corresponds to the cumulative distribution functions addressed by the Kolmogorov-Smirnov goodness-of-fit test [10].

Among conserved quantities, the values in {γ_j} occasionally must also attain the extreme values in the set {c_i}. Achievement of this constraint depends on the partition. Clearly, the singleton partition {ξ_1, ξ_2}, a = ξ_1 and b = ξ_2, cannot attain both a maximum and a minimum if the set {c_i} contains more than one distinct value. To enforce this constraint when {c_i} contains at least two distinct values, the partition {ξ_j} must have two or more intervals. Suppose c_i > 0, 1 ≤ i ≤ N, and the two intervals in the partition {ξ_1, ξ_2, ξ_3} have lengths ω_S and ω_L, where ω_S < ω_L. If c_min and c_max represent the distinct minimum and maximum values in {c_i}, respectively, then including the attainment of the extremal values c_min and c_max along with Equations (12) and (13) as conserved quantities restricts the values γ_1 and γ_2 of g and limits g to two values, namely v_1 or v_2. Consequently, a necessary condition for g with partition {ξ_j}, 1 ≤ j ≤ M, M ≥ 4, to attain these three positive conserved quantities (the area V in Equations (12) and (13), the minimum c_min, and the maximum c_max) is
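The rectangular-area interpretation admits a direct numerical check. The sketch below computes the conserved quantity Σ|c_i|(x_{i+1} − x_i) for a piecewise constant function and rescales values on a coarser partition so that the total area is preserved; the rescaling rule is an illustrative assumption, not the LA or WLA construction itself.

```python
# Sketch: the conserved quantity described above, the sum of rectangular
# areas with heights |c_i| and widths x_{i+1} - x_i, for a piecewise
# constant function on a partition. The uniform rescaling of a coarser
# approximation so that it conserves this area is an illustrative
# assumption, not the LA or WLA construction from the text.

def rect_area(x, c):
    """x: partition x_1 < ... < x_N; c: values c_1, ..., c_{N-1}."""
    return sum(abs(ci) * (x[i + 1] - x[i]) for i, ci in enumerate(c))

def rescale_to_area(xi, gamma, target_area):
    """Scale the values gamma on partition xi to match target_area."""
    current = rect_area(xi, gamma)
    scale = target_area / current
    return [g * scale for g in gamma]

x = [0.0, 1.0, 2.0, 3.0, 4.0]
c = [2.0, 1.0, 3.0, 2.0]            # fine partition, total area = 8
xi = [0.0, 2.0, 4.0]                # coarse partition
gamma = rescale_to_area(xi, [1.5, 2.5], rect_area(x, c))
print(rect_area(xi, gamma))         # 8.0
```

The extremal-value constraints discussed above are not enforced by this simple rescaling, which is one way to see why they further restrict the admissible coarse partitions.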
2.3 Parametric formulation
The intrinsic value of parameterizing the piecewise linear mappings on F lies in its direct application to piecewise linear planar and spatial curves. This subsection introduces parameterization by using B-splines. The advantage of using B-splines becomes obvious from the notation introduced by Carl de Boor [2]. Given a partition {t_i}, t_1 < t_2 < ... < t_N, of an interval, the lowest order B-splines, N_{i,1}(t), represent characteristic functions on the partition; for 1 ≤ i ≤ N−1,

N_{i,1}(t) = 1 if t_i ≤ t < t_{i+1}, and N_{i,1}(t) = 0 otherwise. (16)
The next order B-splines, the hat functions N_{i,2}(t), 0 ≤ i ≤ N, defined by

N_{i,2}(t) = (t − t_{i−1}) / (t_i − t_{i−1}) for t_{i−1} ≤ t ≤ t_i,
N_{i,2}(t) = (t_{i+1} − t) / (t_{i+1} − t_i) for t_i < t ≤ t_{i+1}, (17)
N_{i,2}(t) = 0 otherwise,

require prepending t_0, t_0 < t_1, and appending t_{N+1}, t_N < t_{N+1}, to {t_i}.
requires prepending to> ^o < ^i and appending tN+1, tN < tN+1 to {ti}. Parameterizations in software for Computer Aided Design typically use Ni^(t) and distribute ti e [0,1] uniformly. Use Equations (16), (17) and the partitions {ti}, ti = iri~p Q