Springer Aerospace Technology
Jens Eickhoff
Onboard Computers, Onboard Software and Satellite Operations
An Introduction
With 169 Figures and 33 Tables
Prof. Dr.-Ing. Jens Eickhoff Institute of Space Systems (IRS), University of Stuttgart, Germany
ISBN 978-3-642-25169-6
e-ISBN 978-3-642-25170-2
DOI 10.1007/978-3-642-25170-2
Springer Heidelberg Dordrecht London New York
Springer Series in Aerospace Technology
ISSN 1869-1730
e-ISSN 1869-1749
Library of Congress Control Number: 2011940959 © Springer-Verlag Berlin Heidelberg 2012 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
Cover design: WMXDesign GmbH, Heidelberg. Cover figure derived from an original in ISSN 2191-2696, Issue 2; original by Sabine Leib, EADS Cassidian, and Jens Eickhoff.
Printed on acid-free paper.
Springer is part of Springer Science+Business Media (www.springer.com)
Foreword

The development of satellites is always driven by their applications: the payload and its supporting satellite infrastructure must fulfill all envisioned tasks, sometimes with a high degree of autonomy. The brain of the satellite is the onboard computer, whose onboard software provides the functions, procedures and services for these different tasks. Finally, spacecraft operations will succeed only when the space and ground segments are optimally interlinked through appropriate data handling and management concepts. There are many examples where the flexibility of the spacecraft's operation system determined the success or failure of a mission. Science and exploration satellites in particular routinely face completely unexpected mission scenarios or onboard failures. Communication satellites also profit from the reliability and flexibility of the onboard systems, as the spectacular recovery of the European Artemis satellite demonstrated in 2003, when Artemis was repositioned into its correct orbit after 18 months of recovery activities. New rules on avoiding space debris and on deorbiting satellites at the end of their life also call for very robust and flexible onboard computer systems, ensuring full operational capability at the end of the satellite's lifetime, when some components such as gyros may already have failed. This book, entitled "Onboard Computers, Onboard Software and Satellite Operations – An Introduction", covers in a broad yet detailed way the important aspects of satellite development and operation. To our knowledge it is the first book covering the whole subject, including in particular the interdependencies between its subtopics. It evolved from a manuscript that has been used and consistently taught as an examined lecture series at the University of Stuttgart for several years. The book is equally suitable for students and for experts of many engineering disciplines. It serves both as an introductory course text and as a reference in modern systems engineering.
September 2011 Prof. Dr. Hans-Peter Roeser Managing Director Institute of Space Systems University of Stuttgart
Prof. Dr. Volker Liebig Director of Earth Observation Programmes European Space Agency
Preface

After being engaged by the Institute of Space Systems, University of Stuttgart, as System Engineering Coach from industry for the small satellite project "Flying Laptop" at the beginning of 2009, the main challenges in this project turned out to be

● the satellite onboard computer design,
● the onboard software design and
● the spacecraft's operational concept.
The source of this difficulty was neither the spacecraft's complexity nor a lack of available industrial technology. It was the fact that none of these topics had so far been addressed in any lecture in Stuttgart, and that no adequate introductory literature existed on the market to instruct students before they could contribute to such demanding engineering tasks of the satellite program. In particular, no literature addressed the system engineering interdependencies between these three topics. Thus all students and PhD candidates had to be trained in parallel to the spacecraft design, development and verification processes already underway. From this situation evolved the idea for a lecture covering, in a system engineering approach, all three issues – onboard computers, onboard software and satellite operations – including their interrelations. The lecture was very well received, and after two years the improved manuscript could be enhanced for release as a textbook. The students' high interest, the demand for study, diploma and doctoral thesis topics, and the chance of hands-on experience in the institute's satellite project clearly confirmed the concept of this lecture series. I hope this book contributes to imparting background knowledge to students, enabling them to professionally begin their industry or agency career in the complex domains of satellite onboard computer or payload controller design, onboard software, or spacecraft operations.
Immenstaad, 2011
Jens Eickhoff
Acknowledgments

This manuscript covers a broad spectrum of technology aspects and would not have become so educational without the availability of instructive graphical material from industry and agencies. Therefore I thankfully refer to the courtesy of the following figure and photo providers:

● Institute of Space Systems, University of Stuttgart, Germany
● ESA/ESOC Space Operations Center, Darmstadt, Germany
● Astrium GmbH – Satellites, Friedrichshafen, Germany
● Aeroflex, Colorado Springs, USA
● Aeroflex Gaisler, Göteborg, Sweden
● RUAG Aerospace Sweden AB, Göteborg, Sweden
● BAE Systems, Manassas, USA
● DLR/GSOC Space Operations Center, Oberpfaffenhofen, Germany
● Jena Optronik GmbH, Jena, Germany
All figures used from industrial providers are cited with the corresponding source and copyright information. All publicly available figures from ESA and NASA Internet pages are used according to the copyright and usage conditions cited there, e.g. [email protected], and are also cited with the corresponding copyright owner information. Figures and photos under the GFDL or a Creative Commons license taken from Wikipedia are also cited accordingly. For this book I am especially indebted to Prof. Dr. Volker Liebig for initiating the provision of ESA figure and photo material for the operations chapters 14 and 15, and to Mr. Nic Mardle, CryoSat Spacecraft Operations Manager at ESOC, who carefully selected the appropriate material to optimally complement the text. Furthermore I would like to express my gratitude to Prof. Dr. Hans-Peter Röser at the Institute of Space Systems for engaging me in 2003 as a visiting lecturer and in 2009 as System Engineering Coach for the FLP small satellite project, and to my Astrium site director in Friedrichshafen, Eckard Settelmeyer, for supporting this part-time academic coaching activity. I am very much obliged to Dave T. Haslam, who performed the proofreading of the book manuscript as a native English speaker. At Springer-Verlag GmbH I was very well supported by Mrs. Carmen Wolf and Dr. Christoph Baumann concerning all topics on layout and the like which typically arise during book authoring. Special thanks to Dr. Baumann for considering my draft cover ideas. Finally I want to thank my family and especially my wife for her encouragement and motivation, and for bearing with me spending many evenings in front of the computer during lecture development and the later manuscript upgrade to this book.
Most grateful for all the support I received, Jens Eickhoff
Contents

List of Abbreviations ... XV
Part I
Context
1 Introduction ... 3
  1.1 Design Aspects ... 4
  1.2 Onboard Computers and Data Links ... 6
2 Mission / Spacecraft Analysis and Design ... 7
  2.1 Phases and Tasks in Spacecraft Development ... 8
  2.2 Phase A – Mission Analysis ... 9
  2.3 Phase B – Spacecraft Design Definition ... 10
  2.4 Phase C – Spacecraft Design Refinement ... 14
  2.5 Phase D – Spacecraft Flight Model Production ... 15
    2.5.1 Launcher Selection ... 15
    2.5.2 Launch and Early Orbit Phase Engineering ... 16
    2.5.3 Onboard Software and Hardware Design Freeze ... 17
Part II
Onboard Computers
3 Historic Introduction to Onboard Computers ... 21
  3.1 Human Space Mission OBCs ... 23
    3.1.1 The NASA Mercury Program ... 23
    3.1.2 The NASA Gemini Program ... 24
    3.1.3 The NASA Apollo Program ... 29
    3.1.4 The Space Shuttle Program ... 32
  3.2 Satellite and Space Probe OBCs ... 34
    3.2.1 The Generation of Digital Sequencers ... 34
    3.2.2 Transistor-based OBCs with CMOS Memory ... 35
    3.2.3 Microprocessors in a Space Probe ... 38
    3.2.4 MIL Standard Processors and Ada Programming ... 41
    3.2.5 RISC Processors and Operating Systems on Board ... 42
    3.2.6 Today's Technology: Systems on Chip ... 46
  3.3 Onboard Computers of Specific Missions ... 49
4 Onboard Computer Main Elements ... 51
  4.1 Processors and Top-level Architecture ... 54
  4.2 Computer Memory ... 56
  4.3 Data Buses, Networks and Point-to-Point Connections ... 58
    4.3.1 OBC Equipment Interconnections ... 58
    4.3.2 MIL-STD-1553B ... 58
    4.3.3 SpaceWire ... 60
    4.3.4 CAN-Bus ... 61
  4.4 Transponder Interface ... 62
  4.5 Command Pulse Decoding Unit ... 64
  4.6 Reconfiguration Units ... 65
  4.7 Debug and Service Interfaces ... 66
  4.8 Power Supply ... 68
  4.9 Thermal Control Equipment ... 69
5 OBC Mechanical Design ... 71
6 OBC Development ... 75
  6.1 OBC Model Philosophy ... 76
  6.2 OBC Manufacturing Processes ... 80
7 Special Onboard Computers ... 81
Part III
Onboard Software
8 Onboard Software Static Architecture ... 87
  8.1 Onboard Software Functions ... 88
  8.2 Operating System and Drivers Layer ... 91
  8.3 Equipment Handlers and OBSW Data Pool ... 92
  8.4 Application Layer ... 94
  8.5 OBSW Interaction with Ground Control ... 95
  8.6 Service-based OBSW Architecture ... 101
  8.7 Telecommand Routing and High Priority Commands ... 111
  8.8 Telemetry Downlink and Multiplexing ... 113
  8.9 Service Interface Stub ... 115
  8.10 Failure Detection, Isolation and Recovery ... 116
  8.11 OBSW Kernel ... 117
9 Onboard Software Dynamic Architecture ... 119
  9.1 Internal Task Scheduling ... 120
  9.2 Channel Acquisition Scheduling ... 122
  9.3 FDIR Handling ... 125
  9.4 Onboard Control Procedures ... 126
  9.5 Service Interface Data Supply ... 128
10 Onboard Software Development ... 129
  10.1 Onboard Software Functional Analysis ... 130
  10.2 Onboard Software Requirements Definition ... 132
  10.3 Software Design ... 135
    10.3.1 Structured Analysis & Design Technique ... 136
    10.3.2 Hierarchic Object-Oriented Design ... 138
    10.3.3 The Unified Modeling Language – UML ... 140
  10.4 Software Implementation and Coding ... 147
  10.5 Software Verification and Testing ... 148
    10.5.1 Functional Verification Bench (FVB) ... 150
    10.5.2 Software Verification Facility (SVF) ... 152
    10.5.3 Hybrid System Testbed (STB) ... 156
    10.5.4 Electrical Functional Model (EFM) ... 160
    10.5.5 Onboard Software Test Sequence ... 163
11 OBSW Development Process and Standards ... 165
  11.1 Software Engineering Standards – Overview ... 166
  11.2 Software Classification According to Criticality ... 169
  11.3 Software Standard Application Example ... 170
Part IV
Satellite Operations
12 Mission Types and Operations Goals ... 179
13 The Spacecraft Operability Concept ... 185
  13.1 Spacecraft Commandability Concept ... 187
  13.2 Spacecraft Configuration Handling Concept ... 187
  13.3 PUS Tailoring Concept ... 189
  13.4 Onboard Process ID Concept ... 190
  13.5 Task Scheduling and Channel Acquisition Concept ... 191
  13.6 The Spacecraft Mode Concept ... 192
    13.6.1 Operational Phases ... 192
    13.6.2 System and Subsystem Modes ... 193
    13.6.3 Equipment States versus Satellite Modes ... 196
  13.7 Mission Timelines ... 196
    13.7.1 LEOP Timeline ... 197
    13.7.2 Commissioning Phase Timeline ... 198
    13.7.3 Nominal Operations Phase Timeline ... 199
  13.8 Operational Sequences Concept ... 200
  13.9 System Authentication Concept ... 203
  13.10 Spacecraft Observability Concept ... 204
  13.11 Synchronization and Datation Concept ... 206
  13.12 Science Data Management Concept ... 208
  13.13 Uplink and Downlink Concept ... 208
  13.14 Autonomy Concept ... 211
    13.14.1 Definitions and Classifications ... 211
    13.14.2 Implementations of Autonomy and their Focus ... 214
    13.14.3 Autonomy Implementation Conclusions ... 215
  13.15 Redundancy Concept ... 216
  13.16 FDIR Concept ... 219
    13.16.1 FDIR Requirements ... 220
    13.16.2 FDIR Approach ... 220
    13.16.3 FDIR and Safeguarding Hierarchy ... 222
    13.16.4 Safe Mode Implementation ... 223
  13.17 Satellite Operations Constraints ... 225
  13.18 Flight Procedures and Testing ... 226
14 Mission Operations Infrastructure ... 233
  14.1 The Flight Operations Infrastructure ... 234
  14.2 Support Infrastructure ... 240
15 Bringing a Satellite into Operation ... 243
  15.1 Mission Operations Preparation ... 244
  15.2 Launch and LEOP Activities ... 246
  15.3 Platform and Payload Commissioning Activities ... 250
Annex: Autonomy Implementation Examples ... 253
  Autonomous Onboard SW / HW Components ... 254
  Improvement Technology – Optimizing the Mission Product ... 255
  Enabling Technology – Autonomous OBSW for Deep Space Probes ... 258
References ... 261
Index ... 277
List of Abbreviations

General Abbreviations

a.m. – above mentioned
cf. – confer
e.g. – example given
i.e. – Latin: id est, that is
w.r.t. – with respect to
Technical Abbreviations

AES – Advanced Encryption Standard
AFT – Abbreviated Function Test
AGC – Apollo Guidance Computer
AIT – Assembly, Integration and Test
AOCS – Attitude and Orbit Control System
APID – Application ID
ASIC – Application Specific Integrated Circuit
ATV – Autonomous Transfer Vehicle
BC – Bus Controller
BGA – Ball Grid Array
BIOS – Basic Input / Output System
CADU – Channel Access Data Unit
CAN – Controller Area Network
CASE – Computer-Aided Software Engineering
CDMU – Control and Data Management Unit
CDR – Critical Design Review
CISC – Complex Instruction Set Computer
CLTU – Command Link Transfer Unit
CM – Apollo Command Module
CPU – Central Processing Unit
DDF – Design Definition File
DHS – Data Handling System
DJF – Design Justification File
DLR – Deutsches Zentrum für Luft- und Raumfahrt
DMA – Direct Memory Access
DMAC – Direct Memory Access Controller
DORIS – Doppler Orbitography and Radiopositioning Integrated by Satellite
DPS – Shuttle Data Processing System
DRD – Document Requirement Definition
EBB – Elegant Breadboard
ECC – ESTRACK Control Center
EDAC – Error Detection and Correction
EEPROM – Electrically Erasable Programmable Read Only Memory
EFM – Electrical Functional Model
EM – Engineering Model
EMC – Electromagnetic Compatibility
EQM – Engineering Qualification Model
ESA – European Space Agency
ESD – Electrostatic Discharge
ESTRACK – ESA Tracking Network
FAR – Flight Acceptance Review
FDIR – Failure Detection, Isolation and Recovery
FM – Flight Model
FOC – Flight Operations Center
FOCC – Flight Operations Control Center – ESOC terminology – see FOC
FOD – Flight Operations Director
FOM – Flight Operations Manual – also called SSUM
FOS – Flight Operations Segment (ESOC control infrastructure including antenna stations)
FPGA – Field Programmable Gate Array
FVB – Functional Verification Bench
G/S – Ground Station
GDC – Gemini Digital Computer
GEO – Geostationary Earth Orbit
GOM – Ground Operations Manager
GPL – GNU Public License
GPS – Global Positioning System
GSWS – Galileo Software Standard
HITL – Hardware in the Loop
HK – Housekeeping
HOOD – Hierarchic Object-Oriented Design
HPC – High Priority Command
HPTM – High Priority Telemetry
HW – Hardware
I/O – Input / Output
IC – Integrated Circuit
IF – Interface
IRS – Institut für Raumfahrtsysteme (Institute of Space Systems), University of Stuttgart, Germany
JTAG – Joint Test Actions Group
LCB – Line Control Block
LED – Light Emitting Diode
LEO – Low Earth Orbit
LEOP – Launch and Early Orbit Phase
LGPL – Lesser GNU Public License
LM – Apollo Lunar Module
LVDS – Low Voltage Differential Signal
MAP-ID – Multiplexer Access Point Identifier
MC – Magnetic Core
MCS – Mission Control System
MMFU – Mass Memory and Formatting Unit
MMI – Man Machine Interface
MMU – Memory Management Unit
MSG – MeteoSat 2nd Generation
MTL – Master Timeline (on board)
MTQ – Magnetotorquer
NASA – National Aeronautics and Space Administration
NCO – Numerically Controllable Oscillator
NRZ – Non Return to Zero
NTP – Network Time Protocol
OBC – Onboard Computer
OBCP – Onboard Control Procedure
OBDH – Onboard Data Handling
OBSW – Onboard Software
OBSW-DP – Onboard Software Data Pool
OBT – Onboard Time
OIRD – Operations Interface Requirements Document
P/E – Program / Erase (cycle)
PF – Spacecraft Platform
PC – Personal Computer
PCB – Printed Circuit Board
PCDU – Power Control and Distribution Unit
PDGS – Payload Data Ground Segment – ESOC terminology – see PGS
PDHT – Payload Data Handling and Transmission
PDR – Preliminary Design Review
PFM – Proto Flight Model
PGS – Payload Ground Segment
PID – Process Identifier (for an OBSW process)
PL – Payload
PMC – Payload Management Computer
PPS – Pulse Per Second
PROM – Programmable Read Only Memory
PRR – Preliminary Requirements Review
PUS – ESA Packet Utilization Standard
QM – Qualification Model
QR – Qualification Review
RAM – Random Access Memory
RF – Radio Frequency
RISC – Reduced Instruction Set Computer
RIU – Remote Interface Unit
ROM – Read Only Memory
RT – Remote Terminal
RTOS – Realtime Operating System
RWL – Reaction Wheel
S/C – Spacecraft
SA – Solar Array
SADT – Structured Analysis and Design Technique
SBC – Single Board Computer
SCOE – Special Checkout Equipment
SCV – Spacecraft Configuration Vector
SDD – Software Design Document
SEU – Single Event Upset
SIF – Service Interface
SMD – Surface Mounted Device
SoC – System on Chip
SOCD – Spacecraft Operations Concept Document
SOM – Spacecraft Operations Manager
SPACON – Spacecraft Controller
SRD – System Requirements Document
SRDB – Satellite Reference Database
SRR – System Requirements Review
SRS – Satellite Requirements Specification
SSR – Solid State Recorder
SSS – Software System Specification
SSUM – Space Segment User Manual – also called FOM
ST – Subservice Type (PUS)
STB – System Testbench
STR – Star Tracker (sometimes also Star Camera)
SUITP – Software Unit and Integration Test Plan
SVF – Software Verification Facility
SVS – Software Validation Specification
SVT – System Validation Test
SW – Software
TC – Telecommand
TM – Telemetry
UML – Unified Modeling Language
VC – Virtual Channel
WS – Workstation
The best way to predict the future is to invent it. Alan Kay
Part I
Context
1 Introduction

[Chapter image: Rosetta and Lander Philae © ESA]

J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology, © Springer-Verlag Berlin Heidelberg 2012
Although the payloads of a satellite, such as radar or optical instruments, are the principal performance driver for a spacecraft, the platform control functionality plays a significant role in mission efficiency. Considering key characteristics such as the payload data geolocation precision required by today's Earth observation missions, the requirements on the satellite platform control functionality are continuously increasing. The same trend can be observed for specific missions such as Earth gravity field measurements, for deep space missions, and for the latest concepts for Earth observation from geostationary orbit positions. The platform control functionality is centrally driven by the functionality included in the onboard software (OBSW) and by the operational flexibility from ground, which itself is based on onboard software functions and features. The performance of the onboard software in turn is driven, and limited, by the performance of the available onboard computer (OBC) hardware. Thus the chain of spacecraft operations from ground, complemented by the OBSW controlling platform and payload equipment via the OBC hardware, is the key system engineering challenge.
1.1 Design Aspects
In a spacecraft (S/C) development project, however, the initial design requirements do not cover details concerning the onboard computer, the software, the operations procedures and so on. The spacecraft mission concept requirements for a satellite development B/C/D phase are usually laid down in two key documents, namely the

● "System Requirements Document" (SRD) and the
● "Operations Interface Requirements Document" (OIRD).

The SRD covers technical requirements on both the space and ground segment of the mission. The OIRD covers requirements on how to operate the spacecraft from ground. The S/C manufacturer takes these primary input documents and develops a derived requirement set exclusively focused on the spacecraft, the so-called "Satellite Requirements Specification" (SRS). The SRS thus comprises design and performance requirements for all S/C equipment, functionality and performance, especially reflecting

● instrument / payload requirements,
● attitude and orbit control system (AOCS) design and performance requirements,
● power subsystem and control requirements,
● thermal subsystem and control requirements,
● onboard data handling subsystem (DHS) requirements,
● spacecraft "Failure Detection, Isolation and Recovery" (FDIR) requirements and
● ground segment compatibility requirements.

This is the design baseline for the spacecraft, implicitly for the design of the onboard software features for spacecraft control, and secondarily for the onboard computers and the operations concept. All three domains have to be designed together, complementing each other with the according specifics.
Concerning the onboard computers, the software and the operations concept, a number of aspects have to be taken into consideration. Compared to standard industrial embedded controllers or automotive controllers, onboard computers have to provide

● significant failure robustness, only achievable by internal redundancy,
● electromagnetic compatibility (EMC) with the space environment conditions and
● in addition radiation robustness against high-energy particles. The latter cannot be achieved by the standard highly integrated circuit (IC) designs used in today's PC microprocessors. Space application processors require a lower circuit integration density and further manufacturing specifics. This again results in lower achievable processor clock frequencies (20–66 MHz are typical values).
● Furthermore, onboard computers today still have to serve a large number of different types of interfaces, such as:
  ◊ serial or LVDS interfaces on the transponder side,
  ◊ analog and data bus interfaces on the platform and payload equipment side.
● And finally, these interface connections at least partly need to be redundant.
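The internal-redundancy point above can be made concrete with a small sketch. One classic technique, shown here purely as an illustration and not as a design from this book, is triple modular redundancy (TMR): a value is stored or computed three times, and a bitwise majority vote masks a single radiation-induced bit flip.

```c
#include <stdint.h>

/* Illustrative TMR sketch (assumption, not the book's OBC design):
 * three redundant copies of a word are voted bit by bit. Each result
 * bit is 1 iff at least two of the three inputs have that bit set,
 * so a single upset in any one copy is masked. */
static inline uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c)
{
    return (a & b) | (a & c) | (b & c);
}
```

Real space processors apply this idea in hardware, e.g. in flip-flops and in EDAC-protected memory, rather than in application code.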
Similar dedicated constraints affect the onboard software of a satellite. The OBSW needs to be a

● realtime control software,
● allowing both interactive spacecraft remote control
● and automated / autonomous control.

The onboard software concept today is typically a service-based architecture covering several control and input/output (I/O) levels:
  ◊ data I/O handlers and data bus protocols,
  ◊ control routines for payloads, AOCS, thermal and power subsystems,
  ◊ up to Failure Detection, Isolation and Recovery routines.
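To illustrate the realtime aspect, such control software is commonly dispatched from a fixed cyclic schedule. The following sketch is hypothetical (the 100 ms minor cycle, the table layout and the function names are assumptions, not the book's design): each task runs whenever the minor-cycle counter matches its period and phase offset.

```c
#include <stdint.h>

typedef void (*task_fn)(void);

/* One entry of a hypothetical cyclic scheduling table. */
struct task {
    task_fn  run;
    uint32_t period; /* in minor cycles, e.g. 1 cycle = 100 ms */
    uint32_t phase;  /* offset within the period, to spread CPU load */
};

/* Pure helper: is a task with this period/phase due in the given cycle? */
int task_due(uint32_t period, uint32_t phase, uint32_t cycle)
{
    return cycle % period == phase;
}

/* Called once per minor cycle from the RTOS clock tick: run every task
 * that is due, in table order (i.e. in fixed priority order). */
void dispatch_cycle(const struct task *tab, int n, uint32_t cycle)
{
    for (int i = 0; i < n; i++)
        if (task_due(tab[i].period, tab[i].phase, cycle))
            tab[i].run();
}
```

With such a table, an AOCS control loop might run every cycle while slow housekeeping acquisition runs, say, every tenth cycle at a different phase, keeping the worst-case load per cycle bounded and predictable.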
The operations concept of the spacecraft has to be detailed concerning the following:

● Command and control of payload and platform is performed via the cited service-based onboard software. The operations concept has to be based on the international spacecraft uplink / downlink data transmission standards.
● The telecommand / telemetry (TC/TM) packet management in the OBSW service architecture must comply with the customer's baseline, such as the ESA "Packet Utilization Standard" (PUS).
● The S/C mission operations concept has to be elaborated concerning ground station visibilities, the utilized ground station network, link budgets and operational timeline commanding from ground.
● Furthermore, the operations concept must support
  ◊ the control of all nominal platform and payload functions from ground,
  ◊ the control of all FDIR and recovery operations from ground and
  ◊ the handling of OBSW updates, mission extension functions and software patches from ground.
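To make the TC/TM packet handling tangible, the sketch below decodes the fields a PUS-compliant OBSW typically inspects first when routing a telecommand. The byte offsets follow the CCSDS space packet primary header and the classic PUS telecommand data field header; this is a simplified illustration only, and a real implementation must additionally validate packet lengths, the CRC and the applicable PUS issue.

```c
#include <stdint.h>

/* Fields of interest for TC routing (sketch, not flight code). */
struct tc_info {
    uint16_t apid;      /* 11-bit Application ID: destination onboard process */
    uint16_t seq_count; /* 14-bit source sequence count                       */
    uint8_t  service;   /* PUS service type, e.g. 8 = function management     */
    uint8_t  subtype;   /* PUS service subtype                                */
};

/* Decode a raw telecommand packet: bytes 0-5 are the CCSDS primary
 * header, byte 6 starts the PUS data field header (version/ack flags,
 * then service type and subtype). */
struct tc_info tc_decode(const uint8_t *p)
{
    struct tc_info tc;
    tc.apid      = (uint16_t)(((p[0] & 0x07u) << 8) | p[1]);
    tc.seq_count = (uint16_t)(((p[2] & 0x3Fu) << 8) | p[3]);
    tc.service   = p[7];
    tc.subtype   = p[8];
    return tc;
}
```

In a service-based OBSW architecture, exactly these fields, APID plus service type and subtype, determine which application and which service handler the incoming telecommand is dispatched to.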
The detailed design requirements for onboard software, onboard computers and spacecraft operations result from the mission analysis performed and the selected spacecraft design concept.
1.2 Onboard Computers and Data Links

Figure 1.1: Modular satellite (payload module and service module) and its onboard computers – OBC, MMFU and PMC – with X-band downlink for science data and S-band up-/downlink for S/C command/control. © Astrium

Satellite platform control is usually performed via a bi-directional telecommand (TC) / telemetry (TM) radio data link in S-band (2.0 to 2.2 GHz). Science TM downlink (unidirectional) is usually performed via X-band (7.25 to 7.75 GHz). For both links usually the same data protocol standards are applied. On older satellites or space probes the onboard computer, (OBC), exclusively controls the S/C platform while a dedicated payload management computer, (PMC), operates the payload instruments. On newer spacecraft mostly one single OBC controls both platform and instruments. Usually in addition a so-called “Mass Memory and Formatting Unit”, (MMFU), is on board for storage of both housekeeping and science telemetry. In case the MMFU is integrated into the OBC, such computers are often called “Control and Data Management Unit”, (CDMU).

Figure 1.2: Satellite block-diagram with central CDMU. © Astrium GmbH
2 Mission / Spacecraft Analysis and Design
Rosetta approach to Steins © ESA
J. Eickhoff, Onboard Computers, Onboard Software and Satellite Operations, Springer Aerospace Technology, © Springer-Verlag Berlin Heidelberg 2012
2.1 Phases and Tasks in Spacecraft Development
The following figure shows the phase breakdown of spacecraft development. Listed in addition are the main tasks to be performed within each phase. Figure 2.2 additionally depicts the prescribed review milestones according to ECSS-M-30A.

Figure 2.1: Tasks in Spacecraft Development Phases.

Phase 0/A – Evaluation of mission and compliant payload design solutions:
● Definition of mission objectives and constraints,
● definition of a mission baseline and alternatives / variants,
● analysis of minimum requirements,
● documentation.

Phase B/C – Conceptualization of mission, payload and spacecraft design; design refinement and design verification:
● Payload requirements analysis,
● definition of alternative payload concepts,
● analysis of resulting spacecraft / orbit / trajectory requirements and constraints,
● standardized documentation,
● system design refinement and design verification,
● development and verification of system and equipment specifications,
● functional algorithm design and performance verification,
● design support regarding interfaces and budgets.

Phase C/D – Production, assembly, integration and test:
● Subcontracting of component manufacture,
● detailed design of components and system layout,
● EGSE development and test,
● onboard software development and verification,
● development and validation of test procedures,
● unit and subsystem tests,
● software verification,
● system integration and tests,
● validation regarding operational and functional performance,
● development and verification of flight procedures.

Phase E – Spacecraft operations:
● Ground segment validation,
● operator training,
● launch,
● in-orbit commissioning,
● payload calibration,
● performance evaluation,
● prime contractor provides troubleshooting support for the spacecraft.
Figure 2.2: Spacecraft development phases and reviews – phases 0+A, B, C, D, E and F with the review milestones MDR, PRR, SRR, PDR, CDR, QR and FAR allocated to the tasks requirements definition, design definition, verification & qualification, production, launch, operation and deorbiting. Source © ECSS-M-30A

Review milestone abbreviations:
PRR – Preliminary Requirements Review
SRR – System Requirements Review
PDR – Preliminary Design Review
CDR – Critical Design Review
QR – Qualification Review
FAR – Flight Acceptance Review
Mission analysis is already performed in the early phases 0/A of a project. From these analysis phases result the requirements towards the space and ground segment of the mission, which are further refined in phase B up to the PDR review. The system design – also concerning OBCs, OBSW and the operations concept – starts after SRR. Thus over phases A-C up to CDR the following elements must be defined:
● S/C payloads and their functions,
● S/C orbit / trajectories / maneuvers,
● S/C operational modes,
● required S/C AOCS and platform subsystems,
● used onboard equipment and the according design,
● ground / space link equipment,
● onboard functions for system and equipment monitoring and control,
● autonomous functions – e.g. for the “Launch and Early Orbit Phase”, (LEOP), timeline execution,
● FDIR functions, Safe Mode handling etc.,
● test functions,
● identification of functions being realized in hardware respectively in software.
All these are essential drivers for OBC and OBSW design, the spacecraft's top level and subsystem design as well as for the spacecraft operations concept.
2.2 Phase A – Mission Analysis
Mission analysis serves for determining the optimum orbit w.r.t.
● payload mission product quality,
● required target revisit times,
● possible ground station contacts for mission product downlinks and ground servicing.
Resulting from these are requirements towards
● mission product data storage aboard,
● onboard timelines / autonomy,
● data transmission link budgets.
From this elementary assessment follows the definition of
● characteristics of payload instruments,
● operational orbit and LEOP orbit / trajectory conceptualization,
● the S/C geometrical concept:
◊ body mounted solar array, (SA), deployable SA, deployable antennas,
◊ deployable booms,
◊ etc.

Figure 2.3: Example: LEOP orbit ground tracks and station visibility. © Astrium GmbH
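The interplay between orbit selection, ground station contacts and onboard storage can be made tangible with a small orbital-mechanics estimate. The sketch below computes the period of a circular orbit from Kepler's third law; the 700 km altitude is merely an example value:

```python
import math

MU_EARTH = 3.986004418e14  # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6_371_000.0      # m, mean Earth radius

def orbital_period_s(altitude_m: float) -> float:
    """Period of a circular orbit at the given altitude (Kepler's third law)."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)

def orbits_per_day(altitude_m: float) -> float:
    """Number of revolutions completed in 24 hours."""
    return 86_400.0 / orbital_period_s(altitude_m)

# A 700 km LEO completes roughly 14-15 orbits per day, of which only a few
# pass over any single ground station - which drives the onboard storage
# and timeline / autonomy requirements listed above.
print(f"period: {orbital_period_s(700e3) / 60:.1f} min, "
      f"orbits/day: {orbits_per_day(700e3):.1f}")
```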
Next follows the conceptual requirements definition and technology selection for the main functional components such as
● AOCS subsystem sensors / actuators,
● power subsystem equipment,
● thermal subsystem equipment,
● data handling subsystem equipment.
And finally come the first definitions of
● elementary PL modes,
● elementary S/C modes,
● plus non functional design data such as budgets (mass, power).
The following table is the first of four consecutive ones restating and sketching out, from top to bottom for each development phase, the subsequently growing level of design detail.

Table 2.1: Phase A design perimeter.
2.3 Phase B – Spacecraft Design Definition
Phase B serves as the first complete design definition on system level. This includes a number of detailed analyses in various fields. Without claiming completeness of the list, the most prominent ones shall be cited including their subtasks. One is the refinement of the orbit definition, which includes
● the nominal operations orbit,
● transfer orbits / trajectories including LEOP trajectories,
● orbit control maneuvers and
● de-orbiting / re-orbiting after end of life.
Closely associated with the orbits, maneuvers and trajectories is the definition of the spacecraft's operational modes in nominal and failure conditions. The figure below depicts an example of a spacecraft level mode diagram. It includes notation of
possible transitions between spacecraft modes, identification of the respective transition triggers, and the required commanding to invoke the according mode transition. At this level detailed telecommands are obviously not yet defined. However these identified modes are already of relevance as they are to be controlled later by the onboard software.

Figure 2.4: Satellite modes and transitions. © Astrium GmbH

The next step of design refinement in phase B concerns the elaboration of a complete satellite product tree with all main physical and functional elements, i.e. including onboard software as a product tree item and possibly any software included for satellite instruments to be developed or software for subsystem controllers. Figure 2.5 shows an example excerpt from such a product tree at the phase B development stage.
Figure 2.5: Phase B product tree example.
© Astrium GmbH
Next after the completion of the spacecraft product tree is the identification of the individual types of equipment to be used for the mission – i.e. the selection to use star tracker X from supplier Y. In the ideal process this selection is foreseen to be made already at the end of phase B. In real projects however the situation may arise that certain selected equipment has not yet reached the required qualification level. In such cases multiple alternative solutions must be kept under consideration. For those units where dedicated equipment already could be selected, the equipment modes, transitions, telecommands and telemetry automatically become available via the supplier documentation.

Figure 2.6: Equipment mode diagram example – PCDU::BatteryBypass_LogicalOperation with the states Parked, Armed, Selected and Disarmed; output selection and firing executed via MIL-1553 command.
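The kind of equipment mode logic shown in Figure 2.6 can be sketched as a simple state machine. The states follow the figure (Parked, Armed, Selected, Disarmed), but the command names and the transition table below are simplified assumptions, not the actual PCDU implementation:

```python
# Hypothetical transition table for the bypass logic of Figure 2.6:
# each (state, command) pair maps to a follow-on state.
TRANSITIONS = {
    ("Parked",   "ArmingOn"):     "Armed",
    ("Armed",    "SelectOutput"): "Selected",
    ("Armed",    "CmdPark"):      "Parked",
    ("Selected", "BypassOff"):    "Disarmed",
    ("Selected", "BypassFire"):   "Disarmed",
}

class EquipmentModeMachine:
    """Minimal equipment mode machine driven by telecommands."""

    def __init__(self, initial: str = "Parked"):
        self.state = initial

    def command(self, cmd: str) -> str:
        """Apply a telecommand; undefined transitions leave the state unchanged."""
        self.state = TRANSITIONS.get((self.state, cmd), self.state)
        return self.state

m = EquipmentModeMachine()
m.command("ArmingOn")      # Parked -> Armed
m.command("SelectOutput")  # Armed -> Selected
assert m.state == "Selected"
```

Exactly such per-equipment mode tables, once the suppliers are selected, later feed the OBSW mode management and the TC/TM definitions.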
Another step in phase B is a first allocation of such equipment operational modes to the nominal and non-nominal spacecraft modes respectively. This identifies mode statuses for the diverse equipment to be switched by the OBSW during spacecraft mode transitions plus possible unit A/B redundancy configurations.
Figure 2.7: Equipment operational modes versus spacecraft modes.
© Astrium GmbH
With this information becoming available, a first definition of variable sets – so-called data pools – for the OBSW can be made, namely the definition of
● variables to be managed via spacecraft telecommands and telemetry,
● equipment onboard command and telemetry parameters,
● and the complementary set of data bus interface variables to be managed.

Table 2.2: Phase B design perimeter.
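Such a data pool can be pictured as a keyed table of variables with routing attributes. The entry names and fields below are hypothetical; they merely illustrate how TM generation and TC handling could draw on one shared variable set:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DataPoolEntry:
    """One onboard variable with its routing attributes (illustrative)."""
    name: str
    value: float
    in_telemetry: bool          # included in housekeeping TM to ground
    commandable: bool           # settable via ground telecommand
    bus_address: Optional[int]  # data bus variable address, if bus-managed

data_pool = {
    "STR1_TEMP":  DataPoolEntry("STR1_TEMP", 21.5, True, False, 0x2A01),
    "HTR3_SETPT": DataPoolEntry("HTR3_SETPT", 18.0, True, True, None),
}

# Housekeeping TM generation and TC handling iterate over the same pool:
hk_frame = {k: e.value for k, e in data_pool.items() if e.in_telemetry}
commandable = [k for k, e in data_pool.items() if e.commandable]
```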
In phase B of the S/C development the OBSW architectural design already starts, and the subsequent stages are incrementally defined since OBSW is usually developed in a stepwise approach. Of the large number of design refinements performed in the next phase C, only those shall be followed further which concern the onboard computers, the software and the S/C operations from ground.
2.4 Phase C – Spacecraft Design Refinement
The first step in phase C is the freeze of the product tree and the completion of the supplier selection for onboard equipment. These final decisions then allow
● the completion of interface definitions between onboard equipment (hardware, signal types / levels and data protocols),
● the design consolidation for interfaces between OBC and onboard equipment
◊ either implemented via data buses or
◊ as low level line interfaces via a so-called “Remote Interface Unit”, (RIU), connected to the core OBC.1
● Furthermore the design for so-called “High Priority Command”, (HPC), interfaces can be finalized. Such HPC lines are commandable from ground even when the OBSW has problems or is down for emergency reconfiguration.
● And with the consolidation of the electrical and data handling design via RIU, finally the onboard software variable sets (“data pools”) can be refined
◊ for ground/space TC/TM,
◊ for the core OBC,
◊ for data handled via RIU and
◊ for TC/TM data of onboard equipment like sensors / actuators / instruments.

Table 2.3: Phase C design perimeter.
1 Such a RIU in most cases is connected via a data bus to the OBC and provides all required types of low level interfaces – analog, serial, bi-level, pulse – for control of simple equipment like heaters, simple sensors etc.
After phase C the following design information has been collected:
● Mission concept including orbit, transfer orbits and maneuvers
● Spacecraft product tree
● Spacecraft budgets
● Spacecraft modes and transitions
● Selected equipment types from dedicated suppliers
● Allocation of equipment modes to spacecraft modes
● Equipment modes and interface types
● OBC equipment bus interfaces
● OBC to RIU interfaces
● RIU to equipment interfaces
● High priority command interfaces
● Data pool definitions for
◊ ground / space telecommand / telemetry,
◊ onboard communication and
◊ the OBC internal onboard software data pool for OBC internal algorithms
Thus during phase C significant design input for the OBSW is consolidated, and during this phase the OBSW development is enhanced to detailed design and coding as well as verification of first versions. The detailed roadmap is project specific.
2.5 Phase D – Spacecraft Flight Model Production
In phase C the design of the spacecraft was completed and Engineering Models of the diverse equipment on board (including instruments and payloads) were developed and qualified. Phase D thereafter is devoted to the production of the S/C Flight Model. At the beginning of this phase the procurement of all flight models of the required equipment, of the spacecraft structure and of the flight harness is performed by the S/C prime contractor. During the assembly, integration and test, (AIT), program these are subsequently assembled.
2.5.1 Launcher Selection Another important step at the beginning of phase D, after project CDR, is the final selection of the launcher, since at least for most conventional Earth observation and science satellite missions multiple launcher options exist. During the previous design phases the S/C design has deliberately been formulated for compatibility with the 2-3 most likely carriers. The primary selection of a potential launcher, which is performed during phase B already, evaluates parameters like
● mass to orbit,
● suitability for the according orbit depending on inclination, escape velocity and launcher upper stage reignition requirements,
● overall launcher ΔV.
The final selection in phase D then is mainly driven by launch slot availability, cost and the qualification status of new launcher types. The following figures show a typical example of competing launcher systems for Earth observation satellites of the 1000 kg class at an orbit altitude of approximately 700 km.

Figure 2.8: Rokot launcher and launch site Plesetsk (62.70° N, 40.35° E). © DLR

Figure 2.9: VEGA launcher and launch site Kourou (5.23° N, 52.79° W). © ESA and DLR
With the final selection of the launcher a number of operational boundary conditions are already implicitly frozen, namely the required interfaces between operations center and launch site, the first ground contact times and some required antenna stations. This leads directly over to the topic of engineering the launch and early orbit phase in detail.
2.5.2 Launch and Early Orbit Phase Engineering Launch and early orbit phase engineering implies the detailed development of the automated sequences on board the satellite, starting from separation detection. These include
● the OBC taking over control of the S/C after being deployed by the launcher's upper stage,
● automatic position and attitude / rotational rate detection,
● automated rate damping,
● automatic deployment (antennas and solar panels),
● up to the establishment of ground station contact.
Such sequences are subject to tests in S/C assembly phase prior to launch and will be treated in more detail in part IV of this book.
Figure 2.10: Launch sequence and satellite deployment in orbit. © DLR
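The automated LEOP sequence described above can be sketched as an ordered step list; the step names are illustrative, and the real onboard sequencing logic (timeouts, sensor conditions, abort branches) is far richer:

```python
from typing import List, Optional

# Hypothetical ordering of the automated LEOP sequence; the actual
# sequence and step names are mission specific.
LEOP_SEQUENCE = [
    "separation_detected",
    "obc_takes_control",
    "attitude_and_rate_determination",
    "rate_damping",
    "deployments_antennas_solar_panels",
    "ground_station_acquisition",
]

def next_step(completed: List[str]) -> Optional[str]:
    """Return the next pending LEOP step, or None once all steps are done."""
    for step in LEOP_SEQUENCE:
        if step not in completed:
            return step
    return None

assert next_step(["separation_detected"]) == "obc_takes_control"
assert next_step(LEOP_SEQUENCE) is None
```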
2.5.3 Onboard Software and Hardware Design Freeze The final design freezes at the beginning of phase D after CDR comprise the definition of
● the operationally used unit redundancies and redundancy configurations (not all combinations are usually foreseen for operational use),
● the applied line interconnection redundancies,
● secondary functions like equipment mode commands, reconfiguration functions and low level “Failure Detection, Isolation and Recovery”, (FDIR),
● the final consolidation of data protocols and bus access sequences,
● the finalization of the FDIR concept
● and last but not least functions for S/C AOCS in-orbit characterization and the complements for payload instrument characterization.
Table 2.4: Phase D design perimeter.
Keep it simple: As simple as possible, but no simpler. Albert Einstein
Part II
Onboard Computers
3 Historic Introduction to Onboard Computers
Apollo 11 launch © NASA
Table 13.8: Equipment redundancy concept example.

Subsystem            Equipment                 Subunit                 Redundancy
AOCS                 FOG                       --                      3 out of 4 redundancy
                                               Electronics             1 unit internally redundant
                     Magnetometer              --                      3 units non redundant
                     Magnetotorquer            --                      3 units internally redundant
                     Reaction Wheel            Wheel Assembly          4 wheels in tetrahedron -> 3 out of 4 redundancy
                                               Wheel Drive Electronic  1 unit internally redundant
                     Reaction Control System   Thruster                2 RCS branches -> entire branch switchover
                                               Latch Valve             2 RCS branches -> entire branch switchover
                                               Pressure Transducer     2 RCS branches -> entire branch switchover
Thermal              Heaters                   --                      not redundant
                     Thermistors               --                      internal thermistor triples, majority voting in OBSW
Payload Instruments  Sensor 1                  --                      1 unit, non redundant
                     Sensor 2                  --                      1 unit internally redundant
Payload Data Hdlg.   Payload Data Processor    --                      2 units, cold redundant
                     MMFU                      --                      1 unit internally redundant
                     X-band Transmitter        Modulator               2 units, cold redundant
                                               Amplifier               2 units, cold redundant
Depending on the redundancy design, the according TM and TC for each redundant unit must be made available for operations on ground. Units separate from each other – even when operated in cold redundancy – must be commandable individually, and telemetry must be uniquely identifiable as coming from the nominal or the redundant source. In case of PUS commanded intelligent units, internally redundant (cold redundant) units may be addressable by the same APID from ground. In contrast thereto, reusing the example above where 2 star trackers out of 3 will be used at a time, these 3 star trackers are individual units and each one of them requires a separate TM and TC set with a separate APID.

A further topic is the coupling between units or subunits respectively. This concerns e.g. whether the OBC processor module A is only coupled to safeguard memory A or whether both A and B units are cross coupled. Such design decisions later essentially drive system commandability from ground. While in the above example of the OBC processor module and the safeguard memory the choice for full cross coupling will be obvious, such decisions are less trivial e.g. for payload sensor coupling to payload data handling chain equipment, MMFU and the like. The design of the system redundancies and cross couplings directly has high influence on the spacecraft commandability concept as presented in chapter 13.1, and even more on the spacecraft observability concept, see chapter 13.10.

Concerning the redundancies available on board and the operational preselection, the basic principle of “health overrules redundancy preselection” is explained by means of an example:
● In the above table there are 2 operational STR occurrences needed during operations – to be selected out of 3 available ones.
● Assume operation is performed with STRs 1 and 2.
● In case of a necessary reconfiguration – e.g. STR1 to be deactivated and STR3 to be taken into operation – the SCV health entry information overrides the redundancy reconfiguration.
● If STR3 were marked “non-healthy” in the SCV, this reconfiguration approach would be rejected. See also chapter 13.2.
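This health-overrules-preselection rule can be sketched as a simple guard; the SCV representation and function names below are assumptions for illustration only:

```python
# Sketch of the "health overrules redundancy preselection" rule. The SCV
# (spacecraft configuration vector) health flags are illustrative values.
scv_health = {"STR1": True, "STR2": True, "STR3": False}  # STR3 non-healthy

def reconfigure(active: set, deactivate: str, activate: str) -> set:
    """Swap one unit for another unless the SCV marks the target non-healthy."""
    if not scv_health.get(activate, False):
        raise ValueError(f"{activate} marked non-healthy in SCV - rejected")
    return (active - {deactivate}) | {activate}

active_strs = {"STR1", "STR2"}
try:
    # Attempt to bring STR3 into operation: rejected by its health entry.
    active_strs = reconfigure(active_strs, "STR1", "STR3")
except ValueError:
    pass  # health overrules the redundancy preselection
assert active_strs == {"STR1", "STR2"}
```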
13.16 FDIR Concept “Failure Detection, Isolation and Recovery”, (FDIR), was already explained as a key functionality of the OBSW. Obviously not all failures are subject to onboard
identification and not all failures are subject to onboard recovery. The FDIR concept to be worked out for the spacecraft during the engineering phase follows some basic requirements and principles, implements a certain failure hierarchy – specifying furthermore on which level the failure is to be fixed – and finally it implements a consistent approach for the functionality transferring the spacecraft to Safe Mode and how to recover from there. A properly defined Safe Mode with full S/C observability is essential for FDIR operations. The Safe Mode must also assure a proper balance of the S/C produced and consumed resources (mainly power) since the diagnosis of failures plus recovery in most cases will not be possible within one ground contact (in particular not for polar orbiting Earth observation satellites).
13.16.1 FDIR Requirements Typical requirements for FDIR design at the beginning of the S/C system engineering phase request that:
● A clear hierarchy is to be defined, specifying which type of failure is to be identified and managed on which FDIR level.
● The S/C must be able to reach its Safe Mode autonomously.
● The Safe Mode, if triggered, shall not limit ground in any way w.r.t. spacecraft observability and commandability.
● Ground may also be allowed to submit commands which are blocked for the OBSW or are not allowed in that sequence for the OBSW.
● Ground must be able to perform a detailed status analysis and failure event history analysis for unique failure identification.
● Ground may alter operational limits to avoid future Safe Modes – e.g. in cases of failures triggered by equipment degradation.
● Obviously – but not trivial to realize – the transition to Safe Mode itself shall not endanger the S/C, i.e. it shall for example not require potentially hazardous commands or command sequences.
● Also in Safe Mode the OBC shall be running and shall allow for OBSW patch and dump as well as memory patch and dump functions.
● For all failures imagined during S/C engineering it must be assured that they can clearly be distinguished by their symptom sets.
13.16.2 FDIR Approach The FDIR approach is based on sequences of failure detection in onboard TM or corresponding variables in the OBSW-DP and as a result on onboard and ground TC actions for isolation and recovery. These may not necessarily be unique due to the engineered redundancies and unit internal and external cross couplings. For each
potential failure these chains of failure detection and resulting failure handling – at least failure isolation, preferably also including recovery – must be elaborated. Such a design is typically achieved by following the design guidelines cited below:
● Failure detection must be based on parameter monitoring on unit and on system level and, as a complement, on functional monitoring level. This implies that onboard monitoring must permanently check whether parameters are within appropriate ranges, whether all relevant processes are running, whether mode transitions are properly performed etc.
● Usually the FDIR concept provides both basic approaches:
◊ Fail Operational – where redundant equipment can directly be called into operation without risking failure escalation (e.g. in case of a heater failure, a thermistor failure, an X-band modulator or amplifier failure).
◊ Fail to Safe Mode – which transfers the S/C to Safe Mode.
● For the Fail Operational case the failure isolation is performed by removing the failed equipment from the operational functional chain, reconfiguring to the redundant one. The failed unit is then listed in the SCV as non-healthy unless reset by ground intervention.
● Onboard reconfigurations are based on OBSW functions or dedicated OBCPs, according to the changed settings in the SCV and the recovery function / OBCP being triggered.
● The Safe Mode must be properly defined. Safe Mode usually is the mode operating the S/C with the equipment configuration that has the maximum redundancy and consumes the minimum amount of resources. Besides the Safe Mode there may exist other safeguarding S/C configurations, subject to the individual S/C design.
● By which means Safe Mode can be triggered – OBSW functions, limit exceedances, HW alarms etc. – has to be carefully engineered. OBSW triggered Safe Mode must be armed against accidental function triggering (arm and fire principle).
● The transition to Safe Mode usually clears all HW interfaces and SW functions. In most cases this is achieved by switching the entire S/C HW over to its redundant side – which then automatically makes use of the redundant set of physical units, interconnections and cabling. In addition, by OBC reconfiguration and the resulting reboot, loaded timelines, running OBCPs or functions etc. are all cleared. This prevents the OBSW from resuming interrupted functions or timelines during or after the FDIR process.
● Each OBC processor board keeps its own OBSW image in NV RAM. One OBC processor running one image keeps the S/C stable in Safe Mode. PUS Service 6 is applied for OBSW patching, and the Function Service 8 is used for triggering reconfiguration functions or OBCPs respectively, which reconfigure to the other processor with the patched OBSW image or which reboot the same OBC processor with the patched image.
● Obviously there are some additional constraints: for example, Safe Mode triggering during the LEOP phase may neither trigger deployments nor AOCS actuator control before stage separation is reached. This is usually inhibited by electrical switches and not in SW.
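The arm-and-fire principle cited in the guidelines above can be illustrated with a minimal sketch; the class and method names are hypothetical, not an actual OBSW interface:

```python
class SafeModeTrigger:
    """Arm-and-fire protection against accidental Safe Mode triggering.

    The two-step protocol is the principle described in the text; the
    concrete interface shown here is an illustrative assumption.
    """

    def __init__(self):
        self.armed = False
        self.safe_mode = False

    def arm(self):
        """First command of the two-step sequence."""
        self.armed = True

    def fire(self):
        """Second command: only effective if preceded by an arm command."""
        if self.armed:
            self.safe_mode = True
        self.armed = False  # arming is consumed either way

t = SafeModeTrigger()
t.fire()                  # spurious single command: no effect
assert not t.safe_mode
t.arm()
t.fire()                  # deliberate two-step sequence triggers Safe Mode
assert t.safe_mode
```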
The overall FDIR concept in summary is closely tied to previously treated concept design steps such as the commandability concept, the observability concept, the S/C mode concept and the S/C redundancy concept.
13.16.3 FDIR and Safeguarding Hierarchy Already it was indicated that an FDIR concept usually follows a hierarchical approach. Figure 13.7 below depicts such an approach – again for a fictional S/C.

Figure 13.7: FDIR and safeguarding hierarchy example.
Level 4 – Handled by ground: major overall system failures (communication failures, deployment failures etc.).
Level 3 – Handled by OBC HW reconfiguration unit: hardware induced alarms (multiple EDAC alarms, S/C power failures etc.).
Level 2 – Handled by S/C system SW: system malfunctions (attitude computation inconsistencies, S/C power failures etc.).
Level 1 – Handled by subsystem SW: subsystem malfunctions (subsystem equipment failure, subsystem intercommunication failure etc.).
Level 0 – Unit internal handling: unit internal malfunctions, either internally recoverable (EDAC error or similar) or requiring instant reaction (short current protection etc.), and data bus malfunctions (recoverable failures, MIL-bus retries etc.).

● The lowest level comprises the handling of failures entirely on unit level, either because it is feasible there – such as EDAC error handling – or because the equipment by default provides this feature, or because a certain FDIR function on lowest level is extremely time critical – such as the reaction to short currents or overvoltage. This level also comprises data bus failures invoked by electromagnetic effects and the like.
● The next higher levels 1 and 2 cover failures being handled on OBSW level, either on subsystem control level or requiring the upper system level. Examples are indicated in the figure. On these levels above equipment level, monitors are available for limit checks of unit parameters, but also for abstract verifications on subsystem level, such as a plausibility check of the GPS provided position against the internal solution from orbit propagator functions.
● Level 3 then comprises failures which need hardware reconfigurations via the OBC's reconfiguration unit. These include the monitoring of and reaction to HW alarms and the like.
● And finally level 4 comprises the failures that cannot be handled on board the S/C itself at all without ground intervention.
Each level of FDIR handling function can escalate a failure to the next higher layer in case the problem cannot be isolated or recovered on its level. E.g. many system level failures may lead to hardware alarms triggering reconfigurations on level 3 – such as power failures or OBC watchdog failures. Vice versa, failure recovery is always performed from a higher to the next lower level. E.g. in case of a 2 out of 3 redundancy for star trackers as in the example of table 13.8, if star tracker 3 so far is off and star tracker 2 reports failures or shows failure symptoms, the AOCS subsystem FDIR level can reconfigure the S/C to using STRs 1 and 3 for further operation.

Again it must be remembered that a simple equipment reconfiguration to its redundant occurrence – triggered on whatever FDIR level – while keeping the rest of the S/C on the nominal side can only be applied with restrictions. Depending on the root cause, this approach might lead to killing the redundant unit too. Therefore this method is avoided in all severe FDIR cases and the entire S/C is reconfigured to Safe Mode, which – as was cited – usually reconfigures the entire S/C including buses and power lines to the redundant side.
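The bottom-up escalation described above can be sketched as a chain of handlers; the level names follow Figure 13.7, while the concrete failure identifiers are invented for illustration:

```python
# Sketch of hierarchical FDIR escalation: each level tries its handlers and
# unresolved failures escalate upward. Failure names are illustrative.
def make_level(name, can_handle):
    """Build a handler that resolves only the failures it knows about."""
    def handler(failure):
        return name if failure in can_handle else None
    return handler

FDIR_LEVELS = [
    make_level("L0 unit internal",      {"edac_single_error"}),
    make_level("L1 subsystem SW",       {"str2_invalid_quaternion"}),
    make_level("L2 system SW",          {"attitude_inconsistency"}),
    make_level("L3 HW reconfiguration", {"obc_watchdog_alarm"}),
]

def handle_failure(failure):
    """Escalate bottom-up; level 4 (ground) is the fall-through."""
    for level in FDIR_LEVELS:
        result = level(failure)
        if result is not None:
            return result
    return "L4 ground intervention"

assert handle_failure("str2_invalid_quaternion") == "L1 subsystem SW"
assert handle_failure("deployment_failure") == "L4 ground intervention"
```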
13.16.4 Safe Mode Implementation

Having explained the FDIR hierarchy, the Safe Mode shall be described in a bit more detail. Since the transition of the S/C to Safe Mode by means of the above cited hierarchical FDIR approach breaks all onboard functions and thus all mission product generation, the cases triggering Safe Mode shall be limited as far as possible. The need for automated Safe Mode triggering is also driven by how fast ground is able to identify failure symptoms and to trigger isolation and recovery activities. The possibilities in this area for a permanently visible geostationary satellite differ significantly from those for a polar orbiting LEO spacecraft. The guidelines for a Safe Mode configuration are as follows:
● The OBC will preferably operate on the redundant side – including the OBC HK mass memory unit, the safeguard memory for the SCV and the CCSDS processing unit.
● The OBSW is operational in Safe Mode, controlling the S/C in a way that assures attitude stability and sufficient power generation by solar array pointing. The OBSW in particular will also perform S/C limit monitoring with dedicated Safe Mode settings.
● The main data bus on board will be operating on the redundant side.
● The OBC I/O unit, (RIU), will be operating on the redundant side.
● The Power Control and Distribution Unit, (PCDU), will at least be operated on its redundant controller side. PCDU LCL bank redundancy switching is usually only applied in case of failures in the PCDU itself. Power bus voltage monitoring is performed by the PCDU applying dedicated Safe Mode limits.
● The AOCS will operate on the redundant side – including the reaction control system.
● The “Payload Data Handling and Transmission”, (PDHT), subsystem (X-band transmitter, MMFU) and the payload instrument(s) are not used due to the interrupted mission product generation and will be switched off or down to safe, low resource consuming configurations.
● The S-band receivers will – if not affected themselves by the failure – remain hot redundant.
● The S-band transmitter will – if not affected itself by the failure – remain on the nominal side.
Transition to Satellite Safe Mode: Safe Mode can be induced from ground via the following mechanisms:
● By execution of a dedicated High Priority Command for Safe Mode
● By execution of a dedicated Safe Mode TC function or OBCP – representing a critical command and requiring an Arm-And-Fire mechanism – which triggers a dedicated alarm to the OBC reconfiguration module

Safe Mode can be induced on board at least by the following mechanisms:
● Failures detected by the AOCS
● Failures detected by essential system monitors
● System undervoltage detection (via the PCDU logic)
● Failures during repeated OBC reconfiguration sequences of the S/C
Recovery from Safe Mode: A key principle of Safe Mode is that recovery from it requires ground interaction. No auto-recovery from Safe Mode is foreseen – in contrast to other potential safeguarding modes of a specific mission. For commanded recovery from Safe Mode the following steps are required as a minimum in most cases:
● Configuration of the spacecraft SCV for nominal operations after completion of the failure diagnosis
● In case OBSW patches were applied – selection of the OBSW boot image
● Reboot of the desired OBC redundancy with the selected / patched OBSW image and loading of the SCV
● Waiting until the OBSW has applied the SCV and has switched all redundancies to the desired settings
● Performing all S/C system mode transitions to a nominal mode, including the AOCS subsystem to a nominal AOCS mode
● Preparation of nominal S/C operations by resource reconditioning, loading of a new mission timeline etc.
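The minimum recovery sequence above can be summarized as an ordered procedure sketch; the step identifiers are invented shorthand for the bullets listed:

```python
# Illustrative sketch of the minimum Safe Mode recovery sequence as an
# ordered ground procedure. Step names follow the bullet list above;
# the step identifiers and the execute callback are invented.

RECOVERY_STEPS = [
    "configure_scv",             # S/C configuration vector for nominal ops
    "select_boot_image",         # only if an OBSW patch was applied
    "reboot_obc",                # desired OBC redundancy, load SCV
    "wait_scv_applied",          # OBSW switches redundancies per SCV
    "nominal_mode_transitions",  # system and AOCS mode transitions
    "recondition_resources",     # new mission timeline etc.
]

def run_recovery(execute, patched=False):
    """Execute each step in order; skip boot image selection if unpatched."""
    performed = []
    for step in RECOVERY_STEPS:
        if step == "select_boot_image" and not patched:
            continue
        execute(step)
        performed.append(step)
    return performed

log = []
done = run_recovery(log.append, patched=False)
```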
13.17 Satellite Operations Constraints

While the previous chapters treated the satellite operations functional design and the functional behavior, here the topic of operational constraints shall be tackled in short. In general all operational constraints are highly S/C design specific. They can be broken down into
● S/C platform operational constraints and
● payload instrument operations constraints.
For both classes, operational constraints arising from
● resource limits or
● functional dependencies
can be identified. The resource limit constraints are intuitive to understand. Optical payloads for example may only be operated in sunlight conditions. In eclipse phases their operation – except for dark image calibrations – makes no sense. On the other hand, the overall payload operational time between two ground station passes may be limited due to the limited amount of science data storage resources on board. Multiple payloads might also compete here for the memory resource. As another example, Synthetic Aperture Radar instruments or radar scatterometers – especially when operated in eclipse phase – are typical payloads with operational constraints due to their high power consumption. A constraint due to functional unit interdependencies might be, for example, that due to the common use of the A/D input converters of the MMFU two payloads may not be operated in parallel. Or – even if this is not a desirable case – there might be constraints preventing the MMFU from performing science data recording and playback data streaming via X-band to the PGS in parallel. Another type of common operational constraint is that during certain S/C AOCS modes – like target rollover bidirectional measurements or, for some S/C, even spin stabilized Safe Mode – no X-band downlink is possible due to antenna pointing angle limits or even interference of the rotating solar array with the necessary antenna pointing direction. Operational constraints increase as soon as a data routing “equipment” like a data bus or the OBC I/O unit (RIU) has a failure. The degree of remaining operational flexibility is then highly dependent on the engineered redundancy concept.
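The two constraint classes identified above, resource limits and functional dependencies, could be checked before scheduling an operation roughly as follows; all limits, fields and names are invented:

```python
# Hedged sketch: checking two of the operational constraint types named
# above (resource limits and functional interdependencies) before
# scheduling a payload operation. All limits and names are invented.

def check_constraints(op, state):
    """Return a list of violated constraints for a planned operation."""
    violations = []
    # Resource limit: optical payloads need sunlight
    if op["payload"] == "optical" and state["eclipse"]:
        violations.append("optical payload in eclipse")
    # Resource limit: onboard mass memory for science data
    if state["memory_free_gbit"] < op["data_gbit"]:
        violations.append("insufficient mass memory")
    # Functional dependency: payloads sharing the MMFU A/D converters
    if op["payload"] in state["active_on_shared_adc"]:
        violations.append("A/D converter already in use")
    return violations

state = {"eclipse": True, "memory_free_gbit": 40.0,
         "active_on_shared_adc": set()}
v = check_constraints({"payload": "optical", "data_gbit": 60.0}, state)
```

In a real mission planning context such checks run against the full set of design-specific constraints, not the three examples modeled here.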
13.18 Flight Procedures and Testing

A spacecraft usually has different data links for platform control and for science data downlink8. The flight control systems and the data processing systems for platform and payload differ to a certain extent. Common to both – platform and payload control – is that for a standard satellite all commanding is performed via one TC link. Also, all S/C housekeeping telemetry – for both platform and payload – is downlinked via a common S-band TM link to the platform control station, the “Flight Operations Center”, (FOC). This allows full operational observability of the system's status, health and resources. Payload science data are downlinked to the “Payload Ground Segment”, (PGS). In some cases a copy of the platform HK data is also downlinked to the PGS. In such a case, however, the platform data usually serve to cross-verify payload timestamping and geolocation parameters as well as to cross-verify proper platform health during the entire mission product generation to avoid science measurement misinterpretations. Such complementary or ancillary data have already been mentioned.
Figure 13.8: Connection of S/C ground and space segment.
© ECSS
The CCSDS standard protocol for telecommand and telemetry transmission was already treated. In ESA ECSS compliant missions the transmitted information is encoded in PUS conformal TC and TM packets respectively.
● A TM packet contains a set of onboard variable values in its packet body, and in the packet header the submitting unit / process as well as the packet generation time are included. This already requires
◊ the definition of which packets exist (e.g. for one equipment, for a S/C subsystem and finally on system level),
◊ the definition of which packet comprises which variables, in which data format and with which calibration characteristics,
◊ and the packet generation frequency (which is a configurable parameter for the OBSW).
● The TC direction comprises TC packets which are routed on board by APID and which are identified by packet type, and with this trigger the according functions in the targeted equipment (OBC / OBSW or other equipment). This already requires
◊ the definition of which command packets exist (e.g. for each equipment, for a S/C subsystem controller in the OBSW and finally on OBSW system level)
◊ and the definition of which command packet needs additional control parameters, in which data format and with which calibration characteristics.

8 In some missions they can even be served by different ground segments. An example for such a configuration is the European satellite navigation system Galileo.
All these details are stored in the ground segment in a so-called “Satellite Reference Database”, (SRDB). All these TCs with their command parameters, and the TM packets with their onboard variable data from the OBSW data pool, form the lowest information level of S/C commanding.
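The kind of TM and TC definitions such an SRDB holds might be modeled, in strongly simplified form, as follows; all class and field names are invented and real SRDB schemata (e.g. for SCOS 2000) are considerably richer:

```python
# Illustrative sketch of the kind of TM/TC definitions a Satellite
# Reference Database (SRDB) holds. Field names are invented.

from dataclasses import dataclass, field

@dataclass
class TmParameter:
    name: str
    data_format: str            # e.g. "uint16", "float32"
    calibration: str = "raw"    # e.g. a calibration curve identifier

@dataclass
class TmPacketDef:
    apid: int                   # submitting process / unit
    packet_type: int
    generation_period_s: float  # configurable in the OBSW
    parameters: list = field(default_factory=list)

@dataclass
class TcPacketDef:
    apid: int                   # onboard routing target
    packet_type: int
    parameters: list = field(default_factory=list)

# Example: a hypothetical AOCS housekeeping packet definition
aocs_hk = TmPacketDef(
    apid=0x64, packet_type=3, generation_period_s=8.0,
    parameters=[TmParameter("STR1_QUAT_W", "float32"),
                TmParameter("RWL1_SPEED", "int16", "rpm_curve_01")])
```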
Figure 13.9: Satellite Reference Database in ground segment.

However, it is very cumbersome to command via low level commands transitions like the satellite switching from LEOP mode after launcher separation to a nominal mode, with lots of onboard units to be activated and their telemetry to be checked. To ease the command and control of the S/C for the ground staff, two layers of abstraction are introduced.
● On board, OBSW functions are introduced which can be triggered / activated from ground via the already cited PUS Service 8 (function management service). An example could be a function for the activation of a payload from ground, where the OBSW executes the detailed steps from power supply switching via payload controller boot control, initial PL onboard data bus TM verification, power consumption control etc.
● Flight Procedures are another means of increasing the level of commanding. Flight Procedures are somewhat the complement to OBCPs. While an OBCP is a sort of “command script” executed on board, a Flight Procedure is a “command script” implemented in the ground control system. An example could be a flight procedure which submits the function commands for the AOCS to switch from idle mode to fine pointing, for the data handling subsystem (MMFU etc.) to prepare for science data recording, and for the payload to switch on – all in preparation of a payload instrument measurement on board.

Flight Procedures can comprise low level commands to S/C units, higher level commands to S/C subsystems and system level commands, and they can trigger onboard functions and OBCPs. Any command defined in the SRDB (and thus implemented in the OBSW) can be included in a Flight Procedure. Both individual commands and entire flight procedures can be commanded from a S/C ground control console. An example of a ground control system – here SCOS 2000 from ESA / ESOC – is shown in figures 13.10 and 13.11. They show both TC/TM log windows as well as graphic parameter displays, so-called synoptic displays.
Figure 13.10: Command log of S/C (here during OBSW test on SVF). © IRS, Universität Stuttgart
Figure 13.11: Command log of S/C (here during OBSW test on SVF). © IRS, Universität Stuttgart
Flight procedures allow the definition of
● absolute and relative time tags for the individual commands,
● command flow IF / THEN branching according to “return values” received via TM back from the S/C during procedure execution – provided ground contact exists – and
● DO / WHILE loop constructs – as far as supported by the procedure execution engine in the ground control system.

The execution, i.e. the subsequent submission of such Flight Procedure command sequences and the according branching in IF / THEN cases, requires more than just a simple playlist of commands. It requires the commands to be embedded into a script language and the execution of such scripts by the ground control console. Thus an according procedure execution engine has to be coupled to (or must be integrated into) the ground control system. For SCOS several script languages and execution engines are available: one for the older TCL language (cf. [115]) and one for the newer PLUTO language, which is standardized by the ECSS (cf. [114]) and is applied in modern ESA missions. To avoid writing such procedures via a text editor and inducing errors via typos etc., Flight Procedures are nowadays defined by means of flow chart editors, as for example the one depicted in figure 13.12. They provide different views on the task flow and allow the operator to select commands and according parameters as they are defined in the SRDB.
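The procedure language constructs listed above (time tags, IF / THEN branching on TM return values, loops) can be illustrated by a minimal, purely hypothetical execution engine; real engines interpret TCL or ECSS PLUTO procedures instead:

```python
# Minimal sketch of a flight-procedure execution engine supporting
# relative time tags and IF/THEN branching on TM return values.
# Everything here is illustrative Python, not a real ground system API.

import time

def run_procedure(steps, send_tc, get_tm, clock=time.monotonic,
                  sleep=time.sleep):
    """steps: list of dicts with 'tc', optional 'delta_t' (relative time
    tag) and optional 'verify' -> (tm_parameter, expected_value)."""
    t0 = clock()
    for step in steps:
        if "delta_t" in step:               # relative time tag
            remaining = t0 + step["delta_t"] - clock()
            if remaining > 0:
                sleep(remaining)
        send_tc(step["tc"])
        if "verify" in step:                # IF/THEN branch on TM return
            param, expected = step["verify"]
            if get_tm(param) != expected:
                return ("aborted", step["tc"])
    return ("completed", None)

# Dry run with stubbed TC/TM interfaces and an instantaneous clock
tm_pool = {"MMFU_MODE": "RECORDING"}
sent = []
status = run_procedure(
    [{"tc": "MMFU_START_RECORDING", "delta_t": 0.0,
      "verify": ("MMFU_MODE", "RECORDING")},
     {"tc": "PAYLOAD_ON"}],
    send_tc=sent.append, get_tm=tm_pool.get,
    clock=lambda: 0.0, sleep=lambda s: None)
```

Injecting the clock and the TC/TM interfaces, as done here, is also what makes such procedures testable against a simulator before use on the real spacecraft.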
Figure 13.12: Definition and test of flight procedures via the MOIS flowchart editor. © IRS, Universität Stuttgart
As already indicated, there are Flight Procedures defined for S/C system level control, for subsystem control and for equipment control. An example structure could be as follows:
● System level procedures
● Subsystem level procedures:
◊ Data Handling Subsystem procedures
◊ Electrical Power Subsystem procedures
◊ Attitude and Orbit Control Subsystem procedures
◊ Reaction Control Subsystem procedures
◊ S-band Subsystem procedures
◊ Thermal Control Subsystem procedures
◊ Payload Data Handling and Transmission Subsystem procedures
● Equipment control procedures (including data bus control)
◊ Data Management procedures (TM packet enable / disable etc.)
◊ Generic PUS procedures (TM packet activation / deactivation etc.)
◊ Platform equipment procedures:
► On-Board Computer and RIU procedures
► AOCS Sensor and Actuator procedures (dedicated ones for each equipment type)
► Mass Memory Formatting Unit procedures
◊ Payload procedures (dedicated ones for each payload type)
Each of these procedure sets comprises
● nominal operations procedures,
● contingency case procedures and
● procedures triggering OBCPs from ground – e.g. for reconfiguration.

For platform specific operations there exist procedures for dedicated mission phases, namely the
● Pre-Launch Phase procedures,
● LEOP Phase procedures,
● Commissioning Phase procedures and
● End of Mission procedures.
Flight Procedures and the entire operational S/C command sequences have to be tested first at the S/C manufacturer's premises in the frame of the Functional Verification campaign. This covers the first level of tests of the OBC / OBSW / CPDU as receivers of the Flight Procedure commands together with the transmitting “ground station”. However, in this context the S/C is still commanded via the checkout Control Console – also called Core EGSE – and not yet via the FOC.
Figure 13.13: SVT test constellation.

To assure full compatibility with both the “Flight Operations Center”, (FOC), and the “Payload Ground Segment”, (PGS), multiple so-called “System Validation Tests”, (SVT), are carried out during the subsequent integration of the spacecraft. In this context “system” refers to the entire assembly, space plus ground segment. SVTs are tests conducted by the agency, which is connected via a high performance data link (DSL or similar) and via the S-band and X-band SCOE to the S/C physically located in the manufacturer's integration hall. During SVTs the spacecraft commanding is performed via the same Flight Procedures and low level TCs as later used for the S/C in orbit. TM is also acquired by the FOC and PGS and is evaluated by the mission ground segment accordingly.
14 Mission Operations Infrastructure

GOCE operations © ESA

14.1 The Flight Operations Infrastructure
The mission operations infrastructure shall be explained by first redirecting the reader to figure 13.8 of the S/C ground segment infrastructure, which depicts the interconnections of FOC, PGS, ground communications system and the antenna ground stations. The key element in the FOC is the “Mission Control System”, (MCS), for the S/C platform. An example of such a system – the ESA SCOS 2000 – was already presented in figures 13.10 and 13.11. The PGS is targeted at the download of payload data from the S/C, and it hosts the infrastructure and team for mission product data processing over the diverse levels. The PGS – “PDGS” in ESOC terminology – furthermore is responsible for mission product archiving and mission product distribution to customers or into the public domain.
Figure 14.1: Mission operations infrastructure and ground communications system. Example: CryoSat-2 Mission. © ESA / ESOC

As input to mission operations the PGS usually collects the user requests from the S/C users – in the Earth observation and science domain called “principal investigators” – and prepares the initial mission planning, which it hands over to the FOC for integration into the overall satellite mission timeline. The PGS usually has no command uplink to the spacecraft.
Via the ground communications system the FOC is connected to the S-band antenna ground stations, and the PGS is connected to the X-band science data link antenna ground stations. The antenna ground stations are positioned at “strategically” important points all over the Earth to achieve optimum S/C visibility. No space agency owns antenna stations at all important positions on the globe. Therefore – to support especially the LEOP phase of a new S/C, but occasionally also during commissioning, normal operations or FDIR phases – the space agency may procure the use of other agencies' or commercially operated stations for a limited period.
Figure 14.2: Ground station network example. © ESA / ESOC
The ground station visibility ranges as a function of S/C altitude and link budget are known already at S/C design phase. With the orbit analysis performed during the S/C engineering phase, the station visibilities for a full orbit repeat cycle are computed – see figures 2.3 and 14.3. Based on this information the FOC can activate the antenna ground stations accordingly for each S/C contact, which is especially important during the LEOP phase in order to properly track all S/C activities such as deployments, equipment activations, mode transitions and the like.
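The derivation of ground station contact windows from such precomputed visibility data can be illustrated as follows; the elevation mask and the sample profile are invented:

```python
# Simple sketch: deriving ground station contact windows from a
# precomputed elevation profile (the kind of output the orbit analysis
# mentioned above produces). The 5-degree mask and all data are invented.

def contact_windows(samples, min_elevation_deg=5.0):
    """samples: list of (time_s, elevation_deg) at fixed cadence.
    Returns [(start_s, end_s)] intervals with elevation above the mask."""
    windows, start = [], None
    for t, elev in samples:
        visible = elev >= min_elevation_deg
        if visible and start is None:
            start = t
        elif not visible and start is not None:
            windows.append((start, t))
            start = None
    if start is not None:
        windows.append((start, samples[-1][0]))
    return windows

# One hypothetical station pass: rise around t=120 s, set around t=300 s
profile = [(0, -10), (60, 2), (120, 12), (180, 35), (240, 8), (300, -4)]
passes = contact_windows(profile)
```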
Figure 14.3: Ground station visibilities per orbit.
© ESA / ESOC
Unlike S/C operations in AIT or OBSW testing in a simulation infrastructure, it is not possible to monitor and control an entire S/C via a single or dual screen setup of a Mission Control System as depicted in figures 13.10 and 13.11. Operational Mission Control Systems like SCOS are scalable, and the TM data streams from the S/C can be routed to multiple workstations – each of them handling the data for a specific functional domain like AOCS, power, thermal or payload instruments.
Figure 14.4: Flight Operations Center control room example.
© ESA / ESOC
The more challenging the mission, the more sophisticated the FOC infrastructure is designed. For a standard Earth observation scientific satellite there will typically be one to two user workplaces, with 2-3 screens each, per main functional domain, i.e. for:
● Overall system control
● Data handling control
● AOCS control
● Power control
● Thermal control
● One for the entire payload data handling chain (PLs, MMFU, X-band)
● One per payload instrument9
The following figure depicts the functional domain driven workplace allocation in the main control room, (MCR), by the example of an ESA / ESOC mission – CryoSat-2. There are workplaces which provide overview and key parameter visibility to the spacecraft operations manager and the flight operations director, and furthermore workplaces monitoring detailed information for the individual subsystem controllers.
Figure 14.5: Mission control room and workplaces – schematic. © ESA / ESOC

9 Payload operations is normally not yet part of the LEOP phase, but the according operations workplaces are mentioned here already.
The workplaces are in detail:
● Overall mission control (Flight Operations Director)
● Overall S/C system control (Spacecraft Operations Manager)
● Subsystem operations engineers:
◊ AOCS
◊ Data handling
◊ RF subsystems (S-band and, if applicable, X-band)
◊ Power
◊ Thermal
◊ Payload(s) as far as applicable during LEOP10
● Mission Control Systems (monitoring the performance of all SCOS servers and clients) – the data systems manager
● Ground Stations and interface to ECC (Ground Operations Manager)
● Spacecraft Controller, SPACON, for anything beyond the control scope of an individual subsystem operations engineer
● Analyst
The ground communications system routes all the downlinked TM from the antenna stations into the database of the mission control system. This step already includes certain TM sequence and consistency checks on frame level and higher. The MCS software decommutates the relevant subset of TM parameters for the individual users according to the operational domains and forwards the data cyclically to the workstations. The subsystem operations engineers can configure their workstation displays to visualize principally any TM parameter received from the S/C – even outside their dedicated operational domain. Vice versa, S/C command and control is handled by building and uplinking command sequences from a command stack. For S/C commanding only a subset of the FOC workstations is used. The workplaces of the individual subsystem operations engineers have command and control access, and so does the Spacecraft Controller, (SPACON). The other workstations serve purely for monitoring. The command packets, subsequently released from the command stack, are generated by the MCS using the relevant information in the satellite database, and are then forwarded via the Network Interface System to the selected antenna ground station for transmission to the satellite. The entire workstation infrastructure including TM/TC database, servers etc. is redundant, as indicated by the “backup” workstations marked in figure 14.5. The same applies to the overall network infrastructure – see the “red / green” network connections for the diverse stations in the same figure. This guarantees S/C operability even in the case that one entire MCS branch fails. Especially during the LEOP phase the flight operations team is enhanced by the inclusion of experts from industry and also the agency project team, who have been responsible for the procurement of the satellite and launcher. The Project Representative, who is normally located in the Main Control Room next to the Flight Operations Director, provides the authority of the agency project management. The industry team and the specialists from the agency project team are usually located in a so-called “Project Support Room”, (PSR), and have visibility and parameter read access to the operations being performed by the flight control team in the mission control room.

10 In the example of CryoSat, only the navigation solution receiver DORIS was part of the LEOP phase for the payloads. Therefore an according DORIS workstation can be found in figure 14.5. CryoSat still used the DORIS radiopositioning system (cf. [123]) instead of GPS.
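The TM decommutation and routing step described earlier in this section, in which the MCS forwards each operational domain its subset of TM parameters, might be sketched as follows; the domain map and parameter names are invented:

```python
# Sketch of the decommutation-and-routing step: extracting the subset
# of TM parameters relevant to each operational domain and forwarding
# them to that domain's workstation. All names are invented.

DOMAIN_PARAMETERS = {
    "AOCS":    {"STR1_QUAT_W", "RWL1_SPEED"},
    "POWER":   {"BUS_VOLTAGE", "BATT_SOC"},
    "THERMAL": {"TANK_TEMP"},
}

def decommutate_and_route(tm_frame, forward):
    """tm_frame: dict of parameter -> value from one decoded TM packet.
    forward(domain, subset) delivers a domain's parameters downstream."""
    for domain, names in DOMAIN_PARAMETERS.items():
        subset = {k: v for k, v in tm_frame.items() if k in names}
        if subset:
            forward(domain, subset)

routed = {}
decommutate_and_route(
    {"BUS_VOLTAGE": 28.1, "RWL1_SPEED": 1200, "UNMAPPED": 0},
    lambda dom, sub: routed.setdefault(dom, sub))
```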
Figure 14.6: Project Support Room with S/C Supplier Workstations.
© ESA / ESOC
Figure 14.6 shows the workstations placed in the Project Support Room, of which the following shall be cited – again as an example from the CryoSat-2 mission:
● A dedicated workstation for the star tracker supplier, since for this mission it was a new and mission critical element.
● An AOCS analysis workstation, e.g. for the computation of data for orbit correction maneuvers.
● A workstation for the satellite geodesy system “Doppler Orbitography and Radiopositioning Integrated by Satellite”, (DORIS) – cf. [123].
● A dedicated workstation for OBSW runs, testing etc.
This assistant expert team monitors S/C health in parallel to the subsystem operations engineers and provides expertise in case of any unforeseen deviation from the expected behavior of the S/C. Anomaly treatment is performed under the management guidance of the Flight Operations Director. Any failure detection or isolation and recovery activities are to be signed off by quality assurance before command submission. During the platform commissioning phase, and even later during the payload commissioning phase, the level of support is subsequently reduced, but key members of the agency project team and the industry team remain on-site at the MCC until the S/C is declared ready for nominal operations.
14.2 Support Infrastructure
Besides the FOC / PGS control and monitoring infrastructure, the ground communications system and the antenna stations, the ground infrastructure comprises a significant number of additional tools which are not directly involved in daily S/C command and control. Of these, the three most important shall be cited:
Spacecraft Simulator:
Figure 14.7: The system simulation environment SIMSAT by ESOC.
© ESA / ESOC
One is the system simulation infrastructure. The simulator is either a system functionally comparable to the SVF treated in chapter 10.5.2 or even a direct derivative of it. It is used already prior to launch for the validation of flight procedures – as was explained in chapter 13.18 – and for the training of the mission control team. After launch it serves for the validation of operational conditions, for the simulation and debugging of failure conditions of the satellite, for symptom analysis and for the pretest of recovery activities. Furthermore it serves for the verification of OBSW patches before uplink.
Flight Dynamics Infrastructure: The next element to be mentioned is the flight dynamics infrastructure. The flight dynamics team has to perform continuous orbit monitoring as well as monitoring of specific AOCS equipment and the according equipment parameters. Orbit position tracking is performed via S/C TM from its position receivers (GPS / Galileo / GLONASS – or, in the depicted example case of CryoSat-2, the DORIS system). In addition this can be supported by tracking via laser retroreflectors from ground. The determined orbit is continuously compared to reference data, and via appropriate tools an orbit propagation into the future is performed, taking actual space weather information into account. By these means the times when orbit correction maneuvers will be needed can be predicted and according time slots can be reserved in the mission planning. The detailed quantitative design of the individual orbit correction maneuvers is also elaborated by the flight dynamics team. Flight dynamics also considers all continuous parameter changes in the S/C over lifetime, such as the change of center of gravity and the change of mass due to fuel consumption. In addition, flight dynamics has to handle all parameters which result from performance degradation over the mission lifetime – like RWL bearing friction. For these tasks simulation based infrastructures are also used – in most cases implemented on the basis of MATLAB / Simulink or Embedded MATLAB.
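The continuous mass and center-of-gravity bookkeeping mentioned above can be illustrated by a deliberately simplified, one-axis calculation; all numbers are invented:

```python
# Illustrative flight-dynamics bookkeeping: tracking S/C mass and
# center-of-gravity shift as fuel is consumed. One-dimensional and
# with invented numbers, purely to show the kind of continuous
# parameter update the flight dynamics team maintains.

def update_mass_properties(dry_mass_kg, dry_cog_m, fuel_kg, fuel_cog_m):
    """Return (total mass, combined CoG position along one axis)."""
    total = dry_mass_kg + fuel_kg
    cog = (dry_mass_kg * dry_cog_m + fuel_kg * fuel_cog_m) / total
    return total, cog

m0, cog0 = update_mass_properties(950.0, 1.20, 50.0, 0.40)   # at launch
m1, cog1 = update_mass_properties(950.0, 1.20, 10.0, 0.40)   # late in life
```

As the fuel depletes, the combined CoG migrates toward the dry-mass CoG; it is exactly this kind of drift that maneuver planning has to account for.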
Mission Planning Facility: The final type of infrastructure to mention are the mission planning systems. If the mission is not targeted at a continuous measurement – like GOCE was for the Earth gravity field measurement – dedicated target observations are the normal case for an Earth observation satellite. The planning for mission segments – typically a segment between two ground visibilities – is in the first place driven by so-called “user requests” or observation requests which the PGS receives. Such a request includes a time window for the observation, the desired payload, payload operations parameters like the observation spectral band, and target coordinates – or even a target area. Such user request files from the PGS, plus flight dynamics information, ground station visibility information and dedicated operational steps foreseen by the flight control team, are combined into mission timelines, i.e. TC command sequences for uplink to the satellite. Performing this for multiple user requests from different users, which easily start to compete for resources, is not a trivial task and requires dedicated SW infrastructure. As was already explained earlier, modern satellites like the ESA GMES program S/C support both time-tagged and position-tagged commanding, since via onboard GPS / Galileo receivers they are always informed about the current position and velocity vector, and via orbit propagator functions in the OBSW they can predict their position. This type of “position-tagging” is extremely useful for payload operations which are geo-located, or for tagging science data downlink operations to dedicated ground stations.
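The merging of competing user requests into a conflict-free timeline might be sketched, in strongly reduced form, as follows; a real mission planning facility evaluates far more constraints than the single observation-overlap check shown here, and all field names are invented:

```python
# Sketch of the timeline-merging step: turning accepted user requests
# into time-tagged commands and rejecting requests whose observation
# windows collide. Power, memory and station constraints are omitted.

def build_timeline(requests):
    """requests: list of dicts with 'start', 'end', 'payload', 'target'.
    Returns (time-ordered command list, rejected requests)."""
    timeline, rejected, busy_until = [], [], -1
    for req in sorted(requests, key=lambda r: r["start"]):
        if req["start"] < busy_until:           # competes with prior obs
            rejected.append(req)
            continue
        timeline.append((req["start"], f"{req['payload']}_OBSERVE",
                         req["target"]))
        busy_until = req["end"]
    return timeline, rejected

tl, rej = build_timeline([
    {"start": 100, "end": 200, "payload": "SAR", "target": "A"},
    {"start": 150, "end": 250, "payload": "OPT", "target": "B"},
    {"start": 300, "end": 350, "payload": "OPT", "target": "C"},
])
```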
15 Bringing a Satellite into Operation

Ariane V164 © ESA / Arianespace

15.1 Mission Operations Preparation
For the mission operations team it is an essential task to familiarize itself with the ground segment infrastructure, the Mission Control System, its control consoles, databases etc. The ground operations team must be in the position to exercise all nominal and contingency operations for the LEOP phase, the commissioning phase and the routine operations phase. Satellites are usually operated in two shifts per day. The A or prime team is the one which has already participated in the System Validation Tests and in the verification of the S/C Flight Procedures. This team handles the critical operations sequences. The B or secondary team is trained up subsequently. It can comprise less experienced operations engineers or operations experts from other missions. Training for a two shift team plus backup personnel may consist of:
● Classroom training and facility familiarization.
● Training and simulation sessions performed before launch:
◊ S/C operations controlling the real spacecraft (e.g. in SVT) or the S/C simulator.
◊ The first simulations are “nominal” to allow all team members to become familiar with the sequence of operations to be performed.
◊ A series of simulations of the critical phases with an increasing level of complexity for all teams follows.
◊ Anomalies on the simulated satellite, ground segment facilities, launcher and ground stations are injected in increasing numbers and levels of difficulty, culminating in parallel failures of different systems.
◊ Shift handover is trained, both in nominal situations and in the case where anomalies have prevented one team from completing all of the planned operations.
◊ Routine operations over several days are trained – with the simulated S/C – to allow the spacecraft controllers and subsystem operations engineers to validate the systems and procedures to be used after the LEOP phase. The ground segment infrastructure and the antenna station network are included – partly as simulations – via so-called Mission Readiness Tests, to validate the ground stations using an already flying satellite as the target.
● Participation and training of all external partners.
● Verification of event sequences (uninterrupted).
● Usually two launch rehearsals, one or two of them performed with:
◊ the fully included FOC,
◊ potential antenna stations,
◊ a simulated S/C to exercise the first acquisition operations to be performed following the countdown activities,
◊ and the launch site interface – personnel, data lines from launch site to FOC, Go / No-Go flag transmission, launcher and AIT.
Mission Operations Preparation
Detailed complete system simulators resulting from the simulator infrastructures implemented for OBSW testing – as described in chapter 10.5.2 – can finally be applied to support spacecraft operations. The SVF configuration is the most appropriate setup. It can be modified such that the control console is replaced by the flight operations system installed in the FOC. The SVF simulator's interfaces and the data protocols between the simulator and the control console are already implemented to be compatible with the Mission Control System of the FOC. The resulting simulator setup in the ground station can be used for
● training of the spacecraft operations staff, and for
● tests of OBSW patches and bug fixes on the simulator before they are uplinked to the real spacecraft.
The acceptance of such simulators originating from spacecraft system development varies largely from agency to agency. Some use the system simulators, arguing that such a simulator has already passed a comprehensive verification process and thus has a very good validation quality. DLR / GSOC, for example, applied the S/C supplier's simulators from the engineering and AIT phase for satellite operations in the TerraSAR-X project, and Eumetsat applied the MeteoSat simulator from industry – especially for the 2nd MeteoSat Generation (MSG).
Some agencies do not accept system simulators from the S/C development cycle because their philosophy is to use only operations-support tools that are developed independently from the S/C AIT. This approach minimizes the risk of inherent development process errors remaining hidden; such errors can then be spotted during operation. The European Space Operations Centre (ESOC), for example, has developed its own system simulation infrastructure called SIMSAT, which was already presented in figure 14.7. For technical details please also refer to [32].
Figure 15.1: System simulator for spacecraft operations support.

The second major topic for simulators in mission operations preparation has also already been mentioned as a side topic: it must be assured that the technical infrastructure of both FOC and PGS is 100% compatible with the real S/C, not only with a simulation. For this purpose – and not so much for reasons of crew
familiarization – the so-called "System Validation Tests" (SVT) are performed during S/C integration phase D. "System" here refers to the overall mission system, i.e. including both the ground segment and the space segment.
● As cited in chapter 13.18, during the SVTs the S/C – positioned in the clean room at the manufacturer's premises – is commanded remotely from the FOC.
● Multiple such tests are performed with increasing functional test scope – the ECSS standards require four of them, SVT 0 to SVT 3.
● In the higher ones, payloads are also operated and payload science data are recorded as far as possible under clean room conditions – significant limitations will exist, for example, for radar instruments.
● Payload science data playback from the MMFU via the X-band link (excluding the RF part) is then streamed to the PGS to verify the compatibility of the PGS tools with the X-band data stream and formats.
15.2 Launch and LEOP Activities
A number of activities are carried out during the so-called "Launch and Early Orbit Phase" (LEOP). For these activities the S/C prime contractor supports the operations team as defined in the catalog of their phase E tasks. Days before launch, all control stations in the FOC are rechecked and the flight operators prepare for the launch date. Shift plans are frozen and the last organizational topics are clarified. Although the details of the LEOP phase activities differ greatly from mission to mission – especially in the Earth observation and science domain – the general activities include the following:
● Final pre-launch check of the ground systems including FOC, antenna ground stations and communication links.
● Closing of the launcher fairing, with the S/C connected to ground via the umbilical connector.
● Final pre-flight check of the S/C at the launch site.
● Final pre-flight check of the launcher Go / No-Go signals.
● In case of a launch with running S/C OBC, continuous monitoring of the proper autoboot of the S/C OBC and OBSW into launch mode during the early phase of the countdown.
● During the ascent phase of the launch no signals are available.
● In case the launcher separation happens during ground station visibility, LEOP tasks comprise monitoring the execution of the post-separation configuration operations, performed autonomously by the satellite in the frame of the LEOP autosequence, which are:
◊ In case of a cold launch (OBC off), first of all a proper boot of OBC / OBSW.
◊ Ground connection establishment with the OBSW – automatically transmitted TM from the S/C.
◊ Establishing the command link with the satellite and starting the orbit determination via radiometric data (ranging).
◊ Performing auto-deployments and initiating deployments of antennas and solar arrays, respectively.
◊ Control of the attitude stabilization, respectively its monitoring in case of autosequence based attitude acquisition.
◊ Via the last two steps, verifying that the satellite configuration after launcher separation is as expected w.r.t. approximate orbit, attitude and correct deployments.
● In case of launcher separation out of ground station visibility, the first step in the vicinity of a station is a TC based ground contact establishment and commanding the downlink of the LEOP autosequence TM packet history for verification of proper S/C status.
● Further steps – which for some missions are already part of the commissioning – comprise the commanding required to transition the satellite into the higher operational modes needed for payload activation and commissioning operations, for example the switch-on of AOCS units, power subsystem equipment and the thermal subsystems needed for payload operations.
● Furthermore, the detailed verification of the correct orbit and the preparation of potentially necessary orbit acquisition maneuvers is in most cases still counted as part of the LEOP phase.
Figure 15.2: Mission operations
© ESA / ESOC
For a simpler Earth observation satellite these LEOP activities altogether sum up to two to three days. For more complex missions or satellites in specific orbits – such as the Hubble Space Telescope – these tasks can consume a few weeks in total. The same applies to constellations like TerraSAR-X / TanDEM-X or navigation constellations like GPS / Galileo / GLONASS. All these activities – from the countdown tasks and the activities performed only seconds after launcher separation to those performed days after launch – are precisely planned beforehand. Each shift performs its scheduled operations. The plans take into account the available ground station visibilities, plus any constraints coming from the different support facilities.
Figure 15.3: Post separation activities.
© ESA / ESOC
Then on day 0 during the countdown, the launch resource criteria – also called Go / No-Go criteria – are checked step by step, namely the data links to the antenna stations, to the launch site, the telemetry channel from the launcher to the FOC etc. Please also refer to figure 15.4. Finally the S/C is launched, and after upper stage separation it starts executing the key parts of its LEOP autosequence. At the first successful ground contact, essential telemetry is downlinked and the operators get a first visibility of the status. An example of a S/C telemetry monitoring desktop is provided in figure 15.5.
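The step-by-step Go / No-Go poll amounts to a simple aggregation: launch proceeds only if every criterion reports "Go". A minimal sketch, with criteria names invented for illustration (loosely following the kind of items shown in figure 15.4):

```python
# Sketch of the launch resource check: every Go / No-Go criterion
# must report "Go" before launch; a single "No-Go" holds the count.
# Criteria names are illustrative only.

criteria = {
    "FOC control system": "Go",
    "data link to antenna station 1": "Go",
    "data link to antenna station 2": "Go",
    "data link to launch site": "Go",
    "launcher telemetry channel to FOC": "Go",
}

def launch_poll(criteria):
    """Return overall status plus the list of blocking criteria."""
    no_go = [name for name, flag in criteria.items() if flag != "Go"]
    return ("GO for launch", []) if not no_go else ("HOLD", no_go)

status, blockers = launch_poll(criteria)
print(status)                      # all criteria green

criteria["data link to launch site"] = "No-Go"
status, blockers = launch_poll(criteria)
print(status, blockers)            # hold, with the failed criterion listed
```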
Figure 15.4: Launch Go / No-Go criteria.
© ESA/ESOC
Figure 15.5: S/C telemetry monitoring desktop – example: CryoSat-2.
© ESA/ESOC
15.3 Platform and Payload Commissioning Activities
If all goals of the LEOP phase plan have been successfully achieved, the Flight Operations Director declares the LEOP completed and the Commissioning Phase can start. The key task of the commissioning phase is the subsequent taking into operation of the so-far unused platform equipment and of all payload instruments, the verification of all operational modes, and the performance of all calibration and performance characterization tasks for both platform and payload. The distribution of S/C platform and payload commissioning tasks between the LEOP phase and a dedicated S/C commissioning phase is highly mission specific. On the one hand, the LEOP phase might not even cover all AOCS modes or use all AOCS equipment. An example is TerraSAR-X, where the reaction wheels were first activated during platform commissioning – not yet during the LEOP phase. On the other hand, LEOP might already include initial payload switch-on and checkout and X-band data downlink. For payload instrument commissioning, the detailed tasks are again highly dependent on the instrument characteristics and mission type and have to be analyzed individually per mission. Payload calibration methods are:
● Calibration via flyover of reference targets and comparison of received to expected results – a typical method for Earth observation satellites.
● Radio signal quality measurements – essential for telecom satellites.
● Pointing to reference targets and calibration of the sensor against target characteristics acquired in previous missions etc. – typically applied for space telescopes and the like.
● Platform characterization may imply previously performed specific platform equipment operations, such as STR characterizations or GPS geolocation characterization.
● The commissioning phase in many cases includes the calibration / characterization of the ground processing facilities in the PGS which derive higher level mission product data from the raw measurements.
Similar to the LEOP, the S/C commissioning phase is planned in detail before launch, but the planning is generally at a higher level, and the activities are usually not time critical and are subject to change depending on the satellite performance and the operations during the LEOP phase. The commissioning phase may last from several weeks to a number of months, depending on the S/C type, orbit, number and type of payloads etc. An example of such a commissioning phase planning is given in figure 15.6 below.
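The first calibration method listed above – flyover of a reference target and comparison of received to expected results – can be sketched numerically. The sketch below estimates a constant instrument bias from the comparison; the numbers are invented for illustration and real calibrations estimate far richer correction models (gain, nonlinearity, geometry):

```python
# Sketch: calibration via flyover of a reference target - compare the
# received instrument values with the known target characteristics and
# derive a correction to be applied in ground processing.
# All numbers are invented for illustration.

expected = [10.0, 20.0, 30.0, 40.0]   # known reference target values
received = [10.8, 20.8, 30.8, 40.8]   # instrument measurements over target

def estimate_bias(expected, received):
    """Estimate a constant instrument bias as the mean difference."""
    diffs = [r - e for e, r in zip(expected, received)]
    return sum(diffs) / len(diffs)

bias = estimate_bias(expected, received)
calibrated = [r - bias for r in received]
print(f"bias = {bias:.2f}")   # -> bias = 0.80
```

The derived correction would then be configured in the PGS processing chain, which is why PGS calibration / characterization appears as a commissioning task in its own right.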
Figure 15.6: S/C system commissioning schedule – Example: CryoSat-2.
© ESA/ESOC
After platform and payload commissioning the S/C supplier's tasks are done, and the normal operations phase with continuous mission product generation starts under the sole responsibility of the operations team.
Figure 15.7: Kiruna antenna station facilities.
© ESA
Annex: Autonomy Implementation Examples
New Horizons © NASA
Autonomous onboard SW / HW Components

In October 2001 ESA launched the first satellite of the PROBA series – "Project for Onboard Autonomy". With these satellites, new technologies heading for higher levels of onboard autonomy and higher automation levels in satellite operations were tested. PROBA 1 served for the in-flight testing of the following technologies (cf. [108]):
● First in-orbit use of Europe's 32-bit space application microprocessor – the ERC32 chip set.
● First use of a digital signal processor (DSP) as an instrument control computer (ICU).
● First in-orbit application of a newly designed "autonomous" star sensor.
● First use of onboard GPS.
● The following innovations in the onboard software:
◊ ESA for the first time flying an OBSW coded in C instead of Ada.
◊ ESA for the first time flying an OBSW based on an operating system (here VxWorks) instead of a pure Ada coded OBSW implementation.
◊ The GNU C compiler for the ERC32 target finally being validated by flying a GNU C compiled OBSW on the ERC32.

Figure A1: PROBA 1. © ESA

The achieved new onboard functionalities were:
● For the first time an ESA satellite with position determination in orbit by means of GPS.
● Attitude determination through an active star sensor automatically identifying star constellations.
● Autonomous prediction of navigation events (target flyover, station flyover).
● A limited onboard "mission planning" functionality based thereupon.
Improvement Technology – Optimizing the Mission Product

This example (cf. [109]) depicts a combined ground / space architecture of the ESA study "Autonomy Testing", in which the design of a potential onboard mission planning function for payload operation was analyzed. The idea behind this is that users "only" need to transmit their observation requests ("user requests") to the combined system consisting of the space segment (simulated satellite) and the ground segment (simplified ground station). The customer requesting a mission product defines by which payload, in which operating mode, with which settings, which target area is to be observed in which time window.

It was analyzed in how far it would make sense to implement parts of the mission planning and overall system timeline generation (ground + space) on board the spacecraft to shorten mission prediction response times. In such cases the satellite constantly has to collect customer requests from the various sequentially visible ground stations and is equipped with an intelligent mission planning system. This system generates a detailed timeline comprising all commands for all involved platform subsystems – mainly AOCS – and the involved payload(s).
Autonomous onboard architecture (figure A2): the onboard system controller hosts the TINA timeline generator – providing onboard generation of directly executable mission timelines from user requests and platform service requests, running under VxWorks – and a system supervisor taken from the DLR MARCO study, with the level of autonomy scalable from simple macrocommand execution via onboard control procedure processing up to onboard timeline execution. The test infrastructure comprises the simulated satellite and space environment (SSVF simulator with spacecraft and environment models derived from the SSVF), the ground segment / checkout system (SSVF/CGS configuration) and a TINA console for user request definitions.
Figure A2: Onboard autonomy test infrastructure: "Autonomy Testbed".
© Astrium GmbH
The prototype from the ESA "Autonomy Testing" study consisted of:
● A Core EGSE acting as a simplified ground station
● A satellite simulator
● An onboard computer board as a simplified single board computer
● An onboard software with a macrocommand interface (somewhat like OBCPs) running on this board
● A mission planning algorithm which created an activity timeline from the cited user requests, including all macrocommands to the onboard software.

The onboard software executed the spacecraft macrocommands in the generated mission timeline and thus controlled the simulated satellite. In this autonomy testbed, complex scenarios were tested which comprised:
● Nominal operational cases in which user requests were uplinked and processed, and the results were downlinked at the next ground station contact.
● Scenarios which led to planning conflicts on board and where the user requests could only be partially satisfied within the operating period.
● Finally, scenarios during which manually injected equipment failures occurred and where a suitable error recovery first needed to be identified and performed – followed by a replanning of the activities, since after error recovery the satellite had already missed some of the observation targets. See also figure A4.
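The "planning conflict" scenario – user requests that can only be partially satisfied within the operating period – can be illustrated with a minimal greedy scheduler. This is a sketch under invented assumptions (request names, time units, first-come booking policy); the real TINA planner was far more sophisticated:

```python
# Sketch of onboard timeline generation from user requests: a greedy
# planner books non-overlapping observation slots; requests that
# conflict with already scheduled ones are reported back as only
# partially satisfiable (downlinked at the next station contact).

def plan_timeline(requests):
    """requests: list of (name, start, end) observation windows."""
    timeline, rejected = [], []
    for name, start, end in sorted(requests, key=lambda r: r[1]):
        if timeline and start < timeline[-1][2]:   # overlaps last booking
            rejected.append(name)
        else:
            timeline.append((name, start, end))
    return timeline, rejected

requests = [
    ("UR-1: radar scene A", 100, 160),
    ("UR-2: radar scene B", 150, 210),   # conflicts with UR-1
    ("UR-3: optical scene C", 220, 260),
]
timeline, rejected = plan_timeline(requests)
print([name for name, _, _ in timeline])   # scheduled observation slots
print(rejected)                            # reported as planning conflict
```

In the testbed, each scheduled slot would expand into the macrocommand sequence for the AOCS and the payload, while the rejected requests would be flagged in the status downlink.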
Figure A3: Autonomy testbed setup.
© Astrium GmbH
Such mission planning algorithms impose high requirements on
● the onboard software, which needs to intercept any potentially erroneous commands that might be created by the mission planning tool, and
● the spacecraft simulation infrastructure, which has to reflect the overall scenario – including payload operations – sufficiently realistically.
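The first requirement – the OBSW intercepting potentially erroneous planner-generated commands – essentially means validating every macrocommand against a known command catalog and argument ranges before execution. A minimal sketch; the command identifiers and limits below are invented for illustration:

```python
# Sketch: the OBSW-side guard that intercepts potentially erroneous
# commands generated by the onboard mission planning tool before they
# reach the subsystems. Command names and limits are invented.

VALID_COMMANDS = {
    # command id -> (min argument, max argument)
    "SET_WHEEL_TORQUE": (-0.05, 0.05),   # Nm
    "PAYLOAD_POWER":    (0, 1),          # off / on
}

def intercept(macrocommand):
    """Return True if the command may be executed, False otherwise."""
    cmd, arg = macrocommand
    if cmd not in VALID_COMMANDS:
        return False                     # unknown command id
    lo, hi = VALID_COMMANDS[cmd]
    return lo <= arg <= hi               # argument range check

timeline = [("SET_WHEEL_TORQUE", 0.02),
            ("SET_WHEEL_TORQUE", 0.50),  # out of range -> rejected
            ("THRUSTER_FIRE", 1.0)]      # unknown -> rejected
executed = [mc for mc in timeline if intercept(mc)]
print(executed)
```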
Figure A4: Autonomous recovery scenario on board. © Astrium GmbH
(The figure shows, per orbit set, the ground segment uplinking and validating timelines TL1...TL4 while the satellite executes them on board; after a failure the affected queue is diagnosed and recovered, the rest of the timeline is executed, and a status with the missed user requests from the affected sub-timeline is downlinked.)
Enabling Technology – Autonomous OBSW for Deep Space Probes
Figure A5: New Horizons Probe.
© NASA
In spring 2006 NASA launched the deep space probe "New Horizons" to explore the trans-Neptunian objects Pluto and Charon. It represents probably the highest level of onboard autonomy flown to date. The onboard software of New Horizons is based on a case-based decision algorithm and a rule chainer algorithm. In place of the onboard control procedures used in conventional satellites, structures are implemented here applying Artificial Intelligence techniques to control the nominal approach maneuvers as well as the error recovery. Cases are implemented on the lower processing level to identify abstract symptoms from parameter measurements, and above these cases a rule network is implemented for situation analysis and system control. The following figure provides a sketch of a small extract from the overall rule network – here for the handling of an error during Pluto approach. Depending on the detailed conditions, the failure can either be handled or results in the space probe going to safe mode. The rule network implements a forward chaining method for processing. For explanation of the figure below please also refer to [110] and [111]:
● The Rxxx identifiers represent rules.
● The Myyy identifiers represent macros which are executed by the activated rules. All spacecraft commands initiated by rules are encapsulated in such macros.
● The transition times for the rule / macro execution are depicted as well (some cover several days due to spacecraft coast or approach phases).
● For the rules / macros the onboard processor executing them is shown (in this extract from the rule network, P3 and P5 are cited).
● In the rule identification the following information is contained (for details see [110]):
◊ The rule priority
◊ The rule persistence
◊ The methodology of how the rule result is to be handled by the inference system when the rule result is obviously outdated
◊ The state during the loading of the rule into memory (active / inactive).
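The two-layer scheme described above – cases deriving abstract symptoms from measurements, and a forward-chaining rule network firing macros on those symptoms – can be sketched in a few lines. The rule and macro names below are invented for illustration and do not reflect the real New Horizons rule base (see [110] for the actual design):

```python
# Minimal forward-chaining sketch in the spirit of the New Horizons
# rule network: cases map raw measurements to abstract symptoms, and
# rules fire on symptoms to execute macros (encapsulated commands).

def case_layer(measurements):
    """Lower processing level: derive abstract symptoms."""
    symptoms = set()
    if measurements["bus_voltage"] < 24.0:
        symptoms.add("undervoltage")
    if measurements["star_tracker_valid"] is False:
        symptoms.add("attitude_ref_lost")
    return symptoms

RULES = [
    # (rule id, required symptoms, macro to execute, asserted fact)
    ("R100", {"undervoltage"},               "M_load_shed", "loads_shed"),
    ("R200", {"undervoltage", "loads_shed"}, "M_safe_mode", "safe_mode"),
]

def forward_chain(symptoms):
    """Fire rules whose conditions hold; newly asserted facts may
    enable further rules (forward chaining) until a fixed point."""
    facts, executed = set(symptoms), []
    changed = True
    while changed:
        changed = False
        for rule_id, cond, macro, fact in RULES:
            if cond <= facts and fact not in facts:
                executed.append(macro)
                facts.add(fact)
                changed = True
    return executed, facts

macros, facts = forward_chain(case_layer(
    {"bus_voltage": 22.5, "star_tracker_valid": True}))
print(macros)   # undervoltage handled, chaining into safe mode
```

Note how R200 only becomes eligible after R100 has asserted its fact: this chaining of rule consequences into further rule conditions is what distinguishes the approach from a flat command procedure.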
Figure A6: Extract of a rule-based mode-transition network of an OBSW (from [110]) © NASA
References

"Tranquility Base here, the Eagle has landed."
Neil Armstrong, July 20, 1969, 20h 17m 43s UTC
References on Missions driving OBC / OBSW Technology

General:
[1] Tomayko, James: Computers in Spaceflight: The NASA Experience. http://www.hq.nasa.gov/office/pao/history/computers/Part1-intro.html

NASA Mercury Program:
[2] http://www.nasa.gov/mission_pages/mercury/missions/program-toc.html
[3] http://www-pao.ksc.nasa.gov/kscpao/history/mercury/mr-3/mr-3.htm

NASA Gemini Program:
[4] N.N.: On the Shoulders of Titans: A History of Project Gemini (NASA report SP-4203)
[5] http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19780012208_1978012208.pdf
[6] N.N.: Project Gemini – A Chronology (NASA report SP-4002). http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19690027123_1969027123.pdf
[7] Leitenberger, Bernd: Das Gemini-Programm. Technik und Geschichte. Books on Demand, 2008. ISBN 3-8370-2968-9

NASA Apollo Program:
[8] Apollo Mission: http://spaceflight.nasa.gov/history/apollo/index.html
[9] Apollo Guidance Computer: http://authors.library.caltech.edu/5456/1/hrst.mit.edu/hrs/apollo/public/visual3.htm and http://de.wikipedia.org/wiki/Apollo_Guidance_Computer
[10] Tomayko, James: The Apollo guidance computer: Hardware. In: Computers in Spaceflight: The NASA Experience. NASA
[11] Tomayko, James: The Apollo guidance computer: Software. In: Computers in Spaceflight: The NASA Experience. NASA

NASA Space Shuttle Program:
[12] N.N.: IBM and the space shuttle. http://www-03.ibm.com/ibm/history/exhibits/space/space_shuttle.html
[13] Space Shuttle Onboard Computer: http://en.wikipedia.org/wiki/IBM_AP-101
[14] Tomayko, James: Computers in Spaceflight: The NASA Experience. http://www.hq.nasa.gov/office/pao/History/computers/Ch4-2.html

NASA Mariner Program:
[15] Dunne, James A.; Burgess, Eric: NASA History Office: The Voyage of Mariner 10. http://history.nasa.gov/SP-424/sp424.htm
[16] Tomayko, James: Computers in Spaceflight: The NASA Experience, Appendix IV – Mariner Mars 1969 Flight Program. http://www.hq.nasa.gov/office/pao/History/computers/Appendix-IV.html
[17] Hooke, A.J.: In Flight Utilization of the Mariner 10 Spacecraft Computer. J. Br. Interplanetary Society, 29, 277 (April 1976)

NASA Voyager Program:
[18] N.N.: JPL News & Features: Engineers Diagnosing Voyager 2 Data System. http://www.jpl.nasa.gov/news/news.cfm?release=2010-151
NASA Galileo Mission:
[19] N.N.: NASA: Solar System Exploration: Galileo – JPL: Galileo Project Home. http://solarsystem.nasa.gov/galileo/
[20] Tomayko, James: Computers in Spaceflight: The NASA Experience, Chapter Six: Distributed Computing On Board Voyager and Galileo. http://history.nasa.gov/computers/Ch6-3.html
[21] Thomas, J. S.: A command and data subsystem for deep space exploration based on the RCA 1802 microprocessor in a distributed configuration. Jet Propulsion Laboratory, 1980. Document ID: 19810003139, Accession Number: 81N11647

References on Microprocessors for Space

CDP1802:
[22] N.N.: CDP1802 datasheet. http://homepage.mac.com/ruske/cosmacelf/cdp1802.pdf
[23] N.N.: RCA 1800 Microprocessor User Manual for the CDP1802 COSMAC Microprocessor

Am2900:
[24] N.N.: The Am2900 Family Data Book. http://www.bitsavers.org/pdf/amd/_dataBooks/1979_AMD_2900family.pdf

MIL-STD-1750 compatibles:
[25] N.N.: MIL-STD-1750 A. http://www.xgc.com/manuals/m1750-ada/m1750/book1.html
[26] N.N.: Dynex Semiconductor MA31750 Processor (Datasheet). http://www.dynexsemi.com/assets/SOS/Datasheets/DNX_MA31750M_N_Feb06_2.pdf
[27] N.N.: UT1750AR RadHard RISC Microprocessor Data Sheet. http://aeroflex.com/ams/pagesproduct/datasheets/ut1750micro.pdf

RS/6000 – RAD6000:
[28] http://en.wikipedia.org/wiki/IBM_POWER
[29] RAD6000™ Space Computers. http://www.baesystems.com/BAEProd/groups/public/documents/bae_publication/bae_pdf_eis_sfrwre.pdf

MIPS R3000 (Mongoose V):
[30] N.N.: Synova Inc. http://www.synova.com/proc/processors.html

ERC32 and LEON:
[31] N.N.: Sun SPARC: http://en.wikipedia.org/wiki/Sun_SPARC
[32] ESA microelectronics ERC32 website: http://www.esa.int/TEC/Microelectronics/SEM2XKV681F_0.html
[33] N.N.: SPARC Series Processors ERC32 Documentation. http://klabs.org/DEI/Processor/sparc/ERC32/ERC32_docs.htm
[34] N.N.: LEON2 and 3 VHDL Code (under LGPL). http://www.gaisler.com/
[35] N.N.: LEON Processors. http://www.gaisler.com/cms/index.php?option=com_content&task=section&id=4&Itemid=33
[36] LEON 3 Single Board Computers: http://www.gaisler.com/cms/index.php?option=com_content&task=view&id=189&Itemid=120 and http://www.gaisler.com/cms/index.php?option=com_content&task=view&id=315&Itemid=212
[37] Koebel, Franck; Coldefy, Jean-François: SCOC3: a space computer on a chip – An example of successful development of a highly integrated innovative ASIC. Microelectronics Presentation Days, ESA/ESTEC, March 2010, Noordwijk, Netherlands
[38] Poupat, Jean-Luc; Lefèvre, Aurélien; Koebel, Franck: OSCAR: A compact, powerful and versatile On Board Computer based on LEON3 Core. Data Systems in Aerospace, DASIA 2011 Conference, 17-20 May 2011, San Anton, Malta

Diverse:
[39] Weigand, Roland: ESA Microprocessor Development Status and Roadmap. Data Systems in Aerospace, DASIA 2011 Conference, 17-20 May 2011, San Anton, Malta

References on Programming Languages

HAL/S:
[40] Highlevel Assembler Language / Shuttle – HAL/S: http://en.wikipedia.org/wiki/HAL/S
[41] NASA Office of Logic Design: http://klabs.org/DEI/Processor/shuttle/ – HAL/S Compiler System Specification; HAL/S Language Specification; HAL/S Programmer's Guide; HAL/S-FC User's Manual; Programming in HAL/S
JOVIAL:
[42] JOVIAL (Jules Own Version of the International Algorithmic Language): http://en.wikipedia.org/wiki/JOVIAL
[43] N.N.: MIL-STD-1589C, Military Standard: JOVIAL (J73). United States Department of Defense, 6 July 1984. http://www.everyspec.com/MIL-STD/MIL-STD+(1500+-+1599)/MIL-STD1589C_14577/

Ada:
[44] Ada: http://en.wikipedia.org/wiki/Ada_(programming_language)
[45] Barnes, John: Programming in Ada 2005. Addison-Wesley Longman, Amsterdam, 2006. ISBN 978-0-321-34078-8

C:
[46] Kernighan, Brian W.; Ritchie, Dennis M.: The C Programming Language. Prentice Hall, 2nd edition, 1988. ISBN 978-0131103627

C++:
[47] Stroustrup, Bjarne: The C++ Programming Language. Addison Wesley, Reading, Massachusetts, 2nd edition, 1993. ISBN 0-201-53992-6
[48] Eckel, Bruce: Using C++, Covers C++ Version 2.0. Osborne McGraw-Hill, Berkeley, 1989. ISBN 0-07-881522-3
[49] Ellis, Margaret A.; Stroustrup, Bjarne: The Annotated C++ Reference Manual. Addison Wesley, Reading, Massachusetts, 1990. ISBN 8131709892

Assembler to C:
[50] Patt, Yale; Patel, Sanjay: Introduction to Computing Systems: From bits & gates to C & beyond. McGraw-Hill, 2nd edition, 2003. ISBN 978-0072467505

References on Realtime Operating Systems

VxWorks:
[51] http://www.windriver.com/products/vxworks

RTEMS:
[52] OAR Corporation: http://www.rtems.com

References on Data Buses and other Interfaces

MIL-STD-1553B:
[53] MIL-STD-1553B: Digital Time Division Command/Response Multiplex Data Bus. United States Department of Defense, September 1987. http://www.sae.org/technical/standards/AS15531
[54] N.N.: MIL-STD-1553 Tutorial and Reference from Alta Data Technologies. http://www.altadt.com/support/tutorials/mil-std-1553-tutorial/

SpaceWire:
[55] ECSS SpaceWire Standard Homepage: ECSS-E-ST-50-12C – SpaceWire – Links, nodes, routers and networks.
http://www.ecss.nl/forums/ecss/_templates/default.htm?target=http://www.ecss.nl/forums/ecss/dispatch.cgi/standards/docProfile/100654/d20080802144344/No/t100654.htm
Subpages: ECSS-E-ST-50-53C SpaceWire – CCSDS packet transfer protocol; ECSS-E-ST-50-52C SpaceWire – Remote memory access protocol; ECSS-E-ST-50-51C SpaceWire protocol identification
[56] ESA SpaceWire Homepage: http://spacewire.esa.int/content/Home/HomeIntro.php
[57] http://en.wikipedia.org/wiki/SpaceWire

Controller Area Network, CAN:
[58] ISO 11898: http://www.iso.org/iso/search.htm?qt=Controller+Area+Network&searchSubmit=Search&sort=rel&type=simple&published=true
[59] Davis, Robert I.; Burns, Alan; Bril, Reinder J.; Lukkien, Johan J.: Controller Area Network (CAN) schedulability analysis: Refuted, revisited and revised. Real-Time Systems, Volume 35, Number 3, 239-272. DOI: 10.1007/s11241-007-9012-7. http://www.springerlink.com/content/8n32720737877071/
[60] http://en.wikipedia.org/wiki/Controller_area_network

OSI Network Model:
[61] Open Systems Interconnection model (OSI model): http://en.wikipedia.org/wiki/OSI_model

References on OBC Debug and Service Interfaces

JTAG / ICD:
[62] http://en.wikipedia.org/wiki/JTAG
[63] http://en.wikipedia.org/wiki/In-circuit_debugger
Service Interface:
[64] Wiegand, M.; Schmidt, G.; Hahn, M.: Next Generation Avionics System for Satellite Application. Proceedings of DASIA 2003 (ESA-SP-532), pp. 38 ff, 2-6 June 2003, Prague, Czech Republic. http://articles.adsabs.harvard.edu//full/2003ESASP.532E..38W/0000038.001.html

References on Onboard Equipment Development

Technology Readiness Level:
[65] Mankins, John C.: Technology Readiness Levels, A White Paper, April 6, 1995. Advanced Concepts Office, Office of Space Access and Technology, NASA
[66] http://en.wikipedia.org/wiki/Technology_readiness_level

References on Technologies for persistent Memory

Flash Memory Technology:
[67] http://en.wikipedia.org/wiki/Flash_memory
[68] http://en.wikipedia.org/wiki/Solid-state_drive

Magnetoresistive Memory Technology:
[69] http://en.wikipedia.org/wiki/MRAM
[70] http://www.everspin.com/products.html

References on Solid State Recorders

Solid State Recorders:
[71] http://www.astrium.eads.net/node.php?articleid=4966
[72] http://sbir.nasa.gov/SBIR/successes/ss/5-004text.html
References on Command / Control Standards
[73] Wertz, James R.; Larson, Wiley J. (Eds.): Space Mission Analysis and Design. Springer, Microcosm Press, 3rd edition, 2008. ISBN 978-1-881883-10-4
[74] ECSS-E-70-01A – Ground systems and operations – Part 1: Principles and requirements
[75] ECSS-E-70-01A – Ground systems and operations – Part 2: Document requirements definitions, Annex D: Space segment user manual (SSUM)
[76] CCSDS 131.0-B-1 – TM Synchronization and Channel Coding
[77] CCSDS 132.0-B-1 – TM Space Data Link Protocol
[78] CCSDS 133.0-B-1 – Space Packet Protocol
[79] CCSDS 231.0-B-1 – TC Synchronization and Channel Coding
[80] CCSDS 232.0-B-1 – TC Space Data Link Protocol
[81] CCSDS 232.1-B-1 – Communications Operation Procedure-1; CCSDS 732.0-B-2 – AOS Space Data Link Protocol
[82] ECSS-E-ST-50-01A Space engineering – Space data links – Telemetry synchronization and channel coding
[83] ECSS-E-ST-50-03A Space engineering – Space data links – Telemetry transfer frame protocol
[84] ECSS-E-ST-50-04A Space data links – Telecommand protocols, synchronization and channel coding
[85] ECSS-E-ST-50-05A Space engineering – Radio frequency and modulation
[86] ECSS-E-70-41A Space engineering – Ground Systems and Operations – Telemetry and telecommand packet utilization
References on SADT / IDEF0 based Software Design
[87] Marca, D.; McGowan, C.: Structured Analysis and Design Technique. McGraw-Hill, 1987. ISBN 0-07-040235-3
[88] N.N.: Overview of IDEF0: http://www.idef.com/idef0.htm

References on HOOD Software Design
[89] http://www.esa.int/TEC/Software_engineering_and_standardisation/TECKLAUXBQE_0.html
[90] Rosen, J-P.: HOOD: An Industrial Approach for Software Design. Edited by: HOOD User Group. ISBN 2-9600151-0-X
[91] Selic, Bran; Gullekson, Garth; Ward, Paul T.: Real-Time Object-Oriented Modeling. Wiley & Sons, 1994. ISBN 978-0471599173
[92] Burns, A.; Wellings, A.: Hard Real-Time HOOD: A Structured Design Method for Hard Real-Time Ada Systems. Elsevier Science Ltd, 1995. ISBN 978-0444821645

References on UML Software Design
[93] Booch, Grady; Rumbaugh, James; Jacobson, Ivar: The Unified Modeling Language User Guide. Addison Wesley Longman, Reading, Massachusetts, 1999. ISBN 0-201-57168-4
[94] Rumbaugh, James; Jacobson, Ivar; Booch, Grady: The Unified Modeling Language Reference Manual. Addison Wesley Longman, 1999. ISBN 020130998X
[95] Si Alhir, Sinan: Learning UML. O'Reilly, 2003. ISBN 0-596-00344-7
[96] N.N.: OpenAmeos – The OpenSource UML Tool. http://www.openameos.org/
[97] Fowler, Martin: UML Distilled. 3rd edition, Addison-Wesley Longman, Amsterdam, 2003. ISBN 978-0321193681
References on Simulation and Verification Testbeds
[98] Eickhoff, Jens: Simulating Spacecraft Systems, Springer Verlag GmbH, 2009, ISBN 978-3-642-01275-4
[99] Eisenmann, Harald; Cazenave, Claude: SimTG: Successful Harmonization of Simulation Infrastructures, 10th International Workshop on Simulation for European Space Programmes (SESP 2008), October 7–9, 2008, ESA/ESTEC, Noordwijk, Netherlands
References on Software Development Standards
ECSS Standards:
[100] ECSS-E-ST-40C Space engineering – Software
[101] ECSS-Q-ST-80C Space product assurance – Software product assurance
DO-178B:
[102] RTCA/EUROCAE: Software Considerations in Airborne Systems and Equipment Certification, DO-178B/ED-12B, December 1992
[103] http://www.rtca.org/downloads/ListofAvailable_Docs_WEB_NOV_2005.htm
Galileo Software Standard:
[104] Montalto, Gaetano: The Galileo Software Standard as tailored from ECSS E40B/Q80, European Satellite Navigation Industries SpA, BSSC Workshop on the Usage of ECSS Software Standards for Space Projects, http://www.estec.esa.nl/wmwww/EME/Bssc/BSSCWorkshopProgrammev5.htm
NASA Standards (top-level view):
[105] http://sweng.larc.nasa.gov/process/documents/wddocs/LaRC_Local_Version_of_SWG_Matrix.doc
MIL Standards:
[106] MIL-STD-2167A, Military Standard: Defense System Software Development, Department of Defense, Washington, D.C., February 29, 1988
References on Onboard Autonomy
[107] ECSS-E-ST-70-11C Space segment operability
[108] http://www.esa.int/SPECIALS/Proba_web_site/SEMHHH77ESD_0.html
[109] Eickhoff, Jens: System Autonomy Testbed Product Flyer, Dornier Satellitensysteme GmbH, Friedrichshafen, Germany, 1997
[110] Moore, Robert C.: Autonomous Safing and Fault Protection for the New Horizons Mission to Pluto, The Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, USA, 57th International Astronautical Congress, Valencia, Spain, October 2–6, 2006
[111] http://www.nasa.gov/mission_pages/newhorizons/main/index.html
References on Flight Operations
[112] ECSS-E-ST-70-11C Space engineering – Space segment operability
[113] ECSS-E-ST-70-31C Space engineering – Ground systems and operations – Monitoring and control data definition
[114] ECSS-E-ST-70-32C Space engineering – Test and operations procedure language
[115] http://en.wikipedia.org/wiki/Tcl
[116] http://en.wikipedia.org/wiki/Advanced_Encryption_Standard
Diverse References
[117] N.N.: Radiation Resistant Computers, http://science.nasa.gov/science-news/science-at-nasa/2005/18nov_eaftc/
[118] N.N.: AMBA on-chip bus architecture, http://www.arm.com/products/system-ip/amba/amba-openspecifications.php
[119] Eickhoff, Jens; Stevenson, Dave; Habinc, Sandi; Röser, Hans-Peter: University Satellite featuring latest OBC Core & Payload Data Processing Technologies, Data Systems in Aerospace, DASIA 2010 Conference, Budapest, Hungary, June 2010
[120] Eickhoff, Jens; Cook, Barry; Walker, Paul; Habinc, Sandi A.; Witt, Rouven; Röser, Hans-Peter: Common board design for the OBC I/O unit and the OBC CCSDS unit of the Stuttgart University Satellite "Flying Laptop", Data Systems in Aerospace, DASIA 2011 Conference, 17–20 May 2011, San Anton, Malta
[121] Fritz, Michael; Röser, Hans-Peter; Eickhoff, Jens; Reid, Simon: Low Cost Control and Simulation Environment for the "Flying Laptop", a University Microsatellite, SpaceOps 2010 Conference, Huntsville, Alabama, USA, 25–30 April 2010
[122] Rivard, Fred; Prochazka, Marek; Pareaud, Thomas: Java for On-board Software, Data Systems in Aerospace, DASIA 2011 Conference, 17–20 May 2011, San Anton, Malta
[123] Seeber, G.: Satellite Geodesy, 2nd Edition, De Gruyter, Berlin, 2003
[124] Kranz, Gene: Failure Is Not an Option: Mission Control from Mercury to Apollo 13 and Beyond, Simon and Schuster, 2000, ISBN 978-0-7432-0079-0
Index

A
Actel ProASIC ... 47
Actel RT-AX ... 47
Ada ... 33, 42, 46, 120, 135, 136, 254
Aeolus ... 45
ALGOL ... 135
Algorithm in the Loop ... 149, 151
AMBA bus ... 47
AMD 2900 ... 40
Analog sensor equipment ... 56
Analog spacecraft control ... 23
Antenna effects ... 73
Antenna ground station ... 235
AOCS mode ... 193
AP-101 ... 32
Apollo program ... 24, 29
Application Process Identifier ... 95, 111, 187
Application Specific Integrated Circuit ... 44
ARINC 825 ... 62
ARM ... 46, 47, 54
ASIC ... 78, 156
Assembler ... 26, 30, 33, 37, 135
Assembly, Integration and Testing ... 160
ATLAS ... 41
Attitude acquisition ... 197
Attitude and Articulation Control Subsystem ... 36, 38
Attitude and Orbit Control System ... 56
ATV ... 211
Authentication ... 203
Autocode ... 148
Autonomy ... 163
Autonomy testbed ... 256

B
Ball Grid Array ... 72
Bepi Colombo ... 61, 127
Bit failure ... 126
Bitslice arithmetic logical unit ... 40
Boot ... 246
Boot loader ... 91
Boot memory ... 54, 56
Boot report ... 118
Breadboard Model ... 77
Built-in self test ... 37
Bus controller ... 54, 59

C
C ... 42, 136, 254
C++ ... 136
Calibration ... 250
CAN bus ... 47
CANaerospace ... 62
Cassini ... 41
CCSDS ... 62, 154
CCSDS packet ... 102
CCSDS processor ... 63, 101, 114
CCSDS standard ... 62, 95
Channel Access Data Unit ... 63, 95
Channel acquisition table ... 123, 124
CISC ... 43
Classroom training ... 244
Clock module ... 207
Clock strobe ... 207
Closed-loop ... 160
CMOS memory ... 36
Code inspection ... 135
Code instrumentation ... 67
Columbus Software Development Standard ... 168
Command and Data Subsystem ... 38
Command Link Transfer Unit ... 62, 95
Command Pulse Decoding Unit ... 64, 112, 187
Commissioning phase ... 250
Commissioning Phase ... 192
Compact PCI ... 47
Consultative Committee for Space Data Systems ... 62, 95
Control and Data Management Unit ... 6
Control console ... 152, 153
Controller Area Network ... 61
Controller in the Loop ... 149, 156, 157, 160
Controller network ... 58
Core Data Handling System ... 118
Critical Design Review ... 8
CryoSat ... 45, 52, 120, 158, 193, 237
Current free encoding ... 59

D
Data bus ... 54, 58
Data downlink ... 209
Data management autonomy ... 213
Data pools ... 13
De-orbiting ... 10
Death-report ... 118
Debug interface ... 54
Debug support unit ... 66
Debugger ... 155
Deep Space Network ... 34
Deep space probe ... 183
Deployment ... 197, 247
Development board ... 77
Diagnostic packet ... 204
Digital Command Sequencer ... 34
Digital Equipment PDP 11 ... 40
Digital signal processor ... 254
Direct Memory Access Controller ... 42
DO178B ... 167
Docking maneuver ... 24
Document Requirements Definition ... 176
Documents Requirements List ... 173
Doppler effect ... 209
Dynamic RAM ... 57

E
ECSS standards ... 167
EDAC memory ... 57
Electrical Functional Model ... 160
Electrically Erasable PROM ... 56
Electromagnetic compatibility ... 5
Elegant Breadboard ... 77
EMC tightness ... 73
Enabling technology ... 213
Encryption ... 203
End-of-Life Disposal Phase ... 193
Engineering Model ... 15, 77
Engineering Qualification Model ... 76
Environmental Control and Life Support System ... 24
Envisat ... 41, 42
Equipment handler ... 92
Equipment health status ... 205
Equipment operational modes ... 12
Equipment states ... 196
ERC32 ... 45, 46, 52, 120
Error Detection and Correction ... 57, 126
ERS-1/2 ... 42, 76
ESA Space Operations Center ... 245
Ethernet ... 47
European Cooperation for Space Standardization ... 167
Event ... 205, 206
Event history ... 204
Event TM packet ... 117
Event-action-table ... 205

F
Fail Operational ... 221
Fail to Safe Mode ... 221
Failure Detection, Isolation and Recovery ... 4, 5, 17, 116, 219
FDIR and safeguarding hierarchy ... 222
FDIR autonomy ... 213
FDIR concept ... 220
FDIR function ... 163
Field Programmable Gate Array ... 44
Flash EEPROM ... 56
Flash memory ... 83
FlatSat ... 161, 162
Flight Acceptance Review ... 8
Flight Data Subsystem ... 36
Flight Dynamics Infrastructure ... 241
Flight Model ... 15, 76
Flight Operations Center ... 180
Flight operations director ... 237
Flight Operations Director ... 238
Flight Operations Manual ... 186, 201
Flight Procedure ... 228
FORTRAN ... 42, 135
FPGA ... 78, 79, 156
FPGA board ... 77
Function Tree ... 130, 131
Functional domain ... 236, 237
Functional requirements ... 180
Functional sequence monitoring ... 205
Functional Verification Bench ... 150

G
GAIA ... 127
Galileo Navigation System ... 45, 52, 54, 62, 156, 161, 168
Galileo Software Standard ... 168
Gemini Digital Computer ... 25, 30
Gemini program ... 24
GEO satellite ... 182
GIOVE-A ... 62
GNU Ada compiler ... 42
Go / No-Go criteria ... 249
GOCE ... 45
GPS ... 254
GR UT699 ... 48
GRACE ... 76
Ground communications system ... 238
Ground segment infrastructure ... 234
Ground station visibility plan ... 198

H
Hardware / software compatibility tests ... 156
Hardware alarm ... 223
Hardware in the Loop ... 150, 160
Hardware verification ... 80
Harel state machine ... 145
Hierarchic Object-Oriented Design ... 138
High energetic particle ... 22
High Priority Command ... 64, 112, 166, 202, 224
High Priority Telemetry ... 63, 66, 114, 205
High Priority Telemetry log ... 118
Highlevel Assembler Language / Shuttle ... 33, 40, 42, 135
Housekeeping ... 204
Housekeeping data memory ... 57, 114
Housekeeping packet ... 204
HPTM log ... 118
Huygens ... 41
HW Trap ... 118, 126

I
I/O Board ... 52
IBM System 360 ... 32
IDEF0 ... 136
IEEE 1355 ... 60
IEEE 802.3 ... 59
IEEE standards ... 167
In circuit debugger ... 66
Instrument operations sequence ... 200
Integral ... 41
Integrated circuit ... 30, 37
Integrated circuits ... 36
Intel 80386 ... 46
Intel 80486 ... 49
Intel 80x86 ... 43
Interface driver ... 91
Internal memory ... 54
Interrupt ... 125
IP-Core ... 77, 79

J
Java ... 127
Joint Test Actions Group ... 66
JOVIAL ... 42, 135
JTAG interface ... 66

L
Launch and Early Orbit Phase ... 9, 16, 180, 192, 197, 246
Launcher separation ... 197
Launchpad ... 67
LEO satellite ... 181
LEON ... 46, 54, 120
LEON3FT ... 46
LEOP Autosequence ... 200, 201, 246
Limit violation ... 205
Line Control Block ... 126
Lock-in amplifier ... 210
Logging mechanism ... 204

M
MA31750 ... 41
Magnetic core memory ... 26, 30, 33
Magnetic tape ... 27, 37, 58
Magnetoresistive Random Access Memory ... 56, 57
Man Machine Interface ... 27
Manufacturing ... 80
MAP-ID ... 112, 187, 203
Mariner missions ... 34
Mars Express ... 41
Mars Reconnaissance Orbiter ... 44
Mass Memory and Formatting Unit ... 6, 54, 57, 82
Master Timeline Manager ... 121
Mechanical loads ... 72
Memory failure ... 126
Mercury program ... 23
Metamodel ... 140, 147
MeteoSat ... 245
MetOp ... 41, 42, 76
Microprocessor ... 54
MIL-STD-1553 ... 58, 92, 122, 125, 156
MIL-STD-1750 ... 41, 42, 45, 46, 120
MIL-STD-1815 ... 41
MIPS ... 46, 54
Mission analysis ... 9
Mission Control System ... 234, 236, 244
Mission execution autonomy ... 212
Mission lifetime ... 241
Mission planning ... 241, 254, 256
Mission planning tool ... 256
Mission scenarios ... 180
Mission timeline ... 196
Mode concept ... 192, 193
Monitor ... 206
Monitoring ... 204, 221
Motorola 68xxx ... 46
Multiplexer Access Point Identifier ... 112
Multitasking ... 30

N
Navigation receiver ... 82, 131
New Horizons ... 258
NOAA-18 ... 42
Nominal operations orbit ... 10
Nominal Operations Phase ... 192
Non Return to Zero ... 62
NV RAM ... 221
NV ROM ... 56

O
OBCP ... 205
Onboard autonomy ... 211
Onboard computer ... 4, 6, 22, 52
Onboard computer housing ... 72
Onboard computer mechanical design ... 72
Onboard computer models ... 76
Onboard computer simulation model ... 155
Onboard Control Procedure ... 110, 120, 201, 206
Onboard data handling ... 111
Onboard software ... 4, 88
Onboard Software Data Pool ... 93, 187
Onboard software dump ... 106, 220
Onboard software dynamic architecture ... 120
Onboard software function ... 163, 201, 205, 221
Onboard software kernel ... 118, 120
Onboard software patch ... 37, 106, 193, 220, 224, 241
Onboard software requirements ... 132
Onboard software static architecture ... 90
Onboard software tests ... 135
Onboard synchronization ... 206
Onboard time ... 94
Operating system ... 30
Operational constraints ... 225
Operations concept ... 180
Operations Interface Requirements Document ... 4
Operations plan ... 180
Operations procedure ... 180
Orbit analysis ... 235
Orbit control maneuver ... 10, 200
Oscillator ... 206
OSI layer model ... 59, 60, 92

P
Packet Category ... 112
Packet store ... 109
Packet structure ... 204
Packet Utilization Standard ... 5, 92, 102
Parameter monitoring ... 205
Parameter Type Code ... 204
Payload commissioning ... 198, 251
Payload Data Handling and Transmission ... 224
Payload Ground Segment ... 226
Payload management computer ... 6
Payload Management Computer ... 42
Performance characterization ... 250
Phase locked loop oscillator ... 210
Pioneer ... 34, 37
Platform commissioning ... 198
Playback Telemetry ... 63, 114
Position-tagged command ... 241
Power Control and Distribution Unit ... 223
Power supply ... 54, 68
PowerPC ... 43, 44, 46, 54, 120
Pre-launch Phase ... 192
Preliminary Design Review ... 8
Preliminary Requirements Review ... 8
Printed circuit board ... 69, 72
PROBA-1 ... 45, 163
Process ID ... 112, 190, 204
Process improvement technology ... 214
Product tree ... 11
Program / erase cycle ... 83
Programmable Read Only Memory ... 56
Project for Onboard Autonomy ... 254
Proto Flight Model ... 76
PSLV ... 41
PUS event ... 117
PUS monitor ... 117
PUS services ... 102

Q
Qualification Review ... 8

R
RAD6000 ... 43, 44
RAD750 ... 44
Radiation ... 5, 22
Radiation hard circuitry ... 78
Radio Technical Commission for Aeronautics ... 167
Random Access Memory ... 56
Ranging ... 210, 247
RCA (CDP) 1802 microprocessor ... 39
Re-orbiting ... 10
Read-only core rope memory ... 30
Realtime Operating System ... 44, 91
Realtime Telemetry ... 63, 114
Reconfiguration ... 221
Reconfiguration log ... 118, 205
Reconfiguration unit ... 54, 65
Recovery sequence ... 200
Recovery thread ... 125
Redundancy ... 5, 65
Redundancy concept ... 216
Redundancy design ... 219
Remote Interface Unit ... 52, 54
Remote terminals ... 59
Rendezvous maneuver ... 24
Requirements analysis ... 135
Review milestones ... 170
Review of design ... 135
RISC ... 42, 43
RMAP protocol ... 92
Rosetta ... 41
Router ... 58
RS/6000 ... 43
RTEMS ... 46
Rule network ... 258

S
S/C configuration ... 188
S/C control ... 90
S/C mode ... 10, 193, 250
S/C status ... 188
Safe Mode ... 117, 206, 211, 220, 223
Safe Mode recovery ... 224
Safeguard memory ... 54, 57, 126, 187, 204
Satellite operations ... 4
Satellite Reference Database ... 227
Satellite Requirements Specification ... 4
Scheduling cycle ... 121
Science data management ... 208
Science data memory ... 57
SCOC3 ... 48
Sentinel 1 to 4 ... 45
Service Interface ... 54, 67, 115, 128, 155
Service Type ... 102
Shift handover ... 244
Shift plan ... 246
Shock loads ... 72
Shuttle Data Processing System ... 32
SIMSAT ... 245
Simulation session ... 244
Simulator interface card ... 158
Simulator telemetry packet ... 154
Simulator-Frontend ... 159, 161
Sine vibration ... 72
Single Event Upset ... 57, 126
SkyLab ... 32
SMOS ... 45
Software coding ... 147
Software design ... 135
Software development standard ... 166
Software engineering ... 167
Software functional analysis ... 130
Software in the Loop ... 149, 152
Software product assurance ... 167
Software requirements definition ... 132
Software verification and testing ... 148
Software Verification Facility ... 27, 152, 245
Solid State Recorder ... 82
Space Segment User Manual ... 186, 201
Space Shuttle ... 32
Space Transportation System ... 32
Spacecraft commandability ... 187
Spacecraft Communication and Command Subsystem ... 35
Spacecraft configuration handling ... 187
Spacecraft Configuration Vector ... 57, 187
Spacecraft Controller On a Chip ... 48
Spacecraft observability ... 204
Spacecraft operations ... 245
Spacecraft operations concept ... 9
Spacecraft Operations Concept Document ... 186, 201
Spacecraft operations manager ... 237
Spacecraft Operations Manager ... 238
Spacecraft State Vector ... 202
SpaceWire ... 47, 58, 60, 92
SpaceWire routing ... 61
SPARC ... 46, 120
Special Checkout Equipment ... 160
Sputnik ... 23
State machine ... 145
Static RAM ... 56
Structure ID ... 204
Structured Analysis and Design Technique ... 136
Subservice Type ... 102
SuperH ... 46, 54
Surface Mounted Device ... 72
SWARM ... 45, 76
System 4 Pi ... 32
System initialization sequence ... 200
System log ... 118
System on Chip ... 46, 92
System reconfiguration sequence ... 200
System Requirements Document ... 4
System Requirements Review ... 8
System simulation ... 240
System Testbench ... 156
System Validation Test ... 231, 246

T
TanDEM-X ... 45, 248
Task ... 120
Task control ... 120
Task scheduling ... 191
Technology Readiness Level ... 76
Telecommand frame ... 112
Telecommand processing ... 89
Telemetry generation ... 89
Temperature cycles ... 72
TerraSAR-X ... 45, 245, 248
Test Readiness Review ... 170
Thermal control equipment ... 69
Thread ... 120
Time-tagged command ... 241
Timeline ... 255
TM/TC-Frontend ... 154, 156, 159
Trajectory ... 10
Trajectory injection ... 197
Transfer orbit ... 10
Transition to Safe Mode ... 224
Transition to Safe Mode sequence ... 200
Transponder interface ... 54
TSC21020 ... 49
TSC695 ... 45
TTL circuits ... 36

U
Umbilical connector ... 67, 115, 246
UML activity diagram ... 144
UML class diagram ... 141
UML communication diagram ... 146
UML component diagram ... 142
UML composite structure diagram ... 141
UML deployment diagram ... 142
UML object diagram ... 143
UML package diagram ... 143
UML profiling ... 147
UML sequence diagram ... 146
UML state machine diagram ... 145
UML timing diagram ... 147
UML use case diagram ... 144
Unified Modeling Language ... 140
Unit reconfiguration sequence ... 200
Unit switch-on sequence ... 200
US Federal Aeronautics Association ... 167
User request ... 214, 241
User requests ... 255
UT699 ... 46, 47

V
Venus Express ... 41
Viking Mars landers ... 35
Virtual Channel ... 98, 109, 112, 114
Vostok ... 23
Voyager missions ... 34, 35, 38
VxWorks ... 44, 254

W
Wear prevention techniques ... 83
Wear problem ... 83
Work memory ... 56

X
XML ... 154
XMM ... 41