McGraw-Hill Enterprise Computing Series

DB2 Universal Database Developer's Guide for Call Level Interface by Sanders, ISBN 0-07-134572-8
Enterprise Java Developer's Guide by Narayanan/Liu, ISBN 0-07-134673-2
Web Warehousing and Knowledge Management by Mattison, ISBN 0-07-041103-4
ODBC 3.5 Developer's Guide by Sanders, ISBN 0-07-058087-1
Data Warehousing, Data Mining & OLAP by Berson/Smith, ISBN 0-07-006272-2
Data Stores, Data Warehousing and the Zachman Framework by Inmon, ISBN 0-07-031429-2
Copyright © 2000 by The McGraw-Hill Companies, Inc. All Rights Reserved. Printed in the United States of America. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.
1 2 3 4 5 6 7 8 9 0 AGM/AGM 9 0 4 3 2 1 0 9

P/N 0-07-135390-9
Part of ISBN 0-07-135392-5

The sponsoring editor for this book was Simon Yates, and the production supervisor was Clare Stanley. It was set in Century Schoolbook by D&G Limited, LLC. Printed and bound by Quebecor/Martinsburg.

Throughout this book, trademarked names are used. Rather than put a trademark symbol after every occurrence of a trademarked name, we used the names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.
Information contained in this work has been obtained by The McGraw-Hill Companies, Inc. ("McGraw-Hill") from sources believed to be reliable. However, neither McGraw-Hill nor its authors guarantee the accuracy or completeness of any information published herein, and neither McGraw-Hill nor its authors shall be responsible for any errors, omissions, or damages arising out of use of this information. This work is published with the understanding that McGraw-Hill and its authors are supplying information but are not attempting to render engineering or other professional services. If such services are required, the assistance of an appropriate professional should be sought.
This book is printed on recycled, acid-free paper containing a minimum of 60% recycled de-inked fiber.
DEDICATION

To my son, Tyler Mark Sanders.
CONTENTS

Foreword
Introduction

Part 1: Basic Database Concepts

Chapter 1: DB2 Database Architecture
    The Relational Database
    Relational Database Objects
        Databases
        Table Spaces
        Tables
        Indexes
        Views
        Packages (Access Plans)
        Triggers
        Aliases
        Event Monitors
        Schemas
    System Catalog Views
    Recovery Log Files and the Recovery History File
    Configuration Files
    DB2 Database Directories
        Physical Database Directory
        Volume Directory
        System Directory
        Workstation Directory
        Database Connection Services Directory

Chapter 2: Database Consistency Mechanisms
    What Is Data Consistency?
    Transactions
    Concurrency and Transaction Isolation Levels
        Repeatable Read
        Read Stability
        Cursor Stability
        Uncommitted Read
    Specifying the Isolation Level
    Locking
        Lock Attributes
        Lock States
        Locks and Application Performance
    Transaction Logging

Part 2: Application Development Fundamentals

Chapter 3: Getting Started with DB2 Application Development
    What Is a DB2 Database Application?
    Designing a DB2 Database Application
    Elements of a DB2 Database Application
        High-Level Programming Language
        SQL Statements
        CLI Function Calls
        API Function Calls
    Establishing the DB2 Database Application Development Environment
    Establishing the DB2 Database Application Testing Environment
        Creating a Testing Database
        Creating Testing Tables and Views
        Generating Test Data
    Managing Transactions
    Creating and Preparing Source Code Files

Chapter 4: Writing API Applications
    The Basic Structure of an API Source-Code File
    Types of API Function Calls
    API Naming Conventions
    API Data Structures
    Error Handling
        Evaluating Return Codes
        Evaluating SQLCA Return Codes
        Evaluating SQLSTATEs
    Creating Executable Applications
    Running, Testing, and Debugging API Applications

Part 3: Application Programming Interface (API) Functions

Chapter 5: Program Preparation and General Programming APIs
    Embedded SQL Application Preparation
    Exception, Signal, and Interrupt Handlers
    Pointer Manipulation and Memory Copy Functions
    Specifying Connection Accounting Strings
    Evaluating SQLCA Return Codes and SQLSTATE Values
    PRECOMPILE PROGRAM
    BIND
    REBIND
    GET INSTANCE
    INSTALL SIGNAL HANDLER
    INTERRUPT
    GET ADDRESS
    COPY MEMORY
    DEREFERENCE ADDRESS
    SET ACCOUNTING STRING
    GET ERROR MESSAGE
    GET SQLSTATE MESSAGE
    GET AUTHORIZATIONS

Chapter 6: DB2 Database Manager Control and Database Control APIs
    The DB2 Database Manager Server Processes
    Creating and Deleting DB2 Databases
    Starting and Stopping DB2 Databases
    Retrieving and Setting Other Connection Setting Values
    Controlling DB2 Database Manager Connection Instances
    The DB2 Database Manager and DB2 Database Control Functions
    START DATABASE MANAGER
    STOP DATABASE MANAGER
    FORCE APPLICATION
    CREATE DATABASE
    DROP DATABASE
    ACTIVATE DATABASE
    DEACTIVATE DATABASE
    ATTACH
    ATTACH AND CHANGE PASSWORD
    DETACH
    QUERY CLIENT
    SET CLIENT
    QUERY CLIENT INFORMATION
    SET CLIENT INFORMATION

Chapter 7: DB2 Database Manager and Database Configuration APIs
    Configuring DB2
    DB2 Database Manager Configuration Parameters
    DB2 Database Configuration Parameters
    The DB2 Database Manager and Database Configuration Functions
    GET DATABASE MANAGER CONFIGURATION
    GET DATABASE MANAGER CONFIGURATION DEFAULTS
    UPDATE DATABASE MANAGER CONFIGURATION
    RESET DATABASE MANAGER CONFIGURATION
    GET DATABASE CONFIGURATION
    GET DATABASE CONFIGURATION DEFAULTS
    UPDATE DATABASE CONFIGURATION
    RESET DATABASE CONFIGURATION

Chapter 8: Database, Node, and DCS Directory Management APIs
    DB2 Directories
    The System Database Directory
    Volume Directories
    The Workstation (Node) Directory
    The Database Connection Services (DCS) Directory
    Registering/Deregistering DB2 Database Servers with NetWare
    The DB2 Database, Node, and DCS Directory Management Functions
    CATALOG DATABASE
    UNCATALOG DATABASE
    CHANGE DATABASE COMMENT
    OPEN DATABASE DIRECTORY SCAN
    GET NEXT DATABASE DIRECTORY ENTRY
    CLOSE DATABASE DIRECTORY SCAN
    CATALOG NODE
    UNCATALOG NODE
    OPEN NODE DIRECTORY SCAN
    GET NEXT NODE DIRECTORY ENTRY
    CLOSE NODE DIRECTORY SCAN
    CATALOG DCS DATABASE
    UNCATALOG DCS DATABASE
    OPEN DCS DIRECTORY SCAN
    GET DCS DIRECTORY ENTRIES
    GET DCS DIRECTORY ENTRY FOR DATABASE
    CLOSE DCS DIRECTORY SCAN
    REGISTER
    DEREGISTER

Chapter 9: Table and Table Space Management APIs
    Table Spaces and Table Space Containers
    Reorganizing Table Data
    Updating Table Statistics
    The DB2 Table and Table Space Management Functions
    OPEN TABLESPACE QUERY
    FETCH TABLESPACE QUERY
    CLOSE TABLESPACE QUERY
    TABLESPACE QUERY
    SINGLE TABLESPACE QUERY
    GET TABLESPACE STATISTICS
    OPEN TABLESPACE CONTAINER QUERY
    FETCH TABLESPACE CONTAINER QUERY
    CLOSE TABLESPACE CONTAINER QUERY
    TABLESPACE CONTAINER QUERY
    FREE MEMORY
    REORGANIZE TABLE
    RUN STATISTICS

Chapter 10: Database Migration and Disaster Recovery APIs
    Database Migration
    Recovering from an "Inconsistent" State
    Creating Backup Images
    Restoring Databases and Table Spaces from Backup Images
    Performing Redirected Restore Operations
    Using Roll-Forward Recovery
    Recovery History Files
    The DB2 Database Migration and Disaster Recovery Functions
    MIGRATE DATABASE
    RESTART DATABASE
    BACKUP DATABASE
    RESTORE DATABASE
    RECONCILE
    SET TABLESPACE CONTAINERS
    ROLLFORWARD DATABASE
    ASYNCHRONOUS READ LOG
    OPEN RECOVERY HISTORY FILE SCAN
    GET NEXT RECOVERY HISTORY FILE ENTRY
    CLOSE RECOVERY HISTORY FILE SCAN
    UPDATE RECOVERY HISTORY FILE
    PRUNE RECOVERY HISTORY FILE

Chapter 11: Data Handling APIs
    Exporting Data
    Importing Data
    Loading Data
    Supported Export, Import, and Load File Formats
    The DB2 Data Handling Functions
    EXPORT
    IMPORT
    LOAD
    LOAD QUERY
    QUIESCE TABLESPACES FOR TABLE

Chapter 12: DB2 Database Partition Management Functions
    Nodegroups and Data Partitioning
    Types of Parallelism
    I/O Parallelism
    Query Parallelism
    Enabling Query Parallelism
    Enabling Data Partitioning
    The DB2 Database Partition Management Functions
    ADD NODE
    DROP NODE VERIFY
    CREATE DATABASE AT NODE
    DROP DATABASE AT NODE
    SET RUNTIME DEGREE
    GET TABLE PARTITIONING INFORMATION
    GET ROW PARTITIONING NUMBER
    REDISTRIBUTE NODEGROUP

Chapter 13: Database Monitor and Indoubt Transaction Processing APIs
    The DB2 Database System Monitor
    Database System Monitor Switches
    When Counting Starts
    Retrieving Snapshot Monitor Data
    Working with Multiple Databases
    How the Two-Phase Commit Process Works
    Recovering from Errors Encountered While Using Two-Phase Commits
    Manually Resolving Indoubt Transactions
    Two-Phase Commit Processing Using an XA-Compliant Transaction Manager
    The DB2 Database Monitor and Indoubt Transaction Processing Functions
    GET/UPDATE MONITOR SWITCHES
    RESET MONITOR
    ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE
    GET SNAPSHOT
    LIST DRDA INDOUBT TRANSACTIONS
    LIST INDOUBT TRANSACTIONS
    COMMIT AN INDOUBT TRANSACTION
    ROLLBACK AN INDOUBT TRANSACTION
    FORGET TRANSACTION STATUS

Chapter 14: Thread Context Management Functions
    Contexts
    The DB2 Thread Context Management Functions
    SET APPLICATION CONTEXT TYPE
    CREATE AND ATTACH TO AN APPLICATION CONTEXT
    DETACH AND DESTROY APPLICATION CONTEXT
    ATTACH TO CONTEXT
    DETACH FROM CONTEXT
    GET CURRENT CONTEXT
    INTERRUPT CONTEXT

Appendix A: ODBC Scalar Functions
Appendix B: SQLSTATE Reference
Appendix C: How the Example Programs Were Developed

Bibliography
Index
API Index
FOREWORD

Relational database technology was invented in IBM research more than 20 years ago. In 1983, IBM shipped the first version of DB2 for MVS. In 1997, IBM delivered its flagship relational technology on the AS/400 and OS/2. As we enter the 21st century, IBM has continued to extend its award-winning database technology with additional function and support for additional platforms. Today, DB2 Universal Database is the most modern database on the planet, supporting the world's most popular system platforms (IBM OS/390, IBM OS/400, IBM RS/6000, IBM OS/2, Sun Solaris, HP-UX, Microsoft Windows NT, SCO OpenServer, and Linux).

DB2 Universal Database, which first shipped in 1997, has evolved to meet the rapid-fire changes within corporations around the world. Traditional companies are transforming their core business processes around the Internet. New e-companies are being formed, and a new generation of Web-based applications are being written. You might ask, "What is an e-business anyway?" e-business is buying and selling on the Internet. e-business is being open 24 hours a day, seven days a week, without having to be there at the company. e-business is about reaching new customers, and e-business means working together in different ways.

Some have said that e-business changes everything. Or does it? e-business demands highly scalable, available, secure, and reliable systems. e-business demands industrial-strength database technology, the kind that DB2 has delivered to more than 40 million users over the last 15 years. IBM's DB2 Universal Database team has been hard at work delivering enhancements to DB2 Universal Database to make it the foundation for e-business. Today, users can access DB2 Universal Database from the Web. Application developers can write DB2 applications and stored procedures using Java or JDBC. Database administrators can administer DB2 databases from Web browsers, and DB2 is the most highly scalable, available, robust database in the world.

e-business poses a number of new requirements on the database as well, such as access from any type of device. New, pervasive devices will be used to access DB2 databases. e-businesses will have a growing need to leverage information and knowledge, which will drive business intelligence and knowledge-based applications that require support for multi-terabyte databases that grow to petabytes. These applications will require advanced analytical functions to be supported in the database engine. They will also require access to rich content: documents, images, text, video, and spatial data. DB2 Universal Database has been extended to deliver this rich content today.

The next millennium will bring with it enormous change. The next millennium also will bring with it incredible opportunity for information technology professionals and those who support database systems. The new economy will be based on information exchange, and database professionals will be the stewards of this critical corporate asset. I encourage you to take advantage of the opportunity that Roger Sanders is providing to learn more about DB2 Universal Database. I also encourage you to obtain a certification in DB2 Universal Database. Your time will be well spent. DB2 Universal Database is the foundation for e-business for thousands of companies today, and we have only begun.

Janet Perna
General Manager, Data Management Solutions
IBM Corporation
INTRODUCTION

DB2 Universal Database is a robust database management system that is designed to be used for a variety of purposes in a variety of operating system environments. DB2 Universal Database is not a new product; it has existed in some form or another since 1989. The earliest version was called Database Manager, and that version was bundled with OS/2 in a product called OS/2 Extended Edition. This product was IBM's first attempt to put its popular Database 2 product (which had been available for MVS operating systems on IBM mainframes since 1983) on a PC. Through the years, IBM's PC version of DB2 has matured to the point where the program is now one of the most powerful database products available for a wide variety of platforms. DB2 Universal Database provides a rich set of programming interfaces (Structured Query Language, a Call Level Interface, and numerous Application Programming Interface function calls) that can be used to develop several different kinds of applications. This book, one of a series of books that describe each of these programming interfaces in detail, is designed to provide you with a conceptual overview of DB2 Universal Database, as well as a comprehensive reference that covers DB2 Universal Database's Application Programming Interface.
Why I Wrote This Book

Although DB2 Universal Database has been available since 1989, only a handful of books have been written about the product. And, as the DB2 product evolved, many of the books that were written were not revised to reflect the differences in the product. Eventually, they went out of print. By 1993, when the DB2/2 GA product was released (with DB2/6000 following shortly after), no book existed that focused on DB2 application development. Robert Orfali and Dan Harkey's Client/Server Programming with OS/2 2.1 contained four chapters covering the Extended Services 1.0 database manager and later DB2/2. However, because this book addressed client/server programming rather than DB2 application programming, its information about DB2 was limited. This situation meant that IBM's product manuals and online help were the only resources available to application developers writing applications for DB2/2.

In the summer of 1992, while developing a specialized DB2 application (DB2 was then called the Extended Services 1.0 Database Manager) that used many of DB2's Application Programming Interface (API) calls, I discovered how lacking (particularly in the area of examples) some of the IBM manuals for this product really were. Because there were no other reference books available, I had to spend a considerable amount of trial-and-error programming to complete my DB2 application. I immediately saw the need for a good DB2 programming reference guide. This inspiration ultimately led to the writing of my first book, The Developer's Handbook to DB2 for Common Servers.

Since that book was written, DB2 has undergone two more revisions, and several new features have been added to an already rich application development toolset. As I began revising my original book, I discovered that it would be impossible to put a thorough reference for this toolset in a single book. My editor, Simon Yates, decided to do the next best thing: to put this information in a series of books, where each book addressed a specific aspect of DB2 Universal Database's rich development toolset.
Who Is This Book For?

This book is for anyone who is interested in creating DB2 Universal Database applications using DB2's Administrative Application Programming Interfaces (APIs). The book is written primarily for database application programmers and analysts who are familiar with DB2 and are designing and/or coding software applications that perform one or more DB2 administrative tasks. Experienced C/C++ programmers with little experience developing DB2 database applications will benefit most from the material covered in this book. Experienced DB2 API application developers who are familiar with earlier versions of the DB2 product will also benefit from this book, because the book describes in detail new features that are only available in the latest release of DB2 Universal Database. In either case, this book is meant to be a single resource that provides you with almost everything you need to know in order to design and develop DB2 database applications using DB2 APIs.

To get the most out of this book, you should have a working knowledge of the C++ programming language. An understanding of relational database concepts and Structured Query Language (SQL) will also be helpful, although not crucial.
How This Book Is Organized

This book is divided into three major parts. Part 1 discusses basic relational database concepts. Before you can successfully develop a DB2 API application, you must first have a good understanding of DB2's underlying database architecture and data consistency mechanisms. Two chapters in this section are designed to provide you with that understanding: Chapter 1 and Chapter 2. Chapter 1 explains relational database concepts and describes the components of a DB2 Universal Database. This chapter also describes the internal file structures used by DB2 for data and database object storage. Chapter 2 discusses the mechanisms that DB2 provides for maintaining data integrity. These mechanisms include transactions, isolation levels, row- and table-level locking, and transaction logging. Together, these two chapters lay the groundwork for the rest of this book.

Part 2 discusses DB2 application development fundamentals. Once you have a good understanding of DB2's underlying database architecture and consistency mechanisms, you also need to understand general database application development as it applies to DB2. The two chapters in this section, Chapters 3 and 4, describe the different types of applications that can be developed for DB2 and provide you with an understanding of the methods used to develop applications using DB2 APIs. Chapter 3 discusses the application development process as it applies to DB2. This chapter describes basic DB2 application design and identifies the main elements of a DB2 application. The chapter also explains how the database application development and testing environments are established before the application development process begins.
Chapter 4 explains how to write Application Programming Interface (API) applications and identifies the main components of an API application. This chapter also describes the steps you must take to convert API application source-code files into executable programs.

Part 3 contains information about each DB2 API function that can be used in an application. This section is designed to be a detailed API function reference. The ten chapters in this section group the API functions according to their functionality.

Chapter 5 examines the basic set of DB2 APIs that are used to prepare and bind embedded SQL applications, along with the APIs that are typically used in all DB2 API applications. This chapter also contains a detailed reference section that covers each program preparation and general application development API function provided by DB2. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 6 shows how an application can start, stop, and, to a certain extent, control the DB2 Database Manager background server processes. This chapter also contains a detailed reference section that covers each API function that can be used to interact with the DB2 Database Manager. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 7 describes how DB2 uses configuration files to manage system resources. This chapter also contains a detailed reference section that covers each API function that can be used to view, modify, or reset DB2 Database Manager and DB2 database configuration files. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 8 examines the subdirectories that DB2 uses to keep track of databases, remote workstations (nodes), and DRDA servers. This chapter also contains a detailed reference section that covers each API function that can be used to scan and retrieve entries stored in the database, node, and DCS subdirectories. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 9 shows how DB2 database tables can be stored in different table spaces, and how data stored in tables can be reorganized so faster access plans can be generated. This chapter also contains a detailed reference section that covers each API function that can be used to manage tables and table spaces. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 10 examines the mechanisms that DB2 provides for migrating, backing up, restarting, and restoring databases. This chapter also contains a detailed reference section that covers each API function that can be used to migrate, back up, restart, restore, and perform a roll-forward recovery on a DB2 database. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.
Chapter 11 shows how data stored in a database table can be exported to an external file, and how data stored in external files can be imported or bulk-loaded into database tables. This chapter also contains a detailed reference section that covers each API function that can be used to move data between a database and one or more external files. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 12 looks at database partitioning and node (workstation) management in a multi-partitioned database environment. This chapter also contains a detailed reference section that covers each API function that can be used to manage database nodes and obtain partitioning information. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 13 examines the database activity monitor and two-phase commit processing. This chapter also contains a detailed reference section that covers each API function that can be used to monitor database activity and manually process indoubt transactions that were created when a critical error occurred during two-phase commit processing. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.

Chapter 14 describes the mechanisms used by DB2 to work with threads in multithreaded applications. This chapter also contains a detailed reference section that covers each API function that can be used to manage thread contexts. Each API function described in this chapter is accompanied by a Visual C++ example that illustrates how to code the API in an application program.
NOTE: The concepts covered in Chapters 1 through 3 are repeated in each book in this series. If you have another book in this series and are already familiar with this information, you may want to skip these three chapters.
About the Examples

The example programs provided are an essential part of this book; therefore, it is imperative that they are accurate. To make the use of each DB2 API function call clear, I included only the required overhead in each example and provided limited error-checking. I have also tried to design the example programs so they verify that the API function call being demonstrated actually executed as expected. For instance, an example program illustrating database configuration file modification might retrieve and display a value before and after the modification, to verify that the API function used to modify the data worked correctly.

I compiled and tested almost all of the examples in this book with Visual C++ 6.0, running against the SAMPLE database that is provided with DB2 Universal Database, Version 5.2. Appendix C shows the steps I used to create the test environment and the steps I used to reproduce and test all of the examples provided in this book.
Feedback and Source Code on the CD

I have tried to make sure that all the information and examples provided in this book are accurate; however, I am not perfect. If you happen to find a problem with some of the information in this book or with one of the example programs, please send me the correction so I can make the appropriate changes in future printings. In addition, I welcome any comments you might have about this book. The best way to communicate with me is via e-mail.

As mentioned earlier, all the example programs provided in this book have been tested for accuracy. Thus, if you type them in exactly as they appear in the book, they should compile and execute successfully. To help you avoid all that typing, electronic copies of these programs have been provided on the CD accompanying this book.
Limits of Liability and Warranty Disclaimer
Both the publisher and I have used our best efforts in preparing the material in this book. These efforts include obtaining technical information from IBM, as well as developing and testing the example programs to determine their effectiveness and accuracy. We make no warranty of any kind, expressed or implied, with regard to the documentation and example programs provided in this book. We shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of either this documentation or these example programs.
Chapter 1

DB2 Database Architecture

Before you begin developing DB2 database applications, you need to understand the underlying architecture of DB2 Universal Database (UDB), Version 5.2. This chapter is designed to introduce you to the architecture used by DB2 UDB. The chapter begins with a description of the relational database model and its data-handling operations. This is followed by an introduction to the data objects and support objects that make up a DB2 database. Finally, the directory, subdirectory, and file-naming conventions used by DB2 for storing these data and system objects are discussed. Let's begin by defining a relational database management system.
The Relational Database

DB2 UDB, Version 5.2, is a 32-bit relational database management system. A relational database management system is a database management system that is designed around a set of powerful mathematical concepts known as relational algebra. The first relational database model was introduced in the early 1970s by Mr. E. F. Codd at the IBM San Jose Research Center. This model is based on the following operations that are identified in relational algebra:

- SELECTION: This operation selects one or more records from a table based on a specified condition.
- PROJECTION: This operation returns a column or columns from a table based on some condition.
- JOIN: This operation enables you to paste two or more tables together. Each table must have a common column before a JOIN operation can work.
- UNION: This operation combines two like tables to produce a set of all records found in both tables. Each table must have compatible columns before a UNION operation can work. In other words, each field in the first table must match each field in the second table. Essentially, a UNION of two tables is the same as the mathematical addition of two tables.
- DIFFERENCE: This operation tells you which records are unique to one table when two tables are compared. Again, each table must have identical columns before a DIFFERENCE operation can work. Essentially, a DIFFERENCE of two tables is the same as the mathematical subtraction of two tables.
- INTERSECTION: This operation tells you which records are common to two or more tables when they are compared. This operation involves performing the UNION and DIFFERENCE operations twice.
- PRODUCT: This operation combines two dissimilar tables to produce a set of all records found in both tables. Essentially, a PRODUCT of two tables is the same as the mathematical multiplication of two tables. The PRODUCT operation can often produce unwanted side effects, however, requiring you to use the PROJECTION operation to clean them up.
As you can see, in a relational database, data is perceived to exist in one or more two-dimensional tables. These tables are made up of rows and columns, where each record (row) is divided into fields (columns) that contain individual pieces of information. Although data is not actually stored this way, visualizing the data as a collection of two-dimensional tables makes it easier to describe data needs in easy-to-understand terms.
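To make these operations concrete, the SQL statements below sketch how a few of them are commonly expressed. The EMPLOYEE, DEPARTMENT, CURRENT_STAFF, and RETIRED_STAFF tables and their columns are hypothetical examples introduced only for illustration; they are not objects defined elsewhere in this book.

    -- SELECTION and PROJECTION: pick the rows that meet a condition,
    -- and return only the columns of interest
    SELECT empid, lastname
    FROM   employee
    WHERE  deptno = 'A01';

    -- JOIN: paste two tables together on a common column
    SELECT e.empid, e.lastname, d.deptname
    FROM   employee e, department d
    WHERE  e.deptno = d.deptno;

    -- UNION: combine two tables with compatible columns into one set of records
    SELECT empid FROM current_staff
    UNION
    SELECT empid FROM retired_staff;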
Relational Database Objects

A relational database system is more than just a collection of two-dimensional tables. Additional objects exist that aid in data storage and retrieval, database structure control, and database disaster recovery. In general, objects are defined as items about which DB2 retains information. With DB2, two basic types of objects exist: data objects and support objects.
Data objects are the database objects that are used to store and manipulate data. Data objects also control how user data (and some system data) is organized. Data objects include:

- Databases
- Table spaces
- Tables
- User-defined data types (UDTs)
- User-defined functions (UDFs)
- Check constraints
- Indexes
- Views
- Packages (access plans)
- Triggers
- Aliases
- Event monitors
Databases

A database is simply a set of all DB2-related objects. When you create a DB2 database, you are establishing an administrative entity that provides an underlying structure for an eventual collection of tables, views, associated indexes, and so on, as well as the table spaces in which these items exist. Figure 1-1 illustrates a simple database object. The database structure also includes items such as system catalogs, transaction recovery logs, and disk storage directories. Data (or user) objects are always accessed from within the underlying structure of a database.
Figure 1-1: Database object and its related data objects

Table Spaces

A table space logically groups (or partitions) data objects such as tables, views, and indexes based on their data types. Up to three table spaces can be used per table. Typically, the first table space is used for table data (by default), while a second table space is used as a temporary storage area for Structured Query Language (SQL) operations (such as sorting, reorganizing tables, joining tables, and creating indexes). The third table space is typically used for large object (LOB) fields. Table spaces are designed to provide a level of indirection between user tables and the database in which they exist. Two basic types of table spaces exist: database managed spaces (DMSs) and system managed spaces (SMSs). For SMS-managed spaces, each storage space is a directory that is managed by the operating system's file manager. For DMS-managed spaces, each storage space is either a fixed-size, pre-allocated file or a specific physical device (such as a disk) that is managed by the DB2 Database Manager. As mentioned earlier, table spaces can also allocate storage areas for LOBs and can control the device, file, or directory where both LOBs and table data are to be stored. Table spaces can span multiple physical disk drives, and their size can be extended at any time (stopping and restarting the database is not necessary). Figure 1-2 illustrates how you can use table spaces to direct a database object to store its table data on one physical disk drive and store the table's corresponding indexes on another physical disk drive.

Figure 1-2: Using table spaces to separate the physical storage of tables and indexes
NOTE: You should recognize that the table space concept implemented by DB2 Universal Database is different from the table space concept used by DB2 for OS/390.
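As a rough illustration of the two table space types, the statements below create one SMS table space and one DMS table space and then direct a table's data and indexes to different table spaces, much as Figure 1-2 describes. The container paths, page count, table name, and column definitions are assumptions made for this sketch and would need to be adapted to a real system.

    -- SMS table space: a directory managed by the operating system's file manager
    CREATE TABLESPACE userdata
        MANAGED BY SYSTEM
        USING ('D:\DB2DATA\USERDATA');

    -- DMS table space: a fixed-size, pre-allocated file managed by the
    -- DB2 Database Manager (size given in pages)
    CREATE TABLESPACE userindex
        MANAGED BY DATABASE
        USING (FILE 'D:\DB2DATA\USERINDEX.DAT' 5000);

    -- Store the table's data in one table space and its indexes in another
    CREATE TABLE payroll.employee_log
        (empid   INTEGER NOT NULL,
         comment VARCHAR(254))
        IN userdata
        INDEX IN userindex;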
Tables

The table is the most fundamental data object of a DB2 database. All user data is stored in and retrieved from one or more tables in a database. Two types of tables can exist in a DB2 database: base tables and result tables. Tables that are created by the user to store user data are known as base tables. Temporary tables that are created (and deleted) by DB2 from one or more base tables to satisfy the result of a query are known as result tables. Each table contains an unordered collection of rows and a fixed number of columns. The definition of the columns in the table makes up the table structure, and the rows contain the actual table data. The storage representation of a row is called a record, and the storage representation of a column is called a field. At each intersection of a row and column in a database table is a specific data item called a value. Figure 1-3 shows the structure of a simple database table.
Figure 1-3: Simple database table
DATA TYPES

Each column in a table is assigned a specific data type during its creation. This action ensures that only data of the correct type is stored in the table's columns. The following data types are available in DB2:

- SMALLINT: A small integer is a binary integer with a precision of 15 bits. The range of a small integer is -32,768 to +32,767.
- INTEGER (INT): An integer is a large binary integer with a precision of 31 bits. The range of an integer is -2,147,483,648 to +2,147,483,647.
- BIGINT: A big integer is a large binary integer with a precision of 63 bits.
- FLOAT (REAL): A single-precision, floating-point number is a 32-bit approximation of a real number. The range of a single-precision, floating-point number is 10.0E-38 to 10.0E+38.
- DOUBLE: A double-precision, floating-point number is a 64-bit approximation of a real number. The number can be zero or can range from -1.79769E+308 to -2.225E-307 and 2.225E-307 to 1.79769E+308.
- DECIMAL (DEC, NUMERIC, NUM): A decimal value is a packed decimal number with an implicit decimal point. The position of the decimal point is determined by the precision and scale of the number. The range of a decimal variable or the numbers in a decimal column is -n to +n, where the absolute value of n is the largest number that can be represented with the applicable precision and scale.
- CHARACTER (CHAR): A character string is a sequence of bytes. The length of the string is the number of bytes in the sequence and must be between 1 and 254.
- VARCHAR: A varying-length character string is a sequence of bytes of varying length, up to 4,000 bytes.
- LONG VARCHAR: A long, varying-length character string is a sequence of bytes of varying length, up to 32,700 bytes.
- GRAPHIC: A graphic string is a sequence of bytes that represents double-byte character data. The length of the string is the number of double-byte characters in the sequence and must be between 1 and 127.
- VARGRAPHIC: A varying-length graphic string is a sequence of bytes of varying length, up to 2,000 double-byte characters.
- LONG VARGRAPHIC: A long, varying-length graphic string is a sequence of bytes of varying length, up to 16,350 double-byte characters.
- BLOB: A binary large object string is a varying-length string, measured in bytes, that can be up to 2GB (2,147,483,647 bytes) long. A BLOB is primarily intended to hold nontraditional data, such as pictures, voice, and mixed media. BLOBs can also hold structured data for user-defined types and functions.
- CLOB: A character large object string is a varying-length string, measured in bytes, that can be up to 2GB long. A CLOB can store large, single-byte character strings or multibyte, character-based data, such as documents written with a single character set.
- DBCLOB: A double-byte character large object string is a varying-length string of double-byte characters that can be up to 1,073,741,823 characters long. A DBCLOB can store large, double-byte, character-based data, such as documents written with a single character set. A DBCLOB is considered to be a graphic string.
- DATE: A date is a three-part value (year, month, and day) designating a calendar date. The range of the year part is 0001 to 9999, the range of the month part is 1 to 12, and the range of the day part is 1 to n (28, 29, 30, or 31), where n depends on the month and whether the year value corresponds to a leap year.
- TIME: A time is a three-part value (hours, minutes, and seconds) designating a time of day under a 24-hour clock. The range of the hours part is 0 to 24, the range of the minutes part is 0 to 59, and the range of the seconds part is also 0 to 59. If the hours part is set to 24, the minutes and seconds must be 0.
- TIMESTAMP: A timestamp is a seven-part value (year, month, day, hours, minutes, seconds, and microseconds) that designates a calendar date and time of day under a 24-hour clock. The ranges for each part are the same as defined for the previous two data types, while the range for the fractional specification of microseconds is 0 to 999,999.
- DISTINCT TYPE: A distinct type is a user-defined data type that shares its internal representation (source type) with one of the previous data types, but is considered to be a separate, incompatible type for most SQL operations. For example, a user can define an AUDIO data type for referencing external .WAV files that uses the BLOB data type for its internal source type. Distinct types do not automatically acquire the functions and operators of their source types, because these items might no longer be meaningful. However, user-defined functions and operators can be created and applied to distinct types to replace this lost functionality.

For more information about DB2 data types, refer to the IBM DB2 Universal Database SQL Reference, Version 5.2 product manual.
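The hypothetical CREATE TABLE statement below shows several of these data types used together; the table and column names are invented for illustration only.

    CREATE TABLE payroll.employee
        (empid      INTEGER      NOT NULL,   -- 31-bit binary integer
         lastname   VARCHAR(30)  NOT NULL,   -- varying-length character string
         deptno     CHAR(3),                 -- fixed-length character string
         salary     DECIMAL(9,2),            -- packed decimal: precision 9, scale 2
         hire_date  DATE,                    -- calendar date
         last_login TIMESTAMP,               -- date and time of day, with microseconds
         photo      BLOB(1M),                -- binary large object (e.g., a picture)
         resume     CLOB(64K));              -- character large object (e.g., a document)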
CHECK CONSTRAINTS

When you create or alter a table, you can also establish restrictions on data entry for one or more columns in the table. These restrictions, known as check constraints, exist to ensure that none of the data entered (or changed) in a table violates predefined conditions. Three types of check constraints exist, as shown in the following list:

- Unique constraint: A rule that prevents duplicate values from being stored in one or more columns within a table.
- Referential constraint: A rule that ensures that values stored in one or more columns in a table can be found in a column of another table.
- Table check constraint: A rule that sets restrictions on all data that is added to a specific table.

The conditions defined for a check constraint cannot contain any SQL queries, and they cannot refer to columns within another table. Tables can be defined with or without check constraints, and check constraints can define multiple restrictions on the data in a table. Check constraints are defined in the CREATE TABLE and ALTER TABLE SQL statements. If you define a check constraint in the ALTER TABLE SQL statement for a table that already contains data, the existing data will usually be checked against the new condition before the ALTER TABLE statement can be successfully completed. You can, however, place the table in a check-pending state with the SET CONSTRAINTS SQL statement, which enables the ALTER TABLE SQL statement to execute without checking existing data. If you place a table in a check-pending state, you must execute the SET CONSTRAINTS SQL statement again after the table has been altered to check the existing data and return the table to a normal state.
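A sketch of the three constraint types follows. The PAYROLL tables, column names, and constraint names are invented for this example and do not come from the book.

    CREATE TABLE payroll.employee
        (empid   INTEGER NOT NULL PRIMARY KEY,
         ssn     CHAR(9) NOT NULL,
         deptno  CHAR(3) NOT NULL,
         salary  DECIMAL(9,2),
         -- unique constraint: no two rows may store the same SSN value
         CONSTRAINT uniq_ssn    UNIQUE (ssn),
         -- table check constraint: restricts the values that can be added
         CONSTRAINT valid_pay   CHECK (salary > 0),
         -- referential constraint: DEPTNO must exist in the DEPARTMENT table
         CONSTRAINT dept_exists FOREIGN KEY (deptno)
                    REFERENCES payroll.department (deptno));

    -- A constraint can also be added later; existing rows are normally checked
    -- against the new condition when the ALTER TABLE statement executes
    ALTER TABLE payroll.employee
        ADD CONSTRAINT max_pay CHECK (salary < 1000000);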
Indexes

An index is an ordered set of pointers to the rows of a base table. Each index is based on the values of data in one or more columns (refer to the definition of key, later in this section), and more than one index can be defined for a table. An index uses a balanced binary tree (a hierarchical data structure in which each element has at most one predecessor but can have many successors) to order the values of key columns in a table. When you index a table by one or more of its columns, DB2 can access data directly and more efficiently, because the index is ordered by the columns to be retrieved. Also, because an index is stored separately from its associated table, the index provides a way to define keys outside of the table definition. Once you create an index, the DB2 Database Manager automatically builds the appropriate binary tree structure and maintains that structure. Figure 1-4 shows a simple table and its corresponding index.

DB2 uses indexes to quickly locate specific rows (records) in a table. If you create an index of frequently used columns in a table, you will see improved performance on row access and updates. A unique index (refer to the following paragraph) helps maintain data integrity by ensuring that each row of data in a table is unique. Indexes also provide greater concurrency when more than one transaction accesses the same table. Because row retrieval is faster, locks do not last as long. These benefits, however, are not without a price. Indexes increase actual disk-space requirements and cause a slight decrease in performance whenever an indexed table's data is updated, because all indexes defined for the table must also be updated.

A key is a column (or set of columns) in a table or index that is used to identify or access a particular row (or rows) of data. A key that is composed of more than one column is called a composite key. A column can be part of several composite keys. A key that is defined in such a way that the key identifies a single row of data within a table is called a unique key. A unique key that is part of the definition of a table is called a primary key. A table can have only one primary key, and the columns of a primary key cannot contain null (missing) values. A key that references (or points to) a primary key in another table is called a foreign key. A foreign key establishes a referential link to a primary key, and the columns defined in each key must match. In Figure 1-4, the EMPID column is the primary key for Table A.
Figure 1-4: Simple database table and its corresponding index, where the EMPID column is the primary key
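For example, the statements below, using the hypothetical PAYROLL.EMPLOYEE table from the earlier sketches, create a unique index that ensures EMPID values are unique and a composite index on a combination of columns that might frequently be searched together.

    -- Unique index: guarantees that no two rows share the same EMPID value
    CREATE UNIQUE INDEX payroll.empid_ix
        ON payroll.employee (empid);

    -- Composite index: speeds access when rows are located by both columns
    CREATE INDEX payroll.name_ix
        ON payroll.employee (lastname, deptno);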
Views

A view is an alternative way of representing data that exists in one or more tables. Essentially, a view is a named specification of a result table. The specification is a predefined data selection that occurs whenever the view is referenced in an SQL statement. For this reason, you can picture a view as having columns and rows, just like a base table. In fact, a view can be used just like a base table in most cases. Although a view looks like a base table, a view does not exist as a table in physical storage, so a view does not contain data. Instead, a view refers to data stored in other base tables. (Although a view might refer to another view, the reference is ultimately to data stored in one or more base tables.) Figure 1-5 illustrates the relationship between two base tables and a view.

Figure 1-5: In this figure, a view is created from two separate tables. Because the EMPID column is common to both tables, the EMPID column joins the tables to create a single view.
A view can include any number of columns from one or more base tables. A view can also include any number of columns from other views, so a view can be a combination of columns from both views and tables. When a column of a view comes from a column of a base table, that column inherits any constraints that apply to the column of the base table. For example, if a view includes a column that is a unique key for its base table, operations performed against that view are subject to the same constraint as operations performed against the underlying base table.
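As an illustrative sketch (again using invented PAYROLL tables), the first statement below defines a view that joins two base tables on their common EMPID column, and the second queries the view exactly as if it were a base table.

    CREATE VIEW payroll.emp_contact AS
        SELECT e.empid, e.lastname, a.phone, a.city
        FROM   payroll.employee e, payroll.address a
        WHERE  e.empid = a.empid;

    -- The view is referenced like any base table; no data is stored in the view itself
    SELECT lastname, phone
    FROM   payroll.emp_contact
    WHERE  city = 'Durham';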
Packages (Access Plans)

A package (or access plan) is an object that contains control structures (known as sections) that are used to execute SQL statements. If an application program intends to access a database using static SQL, the application developer must embed the appropriate SQL statements in the program source code. When the program source code is converted to an executable object (static SQL) or executed (dynamic SQL), the strategy for executing each embedded SQL statement is stored in a package as a single section. Each section is the bound (or operational) form of the embedded SQL statement, and this form contains information such as which index to use and how to use the index. When developing DB2 UDB database applications, you should hide package creation from users whenever possible. Packages and binding are discussed in more detail in Chapter 3, "Getting Started with DB2 Application Development."
Triggers

A trigger is a set of actions that are automatically executed (or triggered) when an INSERT, UPDATE, or DELETE SQL statement is executed against a specified table. Whenever the appropriate SQL statement is executed, the trigger is activated and a set of predefined actions begin execution. You can use triggers along with foreign keys (referential constraints) and check constraints to enforce data integrity rules. You can also use triggers to apply updates to other tables in the database, to automatically generate and/or transform values for inserted or updated rows, and to invoke user-defined functions.

When creating a trigger, in order to determine when the trigger should be activated, you must first define and then later use the following criteria:

- Subject table: The table for which the trigger is defined.
- Trigger event: A specific SQL operation that updates the subject table (an INSERT, UPDATE, or DELETE operation).
- Activation time: Indicates whether the trigger should be activated before or after the trigger event is performed on the subject table.
- Set of affected rows: The rows of the subject table on which the INSERT, UPDATE, or DELETE SQL operation is performed.
- Trigger granularity: Defines whether the actions of the trigger will be performed once for the whole SQL operation or once for each of the rows in the set of affected rows.
- Triggered action: An optional search condition and a set of SQL statements that are to be executed whenever the trigger is activated. The triggered action is executed only if the search condition evaluates to TRUE.

At times, triggered actions might need to refer to the original values in the set of affected rows. This reference can be made with transition variables and/or transition tables. Transition variables are temporary storage variables that use the names of the columns in the subject table and are qualified by a specified name that identifies whether the reference is to the old value (prior to the SQL operation) or the new value (after the SQL operation). Transition tables also use the names of the columns of the subject table, but they have a specified name that enables the complete set of affected rows to be treated as a single table. As with transition variables, transition tables can be defined for both the old values and the new values.

Multiple triggers can be specified for a single table. The order in which the triggers are activated is based on the order in which they were created, so the most recently created trigger will be the last trigger to be activated. Activating a trigger that executes SQL statements may cause other triggers to be activated (or even the same trigger to be reactivated). This event is referred to as trigger cascading. When trigger cascading occurs, referential integrity delete rules can also be activated, thus a single operation can significantly change a database. Therefore, whenever you create a trigger, make sure to thoroughly examine the effects that the trigger's operation will have on all other triggers and referential constraints defined for the database.
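The following hypothetical trigger pulls these pieces together: the subject table is the invented PAYROLL.EMPLOYEE table, the trigger event is an UPDATE of the SALARY column, the activation time is AFTER, the granularity is FOR EACH ROW, and the triggered action logs large raises to an assumed PAYROLL.SALARY_AUDIT table. All object names are assumptions for this sketch.

    CREATE TRIGGER payroll.audit_raise
        AFTER UPDATE OF salary ON payroll.employee     -- activation time, trigger event, subject table
        REFERENCING OLD AS o NEW AS n                  -- transition variables for old and new values
        FOR EACH ROW MODE DB2SQL                       -- trigger granularity
        WHEN (n.salary > o.salary * 1.5)               -- triggered-action search condition
        INSERT INTO payroll.salary_audit               -- triggered action
            VALUES (n.empid, o.salary, n.salary, CURRENT TIMESTAMP);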
Aliases

An alias is an alternate name for a table or view. Aliases can be referenced in the same way the original table or view is referenced. An alias can also be an alternate name for another alias. This process of aliases referring to each other is known as alias chaining. Because aliases are publicly referenced names, no special authority or privilege is needed to use them, unlike tables and views.
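For instance, assuming the PAYROLL.EMPLOYEE table from the earlier sketches exists, an alias can stand in for it in any statement that would otherwise name the table directly:

    CREATE ALIAS staff FOR payroll.employee;

    -- The alias is referenced exactly as the original table would be
    SELECT empid, lastname FROM staff;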
Event Monitors

An event monitor observes each event that happens to another specified object and records all selected events to either a named pipe or to an external file. Essentially, event monitors are "tracking" devices that inform other applications (either via named pipes or files) whenever specified event conditions occur. Event monitors allow you to observe the events that take place in a database whenever database applications are executing against it. Once defined, event monitors can automatically be started each time a database is opened.
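A minimal sketch of defining and activating an event monitor follows. The monitor name and target directory are assumptions, and the exact clauses available may vary by DB2 release.

    -- Record connection events to files in the specified directory, and start the
    -- monitor automatically each time the database is opened
    CREATE EVENT MONITOR conn_events
        FOR CONNECTIONS
        WRITE TO FILE 'D:\DB2MON\CONNEVT'
        AUTOSTART;

    -- Activate the event monitor manually
    SET EVENT MONITOR conn_events STATE 1;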
Schemas

All data objects are organized (by the database administrator) into schemas, which provide a logical classification of the objects in the database. Object names consist of two parts. The first (leftmost) part is called the qualifier, or schema, and the second (rightmost) part is called the simple (or unqualified) name. Syntactically, these two parts are concatenated as a single string of characters separated by a period. When an object such as a table space, table, index, view, alias, user-defined data type, user-defined function, package, event monitor, or trigger is created, that object is assigned to an appropriate schema based on its name. Figure 1-6 illustrates how a table is assigned to a particular schema during the table creation process.

Figure 1-6: Implementing schemas with the CREATE TABLE SQL statement ("CREATE TABLE PAYROLL.STAFF" assigns the STAFF table to the PAYROLL schema)

NOTE: If no schema is specified when an object is created, the DB2 Database Manager uses the creator's user ID as the default schema.

This section completes the discussion of data objects. Now let's examine DB2 support objects. Support objects are database objects that contain descriptions of all objects in the database, provide transaction and failure support, and control system resource usage. Support objects include the following items:

- System catalog tables/views
- Log files and the Recovery History file
- DB2 Database Manager configuration files
- Database configuration files
System Catalog Views

DB2 creates and maintains a set of views and base tables for each database created. These views and base tables are collectively known as the system catalog. The system catalog consists of tables that contain accurate descriptions of all objects in the database at all times. DB2 automatically updates the system catalog tables in response to SQL data definition statements, environment routines, and certain utility routines. The catalog views are similar to any other database views, with the exception that they cannot be explicitly created, updated (with the exception of some specific updateable views), or dropped. You can retrieve data in the catalog views in the same way that you retrieve data from any other view in the database. For a complete listing of the DB2 catalog views, refer to the IBM DB2 Universal Database SQL Reference, Version 5.2 product manual.
Recovery Log Files and the Recovery History File

Database recovery log files keep a running record of all changes made to tables in a database, and these files serve two important purposes. First, they provide necessary support for transaction processing. Because an independent record of all database changes is written to the recovery log files, the sequence of changes making up a transaction can be removed from the database if the transaction is rolled back. Second, recovery log files ensure that a system power outage or application error will not leave the database in an inconsistent state. In the event of a failure, the changes that have been made, but that have not been made permanent (committed), are rolled back. Furthermore, all committed transactions, which might not have been physically written to disk, are redone. Database recovery logging is always active and cannot be deactivated. These actions ensure that the integrity of the database is always maintained.

You can also keep additional recovery log files to provide forward recovery in the event of disk (media) failure. The roll-forward database recovery utility uses these additional database recovery logs, called archived logs, to enable a database to be rebuilt to a specific point in time. In addition to using the information in the active database recovery log to rebuild a database, archived logs are used to reapply previous changes. For roll-forward database recovery to work correctly, you are required to have both a previous backup version of the database and a recovery log containing changes made to the database since that backup was made. The following list describes the types of database recovery logs that can exist:

- Active log files: Active log files contain information for transactions whose changes have not yet been written to the database files. They contain the information necessary to roll back any active transaction not committed during normal processing. Active log files also contain transactions that are committed but are not yet physically written from memory (buffer pool) to disk (database files).
- Online archived log files: An activity parallel to logging exists that automatically dumps the active transaction log file to an archive log file whenever transaction activity ceases, when the active log file is closed, or when the active log file gets full. An archived log is said to be "online" when the archived log is stored in the database log path directory.
- Offline archived log files: Archived log files can be stored in locations other than the database log path directory. An archived log file is said to be "offline" when it is not stored in the database log path directory.
NOTE: If an online archived log file does not contain any active transactions, the file will be overwritten the next time an archive log file is generated. On the other hand, if an archived log file contains active transactions, the file will not be overwritten by other active transaction log dumps until all active transactions stored in the file have been made permanent. If you delete an active log file, the database becomes unusable and must be restored before the database can be used again. If you delete an archived log file (either online or offline), roll-forward recovery will only be possible up to the point in time covered by the log file that was written to before the deleted log file was created.

The recovery history file contains a summary of the backup information that is used to recover part or all of the database to a specific point in time. A recovery history file is automatically created when a database is created, and the file is automatically updated whenever the database is backed up, restored, or populated with the LOAD operation.
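To illustrate how a backup image and archived logs work together, the following command line processor sequence sketches a restore followed by a roll-forward operation. The database name (PAYROLL) and the backup directory are hypothetical, and the database must have been configured for archival logging; consult the Command Reference for the complete syntax and options:

    BACKUP DATABASE payroll TO /db2/backups
    RESTORE DATABASE payroll FROM /db2/backups
    ROLLFORWARD DATABASE payroll TO END OF LOGS AND STOP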
Configuration Files

Similar to all computer software applications, DB2 UDB uses system resources both when it is installed and when it is running. In most cases, run-time resource management (RAM and shared control blocks, for example) is handled by the operating system (OS). If, however, an application is greedy for system resources, problems can occur both for the application and for other concurrently running applications. DB2 provides two sets of configuration parameters that can be used to control its consumption of system resources. One set of parameters, used for the DB2 Database Manager itself, exists in a DB2 Database Manager configuration file. This file contains values that are used to control resource usage when creating databases (database code page, collating sequence, and DB2 release level, for example). This file also controls system resources that are used by all database applications (such as total shared RAM, for example). A second set of parameters exists for each DB2 database created and is stored in a database configuration file. This file contains parameter values that indicate the current state of the database (backup pending flag, database consistency flag, or roll-forward pending flag, for example) and parameter values that define the amount of system resources the database can use (buffer pool size, database logging, or sort memory size, for example). A database configuration file exists for each database, so a change to one database configuration does not have a corresponding effect on other databases. By fine-tuning these two configuration files, you can tailor DB2 for optimum performance in any number of OS environments. For more information about DB2 configuration file parameters, refer to the IBM DB2 Universal Database Administration Guide, Version 5.2 product manual.
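Both sets of configuration parameters can be viewed and changed from the command line processor. The commands below are a sketch only; the database name (PAYROLL) and the buffer pool size value are hypothetical:

    GET DATABASE MANAGER CONFIGURATION
    GET DATABASE CONFIGURATION FOR payroll
    UPDATE DATABASE CONFIGURATION FOR payroll USING BUFFPAGE 1000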
DB2 Database Directories

DB2 UDB uses a set of directories for establishing an environment, storing data objects, and enabling data access to both local and other remote workstations (nodes) and databases. The set of directories used by DB2 contains the following items:

■ One or more physical database directories
■ One or more volume directories
■ A system directory
■ A workstation (node) directory
■ A database connection services directory

These directories define the overall DB2 Database Manager operating environment. Figure 1-7 illustrates DB2's directory structure.
Figure 1-7 DB2's directory structure (system directory, workstation directory, volume directories, physical database directories, and the Database Connection Services directory)
Physical Database Directory

Each time a database is created, the DB2 Database Manager creates a separate subdirectory in which to store control files (such as log header files) and to allocate containers in which default table spaces are stored. Objects associated with the database are usually stored in the database subdirectory, but they can be stored in various other locations, including system devices. All database subdirectories are created within the instance defined in the DB2INSTANCE environment variable or within the instance to which the user application has been explicitly attached. The naming scheme used for a DB2 instance on UNIX platforms is install_path/$DB2INSTANCE/NODEnnnn. The naming scheme used on Intel platforms is drive_letter:\$DB2INSTANCE\NODEnnnn. In both cases, NODEnnnn is the node identifier in a partitioned database environment, where NODE0000 is the first node, NODE0001 is the second node, and so on. The naming scheme for database subdirectories created within an instance is SQL00001 through SQLnnnnn, where the number nnnnn increases each time a new database is created. For example, directory SQL00001 contains all objects associated with the first database created, SQL00002 contains all objects for the second database created, and so on. DB2 automatically creates and maintains these subdirectories.
Volume Directory

In addition to physical database directories, a volume directory exists on every logical disk drive available (on a single workstation) that contains one or more DB2 databases. This directory contains one entry for each database that is physically stored on that particular logical disk drive. The volume directory is automatically created when the first database is created on the logical disk drive, and DB2 updates its contents each time a database creation or deletion event occurs. Each entry in the volume directory contains the following information:

■ The database name, as provided with the CREATE DATABASE command
■ The database alias name (which is the same as the database name)
■ The database comment, as provided with the CREATE DATABASE command
■ The name of the root directory in which the database exists
■ The product name and release number associated with the database
■ Other system information, including the code page the database was created under and the entry type (which is always HOME)
■ The actual number of volume database directories that exist on the workstation, which is the number of logical disk drives on that workstation that contain one or more DB2 databases
System Directory

The system database directory is the master directory for a DB2 workstation. This directory contains one entry for each local and remote cataloged database that can be accessed by the DB2 Database Manager from a particular workstation. Databases are implicitly cataloged when the CREATE DATABASE command or API function is issued and can also be explicitly cataloged with the CATALOG DATABASE command or API function. The system directory exists on the logical disk drive where the DB2 product software is installed. Each entry in the system directory contains the following information:

■ The database name, as provided with the CREATE DATABASE or CATALOG DATABASE command or API function
■ The database alias name (which is usually the same as the database name)
■ The database comment, as provided with the CREATE DATABASE or CATALOG DATABASE command or API function
■ The logical disk drive on which the database exists, if it is local
■ The node name on which the database exists, if it is remote
■ Other system information, including where validation of authentication names (user IDs) and passwords will be performed
Workstation Directory

The workstation or node directory contains one entry for each remote database server workstation that can be accessed. The workstation directory also exists on the logical disk drive where the DB2 product software is installed. Entries in the workstation directory are used in conjunction with entries in the system directory to make connections to remote DB2 UDB database servers. Entries in the workstation directory are also used in conjunction with entries in the database connection services directory to make connections to host (OS/390, AS/400, etc.) database servers. Each entry in the workstation directory contains the following information (a brief cataloging example follows this list):

■ The node name of the remote server workstation where a DB2 database exists
■ The node name comment
■ The protocol that will be used to communicate with the remote server workstation
■ The type of security checking that will be performed by the remote server workstation
■ The host name or address of the remote server
■ The service name or port number for the remote server
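For example, a remote server workstation and one of its databases might be cataloged from the command line processor as sketched below. The node name, host name, port number, and database name are hypothetical; the LIST commands display the resulting workstation and system directory entries:

    CATALOG TCPIP NODE db2srv REMOTE dbhost.mycompany.com SERVER 50000
    CATALOG DATABASE payroll AS payroll AT NODE db2srv
    LIST NODE DIRECTORY
    LIST DATABASE DIRECTORY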
Database Connection Services Directory

A database connection services directory only exists if the DB2 Connect product has been installed on the workstation. This directory exists on the logical disk drive where the DB2 Connect product software is installed. The database connection services directory contains one entry for each host (OS/390, AS/400, etc.) database that DB2 can access via the Distributed Relational Database Architecture (DRDA) services. Each entry in the connection services directory contains the following information:

■ The local database name
■ The target database name
■ The database comment
■ The application requester library file that executes the DRDA protocol to communicate with the host database
■ The user-defined name or nickname for the remote server database
■ The database system used on the remote server workstation
■ Other system information, including a defaults override parameter string that defines SQLCODE mapping requirements, date and time formatting to use, etc.
NOTE: To avoid potential problems, do not create directories that use the same naming scheme as the physical database directories, and do not manipulate the volume, system, node, and database connection services directories that have been created by DB2.
The goal of this chapter is to provide you with an overview of the underlying architecture of a DB2 Universal Database (UDB), Version 5.2 database. You should now understand the relational database model and be familiar with the following data objects and support objects:

■ Data objects
  ■ Databases
  ■ Table spaces
  ■ Tables
  ■ User-Defined Data Types (UDTs)
  ■ User-Defined Functions (UDFs)
  ■ Check constraints
  ■ Indexes
  ■ Views
  ■ Packages (access plans)
  ■ Triggers
  ■ Aliases
  ■ Event monitors
■ Support objects
  ■ System catalog views
  ■ Recovery log files and the Recovery History file
  ■ DB2 Database Manager configuration file
  ■ Database configuration files

Finally, you should be aware of how DB2 UDB creates and uses the following directories and subdirectories on your storage media:

■ Physical database directories
■ Volume directories
■ System directory
■ Workstation directory
■ Database Connection Services directory
You should be comfortable with these basic DB2 database concepts before you begin your database application design work (and especially before you actually begin writing the source code for your application). The next chapter continues to present these concepts by discussing the database consistency mechanisms available in DB2 Universal Database, Version 5.2.
Database Consistency Mechanisms

Once you understand the underlying architecture of DB2 Universal Database, you should become familiar with the mechanisms DB2 uses to provide and maintain data consistency. This chapter is designed to introduce you to the concepts of data consistency and to the three mechanisms DB2 uses to enforce consistency: transactions, locking, and transaction logging. The first part of this chapter defines database consistency and examines some of the requirements a database management system must meet to provide and maintain consistency. This part is followed by a close look at the heart of all data manipulation: the transaction. Next, DB2's locking mechanism is described, and how that mechanism is used by multiple transactions working concurrently to maintain data integrity is discussed. Finally, this chapter concludes with a discussion of transaction logging and the data recovery process used by DB2 to restore data consistency if an application or system failure occurs.
What Is Data Consistency?

The best way to define data consistency is by example. Suppose your company owns a chain of restaurants, and you have a database designed to keep track of supplies stored in each of those restaurants. To facilitate the supplies purchasing process, your database contains an inventory table for each restaurant in the chain. Whenever supplies are received or used by a restaurant, the inventory table for that restaurant is updated. Now, suppose some bottles of ketchup are physically moved from one restaurant to another. The ketchup bottle count value in the donating restaurant's inventory table needs to be lowered, and the ketchup bottle count value in the receiving restaurant's inventory table needs to be raised to accurately represent this inventory move. If a user lowers the ketchup bottle count in the donating restaurant's inventory table but fails to raise the ketchup bottle count in the receiving restaurant's inventory table, the data has become inconsistent. Now, the total ketchup bottle count for the entire chain of restaurants is incorrect.

Data can become inconsistent if a user fails to make all necessary changes (as in the previous example), if the system crashes while the user is in the middle of making changes, or if an application accessing data stops prematurely for some reason. Inconsistency can also occur when several users are accessing the same data at the same time. For example, one user might read another user's changes before the data has been properly updated and take some inappropriate action, or make an incorrect change based on the premature data values read. To properly maintain data consistency, solutions must be provided for the following questions:

■ How can you maintain generic consistency of data if you do not know what each individual data owner or user wants?
■ How can you keep a single application from accidentally destroying data consistency?
■ How can you ensure that multiple applications accessing the same data at the same time will not destroy data consistency?
■ If the system fails while a database is in use, how can the database be returned to a consistent state?

DB2 provides solutions to these questions with its transaction support, locking, and logging mechanisms.

Transactions

A transaction, or a unit of work, is a recoverable sequence of one or more SQL operations grouped together as a single unit within an application process. The initiation and termination of a transaction define the points of data consistency within an application process. Either all SQL operations within a transaction are applied to the data source, or the effects of all SQL operations within a transaction are completely "undone."
Transactions and commitment control are relational database concepts that have been around for quite some time. They provide the capability to commit or recover from pending changes made to a database in order to enforce data consistency and integrity. With embedded SQL applications, transactions are automatically initiated when the application process is started. With Open Database Connectivity (ODBC) and Call Level Interface (CLI), transactions are implicitly started whenever the application begins working with a data source. Regardless of how transactions are initiated, they are terminated when they are either committed or rolled back. When a transaction is committed, all changes made to the data source since the transaction was initiated are made permanent. When a transaction is rolled back, all changes made to the data source since the transaction was initiated are removed, and the data in the data source is returned to its previous state (before the transaction began). In either case, the data source is guaranteed to be in a consistent state at the completion of each transaction.

A commit or rollback operation only affects the data changes made within the transaction it ends. As long as data changes remain uncommitted, other application processes are usually unable to see them, and they can be removed with a rollback operation. However, once data changes are committed, they become accessible to other application processes and can no longer be removed by a rollback operation.

A database application program can do all of its work in a single transaction or spread its work over several sequential transactions. Data used within a transaction is protected from being changed or seen by other transactions through various isolation levels. Transactions provide generic database consistency by ensuring that changes become permanent only when you issue a COMMIT SQL statement or via API calls defined within a Transaction Manager. Your responsibility, however, is to ensure that the sequence of SQL operations in each transaction results in a consistent database. DB2 then ensures that each transaction is either completed (committed) or removed (rolled back) as a single unit of work. If a failure occurs before the transaction is complete, DB2 will back out all uncommitted changes to restore the database consistency that DB2 assumes existed when the transaction was initiated. Figure 2-1 shows the effects of both a successful transaction and a transaction that failed.
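To make this concrete, the restaurant example from the previous section can be expressed as a single transaction. The table and column names (INVENTORY_A, INVENTORY_B, ITEM, and QUANTITY) are hypothetical, and the sketch assumes that automatic commit has been turned off; the point is simply that both UPDATE statements take effect together, or not at all:

    -- Move five bottles of ketchup from restaurant A to restaurant B
    UPDATE inventory_a SET quantity = quantity - 5 WHERE item = 'KETCHUP';
    UPDATE inventory_b SET quantity = quantity + 5 WHERE item = 'KETCHUP';
    COMMIT;
    -- If either UPDATE fails, issue ROLLBACK instead so that neither change is kept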
Concurrency and Transaction Isolation Levels

So far, we have only looked at transactions from a single-user data source point of view. With single-user data sources, each transaction occurs serially and does not have to contend with interference from other transactions. With multi-user data sources, however, transactions can occur simultaneously, and each transaction has the potential to interfere with another transaction. Transactions that have the potential of interfering with one another are said to be interleaved, or parallel, transactions. Transactions that run isolated from each other are said to be serializable, which means that the results of running them simultaneously are the same as the results of running them one right after another (serially).
Figure 2-1 Events that take place during the execution of a successful and an unsuccessful transaction. In a successful transaction, locks are acquired at the start of the transaction, all changes are made permanent when the COMMIT SQL statement is executed, and locks are released at the end of the transaction. In an unsuccessful transaction, the ROLLBACK SQL statement is executed when an error condition occurs, all changes made to the database are removed, and locks are released at the end of the transaction.
Ideally, all transactions should be serializable. So why should transactions be serializable? Consider the following problem. Suppose a salesman is entering orders on a database system at the same time a clerk is sending out bills. Now, suppose the salesman enters an order from Company X but does not commit the order (the salesman is still talking to the representative from Company X). While the salesman is on the phone, the clerk queries the database for a list of all outstanding orders, sees the order for Company X, and sends Company X a bill. Now, suppose the representative from Company X decides to cancel the order. The salesman rolls back the transaction, because the representative changed his mind and the order information was never committed. A week later, Company X receives a bill for a part it never ordered. If the salesman's transaction and the clerk's transaction had been isolated from each other (serialized), this problem would never have occurred. Either the salesman's transaction would have finished before the clerk's transaction started, or the clerk's transaction would have finished before the salesman's transaction started. In either case, Company X would not have received a bill.

When transactions are not isolated from each other in multi-user environments, the following three types of events (or phenomena) can occur as a result:

■ Dirty reads: This event occurs when a transaction reads data that has not yet been committed. For example: Transaction 1 changes a row of data, and Transaction 2 reads the changed row before Transaction 1 commits the change. If Transaction 1 rolls back the change, then Transaction 2 will have read data that is considered never to have existed.

■ Nonrepeatable reads: This event occurs when a transaction reads the same row of data twice but receives different data values each time. For example: Transaction 1 reads a row of data, and Transaction 2 changes or deletes that row and commits the change. If Transaction 1 attempts to reread the row, Transaction 1 retrieves different data values (if the row was updated) or discovers that the row no longer exists (if the row was deleted).

■ Phantoms: This event occurs when a row of data matches a search criteria but initially is not seen. For example: Transaction 1 reads a set of rows that satisfy some search criteria, and Transaction 2 inserts a new row matching Transaction 1's search criteria. If Transaction 1 re-executes the query statement that produced the original set of rows, a different set of rows will be retrieved.

Maintaining database consistency and data integrity while enabling more than one application to access the same data at the same time is known as concurrency. DB2 enforces concurrency by using four different transaction isolation levels. An isolation level determines how data is locked or isolated from other processes while the data is being accessed. DB2 supports the following isolation levels:

■ Repeatable read
■ Read stability
■ Cursor stability
■ Uncommitted read
Repeatable Read

The repeatable read isolation level locks all the rows an application retrieves within a single transaction. If you use the repeatable read isolation level, SELECT SQL statements issued multiple times within the same transaction will yield the same result. A transaction running under the repeatable read isolation level can retrieve and operate on the same rows as many times as needed until the transaction completes. However, no other transactions can update, delete, or insert a row (which would affect the result table being accessed) until the isolating transaction terminates. Transactions running under the repeatable read isolation level cannot see uncommitted changes of other transactions. The repeatable read isolation level does not allow phantom rows to be seen.
Read Stability

The read stability isolation level locks only those rows that an application retrieves within a transaction. This feature ensures that any row read by a transaction is not changed by other transactions until the transaction holding the lock is terminated. However, if a transaction using the read stability isolation level issues the same query more than once, the transaction can retrieve new rows that were entered by other transactions and that now meet the search criteria. This event occurs because the read stability isolation level ensures only that the rows actually retrieved by the transaction remain unchanged (even when temporary tables or row blocking is used); it does not prevent other transactions from inserting new rows that satisfy the search criteria. Thus, the read stability isolation level allows phantom rows to be seen, but it does not allow nonrepeatable reads to occur.
Cursor Stability

The cursor stability isolation level locks any row being accessed by a transaction, as long as the cursor is positioned on that row. This lock remains in effect until the next row is retrieved (fetched) or until the transaction is terminated. If a transaction running under the cursor stability isolation level has retrieved a row from a table, no other transactions can update or delete that row as long as the cursor is positioned on that row. Additionally, if a transaction running under the cursor stability isolation level changes the row it retrieved, no other application can update or delete that row until the isolating transaction is terminated. When a transaction has locked a row with the cursor stability isolation level, other transactions can insert, delete, or change rows on either side of the locked row, as long as the locked row is not accessed via an index. Therefore, the same SELECT SQL statement issued twice within a single transaction might not always yield the same results. Transactions running under the cursor stability isolation level cannot see uncommitted changes made by other transactions. With the cursor stability isolation level, both nonrepeatable reads and phantom reads are possible.
Uncommitted Read

The uncommitted read isolation level allows a transaction to access uncommitted changes made by other transactions (in either the same or in different applications). A transaction running under the uncommitted read isolation level does not lock other applications out of the row it is reading, unless another transaction attempts to drop or alter the table. If a transaction running under the uncommitted read isolation level accesses a read-only cursor, the transaction can access most uncommitted changes made by other transactions. The transaction cannot access tables, views, and indexes that are being created or dropped by other transactions, however, until those transactions are complete. All other changes made by other transactions can be read before they are committed or rolled back. If a transaction running under the uncommitted read isolation level accesses an updateable cursor, the transaction will behave as if the cursor stability isolation level were in effect. With the uncommitted read isolation level, both nonrepeatable reads and phantom reads are possible.
Table 2-1 shows the four transaction isolation levels that are supported by DB2 Universal Database, as well as the types of phenomena that can occur when each one is used.

Table 2-1 Transaction isolation levels supported by DB2 and the phenomena that can occur when each is used

Isolation Level      Dirty Read    Nonrepeatable Read    Phantom
Uncommitted Read     Yes           Yes                   Yes
Cursor Stability     No            Yes                   Yes
Read Stability       No            No                    Yes
Repeatable Read      No            No                    No
Specifying the Isolation Level

You specify the isolation level for an embedded SQL application either at precompile time or when binding the application to a database. In most cases, you set the isolation level for embedded SQL applications with the ISOLATION option of the command line processor PREP or BIND commands. In other cases, you can set an embedded SQL application's isolation level by using the PREP or BIND API functions. The isolation level for a CLI application is set by CLI statement handle attributes. The default isolation level used for all applications is the cursor stability isolation level.
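For example, an embedded SQL program might be precompiled and bound with an explicit isolation level from the command line processor, as sketched below. The source file and bind file names are hypothetical; RR, RS, CS, and UR identify the repeatable read, read stability, cursor stability, and uncommitted read isolation levels, respectively:

    PREP orderapp.sqc BINDFILE ISOLATION RR
    BIND orderapp.bnd ISOLATION RR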
Locking

Along with isolation levels, DB2 uses locks to provide concurrency control and to control data access. A lock is a mechanism that associates a data resource with a single transaction, with the purpose of controlling how other transactions interact with that resource while the resource is associated with the transaction that acquired the lock. The transaction with which the resource is associated is said to "hold" or "own" the lock. When a data resource in the database is accessed by a transaction, that resource is locked according to the previously specified isolation level. This lock prevents other transactions from accessing the resource in a way that would interfere with the owning transaction. Once the owning transaction is terminated (either committed or rolled back), changes made to the resource are either made permanent or are removed, and the data resource is unlocked so it can be used by other transactions. Figure 2-2 illustrates the principles of data resource locking.

If one transaction tries to access a data resource in a way that is incompatible with a lock held by another transaction, that transaction must wait until the owning transaction has ended. This situation is known as a lock wait. When this event occurs, the transaction attempting to access the resource simply stops execution until the owning transaction has terminated and the incompatible lock is released. Locks are automatically provided by DB2 for each transaction, so applications do not need to explicitly request data resource locks.
Figure 2-2 DB2 prevents uncontrolled concurrent table access by using locks. In this example, Transaction 1 has locked Table A, and Transaction 2 must wait until the lock is released before it can execute.
Lock Attributes

All locks used by DB2 have the following basic attributes:

■ Object: The object attribute identifies the data resource being locked. Tables are the only data resource objects that can be explicitly locked by an application. DB2 can set locks on other types of resources, such as rows and tables, but these locks are used for internal purposes only.

■ Size: The size attribute specifies the physical size of the portion of the data resource that is being locked. A lock does not always have to control an entire data resource. For example, rather than giving an application exclusive control over an entire table, DB2 can give the lock exclusive control over only the row that needs to be changed.

■ Duration: The duration attribute specifies the length of time a lock is held. The isolation levels described earlier control the duration of a lock.

■ Mode: The mode attribute specifies the type of access permitted for the lock owner, as well as the type of access permitted for concurrent users of the locked data resource. Mode is sometimes referred to as the "state" of the lock.
Lock States

As a transaction performs its operations, DB2 automatically acquires locks on the data resources it references. These locks are placed on a table, a row (or multiple rows), or both a table and a row (or rows). The only object a transaction can explicitly acquire a lock for is a table, and a transaction can only change the state of row locks by issuing a COMMIT or a ROLLBACK SQL statement.
The locks that are placed on a data resource by a transaction can have one of the following states:

■ Next Key Share (NS): If a lock is set in the Next Key Share state, the lock owner and all concurrent transactions can read, but cannot change, data in the locked row. Only individual rows can be locked in the Next Key Share state. This lock is acquired in place of a Share lock on data that is read using the read stability or cursor stability transaction isolation level.

■ Share (S): If a lock is set in the Share state, the lock owner and any other concurrent transactions can read, but cannot change, data in the locked table or row. As long as a table is not Share locked, individual rows in that table can be Share locked. If, however, a table is Share locked, no row Share locks can be set in that table by the lock owner. If either a table or a row is Share locked, other concurrent transactions can read the data, but they cannot change the data.

■ Update (U): If a lock is set in the Update state, the lock owner can update data in the locked data table. Furthermore, the update operation automatically acquires Exclusive locks on the rows it updates. Other concurrent transactions can read, but not update, data in the locked table.

■ Next Key Exclusive (NX): If a lock is set in the Next Key Exclusive state, the lock owner can read, but not change, the locked row. Only individual rows can be locked in the Next Key Exclusive state. This lock is acquired on the next row in a table when a row is deleted from or inserted into the index for a table.

■ Next Key Weak Exclusive (NW): If a lock is set in the Next Key Weak Exclusive state, the lock owner can read, but not change, the locked row. Only individual rows can be locked in the Next Key Weak Exclusive state. This lock is acquired on the next row in a table when a row is inserted into the index of a non-catalog table.

■ Exclusive (X): If a table or row lock is set in the Exclusive state, the lock owner can both read and change data in the locked table, but only transactions using the uncommitted read isolation level can access the locked table or row(s). Exclusive locks are best used with data resources that are to be manipulated with the INSERT, UPDATE, and/or DELETE SQL statements.

■ Weak Exclusive (W): If a lock is set in the Weak Exclusive state, the lock owner can read and change the locked row. Only individual rows can be locked in the Weak Exclusive state. This lock is acquired on a row when the row is inserted into a non-catalog table.

■ Super Exclusive (Z): If a lock is set in the Super Exclusive state, the lock owner can alter a table, drop a table, create an index, or drop an index. This lock is automatically acquired on a table whenever a transaction performs any one of these operations. No other concurrent transactions can read or update the table until this lock is removed.
In addition to these eight primary locks, there are four more special locks that are only used on tables. They are called intention locks and are used to signify that rows within the table may eventually become locked. These locks are always placed on the table before any rows within the table are locked. Intention locks can have one of the following states:

■ Intent None (IN): If an intention lock is set in the Intent None state, the lock owner can read data in the locked data table, including uncommitted data, but cannot change this data. In this mode, no row locks are acquired by the lock owner, so other concurrent transactions can read and change data in the table.

■ Intent Share (IS): If an intention lock is set in the Intent Share state, the lock owner can read data in the locked data table but cannot change the data. Again, because the lock owner does not acquire row locks, other concurrent transactions can both read and change data in the table. When a transaction owns an Intent Share lock on a table, the transaction acquires a Share lock on each row it reads. This intention lock is acquired when a transaction does not convey the intent to update any rows in the table.

■ Intent Exclusive (IX): If an intention lock is set in the Intent Exclusive state, the lock owner and any other concurrent transactions can read and change data in the locked table. When the lock owner reads data from the data table, the lock owner acquires a Share lock on each row it reads and an Update and Exclusive lock on each row it updates. Other concurrent transactions can both read and update the locked table. This intention lock is acquired when a transaction conveys the intent to update rows in the table. The SELECT FOR UPDATE, UPDATE ... WHERE, and INSERT SQL statements convey the intent to update.

■ Share with Intent Exclusive (SIX): If an intention lock is set in the Share with Intent Exclusive state, the lock owner can both read and change data in the locked table. The lock owner acquires Exclusive locks on the rows it updates but not on the rows it reads, so other concurrent transactions can read, but not update, data in the locked table.
As a transaction performs its operations, DB2 automatically acquires appropriate locks as data objects are referenced. Figure 2-3 illustrates the logic DB2 uses to determine the type of lock to acquire on a referenced data object.
Locks and Application Performance

When developing DB2 applications, you must be aware of several factors concerning the use of locks and the effect they have on the performance of an application. The following factors can affect application performance:

■ Concurrency versus lock size
■ Deadlocks
■ Lock compatibility
■ Lock conversion
■ Lock escalation
Figure 2-3 Logic used by DB2 to determine which type of lock(s) to acquire (the decision depends on whether an index scan or a table scan is used and whether an update is intended)
CONCURRENCY VERSUS LOCK SIZE As long as multiple transactions access tables only for the purpose of reading data, concurrency should be only a minor concern. What becomes more of an issue is the situation in which at least one transaction writes to a table. Unless an appropriate index is defined on a table, there is almost no concurrent write access to that table. Concurrent updates are only possible with Intent Share or Intent Exclusive locks. If no index exists for the locked table, the entire table must be scanned for the appropriate data rows (table scan). In this case, the transaction must hold either a Share or an Exclusive lock on the table. Simply creating indexes on all tables does not guarantee concurrency. DB2's optimizer decides for you whether indexes are used in processing your SQL statements, so even if you have defined indexes, the optimizer might choose to perform a table scan for any of several reasons:

■ No index is defined for your search criteria (WHERE clause). The index key must match the columns used in the WHERE clause in order for the optimizer to use the index to help locate the desired rows. If you choose to optimize for high concurrency, make sure your table design includes a primary key for each table that will be updated. These primary keys should then be used whenever these tables are referenced with an UPDATE SQL statement.

■ Direct access might be faster than access via an index. The table must be large enough that the optimizer thinks it is worthwhile to take the extra step of going through the index, rather than just searching all the rows in the table. For example, the optimizer would probably not use any index defined on a table with only four rows of data.
■ A large number of row locks will be acquired. If many rows in a table will be accessed by a transaction, the optimizer will probably acquire a table lock.

Any time one transaction holds a lock on a table or row, other transactions might be denied access until the owner transaction has terminated. To optimize for maximum concurrency, a small, row-level lock is usually better than a large table lock. Because locks require storage space (to keep) and processing time (to manage), however, you can minimize both of these factors by using one large lock rather than many small ones.
DEADLOCKS When two or more transactions are contending for locks, a situation known as a deadlock can occur. Consider the following example. Transaction 1 locks Table A with an Exclusive lock, and Transaction 2 locks Table B with an Exclusive lock. Now, suppose Transaction 1 attempts to lock Table B with an Exclusive lock, and Transaction 2 attempts to lock Table A with an Exclusive lock. Both transactions will be suspended until their second lock request is granted. Because neither lock request can be granted until one of the transactions performs a COMMIT or ROLLBACK operation, and because neither transaction can perform a COMMIT or ROLLBACK operation because they are both suspended (waiting on locks), a deadlock situation has occurred. Figure 2-4 illustrates this scenario.
Figure 2-4 Deadlock cycle between two transactions: Transaction 1 is waiting for Transaction 2 to release its lock on Table B, while Transaction 2 is waiting for Transaction 1 to release its lock on Table A.
A deadlock is more precisely referred to as a "deadlock cycle," because the transactions involved in a deadlock form a circle of wait states. Each transaction in the circle is waiting for a lock held by one of the other transactions in the circle. When a deadlock cycle occurs, all the transactions involved in the deadlock will wait indefinitely, unless an outside agent performs some action to end the deadlock cycle. Because of this, DB2 contains an asynchronous system background process associated with each active database that is responsible for finding and resolving deadlocks in the locking subsystem. This background process is called the deadlock detector. When a database becomes active, the deadlock detector is started as part of the process that initializes the database for use. The deadlock detector stays "asleep" most of the time but "wakes up" at preset intervals to look for the presence of deadlocks between transactions using the database. Normally, the deadlock detector sees that there are no deadlocks on the database and goes back to sleep. If, however, the deadlock detector discovers a deadlock on the database, the detector selects one of the transactions in the cycle to roll back and terminate. The transaction that is rolled back receives an SQL error code, and all of its locks are released. The remaining transaction can then proceed, because the deadlock cycle is broken. The possibility exists (although it is unlikely) that more than one deadlock cycle exists on a database. If this is the case, the detector will find each remaining cycle and terminate one of the offending transactions in the same manner until all deadlock cycles are broken.

Because at least two transactions are involved in a deadlock cycle, you might assume that two data objects are always involved in the deadlock. This is not true. A certain type of deadlock, known as a conversion deadlock, can occur on a single data object. A conversion deadlock occurs when two or more transactions already hold compatible locks on an object, and then each transaction requests a new, incompatible lock mode on that same object. A conversion deadlock usually occurs between two transactions searching for rows via an index (index scan). Using an index scan, each transaction acquires Share and Exclusive locks on rows. When each transaction has read the same row and then attempts to update that row, a conversion deadlock situation occurs.

Application designers need to watch out for deadlock scenarios when designing high-concurrency applications that are to be run by multiple concurrent users. In situations where the same set of rows will likely be read and then updated by multiple copies of the same application program, that program should be designed to roll back and retry any transactions that might be terminated as a result of a deadlock situation. As a general rule, the shorter the transaction, the less likely the transaction will be to get into a deadlock cycle. Setting the proper interval for the deadlock detector (in the database configuration file) is also necessary to ensure good concurrent application performance. An interval that is too short will cause unnecessary overhead, and an interval that is too long will enable a deadlock cycle to delay a process for an unacceptable amount of time. You must balance the possible delays in resolving deadlocks against the overhead of deadlock detection.
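The interval at which the deadlock detector wakes up is controlled by the DLCHKTIME database configuration parameter, which is specified in milliseconds. As a sketch (the database name and value are hypothetical), the interval can be examined and changed from the command line processor:

    GET DATABASE CONFIGURATION FOR payroll
    UPDATE DATABASE CONFIGURATION FOR payroll USING DLCHKTIME 10000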
LOCK COMPATIBILITY If the state of one lock placed on a data resource enables another lock to be placed on the same resource, the two locks (or states) are said to be compatible. Whenever one transaction holds a lock on a data resource and a second transaction requests a lock on the same resource, DB2 examines the two lock states to determine whether they are compatible. If the locks are compatible, the lock is granted to the second transaction (as long as no other transaction is waiting for the data resource). If the locks are incompatible, however, the second transaction must wait until the first transaction releases its lock. (In fact, the second transaction must wait until all existing incompatible locks are released.) Table 2-2 shows a lock compatibility matrix that identifies which locks are compatible and which are not.
Table 2-2 Lock compatibility matrix (adapted from the IBM DB2 Universal Database Embedded SQL Programming Guide, page 143). The matrix lists the lock state held (none, IN, IS, NS, S, IX, SIX, U, NX, NW, X, W, and Z) along one axis and the lock state requested along the other. A YES entry means the locks are compatible, and the requested lock is granted; a NO entry means the locks are not compatible, and the requesting transaction must wait for the held lock to be released or for a timeout to occur.

Lock type abbreviations used in the matrix:

IN   Intent None
IS   Intent Share
NS   Next Key Share
S    Share
IX   Intent Exclusive
SIX  Share with Intent Exclusive
U    Update
NX   Next Key Exclusive
NW   Next Key Weak Exclusive
X    Exclusive
W    Weak Exclusive
Z    Super Exclusive
LOCK CONVERSION
When a transaction accesses a data resource on which the transaction already holds a lock, and the mode of access requires a more restrictive lock than the one the transaction already holds, the state of the lock is changed to the more restrictive state. The operation of changing the state of a lock already held to a more restrictive state is called a lock conversion. Lock conversion occurs because a transaction can only hold one lock on a data resource at a time. The conversion case for row locks is simple. A conversion only occurs if an Exclusive lock is needed and a Share or Update lock is held. More distinct lock conversions exist for tables than for rows. In most cases, conversions result in the requested lock state becoming the new state of the lock currently held whenever the requested state is the higher state. Intent Exclusive and Share locks, however, are special cases, because neither is considered to be more restrictive than the other. If one of these locks is held and the other is requested, the resulting conversion is to a Share with Intent Exclusive lock. Lock conversion can only increase the restrictiveness of a lock. Once a lock has been converted, the lock stays at the highest level obtained until the transaction is terminated.
LOCK ESCALATION All locks require space for storage, and because this space is finite, DB2 limits the amount of space the system can use for locks. Furthermore, a limit is placed on the space each transaction can use for its own locks. A process known as lock escalation occurs when too many record locks are issued in the database and one of these space limitations is exceeded. Lock escalation is the process of converting several locks on individual rows in a table into a single, table-level lock. When a transaction requests a lock after the lock space is full, one of its tables is selected, and lock escalation takes place to create space in the lock list data structure. If enough space is not freed, another table is selected for escalation, and so on, until enough space has been freed for the transaction to continue. If there is still not enough space in the lock list after all the transaction's tables have been escalated, the transaction is asked to either commit or roll back all changes made since its initiation (that is, the transaction receives an SQL error code, and the transaction is terminated).

An important point to remember is that an attempted escalation only occurs for the transaction that encounters a limit. This situation happens because, in most cases, the lock storage space will be filled when that transaction reaches its own transaction lock limit. If the system storage lock space limit is reached, however, a transaction that does not hold many locks might try to escalate, fail, and then be terminated. This event means that offending transactions holding many locks over a long period of time can cause other transactions to terminate prematurely. If escalation becomes objectionable, there are several ways to solve the problem (a brief configuration sketch follows this list):

■ Increase the number of locks enabled in the database configuration file (with a corresponding increase in memory). This solution might be the best if concurrent access to the table by other processes is important. A point of diminishing returns exists on index access and record locking, even when concurrency is the primary concern. The overhead of obtaining record-level locks can impose more delays on other processes, which negates the benefits of concurrent access to the table.

■ Locate and adjust the offending transaction(s), which might be the one(s) terminating prematurely, and explicitly issue LOCK TABLE SQL statements within the transaction(s). This choice might be the best if memory size is crucial, or if an extremely high percentage of rows are being locked.

■ Change the degree of transaction isolation being used.

■ Increase the frequency of commit operations.
Transaction Logging

Transaction logging is simply a method of keeping track of the changes that have been made to a database. Every change made to a row of data in a database table is recorded in the active log file as an individual log record. Each log record enables DB2 to either remove or apply the data change to the database. To fully understand transaction logging operations, you should know what the transaction log contains, how transaction logging works, how the transaction log gets synchronized, and how to manage log file space.
HOW TRANSACTION LOGGING WORKS Each change to a row in a table is made with an INSERT, UPDATE, or DELETE SQL statement. If you use the INSERT SQL statement, a transaction record containing the new row is written to the log file. If you use the UPDATE SQL statement, transaction records containing the old row information and the new row information are written to the log file (two separate records are written). If you use the DELETE SQL statement, a transaction record containing the old row information is written to the log file. These types of transaction log records make up the majority of the records in the transaction log file. Other transaction records also exist, which indicate whether a ROLLBACK or a COMMIT SQL statement was used to end a transaction. These records end a sequence of data log records for a single transaction. Whenever a ROLLBACK or a COMMIT log record is written, the record is immediately forced out to the active log file. This action ensures that all the log records of a completed transaction are in the log file and will not be lost due to a system failure. Because more than one transaction might be using a database at any given time, the active log file contains the changes made by multiple transactions. To keep everything straight in the log, each log record contains an identifier of the transaction that created the record. In addition, all the log records for a single transaction are chained together. Once a transaction is committed, all log records for that transaction are no longer needed (after all changes made by that transaction are physically written to disk). If a ROLLBACK occurs, DB2 processes each log record written by the transaction in reverse order and backs out all changes made. Both "before" and "after" image UPDATE records are written to the log file for this reason.
LOG FILE AND DATABASE SYNCHRONIZATION DB2 can maintain consistency only by keeping the log file and database synchronized. This synchronization is achieved with a write-ahead logging technique. When a transaction changes a row in a table, that change is actually made in a memory buffer contained in the database buffer pool and is written to disk later. As a result, the most current data changes made to a working database are in the buffer pool, not on the disk. Write-ahead logging preserves consistency by writing the log record of a row change to the disk before the change itself is written from the memory buffer to the disk. Log records are written to disk whenever a transaction terminates, or whenever the buffer pool manager writes the memory buffer to the disk database. If the system crashes, the log file and database will no longer be synchronized. Fortunately, the log file contains a record of every uncommitted change made to the database, because the log record of the change is forced to disk before the actual change is written. This event enables the recovery process to restore the database to a consistent state. The recovery process is discussed in more detail in the Database Recovery section later in this chapter.
MANAGING LOG FILE SPACE It was mentioned earlier that DB2 writes records sequentially to the log file in order to support transactions. Because the log file grows until the file is reset, if no limits were imposed on the log file size, all free space on the system disk would eventually become full of log records. DB2's Log Manager controls the size of the log file, and whenever possible, the Log Manager resets the log to an empty state. The growth of the log is controlled by the initial size of the primary log files, the size limit for each secondary log file, and the number of primary and secondary log files being used. When the primary log file is filled, the Log Manager allocates space for a secondary log file, and the overflow is stored in that secondary file. Whenever the primary log file becomes empty due to transaction inactivity (that is, no transactions have uncommitted records in the log), the primary log file is reset and any secondary log files that have been allocated are released. If a transaction runs out of log space, either because the maximum primary log file size was reached and a secondary file was not used, or because there was too little disk space to allocate the next secondary log file, a rollback occurs and the transaction is terminated. Regardless of the cause, this process continues until the log's inactive state is reached and the log is reset to its minimum size.

If two or more continuously overlapping transactions (e.g., high volume and high activity rate) are running, the primary log file might never be reset. Continuously overlapping transactions are not likely, but they can happen when two or more transactions starting at close intervals use the same database. When designing a database system in which the transaction arrival rate is high, you should increase the log file size to reduce the probability of transactions being rolled back due to insufficient log file space. A lengthy transaction (one that causes many log records to be written before they are committed) can also prevent the primary log file from being reset. You must consider how these transactions are used, as well as the amount of log file space needed to support them, when designing the database system. If other transactions are running concurrently with a lengthy transaction, the log file space requirement will increase. A lengthy transaction should probably run by itself, and the transaction should probably open the database for exclusive usage and fill up the log file before committing its changes. Any transaction that never ends execution (that is, never performs a ROLLBACK or COMMIT) is a faulty application, because the transaction will eventually cause itself, and possibly other transactions, to fail.
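The log file sizes and counts described above are controlled by database configuration parameters: LOGFILSIZ sets the size of each log file (in 4KB pages), and LOGPRIMARY and LOGSECOND set the number of primary and secondary log files. The following command is a sketch only; the database name and values are hypothetical:

    UPDATE DATABASE CONFIGURATION FOR payroll
        USING LOGFILSIZ 1000 LOGPRIMARY 5 LOGSECOND 10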
Part 1: Basic Database Concepts DATABASE RECOVERY Database recovery is the process of returning the data in a database to a consistent state aftera system failure (such as a power failure in' the middle of a work.session) occurs. If a DB2 database is active when a system failure occurs,that database is left in an inconsistent state until the database is accessed again. At that time, a special recovery processis executed that restores the database to a new, consistent state. This new, consistent state isdefined by the transaction boundaries of any applicationsthat were usingthe database when the system failure occurred. This recovery process is made possible by the database log file (see Recovery Log File in Chapter 1,"DB2 Database Architecture"). Becausethe log file contains botha "before" and "after" imageof every change madeto a row, all transaction records stored in the log file canbe either removed from or added to the database as necessaq. DB2 determines whetherdatabase recovery is needed by examining the recovery log file the firsttime a database is opened after a system failure occurs. Ifthe log file shows that the database was notshut down normally,the disk imageof the database could be inconsistent. That's because changes made by completedtransactions (still in the memory buffers) might have been lost. To restore the database to a consistent state, DB2 does the following actions: Any change madeby a transaction that was in flight (had not been committed or rolled back)is removed from the database. DB2 works backwardthrough the log file; ifan uncommitted changeis found, the record is restored to the "before" image retrieved fromthe log file. Any change madeby a committed transaction that isnot found in the database is written to the database. As DB2 scans the log file, any committed log records found that arenot in thedatabase are written to the database. W If a transaction was in the process of being rolled back,the roll back operation is completed so that all changes madeto the databaseby that transaction are removed. Because DB2 knows that changes are only consistent whenthey are explicitly committed, all work done by in-flight transactions is considered inconsistentand must be backed out of the database to preserve database consistency. As described previously, during the recovery process DB2 must scan the log file to restore the databaseto a consistent state. While scanning the log file, DB2 reads, the database to determine whether the database contains the committed or uncommitted changes. If the log file is large, you could spend quite a while scanningthe whole log and reading associated rows from the database. Fortunately, scanningthe whole logis usually unnecessary, becausethe actions recordedat the beginning of the log file have been in thelog file longerthan the other actions. "he chance is greater, then, that their transactions have been completed and that the data has already been written to the database; therefore, no recovery actions are required for the log records generated by these transactions. If some way existedto skip theselog recordsduring the m v e r y process, the length of time necessaryto recover the entire database could be shortened.This is the purpose of the soft checkpoint,which establishes a pointer in the log at which to begin database recov-
All log file records recorded before this checkpoint are the result of completed transactions, and their changes have already been written to the database. A soft checkpoint is most useful when log files are large, because the checkpoint can reduce the number of log records that are examined during database recovery; the more often the soft checkpoint is updated, the faster the database can be recovered.
SUMMARY
You will find it extremely important to understand the mechanisms DB2 uses to ensure database consistency before designing your database application. Unfortunately, this aspect is one of the more complicated topics of database application design. This chapter was designed to provide you with an overview of the database consistency mechanisms found in DB2 Universal Database, Version 5.2. You should now know what database consistency is and how to maintain it. You should also be familiar with transactions and how your application uses them to maintain data integrity. Furthermore, you should be familiar with the following transaction isolation levels:

- Repeatable read
- Read stability
- Cursor stability
- Uncommitted read

You should also understand the following lock attributes:

- Object
- Size
- Duration
- Mode

And you should understand the difference between the following lock states:

- Intent None (IN)
- Intent Share (IS)
- Next Key Share (NS)
- Share (S)
- Intent Exclusive (IX)
- Share with Intent Exclusive (SIX)
- Update (U)
- Next Key Exclusive (NX)
- Next Key Weak Exclusive (NW)
- Exclusive (X)
- Weak Exclusive (W)
- Super Exclusive (Z)
You should also be familiar with lock size, deadlocks, lock compatibility, lock conversion, and lock escalation. Finally, you should be aware of how transaction logging works and how transaction logs are used to restore database consistency in the event of a system failure. As you build your database application, you will need to understand most of the information covered in this chapter. Incorporating this information in your application during the design and development process will help you catch and hopefully avoid potential problems in your application design.
Getting Started with DB2 Application Development

The DB2 database application development process begins with the application design and continues with the actual source code development. This chapter is designed to introduce you to the elements that can be used to drive your application's design. The first part of this chapter defines a simple application program and explains how a DB2 database application differs. This is followed by an introduction to the four main elements that are used to develop DB2 applications. Next, directions for establishing a DB2 database application development and testing environment are discussed. Finally, a brief overview of transaction management and source code creation and preparation is provided. We will begin by answering the question, "What is a DB2 database application?"
WHAT IS A DB2 DATABASE APPLICATION?

Before identifying the basic elements of a DB2 database application, let's examine the basic elements of a simple application. Most simple applications contain five essential parts:

- Input
- Logic (decision control)
- Memory (data storage and retrieval)
- Arithmetic (calculation)
- Output

Input is defined as the way an application receives the information it needs in order to produce solutions for the problems that it was designed to solve. Once input has been received, logic takes over and determines what information should be placed in or taken out of memory (data storage) and what arithmetic operations should be performed. Nondatabase applications use functions supplied by the operating system to store data in (and retrieve data from) simple, byte-oriented files. Once the application has produced a solution to the problem that it was designed to solve, it provides appropriate output of either an answer or a specific action.
A DB2 database application differs from a simple application chiefly in its method of data storage and retrieval: instead of operating system file input/output (I/O), it uses a database, which provides more than just data storage and retrieval and, thanks to the nonprocedural nature of SQL, takes over much of the data-access control (logic) as well. Figure 3-1 illustrates the basic elements of a DB2 database application.
How can the functional dependencies in the database be isolated? Once the application design has been prepared, the design considerations should include the transaction definitions used in the application.
High-Level Programming Language

A high-level programming language provides the framework within which all SQL statements, CLI function calls, and API calls are contained. This framework enables you to control the sequence of your application's tasks (logic) and provides a way for your application to collect user input and produce appropriate output. A high-level programming language also enables you to use operating system calls and DB2 application elements (SQL statements, CLI function calls, and API calls) within the same application program. In essence, the high-level programming language can take care of everything except data storage and retrieval. By combining OS calls and DB2 elements, you can develop DB2 database applications that incorporate OS-specific file I/O for referencing external data files. You can also use the high-level programming language to incorporate Presentation Manager functions, User Interface class library routines, and/or Microsoft Foundation Class (MFC) library routines in the application for both collecting user input and displaying application output. Additionally, by building a DB2 database application with a high-level language, you can exploit the capabilities of the computer hardware to enhance application performance (i.e., optimizing for high-level processors such as the Pentium III processor) and simplify user interaction (i.e., using special I/O devices such as light pens and scanners). DB2 Universal Database, Version 5.2, provides support for the following high-level languages:

- C
- C++
- COBOL
- FORTRAN
- REXX
- Visual BASIC (through the DB2 Stored Procedure Builder)
SQL Statements

SQL is a standardized language that is used to define, store, manipulate, and retrieve data in a relational database management system. SQL statements are executed by DB2, not by the operating system. Because SQL is nonprocedural by design, it is not an actual programming language; therefore, most database applications are a combination of the decision and sequence control of a high-level programming language and the data storage, manipulation, and retrieval capabilities of SQL statements. Two types of SQL statements can be embedded in an application program: static SQL statements and dynamic SQL statements. Each has its advantages and disadvantages.
STATIC SQL

A static SQL statement is an SQL statement that is hard-coded in an application program when a source code file is written. Because high-level programming language compilers cannot interpret SQL statements, all source code files containing static SQL statements must be processed by an SQL precompiler before they can be compiled. Likewise, DB2 cannot work directly with high-level programming language variables. Instead, DB2 must work with host variables that are defined in a special place within an embedded SQL source code file (so the SQL precompiler can recognize them). The SQL precompiler is responsible for translating all SQL statements found in a source code file into their appropriate host-language function calls and for converting the actual SQL statements into host-language comments. The SQL precompiler is also responsible for evaluating the declared data types of host variables and determining which data conversion methods to use when moving data to and from the database. Additionally, the SQL precompiler performs error checking on each coded SQL statement and ensures that appropriate host-variable data types are used for their respective table column values.

Static SQL has one distinct advantage over dynamic SQL. Because the structure of the SQL statements used is known at precompile time, the work of analyzing the statement and creating a package containing a data access plan is done during the development phase. Thus, static SQL executes quickly, because its operational form already exists in the database at application run time. The down side to this property is that all static SQL statements must be prepared (i.e., their access plan must be stored in the database) before they can be executed, and they cannot be altered at run time. Because of this characteristic, if an application uses static SQL, its operational package(s) must be "bound" to each database the application will work with before the static SQL statements can be executed.
NOTE: Because static SQL applications require prior knowledge of database, table, schema, and field names, changes made to these objects after the application is developed could produce undesirable results.
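The following fragment sketches what a static SQL statement with host variables looks like in C. It is only an illustration: the table and column names are not taken from this book, and the fragment assumes it appears inside a larger program that has already connected to a database.

    EXEC SQL INCLUDE SQLCA;
    EXEC SQL BEGIN DECLARE SECTION;
        short hvDeptNumb;                 /* input host variable           */
        char  hvDeptName[15];             /* output host variable          */
    EXEC SQL END DECLARE SECTION;

    /* The statement text is fixed at precompile time; only the values    */
    /* supplied through host variables change at run time.                */
    hvDeptNumb = 20;
    EXEC SQL SELECT deptname INTO :hvDeptName
             FROM org
             WHERE deptnumb = :hvDeptNumb;

    if (sqlca.sqlcode == 0)
        printf("Department 20 is %s\n", hvDeptName);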
DYNAMIC SQL

Although static SQL statements are fairly easy to use, they are limited because their format must be known in advance by the precompiler, and they can only use host variables. A dynamic SQL statement does not have a precoded, fixed format, so the data objects the statement uses can change each time the statement is executed. This feature is useful for an application that has an SQL requirement in which the format and syntax of the SQL statement is not known at the time the source code is written. Dynamic SQL statements do not have to be precompiled and bound to the database they will access. Instead, they are combined with high-level programming language statements to produce an executable program, and all binding takes place at run time, rather than during compilation. Because dynamic SQL statements are dynamically created according to the flow of application logic at execution time, they are more powerful than static SQL statements. Unfortunately, dynamic SQL statements are also more complicated to implement. Additionally, because dynamic SQL statements must be prepared at application run time, most will execute more slowly than their equivalent static SQL counterparts. However, because dynamic SQL statements use the most current database statistics during execution, there are some cases in which a dynamic SQL statement will execute faster than an equivalent static SQL statement. Dynamic SQL statements also enable the optimizer to see the real values of arguments, so they are not confined to the use of host variables. Figure 3-2 shows how both static SQL and dynamic SQL applications interact with a DB2 database.
Figure 3-2 How SQL applications interact with a DB2 database. The operational form of static SQL statements is stored as packages in the database; applications containing static SQL statements use these packages to access table data at application run time. The operational form of dynamic SQL statements is created automatically at application run time; temporary access plans, generated when the dynamic SQL statements are prepared, are then used to access table data.
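A minimal dynamic SQL sketch, again as a fragment of a larger embedded SQL program in C, builds the statement text at run time and executes it in one step. The statement and table name are hypothetical.

    EXEC SQL INCLUDE SQLCA;
    EXEC SQL BEGIN DECLARE SECTION;
        char stmtText[200];
    EXEC SQL END DECLARE SECTION;

    /* The statement text is not known until run time.  A SELECT would    */
    /* instead use PREPARE together with a cursor or an SQLDA.            */
    sprintf(stmtText,
            "UPDATE org SET deptname = 'Head Office' WHERE deptnumb = %d",
            10);
    EXEC SQL EXECUTE IMMEDIATE :stmtText;

    if (sqlca.sqlcode != 0)
        printf("EXECUTE IMMEDIATE failed: SQLCODE = %ld\n",
               (long)sqlca.sqlcode);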
CLI Function Calls

DB2's Call Level Interface (CLI) is a collection of API function calls that were developed specifically for database access. To understand the Call Level Interface, you need to understand the basis of DB2's CLI and how it compares with existing callable SQL interfaces. In the early 1990s, the X/Open Company and the SQL Access Group (SAG), now a part of X/Open, jointly developed a standard specification for a callable SQL interface called the X/Open Call-Level Interface, or X/Open CLI. The goal of the X/Open CLI was to increase the portability of
database applications by enabling them to become independent of any one database management system's programming interface. Most of the X/Open CLI specifications were later accepted as part of a new ISO CLI international standard. DB2's CLI is based on this ISO CLI standard interface specification.

In 1992, Microsoft Corporation developed a callable SQL interface, ODBC, for the Microsoft Windows operating system. ODBC is based on the X/Open CLI standards specification but provides extended functions that support additional capability. The ODBC specification also defines an operating environment where database-specific ODBC drivers are dynamically loaded (based on the database name provided with the connection request) at application run time by an ODBC Driver Manager. This Driver Manager provides a central point of control for each data-source-specific library (driver) that implements ODBC function calls and interacts with a specific database management system (DBMS). By using drivers, an application can be linked directly to a single ODBC driver library, rather than to each DBMS itself. When the application runs, the ODBC Driver Manager mediates its function calls and ensures that they are directed to the appropriate driver. Figure 3-3 shows how CLI applications interact with a DB2 database via the ODBC Driver Manager and the DB2 CLI driver.

Applications that incorporate DB2's CLI are linked directly to the DB2 CLI load library. The DB2 CLI load library can then be loaded as an ODBC driver by any ODBC Driver Manager, or it can be used independently. DB2's CLI provides support for all ODBC 3.x Level 1 functions except SQLBulkOperations(); all ODBC Level 2 functions except SQLDrivers(); some X/Open CLI functions; and some DB2-specific functions. The CLI specifications defined for ISO, X/Open, ODBC, and DB2 are continually evolving in a cooperative manner to produce new functions that provide additional capabilities.

The important difference between embedded dynamic SQL statements and CLI function calls lies in how the actual SQL statements are invoked. With dynamic SQL, an application prepares and executes SQL for a single DBMS (in this case, DB2). For a dynamic SQL application to work with a different DBMS, the application would have to be precompiled and recompiled for that DBMS. With CLI, an application uses procedure calls at execution time to perform SQL operations. Because CLI applications do not have to be precompiled, they can be executed on a variety of database systems without undergoing any alteration.
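A minimal CLI sketch in C is shown below. The database name, user ID, password, statement text, and the header file name (DB2's CLI definitions are typically in sqlcli1.h) are assumptions made for illustration only.

    #include <stdio.h>
    #include <sqlcli1.h>                 /* DB2 CLI / ODBC definitions    */

    int main(void)
    {
        SQLHANDLE  hEnv, hDbc, hStmt;
        SQLINTEGER count;
        SQLINTEGER countInd;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &hEnv);
        SQLAllocHandle(SQL_HANDLE_DBC, hEnv, &hDbc);
        SQLConnect(hDbc, (SQLCHAR *)"SAMPLE", SQL_NTS,
                   (SQLCHAR *)"userid", SQL_NTS,
                   (SQLCHAR *)"password", SQL_NTS);

        SQLAllocHandle(SQL_HANDLE_STMT, hDbc, &hStmt);
        SQLExecDirect(hStmt, (SQLCHAR *)"SELECT COUNT(*) FROM org", SQL_NTS);
        SQLBindCol(hStmt, 1, SQL_C_LONG, &count, 0, &countInd);
        if (SQLFetch(hStmt) == SQL_SUCCESS)
            printf("Row count: %ld\n", (long)count);

        SQLFreeHandle(SQL_HANDLE_STMT, hStmt);
        SQLDisconnect(hDbc);
        SQLFreeHandle(SQL_HANDLE_DBC, hDbc);
        SQLFreeHandle(SQL_HANDLE_ENV, hEnv);
        return 0;
    }

Because the statement text is simply a string passed to the driver at run time, the same program could be pointed at a different DBMS without being precompiled or rebound.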
Figure 3-3 How CLI applications interact with a DB2 database (via the ODBC Driver Manager and the DB2 ODBC driver).

API Function Calls

Application Programming Interface (API) function calls are a collection of DB2 product-specific function calls that provide services other than the data storage, manipulation, and retrieval services that are provided by SQL statements and CLI function calls. API calls are embedded within a high-level programming language and operate in a fashion similar to other host-language function calls. Each API function has both a call and a return interface, and the calling application must wait until a requested API function completes before it can continue. The services provided by DB2 API function calls can be divided into the following categories:

- Database manager control APIs
- Database manager configuration APIs
- Database control APIs
- Database configuration APIs
- Database directory management APIs
- Client/server directory management APIs
- Node management APIs
- Network support APIs
- Backup/recovery APIs
- Operational utility APIs
- Database monitoring APIs
- Data utility APIs
- General application programming APIs
- Application preparation APIs
- Remote server APIs
- Table space management APIs
- Transaction APIs
- Miscellaneous APIs
An application can use APIs to access DB2 facilities that are not available via SQL statements or CLI function calls. In addition, you can write applications containing only APIs that will perform the following functions:

- Manipulate the DB2 environment by cataloging and uncataloging databases and workstations (nodes), by scanning system database and workstation directories, and by creating, deleting, and migrating databases
- Perform routine database maintenance by backing up and restoring databases and by exporting data to and importing data from external data files
- Manipulate the DB2 Database Manager configuration file and other DB2 database configuration files
- Perform specific client/server operations
- Provide a run-time interface for precompiled SQL statements
- Precompile embedded SQL applications
- Bulk load tables by importing data from external data files

Figure 3-4 illustrates how an application containing the BACKUP API interacts with the DB2 Database Manager to back up a DB2 database.
Figure 3-4 How a BACKUP API call is processed by DB2.
Establishing the DB2 Database Application Development Environment

Before you can begin developing DB2 database applications, you must establish the appropriate application development/operating system environment by performing the following steps:
1. Install the appropriate DB2 Universal Database software product on the workstation that will be used for application development. If the application will be developed in a client-server environment, you must install the DB2 Universal Database server software on the workstation that will act as the server and install the appropriate DB2 Universal Database Client Application Enabler (CAE) software on all client workstations. You also must install a communication protocol that is common to both client and server workstations.
2. Install and properly configure the DB2 Universal Database Software Developer's Kit (SDK) software on all workstations that will be used for application development.
3. Install and properly configure a high-level language compiler on all workstations that will be used for application development.
4. Make sure you can establish a connection to the appropriate database(s).

For additional information on how to accomplish these tasks, refer to the installation documentation for DB2 Universal Database, DB2 Universal Database SDK, the compiler being used, and the appropriate communications package.

You can develop DB2 database applications on any workstation that has the DB2 SDK installed. You can run DB2 database applications either at a DB2 server workstation or on any client workstation that has the appropriate DB2 CAE software installed. You can even develop applications in such a way that one part of the application runs on the client workstation and another part runs on the server workstation. When a DB2 database application is divided across workstations in this manner, the part that resides on the server workstation is known as a stored procedure.

To precompile, compile, and link DB2 database applications, your environment paths need to be properly set. If you follow the installation instructions that come with the DB2 Universal Database SDK and the supported high-level language compiler, your environment should automatically support application development. If, however, after installing the DB2 Universal Database SDK and your high-level language compiler you are unable to precompile, compile, and link your application, check the environment paths and make sure they point to the correct drives and directories.
NOTE: Although environment paths are usually set appropriately during the installation process, the compiler/development interface being used may require that these paths be explicitly provided.
Establishing the DB2 Database Application Testing Environment

As with any other application, the best way to ensure that a DB2 database application performs as expected is to thoroughly test it. You must perform this testing during both the actual development of the application and after the application coding phase has been completed. To thoroughly test your application, establish an appropriate testing environment that includes the following items:

- A testing database
- Appropriate testing tables
- Valid test data
Creating a Testing Database

If your application creates, alters, or drops tables, views, indexes, or any other data objects, you should create a temporary database for testing purposes. If your application inserts, updates, or deletes data from tables and views, you should also use a testing database to prevent your application from corrupting production-level data while it is being tested. You can create a testing database in any of the following ways:

- By writing a small application that calls the CREATE DATABASE API, either with a high-level programming language (such as C) or as a command file with REXX
- By issuing the CREATE DATABASE command from the DB2 command-line processor
- By backing up the production database and restoring it on a dedicated application development and/or testing workstation
Creating Testing Tables and Views

To determine which testing tables and views you will need in the test database, you must first analyze the data needs of the application (or part of the application) being tested. You can perform this analysis by preparing a list of all data needed by the application and then describing how each data item in the list is going to be accessed. When the analysis is complete, you can construct the test tables and views that are necessary for testing the application in any of the following ways:
- By writing a small application in a high-level programming language that executes the CREATE TABLE or CREATE VIEW SQL statement and creates all necessary tables and views. (This application could be the same application that creates the testing database, provided static SQL is not used.)
- By issuing the CREATE TABLE or CREATE VIEW SQL statement from the DB2 command-line processor
- By backing up the production database and restoring it on a dedicated application development and/or testing workstation

If you are developing the database schema along with the application, you may need to refine the definitions of the test tables repeatedly throughout the development process. Data objects such as tables and views usually cannot be created and accessed within the same database application, because the DB2 Database Manager cannot bind SQL statements to data objects that do not exist. To make the process of creating and changing data objects less time-consuming, and to avoid this type of binding problem, you can create a separate application that constructs all necessary data objects as you are developing the main application. When the main application development is complete, you can then use the application that creates the data objects to construct production databases. If appropriate, this application can then be incorporated into the main application's installation program.
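A minimal sketch of such a separate object-creation program, written with embedded SQL in C, is shown below. The database name, table, and view definitions are hypothetical, and error checking is abbreviated.

    #include <stdio.h>

    EXEC SQL INCLUDE SQLCA;

    int main(void)
    {
        EXEC SQL CONNECT TO testdb;      /* the testing database          */

        EXEC SQL CREATE TABLE emp_test
                 (empno    INTEGER NOT NULL PRIMARY KEY,
                  lastname VARCHAR(30),
                  salary   DECIMAL(9,2));

        EXEC SQL CREATE VIEW emp_salary AS
                 SELECT empno, salary FROM emp_test;

        if (sqlca.sqlcode != 0)
            printf("Object creation failed: SQLCODE = %ld\n",
                   (long)sqlca.sqlcode);

        EXEC SQL COMMIT;
        EXEC SQL CONNECT RESET;
        return 0;
    }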
Generating Test Data

The data an application uses during testing should represent all possible data input conditions. If the application is designed to check the validity of input data, the test data should include both valid and invalid data. This is necessary to verify that valid data is processed appropriately and that invalid data is detected and handled correctly. You can insert test data into tables in any of the following ways (a short INSERT sketch follows the list):
- By writing a small application that executes the INSERT SQL statement. This statement will insert one or more rows into the specified table each time the statement is issued.
- By writing a small application that executes the INSERT ... SELECT SQL statement. This statement will obtain data from an existing table and insert the data into the specified table each time the statement is issued.
- By writing a small application that calls the IMPORT API. You can use this API to load large amounts of new or existing data, or you can use this API in conjunction with the EXPORT API to duplicate one or more tables that have already been populated in a production database.
- By writing a small application that calls the LOAD API. You can also use this API to bulk load large amounts of new or existing data into a database.
- By backing up the production database and restoring it on a dedicated application development and/or testing workstation.
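For example, a small embedded SQL loop can generate rows of test data; the table, columns, and value ranges below are hypothetical, and the fragment assumes a connection already exists.

    EXEC SQL BEGIN DECLARE SECTION;
        long   hvEmpNo;
        double hvSalary;
    EXEC SQL END DECLARE SECTION;

    /* Insert 100 rows of generated test data into the test table.        */
    for (hvEmpNo = 1; hvEmpNo <= 100; hvEmpNo++)
    {
        hvSalary = 30000.0 + (hvEmpNo % 10) * 1000.0;
        EXEC SQL INSERT INTO emp_test (empno, lastname, salary)
                 VALUES (:hvEmpNo, 'TESTER', :hvSalary);
    }
    EXEC SQL COMMIT;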
Managing Transactions

You might recall from Chapter 2, "Database Consistency Mechanisms," that transactions were described as the basic building blocks that DB2 uses to maintain database consistency. All data storage, manipulation, and retrieval must be performed within one or
more transactions, and any application that successfully connects to a database automatically initiates a transaction. The application, therefore, must end the transaction by issuing either a COMMIT or a ROLLBACK SQL statement (or by calling the SQLEndTran() CLI function), or by disconnecting from the database (which causes the DB2 Database Manager to automatically perform a COMMIT operation).
NOTE: You should not disconnect from a database and allow the DB2 Database Manager to automatically end the transaction, because some database management systems behave differently than others (for example, DB2/400 will perform a ROLLBACK instead of a COMMIT).

The COMMIT SQL statement makes all changes in the transaction permanent, while the ROLLBACK SQL statement removes all these changes from the database. Once a transaction has ended, all locks held by the transaction are freed, and another transaction can access the previously locked data. (Refer to Chapter 2, "Database Consistency Mechanisms," for more information.) Applications should be developed in such a way that they end transactions on a timely basis, so other applications (or other transactions within the same application) are not denied access to necessary data resources for long periods of time. Applications should also be developed in such a way that their transactions do not inadvertently cause deadlock situations to occur. During the execution of an application program, you can issue explicit COMMIT or ROLLBACK SQL statements to ensure that transactions are terminated on a timely basis. Keep in mind, however, that once a COMMIT or ROLLBACK SQL statement has been issued, its processing cannot be stopped, and its effects cannot easily be reversed.
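The usual embedded SQL pattern, shown here as a sketch with a hypothetical unit of work, is to commit only when every statement in the unit succeeds and to roll back otherwise.

    EXEC SQL BEGIN DECLARE SECTION;
        long hvFromAcct = 1001;           /* hypothetical account keys     */
        long hvToAcct   = 2002;
    EXEC SQL END DECLARE SECTION;

    /* Both updates must succeed, or neither should be applied.           */
    EXEC SQL UPDATE accounts SET balance = balance - 100
             WHERE acct_id = :hvFromAcct;
    if (sqlca.sqlcode == 0)
    {
        EXEC SQL UPDATE accounts SET balance = balance + 100
                 WHERE acct_id = :hvToAcct;
    }

    if (sqlca.sqlcode == 0)
    {
        EXEC SQL COMMIT;                  /* make both changes permanent   */
    }
    else
    {
        EXEC SQL ROLLBACK;                /* back out the unit of work     */
    }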
Creating and Preparing Source Code Files

The high-level programming language statements in an application program are usually written to a standard ASCII text file, known as a source code file, which can be edited with any text or source code editor. The source code files must have the proper file extension for the host language in which the code is written (i.e., C source files have a .C extension, and COBOL source files have a .COB extension) for the high-level language compiler to know what to do with them. If your application is written in an interpreted language such as REXX, you can execute the application directly from the operating system command prompt by entering the program name after connecting to the required database. Applications written in interpreted host languages do not need to be precompiled, compiled, or linked. However, if your application was written in a compiled host language such as C, you must perform additional steps to build your application. Before you can compile your program, you must precompile it if it contains embedded SQL. Simply stated, precompiling is the process of converting embedded SQL statements into DB2 run-time API calls that a host compiler can process. The SQL calls are then stored in a package, in a bind file, or in
both, depending upon the precompiler options specified. After the program is precompiled, compiled, and linked, the program must then be bound to the test or the production database. Binding is the process of creating a package from the source code or bind file and storing the package in the database for later use. If your application accesses more than one database, and if it contains embedded SQL, it must be bound to each database used before it can be executed. Precompiling and binding are only required if the source files contain embedded SQL statements; if they contain only CLI function calls and/or API calls, precompiling and binding are not necessary.
SUMMARY

The goal of this chapter was to provide you with an overview of the DB2 database application development process. You should now understand what a DB2 database application is, and you should be familiar with some of the issues that affect database application design. You should also be familiar with the following application development building blocks:

- A high-level programming language
- SQL statements
- CLI function calls
- API calls

You should also be able to establish a DB2 database application development environment and create testing databases, testing tables, and test data. Finally, you should have some understanding about the way source code files are created and converted into executable programs. Chapter 4 continues to present DB2 database application development fundamentals by focusing on the development of CLI applications for DB2 Universal Database, Version 5.2.
Writing API Applications

DB2 Universal Database application programming interface (API) calls are a set of functions that are not part of the standard SQL sublanguage nor the Call-Level Interface (CLI) routines. Where SQL and CLI are used to add, modify, and retrieve data from a database, API calls provide an interface to the DB2 Database Manager. API calls are often included in embedded SQL or CLI applications to provide additional functionality that is not covered by SQL or CLI (for example, starting and stopping DB2's Database Manager). You can develop complete API applications that control database environments, modify database configurations, and perform administrative tasks. API applications can also be used to fine-tune database performance and perform routine database maintenance.
This chapter begins by describing the basic structure of an API database application source-code file. Then, the types of API calls available with DB2, their naming conventions, and the special data structures some API calls use are discussed. Next, information on how to evaluate API return codes and display error messages is provided. Finally, this chapter concludes with a brief discussion on using the compiler and linker to convert an API application source-code file to an executable program.
The Basic Structure of an API Source-Code File

An API application program source-code file can be divided into two main parts: the header and the body. The header contains, among other things, the host-language compiler preprocessor statements that are used to merge the contents of the appropriate DB2 API header file(s) with the host-language source-code file. These header files contain the API function prototype definitions and the structure templates for the special data structures that are required by some of the APIs. The body contains the local variable declaration statements, the variable assignment statements, and one or more API function calls. The body also contains additional processing statements and error-handling statements that may be required in order for the application to perform the desired task. The sample C programming language source code in Figure 4-1 illustrates these two parts, along with some of the C language statements that might be found in them.
    /* API SOURCE CODE FRAMEWORK                                          */

    /* ----------------------------- HEADER ----------------------------- */
    /* Include Appropriate Header Files */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sqlca.h>
    #include <sqlenv.h>

    /* Declare Function Prototypes */
    int main(int argc, char *argv[]);

    /* ------------------------------ BODY ------------------------------ */
    /* Declare Main Procedure */
    int main(int argc, char *argv[])
    {
        /* Declare Local Variables */
        struct sqledinfo *pDbDirInfo = NULL;
        unsigned short   usHandle   = 0;
        unsigned short   usDBCount  = 0;
        struct sqlca     sqlRetCode;

        /* Get The Database Directory Information                         */
        /* (a NULL path opens a scan of the system database directory)    */
        sqledosd(NULL, &usHandle, &usDBCount, &sqlRetCode);

        /* Scan The Directory Buffer And Print Info */
        for (; usDBCount != 0; usDBCount--)
        {
            sqledgne(usHandle, &pDbDirInfo, &sqlRetCode);
            printf("%.8s\t",  pDbDirInfo->alias);
            printf("%.8s\t",  pDbDirInfo->dbname);
            printf("%.30s\n", pDbDirInfo->comment);
        }

        /* Free Resources (Directory Info Buffer) */
        sqledcls(usHandle, &sqlRetCode);

        /* Return To The Operating System */
        return ((int) sqlRetCode.sqlcode);
    }

Figure 4-1 Parts of an API source-code file.

Types of API Function Calls

DB2's rich set of API function calls can be divided into the following categories:

- Database Manager Control APIs
- Database Manager Configuration APIs
- Database Control APIs
- Database Configuration APIs
- Database Directory Management APIs
- Client/Server Directory Management APIs
- Node Management APIs
- Network Support APIs
- Backup/Recovery APIs
- Operational Utility APIs
- Database Monitoring APIs
- Data Utility APIs
- General Application Programming APIs
- Application Preparation APIs
- Remote Server Connection APIs
- Table Space Management APIs
- Transaction APIs
- Miscellaneous APIs

Each API function call falls into one of these categories according to its functionality. The following describes each of these categories in more detail.
DATABASE MANAGER CONTROL APIs. Database Manager control APIs are a set of functions that start and stop the DB2 Database Manager background process. This background process must be running before any application can gain access to a DB2 database.
DATABASE MANAGER CONFIGURATION APIs. The Database Manager configuration APIs are a set of functions that can be used to retrieve, change, or reset the information stored in the DB2 Database Manager configuration file. The DB2 Database Manager configuration file contains configuration parameters that affect the overall performance of DB2's Database Manager and its global resources. These APIs are rarely used in embedded SQL or CLI application programs; only API application programs that provide some type of generalized database utilities have a use for these functions.
DATABASE CONTROL APIs. Database control APIs are a set of functions that can be used to create new databases, drop (delete) or migrate existing databases, and restart DB2 databases that were not stopped correctly (for example, databases that were open when a system failure occurred).
DATABASE CONFIGURATION APIs. Every database has its own configuration file that is automatically created when the CREATE DATABASE API call (or command) is executed. Database configuration APIs are a set of functions that can be used to retrieve, change, or reset the information stored in these database configuration files. Each database configuration file contains configuration parameters that affect the performance of an individual database and its resource requirements. These APIs are rarely used in embedded SQL or CLI application programs; only API application programs that provide some type of generalized database utilities have a use for these functions.
DATABASE DIRECTORY MANAGEMENT APIs. Database directory management APIs are a set of functions that can be used to catalog and uncatalog databases, change database comments (descriptions), and view the entries stored in the DB2 database directory.
CLIENT/SERVER DIRECTORY MANAGEMENT APIs. Client/server directory management APIs are a set of functions that can be used to catalog and uncatalog databases that are accessed via DB2 Connect. These APIs can also view the entries stored in the DB2 DCS directory.

NODE MANAGEMENT APIs. Node management APIs are a set of functions that can be used to catalog and uncatalog remote workstations, and view the entries stored in the DB2 workstation directory.
NETWORK SUPPORT APIs. Network support APIs are a set of functions that can be used to register and deregister a DB2 database server workstation's address in the NetWare bindery (on the network server).
BACKUP/RECOVERY APIs. Backup/recovery APIs are a set of functions that can be used to back up and restore databases, perform roll-forward recoveries on databases, and view the entries stored in a DB2 database recovery history file. Every database has its own recovery history file, which is automatically created when the CREATE DATABASE API call (or command) is executed. Once created, this history file is automatically updated whenever the database or its table space(s) are backed up or restored. The recovery history file can be used to reset the database to the state it was in at any specific point in time.
OPERATIONAL UTILITY APIs. Operational utility APIs are a set of functions that can be used to change lock states on table spaces, reorganize the data in database tables, update statistics on database tables, and force all users off a database (i.e., break all connections to a database).

DATABASE MONITORING APIs. Database monitoring APIs are a set of functions that can be used to collect information about the current state of a DB2 database.
DATA UTILITY APIs. Data utility APIs are a set of functions that can be used to import data from and export data to various external file formats.

GENERAL APPLICATION PROGRAMMING APIs. General application programming APIs are a set of functions that can be used in conjunction with embedded SQL statements, CLI function calls, and/or other API function calls to develop robust database application programs. These APIs perform such tasks as retrieving SQL and API error messages, retrieving SQLSTATE values, installing signal and interrupt handlers, and copying and freeing memory buffers used by other APIs.

APPLICATION PREPARATION APIs. Application preparation APIs are a set of functions that can be used to precompile, bind, and rebind embedded SQL source-code files.
REMOTE SERVER CONNECTION APIs. Remote server connection APIs are a set of functions that can be used to attach to and detach from workstations (nodes) at which instance-level functions are executed. These functions essentially establish (and remove) a logical instance attachment to a specified workstation and start (or end) a physical communications connection to that workstation.
TABLE SPACE MANAGEMENT APIs. Table space management APIs are a set of functions that can be used to create new table spaces and retrieve information about existing table spaces.
TRANSACTION APIs. Transaction APIs are a set of functions that allow two-phase commit-compliant applications to execute problem-solving functions on transactions that are tying up system resources (otherwise known as indoubt transactions).
For DB2, these resources include locks on tables and indexes, log space, and transaction storage memory. Transaction APIs are used in applications that need to query, commit, roll back, or cancel indoubt transactions. You can cancel indoubt transactions by removing log records and releasing log pages associated with the transaction.
MISCELLANEOUS APIs. Miscellaneous APIs are a set of functions that do not fall into any of the categories previously listed. Some of these functions can be used to retrieve user authorization information, set and retrieve settings for connections made by application processes, and provide accounting information to DRDA servers.
API Naming Conventions

Although most embedded SQL statements and CLI functions are endowed with long and descriptive names, most of DB2's API functions do not follow this pattern. Instead, many API function names creatively pack as much information as possible into eight characters. The conventions used by DB2 for naming most API functions are as follows:

- Each API name begins with the letters sql.
- The fourth character in each API name denotes the functional area to which the API call belongs:
  -e for C-specific environment services
  -u for C-specific general utilities
  -f for C-specific configuration services
  -a for C-specific application services
  -m for C-specific monitor services
  -b for C-specific table and table-space query services
  -o for C-specific SQLSTATE services
  -g for a generic (language-independent) version of the above services
- The last four characters describe the function. As you might imagine, this four-letter limitation results in some strange abbreviations.
NOTE: In later versions of DB2 Universal Database, there is no limitation on the length of API function names. However, the first four characters continue to follow these naming conventions.
There is a C-language version of each API function that has been optimized for the C/C++ programming language. There is also a language-independent generic version that corresponds to almost every C-language API function available. If you are developing an API application program with C or C++, you should use the C-language specific version of the API function. However, if you are developing an API application program with another programming language, such as COBOL, REXX, or FORTRAN, you must use the generic API functions instead of their C-language counterparts.
API Data Structures

Often, API application source-code files must contain declaration statements that create special data structure variables. These special data structure variables are used either to provide input information to, or to obtain return information from, specific API functions. Table 4-1 lists the names of the data structure templates that are stored in the DB2 API header files, along with brief descriptions of what each data structure is used for. API data structure templates are made available to C and C++ applications when the appropriate compiler preprocessor statements (#include statements for the corresponding DB2 header files) are placed in the header portion of the source-code file. Once the structure templates are made available to the source-code file, a corresponding structure variable must be declared and initialized before it can be used in an API function call. In addition to API-specific structure templates, every source-code file that contains one or more API calls must, at a minimum, declare a variable of the SQLCA data structure type. This variable is used by almost every API function to return status information to the calling application after the API call completes execution.

Table 4-1
Data Structures Used by DB2's API Functions
rfwd_input
Passes information neededfor a roll forward recovery operationto the ROLLFORWARD DATABASE U I .
rfwd-output
Returns information generated by a roll forward recovery operationto an application program.
aql-authorieatione
Returns user authorization information to an application program.
eql-dir-entry
Transfers Database Connection Services directory information between an application programand DB2.
eqla-flaginfo
Holds flagger information.
eqlb-flagmsge
Holds flagger messages.
eqlb-tbegtate
Returns table space statistics information to an application program.
SQLB-TBSCONTQRY-DATA
Returns table space container data to an application program.
SQLB-TBSQRY-DATA
Returns table space data to an application program.
SQLB-QUESCER-DATA
Holds table space quescer information.
eqlca
Returns error and warning information to an application program.
eqlchar
Transfers variable length data between an application programand DB2.
sqlda
Transfers collections of data between an application program and DB2.
eqldcol
k
eqle-admoptione
Pasees tabel space information to the ADD
eqle-client-info
Transfers client information betweenan application program and DB2.
eqle-conn-setting
Specifies connection setting types Crype 1or Type 2) and values.
eqlepode-appc
Passes information for catalogingAFTC nodes to the CATALOQ NODE API.
s column informationto the IMPORT and EXPORT "Is. NODE API.
1
A Part 2: Application Development Fundamentals
sqle-node-appn
Passes information for catalogingAPPN nodes to the CATAGOG NODE API.
eqle-node-cpic
Passes information for cataloging CPIC nodes to the CATALOGNODE API.
eqle-node-ipxepx
Passes information for catalogingIpx/spxnodes to the CATALOG NODE
API. eqle-node-local
Passes information for cataloging LOCAL nodes to the CATALOG NODE
API. eqle-node-netb
Passes information for cataloging NetBIOS nodes to the CATALOG NobE API.
eqle-node-npipe
Passes information for catalogingnamed pipes to the CATALOG NODE API.
eqle-node-struct
Passes information for cataloging all nodes to the CATALOGNODE API.
eqle-node-tcpip
Passes information for cataloging TCP/IP nodes to the CATALOG NOD& API .
eqle-reg-nwbindery
Passes information for registering or deregisterhg the DB2 server idfrom the bindery onthe NetWare file server.
eqle-start-options
Passes start-up option infromationto the START API.
eqleabcountryinfo
Transfers country information between an application program and DB2.
eqledbaeec
Passes creation parameters to the CREATE DATABASEAPI.
SQLETSDESC
Passes table space description informationto the CREATE DATABASE API.
SQLETSCDESC
Passes table space container description informationto the CREATE DATABASE API.
eqleabetopopt
Passes stop information to the STOP DATABASE MANAaER API.
sqledinfo
Returns database directory information about a singleentry in the system or localdatabase directory to an application program.
eqleninfo
Returns node directory information about a singleentry in the node directory to an application program.
eqlfupa
Passes database and Database Manager co&guration file information between an application programand DB2.
sqh-collected
Transfers Database System Monitorcolledion count informationbetween an application program and DB2.
eqh-recording-group
Transfers Database System Monitor monitor group information between an application programand DB2.
eqlaQtimestamlY
Holds Database System Monitor monitorgroup timestamp information.
DATABASE MANAQER
Chapter 4: Writing API Applications Table 4-1
Data Structures Used by DBZS MI Functions (Continued)
S U h
Sends database system monitor requests from an application programto
DB2. eq&obj-etruct
Sends database system monitor requests from an application programto DB2.
SUlopt eqloptheader BqlOptiOllS
Passes bind options information to the BIND API and precompile options information to the PRECOMPILE PRAPI.
SQLU-LSN
Transfers log sequencenumber information between an application program and DB2.
eqlu-media-list sqlu-rnedialiet-target eqlu-media-entry sqlu-vendor . aqlu-location-entry
Holds a list of target media (BACKUP) or source media (RESTORE) for a backup image.
SQLU-RLOG-INFO
Transfers log status information between an application programand DB2.
eqlu-tableepace-bkret-list eqlu-tablespace-entry
Passes a list of table-space namea to an application program.
eqluexpt-out
Returns information generated by an export to an application program.
eqluhinfo eqluht e9 Elqluhaam
Passes information fromthe recovery history f l e to an application Program.
epluimpt-in
Passes information needed for an import operation to the IMPORTAPI. Returns information generated by an import operation to an application
epluimpt-out
Prngram. eqluload-in
Passesinformation needed for a bulk load operation to the LOAD API.
sizluload-out
Returns information generated by a bulk load operationto an application Prngram. Passes new logfle directory information to the ROLLFORWARD DATABASE API.
eplurf-newloppath eplurfjnfo
&turnsinformation generated by a rollforwad database operation to an application program.
splupi eqlpart-key
Transfers partitioning information between an application programand
SQIIXA-RECOVER
Provides a list of indoubt transactions to an application program.
SQLXA-XID
Used to identify atransaction to an application program.
DB2.
Error Handling

You have already seen that error handling is an important part of every DB2 embedded SQL and CLI database application. The same holds true for DB2 API applications. Whenever an API call is executed, status information is returned to the calling application by the following:

- The API function return code
- The SQLCA data structure variable

Like embedded SQL and CLI applications, the best way to handle error conditions is with a common error-handling routine.
Evaluating Return Codes

Whenever an API function call is executed, a special value, known as a return code, is returned to the calling application. A common error-handling routine should first determine whether or not an error or warning condition has occurred by checking this return code value. If the error-handling routine discovers that an error or warning has occurred, it should examine the SQL code that is also returned and process the error accordingly. At a minimum, an error-handling routine should notify users that an error or warning has occurred and provide enough information so the problem can be corrected.
Evaluating SQLCA Return Codes

It was mentioned earlier that each API application must declare a variable of the SQLCA data structure type. Whenever an API function is invoked from an application program, the address of this variable is always passed as an output parameter value. This variable is then used by DB2 to store status information when the API function completes execution. If an error or warning condition occurs during the execution of an API function call, an error return-code value is returned to the calling application and additional information about the warning or error is placed in the SQLCA data structure variable. To save space, this information is stored in the form of a coded number. However, you can invoke the GET ERROR MESSAGE API, using this data structure variable, to translate the coded number into a more meaningful description, which can then be used to correct the problem. Incorporating the GET ERROR MESSAGE API call into your API application during the development phase will help you quickly determine when there is a problem in the way an API call was invoked. Incorporating the GET ERROR MESSAGE API call into your API application will also let the user know why a particular API function failed at application run time.
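A minimal error-handling sketch using the GET ERROR MESSAGE API is shown below. In C the call is sqlaintp(); the header file names, buffer size, and line width shown are assumptions made for illustration.

    #include <stdio.h>
    #include <sqlca.h>
    #include <sql.h>

    /* Print a readable message for whatever is currently in the SQLCA. */
    void report_error(struct sqlca *pSqlca)
    {
        char msg[1024];

        if (pSqlca->sqlcode != 0)
        {
            /* sqlaintp() = GET ERROR MESSAGE: translates the coded     */
            /* SQLCA information into message text.                      */
            if (sqlaintp(msg, sizeof(msg), 80, pSqlca) > 0)
                printf("SQLCODE %ld: %s\n", (long)pSqlca->sqlcode, msg);
        }
    }

Such a routine can be called after every API invocation (and after embedded SQL statements as well) so that all failures are reported consistently.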
Evaluating SQLSTATEs

If an error or warning condition occurs during the execution of an API function call, a standardized error-code value is also placed in the SQLCA data structure variable. Like the SQLCA return-code value, the SQLSTATE information is stored in the form of a coded number. You can use the GET SQLSTATE API to translate this coded number into a more meaningful error-message description.
Creating Executable Applications

Once you have written your API application source-code file(s), you must convert them into an executable DB2 database application program. The steps used in this process are:

1. Compile the source-code files to create object modules.
2. Link the object modules to create an executable program.
After you have written an API source-code file, you must compile it with a high-level language compiler (such as VisualAge C++, Visual C++, or Borland C/C++). The high-level language compiler converts the source-code file into an object module that is then used by the linker to create the executable program. The linker combines specified object modules, high-level language libraries, and DB2 libraries to produce an executable application (provided no errors or unresolved external references occur). For most operating systems, the executable application can be either an executable load module (.EXE), a shared library, or a dynamic link library (.DLL). Figure 4-2 illustrates the process of converting an API application source-code file to an executable application.
Running, Testing, and Debugging API Applications

Once your application program has been successfully compiled and linked, you can run the program and determine whether or not it performs as expected. You should be able to run your DB2 API application program as you would any other application program on your particular operating system. If problems occur, you can do the following to help test and debug your code:

- When compiling and linking, specify the proper compiler and linker options so the executable program can be used with a symbolic debugger (usually provided with the high-level language compiler).
- Make full use of the GET ERROR MESSAGE and GET SQLSTATE API function calls. Display all error messages and return codes generated whenever an API function call fails.
Part 2: Application Development Fundamentals SOURCE F'ILE
SOURCE F'ILES
WITHOUT
CONTAINING
A P I FUNCTION CALLS
API FUNCTION CALLS
a
a
A P I SPECIFIC IEADER FILES I
l \
l
HIGH-LEVEL LANGUAGE COMPILER
OBJECT
FILES
OBJECT LIBRARIES
F'ILES
HIGH-LEVEL LANGUAGE LINKER
l EXECUTABLE PROGRAM
CALL
DB2 DATABASE MANAGER
Figure 4 2 Processfor convertingA P I sourcecode files into executable DB2 application programs.
NOTE: Because some APIs require a database connection before they can be executed, consider creating a temporary database to use while testing your API application, to avoid inadvertently corrupting a production database.
SUMMARY
"he goal of this chapter was to provide you with an overview of how application programming interface (-1) application source-code filesare structured and to describe the processes involvedin converting A P I application source-code filesinto executable database application programs. You should knowthat A P I functions are divided according to their functionality,into the following groups: M Database Manager ControlAPIs M Database Manager ConfigurationAPIs
M Database Control APIs
W Database ConfigurationAPIs M Database Directory ManagementAPIs W ClientBerver Directory Management APIs M Node Directory Management APIs W Network Support APIs M BackupmecoveryAPIs M Operational Utility APIs M Database Monitoring APIs W Data Utility APIs General Application ProgrammingAPIs M Application Preparation APIs W Remote Server ConnectionAPIs Table Space ManagementAPIs M Transaction APIs M MiscellaneousAPIs You should alsobe familiar with the API routine naming conventions usedby IBM and thespecial data structuresthat many of the DB2 APIs require. You should know how to detect errors by evaluating the A P I return codes and how to translate SQLCAcoded values into useful error messages with the QEI ERROR MESSAQE A P I function call. You should also befamiliar with the two-step processthat is used t o convert DB2 A P I source-code filesto executable database applications: M Compiling M Linking
Finally, you should know how to run, test, and debug your DB2 API database application once it has been compiled and linked.
Program Preparation and General Programming APIs

Before most DB2 applications can be compiled and executed, they must be prepared by the SQL precompiler, and the packages produced must be bound to one or more databases. This chapter is designed to introduce you to the set of DB2 API functions that are used to prepare applications and bind packages to DB2 databases, and to the set of DB2 API functions that perform general tasks in an application program. The first part of this chapter provides a general discussion about embedded SQL application preparation. Then, the functions that are used to handle exceptions, signals, and interrupts are described. Next, information about accounting strings and the function used to set them is provided. This is followed by a brief discussion of the pointer manipulation and memory copy functions.
Embedded SQL Application Preparation

As discussed in Chapter 4, embedded SQL source-code files must always be precompiled. The precompile process converts a source-code file with embedded SQL statements into a high-level language source-code file that is made up entirely of high-level language statements. This process is important, because the high-level language compiler cannot interpret SQL statements, so it cannot create the appropriate object code files that are used by the linker to produce an executable program. The precompile process also creates a corresponding package that contains, among other things, one or more data access plans. Data access plans contain information about how the SQL statements in the source-code file are to be processed by DB2 at application run time.

Normally, the SQL precompiler is invoked from either the DB2 command-line processor or from a batch or make utility file. There may be times, however, when the SQL precompiler needs to be invoked from an application program (for example, when an embedded SQL source-code file is provided for a portable application and the application's installation program needs to precompile it in order to produce the corresponding execution package). In these cases, you can use the PRECOMPILE PROGRAM function to invoke the DB2 SQL precompiler.

When an embedded SQL source-code file is precompiled, the corresponding execution package produced can either be stored in a database immediately or written to an external bind file and bound to the database later (the process of storing this package in the appropriate database is known as binding). By default, packages are automatically bound to the database used for precompiling during the precompile process. By specifying the appropriate precompiler options, however, you can elect to store this package in a separate file and perform the binding process at a later time. Just as the SQL precompiler is normally invoked from either the DB2 command-line processor or from a batch or make utility file, the DB2 bind utility is normally invoked in the same manner. There may be times, however, when you need to invoke the bind utility from an application program (for example, when a bind file is provided for a portable application and the application's installation program needs to bind it to the database that the application will run against). You can use the BIND function to invoke the DB2 bind utility for cases such as these.

When producing execution packages for embedded SQL applications, the SQL precompiler determines the best access plan to use by evaluating the data objects available at package creation time. As more data objects, such as indexes, are added to the database, older packages need to be rebound so they can take advantage of new data objects (and possibly produce more efficient data access plans). If the bind files associated with an application are available, you can rebind older packages by re-invoking the DB2 bind utility. If the bind files are no longer available, you can still rebind existing packages by using the REBIND function. When the REBIND function is invoked, the specified package is recreated from the SQL statements that were stored in the SYSCAT.STATEMENTS system catalog table when the package was first created.
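As a sketch of invoking the bind utility from a program, the BIND API (sqlabndx() in C) takes a bind file name, a message file name, an options pointer, and an SQLCA. The sketch below assumes a null options pointer is acceptable and that the declarations are found in sql.h; the file names are hypothetical, and the exact parameter structure should be confirmed against the API reference.

    #include <stdio.h>
    #include <sqlca.h>
    #include <sql.h>

    int main(void)
    {
        struct sqlca sqlca;

        /* A connection to the target database must exist before the     */
        /* bind is performed (connection code omitted in this sketch).   */

        /* BIND API: create a package from the bind file and store it    */
        /* in the current database; diagnostics go to the message file.  */
        sqlabndx("checkid.bnd", "checkid.msg", NULL, &sqlca);

        if (sqlca.sqlcode != 0)
            printf("Bind failed: SQLCODE = %ld\n", (long)sqlca.sqlcode);
        return 0;
    }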
Exception, Signal, and Interrupt Handlers

A DB2 database application program must be able to shut down gracefully whenever an exception, signal, or interrupt occurs. Typically, this is done through an exception, signal, or interrupt handler routine. The INSTALL SIGNAL HANDLER function can be used to install a default exception, signal, or interrupt handler routine in all DB2 applications. If this function is called before any other API functions or SQL statements are executed, any DB2 operations that are currently in progress will be ended gracefully whenever an exception, signal, or interrupt occurs (normally, a ROLLBACK SQL statement is executed in order to avoid the risk of inconsistent data). The default exception, signal, or interrupt handler is adequate for most simple, single-task applications. However, if your application program is a multithreaded or multiprocess application, you might want to provide a customized exception, signal, or interrupt handler. If this is the case, the INTERRUPT function should be called from each custom exception, signal, or interrupt handler routine to ensure that all DB2 operations currently in progress are ended gracefully. This API function notifies DB2 that a termination has been requested. DB2 then examines which, if any, database operation is in progress and takes appropriate action to cleanly terminate that operation. Some database operations, such as the COMMIT and ROLLBACK SQL statements, cannot be terminated and are allowed to complete, because their completion is necessary to maintain consistent data.
NOTE: SQL statements other than COMMIT and ROLLBACK should never be placed in
customized exception, signal, and interrupt handler routines.
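As a rough sketch of what such a customized handler might look like on a UNIX-style platform, the fragment below installs its own SIGINT handler with the standard signal() routine and calls the INTERRUPT function from it. The sqleintr entry point (assumed here to take no arguments) is the C name for the INTERRUPT function and is an assumption based on the reference section later in this chapter; the handler name and cleanup step are illustrative only, and this pattern would not apply on Windows, where the default handler cannot be used.

    #include <signal.h>
    #include <sqlenv.h>
    #include <sql.h>

    extern "C" void CustomInterruptHandler(int signum)
    {
        signal(signum, CustomInterruptHandler);   // Re-install the handler
        sqleintr();                                // Ask DB2 to end the current request cleanly
        // ... application-specific cleanup (no SQL other than COMMIT/ROLLBACK) ...
    }

    int main(void)
    {
        signal(SIGINT, CustomInterruptHandler);    // Install the custom handler
        // ... application logic ...
        return 0;
    }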
Pointer Manipulation and Memory Copy Functions

Because many DB2 API functions use pointers for either input or output parameters, some type of pointer manipulation is often required when API functions are included in application programs. Some host languages, such as C and C++, support pointer manipulation and provide memory copy functions. Other host languages, such as FORTRAN and COBOL, do not. The GET ADDRESS, COPY MEMORY, and DEREFERENCE ADDRESS functions are designed to provide pointer manipulation and memory copy capabilities for applications that are written in host languages that do not inherently provide this functionality.
Specifying Connection Accounting Strings

DB2 database applications designed to run in a distributed environment might need to connect to and retrieve data from a Distributed Relational Database Architecture (DRDA) application server (such as DB2 for OS/390). DRDA servers often use a process
known as chargeback accounting to charge customers for their use of system resources. By using the SET ACCOUNTING STRING function, applications running on a DB2 Universal Database workstation (using DB2 Connect) can pass chargeback accounting information directly to a DRDA server when a connection is established. Accounting strings typically contain 56 bytes of system-generated data and up to 199 bytes of user-supplied data (suffix). Table 5-1 shows the fields and format of a typical accounting string. The following is an example accounting string:

x'3C' SQL05020 OS/2 CH14EX6A ETPDDGZ x'05' DEPT1
The SET ACCOUNTING STRING function combines system-generated data with a suffix, which is provided as one of its input parameters, to produce the accounting string that is to be sent to the specified server at the next connect request. Therefore, an application should call this function before attempting to connect to a DRDA application server. An application can also call this function any time it needs to change the accounting string (for example, to send a different string when a connection is made to a different DRDA database). If the SET ACCOUNTING STRING function is not called before a connection to a DRDA application server is made, the value stored in the DB2ACCOUNT environment variable will be used as the default. If no value exists for the DB2ACCOUNT environment variable, the value of the dft_account_str DB2 Database Manager configuration parameter will be used.

Table 5-1 Accounting String Fields
acct_str_len       1     A hexadecimal value representing the overall length of the accounting string minus 1. For example, this value would be 0x3C for a string containing 61 characters.

client_prdid       8     The product ID of the client's DB2 Client Application Enabler software. For example, the product ID for the DB2 Universal Database, Version 5.2 Client Application Enabler is "SQL05020."

client_platform    18    The platform (or operating system) on which the client application runs; for example, "OS/2," "AIX," "DOS," or "Windows."

client_appl_name   20    The first 20 characters of the application name; for example, "CH14EX6A."

client_authid      8     The authorization ID used to precompile and bind the application; for example, "ETPDDGZ."

suffix_len         1     A hexadecimal value representing the overall length of the user-supplied suffix string. This field should be set to 0x00 if no user-supplied suffix string is provided.

suffix             199   The user-supplied suffix string. This string can be a value specified by an application, the value of the DB2ACCOUNT environment variable, the value of the dft_account_str DB2 Database Manager configuration parameter, or a null string.

Adapted from IBM's DB2 Connect User's Guide, Table 6, p. 74.
Evaluating SQLCA Return Codes and SQLSTATE Values

Most DB2 API functions require a pointer to an SQL Communications Area (SQLCA) data structure variable as an output parameter. When an API function or an embedded SQL statement completes execution, this variable contains error, warning, or status information. To save space, this information is stored in the form of a coded number. If the GET ERROR MESSAGE function is executed and the SQLCA data structure variable returned from another API function is provided as input, the coded number will be translated into more meaningful error message text. Standardized error code values, or SQLSTATEs, are also stored in the SQLCA data structure variable. Like the SQLCA return code value, the SQLSTATE information is stored in the form of a coded number. You can use the GET SQLSTATE MESSAGE function to translate this coded number into more meaningful error message text. By including either (or both) of these API functions in your DB2 database applications, you can return meaningful error and warning information to the end user whenever error and/or warning conditions occur.

The Program Preparation and General Application Programming Functions

Table 5-2 lists the DB2 API functions that are used to prepare applications, bind packages to DB2 databases, and perform general tasks in an application program.
Table 5-2 Program Preparation and General Programming APIs

PRECOMPILE PROGRAM        Preprocesses a source-code file that contains embedded SQL statements and generates a corresponding package that is stored in either the database or an external file.

BIND                      Prepares the SQL statements stored in a bind file and generates a corresponding package that is stored in the database.

REBIND                    Recreates a package that is already stored in a database without using an external bind file.

GET INSTANCE              Retrieves the current value of the DB2INSTANCE environment variable.

INSTALL SIGNAL HANDLER    Installs the default interrupt signal handler in a DB2 database application program.

INTERRUPT                 Safely stops execution of the current database request.

GET ADDRESS               Stores the address of one variable in another variable.

COPY MEMORY               Copies data from one memory storage area to another.

DEREFERENCE ADDRESS       Copies data from a buffer defined by a pointer to a variable that is directly accessible by an application.

SET ACCOUNTING STRING     Specifies accounting information that is to be sent to Distributed Relational Database Architecture (DRDA) servers along with connect requests.

GET ERROR MESSAGE         Retrieves the message text associated with an SQL Communications Area error code from a special DB2 error message file.

GET SQLSTATE MESSAGE      Retrieves the message text associated with a SQLSTATE value from a special DB2 error message file.

GET AUTHORIZATIONS        Retrieves the authorizations that have been granted to the current user.
Each of these functions is described in detail in the remainder of this chapter.
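Before turning to the individual reference sections, the fragment below sketches the SQLCA-based error reporting pattern described above. It is not taken from the book: the sqlaintp (GET ERROR MESSAGE) and sqlogstt (GET SQLSTATE MESSAGE) entry points, their buffer/buffer-size/line-width/source argument order, and the ReportError helper are assumptions made for illustration.

    #include <stdio.h>
    #include <sql.h>
    #include <sqlca.h>

    // Translate SQLCA information into readable text after a failed call.
    void ReportError(struct sqlca *pSqlca, char *sqlState)
    {
        char message[1024];

        // Translate the SQLCA return code into message text
        if (sqlaintp(message, 1024, 70, pSqlca) > 0)
            printf("SQLCODE %ld: %s\n", (long) pSqlca->sqlcode, message);

        // Translate the SQLSTATE value into message text
        if (sqlogstt(message, 1024, 70, sqlState) > 0)
            printf("SQLSTATE %s: %s\n", sqlState, message);
    }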
PRECOMPILE PROGRAM

Purpose

The PRECOMPILE PROGRAM function is used to precompile (preprocess) an application program source-code file that contains embedded SQL statements.

Syntax

SQL_API_RC SQL_API_FN
  sqlaprep (char          *ProgramName,
            char          *MsgFileName,
            struct sqlopt *PrepOptions,
            struct sqlca  *SQLCA);

Parameters

ProgramName     A pointer to a location in memory where the name of the source-code file to be precompiled is stored.

MsgFileName     A pointer to a location in memory where the name of the file or device that all error, warning, and informational messages generated are to be written to is stored.

PrepOptions     A pointer to a sqlopt structure that contains the precompiler options (if any) that should be used when precompiling the source-code file.

SQLCA           A pointer to a location in memory where an SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.
Includes
#include <sql.h>
Description
The PRECOMPILE PROGRAM function is used to precompile (preprocess) an application program source-code file that contains embedded SQL statements. When a source-code file containing embedded SQL statements is precompiled, a modified source file containing host language function calls for each SQL statement used is produced, and by default, a package for the SQL statements coded in the file is created and bound to the database to which a connection has been established. The name of the package to be created is, by default, the same as the first eight characters of the source-code file name (minus the file extension and converted to uppercase) from which the package was generated. However, you can overwrite bind file names and package names by using the SQL_BIND_OPT and the SQL_PKG_OPT options when this function is called. A special structure (sqlopt) is used to pass different precompile options to the SQL precompiler when this function is called. The sqlopt structure is defined in sql.h as follows:
struct sqlopt
{
    struct sqloptheader header;      /* Precompile/Bind options header      */
    struct sqloptions   option[1];   /* An array of Precompile/Bind options */
};
This structure is composed of two or more additional structures: one sqloptheader structure and one or more sqloptions structures. The sqloptheader structure is defined in sql.h as follows:

struct sqloptheader
{
    unsigned long allocated;   /* Number of sqloptions structures that have   */
                               /* been allocated (the number of elements in   */
                               /* the array specified in the option parameter */
                               /* of the sqlopt structure)                    */
    unsigned long used;        /* The actual number of sqloptions structures  */
                               /* used (the actual number of type and val     */
                               /* option pairs supplied)                      */
};
The sqloptions structure is defined in sql.h as follows:

struct sqloptions
{
    unsigned long type;   /* Precompile/Bind option type  */
    unsigned long val;    /* Precompile/Bind option value */
};
Table 5-3 lists the values that can be used for the type and val fields of the sqloptions structure, as well as a description of what each type/val option causes the SQL precompiler (or the DB2 Bind utility) to do. The PRECOMPILE PROGRAM function executes under the current transaction (which was initiated by a connection to a database), and upon completion, it automatically issues either a COMMIT or a ROLLBACK SQL statement to terminate the transaction.
Comments
The MsgFileName parameter can contain the path and name of an operating system file or a standard device (such as standard error or standard out). If the MsgFileName parameter contains the path and name of a file that already exists, the existing file will be overwritten when this function is executed. If this parameter contains the path and name of a file that does not exist, a new file will be created.
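The fragment below is a rough sketch of how the sqlopt structures described above might be populated to request a bind file during precompilation. It is not one of the book's examples: the file names are placeholders, the layout of the sqlchar structure (a length field followed by character data) is an assumption, and the cast of the sqlchar pointer into the val field reflects how string-valued options are assumed to be passed.

    #include <string.h>
    #include <stdlib.h>
    #include <sql.h>
    #include <sqlca.h>

    long PrecompileWithBindFile(void)
    {
        struct sqlca sqlca;
        const char bindFile[] = "example.bnd";

        // Allocate an sqlchar large enough to hold the bind file name
        struct sqlchar *pBindName =
            (struct sqlchar *) malloc(sizeof(struct sqlchar) + strlen(bindFile));
        pBindName->length = (short) strlen(bindFile);
        memcpy(pBindName->data, bindFile, strlen(bindFile));

        // Allocate an sqlopt header plus one sqloptions element
        struct sqlopt *pOptions = (struct sqlopt *) malloc(sizeof(struct sqlopt));
        pOptions->header.allocated = 1;
        pOptions->header.used = 1;
        pOptions->option[0].type = SQL_BIND_OPT;
        pOptions->option[0].val  = (unsigned long) pBindName;

        // Precompile the source file using the options supplied
        sqlaprep("example.sqc", "prep.msg", pOptions, &sqlca);

        free(pBindName);
        free(pOptions);
        return sqlca.sqlcode;
    }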
Table 5-3 Precompile/Bind Options and Values
No
Specifies that thepackage doesnot already exist and is to be created.
SQL-ACTION-REPLACE
No
Specifies that thepackage exists and is to be replaced.This value is the default valuefor the SOL-ACTION-OPT option.
NULL
Yes
Indicates that no bind file is to be generated by the precompiler. This option can only be used withthe SQL precompiler.
sqlchur structure value
Yes
Indicates that a bind file with the specified name is to be generated by the precompiler. This option can only be used with the SQL precompiler.
Yes
Specifies that row blocking should be performed for read-only cursors, cursors not specified as FOR UPDATE OF, and cursors for which no static DELETE WHERE CURRENT OF statements are executed. In this case, ambiguous cursors are treated as read-only cursors.
SQL-BL-NO
Yes
Specifies that row blockingis not to be performed for cursors. In this case, ambiguouscursors are treated asupdateable cursors.
SQL-BL-UNAMBIG
Yes
Specifies that row blocking should be performed for read-only cursors, cursors not specified as FOR UPDATE OF, cursors for which no static DELETE WHERE CURRENT OF statements are executed, and cursors that do not have dynamic statements associated with them. In this case, ambiguous cursors are treated as updateable cursors.
SQL-CCSIDG-OPT
unsigned long integer value
No
Specifies the coded character set identifier that is to be used for double-byte characters in character column definitions specified in CREATE TABLE and ALTER TABLE SQL Statements.
SQL-CCSIDM-OPT
unsigned long integer value
No
Specifies the coded character set identifier that isto be used for
SQL-ACTION-ADD SQL-ACTION-OPT
SQL-BIND-OPT
SQL-BL-ALL SQL-BLOCK-OPT
mixed-byte characters in character column definitions specified in CREATE TABLE and ALTER TABLE SQL statements.
SQL-CCSIDS-OPT
unsigned long integer value
No
Specifies the coded character set identifier that is to be used for single-byte characters in character column definitions specified in CREATE TABLE and ALTER TABLE SQL statements.
SQL-CHARSUB-BIT SQL-CHARSUB-OPT
SQL-CHARSUB-DEFAULT
No
Specifies that theFOR BIT DATA SQL character subtypeis to be used in all new character column definitions specified in CREATE TABLE and ALTER TABLE SQL statements (unless otherwise explicitly specified).
No
Specifies that thetarget systemdefined default character subtype is to be used in all new character column definitions specified in CREATE TABLEmdALTER TABLE
SQL statements (unless otherwise explicitly specified).This value i s the default valuefor the SQL-CHARSUB-OPT option.
No
Specifies that theFOR MIXED DATA SQL character subtypeis to be usedin all new character col-
umn definitions specified in TABLE and ALTER TABLE SQL statements (unless otherwise explicitly specified). SQL-CHARSUB-SBCS
SQL-CNCTLREQD-NO SQL-CNULREQD-OPT
SQL-CNULREQD-YJW
No
Specifies that the FOR SBCS DATA SQL character subtypeis to be used in all new character column definitions specified in CREATE TABLE and ALTER TABLE SQL statements (unless otherwise explicitly specified).
No
Specifies that c/c++ NULL terminated strings are not NULL terminated if truncation occurs.
No
Specifies that c/c++ NULL terminated strings are padded with blanks and always includea
1 I
" "
,
'
86 I '
Part 3 Application Programming Interface Functions
"
NULL-terminated character, even when if truncation occurs. SQL-COLLECTION-OPT
sqlchar structure value
Yes
Specifies an eight-character collection identifier that is to be assigned to the package being created. If no collection identifier is specified, the authorization ID of the user executing the PRECOMPILE PROGRAM or BIND function will be used.
SQLCONNECT-2
Yes
Specifies that CONNECT Sf& statements are to be processed as Type 2 connects.
Yes
Specifies that CONNECT SQL statements are to be processedas Type 1 Connects.
Yes
Specifies that CONNECT SQL statements are to be processedas Type 2 Connects.
Yes
Specifies that a date and timeformat that is associated withthe country code of the database is to be used for date and timevalues.
SQL DATETIME-EUR
Yes
Specifies that the IBM standard for European date and time format is to be used for date and time values.
SQL-DATETIME-IS0
Yes
Specifies that the International Standards Organization (ISO) date and time format is to be used for date and time values.
SQL-DATETIME-JIS
Yes
Specifies that the Japanese Industrial Standard date and time format is to be used for date and time values.
SQL-DATETIME-LOC
Yes
Specifies that thelocaldate and time formatthat is associated with the country code of the database is to be used fordate andtime values.
SQL-DATETIMB-USA
Yes
Specifies that theIBM standard for the United Statesof America date and time format is to be used for date and timevalues.
SQL-CONNECT-1 SQL-CONNECT-OPT
SQL-CONNECT-2
SQL-DATETIME-OPT DATETIME-DEF SQL
-15
:
Chapter 5: Program Preparation and General ProgrammingAPIs
" "
87 1 .. 1
No
Specifies that 15-digit precisionis to be used in decimal arithmetic operations.
No
Specifies that 31-digit precision is to be used in decimal arithmetic operations.
No
Specifies that a comma is to be used as a decimal point indicator in decimal and floating point literals.
SQL-DECDEL-PERIOD
No
Specifies that a period is to be used as a decimal pointindicator in decimal andfloating point literals.
SQL-DEFERRED-PREPARE-NO
Yes
Specifies that PREPARE SQL statements are to be executedat the time theyare issued.
SQL-DEFERRED-PREPARE-YES
Yes
Specifies that theexecution of PREPARE SQL statements is to be deferred untila corresponding OPEN, DESCRIBE, or EXECUTE statement i s issued.
SQL-DEC-OPT
SQL-DEC-31
SQL-DECDEL-COBIWL SQL-DECDEL-OPT
SQL-DEFERRED-PREPARE-OPT
i.
SQL-DEFERRED-PREPARE~L Yes
Specifies that all PREPARE SQL Statements (otherthan PREPARE INTO statements which contain parameter markers) areto be executed at thetime theyare issued.
If a PREPARE SQL statement uses an INTO clause to return information to an SQL DescriptorArea (SQLDA) data structurevariable, the application must not reference the content of the SQLDA variable until the OPEN, DESCRIBE,or EXECUTE SQL statement has been executed. SQL-DEGREE-ANY SQL-DEGREE-OPT
unsigned long integer between 2 and 32767
Yes
Specifies that queries are to be executed using any degree of I/O parallel processing.
Yes
Specifies the degree of parallel I/O processing that is to be used when executing queries.
Part 3: Application Programming Interface Functions Table 5-3 Precornpile/Bind Options and Values (Continued) ";";S
f
i
z
x
,, .;mm < ,'I../
m
d
Option
.. ~
~
. *_
~
~
Value
w
a
-
-
~
CuiTently supported Description
SQL-DEGREE-1
Yes
~
D
?":
Specifies that I/O parallel processing cannot be used to execute SQL queries. This value is the default value for the SQL-DEGREE-OPT
option. Yes
SpecSes thatall database connections are to be disconnected when a COMMIT SQL statement is executed.
SQL-DISCONNECT-EXPL
Yes
Specifies that only database connections that have beenexplicitly marked for release by the RELEASE SQL statement areto be disconnected when a COMMIT SQL statement is executed.
SQL-DISCONNECT-COND
Yes
Specifies that only database connections that have beenexplicitly marked for release by the RELEASE SQL statement or that do not have any cursom that were defined as WITH HOLD open are t0 be disconnected when aCOMMIT SQL statement is executed.
SQL-DYNAMICRULES-BIND
No
Specifies that thepackage owner is to be used as the authorization identifier when executingdynamic SQL statements.
SQL-DYNAMICRULES-DEFINE
NO
Specifies that thedefiner of a userdefined function or stored procedure is to be used as theauthorization identifier when executing dynamic SQL Statementsin the user-defined function or stored procedure.
SQL-DYNAMICRULES-INVOKE
NO
Specifies that the invoker of a user-defined function or stored procedure is to be used as the authorization identifier when executing dynamicSQL statements in theuser-defined functionor stored procedure.
SQL-DYNAMICRULES-NJN
No
Specifies that theauthorization ID of the user executingthe package is to be used as the authorization identifier when executing dynamic SQL statements.
SQL-EXPLAINSL
Yes
Specifies that Explain tablesare to
SQL-DISCONNECT-OPT SQL-DISCONNECT_ADTO
SQL-EXPLAIN-OPT
Chapter 5: Program Preparation and General ProgrammingAPIs Table 5-3 Precompile/Bind Options and Values (Continued) Preco&pilmind Option
Value
c;urrently Supported Deecription be populated with information about the access plans chosen for each eligible static SQL statement a t precompile time and with each dynamic SQLstatement at application run time.
SQL-EXPLSNAP-OPT
No
Specifies that Explain tablesare to be populated with information about the access plans chosen for each SQL statement in the package.
SQL-EXPLAIN-NO
No
Specifies that Explain information about the access plans chosen for each SQLstatement in the package is not to be stored in the Explain tables.
SQL-EXPLSNAP-NO
No
Specifies that anExplain snapshot will not be written to the Explain tables for each eligible static SQL statement in the package.
SQL-EXPLSNAP-YES
No
Specifies that an Explain snapshot is to be written to the Explain tables for each eligible static SQL statement in the package.
SQL-EXPLSNAP-ALL
No
Specifies that anExplain snapshot is to be written to the Explain tables for each eligiblestatic SQL statement in the package, and that Explain snapshot information is also to be gathered for eligible dynamic SQLStatements at application runtime-even if the CURRENT EXPLAIN
SNAPSHOT
register is set to NO. SQL-FLAQ-OPT
SQL-SQL92E-SYNTAX
Yes
Specifies that SQL statements are to be checked against ISO/ANSI SQL92 standards, andall deviations are to be reported.
Yes
Specifies that SQL statements are to be checked against MVS DB2 Version 2.3 SQLsyntax, andall deviations are to be reported.
Yes
Specifies that all SQL statements are to be checked against M V S DB2 Version3.1 SQL syntax, and all deviations are to be reported.
I
90
Table 5-3
Part 3: Application Programming Interface Functions Precompile/BindOptions and Values(Continued) -.h._,,.. ., .... !... , ? \ z %.c. Curwntly
v-. P r m
.i:.2
i I
,
,
,
.I
Option
v
y
Supported Description SQL-MVSDBSV41-SYNTAX
Yes
Specifies that SQL statements are to be checked against MVS DB2 Version 4.1 SQL syntax, and all deviations are to be reported.
SQL-FUNCTION-PATH
sqlchar structure value
No
Specifies the function path to be used when resolving user-defined distinct datatypes and functions referenced in staticSQL statements.
SQL-GENERIC-OPT
sqlchar structure value
Yes
Provides a means of passing new bind options (as a single string)to target DRDA databases.
SQL-QRANT-GROUP-OPT
sqlchar structure value
Yes
Specifies that theEXECUTE and BIND authorizations are to be granted to a specific user ID. This option can only be used withthe DB2 Bind utility.
sqlchar structure value
Yes
Specifies that theEXECUTE and BIND authorizations are to be granted to a specified user ID or group ID (the groupID specified can be PUBLIC). This option can be usedonly with the DB2 Bind utility.
SQL-QRANT-USELOPT
sqlchar structure value
Yes
Specifies that the EXECUTE and BIND authorizations are to be granted to a specific user ID. This option can only be used with the DB2 Bind utility.
SQL-INSERT-OPT
SQL-INSERT-BUF
Yes
Specifies that insert operations performed by an application should be buffered.
SQL-INSERT-DEF
Yes
Specifies that insert operations performed by an application should not be buffered.
SQL-RE?D-STAB
Yes
Specifies that theRead Stability isolation level should be used to isolate the effects of other executing applicationsh m the application usingthis package.
SQL-NO-COMMIT
No
Specifies that commitment control is not to be used by this package
SQL-CURSOR-STAB
Yes
Specifies that the Cursor Stability isolation level should be used to
I
p”1 l
Chapter 5: Program Preparation and General Programming M I S
~
j ’.
li
91.“_I 1 . ”.
Table 5-3 Precornpile/Bind Options and Values (Continued)
isolate the effects of other executing applicationsfrom the application using thispackage. SQL-REP”
SQL-LEVEL-OPT
value structure sqlchar
SQL-LINE-MACROS
SQL-OPTIMIZE SQL-OPTIM-OPT
SQL-WNT-OPTIMIZE
SQL-OWNER-OPT
value structure sqlchar
Yes
Specifies that theRepeatable Read isolation level shouldbe used to isolate the effects of other executing applicationsfrom the application using thispackage.
Yes
Specifies that theUncommitted Read isolation level should be used to isolate the effects of other executing applicationsfrom the application usingthis package.
No
Specifies the level consistency token that a module stored in a package is to use. “his token verifies that therequesting application and the database package are synchronized. This option can only be used withthe SQL precompiler.
Yes
Specifies that thegeneration of #line macros in themodified C/C++ file produced are to be s u p pressed.
Yes
Specifies that #line macros are to be embeddedin the modified WC++ file produced.
Yes
Specifies that theprecompiler is to optimize the initialization of internal SQLDA variables that areused when host variablesare referenced in embedded SQLstatements.
Yes
Specifies that theprecompiler is not to optimize the initialization of internal SQLDA variables that are used when host variablesare referenced in embedded SQL statements.
No
Specifies an eight-character authorization ID that identifies the package owner.By default, the authorization ID of the userperforming the precompile or bind process is used to identify the package owner.
Table 5-3 PrecompileIBind Options and Values (Continued) 4.
l\ U L L
Yes
Specifies that a package is not to be created. This option can only be used with the SQL precompiler.
sqlchar structure value
Yes
Specifies the name of the package that is to be created. If a package name is not specified, the package name is the uppercase nameof the source-code file being precompiled (truncated to eight characters and minus the extension)."his option can only be used withthe SQL precompiler.
SQL-PREP-OUTPUT-OPT
sqlchar structure value
Yes
Specifies the name of the modified source-code filethat is produced by the precompiler.
SQL-QWMIIFIER-OPT
sqlchur structure value
No
Specifies an implicit qualifier name that is to be used for all unqualified table names, views, indexes, and aliases contained in the package. By default, the authorization ID of the user performing the precompile or bind process is used as the implicit qualifier.
SQL-QUERYOPT-OPT
SOL-QUERYOPT-0 SQL-QUERYOPT-1 SQL-QUERYOPT-2 SQL-QUERYOPT-3 SQL-QUERYOPT-5 SQL-QUERYOPT-7 SQL-QUERYOPT-9
No
Specifies the level of optimization to use whenprecompiling the static SQL statements contained in the package. The defaultoptimiaation level is SQL-QUERYOPT-5.
SQL-RELEASE-OPT
SQL-RELEASE-COMMIT
No
Specifies that resources acquired for dynamic SQLstatements are to be released at each COMMIT point. This value is the default valuefor the SQL-RELEASE-OPT option.
SQL-RELEASE-DEALLOCATE
No
Specifies that resources acquired for dynamic SQLStatements are to be released whenthe application terminates.
structure sqlchur value
No
Identifies a specific versionof a package to replace whenthe SQL-ACTION-REPLACE value is specified forthe SQL-ACTION-OPT option. "his option can only be used with theDB2 Bind utility.
SQL-REPLVER-OPT
Chapter 5: Program Preparation and General ProgrammingAPIs Table 5-3 Precompile/Bind Options and Values (Continued)
Description
Option SQL-RETAIN-NO SQL-RETAIN-OPT
SQL-RETAIN-YES
No
Specifies that EXEXUTE authorizations are not to be preserved when apackage is replaced. "his option canonly be used withthe DB2 Bind utility.
No
Specifies that EXECUTE authorizations are to be preserved when a package is replaced. This option can only be used with the DB2 Bind utility. This value is the default value for SQL-RETAIN-OPT.
SQL-RULES-OPT
SQL-S?iA-OPT
SQL-SQLERROR-OPT
SQL-RULES-DB2
Yes
Specifies that theCONNECT SQL statement canbe used to switch between established connections.
SQL-RULES-STD
Yes
Specifies that theCONNECT SQL statement canonly be usedto establish new connections.
SQL-SAA-NO
Yes
Specifies that themodified FORTRAN source-code file produced by the precompiler is inconsistent with the SAA definition.
SQL-BAA-YFS
Yes
Specifies that themodified FORTRAN source-code file produced by the precompiler is consistent with the SAA definition.
SQL-SQLERROR-CHECK
Yes
Specifies that thetarget system is to perform syntax and semantic checks on the SQL statements being boundto the database. If an error is encountered, apackage will not be created.
SQL-SQLERROR-CONTINUE
No
Specifies that thetarget system is to perform syntax and semantic checks on the SQL statements being bound to the database. If an error is encountered, a package will still be created.
No
Specifies that theprecompiler is to perform syntax and semantic checks on the SQL statements being precompiled. Ifan error is encountered, a package or a bind file will not be created. This value is thedefault valuefor the
1 94
Part 3 Application Programming Interface Functions
mently
BPlorted
Description SQL-SQLERROR-OPT option.
SQL-SQLonuW OPT
SQL-SQLWARN-NO
No
Specifies that warning messages will not be returned from the SQL precompiler.
SQL-SQLWARN-YFS
No
Specifies that warning messages will be returned from the SQL precompiler. This value is the default vdue for the SQL-SQLWARN-OPT option.
Yes
Specifies that theIBM DB2 rules apply for both the syntax and semantics of static anddynamio SQL statements coded in an application.
SQL-MIA-COMP
Yes
Specifies that theISO/ANSI SQL92 rules applyfor both the syntax and semanticsof static and dynamic SQL statements coded in an application (SQLCA variables are used for error reporting).
SQL-SQL92E-COMP
Yes
Specifies that theISO/ANSI SQL92 rules apply for both the syntax and semanticsof static and dynamic SQLstatements coded in an application (SQLCODE and SQLSTATE variables are used for error reporting).
No
Specifies that anapostrophe is to be used as the string delimiter within SQL statements.
No
Specifies that double quotation marks are to be used as the string delimiter withinSQL statements.
Yes
Specifies that a one-phase commit is to be used to commit the work done by each database in multiple database transactions.
SQL-SYNC-TWOPHASE
Yes
Specifies that a TransactionManager is to be used to perform two-phase commitsto commit the work done byeach databasein multiple database transactions
SQL-SYNC-NONE
Yes
Specifies that no Transaction Manager is to be used to perform two-
SQL-STANDARDS-OPT SQL-SAA-COMP
SQL-STRDEL-APOSTROPHE SQL-STRDEL-OPT
SQL-STRDEL-QUOTE
SQL-SYNCPOINT-OPT SQL-SYNC-ONEPHASE
,
Chapter 5: Program Preparation and General Programming
APIs
I
: I
1- 5 .
" "
1
Table 5-3 Precompile/Bind Options and Values (Continued) "~
Description
"he INSTALL SIQNAL HANDLER function is used to install thedefault interrupt signal handler that is provided withthe DB2 Software Development Kit (SDK). Whenthe default interrupt signal handler detects an interrupt signal (usudly Ctrl-C and/or Ctrl-Break), it resets thesignal and callsthe INTERRUPT function to gracefully stop the processing of the currentdatabase request. If an application has not installed an interrupt signal handler and an interrupt signal i s received, the application will beterminated. This function provides simple interrupt signal handling and should always be usedif an application does not have extensive interrupt handling requirements.
Comments
- If an application requires a more elaborate interrupt handling scheme, you can develop a signal handling routine that resets the signal, calls the INTERRUPT function, and then performs additional tasks.
- You must call this API function before the default interrupt signal handler will function properly.
- This function cannot be used in applications that run on the Windows or Windows NT operating systems.

Connection Requirements    This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization    No authorization is required to execute this function call.

See Also
INTERRUPT
Example
The following C++ program illustrates how to use the INSTALL SIGNAL HANDLER function to install the default interrupt signal handling routine in an embedded SQL application:
CH5EX5.SQC

/* NAME:    CH5EX5.SQC                                          */
/* PURPOSE: Illustrate How To Use The Following DB2 API         */
/*          Function In A C++ Program:                          */
/*                                                              */
/*          INSTALL SIGNAL HANDLER                              */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <stdio.h>
#include <sqlenv.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long SetSignalHandler();
};

// Define The SetSignalHandler() Member Function
long API_Class::SetSignalHandler()
{
    // Install DB2's Default Interrupt Signal Handler
    sqleisig(&sqlca);

    // If The Signal Handler Was Installed Successfully, Display
    // A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        cout
Description
The DEREFERENCE ADDRESS function is used to copy data from a buffer defined by a pointer to a local data storage variable in applications written in host languages that do not support pointer manipulation. This function should be used only in applications written in either COBOL or FORTRAN. Applications written in host languages that support pointer manipulation (such as C and C++) should use the language-specific pointer manipulation elements provided. You can use this function to obtain results from other API functions that return pointers to data storage areas that contain the data values retrieved, such as GET NEXT NODE DIRECTORY ENTRY.
Comments
- The host programming language variable that contains the number of bytes of data to be copied (the NumBytes parameter) must be four bytes long.
Connection Requirements    This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization    No authorization is required to execute this function call.

See Also
GET ADDRESS, COPY MEMORY
Example
Because this function should be used only in applications written in either COBOL or FORTRAN, an example program is not provided. Refer to the IBM DB2 Universal Database API Reference for examples of how this function is used in COBOL and FORTRAN applications.
SET ACCOUNTING STRING

Purpose
The SET ACCOUNTING STRING function is used to specify accounting information that is to be sent to a Distributed Relational Database Architecture (DRDA) server with the application's next connect request.
syntax
SQL_API_RC SQL_API_FN
  sqlesact (char         *AccountingString,
            struct sqlca *SQLCA);

Parameters

AccountingString    A pointer to a location in memory where the accounting information string is stored.

SQLCA               A pointer to a location in memory where a SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.
Includes
#include <sqlenv.h>
Description
The SET ACCOUNTING STRING function is used to specify accounting information that is to be sent to a DRDA server with the application's next connect request. An application should call this API function before attempting to connect to a DRDA database (DB2 for OS/390 or DB2 for OS/400). If an application contains multiple CONNECT SQL statements, you can use this function to change the accounting string before attempting to connect to each database. Refer to the beginning of this chapter for more information about the format and usage of accounting string information.
Comments
- Once accounting string information has been set, it remains in effect until the application terminates.
- The accounting string specified cannot exceed 199 bytes in length (this value is defined as SQL_ACCOUNT_STR_SZ in the file sqlenv.h); longer accounting strings will automatically be truncated.
- To ensure that the accounting string is converted correctly when transmitted to the DRDA server, only use the characters A to Z, 0 to 9, and underscore (_).
Connection Requirements    This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization    No authorization is required to execute this function call.

See Also
Refer to the IBM DB2 Connect User's Guide for more information about accounting strings and the DRDA servers that support them.
Example
The following C++ program illustrates how to use the SET ACCOUNTING STRING function to set accounting string information before a connection to a database is established:
/* /* /* /*
In A C++ Program:
SET ACCOWTINQ STRINQ
*/ */
/*
*/
/ / Include The Appropriate Header Files #include <windows.h, #include #include <sqlenv.h, #include <sql.h* / / Define The API-Clase Class clase API-Clam 1: / / Attributes public: struct sqlca sqlca;
1;
*/ */
/ / Operations public: long SetAccountO;
/ / Define The SetAccountO Member Function long API-Class::SetAccount() {
/ / Declare The LocalMemory Variable8 charAccountingString[199];
i
Chapter 5: Program Preparation and General ProgrammingAPIs / / Initialize The Accounting String strcpy(AccountingString, "DB2-EXAMPLES"); / / Set The Accounting String splesact(AccountingString, &sqlca); / / If The Accounting StringWas Set, Display A if (sqlca.sqlcode PP SQL-RC-OK)
Success
Measage
{
tout #include #include <eqlutil.h> #include <eql.h>
Header
Files
/ / Define The API-Claee Claee claee -1-Claes
c
/ / Attributes public: struct eqlca eplcat
/ / Operatione public: long QetAuthorizatione (); )l
/ / Define The QetAuthorizationeO Member Function long API_Claea:rQetAuthorizatione() (
/ / Declare The Local Memory Variables etruct eql-authorizatione AuthInfoi / / Initialize The Firet Element Of The AuthInfo Structura AuthInfo.eq1-authorizatione-len = eizeof(struct eql-authorizations)t / / Retrieve The Current Ueer’e Authorizatione eqluadau(MuthInfo, brsqlca); / / If The Ueer’e Authorization Information Wae Retrieved, Dieplay / / A Meemage Stating nether Or Not The Weer Rae The / / Authorization8 NeededTo Executa Most OfThe DB2 APIe
if (eqlca.sqlcde == SQL-RC-OK) (
if (AuthInfo.epl-eysa&auth AuthInfo.eq1-eyemaint-auth AuthInfo.eql-eyectrl-auth AuthInfo.eql-&a-auth
c
== == == ==
1 II 1 II 1 It 1)
Chapter 5: Program Preparationand General ProgrammingAPIs
1
c a t #include #include <sqlenv.h> #include <sqlca.h>
Header
Files
/ / Define The API-Class Class class API-Class
i / / Attributes public : struct sqlca eqlca; / / Operations public : long QetInstanceInfoO; ?l
/ / Define The QetInstanceInfoO Member Function long API~Class::QetInstanceInfo~~
i
/ / Declare The LocalMemory Variables int Separator = O d F ; int Length; charInfoStringt711; char "Bufferr char Results191 1711; / / Attach To The Default DB2 Database Manager sqleatin("DB2", "UserID", upassword'', &sqlca); / / If Attached, Retrieve Information / / And Paree It if (sqlca.eqlcode m= SQL-RC-OK)
About The Current Attachment
i strncpy(InfoString, sqlca.sqlerrmc, 70); InfoString[691 = 0 1 Length = strlen(1nfoString); Buffer .I strrchr(InfoString, Separator); InfoStringtLength strlen(8uffer)l P l \ O ' ~
-
Instance
Part 3: Application Programming Interface Functions for (int i = 8; i
-
c
B-
0;
i-)
Length etrlen(1nfoString); Buffer = etrrchr(InfoString, Separator); if (Buffer != NULL)
c 1 else 1
etrcpy(Reeulte[il, Buffer + l); InfoStringtLength strlen(Buffer)]
-
1
:m
: : :
: : : :
: :
Description
The ATTACH AND CHANGE PASSWORD function is used to specify the node at which instance-level API functions (for example, CREATE DATABASE and FORCE APPLICATION) are to be executed and to change the user password for the instance being attached. The node specified may be the current DB2 Database Manager instance (as defined by the value of the DB2INSTANCE environment variable), another DB2 Database Manager instance on the same workstation, or a DB2 Database Manager instance on a remote workstation. When called, this function establishes a logical instance attachment to the node specified and starts a physical communications connection to the node if one does not already exist.
I
m
Comments
- If a logical instance attachment to a node is established when this function is called, the sqlerrmc field of the sqlca data structure variable (referenced by the SQLCA parameter) will contain nine tokens separated by the hexadecimal value 0xFF (similar to the tokens returned when a CONNECT SQL statement is successful). These tokens will contain the following information:

Token 1    The country code of the application server
Token 2    The code page of the application server
Token 3    The authorization ID
Token 4    The node name, as specified with the ATTACH AND CHANGE PASSWORD function
Token 5    The identity and the platform type of the database server
Token 6    The agent ID of the agent that was started at the database server
Token 7    The agent index
Token 8    The node number of the server (always zero)
Token 9    The number of partitions on the server (if the server is a partitioned database server)
Chapter 6: DB2 Database Manager Control and Database Control APIs
:
i 1 7 1 1I
I
Password pair is authenticated at the node to which the application is attempting to attach. If a UserIDIPassword pair is not provided,the user ID associated with the currentapplication processwill be used forauthentication.
Connection This function establishes a DB2 Database Manager instance attachment (and RequiFements possibly a physical database connection) whenit is executed. Authorization No authorization is required to execute this function call.
see Also
ATTACH, DETACH
Example
Thefollowing C++ program illustrates how the ATTACH AND CHANQE PASSWORD function is used to attach to, change the password at, andobtain information abouta DB2 Database Manager instance:
*/
/* /*
P
/* /* /* APIe
DB2
/* /* /* /*
CH6EX7. NAME: PURPOSE: Illuetrate How To Use The Following DB2 API Functione In A C++ Program: ATTACH AND CHANQE PASSWORD OTHER
SHOWN:
DETACH
/*
*/
*/ */
*/ */ */
*/ */
*/ */
/* / / Include The Appropriate Header File8 #include <windowe.h> #include #include aq1env.b #include <eqlca.h* / / Define The API-Class Claee claee API-Claee {
/ / Attributes public: etruct eqlca aqlca; / / Operatione public : long OetInetanceInfoOl 11
/ / Define The QetInetanceInfo() Member Function long API-Claee::OetInetanceInfo() / / Declare The Local Memory Variables int Separator P OxFFl int Length; char InfoString[71]8 char*Buffer) char Reeults[9] [71] ;
Part 3: Application Programming Interface Functions / / Attach To The Default DB2 Database Manager Instance ~qleatcp("DB2~,nuserIDrn,"password", "newpass", brsqlca) ; / / If Attached, / / And Parse It
if (sqlca.sqlcode
Retrieve Information About The Current Attachment m-
SQL-RC-OK)
i strncpy(InfoString, sqlca.aqlerrmc, 70); InfoString[69] P 0; Length strlen(1nfoString); Buffer.= strrchr(InfoString, Separator); InfoString[Length strlen(Buffer)1 I ' \ O r ; for (int i = 8; i >I 0 ; i-)
-
I
-
{
Length = strlen(1nfoString); Buffer = strrchr(InfoString, Separator); if (Buffer l = NULL) {
strcpy(Results[il, Buffer + 1); InfoString[Length strlen(Buffer)]
-
P
'\O';
1 else strcpy(Results[il, Infostring);
1 / / Display The Parsed Information cout
Description
The SET CLIENT function specifies connection setting values for a DB2 application.
Before this function can be executed, an array of special structures (sqle_conn_setting structures) must be allocated. Refer to the QUERY CLIENT function for a detailed description of this structure and for more information about the connection options available. Once an array of sqle_conn_setting structures has been allocated, the type field of each structure in this array must be set to one of seven possible connection setting options, and the corresponding value field must be set to the value desired for the specified connection option. Once the SET CLIENT function has executed successfully, the connection settings are fixed, and the corresponding precompiler options used to precompile the application's source code modules are overridden. All connections made by subsequent transactions will use the new connection settings. You can change these new connection settings only by reexecuting the SET CLIENT function.
Comments
Connection Requirements
- If this function is unsuccessful, the connection setting values for an application will remain unchanged.
- The connection setting values for an application can only be changed when there are no active database connections associated with the application (i.e., before any connection is established or after a RELEASE ALL SQL statement, followed by a COMMIT SQL statement, is executed).

This function can only be called when no database connection exists.
Authorization    No authorization is required to execute this function call.

See Also
QUERY CLIENT
Example
See the example provided for the QUERY CLIENT function on page 175.
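That example is not reproduced in this excerpt. As a rough sketch of the usage described above, the fragment below builds an array of sqle_conn_setting structures and passes it to SET CLIENT before any connection exists. The sqlesetc entry point, the type/value field names, and the SQL_CONNECT_TYPE, SQL_RULES, and SQL_SYNCPOINT setting constants are assumptions made for illustration; the option values themselves (SQL_CONNECT_2, SQL_RULES_DB2, SQL_SYNC_ONEPHASE) follow the values listed in Table 5-3.

    #include <sqlenv.h>
    #include <sqlca.h>

    long SetConnectionSettings(void)
    {
        struct sqlca sqlca;
        struct sqle_conn_setting settings[3];

        settings[0].type  = SQL_CONNECT_TYPE;   // Type 2 (distributed unit of work) connections
        settings[0].value = SQL_CONNECT_2;
        settings[1].type  = SQL_RULES;          // CONNECT may switch between established connections
        settings[1].value = SQL_RULES_DB2;
        settings[2].type  = SQL_SYNCPOINT;      // One-phase commit
        settings[2].value = SQL_SYNC_ONEPHASE;

        // Fix the settings; must be called while no database connection exists
        sqlesetc(settings, 3, &sqlca);
        return sqlca.sqlcode;
    }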
QUERY CLIENT INFORMATION

Purpose
The QUERY CLIENT INFORMATION function is used to retrieve client information that is associated with a specific database connection.
syntax
SQL_API_RC SQL_API_FN
  sqleqryi (unsigned short           DBAliasLength,
            char                     *DBAlias,
            unsigned short           NumValues,
            struct sqle_client_info  *ClientInfo,
            struct sqlca             *SQLCA);
Parameters
DBAliasLength    The length of the database alias name stored in the DBAlias parameter.

DBAlias
A pointer to a location in memory where the alias of the database to retrieve client information fromis stored. This parameter can contain a NULL value.
NumValues
An integer value that specifies the number of client information values to retrieve. Thevalue for this parameter can be any number between 1 and 4.
ClientInfo
A pointer to a sqle_client_info structure or an array of sqle_client_info structures where this function is to store the client information retrieved.
SQLCA
A pointer to a location in memory where an SQL Communications Area (SQLCA)data structurevariable is stored. This structure returns either status information (if the function executed successfully) or error information (if the function failed)to the calling application.
Includes
#include <sqlenv.h>
Description
The QUERY CLIENT INFORMATION function is used to retrieve client information that is associated with a specific database connection. The information retrieved by this function is stored in a special structure (sqle_client_info) or an array of sqle_client_info structures that contain one or more client information options. The sqle_client_info structure is defined in sqlenv.h as follows:

struct sqle_client_info
{
    unsigned short type;     /* Client information type                      */
    unsigned short length;   /* The length of the client information value   */
    char *pValue;            /* A pointer to a location in memory that the   */
                             /* client information value will either be      */
                             /* written to (QUERY CLIENT INFORMATION) or     */
                             /* read from (SET CLIENT INFORMATION)           */
};
Table 6-3 lists each value that can be specified for the type field of the sqle_client_info structure, along with a description of each value that can be retrieved/specified for the corresponding pValue field of this structure. Before this function can be executed, an sqle_client_info client information structure or an array of sqle_client_info client information structures must be allocated, and the type field of each structure used must be set to one of the four possible client information values listed in Table 6-3. After this function has executed, the memory locations referenced by the pValue field of each client information structure used will contain the current value (setting) of the client information option specified.
Comments
If this function is called with the DBAZias parameter set to NULL,client information will be retrieved for all connections (i.e.,the values that were set when the SET CLIENT INFORMATIONfunction wasused to set client informationfor all connections).
182
Part 3: Application Programming Interface Functions
Table 6-3 Client Information Settings
Information
on
SQL-CLIENT-INFO-USERID
char[25511
Specifies the authorization (user ID) for the client. This ID is for identification purposes only; it is not used for authentication.
SQL-CLIENT-INFO-WRKSTNNAME
char[255]'
Specifies the workstation name for the client
SQL-CLIENT-INFO-APPLNAME
char[255I1
Specifies the application name for the client
SQL-CLIENT-INFO-ACCSTR
char[20011
Specifies the accounting string used by the client2
Adapted from IBMs DB2 Universal Database API Reference, Table 22, p. 386. 'Some servers may truncate this value. 2Thisinformation can also be set using the SET ACCOUNTING STRING function; however, that function does not allow the accounting string to be changed once a connection exists, whereas the SET CLIENT INFORMATION function does. Refer to Chapter 5 for information about the format of this string.
The client information returned by this function can be retrieved at any time. If this function is used to retrieve the value of a client information option that has not been set, the length field of the correspondingsqle-client-info structure will be set to 0, and an empty, NULL-terminated string will be returned as the value.
Connection This function can be called at any time; however, a connection to the DB2 database Requirements specified in the DBAZias parameter must exist if this function is used to obtain client information about a specific connection. Authorization No authorization is required to execute this function call. See Also
SET CLIENT INFORMATION, QUERY CLIENT, SET CLIENT
Example
The following C++ program illustrates how to use the QUERY CLIENT INFORMATION function to obtain the current value of a client's application name:
I
I
/ * NAME:
/* /* /* /* /* /* /*
CH6EXS.SQC PURPOSE: Illustrate How To Use The Following DB2 API Functions In A C++ Program: QUERY CLIENT INFORMATION SET CLIENT INFORMATION
/ / Include The Appropriate Header Files #include <windows.h> #include #include <sqlenv.h> #include <sql.h> / / Define The API-Class Class
*/
.*/ */ */ */
*/ */ */
Chapter 6: DB2 Database Manager Control and Database Control APIs
183
class API-Class {
/ / Attributes public : struct sqlca
sqlca;
/ / Operations public : long QueryClientInfoO; long SetClientInfoO;
1; / / Define The QueryClientInfo ( ) Member Function long API-C1ass::QueryClientInfoO {
/ / Declare The Local Memory Variables char DBAlias [ 8 3 ; struct sqle-client-info ClientInfo; char ApplicationName 1201 ;
/ / Initialize The Local Variables strcpy(DBAlia8, "SAMPLE") ; / / Initialize The ClientInfo.type = Client1nfo.length ClientInfo.pValue
Client Information Structure SQLE-CLIENT-INFO-APPLNAME; = 0; = ApplicationName;
/ / Obtain Information About The Current Client Connection sqleqryi(strlen(DBAlias), DBAlias, 1, &ClientInfo, hsqlca); / / If Information About The Current Client Connection Was / / Retrieved, Display It if (sqlca.sqlcode == SQL-RC-OK) cout #include #include <eqlutil.h>
*/ */ */ */ */ */ */
*/
Header
Files
1
I
i include 6sglca.h> / / Define The -1-Class class APIClass
Class
c / / Attributes public : struct sglca
sglca;
/ / Ogerat ions
ublic : long GetDBMgrInfo();
3; / / Define The GetDBM~rInfoO Member ~ n c t i o n long ~P~-Class::GetDB~grInfo()
c / / Declare The Local Memory Variabl~s
truct sglfugd char u n s i ~ e dAnt
DBManagerInf0[21; DBPath [ 216 1 ; iUwnDB = 0;
/ / Initialize An Array Of DB2 Database
ter Structures Info[O].token = SQLF-KT~-DFTDBPATH; InfoCOl .ptmalue = DBPa Info [I].token = SQLF-KT Infotll .gtmalue = (cha tain The System Default Value Of The D nerger Configuration Par sglfdsy~(2, &DBManage~Info[Ol, &sglca); / / If The System Def It Values Of The Conf iguration P a r ~ e t / / Specified Were Retrieved, Displ if (sglca.sglcode == SQL-RC-OK)
c cout
Description
"he GET DATABASE CONFIGURATION DEFAULTS function is used to retrieve the system default values of one or more configurationparameters (entries) in a database configuration file.This function uses an array of special structures (sqlfupd)to retrieve the system default values for one or more database configuration parameters. Refer to the GET DATABASE CONFIGURATION function for a detailed
description of this structure and for more information about the database configuration parameters available. Before this function can be executed, an array of sqlfupd structures must be allocated, the token field of each structure in this array must be set to one of the database configuration parameter tokens listed in Table 7-3 (refer to the GET DATABASE CONFIGURATION function), and the ptrvalue field must contain a pointer to a valid location in memory where the configuration parameter value retrieved is to be stored. When this function is executed, the system default value for each database configuration parameter specified is placed in the memory storage areas (local variables) referred to by the ptrvalue field of each sqlfupd structure in the array.
Comments
H The application that calls this function is responsible for allocating sufficient
memory for eachdata value retrieved. H The current value of a non-updatable configurationparameter is returned as that configuration parameter's systemdefault value. H If an error occurs whilethis function is executing, the database configuration information returned will be invalid. Ifan error occurs becausethe database configuration filehas been corrupted,an error message willbe returned, and you must restore the database from a good backup imageto correct the problem. U For a brief descriptionabout each database configuration fileparameter, refer to the GET DATABASE CONFIGURATION function. For detailed information about each database configuration fileparameter, refer to the IBM DB2 Universal Database Administration Guide.
Connection This function can only be called if a connection to a DB2 Database Manager instance Requirements exists. In order to retrieve default database configuration fileparameter values for a DB2 database located at a remote node,an application must attach to that node. If necessary, a temporary connection is established by this function whileit executes. Authorization No authorization is required to execute this function call.
See Also
RESET DATABASE CONFIGURATION, UPDATE DATABASE CONFIGURATION, GET DATABASE CONFIGURATION

Example
The following C++ program illustrates how to use the GET DATABASE CONFIGURATION DEFAULTS function to retrieve the system default database configuration file parameter values for the SAMPLE database:
/* NAME:    CH7EX5.CPP                                            */
/* PURPOSE: Illustrate How To Use The Following DB2 API Function  */
/*          In A C++ Program:                                     */
/*                                                                */
/*          GET DATABASE CONFIGURATION DEFAULTS                   */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlutil.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long GetDBaseInfo();
};

// Define The GetDBaseInfo() Member Function
long API_Class::GetDBaseInfo()
{
    // Declare The Local Memory Variables
    struct sqlfupd DBaseInfo[4];
    unsigned int   AutoRestart = 0;
    unsigned int   AvgApplications = 0;
    unsigned int   DeadlockChkTime = 0;

    // Initialize An Array Of SAMPLE Database Configuration
    // Parameter Structures
    DBaseInfo[0].token = SQLF_DBTN_AUTO_RESTART;
    DBaseInfo[0].ptrvalue = (char *) &AutoRestart;
    DBaseInfo[1].token = SQLF_DBTN_AVG_APPLS;
    DBaseInfo[1].ptrvalue = (char *) &AvgApplications;
    DBaseInfo[2].token = SQLF_DBTN_DLCHKTIME;
    DBaseInfo[2].ptrvalue = (char *) &DeadlockChkTime;

    // Obtain The System Default Values Of The SAMPLE Database
    // Configuration Parameters Specified
    sqlfddb("SAMPLE", 3, &DBaseInfo[0], &sqlca);

    // If The System Default Values Of The Configuration Parameters
    // Specified Were Retrieved, Display Them
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        cout << "autorestart default : " << AutoRestart << endl;
        cout << "avg_appls default   : " << AvgApplications << endl;
        cout << "dlchktime default   : " << DeadlockChkTime << endl;
    }

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlutil.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long GetDBaseInfo();
        long SetDBaseInfo();
};

// Define The GetDBaseInfo() Member Function
long API_Class::GetDBaseInfo()
{
    // Declare The Local Memory Variables
    struct sqlfupd DBaseInfo[4];
    unsigned int   AutoRestart = 0;
    unsigned int   DeadlockChkTime = 0;

    // Initialize An Array Of SAMPLE Database Configuration
    // Parameter Structures
    DBaseInfo[0].token = SQLF_DBTN_AUTO_RESTART;
    DBaseInfo[0].ptrvalue = (char *) &AutoRestart;
    DBaseInfo[1].token = SQLF_DBTN_DLCHKTIME;
    DBaseInfo[1].ptrvalue = (char *) &DeadlockChkTime;

    // Obtain The Current Values Of The SAMPLE Database
    // Configuration Parameters Specified
    sqlfxdb("SAMPLE", 2, &DBaseInfo[0], &sqlca);

    // If The Current Values Of The Configuration Parameters
    // Specified Were Retrieved, Display Them
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        cout << "autorestart : " << AutoRestart << endl;
        cout << "dlchktime   : " << DeadlockChkTime << endl;
    }

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
Description
The CLOSE DATABASE DIRECTORY SCAN function is used to free system resources that were allocated by the OPEN DATABASE DIRECTORY SCAN function.

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization
No authorization is required to execute this function call.

See Also
OPEN DATABASE DIRECTORY SCAN, GET NEXT DATABASE DIRECTORY ENTRY

Example
See the example provided for the OPEN DATABASE DIRECTORY SCAN function on page 251.
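Because the page-251 example is not reproduced here, the following sketch outlines the OPEN/GET NEXT/CLOSE pattern these directory scan functions follow. It is only an illustration: the sqledosd(), sqledgne(), and sqledcls() entry points, their parameter order, and the sqledinfo field used for display are assumptions rather than text taken from this chapter.

// Minimal sketch of a system database directory scan
// (assumed entry points, parameter order, and field names)
#include <iostream.h>
#include <sqlenv.h>
#include <sqlca.h>

long ListDatabases()
{
    struct sqlca      sqlca;
    unsigned short    Handle = 0;
    unsigned short    NumEntries = 0;
    struct sqledinfo *DirEntry;

    // Open a scan of the system database directory (NULL = default path)
    sqledosd(NULL, &Handle, &NumEntries, &sqlca);
    if (sqlca.sqlcode != SQL_RC_OK)
        return (sqlca.sqlcode);

    // Fetch each directory entry and display its alias (assumed to be
    // an 8-character, blank-padded field)
    for (unsigned short i = 0; i < NumEntries; i++)
    {
        sqledgne(Handle, &DirEntry, &sqlca);
        if (sqlca.sqlcode != SQL_RC_OK)
            break;
        cout << "Database alias : ";
        cout.write(DirEntry->alias, 8);
        cout << endl;
    }

    // Free the resources held by the scan
    sqledcls(Handle, &sqlca);
    return (sqlca.sqlcode);
}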
CATALOG NODE

Purpose
The CATALOG NODE function is used to store information about the location of another DB2 Database Manager (server) instance, and the associated communications protocol that is used to access that instance, in the workstation node directory.

Syntax
SQL_API_RC SQL_API_FN sqlectnd (struct sqle_node_struct *NodeInfo,
                                void                    *ProtocolInfo,
                                struct sqlca            *SQLCA);

Parameters
NodeInfo       A pointer to a sqle_node_struct structure that contains information about the node that is to be cataloged.
ProtocolInfo   A pointer to the appropriate protocol information structure that contains information about the communications protocol that will be used to access the specified node.
SQLCA          A pointer to a location in memory where a SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.

Includes
#include <sqlenv.h>
Description
The CATALOG NODE function is used to store information about the location of another DB2 Database Manager (server) instance, and the associated communications protocol that is to be used to access that instance, in the workstation node directory. This information is needed in order for an application to establish a connection or an attachment to a remote DB2 database server. Two special structures (sqle_node_struct and an appropriate protocol information structure) are used to pass characteristics about a node to the DB2 Database Manager when this function is called. The first of these structures, sqle_node_struct, is defined in sqlenv.h as follows:

struct sqle_node_struct
{
    unsigned short struct_id;     /* A unique structure identifier value.     */
                                  /* This field must always be set to         */
                                  /* SQL_NODE_STR_ID.                         */
    unsigned short codepage;      /* Code page value used for the node        */
    char           comment[31];   /* Optional description of the node         */
    char           nodename[9];   /* Node name                                */
    unsigned char  protocol;      /* Indicates whether the protocol used to   */
                                  /* communicate with the node is             */
                                  /* APPC (SQL_PROTOCOL_APPC),                */
                                  /* APPN (SQL_PROTOCOL_APPN),                */
                                  /* NetBIOS (SQL_PROTOCOL_NETB),             */
                                  /* TCP/IP (SQL_PROTOCOL_TCPIP),             */
                                  /* TCP/IP using SOCKS (SQL_PROTOCOL_SOCKS), */
                                  /* CPIC (SQL_PROTOCOL_CPIC),                */
                                  /* IPX/SPX (SQL_PROTOCOL_IPXSPX),           */
                                  /* the LOCAL protocol for an instance on    */
                                  /* the same workstation                     */
                                  /* (SQL_PROTOCOL_LOCAL), or a named pipe    */
                                  /* (SQL_PROTOCOL_NPIPE)                     */
};

The second special structure used by this function (the protocol information structure) is determined by the communications protocol that is to be used to communicate with the cataloged node. This structure can be any of the following DB2-defined structures:
- sqle_node_appc     Advanced Program-to-Program Communications (APPC) protocol
- sqle_node_appn     Advanced Peer-to-Peer Networking (APPN) protocol
- sqle_node_netb     NetBIOS protocol
- sqle_node_tcpip    TCP/IP protocol
- sqle_node_cpic     Common Programming Interface Communications (CPIC) protocol
- sqle_node_ipxspx   Internetwork Packet Exchange/Sequenced Packet Exchange (IPX/SPX) protocol
- sqle_node_local    Local node
- sqle_node_npipe    Named pipe
The sqle_node_appc structure is defined in sqlenv.h as follows:

struct sqle_node_appc
{
    char local_lu[9];     /* The logical unit (SNA port) name used    */
                          /* to establish the connection              */
    char partner_lu[9];   /* The logical unit (SNA port) name at      */
                          /* the remote DB2 instance                  */
    char mode[9];         /* The name of the transmission mode to     */
                          /* use. This field is usually set to        */
                          /* "SQLL0001".                              */
};

The sqle_node_appn structure is defined in sqlenv.h as follows:

struct sqle_node_appn
{
    char networkid[9];    /* The network ID                           */
    char remote_lu[9];    /* The logical unit (SNA port) name at      */
                          /* the remote DB2 instance                  */
    char local_lu[9];     /* The logical unit (SNA port) name used    */
                          /* to establish the connection              */
    char mode[9];         /* The name of the transmission mode to     */
                          /* use. This field is usually set to        */
                          /* "SQLL0001".                              */
};

The sqle_node_netb structure is defined in sqlenv.h as follows:

struct sqle_node_netb
{
    unsigned short adapter;          /* The LAN adapter number. This parameter */
                                     /* can be set to any of the following     */
                                     /* values: SQL_ADAPTER_0 (adapter number  */
                                     /* 0), SQL_ADAPTER_1 (adapter number 1),  */
                                     /* SQL_ADAPTER_MIN (the minimum adapter   */
                                     /* number), or SQL_ADAPTER_MAX (the       */
                                     /* maximum adapter number)                */
    char           remote_nname[9];  /* The workstation name that is stored in */
                                     /* the nname parameter of the Database    */
                                     /* Manager configuration file on the      */
                                     /* remote workstation. This field must be */
                                     /* NULL-terminated or blank filled up to  */
                                     /* 9 characters.                          */
};

The sqle_node_tcpip structure is defined in sqlenv.h as follows:

struct sqle_node_tcpip
{
    char hostname[256];      /* The name of the TCP/IP host that the      */
                             /* DB2 instance (server) resides on          */
    char service_name[15];   /* The TCP/IP service name (or port number)  */
                             /* of the DB2 instance (server)              */
};
The sqle_node_cpic structure is defined in sqlenv.h as follows:

struct sqle_node_cpic
{
    char           sym_dest_name[9];  /* The symbolic destination name of    */
                                      /* the remote partner                  */
    unsigned short security_type;     /* The security type used. This field  */
                                      /* must be set to                      */
                                      /* SQL_CPIC_SECURITY_NONE,             */
                                      /* SQL_CPIC_SECURITY_PROGRAM, or       */
                                      /* SQL_CPIC_SECURITY_SAME.             */
};

The sqle_node_ipxspx structure is defined in sqlenv.h as follows:

struct sqle_node_ipxspx
{
    char fileserver[49];    /* The name of the NetWare file server      */
                            /* where the DB2 server instance is         */
                            /* registered                               */
    char objectname[49];    /* The name of a particular DB2 server      */
                            /* instance that is stored in the NetWare   */
                            /* file server bindery                      */
};

The sqle_node_local structure is defined in sqlenv.h as follows:

struct sqle_node_local
{
    char instance_name[9];  /* The name of a DB2 Database Manager       */
                            /* instance. This field must be NULL-       */
                            /* terminated or blank filled up to 9       */
                            /* characters.                              */
};

The sqle_node_npipe structure is defined in sqlenv.h as follows:

struct sqle_node_npipe
{
    char computername[16];  /* The name of the computer that a DB2      */
                            /* Database Manager instance (server)       */
                            /* resides on                               */
    char instance_name[9];  /* The name of a DB2 Database Manager       */
                            /* instance                                 */
};
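As an illustration of how one of these protocol structures is used with the CATALOG NODE function, the sketch below catalogs a hypothetical TCP/IP node. The host name, service name, and node name are placeholders, and setting codepage to 0 (use the default) is an assumption; the full NetBIOS-based example later in this section shows the same pattern.

// Minimal sketch: catalog a TCP/IP node with sqlectnd()
// (hypothetical host, service, and node names)
#include <string.h>
#include <iostream.h>
#include <sqlenv.h>
#include <sqlca.h>

long CatalogTCPIPNode()
{
    struct sqlca            sqlca;
    struct sqle_node_struct NodeInfo;
    struct sqle_node_tcpip  Protocol;

    // Describe the node itself
    NodeInfo.struct_id = SQL_NODE_STR_ID;
    NodeInfo.codepage  = 0;                       // 0 = default code page (assumption)
    strcpy(NodeInfo.comment,  "TCP/IP Test Server");
    strcpy(NodeInfo.nodename, "TCPNODE");
    NodeInfo.protocol  = SQL_PROTOCOL_TCPIP;

    // Describe how to reach it
    strcpy(Protocol.hostname,     "dbserver.example.com");
    strcpy(Protocol.service_name, "50000");

    // Catalog the new workstation node
    sqlectnd(&NodeInfo, (void *) &Protocol, &sqlca);

    if (sqlca.sqlcode == SQL_RC_OK)
        cout << "Node TCPNODE cataloged." << endl;

    return (sqlca.sqlcode);
}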
Comments
- This function will automatically create a node directory if one does not already exist. On OS/2 and Windows, the node directory is stored on the disk drive that contains the DB2 Database Manager instance that is currently being used. On all other systems, the node directory is stored in the directory where the DB2 product was installed.
- You can use the OPEN NODE DIRECTORY SCAN, GET NEXT NODE DIRECTORY ENTRY, and CLOSE NODE DIRECTORY SCAN functions to list the contents of the node directory. Together, these three functions work like an SQL cursor (i.e., they use the OPEN/FETCH/CLOSE paradigm).
- If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during the first directory lookup. Because the cache is only refreshed when an application modifies one of the directory files, directory changes made by other applications might not be effective until the application is restarted. To refresh DB2's shared cache (server only), an application should stop and then restart the database. To refresh an application's directory cache, the user should stop and then restart that application. For more information about directory caching, refer to the GET DATABASE MANAGER CONFIGURATION function.

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization
Only users with either System Administrator (SYSADM) authority or System Control (SYSCTRL) authority can execute this function call.
See Also
UNCATALOG NODE, OPEN NODE DIRECTORY SCAN, GET NEXT NODE DIRECTORY ENTRY, CLOSE NODE DIRECTORY SCAN

Example
The following C++ program illustrates how to use the CATALOG NODE function to catalog a remote workstation node:

/* NAME:    CH8EX5.CPP                                            */
/* PURPOSE: Illustrate How To Use The Following DB2 API Function  */
/*          In A C++ Program:                                     */
/*                                                                */
/*          CATALOG NODE                                          */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long CatalogNode();
};

// Define The CatalogNode() Member Function
long API_Class::CatalogNode()
{
    // Declare The Local Memory Variables
    struct sqle_node_struct NodeInfo;
    struct sqle_node_netb   Protocol;

    // Initialize The Node Information Data Structure
    NodeInfo.struct_id = SQL_NODE_STR_ID;
    strcpy(NodeInfo.comment, "Test Database Server");
    strcpy(NodeInfo.nodename, "TESTSVR");
    NodeInfo.protocol = SQL_PROTOCOL_NETB;

    // Initialize The NetBIOS Protocol Data Structure
    Protocol.adapter = SQL_ADAPTER_0;
    strcpy(Protocol.remote_nname, "TESTSVR");

    // Catalog A New Workstation Node
    sqlectnd(&NodeInfo, (void *) &Protocol, &sqlca);

    // If The New Workstation Node Was Cataloged, Display A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << "The node TESTSVR has been cataloged." << endl;

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
Description
The UNCATALOG NODE function is used to delete an entry from the node directory.

Comments
- The CATALOG NODE function can be used to recatalog a node that was removed (uncataloged) from the node directory.
- You can use the OPEN NODE DIRECTORY SCAN, GET NEXT NODE DIRECTORY ENTRY, and CLOSE NODE DIRECTORY SCAN functions to list the contents of the node directory. Together, these three functions work like an SQL cursor (i.e., they use the OPEN/FETCH/CLOSE paradigm).
- If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during the first directory lookup. Because the cache is only refreshed when an application modifies one of the directory files, directory changes made by other applications might not take effect until the application is restarted. To refresh DB2's shared cache (server only), an application should stop and then restart the database. To refresh an application's directory cache, the user should stop and then restart that application. For more information about directory caching, refer to the GET DATABASE MANAGER CONFIGURATION function.

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization
Only users with either System Administrator (SYSADM) authority or System Control (SYSCTRL) authority are allowed to execute this function call.

See Also
CATALOG NODE, OPEN NODE DIRECTORY SCAN, GET NEXT NODE DIRECTORY ENTRY, CLOSE NODE DIRECTORY SCAN
Example
The following C++ program illustrates how to use the UNCATALOG NODE function to uncatalog a remote workstation node:

/* NAME:    CH8EX6.CPP                                            */
/* PURPOSE: Illustrate How To Use The Following DB2 API Function  */
/*          In A C++ Program:                                     */
/*                                                                */
/*          UNCATALOG NODE                                        */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long UncatalogNode();
};

// Define The UncatalogNode() Member Function
long API_Class::UncatalogNode()
{
    // Declare The Local Memory Variables
    char NodeName[9];

    // Initialize The Local Memory Variables
    strcpy(NodeName, "TESTSVR");

    // Uncatalog The Specified Node
    sqleuncn(NodeName, &sqlca);

    // If The Node Was Uncataloged, Display A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << "The node TESTSVR has been uncataloged." << endl;

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
Description
The CLOSE NODE DIRECTORY SCAN function is used to free system resources that were allocated by the OPEN NODE DIRECTORY SCAN function.

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization
No authorization is required to execute this function call.

See Also
OPEN NODE DIRECTORY SCAN, GET NEXT NODE DIRECTORY ENTRY

Example
See the example provided for the OPEN NODE DIRECTORY SCAN function on page 265.
CATALOG DCS DATABASE

Purpose
The CATALOG DCS DATABASE function is used to store information about a DRDA database in the DCS directory.

Syntax
SQL_API_RC SQL_API_FN sqlegdad (struct sql_dir_entry *DCSDirEntry,
                                struct sqlca         *SQLCA);

Parameters
DCSDirEntry   A pointer to an sql_dir_entry structure that contains information about the DCS database that is to be cataloged.
SQLCA         A pointer to a location in memory where a SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.

Includes
#include <sqlenv.h>
Description
The CATALOG DCS DATABASE function is used to store information about a DRDA database in the DCS directory. Databases in this directory are accessed through an application requester, such as IBM's DB2 Connect product. When a DCS directory entry has a database name that matches a database name in the system database directory, the application requester associated with the DCS database forwards all SQL requests made against that database to the remote server where the DRDA database physically resides. A special structure (sql_dir_entry) is used to pass characteristics about a DCS database to the DB2 Database Manager when this function is called. The sql_dir_entry structure is defined in sqlenv.h as follows:

struct sql_dir_entry
{
    unsigned short struct_id;    /* The structure identifier. This field      */
                                 /* must always be set to SQL_DCS_STR_ID.     */
    unsigned short release;      /* Release level of the DCS database entry   */
    unsigned short codepage;     /* Code page value used for the DCS          */
                                 /* database comment                          */
    char           comment[31];  /* Optional DCS database comment             */
    char           ldb[9];       /* Local database name                       */
    char           tdb[19];      /* Actual (target) host database name        */
    char           ar[33];       /* Application requester (client) library    */
                                 /* name                                      */
    char           parm[513];    /* Transaction program prefix, transaction   */
                                 /* program name, SQLCODE mapping file name,  */
                                 /* disconnect option, and security option    */
};

NOTE: Each character field in this structure must be either NULL-terminated or blank filled up to the specified length of the field.
Comments
- This function will automatically create a DCS directory if one does not already exist. On OS/2 and Windows, the DCS directory is stored on the disk drive that contains the DB2 Database Manager instance currently being used. On all other systems, the DCS directory is stored in the directory where the DB2 product was installed.
- The DCS directory is maintained outside of the database. If a database is cataloged in the DCS directory, it must also be cataloged as a remote database in the system database directory.
- You can use the OPEN DCS DIRECTORY SCAN, GET DCS DIRECTORY ENTRIES, GET DCS DIRECTORY ENTRY FOR DATABASE, and CLOSE DCS DIRECTORY SCAN functions to obtain information about one or more entries in the DCS directory.
- If directory caching is enabled, database, node, and DCS directory files are cached in memory. An application's directory cache is created during the first directory lookup. Because the cache is only refreshed when an application modifies one of the directory files, directory changes made by other applications might not take effect until the application is restarted. To refresh DB2's shared cache (server only), an application should stop and then restart the database. To refresh an application's directory cache, the user should stop and then restart that application. For more information about directory caching, refer to the GET DATABASE MANAGER CONFIGURATION function.
- IBM's DB2 Connect product provides connections to DRDA application servers, such as:
  - DB2 for OS/390 on System/370 and System/390 architecture host computers
  - DB2 for VM and VSE on System/370 and System/390 architecture host computers
  - OS/400 on Application System/400 (AS/400) host computers

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization
Only users with either System Administrator (SYSADM) authority or System Control (SYSCTRL) authority are allowed to execute this function call.

See Also
UNCATALOG DCS DATABASE, OPEN DCS DIRECTORY SCAN, GET DCS DIRECTORY ENTRIES, GET DCS DIRECTORY ENTRY FOR DATABASE, CLOSE DCS DIRECTORY SCAN, CATALOG DATABASE
Example
"he following c++ program illustrates how to use the CATALOG DCS DATABASE function to catalog an alias for a DCS database:
/* /* /* /*
*/ NAME: CH8EX8 .CPP PURPOSE: Illustrate How To U60 The Following DB2 API Function In A C++ Program:
*/ */
*/
Part 3: Application Programming Interface Functions /*
/* /*
CATAtoa DCS DATABASE
/*
*/
/ / Include The Appropriate #include <windows.h> #include #include <sqlenv.h, #include <sqlca.h>
Reader
*/ */ */
Files
/ / Define The API-Class Class class API-Class {
/ / Attributes public: struct sqlca sqlca; / / Operations public : long CatalogDCSDB () ;
?; / / Define The CatalogDCSDBO Member Function long API-ClaSS::CatalOgDCSDB() {
/ / Declare The LocalM e m o r y Variables struct sql-dir-entry DCSInfo; / / Initialize The DCS Database Information Data Structure DCSInfo.struct-id = SQL-DCS-STR-ID; DCSInfo.release = 0; DCSInfo.codepage = 450; StrC9Y(DCSInfO.Cmu~tnt, "DB2 For MVS Database"); StrCpy(DCSInfO.ldb, "SAMPLEDB"); strcpy(DCSInfo.tdb, u ~ ~ ~ ~ " ) ; strcpy(DCSInfo.ar, strcpy(DCSInfo.parm, "") ; / / Catalog A New DC8 Database eqlegdad(hDCSInfo, hsqlca);
/ / If The New DCS Database WasCataloged, Display A Success / / Message if (sqlca.sqlcode == SQL-RC-OK) {
>
- The user ID and password stored in the sqle_reg_nwbindery structure must have supervisory or equivalent authority.
- This function can only be issued locally from a DB2 database server workstation; remote execution of this function is not supported.
- After IPX/SPX support software is installed and configured, the DB2 database server should be registered on the network server (unless IPX/SPX clients will only be using direct addressing to connect to this DB2 server).
- Once a DB2 database server is registered on the network server, if you need to reconfigure IPX/SPX or change the DB2 server's network address, deregister the DB2 server from the network server (with the DEREGISTER function) and then register it again (using this function) after the changes have been made.
- This function cannot be used in applications that run on the Windows or the Windows NT operating system.

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.

Authorization
No authorization is required to execute this function call.

See Also
DEREGISTER
Example
The following C++ program illustrates how to use the REGISTER function and the DEREGISTER function to register and deregister the current DB2 database server at a NetWare file server:
/* NAME:    CH8EX11.CPP                                           */
/* PURPOSE: Illustrate How To Use The Following DB2 API Functions */
/*          In A C++ Program:                                     */
/*                                                                */
/*          REGISTER                                              */
/*          DEREGISTER                                            */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlutil.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long RegisterServer();
        long DeregisterServer();
};

// Define The RegisterServer() Member Function
long API_Class::RegisterServer()
{
    // Declare The Local Memory Variables
    struct sqle_reg_nwbindery NWInfo;
    struct sqlfupd            DBManagerInfo;
    struct sqle_start_options StartOptions;
    struct sqledbstopopt      StopOptions;
    char                      FileServer[10];

    // Initialize The DB2 Database Manager Configuration
    // Parameter Structure
    strcpy(FileServer, "NWHOST");      // file server name (illegible in the original listing)
    DBManagerInfo.token = SQLF_KTN_FILESERVER;
    DBManagerInfo.ptrvalue = (char *) FileServer;

    // Store The Novell NetWare File Server Name In The DB2
    // Database Manager Configuration File
    sqlfusys(1, &DBManagerInfo, &sqlca);

    // Initialize The Stop DB2 Database Manager Options Structure
    StopOptions.isprofile = 0;
    strcpy(StopOptions.profile, "");
    StopOptions.isnodenum = 0;
    StopOptions.nodenum = 0;
    StopOptions.option = SQLE_NONE;
    StopOptions.callerac = SQLE_DROP;

    // Stop The DB2 Database Manager Server Processes
    sqlepstp(&StopOptions, &sqlca);

    // Initialize The Start DB2 Database Manager Options Structure
    strcpy(StartOptions.sqloptid, SQLE_STARTOPTID_V51);
    StartOptions.isprofile = 0;
    strcpy(StartOptions.profile, "");
    StartOptions.isnodenum = 0;
    StartOptions.nodenum = 0;
    StartOptions.option = SQLE_NONE;
    StartOptions.ishostname = 0;
    strcpy(StartOptions.hostname, "");
    StartOptions.isport = 0;
    StartOptions.port = 0;
    StartOptions.isnetname = 0;
    strcpy(StartOptions.netname, "");
    StartOptions.tblspace_type = SQLE_TABLESPACES_LIKE_CATALOG;
    StartOptions.tblspace_node = 0;
    StartOptions.iscomputer = 0;
    strcpy(StartOptions.computer, "");
    StartOptions.pUserName = NULL;
    StartOptions.pPassword = NULL;

    // Re-Start The DB2 Database Manager Server Processes (This
    // Will Make DB2 See The Changes Made To The Configuration File)
    sqlepstart(&StartOptions, &sqlca);

    // Initialize The NetWare Registry Information Data Structure
    strcpy(NWInfo.uid, "userid");
    strcpy(NWInfo.pswd, "password");

    // Register The Current DB2 Server On A NetWare File Server
    sqleregs(SQL_NWBINDERY, &NWInfo, &sqlca);

    // If The DB2 Server Was Registered On A NetWare File Server,
    // Display A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << "The DB2 server has been registered on " << FileServer << "." << endl;

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlenv.h>
#include <sqlutil.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long GetTSpaceInfo();
};

// Define The GetTSpaceInfo() Member Function
long API_Class::GetTSpaceInfo()
{
    // Declare The Local Memory Variables
    unsigned long              TSCount;
    struct SQLB_TBSPQRY_DATA **TableSpaceData;

    // Retrieve The Table Space Data
    sqlbmtsq(&sqlca, &TSCount, &TableSpaceData, SQLB_RESERVED1,
             SQLB_RESERVED2);
    if (sqlca.sqlcode != SQL_RC_OK)
        return (sqlca.sqlcode);

    // Display The Table Space Data Retrieved
    cout << "Number of table spaces : " << TSCount << endl;
    // ...
Syntax
SQL_API_RC SQL_API_FN sqlbftcq (struct sqlca                *SQLCA,
                                unsigned long                MaxContainers,
                                struct SQLB_TBSCONTQRY_DATA *ContainerData,
                                unsigned long               *NumContainers);

Parameters
SQLCA           A pointer to a location in memory where a SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.
MaxContainers   The maximum number of rows of table space container information that the user-allocated memory storage buffer can hold.
ContainerData   A pointer to an array of SQLB_TBSCONTQRY_DATA structures where this function is to store the table space container information retrieved.
NumContainers   A pointer to a location in memory where this function is to store the actual number of rows of table space container information retrieved.

Includes
#include <sqlutil.h>

Description
The FETCH TABLESPACE CONTAINER QUERY function is used to retrieve (fetch) and transfer a specified number of rows of table space container information to a user-allocated memory storage buffer that is supplied by the calling application. The copy of table space container data that is placed in memory represents a snapshot of the table space container information at the time this function was executed. Because no locking is performed, the information in this snapshot might not reflect recent changes made by other applications. Before this function can be executed, an array of SQLB_TBSCONTQRY_DATA structures must be allocated, and the number of elements in this array must be stored in the MaxContainers parameter. The SQLB_TBSCONTQRY_DATA structure is defined in sqlutil.h as follows:
struct SQLB_TBSCONTQRY_DATA
{
    unsigned long id;            /* The container identifier                   */
    unsigned long nTbs;          /* The number of table spaces sharing this    */
                                 /* container. The value for this parameter    */
                                 /* is always 1 (DMS table spaces can have     */
                                 /* only 1 container space at this time).      */
    unsigned long tbsID;         /* The table space identifier                 */
    unsigned long nameLen;       /* The length of the container name (for      */
                                 /* languages other than C and C++)            */
    char          name[256];     /* The container name (NULL-terminated)       */
    unsigned long underDBDir;    /* Indicates whether the table space          */
                                 /* container is under the database            */
                                 /* directory (1) or not (0)                   */
    unsigned long contType;      /* Indicates whether the table space          */
                                 /* container specifies a directory path       */
                                 /* (SQLB_CONT_PATH), a raw device             */
                                 /* (SQLB_CONT_DISK), or a file                */
                                 /* (SQLB_CONT_FILE). Note: the value          */
                                 /* SQLB_CONT_PATH is only valid for SMS       */
                                 /* table spaces.                              */
    unsigned long totalPages;    /* Total number of 4KB pages occupied by      */
                                 /* the table space container (DMS table       */
                                 /* spaces only)                               */
    unsigned long usablePages;   /* Total number of 4KB pages in the table     */
                                 /* space container that are usable, that is,  */
                                 /* total pages minus overhead (DMS table      */
                                 /* spaces only)                               */
    unsigned long ok;            /* Indicates whether the table space          */
                                 /* container is accessible (1) or             */
                                 /* inaccessible (0). A value of 0 indicates   */
                                 /* an abnormal situation that might require   */
                                 /* the database administrator's attention.    */
};
Comments
- When the array of SQLB_TBSCONTQRY_DATA structures that was allocated for this function is no longer needed, it must be freed by the application that allocated it.
- If this function is executed when a snapshot of table space container information is already in memory, the previous snapshot will be replaced with refreshed table space container information.
- One snapshot buffer storage area is used for table space queries and another snapshot buffer storage area is used for table space container queries. These buffers are independent of one another.

Prerequisites
The OPEN TABLESPACE CONTAINER QUERY function must be executed before this function is called.

Connection Requirements
This function can only be called if a connection to a database exists.

Authorization
Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL) authority, System Maintenance (SYSMAINT) authority, or Database Administrator (DBADM) authority can execute this function call.

See Also
OPEN TABLESPACE CONTAINER QUERY, CLOSE TABLESPACE CONTAINER QUERY, TABLESPACE CONTAINER QUERY

Example
See the example provided for the OPEN TABLESPACE CONTAINER QUERY function on page 312.
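Because the page-312 example is not reproduced here, the following sketch shows how the OPEN, FETCH, and CLOSE table space container query functions fit together. The sqlbftcq() and sqlbctcq() calls follow the syntax shown in this chapter; the sqlbotcq() parameter order is an assumption, and error handling is kept to a minimum.

// Minimal sketch of a table space container query scan
#include <iostream.h>
#include <stdlib.h>
#include <sqlutil.h>
#include <sql.h>

long ListContainers(unsigned long TableSpaceID)
{
    struct sqlca                 sqlca;
    unsigned long                MaxContainers = 0;
    unsigned long                NumContainers = 0;
    struct SQLB_TBSCONTQRY_DATA *ContainerData;

    // Open the container query; DB2 reports how many containers exist
    // (parameter order assumed)
    sqlbotcq(&sqlca, TableSpaceID, &MaxContainers);
    if (sqlca.sqlcode != SQL_RC_OK)
        return (sqlca.sqlcode);

    // Allocate one SQLB_TBSCONTQRY_DATA element per container
    ContainerData = (struct SQLB_TBSCONTQRY_DATA *)
        malloc(MaxContainers * sizeof(struct SQLB_TBSCONTQRY_DATA));

    // Fetch the container information into the user-allocated buffer
    sqlbftcq(&sqlca, MaxContainers, ContainerData, &NumContainers);
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        for (unsigned long i = 0; i < NumContainers; i++)
            cout << "Container : " << ContainerData[i].name << endl;
    }

    // Close the container query and free the local buffer
    sqlbctcq(&sqlca);
    free(ContainerData);
    return (sqlca.sqlcode);
}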
CLOSE TABLESPACE CONTAINER QUERY

Purpose
The CLOSE TABLESPACE CONTAINER QUERY function is used to terminate a table space container query request that was made by the OPEN TABLESPACE CONTAINER QUERY function.

Syntax
SQL_API_RC SQL_API_FN sqlbctcq (struct sqlca *SQLCA);

Parameters
SQLCA   A pointer to a location in memory where a SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.

Includes
#include <sqlutil.h>

Description
The CLOSE TABLESPACE CONTAINER QUERY function is used to end a table space container query request that was made by the OPEN TABLESPACE CONTAINER QUERY function and to free all associated resources.

Connection Requirements
This function can only be called if a connection to a database exists.

Authorization
Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL) authority, System Maintenance (SYSMAINT) authority, or Database Administrator (DBADM) authority can execute this function call.

See Also
OPEN TABLESPACE CONTAINER QUERY, FETCH TABLESPACE CONTAINER QUERY, TABLESPACE CONTAINER QUERY

Example
See the example provided for the OPEN TABLESPACE CONTAINER QUERY function on page 312.
TABLESPACE CONTAINER QUERY

Purpose
The TABLESPACE CONTAINER QUERY function is used to retrieve a copy of the table space container information that is available for a table space (or for all table spaces in the current connected database) into a large DB2-allocated memory storage buffer.

Syntax
SQL_API_RC SQL_API_FN sqlbtcq (struct sqlca                 *SQLCA,
                               unsigned long                 TableSpaceID,
                               unsigned long                *NumContainers,
                               struct SQLB_TBSCONTQRY_DATA **ContainerData);

Parameters
SQLCA           A pointer to a location in memory where a SQL Communications Area (SQLCA) data structure variable is stored. This variable returns either status information (if the function executed successfully) or error information (if the function failed) to the calling application.
TableSpaceID    The ID of the table space that container information is to be retrieved for. If the value specified for this parameter is SQLB_ALL_TABLESPACES, a composite list of table space container information for all table spaces in the entire database will be returned.
NumContainers   A pointer to a location in memory where this function is to store the actual number of table space containers found for the table space specified.
ContainerData   A pointer to the address of an array of SQLB_TBSCONTQRY_DATA structures where this function is to store the table space container information retrieved.

Includes
#include <sqlutil.h>
Description
The TABLESPACE CONTAINER QUERY function is used to retrieve a copy of the table space container information that is available for a table space (or for all table spaces in the current connected database) into a large DB2-allocated memory storage buffer. When called, this function also returns the number of table space containers that have been defined for the specified table space (or for all table spaces in the current connected database) to the application. This function provides a one-call interface to the OPEN TABLESPACE CONTAINER QUERY, FETCH TABLESPACE CONTAINER QUERY, and CLOSE TABLESPACE CONTAINER QUERY functions (which can also be used to retrieve the table space container information for one or more table spaces). When this function is executed, a memory storage buffer that is used to hold all of the table space container information retrieved is automatically allocated, a pointer to that buffer is stored in the ContainerData parameter, and the number of table space containers found in either the specified table space or in the current connected database is stored in the NumContainers parameter. The memory storage buffer that holds the table space container information is actually an array of SQLB_TBSCONTQRY_DATA structures, and the value returned in the NumContainers parameter identifies the number of elements in this array. Refer to the FETCH TABLESPACE CONTAINER QUERY function for a detailed description of the SQLB_TBSCONTQRY_DATA structure.
Comments
- When this function is executed, if a sufficient amount of free memory is available, a memory storage buffer will automatically be allocated. This memory storage buffer can only be freed by calling the FREE MEMORY function. It is up to the application to ensure that all memory allocated by this function is freed when it is no longer needed.
- If sufficient memory is not available, this function will simply return the number of table space containers found, and no memory is allocated.
- If there is not enough free memory available to retrieve the complete set of table space container information at one time, you can use the OPEN TABLESPACE CONTAINER QUERY, FETCH TABLESPACE CONTAINER QUERY, and CLOSE TABLESPACE CONTAINER QUERY functions to retrieve the same table space container information in smaller pieces.

Authorization
Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL) authority, System Maintenance (SYSMAINT) authority, or Database Administrator (DBADM) authority are allowed to execute this function call.

See Also
OPEN TABLESPACE CONTAINER QUERY, FETCH TABLESPACE CONTAINER QUERY, CLOSE TABLESPACE CONTAINER QUERY, FREE MEMORY

Example
The following C++ program illustrates how to use the TABLESPACE CONTAINER QUERY function to retrieve information about the table space containers defined for the SYSCATSPACE table space:
/* NAME:    CH9EX6.SQC                                            */
/* PURPOSE: Illustrate How To Use The Following DB2 API Functions */
/*          In A C++ Program:                                     */
/*                                                                */
/*          TABLESPACE CONTAINER QUERY                            */
/*          FREE MEMORY                                           */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlenv.h>
#include <sqlutil.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long GetTSCInfo();
};

// Define The GetTSCInfo() Member Function
long API_Class::GetTSCInfo()
{
    // Declare The Local Memory Variables
    unsigned long                TSCCount;
    struct SQLB_TBSCONTQRY_DATA *TSContainerData;

    // Retrieve The Table Space Container Data
    sqlbtcq(&sqlca, 2, &TSCCount, &TSContainerData);
    if (sqlca.sqlcode != SQL_RC_OK)
        return (sqlca.sqlcode);

    // Display The Table Space Container Data Retrieved
    for (unsigned long i = 0; i < TSCCount; i++)
        cout << "Container : " << TSContainerData[i].name << endl;

    // Free The Memory Allocated By The TABLESPACE CONTAINER QUERY Function
    sqlefmem(&sqlca, (void *) TSContainerData);

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
Description
The MIGRATE DATABASE function is used to convert databases created under previous versions of DB2 to DB2 Universal Database Version 5.2 format. Databases created under the following DB2 products can be converted by the DB2 Universal Database Version 5.2 database migration process:
- DB2 for OS/2, Version 1.x
- DB2 for OS/2, Version 2.x
- DB2 for AIX, Version 1.x
- DB2 for AIX, Version 2.x
- DB2 for HP-UX, Version 2.x
- DB2 for Solaris, Version 2.x
- DB2 for Windows NT, Version 2.x
- DB2 Parallel Edition, Version 1.x

Once a database has been converted (migrated) to DB2 Universal Database Version 5.2 format, it cannot be returned to its original format. Because of this, it is a good idea to create a backup image of a database before it is migrated.

Comments
- A database must be cataloged in the system database directory before it can be migrated.
- You can use the Database Pre-Migration Tool (db2ckmig command) to determine whether or not a DB2 database can be migrated (refer to the IBM DB2 Command Reference for more information about this tool).

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first. When this function is called, it establishes a connection to the specified database.

Authorization
Only users with System Administrator (SYSADM) authority can execute this function call.

See Also
BACKUP DATABASE

Example
The following C++ program illustrates how to use the MIGRATE DATABASE function to convert a DB2 for Windows NT Version 2.0 database to the DB2 Universal Database Version 5.2 format:
/* NAME:    CH10EX1.CPP                                           */
/* PURPOSE: Illustrate How To Use The Following DB2 API Function  */
/*          In A C++ Program:                                     */
/*                                                                */
/*          MIGRATE DATABASE                                      */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long MigrateDB();
};

// Define The MigrateDB() Member Function
long API_Class::MigrateDB()
{
    // Migrate The DB2 Database To DB2 UDB Version 5.2
    sqlemgdb("SAMPLE", "userID", "password", &sqlca);

    // If The DB2 Database Was Migrated Successfully, Display A
    // Success Message
    // Note: The Return Code 1103 Will Be Returned If The Database
    // Is Already At The Current Level
    if (sqlca.sqlcode == SQL_RC_OK || sqlca.sqlcode == 1103)
        cout << "The SAMPLE database has been migrated." << endl;

    // Return The SQL Return Code
    return (sqlca.sqlcode);
}
Description
The BACKUP DATABASE function is used to create a backup copy of a database or of one or more table spaces. If you have a database backup image available and the database becomes damaged or corrupted, it can be returned to the state it was in the last time it was backed up. Furthermore, if the database is enabled for roll-forward recovery, it can be restored to the state it was in just before the damage occurred. Table space level backup images can also be made for a database. You can use table space backup images to repair problems that only affect specific table spaces. Two special structures (sqlu_tablespace_bkrst_list and sqlu_media_list) are used to pass table space names and backup device information to the Backup utility when this function is called. The first of these structures, sqlu_tablespace_bkrst_list, is defined in sqlutil.h as follows:

typedef struct sqlu_tablespace_bkrst_list
{
    long                          num_entry;   /* The number of entries in the    */
                                               /* list of table space names       */
                                               /* stored in the tablespace field  */
    struct sqlu_tablespace_entry *tablespace;  /* A pointer to an array of        */
                                               /* sqlu_tablespace_entry           */
                                               /* structures that contains a      */
                                               /* list of table space names       */
} sqlu_tablespace_bkrst_list;

This structure contains a pointer to an array of additional sqlu_tablespace_entry structures that are used to hold table space names. The sqlu_tablespace_entry structure is defined in sqlutil.h as follows:

typedef struct sqlu_tablespace_entry
{
    unsigned long reserve_len;           /* The length of the table space name   */
                                         /* stored in the tablespace_entry       */
                                         /* field. This field is only used if    */
                                         /* the table space name is not          */
                                         /* NULL-terminated.                     */
    char          tablespace_entry[19];  /* The table space name                 */
    char          filler[1];             /* Reserved                             */
} sqlu_tablespace_entry;
The second special structure used by the BACKUP DATABASE function, the sqlu_media_list structure, is used to describe the type(s) of media that the backup image is to be written to. The sqlu_media_list structure is defined in sqlutil.h as follows:

typedef struct sqlu_media_list
{
    char media_type;    /* Indicates that the media type is one or more    */
                        /* local devices (SQLU_LOCAL_MEDIA), an ADSM       */
                        /* shared library (SQLU_ADSM_MEDIA), a vendor      */
                        /* product shared library (SQLU_OTHER_MEDIA), or   */
                        /* a user exit routine (SQLU_USER_EXIT). Local     */
                        /* devices can be any combination of tapes,        */
                        /* disks, or diskettes.                            */
    char filler[3];     /* Reserved                                        */
    long sessions;      /* The number of entries in the list of devices    */
                        /* stored in the target field                      */
    union sqlu_media_list_targets target;
                        /* A pointer to an array of one of two types of    */
                        /* structures that contains additional device      */
                        /* information. The type of structure used is      */
                        /* determined by the value specified in the        */
                        /* media_type field.                               */
} sqlu_media_list;

This structure contains a pointer to an array of structures that provide additional information about the specific media devices to be used. This array can contain either of the following DB2-defined structures:
- sqlu_media_entry   Local media information (SQLU_LOCAL_MEDIA)
- sqlu_vendor        Other vendor-specific media information (SQLU_OTHER_MEDIA)
The sqlu_media_entry structure is defined in sqlutil.h as follows:

typedef struct sqlu_media_entry
{
    unsigned long reserve_len;        /* The length of the path name stored in  */
                                      /* the media_entry field. This field is   */
                                      /* only used if the path name is not      */
                                      /* NULL-terminated.                       */
    char          media_entry[216];   /* A valid path name                      */
} sqlu_media_entry;

The sqlu_vendor structure is defined in sqlutil.h as follows:

typedef struct sqlu_vendor
{
    unsigned long reserve_len1;    /* The length of the shared library name     */
                                   /* stored in the shr_lib field. This field   */
                                   /* is only used if the shared library name   */
                                   /* is not NULL-terminated.                   */
    char          shr_lib[256];    /* The name of a vendor-supplied shared      */
                                   /* library that is used for storing and      */
                                   /* retrieving data                           */
    unsigned long reserve_len2;    /* The length of the source file name        */
                                   /* stored in the filename field. This field  */
                                   /* is only used if the source file name is   */
                                   /* not NULL-terminated.                      */
    char          filename[256];   /* The name of an input source file that is  */
                                   /* to be used for providing information to   */
                                   /* a shared library                          */
} sqlu_vendor;

The type of structure used to provide additional information about specific media devices is determined by the value specified in the media_type field of the sqlu_media_list structure, as follows:
- SQLU_LOCAL_MEDIA   One or more sqlu_media_entry structures.
- SQLU_ADSM_MEDIA    No structure is needed (if the ADSTAR Distributed Storage Manager (ADSM) shared library provided with DB2 Universal Database is used). If a different version of an ADSM shared library is used, the SQLU_OTHER_MEDIA value should be used.
- SQLU_OTHER_MEDIA   One or more sqlu_vendor structures.
- SQLU_USER_EXIT     No structure is needed. (Note that this value can only be specified with OS/2.)

Refer to the IBM DB2 Administration Guide for a general discussion of DB2 Universal Database's backup and restore utilities.
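To show how these structures fit together, the sketch below builds the table space list and media list for a table space level backup. It is only an outline: the table space name and backup path are placeholders, and the sqlubkp() call that would consume these structures is indicated by a comment rather than reproduced.

// Minimal sketch: populate the TableSpaceList and MediaTargetList
// inputs for a table space level backup (placeholder names)
#include <string.h>
#include <sqlutil.h>

void BuildBackupLists()
{
    struct sqlu_tablespace_entry      TbsEntry;
    struct sqlu_tablespace_bkrst_list TbsList;
    struct sqlu_media_entry           MediaEntry;
    struct sqlu_media_list            MediaList;

    // Name the single table space to back up (NULL-terminated, so
    // reserve_len is not needed)
    memset(&TbsEntry, 0, sizeof(TbsEntry));
    strcpy(TbsEntry.tablespace_entry, "USERSPACE1");

    TbsList.num_entry  = 1;
    TbsList.tablespace = &TbsEntry;

    // Direct the backup image to a single local directory
    memset(&MediaEntry, 0, sizeof(MediaEntry));
    strcpy(MediaEntry.media_entry, "D:\\Backup");

    MediaList.media_type   = SQLU_LOCAL_MEDIA;
    MediaList.sessions     = 1;
    MediaList.target.media = &MediaEntry;

    // TbsList and MediaList would now be passed to sqlubkp() as the
    // TableSpaceList and MediaTargetList parameters, with the
    // BackupType parameter set to SQLUB_TABLESPACE.
}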
Comments
- If the BackupType parameter is set to SQLUB_TABLESPACE, the TableSpaceList parameter must contain a pointer to a list of valid table space names.
- The CallerAction parameter must be set to SQLUB_BACKUP, SQLUB_NOINTERRUPT, SQLUB_PARAM_CHECK, or SQLUB_PARAM_CHECK_ONLY the first time this function is called.
- The CallerAction parameter should be set to SQLUB_NOINTERRUPT whenever all media needed for the backup operation is available and user interaction is not needed.
- The application ID string returned by this function can be up to 33 characters in length (including the NULL-terminator character).
- The timestamp string returned by this function can be up to 15 characters in length (including the NULL-terminator character).
- The sqlu_vendor structure that the VendorOptions parameter references must be flat (i.e., it must not contain any level of indirection). Byte reversal is not performed on this structure, and code page values are not compared.
- Online backups are only permitted if roll-forward recovery is enabled. An online backup can be performed while the database is being accessed and modified by other applications.
- In order to perform an offline backup, the Backup utility must be able to connect to the specified database in exclusive mode. Therefore, this function will fail if any application, including the application calling the BACKUP DATABASE function, is connected to the specified database. If the Backup utility can connect to the specified database in exclusive mode, it will lock out all other applications until the backup operation is complete. Because the time required to create a database backup image can be significant (especially for large databases), offline backups should only be performed when a database will not be needed by other applications for an extended period of time.
- An offline backup operation will fail if the specified database or table space(s) are not in a consistent state. If the specified database is not in a consistent state, it must be restarted with the RESTART DATABASE function (to bring it back to a consistent state) before this function can be executed.
- Backup images can be directed to fixed disks, diskettes, tapes, ADSM, or other vendor products that are enabled for DB2. In order to direct backup images to tapes in OS/2, a unique device driver for the tape drive being used must be installed (there is no native tape support in OS/2), and this function must be called with SQLU_USER_EXIT specified as the media type (in the media_type field of the sqlu_media_list data structure stored in the MediaTargetList parameter).
- Although you can use the BACKUP DATABASE function to back up databases located at remote sites, the backup image itself must be directed to devices that are local to the machine on which the database resides (unless ADSM or another DB2-enabled vendor product is used). With ADSM and other DB2-enabled vendor products, the interface for the backup is local, but the location of the storage media to which the backup image is to be written can be remote.
- If a database that has been enabled for roll-forward recovery is backed up, it can be returned to the state it was in at a specific point in time (refer to the RESTORE DATABASE and ROLLFORWARD DATABASE functions for more information).
- If a database is left in a partially restored state because of a system failure during restoration, the restore operation must be successfully rerun before this function can be executed. If the database is placed in the "Rollforward Pending" state after a successful restoration, the database must also be rolled forward to a consistent state before this function can be executed.
- If a database is changed from the "Rollforward Disabled" to the "Rollforward Enabled" state, either the logretain or userexit database configuration parameter must be set appropriately before this function can be executed (refer to Chapter 7 for more information about retrieving and setting database configuration parameters).
- A table space level backup image can contain one or more table spaces.
- While one table space is being restored, all other table spaces are available for processing.
- To ensure that restored table spaces are synchronized with the rest of the database, they must be rolled forward to the end of the recovery history log file (or to the point where the table spaces were last used). Because of this, table space level backup images can only be made if roll-forward recovery is enabled.
- A user might choose to store data, indexes, long field (LONG) data, and large object (LOB) data in different table spaces. If LONG and LOB data do not reside in the same table space, a table space backup cannot be performed.
- You can back up and restore each component of a table by independently backing up and restoring each table space in which the table components reside.
- Temporary table spaces cannot be backed up. If a list of table spaces to be backed up contains one or more temporary table space names, the backup operation will fail.
- Table space level backups and restores cannot be run concurrently.

Connection Requirements
This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first. When this function is called, it establishes a connection to the database specified.

Authorization
Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL) authority, or System Maintenance (SYSMAINT) authority can execute this function call.

See Also
RESTORE DATABASE, ROLLFORWARD DATABASE, RESTART DATABASE

Example
The following C++ program illustrates how to use the BACKUP DATABASE function to back up the SAMPLE database to a subdirectory on the D: drive:
/* NAME:    CH10EX3.CPP                                           */
/* PURPOSE: Illustrate How To Use The Following DB2 API Function  */
/*          In A C++ Program:                                     */
/*                                                                */
/*          BACKUP DATABASE                                       */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <sqlutil.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long RestoreDB();
};

// Define The RestoreDB() Member Function
long API_Class::RestoreDB()
{
    // Declare The Local Memory Variables
    struct sqlu_media_list  Media_List;
    struct sqlu_media_entry Media_Entry;
    char                    ApplicationID[33];

    // Initialize The Media List Information Data Structure
    Media_List.media_type = SQLU_LOCAL_MEDIA;
    Media_List.sessions = 1;
    strcpy(Media_Entry.media_entry, "D:\\Backup");
    Media_List.target.media = &Media_Entry;

    // Tell The User That The Restore Process Is Being Started
    cout << "Restoring the SAMPLE database..." << endl;
    // ...
// Include The Appropriate Header Files
#include <iostream.h>
#include <string.h>
#include <sqlutil.h>
#include <sqlenv.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long BackupDB();
        long RestoreDB();
};

// Define The BackupDB() Member Function
long API_Class::BackupDB()
{
    // Declare The Local Memory Variables
    struct sqlu_media_list  Media_List;
    struct sqlu_media_entry Media_Entry;
    char                    ApplicationID[33];
    char                    Timestamp[27];

    // Initialize The Media List Information Data Structure
    Media_List.media_type = SQLU_LOCAL_MEDIA;
    Media_List.sessions = 1;
    strcpy(Media_Entry.media_entry, "D:\\Backup");
    Media_List.target.media = &Media_Entry;

    // Tell The User That The Backup Process Is Being Started
    cout << "Backing up the SAMPLE database..." << endl;
    // ...

// Include The Appropriate Header Files
#include <iostream.h>
#include <stdlib.h>
#include <string.h>
#include <sqlutil.h>
#include <sqlenv.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long BackupDB();
        long ReadLogFile();
};

// Define The BackupDB() Member Function
long API_Class::BackupDB()
{
    // Declare The Local Memory Variables
    struct sqlu_media_list  Media_List;
    struct sqlu_media_entry Media_Entry;
    char                    ApplicationID[33];
    char                    Timestamp[27];

    // Initialize The Media List Information Data Structure
    Media_List.media_type = SQLU_LOCAL_MEDIA;
    Media_List.sessions = 1;
    strcpy(Media_Entry.media_entry, "D:\\Backup");
    Media_List.target.media = &Media_Entry;

    // Tell The User That The Backup Process Is Being Started
    cout << "Backing up the SAMPLE database..." << endl;
    // ...
Description
The OPEN RECOVERY HISTORY FILE SCAN function is used to store a copy of selected records retrieved from a database recovery history file in memory and to return the number of records found in the recovery history file that meet the selection criteria specified to the calling application. The copy of the recovery history file records placed in memory represents a snapshot of the recovery history file at the time the recovery history file scan is opened. This copy is never updated, even if the recovery history file itself changes. This function is normally followed by one or more GET NEXT RECOVERY HISTORY FILE ENTRY function calls and one CLOSE RECOVERY HISTORY FILE SCAN function call. Together, these three functions work like an SQL cursor (i.e., they use the OPEN/FETCH/CLOSE paradigm). The memory buffer that is used to store the recovery history file records obtained by the recovery history file scan is automatically allocated by DB2, and a pointer to that buffer (the buffer identifier) is stored in the Handle parameter. This identifier is then used by subsequent GET NEXT RECOVERY HISTORY FILE ENTRY and CLOSE RECOVERY HISTORY FILE SCAN function calls to access the information stored in the memory buffer area.

Comments
- The values specified in the TimeStamp, ObjectName, and CallerAction parameters are combined to define the selection criteria that filters the records in the recovery history file. Only records that meet the specified selection criteria are copied to the memory storage buffer.
- If the TimeStamp parameter is set to NULL (or to the address of a local variable that contains the value 0), time stamp information will not be a part of the recovery history file record (entry) selection criteria.
- If the ObjectName parameter is set to NULL (or to the address of a local variable that contains the value 0), the object name will not be a part of the recovery history file record (entry) selection criteria.
- The filtering effect of the ObjectName parameter depends on the type of object name specified:
  - If a table name is specified, only records for loads can be retrieved, because this is the only information kept for tables in the recovery history file.
  - If a table space name is specified, all records can be retrieved.
- If the ObjectName parameter refers to a database table name, the fully qualified table name must be specified.
- If both the TimeStamp parameter and the ObjectName parameter are set to NULL and the CallerAction parameter is set to SQLUH_LIST_HISTORY, every record found in the recovery history file will be copied to the memory storage buffer.
- An application can have up to eight recovery history file scans open at one time.

Connection Requirements
This function can only be called if a connection to a DB2 Database Manager instance exists. In order to open a recovery history file scan for a database at another node, you must first attach to that node. If necessary, this function can establish a temporary attachment to a DB2 Database Manager instance while it is executing.

Authorization
No authorization is required to execute this function call.

See Also
GET NEXT RECOVERY HISTORY FILE ENTRY, CLOSE RECOVERY HISTORY FILE SCAN, UPDATE RECOVERY HISTORY FILE, PRUNE RECOVERY HISTORY FILE

Example
The following C++ program illustrates how the OPEN RECOVERY HISTORY FILE SCAN, GET NEXT RECOVERY HISTORY FILE ENTRY, and CLOSE RECOVERY HISTORY FILE SCAN functions are used to retrieve records from the SAMPLE database's recovery history file:
/* NAME:    CH10EX9.CPP                                           */
/* PURPOSE: Illustrate How To Use The Following DB2 API Functions */
/*          In A C++ Program:                                     */
/*                                                                */
/*          OPEN RECOVERY HISTORY FILE SCAN                       */
/*          GET NEXT RECOVERY HISTORY FILE ENTRY                  */
/*          CLOSE RECOVERY HISTORY FILE SCAN                      */

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <stdlib.h>
#include <sqlutil.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;
    // Operations
    public:
        long ShowHistory();
};

// Define The ShowHistory() Member Function
long API_Class::ShowHistory()
{
    // Declare The Local Memory Variables
    unsigned short   Handle;
    short            Size;
    unsigned short   NumRows;
    struct sqluhinfo *HistoryInfo;
    struct sqluhadm  AdminInfo;

    // Open The Recovery History File Scan
    sqluhops("SAMPLE", NULL, NULL, &NumRows, &Handle,
             SQLUH_LIST_HISTORY, NULL, &sqlca);

    // Allocate Memory For A Recovery History File Record Using
    // The Three Default Table Spaces
    HistoryInfo = (struct sqluhinfo *) malloc(SQLUHINFOSIZE(3));
    HistoryInfo->sqln = 3;

    // Scan The Recovery History File Buffer And Retrieve The
    // Information Stored There
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        cout << "Recovery history file entries :" << endl;
        // ...
The PRUNE RECOVERY HISTORY FILE function is used to remove one or more records from a database's recovery history file. When records in a recovery history file are deleted, the actual backup images and load copy files that the records refer to remain untouched. The application that calls this function must manually delete these files to free up the disk storage space they consume.
Comments
If the latest full database backup records need to be pruned from a recovery history file (and their corresponding files need to be deleted from the media (disk storage) where they are stored), the user must ensure that all table spaces, including the system catalog table space and all user table spaces on which the database resides, are backed up first. Failure to back up these table spaces may result in a database that cannot be recovered, or in the loss of some portion of the user data previously stored in the database.
Connection “ h i s function can only be calledif a database connection exists. In order to delete Requirements records in the recovery history file for a database other than thedefault database, an application must first establish a connection to that database before callingthis function is called. Authorization Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL)authority, System Maintenance(SYS”l3 authority, or Database Administrator (DBADM)authority are allowed to execute this function call. See Also
OPEN RECOVERY HISTORY FILE SCAN, GET NEXT RECOVERY HISTORY FILE ENTRY, CLOSE RECOVERY HISTORY FILE SCAN, UPDATE RECOVERY HISTORY FILE
Example
The following C++ program illustrates how to use the PRUNE RECOVERY HISTORY FILE function to remove records from the SAMPLE database's recovery history file:
/*
** NAME:     CH10EX11.SQC
** PURPOSE:  Illustrate How To Use The Following DB2 API Function
**           In A C++ Program:
**
**                PRUNE RECOVERY HISTORY FILE
*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <stdlib.h>
#include <sqlutil.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long ShowHistory();
};

// Define The ShowHistory() Member Function
long API_Class::ShowHistory()
{
    // Declare The Local Memory Variables
    unsigned short     Handle;
    unsigned short     NumRows;
    struct sqluhinfo   *HistoryInfo;

    // Open The Recovery History File Scan
    sqluhops("SAMPLE", NULL, NULL, &NumRows, &Handle,
             SQLUH_LIST_ADM_HISTORY, NULL, &sqlca);

    // Allocate Memory For A Recovery History File Record Using
    // The Three Default Table Spaces
    HistoryInfo = (struct sqluhinfo *) malloc(SQLUHINFOSIZE(3));
    HistoryInfo->sqln = 3;

    // Scan The Recovery History File Buffer And Retrieve The
    // Information Stored There
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        cout << NumRows << " recovery history records found." << endl;
        // The PRUNE RECOVERY HISTORY FILE call would follow here; its exact
        // argument list could not be recovered from the source.
    }

    free(HistoryInfo);
    return(sqlca.sqlcode);
}
Description
The EXPORT function is used to export (copy) data from a database to an external file. The data to be copied is specified by a SELECT SQL statement and can be written to an external file in one of three internal formats:
■ Delimited ASCII
■ Lotus Worksheet
■ PC Integrated Exchange Format (IXF)

NOTE: IXF is the preferred format to use when exporting data from a table. Files created in this format can later be imported or loaded much more easily into the same table or into another database table.

Three special structures (sqldcol, sqlu_media_list, and sqlchar) are used to pass general information to the DB2 Export utility when this function is called. An additional structure, sqluexpt_out, is used to determine how many records were actually copied to an external file by the Export utility. The first of these structures, sqldcol, is defined in sqlutil.h as follows:
struct sqldcol
{
    short           dcolmeth;      /* A value indicating the method to use to   */
                                   /* select and name columns within the data   */
                                   /* file                                      */
    short           dcolnum;       /* The number of columns specified in the    */
                                   /* dcolname array                            */
    struct sqldcoln dcolname[1];   /* A pointer to an array of sqldcoln         */
                                   /* structures that contain a list of column  */
                                   /* names                                     */
};
This structure contains a pointer to an array of sqldcoln structures that are used to build a list of column names that are to be written to the external file during the Export process. The sqldcoln structure is defined in sqlutil.h as follows:

struct sqldcoln
{
    short dcolnlen;    /* The size of the data element pointed to     */
                       /* by the dcolnptr field                       */
    char  *dcolnptr;   /* A pointer to a location in memory where     */
                       /* the data element specified by the dcolmeth  */
                       /* field of the sqldcol structure is stored    */
};
The second special structure used by this function, sqluexpt_out, is used to obtain a count of the number of records that were written to the external file after the Export operation is completed. The sqluexpt_out structure is defined in sqlutil.h as follows:

struct sqluexpt_out
{
    unsigned long sizeOfStruct;    /* The size of the sqluexpt_out structure */
    unsigned long rowsExported;    /* The number of records copied from the  */
                                   /* database to the target file            */
};
Another structure, the sqlu_media_list structure, is used to describe the type of media that the external file is to be written to. Refer to the BACKUP DATABASE function in Chapter 10 for a detailed description of the sqlu_media_list structure and for more information about how it is initialized.
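Because the sqldcol and sqldcoln structures must be populated by hand, a short sketch may help. The snippet below builds a two-column name list (SQL_METH_N) that could be passed to the Export utility in the DataDescriptor parameter; it is a minimal sketch only, and the column names and the amount of trailing storage allocated for the dcolname array are illustrative assumptions.

#include <string.h>
#include <stdlib.h>
#include <sqlutil.h>

// Build an sqldcol descriptor that names two output columns explicitly
// (SQL_METH_N). The column names used here are only examples.
static struct sqldcol *BuildColumnList(void)
{
    static char *names[] = { "DEPTNUMB", "DEPTNAME" };
    int          count   = 2;

    // Allocate the base structure plus one extra sqldcoln element
    // (the dcolname[] array already holds one element).
    struct sqldcol *desc = (struct sqldcol *)
        malloc(sizeof(struct sqldcol) + (count - 1) * sizeof(struct sqldcoln));

    desc->dcolmeth = SQL_METH_N;          // columns are identified by name
    desc->dcolnum  = count;
    for (int i = 0; i < count; i++)
    {
        desc->dcolname[i].dcolnptr = names[i];
        desc->dcolname[i].dcolnlen = (short) strlen(names[i]);
    }
    return desc;                          // caller passes this as DataDescriptor
}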
Comments
■ If a list of local paths that identify where LOB data files are to be stored is specified in the LOBPathList parameter, LOB data will be written to the first path in this list until file space is exhausted, then to the second path, and so on.
■ When LOB data files are created during an Export operation, DB2 constructs the file names by combining the current base name in the list of base LOB file names specified in the LOBFileList parameter with the current path (obtained from the list of paths provided in the LOBPathList parameter), and then appending a three-digit sequence number to it. For example, if the current LOB path is the directory /usr/local/LOB/empdata and the current base LOB file name is resume, then the LOB files produced will be named /usr/local/LOB/empdata/resume.001, /usr/local/LOB/empdata/resume.002, and so on.
■ The dcolmeth field of the sqldcol structure specified in the DataDescriptor parameter defines how column names are to be provided for the exported data file. This field can be set to either of the following values:
  ■ SQL_METH_N: Specifies that column names in the external file are provided via the sqldcol structure.
  ■ SQL_METH_D (or NULL): Specifies that column names in the external file are to be derived from the SELECT statement specified in the SelectStatement parameter (the column names specified in the SELECT statement become the names of the columns in the external file).
■ If the DataDescriptor parameter is set to NULL, or if the dcolmeth field of the sqldcol structure specified in the DataDescriptor parameter is set to SQL_METH_D, the dcolnum and dcolname fields of the sqldcol structure are ignored.
■ A warning message is issued whenever the number of columns specified in the external column name array (stored in the DataDescriptor parameter) does not equal the number of columns generated by the SELECT SQL statement used to retrieve the data from the database. When these numbers do not match, the number of columns written to the external file is the lesser of the two numbers; excess database columns or external file column names are not used to generate the external file.
■ The sqlchar structure specified in the SelectStatement parameter must contain a valid dynamic SELECT SQL statement. This statement specifies how data is to be extracted from the database and written to the external file. The columns for the external file (specified in the DataDescriptor parameter) and the database columns returned by the SELECT statement are matched according to their respective list/structure positions. When this function is executed, the SELECT statement is passed to the database for processing, and the first column of data retrieved is placed in the first column of the external file, the second column retrieved is placed in the second column, and so on.
■ A warning message is issued whenever a character column with a length greater than 254 is selected as part of the data that is to be exported to a delimited ASCII (SQL_DEL) file.
■ If the MsgFileName parameter contains the path and the name of a file that already exists, the existing file will be overwritten when this function is executed. If this parameter contains the path and the name of a file that does not exist, a new file will be created. Messages placed in the external message file include information returned from the message retrieval service; each message begins on a new line.
■ The CallerAction parameter must be set to SQLU_INITIAL the first time this function is called.
■ All table operations must be completed and all locks must be released before this function is called. You can accomplish this by issuing either a ROLLBACK or a COMMIT SQL statement after closing all cursors that were opened with the WITH HOLD option. One or more COMMIT SQL statements are automatically issued during the export process.
■ You can use delimited ASCII format files to exchange data with many other Database Manager and File Manager programs.
■ If character data containing row separators is exported to a delimited ASCII (SQL_DEL) file that is later processed by a text transfer program, fields that contain row separators will either shrink or expand in size.
■ Use the PC/IXF (SQL_IXF) file format when exporting data to files that will be imported into other databases, because PC/IXF file format specifications permit the migration of data between different DB2 products. You can perform data migration by executing the following steps:
  1. Export the data from one database to a file.
  2. Binary copy the file between operating systems. (This step is not necessary if the source and target databases are both accessible from the same workstation.)
  3. Import the data from the file into the other database.
■ You can use DB2 Connect to export tables from DRDA servers such as DB2 for OS/390, DB2 for VSE, and DB2 for OS/400. In this case, only the PC/IXF file format is supported.
■ Index definitions and NOT NULL WITH DEFAULT attributes for a table are included in PC/IXF format files when a SELECT * FROM statement is specified in the SelectStatement parameter and the DataDescriptor parameter is set so that default column names are generated. Indexes are not saved if the SELECT statement specified in the SelectStatement parameter contains a join, or if the SELECT statement references views. WHERE, GROUP BY, and HAVING clauses do not affect the saving of indexes.
■ The EXPORT utility cannot create multiple-part PC/IXF format files when executed on an AIX system.
■ The data field of the sqlchar structure specified in the FileTypeMod parameter can contain any of the following values:

    "lobsinfile"    "dldel"          "nodoubledel"
    "coldel"        "chardel"        "decpt"
    "decplusblank"  "datesiso"       "1"
    "3"             "4"              "L"
    "S"

■ These values provide additional information about the chosen file format. Only a portion of these values are used with a particular file format. If the FileTypeMod parameter is set to NULL, or if the length field of the sqlchar structure is set to 0, default information is provided for the file format specified.
■ If data is being exported to either a delimited ASCII (SQL_DEL) or PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can specify where LOB data is to be stored. If this parameter is set to "lobsinfile", LOB data will be stored in separate files; otherwise, all LOB data will be truncated to 32KB and stored in the exported file. When "lobsinfile" is specified for PC/IXF files, the original length of the LOB data is lost, and the LOB file length is stored in the exported file. If the IMPORT function is later used to import the file, and if the CREATE option is specified, the LOB value created will be 267 bytes in size.
■ If data is exported to a delimited ASCII (SQL_DEL) format file, the FileTypeMod parameter can be used to specify characters that override the following options:

Datalink delimiters            By default, the inter-field separator for a DATALINK value is a semicolon (;). Specifying "dldel" followed by a character will cause the specified character to be used in place of a semicolon as the inter-field separator. The character specified must be different from the characters used as row, column, and character string delimiters.
Double character delimiters    Specifying "nodoubledel" will cause recognition of all double-byte character delimiters to be suppressed.
Column delimiters              By default, columns are delimited with commas. Specifying "coldel" followed by a character will cause the specified character to be used in place of a comma to signal the end of a column.
Character string delimiters    By default, character strings are delimited with double quotation marks. Specifying "chardel" followed by a character will cause the specified character to be used in place of double quotation marks to enclose a character string.
Decimal point characters       By default, decimal points are specified with periods. Specifying "decpt" followed by a character will cause the specified character to be used in place of a period as a decimal point character.
Plus sign character            By default, positive decimal values are prefixed with a plus sign. Specifying "decplusblank" will cause positive decimal values to be prefixed with a blank space instead of a plus sign.
Date format                    Specifying "datesiso" will cause all date data values to be exported in ISO format.
■ If two or more delimiters are specified, they must be separated by blank spaces. Blank spaces cannot be used as delimiters. Each delimiter character specified must be different from all other delimiter characters being used, so it can be uniquely identified. Table 11-3 lists the characters that can be used as delimiter overrides.
■ If data is being exported to a worksheet (SQL_WSF) format file, the FileTypeMod parameter can be used to specify which release (version) of Lotus 1-2-3 or Lotus Symphony the file is compatible with (only one product designator can be specified for a worksheet format file):

1    Causes a worksheet format file that is compatible with Lotus 1-2-3 Release 1 or Lotus 1-2-3 Release 1a to be created. This is the default version used if no version is specified.
2    Causes a worksheet format file that is compatible with Lotus Symphony Release 1.0 to be created.
3    Causes a worksheet format file that is compatible with Lotus 1-2-3 Version 2 or Lotus Symphony Release 1.1 to be created.
4    Causes a worksheet format file that contains DBCS characters to be created.
L    Causes a worksheet format file that is compatible with Lotus 1-2-3 Version 2 to be created.
S    Causes a worksheet format file that is compatible with Lotus Symphony Release 1.1 to be created.

■ This function will not issue a warning if you attempt to specify options that are not supported by the file type specified in the FileType parameter. Instead, this function will fail, and an error will be generated.
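The delimiter overrides described above are passed to the Export utility as the data portion of an sqlchar structure in the FileTypeMod parameter. The sketch below builds such a structure for a semicolon column delimiter and a single-quote character string delimiter; it is a minimal sketch, and the helper name and the exact override string are illustrative assumptions.

#include <string.h>
#include <stdlib.h>
#include <sqlutil.h>

// Build a FileTypeMod value that overrides the column delimiter (;) and
// the character string delimiter (') for a delimited ASCII export.
// The helper name and the override string used here are examples only.
static struct sqlchar *BuildFileTypeMod(void)
{
    const char *mods = "coldel; chardel'";   // two overrides, separated by a blank

    struct sqlchar *typeMod = (struct sqlchar *)
        malloc(strlen(mods) + sizeof(struct sqlchar));
    typeMod->length = (short) strlen(mods);
    strncpy(typeMod->data, mods, strlen(mods));
    return typeMod;                          // pass as the FileTypeMod parameter
}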
Table 11-3  Delimiter Characters for Use with Delimited ASCII Files

Character   ASCII (decimal)   ASCII (hex)   Description
"           34                0x22          Double quotation marks
%           37                0x25          Percent sign
&           38                0x26          Ampersand
'           39                0x27          Apostrophe
(           40                0x28          Left parenthesis
)           41                0x29          Right parenthesis
*           42                0x2A          Asterisk
,           44                0x2C          Comma
.           46                0x2E          Period (not valid as a character string delimiter)
/           47                0x2F          Slash or forward slash
:           58                0x3A          Colon
;           59                0x3B          Semicolon
>           62                0x3E          Greater-than sign
?           63                0x3F          Question mark
_           95                0x5F          Underscore (valid only in single-byte character systems)
|           124               0x7C          Vertical bar

These characters are the same for all code page values. Adapted from IBM's DB2 Universal Database Command Reference, Table 6, pp. 166 and 167.
■ If any of the bind files (particularly db2uexpm.bnd) that are shipped with DB2 have to be manually bound to a database, do not use the FORMAT option during the bind operation. If you do, this function will not work correctly.
Connection Requirements  This function can only be called if a connection to a database exists.
Authorization  Only users with System Administrator (SYSADM) authority, Database Administrator (DBADM) authority, or CONTROL or SELECT authority for each table and/or view specified can execute this function call.
See Also
IMPORT, LOAD
Example
The following C++ program illustrates how to use the EXPORT function to copy data from the DEPARTMENT table in the SAMPLE database to a PC/IXF formatted external file:
/*
** NAME:     CH11EX1.SQC
** PURPOSE:  Illustrate How To Use The Following DB2 API Function
**           In A C++ Program:
**
**                EXPORT
*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <stdlib.h>
#include <sqlutil.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long ExportData();
};

// Define The ExportData() Member Function
long API_Class::ExportData()
{
    // Declare The Local Memory Variables
    char                  DataFileName[80];
    char                  MsgFileName[80];
    char                  String[80];
    struct sqlchar        *SelectString;
    struct sqluexpt_out   OutputInfo;

    // Initialize The Local Variables
    strcpy(DataFileName, "C:\\DEPT.IXF");
    strcpy(MsgFileName, "C:\\EXP_MSG.DAT");
    OutputInfo.sizeOfStruct = SQLUEXPT_OUT_SIZE;

    // Define The SELECT Statement That Will Be Used To Select The
    // Data To Be Exported
    strcpy(String, "SELECT * FROM DEPARTMENT");
    SelectString = (struct sqlchar *) malloc(strlen(String) +
        sizeof(struct sqlchar));
    strncpy(SelectString->data, String, strlen(String));
    SelectString->length = strlen(String);

    // Export The Data To An IXF Format File
    sqluexpr(DataFileName, NULL, NULL, NULL, SelectString, SQL_IXF,
        NULL, MsgFileName, SQLU_INITIAL, &OutputInfo, NULL, &sqlca);

    // If The Data Was Exported Successfully, Display A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << OutputInfo.rowsExported << " rows exported." << endl;

    // Free The Allocated Memory And Return
    free(SelectString);
    return(sqlca.sqlcode);
}
DataFileName
A pointer to a location in memory where the name of the external file that data is to be imported from is stored.
LOBPathList
A pointer to an sqlu_media_list structure that contains a list of paths on the client workstation that identify where LOB data files are stored.
DataDescriptor
A pointer to an sqldcol structure that identifies (via the dcolmeth field) the columns in the external file that are to be selected from the external file.
ActionString
A pointer to an sqlchar structure that contains a statement that identifies the action to take when importing data into tables that already contain data.
FileType
A pointer to a location in memory where a string that specifies the format of the external data file is stored. This parameter must be set to one of the following values: SQL_ASC (data in the external file is stored in non-delimited ASCII format), SQL_DEL (delimited ASCII format), SQL_WSF (Lotus worksheet format), or SQL_IXF (PC Integrated Exchange Format).
FileTypeMod
A pointer to an sqlchar structure that contains additional information unique to the file format specified in the FileType parameter (refer to the Comments section).
MsgFileName
A pointer to a location in memory where the name of the file that IMPORT error, warning, and informational messages are to be written to is stored.
CallerAction
Specifies the action that this function is to take when it executes. This parameter must be set to one of the following values:
  ■ SQLU_INITIAL: The IMPORT operation is to be started.
  ■ SQLU_CONTINUE: The IMPORT operation is to be continued after the user has performed some action that was requested by the Import utility (for example, inserting a diskette or mounting a new tape).
  ■ SQLU_TERMINATE: The IMPORT operation is to be terminated because the user failed to perform some action that was requested by the Import utility.
ImportInfoIn
A pointer to a sqluimpt_in structure that contains information about the number of records to skip and the number of records to retrieve before committing changes to the database.
ImportInfoOut
A pointer to a sqluimpt-out structure where this function is to store summary information aboutthe IMPORT operation.
NullZndicators
A pointer to an array of integers that indicates whether or not each column of data retrieved can contain NULL values. This parameter is only used if the FileType parameter is set to SQL_ASC.
Reserved
A pointer that is currently reservedfor later use. For now, this parameter must always beset to NULL.
SQLCA
A pointer to a location in memorywhere a SQL Communications Area (SQLCA)data structurevariable is stored. This variable returns either status information (ifthe function executed successfully)or error information (ifthe function failed)to the calling application.
Includes
#include <sqlutil.h>
Description
The IMPORT function is used to copy data stored in an external file (of a supported file format) into a database table or view. Data can be imported from any file that uses one of the following internal file formats:
■ Delimited ASCII
■ Non-delimited ASCII
■ Lotus Worksheet
■ PC Integrated Exchange Format (IXF)

NOTE: IXF is the preferred format to use when exporting data from and importing data to a DB2 database table.
Three special structures (sqldcol, sqlu_media_list, and sqlchar) are used to pass general information to the DB2 Import utility when this function is called. Refer to the EXPORT function for a detailed description of the sqldcol structure, and refer to the BACKUP DATABASE function for a detailed description of the sqlu_media_list structure. A special structure (the sqlloctab structure) may also be used by the sqldcol structure (if the dcolmeth field is set to SQL_METH_L) when this function is executed. The sqlloctab structure is defined in sqlutil.h as follows:

struct sqlloctab
{
    struct sqllocpair locpair[1];    /* A pointer to an array of sqllocpair   */
                                     /* structures that contains a list of    */
                                     /* starting and ending column positions  */
};
This structure contains a pointer to an array of sqllocpair structures that are used to build a list of starting and ending column positions that identify how data is stored in an external file. The sqllocpair structure is defined in sqlutil.h as follows:

struct sqllocpair
{
    short begin_loc;    /* The starting position of the column   */
                        /* data in the external file             */
    short end_loc;      /* The ending position of the column     */
                        /* data in the external file             */
};
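To make the position-pair mechanism concrete, the sketch below builds a two-column location descriptor (SQL_METH_L) for a fixed-format ASCII file in which the first column occupies bytes 1 through 6 and the second occupies bytes 8 through 37. It is a minimal sketch only; the byte positions and the helper name are illustrative assumptions, and how dcolnlen should be set for SQL_METH_L is also assumed.

#include <stdlib.h>
#include <sqlutil.h>

// Build an sqldcol descriptor that selects two columns from a fixed-format
// ASC file by byte position (SQL_METH_L). Positions used here are examples.
static struct sqldcol *BuildLocationList(void)
{
    int pairs = 2;

    // Allocate room for two sqllocpair elements.
    struct sqlloctab *locations = (struct sqlloctab *)
        malloc(pairs * sizeof(struct sqllocpair));
    locations->locpair[0].begin_loc = 1;     // first column: bytes 1-6
    locations->locpair[0].end_loc   = 6;
    locations->locpair[1].begin_loc = 8;     // second column: bytes 8-37
    locations->locpair[1].end_loc   = 37;

    struct sqldcol *desc = (struct sqldcol *) malloc(sizeof(struct sqldcol));
    desc->dcolmeth = SQL_METH_L;                     // columns identified by position
    desc->dcolnum  = pairs;                          // number of location pairs
    desc->dcolname[0].dcolnptr = (char *) locations; // first element points to sqlloctab
    desc->dcolname[0].dcolnlen =                     // (length usage for METH_L assumed)
        (short) (pairs * sizeof(struct sqllocpair));
    return desc;                                     // caller passes this as DataDescriptor
}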
Two additional structures, sqluimpt_in and sqluimpt_out, are used to pass IMPORT-specific information to, and receive IMPORT-specific information from, the DB2 Import facility when this function is called. The first of these structures, sqluimpt_in, passes information about when data is to be committed to the database and is defined in sqlutil.h as follows:

struct sqluimpt_in
{
    unsigned long sizeOfStruct;    /* The size of the sqluimpt_in structure       */
    unsigned long commitcnt;       /* The number of records to import before a    */
                                   /* COMMIT SQL statement is executed. A COMMIT  */
                                   /* statement is executed each time this number */
                                   /* of records are imported, to make the        */
                                   /* additions permanent.                        */
    unsigned long restartcnt;      /* The number of records to skip in the file   */
                                   /* before starting the Import process. This    */
                                   /* field can be used if a previous attempt to  */
                                   /* import records failed after n rows of data  */
                                   /* were already committed to the database.     */
};
The second of these structures, sqluimpt_out, is used to return statistical information about the import operation to the application after all data has been copied to the table. The sqluimpt_out structure is defined in sqlutil.h as follows:

struct sqluimpt_out
{
    unsigned long sizeOfStruct;     /* The size of the sqluimpt_out structure    */
    unsigned long rowsRead;         /* The number of records read from the       */
                                    /* external file                             */
    unsigned long rowsSkipped;      /* The number of records skipped before the  */
                                    /* import process was started                */
    unsigned long rowsInserted;     /* The number of rows inserted into the      */
                                    /* specified database table                  */
    unsigned long rowsUpdated;      /* The number of rows updated in the         */
                                    /* specified table. Indicates the number of  */
                                    /* records in the file that have matching    */
                                    /* primary key values in the table.          */
    unsigned long rowsRejected;     /* The number of records in the file that,   */
                                    /* for some reason, could not be imported    */
    unsigned long rowsCommitted;    /* The number of rows successfully imported  */
                                    /* and committed                             */
};
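The commitcnt and restartcnt fields work together when a large import has to be restarted. The sketch below shows one way they might be set after a failed run; the record counts used are illustrative assumptions, and the remaining sqluimpr() arguments are omitted for brevity.

#include <sqlutil.h>

// Resume an import that failed earlier: the rowsCommitted value reported by
// the failed run becomes the new restartcnt, so those records are skipped,
// and a COMMIT is taken every 1,000 rows from then on.
static void PrepareRestart(struct sqluimpt_in *in, unsigned long alreadyCommitted)
{
    in->sizeOfStruct = SQLUIMPT_IN_SIZE;   // size constant from sqlutil.h
    in->commitcnt    = 1000;               // commit every 1,000 imported rows
    in->restartcnt   = alreadyCommitted;   // skip rows committed by the failed run
}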
NOTE: Data that has minor incompatibility problems will usually be accepted by the Import facility (for example, you can import character data by using padding or truncation, and numeric data by using a different numeric data type). Data that has major incompatibility problems will be rejected.
Comments
■ The dcolmeth field of the sqldcol structure specified in the DataDescriptor parameter defines how columns are to be selected for import from the external data file. This field can be set to any of the following values:
  ■ SQL_METH_N: Specifies that column names provided in the sqldcol structure identify the data that is to be imported from the external file. This method cannot be used if the external file is in delimited ASCII format.
  ■ SQL_METH_P: Specifies that starting column positions provided in the sqldcol structure identify the data that is to be imported from the external file. This method cannot be used if the external file is in delimited ASCII format.
  ■ SQL_METH_L: Specifies that starting and ending column positions provided in the sqldcol structure identify the data that is to be imported from the external file. This is the only method that can be used if the external file is in non-delimited ASCII format.
  ■ SQL_METH_D: Specifies that the first column in the external file is to be imported into the first column of the table, the second column in the external file into the second column of the table, and so on.
■ If the DataDescriptor parameter is set to NULL, or if the dcolmeth field of the sqldcol structure specified in the DataDescriptor parameter is set to SQL_METH_D, the dcolnum and dcolname fields of the sqldcol structure are ignored.
■ If the dcolmeth field of the sqldcol structure in the DataDescriptor parameter is set to SQL_METH_N, the dcolnptr pointer of each element of the dcolname array must point to a string, dcolnlen characters in length, that contains the name of a valid column in the external file that is to be imported.
■ If the dcolmeth field of the sqldcol structure in the DataDescriptor parameter is set to SQL_METH_P, the dcolnptr pointer of each element of the dcolname array is ignored, and the dcolnlen field of each element of the dcolname array must contain a valid column position in the external file that is to be imported. The lowest column (byte) position value that can be specified is 1 (indicating the first column or byte), and the largest column (byte) position value that can be specified is determined by the number of bytes contained in one row of data in the external file.
■ If the dcolmeth field of the sqldcol structure in the DataDescriptor parameter is set to SQL_METH_L, the dcolnptr pointer of the first element of the dcolname array points to an sqlloctab structure that consists of an array of sqllocpair structures. The number of elements in this array must be stored in the dcolnum field of the sqldcol structure. Each element in this array must contain a pair of integer values that indicates the positions in the file where a column begins and ends: the first integer value is the byte position (in a row) in the file where the column begins, and the second integer value is the byte position (in the same row) where the column ends. The first byte position value that can be specified is 1 (indicating the first byte in a row of data), and the largest byte position value that can be specified is determined by the number of bytes contained in one row of data in the external file. Columns defined by starting and ending byte positions can overlap.
■ If the dcolmeth field of the sqldcol structure in the DataDescriptor parameter is set to SQL_METH_L, the DB2 Database Manager will reject an IMPORT call if a location pair is invalid because of any of the following conditions:
  - Either the beginning or the ending location specified is not valid.
  - The ending location value is smaller than the beginning location value.
  - The input column width defined by the beginning/end location pair is not compatible with the data type and length of the target database table column.
■ Beginning/end location pairs that have both values set to 0 indicate that a column is nullable and that it is to be filled with NULL values.
■ If the DataDescriptor parameter is set to NULL, or if the dcolmeth field of the sqldcol structure specified in the DataDescriptor parameter is set to SQL_METH_D, the first n columns (where n is the number of database columns into which the data is to be imported) of data found in the external file will be imported in their natural order.
■ Columns in external files can be specified more than once, but anything that is not a valid specification of an external column (i.e., a name, position, location, or default) will cause an error to be generated. Not every column found in an external file has to be imported.
■ The SQL statement specified in the ActionString parameter must be in the following format:

    [Action] INTO {TableName [(ColumnName, ...)] |
        [ALL TABLES] HIERARCHY {STARTING SubTableName | (SubTableName, ...)}
        [UNDER SubTableName | AS ROOT TABLE]}

where:

Action          Specifies how the data is to be imported into the database table. The action can be any of the following values:

                INSERT            Specifies that imported data rows are to be added to a table that already exists in the database, and that any data previously stored in the table should not be changed.
                INSERT_UPDATE     Specifies that imported data rows are to be added to a table if their primary keys do not match existing table data, and that they are to be used to update data in a table if matching primary keys are found. This option is only valid when the target table has a primary key and when the specified (or implied) list of target columns being imported includes all columns for the primary key. This action cannot be applied to views.
                REPLACE           Specifies that all existing data in a table is to be deleted before data is imported. When existing data is deleted, table and index definitions remain undisturbed unless otherwise specified (indexes are deleted and replaced if the FileTypeMod parameter is set to "indexixf" and the FileType parameter is set to SQL_IXF). If the table is not already defined, an error will be returned. If an error occurs after existing data is deleted, that data will be lost and can only be recovered if the database was backed up before the IMPORT function was called.
                CREATE            Specifies that if the table does not already exist, it will be created using the table definition stored in the specified PC/IXF format data file. If the PC/IXF file was exported from a DB2 database, indexes will also be created. If the specified table name is already defined, an error will be returned. This action is only valid for PC/IXF format files.
                REPLACE_CREATE    Specifies that if the table already exists, any data previously stored in it will be replaced with the data imported from the PC/IXF format file. If the table does not already exist, it will be created using the table definition stored in the specified PC/IXF format data file; if the PC/IXF file was exported from a DB2 database, indexes will also be created when the table is created. This action is only valid for PC/IXF format files. If an error occurs after existing data is deleted from the table, that data will be lost and can only be recovered if the database was backed up before the IMPORT function was called.

TableName       Specifies the name of the table or updatable view that the data is to be inserted into. An alias name can be used if the REPLACE, INSERT_UPDATE, or INSERT action is specified, except in the case of a down-level server; in that case, a table name (either qualified or unqualified) should always be used.

ColumnName      Specifies one or more column names within the table or view into which data from the external file is to be inserted. Commas must separate each column name in this list. If no column names are specified, the column names defined for the table will be used.

ALL TABLES      Specifies that all tables listed in the traversal order list are to be imported when importing a hierarchy.

HIERARCHY       Specifies that hierarchical data is to be imported.

STARTING        Specifies that the default order of a hierarchy, starting at a given sub-table name, is to be used when importing hierarchical data.

UNDER           Specifies that the new hierarchy, sub-hierarchy, or sub-table is to be created under a given sub-table.

SubTableName    Specifies the parent table to use when creating one or more sub-tables in a hierarchy.

AS ROOT TABLE   Specifies that the new hierarchy, sub-hierarchy, or sub-table is to be created as a stand-alone hierarchy.
The TableName and ColumnName list parameters correspond to the table name and column list parameters of the INSERT SQL statement that is used to import the data, and they have the same restrictions.
The columns in the ColumnName list and the columns (either specified or implied) in the external file are matched according to their position in the list or in the sqldcol structure (data from the first column specified in the sqldcol structure is inserted into the table or view column that corresponds to the first element of the ColumnName list). If unequal numbers of columns are specified, the number of columns actually processed is the lesser of the two numbers. This situation could cause an error message (because there might not be values to place in some NOT NULL table columns) or an informational message (because some external file columns are ignored) to be generated.
■ If the MsgFileName parameter contains the path and the name of a file that already exists, the existing file will be overwritten when this function is executed.
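Because the action string is passed as an sqlchar structure rather than a plain C string, a short sketch of building one may be useful. The INSERT_UPDATE action and the column list used below are illustrative assumptions; any of the actions described above could be substituted.

#include <string.h>
#include <stdlib.h>
#include <sqlutil.h>

// Build an ActionString that adds new rows and updates existing ones,
// matching on the table's primary key (action and columns are examples).
static struct sqlchar *BuildActionString(void)
{
    const char *action =
        "INSERT_UPDATE INTO DEPARTMENT (DEPTNO, DEPTNAME, ADMRDEPT)";

    struct sqlchar *str = (struct sqlchar *)
        malloc(strlen(action) + sizeof(struct sqlchar));
    str->length = (short) strlen(action);
    strncpy(str->data, action, strlen(action));
    return str;                       // pass as the ActionString parameter
}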
■ The CallerAction parameter must be set to SQLU_INITIAL the first time this function is called. The caller-action repeat call facility (SQLU_CONTINUE and SQLU_TERMINATE) provides support for importing data from diskettes.
■ The number of elements in the NullIndicators array must match the number of columns being imported.
■ Importing is faster when the input file resides on a hard drive, rather than on diskettes, and importing to a remote database is slower when a non-default list of target table columns is specified.
■ Importing to a remote database requires enough disk space on the server for a copy of the input data file, the output message file, and the potential growth in the size of the database.
■ If a system failure occurs, or if an application is interrupted before the import operation is completed, part or all of the data may already have been committed to the database. If it is likely that subsequent INSERT statements will fail and that there is potential for database damage, an error message will be written to the message file, and processing will stop.
■ Data from external files cannot be imported into system catalog tables.
■ Views cannot be created with a CREATE import.
■ REPLACE and REPLACE_CREATE imports cannot be performed on object tables that have other dependents (other than themselves), or on object views whose base tables have other dependents (including themselves). To replace such a table or view, perform the following steps:
  1. Drop all foreign keys in which the table is a parent.
  2. Execute the IMPORT function.
  3. Alter the table to re-create the foreign keys. If an error occurs while re-creating the foreign keys, modify the data so that it maintains referential integrity.
■ Referential constraints and key definitions are not preserved when tables are created (CREATE action) from PC/IXF (SQL_IXF) format files.
■ You can use the IMPORT function to recover a previously exported table if the PC/IXF (SQL_IXF) format was used. When the IMPORT function is executed, the table returns to the state it was in when the table was exported. This operation is similar to, but distinct from, a backup and restore operation.
■ The data field of the sqlchar structure specified in the FileTypeMod parameter can contain any of the following values:

    "lobsinfile"      "no_type_id"      "usedefaults"
    "nodefaults"      "compound=x"      "noeofchar"
    "reclen=x"        "dldel"           "nodoubledel"
    "coldel"          "chardel"         "decpt"
    "decplusblank"    "datesiso"        "implieddecimal"
    "nullindchar=x"   "striptblanks"    "striptnulls"
    "defer_import"    "forcein"         "indexixf"
    "indexschema=schema"                "nochecklengths"
■ These values provide additional information about the chosen file format. Only a portion of these values are used with a particular file format. If the FileTypeMod parameter is set to NULL, or if the length field of the sqlchar structure is set to 0, default information is provided for the file format specified.
■ If the FileTypeMod parameter is set to "no_type_id", all imported data will be converted into a single sub-table.
■ If the source column in an external file is not explicitly specified, and if the table column that the data is to be loaded into is not nullable, the FileTypeMod parameter can be set to "nodefaults" to keep default values from being substituted. Otherwise, a default value is substituted if the table column is nullable and a default value exists; a NULL is substituted if the table column is nullable and no default exists; or an error occurs if the table column is not nullable.
■ If the source column in an external file has been explicitly specified, and if it does not contain data for one or more rows, the FileTypeMod parameter can be set to "usedefaults" to ensure that default values are substituted. Otherwise, a NULL is substituted if the table column is nullable, or the row is not loaded if the table column is not nullable.
■ If data is being imported from either a delimited ASCII (SQL_DEL) or PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can specify where LOB data is stored. If this parameter is set to "lobsinfile", LOB data is stored in separate external files; otherwise, all LOB data is assumed to be truncated to 32KB and stored in the same file.
■ If the FileTypeMod parameter is set to "lobsinfile" and the CREATE option is used, the original LOB length is lost, and the LOB value stored in the database is truncated to 32KB.
■ If the FileTypeMod parameter is set to "compound=x" (where x is any number between 1 and 100, or 7 on DOS/Windows platforms), nonatomic compound SQL is used to insert the imported data (i.e., x statements will be processed as a single compound SQL statement).
■ If data is being imported from a delimited ASCII (SQL_DEL) format file, the FileTypeMod parameter can be set to "noeofchar" to specify that the optional end-of-file character (0x1A) is not to be recognized as the end-of-file character. If this option is set, the end-of-file character (0x1A) is treated as a normal character.
■ If data is being imported from a delimited ASCII (SQL_DEL) format file, the FileTypeMod parameter can be set to "reclen=xxxx" (where xxxx is a number no larger than 32767) to specify that xxxx characters are to be read in for each row. In this case, a new-line character does not indicate the end of a row.
■ If data is being imported from a delimited ASCII (SQL_DEL) format file, you can use the FileTypeMod parameter to specify characters that override the following options:

Datalink delimiters            By default, the inter-field separator for a DATALINK value is a semicolon (;). Specifying "dldel" followed by a character will cause the specified character to be used in place of a semicolon as the inter-field separator. The character specified must be different from the characters used as row, column, and character string delimiters.
Double character delimiters    Specifying "nodoubledel" will cause recognition of all double-byte character delimiters to be suppressed.
Column delimiters              By default, columns are delimited with commas. Specifying "coldel" followed by a character will cause the specified character to be used in place of a comma to signal the end of a column.
Character string delimiters    By default, character strings are delimited with double quotation marks. Specifying "chardel" followed by a character will cause the specified character to be used in place of double quotation marks to enclose a character string.
Decimal point characters       By default, decimal points are specified with periods. Specifying "decpt" followed by a character will cause the specified character to be used in place of a period as a decimal point character.
Plus sign character            By default, positive decimal values are prefixed with a plus sign. Specifying "decplusblank" will cause positive decimal values to be prefixed with a blank space instead of a plus sign.
Date format                    Specifying "datesiso" will cause all date values to be imported in ISO format.
■ If two or more delimiters are specified, they must be separated by blank spaces. Blank spaces cannot be used as delimiters. Each delimiter character specified must be different from the delimiter characters already being used, so all delimiters can be uniquely identified. Table 11-3 (refer to the EXPORT function) lists the characters that can be used as delimiter overrides.
■ If data is being imported from an ASCII (SQL_DEL or SQL_ASC) format file, the FileTypeMod parameter can be set to "implieddecimal" to specify that the location of an implied decimal point is to be determined by the table column definition (for example, if the value 12345 were to be loaded into a DECIMAL(8,2) column, it would be loaded as 123.45, not as 12345.00, which would otherwise be the case).
■ If data is being imported from a non-delimited ASCII (SQL_ASC) format file, the FileTypeMod parameter can be set to "nullindchar=x" (where x equals a character) to specify that a NULL value is to be replaced with a specific character. The character specified is case-sensitive for EBCDIC data files, except when the character is an English character.
■ If data is being imported from a non-delimited ASCII (SQL_ASC) format file, the FileTypeMod parameter can be set to "striptblanks" to specify that trailing blank spaces (after the last nonblank character) are to be removed (truncated) when data is imported. If this option is not set, trailing blanks are kept.
■ If data is being imported from a non-delimited ASCII (SQL_ASC) format file, the FileTypeMod parameter can be set to "striptnulls" to specify that trailing NULLs (0x00 characters, after the last nonblank character) are to be removed (truncated) when data is imported into variable-length fields. If this option is not set, trailing NULLs are kept.
■ The "striptblanks" and "striptnulls" options are mutually exclusive. If one is specified, the other cannot be used.
■ If data is being imported from a PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can be set to "defer_import" to specify that the tables/sub-tables stored in the file are to be created, but the data is not to be imported. This setting can only be used with a CREATE import.
■ If data is being imported from a PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can be set to "forcein" to tell the Import utility to accept data in spite of code page mismatches and to suppress all translations between code pages.
■ If data is being imported from a PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can be set to "nochecklengths" to specify that checking to ensure that fixed-length target fields are large enough to hold the imported data is not to be performed. This option is used in conjunction with the "forcein" option.
■ If data is being imported from a PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can be set to "indexixf" to tell the Import utility to drop all indexes currently defined on the existing table and create new ones from the index definitions found in the PC/IXF format file being imported. This option can only be used when the contents of a table are being replaced; it cannot be used with a view.
■ If data is being imported from a PC/IXF (SQL_IXF) format file, the FileTypeMod parameter can be set to "indexschema=schema" (where schema is a valid schema name) to indicate that the specified schema is to be used for the index name whenever indexes are created. If no schema is specified, the authorization ID that was used to establish the current database connection will be used as the default schema.
■ This function will not issue a warning if you attempt to specify options that are not supported by the file type specified in the FileType parameter. Instead, this function will fail, and an error will be generated.
■ If data is being imported from a WSF (SQL_WSF) format file, the FileTypeMod parameter is ignored.
■ The LOAD function is a faster alternative to the IMPORT function.
Connection Requirements  This function can only be called if a connection to a database exists.
Authorization  Only users with System Administrator (SYSADM) authority, Database Administrator (DBADM) authority, or CONTROL, INSERT, or SELECT authority for the specified table or view can execute this function with the INSERT action (ActionString parameter) specified. Only users with SYSADM authority, DBADM authority, or CONTROL authority for the specified table or view can execute this function with the INSERT_UPDATE, REPLACE, or REPLACE_CREATE action (ActionString parameter) specified. Only users with SYSADM authority, DBADM authority, or CREATETAB authority for the database can execute this function with the CREATE or the REPLACE_CREATE action (ActionString parameter) specified.
See Also
EXPORT, LOAD
Example
The following C++ program illustrates how to use the IMPORT function to insert data from an external file into the DEPARTMENT table of the SAMPLE database:
/*
** NAME:     CH11EX2.SQC
** PURPOSE:  Illustrate How To Use The Following DB2 API Function
**           In A C++ Program:
**
**                IMPORT
*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <stdlib.h>
#include <sqlutil.h>
#include <sql.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long ImportData();
};

// Define The ImportData() Member Function
long API_Class::ImportData()
{
    // Declare The Local Memory Variables
    char                   DataFileName[80];
    char                   MsgFileName[80];
    char                   String[80];
    struct sqlchar         *ActionString;
    struct sqluimpt_in     InputInfo;
    struct sqluimpt_out    OutputInfo;

    // Initialize The Local Variables
    strcpy(DataFileName, "C:\\DEPT.IXF");
    strcpy(MsgFileName, "C:\\IMP_MSG.DAT");

    // Initialize The Import Input Structure
    InputInfo.sizeOfStruct = SQLUIMPT_IN_SIZE;
    InputInfo.commitcnt = 20;
    InputInfo.restartcnt = 0;

    // Initialize The Import Output Structure
    OutputInfo.sizeOfStruct = SQLUIMPT_OUT_SIZE;

    // Define The Action String That Will Be Used To Control How
    // Data Is Imported
    strcpy(String, "REPLACE INTO DEPARTMENT");
    ActionString = (struct sqlchar *) malloc(strlen(String) +
        sizeof(struct sqlchar));
    ActionString->length = strlen(String);
    strncpy(ActionString->data, String, strlen(String));

    // Import Data Into The DEPARTMENT Table From An IXF Format
    // File (This File Was Created By The EXPORT Example)
    sqluimpr(DataFileName, NULL, NULL, ActionString, SQL_IXF, NULL,
        MsgFileName, SQLU_INITIAL, &InputInfo, &OutputInfo, NULL,
        NULL, &sqlca);

    // If The Data Was Imported Successfully, Display A Success
    // Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << OutputInfo.rowsInserted << " rows imported." << endl;

    // Free The Allocated Memory And Return
    free(ActionString);
    return(sqlca.sqlcode);
}
Description
The ADD NODE function is used to add a new node to a parallel database system. When this function is called, database partitions are automatically created (on the new node) for each database that is currently defined in the MPP server instance, and the configuration parameters for each new database partition are set to the system default values. However, these partitions cannot be used to store user data until the ALTER NODEGROUP SQL statement has been used to add the new node to an existing nodegroup. This function uses a special structure, the sqle_addn_options structure, to specify information about the node (if any) in which the temporary table space definitions for all database partitions to be created are stored. The sqle_addn_options structure is defined in sqlenv.h as follows:

struct sqle_addn_options
{
    char sqladdid[8];                  /* An "eye catcher" value that is used to   */
                                       /* identify the structure. This field must  */
                                       /* be set to SQLE_ADDOPTID_V51.             */
    unsigned long tblspace_type;       /* Indicates that temporary table spaces    */
                                       /* should be the same as those found at the */
                                       /* specified node                           */
                                       /* (SQLE_TABLESPACES_LIKE_NODE), the same   */
                                       /* as those found at the catalog node of    */
                                       /* each database                            */
                                       /* (SQLE_TABLESPACES_LIKE_CATALOG), or not  */
                                       /* created at all (SQLE_TABLESPACES_NONE).  */
    SQL_PDB_NODE_TYPE tblspace_node;   /* Specifies the node number that table     */
                                       /* space definitions should be obtained     */
                                       /* from (provided the tblspace_type field   */
                                       /* is set to SQLE_TABLESPACES_LIKE_NODE).   */
                                       /* Note: The node number specified must     */
                                       /* exist in the db2nodes.cfg file.          */
};
Comments
■ This function must be called from the node that is to be added, and it can only be issued against an MPP server.
■ Before a new node can be added, sufficient disk space must exist for each storage container that will be created (for each existing database) on the system.
■ If an add node operation fails while creating a database partition locally, a clean-up phase is initiated, and all database partitions that have already been created are dropped (i.e., database partitions are removed from the node being added, the local node). If the clean-up phase is initiated, existing database partitions on other nodes are not affected.
■ If this function is called while a database creation (CREATE DATABASE) or a database deletion (DROP DATABASE) operation is in progress, an error will be returned.
■ If temporary table spaces are to be created within the database partitions that are automatically created when this function is called, this function may communicate with another node in the MPP system to retrieve existing table space definitions. In this case, the start_stop_time DB2 Database Manager configuration file parameter is used to specify the time, in minutes, in which the other node must respond. If this time is exceeded, an error will be returned.
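The example at the end of this section creates the new partitions without temporary table spaces (SQLE_TABLESPACES_NONE). The short sketch below shows the alternative of modeling them on another node's definitions, which is the case that triggers the start_stop_time behaviour described above; the node number used is an illustrative assumption, and sqleaddn is the assumed spelling of the ADD NODE API whose name is garbled in the example.

#include <string.h>
#include <sqlenv.h>
#include <sqlca.h>

// Add a node whose temporary table spaces are modeled on those of node 0.
// The node number and the API spelling (sqleaddn) are assumptions.
static long AddNodeLikeNode0(struct sqlca *ca)
{
    struct sqle_addn_options options;

    strncpy(options.sqladdid, SQLE_ADDOPTID_V51,
            sizeof(options.sqladdid));                 // structure "eye catcher"
    options.tblspace_type = SQLE_TABLESPACES_LIKE_NODE; // copy definitions from a node
    options.tblspace_node = 0;                           // ... specifically node 0

    sqleaddn(&options, ca);                              // ADD NODE API call
    return ca->sqlcode;
}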
Connection Requirements  This function can be called at any time; a connection to a DB2 Database Manager instance or to a DB2 database does not have to be established first.
Authorization
Only users with either System Administrator (SYSADM) authority or System Control (SYSCTRL) authority are allowed to execute this function call.
See Also
DROP NODE VERIFY
Example
The following C++ program illustrates how to use the ADD NODE function to add a new node to an MPP system:
/*
** NAME:     CH12EX1.CPP
** PURPOSE:  Illustrate How To Use The Following DB2 API Function
**           In A C++ Program:
**
**                ADD NODE
*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long AddNode();
};

// Define The AddNode() Member Function
long API_Class::AddNode()
{
    // Declare The Local Memory Variables
    struct sqle_addn_options NodeOptions;

    // Initialize The Add Node Options Structure
    strcpy(NodeOptions.sqladdid, SQLE_ADDOPTID_V51);
    NodeOptions.tblspace_type = SQLE_TABLESPACES_NONE;
    NodeOptions.tblspace_node = 0;

    // Add The New Node
    sqleaddn(&NodeOptions, &sqlca);

    // If The New Node Has Been Added, Display A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << "The new node has been added." << endl;

    return(sqlca.sqlcode);
}
The following fragment illustrates the DROP NODE VERIFY function:

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long CheckNode();
};

// Define The CheckNode() Member Function
long API_Class::CheckNode()
{
    // Declare The Local Memory Variables
    char Message[1024];

    // Determine Whether Or Not The Current Node Is Being Used By
    // A Database
    sqledrpn(SQL_DROPNODE_VERIFY, NULL, &sqlca);

    // Display The Message Returned
    sqlaintp(Message, 1024, 70, &sqlca);
    cout << Message << endl;

    return(sqlca.sqlcode);
}
The following fragment illustrates the SET RUNTIME DEGREE function:

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long SetRuntimeDegree();
};

// Define The SetRuntimeDegree() Member Function
long API_Class::SetRuntimeDegree()
{
    // Declare The Local Memory Variables
    char Instance[9];

    // Obtain The Current Value Of The DB2INSTANCE Environment
    // Variable
    sqlegins(Instance, &sqlca);
    Instance[8] = 0;

    // Attach To The Current DB2 Database Manager Instance
    sqleatin(Instance, "userid", "password", &sqlca);

    // Set The Maximum Runtime Degree Of Intra-Partition Parallelism
    // To Be Used To Process SQL Statements (By All Active
    // Applications)
    sqlesdeg(SQL_ALL_USERS, NULL, 16384, &sqlca);

    // If The Runtime Degree Was Set, Display A Success Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout << "The maximum runtime degree has been set." << endl;

    return(sqlca.sqlcode);
}
Description
"he REDISTRIBUTE NODEGROUP function is used to redistribute data across the nodes in a nodegroup. Whenthis function is called, a redistribution algorithm selectsany partitions that areto be moved according to how data i s currently distributed. Nodegroups that contain replicatedsummary tables or tables that have been defined with DATA CAPTURE CHANGES constraints cannot be redistributed.
Comments
■ This function can only be called from a database's catalog node. The GET NEXT DATABASE DIRECTORY ENTRY function can be used to determine which node is the catalog node for a database.
■ If a directory path is not included in the file name specified in the PartitionMapFile or DataDistributionFile parameter, this function assumes that the file is located in the current directory.
■ The partition map file referenced by the PartitionMapFile parameter must be in character format, and the file must contain either one entry (for a single-node nodegroup) or 4,096 entries (for a multi-node nodegroup). Each entry must identify a valid node number.
■ The data distribution file referenced by the DataDistributionFile parameter must be in character format and must contain 4,096 positive integer entries. Each entry must indicate the weight of the corresponding partition, and the sum of all entries should be less than or equal to 4,294,967,295.
■ If the CallerAction parameter is set to D, nodes listed in the AddNodeList parameter are added to the nodegroup, and nodes listed in the DropNodeList parameter are removed from the nodegroup during the data redistribution operation. Otherwise, these parameters, along with the AddNodeCount and DropNodeCount parameters, are ignored.
■ If the CallerAction parameter is set to U, the PartitionMapFile parameter should contain a NULL pointer. The DataDistributionFile parameter may or may not contain a NULL pointer.
■ If the CallerAction parameter is set to T, the DataDistributionFile, AddNodeList, and DropNodeList parameters should contain NULL pointers, the AddNodeCount and DropNodeCount parameters should be set to 0, and the PartitionMapFile parameter must contain a valid file reference.
■ If the CallerAction parameter is set to C or R, the PartitionMapFile, DataDistributionFile, AddNodeList, and DropNodeList parameters should contain NULL pointers, and the AddNodeCount and DropNodeCount parameters should be set to 0.
■ This function performs intermittent commits while executing.
■ The ALTER NODEGROUP SQL statement can be used to add nodes to a nodegroup.
NOTE:h'e!.!
ADD NODE and DROP NODE SQL statements thatwere provided in DB2 Parallel Edition forAIX Version l are supported for userswith SYSADM or SYSCTRL authority. When theADD NODE SQL statement is processed, containersare created like the containers found on the lowest node number of existing nodes within the nodegroup.
R When this function executes,all packages that have a dependency on a table that
has been redistributed are invalidated.Therefore, it is important to explicitly rebind all packages that were affected immediately aRera redistribute nodegroup operation has completed. Explicit rebindingeliminates the initial delay that will
5 Chapter 12: DB2 Database Partition Management Functions
i
503
' __
.-
result the first time an SQL request attempts to use an invalid package.It is also a good idea t o update table statistics for all tables that have beenredistributed. When a redistribution operation is performed, a message fileis written to:
  - The $HOME/sqllib/redist directory on UNIX-based systems, using the following format: database_name.nodegroup_name.timestamp, where timestamp is the time at which this function was called
  - The $HOME\sqllib\redist directory on other operating systems, using the following format: database_name\first_eight_characters_of_nodegroup_name\date\time, where date and time are the date and time at which this function was called

Connection Requirements: This function can only be called if a connection to a database exists.

Authorization: Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL) authority, or Database Administrator (DBADM) authority are allowed to execute this function call.
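As an illustration of the CallerAction values described above, the following sketch (not from the book) shows a targeted redistribution that supplies a partition map file with CallerAction set to T. It assumes the sqludrdt() entry point used in the example below, with the partition map file assumed to occupy the second argument position of the uniform-redistribution call shown there; the nodegroup and file names are illustrative only.

    #include <string.h>
    #include <sqlutil.h>
    #include <sqlca.h>

    // Redistribute a nodegroup according to a target partition map file.
    // The nodegroup name and file name are illustrative values.
    long RedistributeToTargetMap(struct sqlca *sqlca)
    {
        char NodeGroup[20];
        char MapFile[80];

        strcpy(NodeGroup, "IBMTEMPGROUP");
        strcpy(MapFile, "target.map");

        // CallerAction 'T': the partition map file must be valid, and the
        // data distribution file and the add/drop node lists must be NULL
        // with counts of zero
        sqludrdt(NodeGroup, MapFile, NULL, NULL, 0, NULL, 0, 'T', sqlca);

        return sqlca->sqlcode;
    }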
See Also
DROP NODE VERIFY, REBIND, RUNSTATS
Example
The following C++ program illustrates how to use the REDISTRIBUTE NODEGROUP function to redistribute the data in a nodegroup across its nodes:
/*-------------------------------------------------------------------*/
/* NAME:     CH12EX8.SQC                                              */
/* PURPOSE:  Illustrate How To Use The Following DB2 API Function     */
/*           In A C++ Program:                                        */
/*                                                                    */
/*           REDISTRIBUTE NODEGROUP                                   */
/*-------------------------------------------------------------------*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <string.h>
#include <sqlutil.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long RedistributeNodegroup();
};

// Define The RedistributeNodegroup() Member Function
long API_Class::RedistributeNodegroup()
{
    // Declare The Local Memory Variables
    char NodeGroup[20];

    // Initialize The Local Memory Variables
    strcpy(NodeGroup, "IBMTEMPGROUP");

    // Redistribute The Nodegroup Uniformly
    sqludrdt(NodeGroup, NULL, NULL, NULL, 0, NULL, 0, 'U', &sqlca);

    // If The Nodegroup Has Been Redistributed, Display A Success
    // Message
    if (sqlca.sqlcode == SQL_RC_OK)
        cout ...
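The listing breaks off at this point in the extraction. A minimal driver for the class above might look like the following sketch; the main() body shown here is illustrative only and is not the book's original code.

    // Illustrative driver (not part of the original listing)
    int main()
    {
        API_Class Example;

        // Invoke the member function defined above
        Example.RedistributeNodegroup();

        // Report the outcome stored in the class's SQLCA attribute
        if (Example.sqlca.sqlcode != SQL_RC_OK)
            cout << "Redistribution failed, SQLCODE = "
                 << Example.sqlca.sqlcode << endl;

        return 0;
    }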
Description
The GET/UPDATE MONITOR SWITCHES function is used to selectively turn various database monitor switches (associated with information groups that are to be monitored) on or off and to query the database monitor for a group monitoring switch's current state. This function uses an array of six sqlm_recording_group structures to retrieve and update database monitor switch values. The sqlm_recording_group structure is defined in sqlmon.h as follows:

struct sqlm_recording_group
{
    unsigned long input_state;     /* Indicates whether the specified        */
                                   /* information group monitoring switch    */
                                   /* should be turned on (SQLM_ON), turned  */
                                   /* off (SQLM_OFF), or left in its current */
                                   /* state (SQLM_HOLD)                      */
    unsigned long output_state;    /* The current state of the specified     */
                                   /* group monitoring switch. Indicates     */
                                   /* whether the specified information      */
                                   /* group monitoring switch is currently   */
                                   /* turned on (SQLM_ON) or turned off      */
                                   /* (SQLM_OFF).                            */
    sqlm_timestamp start_time;     /* The date and time that the specified   */
                                   /* group monitoring switch was turned on. */
                                   /* If the specified group monitoring      */
                                   /* switch is turned off, this field is    */
                                   /* set to 0.                              */
};
This structure contains a reference to an additional structure, sqlm_timestamp, that stores timestamp information about when a group monitoring switch was turned on. The sqlm_timestamp structure is defined in sqlmon.h as follows:

typedef struct sqlm_timestamp
{
    unsigned long seconds;      /* The date and time, expressed as the      */
                                /* number of seconds elapsed since          */
                                /* January 1, 1970 (GMT)                     */
    unsigned long microsec;     /* The number of microseconds, ranging from */
                                /* 0 to 999999, in the current second       */
} sqlm_timestamp;
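Because the start_time value is split into epoch seconds and microseconds, converting it to a readable form only takes the standard C library. The following is a minimal sketch; the function and variable names are illustrative, not part of the DB2 API.

    #include <time.h>
    #include <iostream.h>
    #include <sqlmon.h>

    // Print the time a group monitoring switch was turned on, given a
    // populated sqlm_recording_group element (illustrative helper)
    void PrintSwitchStartTime(const struct sqlm_recording_group &Group)
    {
        // A seconds value of 0 means the switch is currently turned off
        if (Group.start_time.seconds == 0)
        {
            cout << "Switch is turned off" << endl;
            return;
        }

        // Convert the epoch seconds to local time; ctime() appends a newline
        time_t Seconds = (time_t) Group.start_time.seconds;
        cout << "Switch turned on at " << ctime(&Seconds);
        cout << "  (+" << Group.start_time.microsec << " microseconds)" << endl;
    }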
An array of six sqlm_recording_group structures must be defined or allocated before this function is called. If this function is used to set the value of one or more group monitor switches, each corresponding structure in the array must also be initialized before this function is invoked. After this function executes, this array will contain information about the current state of each group monitor switch available. You can obtain information about a specific group monitor switch by indexing the array with one of the following symbolic values:

■ SQLM_UOW_SW
  References unit of work (transaction) group monitor switch information.
■ SQLM_STATEMENT_SW
  References SQL statement group monitor switch information.
■ SQLM_TABLE_SW
  References table group monitor switch information.
■ SQLM_BUFFER_POOL_SW
  References buffer pool group monitor switch information.
■ SQLM_LOCK_SW
  References lock group monitor switch information.
■ SQLM_SORT_SW
  References sort group monitor switch information.

Refer to the beginning of the chapter for more information about the database system monitor elements associated with each of these monitoring group switches.
Comments
■ If database monitor data is to be collected for earlier versions of DB2 (i.e., if the Version parameter is set to SQLM_DBMON_VERSION1), this function cannot be executed remotely.
■ You can use this function to query the current state of different information group switches without modifying them by specifying SQLM_HOLD for the input_state element of each structure in the GroupStates array.
■ If this function attempts to obtain database monitor data for a version that is higher than the current server version, only information that is valid for the server's level will be returned.
■ For detailed information on using the database system monitor, refer to the IBM DB2 Universal Database System Monitor Guide and Reference.

Connection Requirements: This function can only be called if a connection to a DB2 Database Manager instance exists. In order to obtain or set the database monitor switch settings for a remote instance (or for a different local instance), an application must first attach to that instance.

Authorization: Only users with System Administrator (SYSADM), System Control (SYSCTRL), or System Maintenance (SYSMAINT) authority can execute this function call.

See Also
ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE, GET SNAPSHOT, RESET MONITOR
Example
The following C++ program illustrates how to use the GET/UPDATE MONITOR SWITCHES function to retrieve and change the current values of the database monitor group switches:
/*-------------------------------------------------------------------*/
/* NAME:     CH13EX1.CPP                                              */
/* PURPOSE:  Illustrate How To Use The Following DB2 API Function     */
/*           In A C++ Program:                                        */
/*                                                                    */
/*           GET/UPDATE MONITOR SWITCHES                              */
/*-------------------------------------------------------------------*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <sqlmon.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long GetSetMonSwitches();
};

// Define The GetSetMonSwitches() Member Function
long API_Class::GetSetMonSwitches()
{
    // Declare The Local Memory Variables
    struct sqlm_recording_group GroupStates[6];

    // Initialize The Database Monitor Group States Array
    // (Turn The Table Switch On, The Unit Of Work Switch Off,
    // And Query The Settings Of The Other Switches)
    GroupStates[SQLM_UOW_SW].input_state = SQLM_OFF;
    GroupStates[SQLM_STATEMENT_SW].input_state = SQLM_HOLD;
    GroupStates[SQLM_TABLE_SW].input_state = SQLM_ON;
    GroupStates[SQLM_BUFFER_POOL_SW].input_state = SQLM_HOLD;
    GroupStates[SQLM_LOCK_SW].input_state = SQLM_HOLD;
    GroupStates[SQLM_SORT_SW].input_state = SQLM_HOLD;

    // Set/Query The Database Monitor Switches
    sqlmon(SQLM_DBMON_VERSION5, NULL, GroupStates, &sqlca);

    // If The Database Monitor Switches Were Set/Queried, Display
    // Their Current Values
    if (sqlca.sqlcode == SQL_RC_OK)
        cout ...
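The listing above breaks off at the final cout statement. As a complementary illustration, here is a minimal, self-contained sketch that only queries the current switch settings without changing them, using the sqlmon() call and the sqlmon.h constants shown above; the program structure here is illustrative, not the book's code.

    #include <iostream.h>
    #include <sql.h>
    #include <sqlmon.h>
    #include <sqlca.h>

    // Query (but do not change) the state of all six group monitor switches
    int main()
    {
        int i;
        struct sqlca sqlca;
        struct sqlm_recording_group GroupStates[6];

        // Ask for the current state of every switch by specifying SQLM_HOLD
        for (i = 0; i < 6; i++)
            GroupStates[i].input_state = SQLM_HOLD;

        // Query the database monitor switches
        sqlmon(SQLM_DBMON_VERSION5, NULL, GroupStates, &sqlca);

        // Display the state that came back for each switch
        if (sqlca.sqlcode == SQL_RC_OK)
            for (i = 0; i < 6; i++)
                cout << "Switch " << i << " is "
                     << (GroupStates[i].output_state == SQLM_ON ? "ON" : "OFF")
                     << endl;

        return 0;
    }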
Description
The GET SNAPSHOT function is used to retrieve specific DB2 database monitor information and copy it to a user-allocated data storage buffer. The database monitor information returned represents a snapshot of the DB2 Database Manager's operational status at the time this function was executed. Therefore, you can only update this information by re-executing the GET SNAPSHOT function call. A special structure (sqlma) is used to pass information about the list of objects that are to be monitored to the DB2 Database System Monitor when this function is called. Refer to the ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE function for a detailed description of this structure. After this function is executed, summary statistics about the snapshot information that was collected is stored in a sqlm_collected structure, which is defined in sqlmon.h as follows:

typedef struct sqlm_collected
{
    unsigned long size;                   /* The size of the sqlm_collected        */
                                          /* structure                             */
    unsigned long db2;                    /* Indicates whether DB2 Database        */
                                          /* Manager instance information was      */
                                          /* collected in the snapshot (1) or not  */
                                          /* (0). This field is obsolete in        */
                                          /* Version 5.0 and later.                */
    unsigned long databases;              /* The number of databases that snapshot */
                                          /* information was collected for. This   */
                                          /* field is obsolete in Version 5.0 and  */
                                          /* later.                                */
    unsigned long table_databases;        /* The number of databases that table    */
                                          /* snapshot information was collected    */
                                          /* for. This field is obsolete in        */
                                          /* Version 5.0 and later.                */
    unsigned long lock_databases;         /* The number of databases that locking  */
                                          /* snapshot information was collected    */
                                          /* for. This field is obsolete in        */
                                          /* Version 5.0 and later.                */
    unsigned long applications;           /* The number of applications that       */
                                          /* snapshot information was collected    */
                                          /* for. This field is obsolete in        */
                                          /* Version 5.0 and later.                */
    unsigned long applinfos;              /* The number of applications that       */
                                          /* summary information was collected     */
                                          /* for. This field is obsolete in        */
                                          /* Version 5.0 and later.                */
    unsigned long dcs_applinfos;          /* The number of applications that DCS   */
                                          /* summary information was collected     */
                                          /* for. This field is obsolete in        */
                                          /* Version 5.0 and later.                */
    unsigned long server_db2_type;        /* The DB2 Database Manager server type  */
    sqlm_timestamp time_stamp;            /* The date and time the snapshot was    */
                                          /* taken                                 */
    sqlm_recording_group group_states[6]; /* The current state of the information  */
                                          /* group monitoring switches             */
    char server_prd_id[20];               /* The product name and version number   */
                                          /* of the DB2 Database Manager on the    */
                                          /* server workstation                    */
    char server_nname[20];                /* The workstation name stored in the    */
                                          /* nname parameter of the DB2 Database   */
                                          /* Manager configuration file on the     */
                                          /* server workstation                    */
    char server_instance_name[20];        /* The instance name of the DB2          */
                                          /* Database Manager                      */
    char reserved[22];                    /* Reserved for future use               */
    unsigned short node_number;           /* The number of the node that sent the  */
                                          /* snapshot information                  */
    long time_zone_disp;                  /* The difference, in seconds, between   */
                                          /* Greenwich Mean Time (GMT) and local   */
                                          /* time                                  */
    unsigned long num_top_level_structs;  /* The total number of high-level        */
                                          /* structures returned in the snapshot   */
                                          /* output buffer. This counter replaces  */
                                          /* the individual counters (i.e., fields */
                                          /* 2 through 8 of this structure), which */
                                          /* are obsolete in Version 5.0 and       */
                                          /* higher.                               */
    unsigned long tablespace_databases;   /* The number of databases for which     */
                                          /* tablespace snapshot information was   */
                                          /* collected                             */
    unsigned long server_version;         /* The version number of the server      */
                                          /* returning the snapshot data           */
} sqlm_collected;
This structure contains references to two additional structures (sqlm_recording_group and sqlm_timestamp) that are used to store information about the current state of specific group monitoring switches and the exact date and time a group monitoring switch was turned on. Refer to the GET/UPDATE MONITOR SWITCHES function for a detailed description of each of these structures. When snapshot information is collected, it is stored in a user-allocated buffer (whose address is stored in the Buffer parameter). Portions of this buffer must be typecast with special structures before the information collected can be extracted from it. For more information about these structures, refer to the IBM DB2 Universal Database System Monitor Guide and Reference.
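On a Version 5.0 or later server, the single num_top_level_structs field replaces the obsolete per-category counters when deciding how many top-level structures to walk in the snapshot buffer. The following sketch shows that choice; the fallback logic and the function name are illustrative assumptions, not part of the DB2 API.

    #include <sqlmon.h>

    // Determine how many high-level structures to walk in the snapshot
    // buffer. On a Version 5.0 or later server, num_top_level_structs
    // already holds the total; otherwise the obsolete counters are summed.
    unsigned long CountTopLevelStructs(const sqlm_collected &Collected)
    {
        if (Collected.num_top_level_structs > 0)
            return Collected.num_top_level_structs;

        return Collected.db2 + Collected.databases + Collected.table_databases +
               Collected.lock_databases + Collected.applications +
               Collected.applinfos + Collected.dcs_applinfos +
               Collected.tablespace_databases;
    }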
Comments
■ The obj_type field of each sqlm_obj_stmt structure used in the array of structures referenced by the SQLMA parameter must contain one of the following values:

  SQLMA_DB2
  DB2-related information is to be monitored.
  SQLMA_DBASE
  Database-related information is to be monitored.
  SQLMA_APPL
  Application information, organized by the application ID, is to be monitored.
  SQLMA_AGENT_ID
  Application information, organized by the agent ID, is to be monitored.
  SQLMA_DBASE_TABLES
  Table information for a database is to be monitored.
  SQLMA_DBASE_APPLS
  Application information for a database is to be monitored.
  SQLMA_DBASE_APPLINFO
  Summary application information for a database is to be monitored.
  SQLMA_DBASE_LOCKS
  Locking information for a database is to be monitored.
  SQLMA_DBASE_ALL
  Database-related information for all active databases in the Database Manager instance is to be monitored.
  SQLMA_APPL_ALL
  Application information for all active applications in the Database Manager instance is to be monitored.
  SQLMA_APPLINFO_ALL
  Summary application information for all active applications in the Database Manager instance is to be monitored.
  SQLMA_DCS_APPLINFO_ALL
  Summary DCS application information for all active applications in the Database Manager instance is to be monitored.

■ If database monitor data is to be collected for earlier versions of DB2 (i.e., if the Version parameter is set to SQLM_DBMON_VERSION2 or SQLM_DBMON_VERSION1), this function cannot be executed remotely.
■ You can determine the amount of memory needed to store the snapshot information returned by this function by calling the ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE function. If one specific object is being monitored, only the amount of memory needed to store the returned data structure for that object needs to be allocated. If the buffer storage area is not large enough to hold all the information returned by this function, a warning will be returned, and the information returned will be truncated to fit in the assigned storage buffer area. When this happens, you should resize the memory storage buffer and call this function again.
■ No snapshot data will be returned by a request for table information if any of the following conditions exist:
  - The TABLE recording switch is turned off.
  - No tables have been accessed since the TABLE recording switch was turned on.
  - No tables have been accessed since the last time the RESET MONITOR function was called.
■ If this function attempts to obtain database monitor data for a version that is higher than the current server version, only information that is valid for the server's level will be returned.
■ For detailed information about using the database system monitor, refer to the IBM DB2 Universal Database System Monitor Guide and Reference.

Connection Requirements: This function can only be called if a connection to a DB2 Database Manager instance exists. In order to obtain a snapshot of database monitor data settings for a remote instance (or for a different local instance), an application must first attach to that instance.

Authorization: Only users with System Administrator (SYSADM) authority, System Control (SYSCTRL) authority, or System Maintenance (SYSMAINT) authority are allowed to execute this function call.
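The resize-and-retry advice above can be expressed as a short loop. The sketch below assumes the sqlmonsz() and sqlmonss() entry points used in the example later in this section, and it treats any positive SQLCODE as the truncation warning; both the error handling and the doubling strategy are illustrative assumptions.

    #include <stdlib.h>
    #include <sql.h>
    #include <sqlmon.h>
    #include <sqlca.h>

    // Collect a snapshot, growing the buffer and retrying once if the data
    // returned was truncated (indicated here by a positive SQLCODE warning)
    char *CollectSnapshot(struct sqlma *sqlma, struct sqlm_collected *Collected,
                          struct sqlca *sqlca)
    {
        unsigned long BuffSize = 0;
        char *Buffer = NULL;

        // Ask the Database System Monitor how large the buffer should be
        sqlmonsz(SQLM_DBMON_VERSION5, NULL, sqlma, &BuffSize, sqlca);
        if (sqlca->sqlcode != SQL_RC_OK)
            return NULL;

        for (int Attempt = 0; Attempt < 2; Attempt++)
        {
            Buffer = (char *) malloc(BuffSize);
            if (Buffer == NULL)
                return NULL;

            // Take the snapshot into the buffer just allocated
            sqlmonss(SQLM_DBMON_VERSION5, NULL, sqlma, BuffSize, Buffer,
                     Collected, sqlca);

            // A positive SQLCODE is treated as "buffer too small" here; free
            // the buffer, double the size, and try one more time
            if (sqlca->sqlcode > 0)
            {
                free(Buffer);
                Buffer = NULL;
                BuffSize *= 2;
                continue;
            }
            break;
        }
        return Buffer;
    }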
See Also
GET/UPDATE MONITOR SWITCHES, ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE, RESET MONITOR

Example
The following C++ program illustrates how to use the GET SNAPSHOT function to collect snapshot information for the SAMPLE database and the DB2 Database Manager:
/*-------------------------------------------------------------------*/
/* NAME:     CH13EX3.CPP                                              */
/* PURPOSE:  Illustrate How To Use The Following DB2 API Functions    */
/*           In A C++ Program:                                        */
/*                                                                    */
/*           ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE             */
/*           GET SNAPSHOT                                             */
/*-------------------------------------------------------------------*/

// Include The Appropriate Header Files
#include <windows.h>
#include <iostream.h>
#include <stdlib.h>
#include <string.h>
#include <sqlmon.h>
#include <sqlca.h>

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long GetMonSnapshot();

    private:
        void Process_DB2_Info(struct sqlm_db2 *DB2Info);
        void Process_DBase_Info(struct sqlm_dbase *DBaseInfo);
};

// Define The GetMonSnapshot() Member Function
long API_Class::GetMonSnapshot()
{
    // Declare The Local Memory Variables
    char *Buffer;
    char *BufferIndex;
    unsigned long BuffSize;
    int NumStructs = 0;
    struct sqlma *sqlma;
    struct sqlm_collected Collected;
    struct sqlm_db2 *DB2Info;
    struct sqlm_dbase *DBaseInfo;

    // Specify The Data Monitors To Collect Information For
    sqlma = (struct sqlma *) malloc(SQLMASIZE(2));
    sqlma->obj_num = 2;
    sqlma->obj_var[0].obj_type = SQLMA_DB2;
    strcpy(sqlma->obj_var[0].object, "SAMPLE");
    sqlma->obj_var[1].obj_type = SQLMA_DBASE;
    strcpy(sqlma->obj_var[1].object, "SAMPLE");

    // Estimate The Size Of The Database Monitor Buffer Needed
    sqlmonsz(SQLM_DBMON_VERSION5, NULL, sqlma, &BuffSize, &sqlca);

    // If The Database Monitor Buffer Size Was Estimated, Allocate
    // Memory For It
    if (sqlca.sqlcode == SQL_RC_OK)
        Buffer = (char *) malloc(BuffSize);
    else
        goto EXIT;

    // Collect Monitor Snapshot Information
    sqlmonss(SQLM_DBMON_VERSION5, NULL, sqlma, BuffSize, Buffer,
             &Collected, &sqlca);

    // If The Snapshot Information Was Collected, Display It
    if (sqlca.sqlcode == SQL_RC_OK)
    {
        // Add All Structures Returned In The Buffer
        NumStructs = Collected.db2 + Collected.databases +
                     Collected.table_databases + Collected.lock_databases +
                     Collected.applications + Collected.applinfos +
                     Collected.dcs_applinfos + Collected.tablespace_databases;

        // Loop Until All Data Structures Have Been Processed
        for (BufferIndex = Buffer; NumStructs > 0; NumStructs--)
        {
            // Determine The Structure Type
            switch ((unsigned char) *(BufferIndex + 4))
            {
                // Display Select DB2 Information Collected
                case SQLM_DB2_SS:
                    DB2Info = (struct sqlm_db2 *) BufferIndex;
                    Process_DB2_Info(DB2Info);
                    BufferIndex += DB2Info->size;
                    break;
                // Display Select Database Information Collected
                case SQLM_DBASE_SS:
                    DBaseInfo = (struct sqlm_dbase *) BufferIndex;
                    Process_DBase_Info(DBaseInfo);
                    BufferIndex += DBaseInfo->size;
                    break;

                // If Anything Else Was Collected, Display An Error
                default:
                    cout ...

// Define The API_Class Class
class API_Class
{
    // Attributes
    public:
        struct sqlca sqlca;

    // Operations
    public:
        long CommitIDTransaction();
};

// Define The CommitIDTransaction() Member Function
long API_Class::CommitIDTransaction()
{
    // Declare The Local Memory Variables
    SQLXA_RECOVER *IndoubtTrans = NULL;
    long NumIDTrans;

    // Restart The MYDB1 Database
    cout << "Restarting the database. Please wait." ...
Description
The ROLLBACK AN INDOUBT TRANSACTION function is used to heuristically roll back an indoubt transaction. If this function is successfully executed, the specified transaction's state becomes "Heuristically Rolled Back." When a transaction is initiated, the transaction is assigned a unique XA identifier by the Transaction Manager (which is then used to globally identify the transaction). This unique XA identifier is used to specify which indoubt transaction this function is to roll back. Refer to the LIST DRDA INDOUBT TRANSACTIONS function for a detailed description of the XA identifier structure (sqlxa_xid_t).
Comments
■ The maximum value that can be specified for both the gtrid_length and the bqual_length fields of the sqlxa_xid_t structure is 64.
■ You can obtain sqlxa_xid_t structure information for a particular transaction by calling the LIST INDOUBT TRANSACTIONS function.
■ Only transactions with a status of "Prepared" or "Idle" can be placed in the "Heuristically Rolled Back" state.
■ The Database Manager remembers the state of an indoubt transaction, even after the transaction has been heuristically rolled back, unless the FORGET TRANSACTION STATUS function is executed.
Connection Requirements: This function can only be called if a connection to a database exists.

Authorization: Only users with either System Administrator (SYSADM) authority or Database Administrator (DBADM) authority are allowed to execute this function call.

See Also
COMMIT AN INDOUBT TRANSACTION, LIST INDOUBT TRANSACTIONS, FORGET TRANSACTION STATUS
Example
The following C++ program illustrates how to use the ROLLBACK AN INDOUBT TRANSACTION function to heuristically roll back an indoubt transaction (this example was created and tested on the AIX operating system):
/*-------------------------------------------------------------------*/
/* NAME:     CH13EX7.SQC                                              */
/* PURPOSE:  Illustrate How To Use The Following DB2 API Functions    */
/*           In A C++ Program:                                        */
/*                                                                    */
/*           ROLLBACK AN INDOUBT TRANSACTION                          */
/*           FORGET TRANSACTION STATUS                                */
/*                                                                    */
/* OTHER DB2 APIs SHOWN:                                              */
/*           LIST INDOUBT TRANSACTIONS                                */
/*-------------------------------------------------------------------*/

    // Operations
    public:
        long RollbackIDTransaction();
};

// Define The RollbackIDTransaction() Member Function
long API_Class::RollbackIDTransaction()
{
    // Declare The Local Memory Variables
    SQLXA_RECOVER *IndoubtTrans = NULL;

    // Restart The Database, Display ...
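Because the original listing is largely lost here, the following is a minimal sketch of the call sequence the surrounding text describes. It assumes the sqlxhrol() (ROLLBACK AN INDOUBT TRANSACTION) and sqlxhfrg() (FORGET TRANSACTION STATUS) entry points and an sqlxa_xid_t value already obtained from LIST INDOUBT TRANSACTIONS; the header name, entry-point signatures, and helper function are assumptions, and this is not the book's original code.

    #include <iostream.h>
    #include <sql.h>
    #include <sqlxa.h>
    #include <sqlca.h>

    // Heuristically roll back one indoubt transaction and then forget it,
    // given an XA identifier obtained earlier from the list of indoubt
    // transactions (an assumption; see the note above)
    long RollBackAndForget(sqlxa_xid_t *Xid)
    {
        struct sqlca sqlca;

        // Roll The Indoubt Transaction Back Heuristically
        sqlxhrol(Xid, &sqlca);
        if (sqlca.sqlcode != SQL_RC_OK)
        {
            cout << "Rollback failed, SQLCODE = " << sqlca.sqlcode << endl;
            return sqlca.sqlcode;
        }

        // Remove The Transaction's "Heuristically Rolled Back" Status
        sqlxhfrg(Xid, &sqlca);
        return sqlca.sqlcode;
    }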
// Include The Appropriate Header Files
#include <windows.h>
#include <process.h>        // _beginthread(), _endthread()
#include <stddef.h>
#include <stdlib.h>
#include <iostream.h>
#include <sql.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The Thread Function Prototypes
void CheckKey(void *Dummy);
void GetInstance(void *Count);

// Declare The Global Memory Variables
BOOL ContinueThread = TRUE;          // End Thread Flag
char Instance[9];                    // Current Instance

/*-------------------------------------------------------------------*/
/* The Main Function                                                  */
/*-------------------------------------------------------------------*/
int main()
{
    // Declare The Local Memory Variables
    long rc = SQL_RC_OK;
    char Counter = 0;

    // Set The Application Context Type To Normal (All Threads Will
    // Use The Same Context)
    sqleSetTypeCtx(SQL_CTX_ORIGINAL);

    // Launch The CheckKey() Thread To Check For A Terminating
    // Keystroke
    _beginthread(CheckKey, 0, NULL);

    // Loop Until The CheckKey() Thread Terminates The Program
    while (ContinueThread == TRUE)
    {
        // Launch Up To Ten GetInstance() Threads
        if (Counter < 10)
            _beginthread(GetInstance, 0, (void *) (Counter++));

        // Wait One Second Between Loop Passes
        Sleep(1000L);
    }

    // Display The Current Value Of The DB2INSTANCE Environment
    // Variable
    cout ...
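The CheckKey() and GetInstance() thread functions that these listings launch are not reproduced in the extracted text. The following is an illustrative sketch of what such functions might look like for the normal-context case above; it assumes the sqlegins() (GET INSTANCE) API and the getchar() call (which requires <stdio.h>, not among the includes above), and it is not the book's original code.

    #include <stdio.h>      // getchar(); an addition for this sketch only

    // Illustrative thread bodies (assumptions; not the original listing).

    // Wait for a single keystroke, then tell the main loop to stop
    void CheckKey(void *Dummy)
    {
        getchar();
        ContinueThread = FALSE;
        _endthread();
    }

    // Retrieve the current instance name into the global Instance buffer,
    // assuming the sqlegins() GET INSTANCE API and a local SQLCA
    void GetInstance(void *Count)
    {
        struct sqlca sqlca;

        sqlegins(Instance, &sqlca);
        _endthread();
    }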
// Include The Appropriate Header Files
#include <windows.h>
#include <process.h>        // _beginthread(), _endthread()
#include <stddef.h>
#include <stdlib.h>
#include <iostream.h>
#include <sql.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The Thread Function Prototypes
void CheckKey(void *Dummy);
void GetInstance(void *Count);

// Declare The Global Memory Variables
BOOL ContinueThread = TRUE;          // End Thread Flag
char Instance[9];                    // Current Instance
void *ContextData = NULL;            // Context Data Storage Area

/*-------------------------------------------------------------------*/
/* The Main Function                                                  */
/*-------------------------------------------------------------------*/
int main()
{
    // Declare The Local Memory Variables
    long rc = SQL_RC_OK;
    char Counter = 0;

    // Set The Application Thread Context Type To Manual
    sqleSetTypeCtx(SQL_CTX_MULTI_MANUAL);

    // Launch The CheckKey() Thread To Check For A Terminating
    // Keystroke
    _beginthread(CheckKey, 0, NULL);

    // Loop Until The CheckKey() Thread Terminates The Program
    while (ContinueThread == TRUE)
    {
        // Launch Up To Ten GetInstance() Threads
        if (Counter < 10)
            _beginthread(GetInstance, 0, (void *) (Counter++));

        // Wait One Second Between Loop Passes
        Sleep(1000L);
    }

    // Display The Current Value Of The DB2INSTANCE Environment
    // Variable
    cout ...
// Include The Appropriate Header Files
#include <windows.h>
#include <process.h>        // _beginthread(), _endthread()
#include <stddef.h>
#include <stdlib.h>
#include <iostream.h>
#include <sql.h>
#include <sqlenv.h>
#include <sqlca.h>

// Define The Thread Function Prototypes
void CheckKey(void *Dummy);
void GetInstance(void *Count);

// Declare The Global Memory Variables
BOOL ContinueThread = TRUE;          // End Thread Flag
char Instance[9];                    // Current Instance
void *ContextData = NULL;            // Context Data Storage Area

/*-------------------------------------------------------------------*/
/* The Main Function                                                  */
/*-------------------------------------------------------------------*/
int main()
{
    // Declare The Local Memory Variables
    long rc = SQL_RC_OK;
    char Counter = 0;

    // Set The Application Thread Context Type To Manual
    sqleSetTypeCtx(SQL_CTX_MULTI_MANUAL);

    // Launch The CheckKey() Thread To Check For A Terminating
    // Keystroke
    _beginthread(CheckKey, 0, NULL);

    // Loop Until The CheckKey() Thread Terminates The Program
    while (ContinueThread == TRUE)
    {
        // Launch Up To Ten GetInstance() Threads
        if (Counter < 10)
            _beginthread(GetInstance, 0, (void *) (Counter++));

        // Wait One Second Between Loop Passes
        Sleep(1000L);
    }
    // Display The Current Value Of The DB2INSTANCE Environment
    // Variable
    cout << "Current value of the DB2INSTANCE environment " ...

column in the SYSIBM.SYSINDEXES system table. If the index token is 0, the log record represents a create/drop action that was performed on an internal index (as opposed to a user index).
Offset  Size (Bytes)  C Data Type    Description
16      4             unsigned long  Index root page. This is an internal index identifier.

Total length of Create/Drop Index Log Record: 20 bytes
Adapted from Table 97 on page 537 of IBM DB2 Universal Database API Reference

This is an Undo log record.
Create/Drop Table, Rollback Create/Drop Table Log Record
The create/drop table, rollback create/drop table log record is written whenever a table is created or destroyed (dropped), or whenever a create/drop table operation is rolled back. The structure of the create/drop table, rollback create/drop table log record is shown in Table B-11.

Table B-11  Create/Drop Table, Rollback Create/Drop Table Log Record Structure
DBCLOB  The length of the fixed portion of all variable-length fields is 4.

Note: If the record is an internal control record, this information cannot be viewed.

Total length of Data Record Details: 4 bytes
Adapted from Table 93 on pages 533-535 of IBM DB2 Universal Database API Reference
An insert/delete record log record is a Normal log record. A rollback update/delete record log record is a Compensation log record.

The table descriptor record describes the column format of the table. It contains an array of column structures, whose elements represent field type, field length, null flag, and field offset. The latter is the offset, from the beginning of the formatted record, where the fixed-length portion of the field is located. For columns that are nullable (as specified by the null flag), an additional byte follows the fixed-length portion of the field. This byte contains one of the following values:

NOT NULL (0x00): There is a valid value in the fixed-length data portion of the record.
NULL (0x01): The data field value is NULL.

The formatted user data record contains the table data that is visible to the user. It is formatted as a fixed-length record, followed by a variable-length section. All variable field types have a 4-byte fixed data portion in the fixed-length section (plus a null flag, if the column is nullable). The first 2 bytes of the variable-length section represent the offset from the beginning of the fixed-length section, where the variable data is located. The next 2 bytes specify the length of the variable data referenced by the offset value.
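As an illustration of that layout, the following sketch reads the 2-byte offset and 2-byte length for one variable-length column and prints the data they reference. The VarEntryOffset parameter and the assumption that the fixed-length section begins at the start of the record are illustrative only and are not defined by the text above.

    #include <stdio.h>
    #include <string.h>

    // Decode one variable-length column value from a formatted user data
    // record: a 2-byte offset (relative to the start of the fixed-length
    // section) followed by a 2-byte length, as described above.
    void DecodeVarColumn(const char *Record, size_t VarEntryOffset)
    {
        unsigned short DataOffset;
        unsigned short DataLength;

        // Read the 2-byte offset and the 2-byte length for this column
        memcpy(&DataOffset, Record + VarEntryOffset, sizeof(DataOffset));
        memcpy(&DataLength, Record + VarEntryOffset + 2, sizeof(DataLength));

        // The offset is measured from the beginning of the fixed-length
        // section, assumed here to be the beginning of the record
        printf("Column data: %.*s\n", (int) DataLength, Record + DataOffset);
    }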
Update Record Log Record
The update record log record is written whenever a row is updated and its storage location is not affected. The structure of the update record log record is shown in Table B-17.

Table B-17  Update Record Log Record Structure

Offset     Size (Bytes)  C Data Type     Description
0          6                             DMS Log Record Header (See Table B-2)
6          2             char[2]         Padding
8          4             long            Record ID
12         2             unsigned short  New record length
14         2             unsigned short  Free space
16         2             unsigned short  Record offset
18         variable                      Old record header and data
variable   6                             DMS Log Record Header (See Table B-2)
variable   2             char[2]         Padding
variable   4             long            Record ID
variable   2             unsigned short  Old record length
variable   2             unsigned short  Free space
variable   2             unsigned short  Record offset
variable   variable                      New record header and data

Total length of Update Record Log Record: 36 + New record length + Old record length bytes
Adapted from Table 104 on page 543 of IBM DB2 Universal Database API Reference

This is a Normal log record.
Long Field Manager Log Records
Long Field Manager log records are generated whenever long field data is inserted, updated, or deleted, and only if a database's logretain and/or userexits configuration parameter has been turned on (enabled). To conserve log space, long field data inserted into tables is not logged if the database is configured for circular logging. In addition, when a long field value is updated, the "before" image is shadowed and not logged. Long Field Manager log records contain the header information shown in Table B-18, along with additional, record-specific information.

Table B-18  Long Field Manager Log Record Header Structure
Offset  Size (Bytes)  C Data Type     Description
0       1             unsigned char   Originator code (Always 3)
1       1             unsigned char   Operation type. The following operation type values are valid:
                                        110  Add long field record
                                        111  Delete long field record
                                        112  Non-update long field record
2       2             unsigned short  Pool identifier
4       2             unsigned short  Object identifier
6       2             unsigned short  Parent pool identifier (Pool ID of the data object)
8       2             unsigned short  Parent object identifier (Object ID of the data object)

Total length of Long Field Manager Log Record Header: 10 bytes
Adapted from Table 105 on page 544 of IBM DB2 Universal Database API Reference
Add/Delete/Non-Update Long Field Record Log Record Structure

Offset  Size (Bytes)  C Data Type  Description
0       10                         Long Field Manager Log Record Header (See Table B-18)
Large Object (LOB) Manager Log Record Header Structure

Offset  Size (Bytes)  C Data Type     Description
0       1             unsigned char   Originator code (Always 5)
1       1             unsigned char   Operation identifier
2       2             unsigned short  Pool identifier
4       2             unsigned short  Object identifier
6       2             unsigned short  Parent pool identifier
8       2             unsigned short  Parent object identifier
10      1             unsigned char   Object type

Total length of Large Object (LOB) Manager Log Record Header: 11 bytes
Insert LOB Data (Logging On) Log Record Structure

Offset  Size (Bytes)  C Data Type    Description
0       11                           LOB Manager Log Record Header (See Table B-20)
11      1             char           Padding
12      4             unsigned long  Data length
16      8             double         Byte address in object
24      variable                     LOB data

Total length of Insert LOB Data (Logging On) Log Record: 24 + Data length bytes
Adapted from Table 108 on page 546 of IBM DB2 Universal Database API Reference
Insert LOB Data (Logging Off) Log Record
The insert LOB data (logging off) log record is written whenever LOB data is inserted into a LOB column, or appended to existing data in a LOB column (and logging of the data has been turned off). The structure of the insert LOB data (logging off) log record is shown in Table B-22.

Table B-22  Insert LOB Data (Logging Off) Log Record Structure

Offset  Size (Bytes)  C Data Type    Description
0       11                           LOB Manager Log Record Header (See Table B-20)
11      1             char           Padding
12      4             unsigned long  Data length
16      8             double         Byte address in object

Total length of Insert LOB Data (Logging Off) Log Record: 24 bytes
Adapted from Table 109 on page 546 of IBM DB2 Universal Database API Reference
Transaction Manager Log Records
The Transaction Manager generates log records that signify the completion of transaction events (i.e., commits and rollbacks). The timestamp values stored in these log records are in Coordinated Universal Time (CUT) format and mark the time, in seconds, since January 1, 1970.

Normal Commit Log Record
The normal commit log record is written whenever a COMMIT operation is performed. A COMMIT operation can occur when:

■ The COMMIT SQL statement is executed
■ An implicit COMMIT is performed during a CONNECT RESET operation

The structure of the normal commit log record is shown in Table B-23.
Offset  Size (Bytes)  C Data Type    Description
0       20                           Log Record Header (See Table B-1)
20      4             unsigned long  Time transaction committed
24      9             char[9]        Authorization ID of the application (if the log record is marked as propagatable)

Total length of Normal Commit Log Record:
  Propagatable: 33 bytes
  Non-propagatable: 24 bytes
Adapted from Table 110 on pages 546-547 of IBM DB2 Universal Database API Reference

This log record is written for XA transactions in a single-node environment, or on the coordinator node in a multi-node environment.
Heuristic Commit Log Record
The heuristic commit log record is written whenever an indoubt transaction is committed. The structure of the heuristic commit log record is shown in Table B-24.

Table B-24  Heuristic Commit Log Record Structure

Description
Datalink Manager Log Record Header (See Table B-44)
Stem name length
Stem name

Total length of Unlink File Log Record: 32 + Stem name length bytes
Adapted from Table 132 on page 556 of IBM DB2 Universal Database API Reference
One unlink file log record is written for each link that is dropped. This is an Undo log record.
Delete Group Log Record
The delete group log record is written whenever a table containing one or more DATALINK columns (that have the file link control attribute) is dropped. The structure of the delete group log record is shown in Table B-47.

Table B-47  Delete Group Log Record Structure
Offset  Size (Bytes)  C Data Type  Description
0       10                         Datalink Manager Log Record Header (See Table B-44)
4       4             long         Server ID
8       7             char[7]      Recovery ID
15      1             char[1]      Padding
16      17            char[17]     Group ID
33      3             char[3]      Padding

Total length of Delete Group Log Record: 36 bytes
Adapted from Table 133 on page 556 of IBM DB2 Universal Database API Reference
One delete group log record is written for each DATALINK column for each DataLinks File Manager (DLFM) configured in the datalinks configuration file. A log record is only written for a given DLFM if that DLFM has the group defined on it when the table is dropped. This is an Undo log record.
Delete PGroup Log Record
The delete pgroup log record is written whenever a table space containing one or more DATALINK columns (that have the file link control attribute) is dropped. The structure of the delete pgroup log record is shown in Table B-48.

Table B-48  Delete PGroup Log Record Structure
Offset  Size (Bytes)  C Data Type     Description
0       10                            Datalink Manager Log Record Header (See Table B-44)
4       4             long            Server ID
8       6             SQLU_LSN        Pool log life sequence number
14      2             unsigned short  Pool ID

The SQLU_LSN data type is defined as:

typedef union
{
    char  lsnChar[6];
    short lsnWord[3];
} SQLU_LSN;
DLFM Prepare Log Record
The DLFM prepare log record is written to mark the preparation of a transaction. The structure of the DLFM prepare log record is shown in the following table.

DLFM Prepare Log Record Structure

Offset  Size (Bytes)  C Data Type  Description
0       10                         Datalink Manager Log Record Header (See Table B-44)
Appendix C: How the Example Programs Were Developed

The example programs shown in this book were developed on the Windows NT operating system, using the Visual C++ 6.0 Developer Studio.

To establish communications between a client workstation and a DB2 Universal Database server, perform the following steps (after installing the DB2 Client Application Enabler (CAE) and the DB2 Universal Database Software Developer's Kit (SDK) software):

1. Invoke the Client Configuration Assistant from the DB2 for Windows NT Programs menu. If no database connections have been defined, the Welcome panel shown in Figure C-1 is displayed.

Figure C-1  The DB2 Client Configuration Assistant Welcome Panel

When a selection has been made, press the Next push button to move to the next page of the Add Database SmartGuide.
Figure C-2  The Source page of the DB2 Add Database SmartGuide

Figure C-3  The Target Database page of the DB2 Add Database SmartGuide
5. Once the database is selected, press the Next push button to move to the Alias page of the Add Database SmartGuide and enter a database alias and description (see Figure C-4).

Figure C-4  The Alias page of the DB2 Add Database SmartGuide

6. When an alias and a description have been entered, press the Next push button to move to the ODBC page of the Add Database SmartGuide and specify how the database is to be registered with ODBC (see Figure C-5).

7. Finally, press the Done push button to complete the configuration setup. If everything has been entered correctly, the Confirmation dialog shown in Figure C-6 will be displayed; you can test the connection to make sure it is working properly by pressing the Test Connection push button (see Figure C-6).
Figure C-5  The ODBC page of the DB2 Add Database SmartGuide
Figure C-6  The configuration confirmation dialog

Testing The Connection

8. After the Test Connection push button on the Confirmation dialog is pressed, the Connect To DB2 Database dialog shown in Figure C-7 will be displayed and you will be prompted for a user ID and password.

9. When this panel is displayed, provide a valid user ID and password and press the OK push button. A DB2 Message dialog like the one shown in Figure C-8 should appear.

10. After the connection has been configured and tested, a Client Configuration Assistant panel similar to the one shown in Figure C-9 should replace the Add Database SmartGuide, and the newly configured database should be listed in the Available DB2 Databases list control.
Figure C-7  The DB2 connection information dialog

Figure C-8  The "connection test successful" message dialog
How The Examples Are Stored On The Diskette

To aid in application development, each of the examples shown throughout the book is provided, in electronic format, on the CD that accompanies this book. This CD contains both a 90-day evaluation copy of DB2 Universal Database Personal Edition and a subdirectory that contains the example programs. This subdirectory (examples) is divided into the following subdirectories:

■ Chapter-05
■ Chapter-06
■ Chapter-07
■ Chapter-08
■ Chapter-09
■ Chapter-10
■ Chapter-11
■ Chapter-12
■ Chapter-13
■ Chapter-14

Each of these directories contains the examples that were presented in the corresponding chapters of the book.
How To Compile And Execute The Examples

The following steps can be performed to recompile and execute any example program stored on the diskette:

1. Create a directory on your hard drive and copy the example program into it.
2. Invoke the Visual C++ 6.0 Developer Studio.
3. Select New from the Visual C++ 6.0 Developer Studio File menu.
4. When the New panel is displayed, highlight Win32 Console Application, enter the appropriate location (hard drive and directory) and a project name that corresponds to the name of the directory that contains the example program (see Figure C-10).
5. When the Win32 Console Application wizard is displayed, select the Empty Project radio button and press the Finish button (see Figure C-11).
6. When the new project is created, select the Project, Settings . . . menu item, choose All Configurations in the Settings For: combo box, and enter the location (path) of the DB2 SDK header files in the C/C++, Preprocessor, Additional include directories entry field (see Figure C-12).
Figure C-10  The New Projects panel of the Visual C++ 6.0 Developer Studio

Figure C-11  The first panel of the Win32 Console Application wizard

Figure C-12  The C/C++ Project Settings panel of the Visual C++ 6.0 Developer Studio
7. Next, enter the location (path) of the DB2 SDK library files in the Link, Input, Additional library path entry field (see Figure C-13).

Figure C-13  The Link/Input Project Settings panel of the Visual C++ 6.0 Developer Studio

8. Then, add the DB2API.LIB and DB2APIE.LIB libraries to the list of library files shown in the Link, General, Object/library modules entry field (see Figure C-14).
Figure C-14  The Link/General Project Settings panel of the Visual C++ 6.0 Developer Studio

9. Once the new project settings have been saved, select the File View tab in the right-hand window, highlight the Source Files project files entry, press the right mouse button to display the pop-up menu, and select the Add Files to Folder . . . menu item (see Figure C-15).

Figure C-15  The Add Files to Folder . . . menu item
10. Highlight the example file name shown in the Insert Files into Project dialog and press the OK push button (see Figure C-16).
Figure C-16  The file selection window

NOTE: All files with .SQC extensions must be precompiled before they will appear in the file selection window (and before they can be added to a project). The following batch file (sqc_comp.bat) was used to precompile the .SQC examples:
REM *** BUILD EMBEDDED SQL-API EXAMPLES COMMAND FILE ***
echo off

REM *** CONNECT TO THE SAMPLE DATABASE ***
db2 connect to sample user userid using password

REM *** PRECOMPILE THE .SQC SOURCE CODE FILE ***
db2 prep %1.sqc target cplusplus bindfile using %1.bnd

REM *** RENAME THE GENERATED FILE ***
copy %1.cxx %1.cpp
del %1.cxx

REM *** BIND THE APPLICATION TO THE SAMPLE DATABASE ***
db2 bind %1.bnd

REM *** DISCONNECT FROM THE SAMPLE DATABASE ***
db2 connect reset
11. Compile and execute the program.
NOTE: An appropriate user ID and password must be provided in the SQLConnect() function calls that are used to connect to the DB2 SAMPLE database. Also, if the user ID specified is not the same as the user ID of the creator of the SAMPLE database, SQL statements that interact with tables in the SAMPLE database may have to be qualified. If this is the case, contact the System Administrator for information about the appropriate qualifier to use.
BIBLIOGRAPHY

International Business Machines Corporation. 1997. IBM DB2 Universal Database Administration: Getting Started, Version 5. S10J-8154-00. IBM Corporation.
International Business Machines Corporation. 1998. IBM DB2 Universal Database Administration Guide, Version 5.2. S10J-8157-01. IBM Corporation.
International Business Machines Corporation. 1997. IBM DB2 Universal Database API Reference, Version 5. S10J-8167-01. IBM Corporation.
International Business Machines Corporation. 1998. IBM DB2 Universal Database Command Reference, Version 5.2. S10J-8166-01. IBM Corporation.
International Business Machines Corporation. 1997. IBM DB2 Universal Database Embedded SQL Programming Guide, Version 5. S10J-8158-00. IBM Corporation.
INDEX

Note: Boldface numbers indicate illustrations.
ACTIVATE DATABASE, 135, 136
activation time, triggers, 13 ADD NODE, 67,476 aliases, 5, 14,15,515 ALTER TABLE, 10 APPC, 67 application development, 43-59
application programming interface (API), 48,49, 52-54,64,59 arithmetic, 46 binding, 57,58-59
call level interface (CLI), 48,49,51-52,63,59
Client Application Enabler (CAE), 55 COMMITLROLLBACK, 58 compilers, 55 database management systems (DBMS), 52 Database Manager, 57 design of applications, 46-48
distributed units of work (DUOW),48 elements of an application, 46,47
EndTransO, 58 environment for development, 55 input, 46 inputloutput (YO), 46 libraries, 49,52 linking databases,55 logic, 46,48 memory, 46
MicrosoR Foundation Class (MFC),49 open database connectivity (ODBC),52 output, 46 precompilers, 49-50,55, 58-59
Presentation Manager, 49 programming languages, 48,49,66
remote units of work (RUOW), 48 schema, 57 Software Developers Kit (SDK), 55 source code files, 58-59 SQL statements, 48,49-50, 61
static vs. dynamic SQL, 49-50,61
stored procedures,55 test data generationfor applications, 57 testing applications,56-57 transaction logging, 48 transactions, 48,57-58 user interfaces,49 volatility of data, 48 application preparationAPIs, 65
application programming interface (API), 48,49, 52-54,64,59,61-73,75-82
accounting strings,77, 79-80
API function return codes, 70
application preparation APIs, 65 BACKUP API, 54,64 backuphecovery APIs, 65
basic structure,62,63 binding, 77,78 body of API source code files, 62 call level interface (CLI) vs., 61-62
categories of API, 52-54, 62-66
C-language versionof APIs, 66
clientlserver diredory management APIs, 64 data handlingAPIs, 415470,415
data structuresused in API functions, 67 data utilityAPIs, 65 database configuration
APIs, 64,187-233 database connection services (DCS), 237 database controlAPIs, 64, 133-185
database directory management APIs, 64, 235-289
Database Manager configurationM I S , 64, 187-233
Database Managercontrol H I S , 64,133-185 database migrationM I S , 329-414
Database SystemMonitor APIs, 65,505-552 debugging APIs, 71 disaster recovery APIs, 329-414
dynamic link library (DLL), 71
embedded SQL vs., 77,78
database connection
  setting values, 135
  setting connection setting values, 135
commitment control
concurrency
parameter values, storage
database directories
database architecture
Index API function returncodes, 70 first-phase errors,511 GET ERROR MESSAGE,
70,81 GET SQLSTATE, 71,81 manually resolving indoubt transactions, 512-514 second-phase errors,512 SQLCA return codes, 70,81 SQLSTATE codes, 71,81 Transaction Manager database errors,512 two-phase commit, 511-512 escalation of locks, 32,37-38 ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE, 508,515 event monitors,5,14,15 event, triggers,13 exception handling, 77,79,
188-189
exclusive 6 lock, 31,34 executable applications,APIs, creating, 71,72 executable load module (EXE)
FORGET TRANSACTION STATUS, 513-514,513,515 FORTRAN, 49,66,79 function returncodes, 70 functions, data structures used in API functions,
67-69 general application programming APIs, 65 GET ADDRESS, 79 GET CURRENT CONTEXT, GET DATABASE CONFIGURATION, 189,
190,475,476 GET DATABASE CONFIGURATION DEFAULTS, 189,190 GET DATABASE MANAGER CONFIGURATION, 188,
190,475,476
71,81 292 FETCH TABLESPACE QUERY, 290,291 fields, 7 file formats supportedfor import, export,load,
419-420 file naming conventions, 3 first-phase errors,511 flags, 67 float (real) datatypes, 8 FORCE APPLICATION,136 foreign keys, 11
HAVING, 46 header of API source code, 62 history files, recovery, 16-17,
16,333-335,334
555
GET DATABASE MANAGER CONFIGURATION DEFAULTS, 189,190 GET DCS DIRECTORY APIs, 71 ENTRIES, 237,238,239 EXPORT, 69,57,67,416,420, GET DCS DIRECTORY 554 ENTRY FOR DATABASE, supported file formats, 237,238,239 419-420 GET ERROR MESSAGE,70, FETCH TABLESPACE CONTAINER QUERY,290,
GET TABLE PARTITIONING INFORMATION, 472,476 GET/UPDATE MONITOR SWITCHES, 507,515 granularity, triggers,13 graphic datatypes, 9 GROUP BY, 46
GET NEXT DATABASE DIRECTORY ENTRY,236,
238 GET NEXT NODE DIRECTORY SCAN,237,
238
GET NEXT RECOVERY FILE HISTORY ENTRY,
335,336 GET ROW PARTITIONING INFORMATION, 476 GET ROW PARTITIONING NUMBER, 472 GET SNAPSHOT,508,515 GET SQLSTATE,71,81
I/O parallelism, 473 IBM, 4 IMPORT, 57,67,69,416,420, 554 LOAD vs. IMPORT, 416-417 supported file formats,
419-420 inconsistent data,problems,
330,331 indexes, 5,lO-11,15 binary trees,10 columns, 11 composite keys, 11 foreign keys, 11 keys, 10,ll primary keys, 11 records, 11 rows, 11 unique keys, 11 indoubt transactions,65,69,
505,512 COMMIT ANINDOUBT TRANSACTION,
513-514,515 FORGET TRANSACTION STATUS, 513-514,515 LIST DRDA INDOUBT TRANSACTIONS, 515 LIST INDOUBT TRANSACTION WITH PROMPTING, 513-514,
515 LIST INDOUBTTRANSACTIONS, 512,515
Index manually resolving indoubt transactions, 512-514 ROLLBACK A N INDOUBT TRANSACTION, 513-514,515 input, 46 input/output (WO), 46 V 0 parallelism, 473 INSERT, 13,31,38,57,418 INSTALL SIGNAL HANDLER, 79 integer (INT) data types, 8 intent exclusive (K)locks, 32 intent none (IN) locks, 32 intent share (IS)locks, 32 intention, locking, 32 interleaved transactions,25 inter-partition parallelism, 473,474,475 INTERRUPT, 79 INTERRUPT CONTEXT, 555 interrupts, 77,188-189 Intersection operation, 4 intra-partition parallelism, 473,474,475 IPWSPX, 68 isolation levels, 25-29 ISOLATION option, 29 Join operation, 4 keys, 1 0 , l l large objects (LOB),5 libraries, 49 CL1 and, 52 linking databases, 55 LIST DRDA INDOUBT TRANSACTIONS, 515 LIST INDOUBTTRANSACTION WITH PROMF'TING, 513414,515 LIST INDOUBT TRANSACTIONS, 512,515 LOAD, 57,69,417-419,420 IMPORT vs. LOAD, 416-417
LOAD QUERY, 420 supported file formats, 419-420 LOAD QUERY, 420 LOCK TABLE, 37-38 lock wait, 29 locking, 23,29-39,30 attributes of locks, 30 COMMIT, 31 compatibility of locks, 32, 35-36,36 concurrency, 32,33-34 conversion of locks, 32,37 Database SystemMonitor, 506 deadlocks, 32,34-35,34 duration of lock, 30 escalation of locks, 32,37-38 exclusive (X) lock, 31,34 intent exclusive (K)locks, 32 intent none (IN) locks, 32 intent share(IS) locks, 32 intention, 32 lock wait, 29 mode of lock, 30 next keyexclusive (M) lock, 31 next key share (NS) lock, 31 next key weak exclusive ( N W )lock, 31 object of lock, 30 performance vs. locking, 32-38 ROLLBACK, 31 share (S) lock, 31 share with intentexclusive (SIX)locks, 32 size of lock, 30 states, lock states, 30-32 super exclusive (Z)lock, 31 update (U)lock, 31 weak exclusive (W) lock, 31 Locks group, Database System Monitor, 506-507 log files (See also transaction logging), 16-17,69
633
ASYNCHRONOUS READ LOG, 336 space management,39 synchronization of databases, 38-39 Log Manager, 39 extract specific logrecord, ASYNCHRONOUS READ LOG, 336 logic, 46,48 logical drives, 19 long varchar datatypes, 9 long vargraphic datatypes, 9 map, partitioning, 472 memory, 46 synchronization of databases, 38-39 memory copy functions, 79, 188-189 Microsoft, 52 Microsoft Foundation Class ( W C ) , 49 MIGRATE DATABASE, 330, 336 migration, 330,336 database migrationMIS, 329-414 MIGRATE DATABASE, 330,336 multipartition nodegroups, 472 named pipes,68 naming conventions H I S , 66 database names,20 directories and subdirectories, 18 . NetBIOS, 68 NetWare, 68,235 registeringlderegistering DB2 servers with, 237-238,239 network supportAPIs, 65 next key exclusive (M)lock, 31
Index next key share(NS) lock,
31
overlapping transactions,39 packages (access plans),5,13,
next key weakexclusive 0 15 lock, 31 parallelism (See also nodes and nodegroups, 68 Partitioning),472-475 coordinator node, 472 U 0 parallelism, 473 enabling database inter-partition parallelism, partitioning, 475-476 . 473,474,475 multipartition nodegroups, intra-partition Parallelism,
472 node management APIs, 64 partition management
MIS,471-504 workstation directories,20 nondelimited ASCII files supported for import, export, load,419-420 nonrepeatable reads,27,28 Novel1 NetWare (See NetWare) numeric datatypes, 8-9 objects, 4,5 open databaseconnectivity (ODBC) CL1 and, 52 transactions, 25 OPEN DATABASE DIRECTORY SCAN,236,
237,238 OPEN DCS DIRECTORY SCAN, 237,238,239 OPEN NODE DIRECTORY SCAN, 237,238 OPENRECOVERY HISTORY FILE SCAN, 335,
336 OPEN TABLESPACE CONTAINER QUERY,290,
291 OPEN TABLESPACE QUERY, 290,291 operating system(OS), 17 operational utilityAPIs, 65 ORDER BY, 46 output, 46
473,474,475 query parallelism,473,475 partitioning, 69,471-504 ADD NODE, 476 coordinator node, 472 CREATE DATABASE AT NODE, 476 DROP DATABASE AT NODE, 476 DROP NODE VERIFY,476 enabling database partitioning, 475-476 GET ROW PARTITIONING INFORMATION, 476 GET ROW PARTITIONING NUMBER, 472 GET TABLE PARTITIONING INFORMATION, 472,476 V 0 parallelism, 473 inter-partition parallelism,
473,474,475 intra-partition parallelism,
473,474,475 map of partitioning, 472 multipartition nodegroups,
472 parallelism, 472-475 partitioned databases,472 query parallelism,473,475 REDISTRIBUTE NODEGROUP, 476 SET RUNTIME DEGREE,
476 single-partition databases,
472 partitioned databases,472
PC integrated exchange format supportedfor import, export, load, 420 performance compatibility of locks, 32,
35-36,36 concurrency and lock size,
32,33-34 conversion deadlocks, 35 conversion of locks, 32,37 deadlocks, 32,34-35,34 escalation of locks, 32,
37-38 locking vs. performance,
3238 phantom reads, 27,28 physical directories, 17,18-l9 pointers, 77,79,188-l89 PRECOMPILE PROGRAM,
69,78,554 precompiling, 49-50,55,
58-59,69,188-l89 PREPARE, 29,510 Presentation Manager, 49 primary and secondarylog files, 39 primary keys, 11 Product operation,4 programming languages,48,
49,66 Projection operation, 4 PRUNE RECOVERY HISTORY FILE, 335,336 qualifiers, 14-15,15 QUERY CLIENT, 135,136 QUERY CLIENT INFORMATION, 136 query parallelism,473,475 QUIESCETABLESPACES FOR TABLE,419,420 read stability,27,28 REBIND, 78 RECONCILE, 336 records, 7,11
Index recovery, 16-17,23,67,4041 autorestart parameters, 330 backuphecovery APIs, 65 commit'operations, 40-41 disaster recovery APIs, 329-414
inconsistent data,problems, 330,331
recovery history files, 333-335,334
redirected restore operations, 332 RESTART DATABASE, 330,336
RESTORE DATABASE, 332,336
roll forward recovery, ROLLFORWARD DATABASE, 333,336 rollback operations, 40-41 SET TABLESPACE CONTAINER, 332,336 recovery history file, 16-17, 69,329,333-335,334
CLOSE RECOVERY HISTORY FILE SCAN, 335,336
GET NEXT RECOVERY FILE HISTORY ENTRY, 335,336
OPEN RECOVERY HISTORY FILE SCAN, 335,336
PRUNE RECOVERY HISTORY FILE, 335,336 sqluhinfo structures,335, 336
UPDATE RECOVERY HISTORY FILE, 335,336 recovery log, 16-17 redirected restore operations, 332
REDISTRIBUTE NODEGROUP, 476 referential constraint,10 REGISTER, 238,239
relational algebra,4 relational databases,3,4 remote server connection M I S , 65 remote units of work (RUOW), 48 REORGANIZE TABLE,290 updating statisticson table, 292
repeatable reads,27 RESET DATABASE CONFIGURATION, 189, 190
RESET DATABASE MANAGER CONFIGURATION, 189, 190
RESET MONITOR, 508,515 resetting configuration values, 188-189 RESTART DATABASE, 330, 336
RESTORE, 69 restore (See recovery) RESTORE DATABASE, 332, 336
result tables, 7 REXX, 49,66 roll back operation, 28 recovery and data and, 4041
transactions, 25 roll forward recovery, 333,336 ROLLBACK, 31,34,38,79 ROLLBACK AN INDOUBT TRANSACTION, 513-514, 513,515
ROLLFORWARD DATABASE, 67,69,333, 336
sections, 13 SELECT, 28,46,57,416,420 Selection operation, 4 serializable transactions, 2627,553
SET ACCOUNTING STRING, 80
SET APPLICATION CONTEXT TYPE,555 SET CLIENT, 135,136 SET CLIENT INFORMATION, 136 SET CONNECTION, 510 SET CONSTRAINTS, 10 SET RUNTIME DEGREE, 476
SET TABLESPACE CONTAINER, 332,336 share (S) lock, 3 1 share with intentexclusive (SIX)locks, 32 shared libraries,APIs, 71 signals, 77, 79, 188-189 simple (unqualified) name, 15 single-partition databases, 472 smallint datatypes, 8 snapshot monitor, 507-508 Software Developers Kit (SDK), 55 Sorts group, Database System Monitor, 506-507 source code files, 58-59,62 SQL, 5,13,48,49-50,61 embedded SQL statements vs. CLI, 52 embedded SQL vs.APIs, 77, 78
SQLCA, 510 static vs. dynamic SQL, 49-50,61
rows, 4,7, 1 1 RUN STATISTICS,291,292 running APIs, 71
SQL AccessGroup (SAG), 51 SQL statements group, Database SystemMonitor,
schema, 14-15,15,57 second-phase errors, 512
SQLCA return codes, 70,81,
506-507
510
SQLSTATE codes, 71,81 sqluhinfo structures,335,336 START DATABASE MANAGER, 68,134,136 start-up, 68 state, 17 static vs. dynamic SQL, 49-50,61
STOP DATABASE MANAGER, 68,134,136 stored procedures,55 strings, accounting strings, 77,79-80
structured query language (See SQL) super exclusive (Z) lock, 3 1 support objects, 4 synchronization of databases, 38-39
system catalog, system catalog view, 15-16 system databasedirectory, 18, 19,236 APIs, 236
CATALOG DATABASE, 236,238
CLOSE DATABASE DIRECTORY SCAN,236, 238
GET NEXT DATABASE DIRECTORY ENTRY, 236,238
OPEN DATABASE DIRECTORY SCAN,236, 238
UNCATALOG DATABASE, 236,238
system managed spaces (SMS), 5
system resources, 17
table check constraint, 10
table spaces, 5-6, 7, 67, 68, 69, 290
  CLOSE TABLESPACE CONTAINER QUERY, 290, 292
  CLOSE TABLESPACE QUERY, 290, 291
  containers in table spaces, 290, 291, 332, 336
  database managed spaces (DMS), 5, 290, 291
  FETCH TABLESPACE CONTAINER QUERY, 290, 292
  FETCH TABLESPACE QUERY, 290, 291
  management APIs, 65
  OPEN TABLESPACE CONTAINER QUERY, 290, 291
  OPEN TABLESPACE QUERY, 290, 291
  QUIESCE TABLESPACES FOR TABLE, 419, 420
  recovery history files, 333-335, 334
  redirected restore operations, 332
  restoring, RESTORE DATABASE, 332, 336
  retrieve information, 290
  roll forward recovery, ROLLFORWARD DATABASE, 333, 336
  SET TABLESPACE CONTAINER, 332, 336
  system managed spaces (SMS), 5, 290, 291
  table space management APIs, 289-329
  TABLESPACE CONTAINER QUERY, 290, 291
  TABLESPACE QUERY, 290, 291
tables, 5, 7-10, 15
  base tables, 7
  check constraints, 10
  columns, 7
  data types, 8-10
  fields, 7
  indexes, 10-11, 11
  QUIESCE TABLESPACES FOR TABLE, 419, 420
  records, 7
  REORGANIZE TABLE, 290, 292
  reorganizing fragmented data in tables, 290, 292
  result tables, 7
  rows, 7
  RUN STATISTICS, 291, 292
  statistics on tables, 291, 292
  table management APIs, 289-329
  testing applications, 56-57
  updating statistics on table, 291
  values, 7
  views, 12-13, 12
Tables group, Database System Monitor, 506-507
TABLESPACE CONTAINER QUERY, 290, 291
TABLESPACE QUERY, 290, 291
TCP/IP, 68
test data generation for applications, 57
testing, APIs, 71
testing applications, 56-57
threaded applications (See contexts and threaded applications)
time data types, 9
timestamp data types, 9
transaction logging, 23, 38-41, 48
  commit operations, 40-41
  lengthy transactions, 39
  Log Manager, 39
  managing log file space, 39
  overlapping transactions, 39
  primary and secondary log files, 39
  recovery, database recovery, 40-41
  rollback operations, 40-41
  synchronization of databases, 38-39
Transaction Manager, 509-510
  errors, error handling, 512
  indoubt transactions, 512
  transactions, 25
  two-phase commit, 509-512, 611
  XA-compliant, two-phase commit, 514-515
transactions (See also indoubt transactions), 23, 24-25, 48, 57-58, 69
  call level interface (CLI) and, 25
  COMMIT and commit operations, 25, 28, 31, 34, 38, 40-41, 79, 505, 510
  commitment control, 25
  compatibility of locks, 32, 35-36, 36
  concurrency, 25-29
  conversion deadlocks, 35
  conversion of locks, 32, 37
  cursor stability, 27, 28
  deadlocks, 32, 34-35, 34
  dirty reads, 27
  distributed unit of work (DUOW), 509
  EndTran(), 58
  escalation of locks, 32, 37-38
  first-phase errors, 511
  indoubt transactions (See indoubt transactions)
  interleaved transactions, 25
  isolation levels, 25-29
  lengthy transactions, 39
  manually resolving indoubt transactions, 512-514
  nonrepeatable reads, 27, 28
  open database connectivity (ODBC) and, 25
  overlapping transactions, 39
  phantom reads, 27, 28
  read stability, 27, 28
  repeatable reads, 27
  roll back operations, 25, 28
  ROLLBACK, 58
  second-phase errors, 512
  serializable transactions, 25-27
  synchronization of databases, 38-39
  transaction APIs, 65
  transaction logging, 38-41
  Transaction Manager, 25, 509-510
  two-phase commit, 509-512, 611
  two-phase commit with XA-compliant Transaction Manager, 514-515
  uncommitted read, 27, 28
Transactions (Units of Work) group, Database System Monitor, 506-507
transition tables, 14
transition variables, 14
triggers, 5, 13-14, 15
  action to be triggered, 14
  activation time, 13
  cascading, 14
  event, 13
  granularity, 13
  set of affected rows, 13
  subject table, 13
two-phase commit, 509-512, 611
  XA-compliant Transaction Manager, 514-515
UNCATALOG, 236
UNCATALOG DATABASE, 236, 237, 238
UNCATALOG DCS DATABASE, 237, 238
UNCATALOG NODE, 237, 238
uncommitted read, 27, 28
Union operation, 4
unique constraints, 10
unique keys, 11
unit of work (See transactions)
Universal Database (UDB) architecture, 3
UPDATE, 13, 31, 33, 38
update (U) lock, 31
UPDATE DATABASE CONFIGURATION, 189, 190
UPDATE DATABASE MANAGER CONFIGURATION, 189, 190
UPDATE RECOVERY HISTORY FILE, 335, 336
user defined data types (UDT), 5, 15
user defined functions (UDF), 5, 15
user interfaces, 49
validate external references, RECONCILE, 336
values, 7
varchar data types, 9
vargraphic data types, 9
variables, transition variables, 14
viewing configuration values, 188-189
views, 5, 12-13, 12, 15
  testing applications, 56-57
Visual BASIC, 49
volatility of data, 48
volume directories, 17, 19, 236-237
  APIs, 237
  CATALOG, 236
  CATALOG DATABASE, 237
  OPEN DATABASE DIRECTORY SCAN, 237
  UNCATALOG, 236
  UNCATALOG DATABASE, 237
wait, lock wait, 29
WAV files, 10
weak exclusive (W) lock, 31
WHERE clause, 33, 46
work sheet format supported for import, export, load, 419-420
workstation (node) directories, 18, 20, 236
  APIs, 237
  CATALOG NODE, 237, 238
  CLOSE NODE DIRECTORY SCAN, 237, 238
  GET NEXT NODE DIRECTORY ENTRY, 237, 238
  OPEN NODE DIRECTORY SCAN, 237, 238
  UNCATALOG NODE, 237, 238
X/Open Call Level Interface (X/Open CLI), 51-52
X/Open, 51-52
XA-compliant Transaction Manager, two-phase commit processing, 514-515
API INDEX
ACTIVATE DATABASE, 160
ADD NODE, 477
ASYNCHRONOUS READ LOG, 387
ATTACH, 165
ATTACH AND CHANGE PASSWORD, 169
ATTACH TO CONTEXT, 566
BACKUP DATABASE, 342
BIND, 99
CATALOG DATABASE, 240
CATALOG DCS DATABASE, 270
CATALOG NODE, 256
CHANGE DATABASE COMMENT, 247
CLOSE DATABASE DIRECTORY SCAN, 255
CLOSE DCS DIRECTORY SCAN, 281
CLOSE NODE DIRECTORY SCAN, 269
CLOSE RECOVERY HISTORY FILE SCAN, 404
CLOSE TABLESPACE CONTAINER QUERY, 316
CLOSE TABLESPACE QUERY, 300
COMMIT AN INDOUBT TRANSACTION, 545
COPY MEMORY, 117
CREATE AND ATTACH TO AN APPLICATION CONTEXT, 559
CREATE DATABASE, 150
CREATE DATABASE AT NODE, 482
DEACTIVATE DATABASE, 163
DEREFERENCE ADDRESS, 118
DEREGISTER, 286
DETACH, 173
DETACH AND DESTROY APPLICATION CONTEXT, 562
DETACH FROM CONTEXT, 567
DROP DATABASE, 159
DROP DATABASE AT NODE, 485
DROP NODE VERIFY, 480
ESTIMATE DATABASE SYSTEM MONITOR BUFFER SIZE, 524
EXPORT, 421
FETCH TABLESPACE CONTAINER QUERY, 314
FETCH TABLESPACE QUERY, 296
FORCE APPLICATION, 146
FORGET TRANSACTION STATUS, 552
FREE MEMORY, 320
GET ADDRESS, 116
GET AUTHORIZATIONS, 128
GET CURRENT CONTEXT, 568
GET DATABASE CONFIGURATION, 213
GET DATABASE CONFIGURATION DEFAULTS, 225
GET DCS DIRECTORY ENTRIES, 278
GET DCS DIRECTORY ENTRY FOR DATABASE, 280
GET ERROR MESSAGE, 122
GET INSTANCE, 107
GET NEXT DATABASE DIRECTORY ENTRY, 253
GET NEXT NODE DIRECTORY ENTRY, 267
GET NEXT RECOVERY HISTORY FILE ENTRY, 399
GET ROW PARTITIONING NUMBER, 494
GET SNAPSHOT, 527
GET SQLSTATE MESSAGE, 125
GET TABLE PARTITIONING INFORMATION, 490
GET TABLESPACE STATISTICS, 307
GET/UPDATE MONITOR SWITCHES, 516
IMPORT, 430
INSTALL SIGNAL HANDLER, 109
INTERRUPT, 112
INTERRUPT CONTEXT, 572
LIST DRDA INDOUBT TRANSACTIONS, 536
LIST INDOUBT TRANSACTIONS, 540
LOAD, 446
LOAD QUERY, 464
MIGRATE DATABASE, 337
OPEN DATABASE DIRECTORY SCAN, 250
OPEN DCS DIRECTORY SCAN, 276
OPEN NODE DIRECTORY SCAN, 264
OPEN RECOVERY HISTORY FILE SCAN, 394
OPEN TABLESPACE CONTAINER QUERY, 310
OPEN TABLESPACE QUERY, 293
PRECOMPILE PROGRAM, 82
PRUNE RECOVERY HISTORY FILE, 409
QUERY CLIENT, 174
QUERY CLIENT INFORMATION, 180
QUIESCE TABLESPACES FOR TABLE, 466
REBIND, 103
RECONCILE, 361
REDISTRIBUTE NODEGROUP, 500
REGISTER, 281
REORGANIZE TABLE, 320
RESET DATABASE CONFIGURATION, 233
RESET DATABASE MANAGER CONFIGURATION, 212
RESET MONITOR, 521
RESTART DATABASE, 339
RESTORE DATABASE, 351
ROLLBACK AN INDOUBT TRANSACTION, 548
ROLLFORWARD DATABASE, 372
RUN STATISTICS, 324
SET ACCOUNTING STRING, 119
SET APPLICATION CONTEXT TYPE, 555
SET CLIENT, 179
SET CLIENT INFORMATION, 185
SET RUNTIME DEGREE, 487
SET TABLESPACE CONTAINERS, 365
SINGLE TABLESPACE QUERY, 304
START DATABASE MANAGER, 136
START DATABASE MANAGER, 137
STOP DATABASE MANAGER, 142
TABLESPACE CONTAINER QUERY, 316
TABLESPACE QUERY, 300
UNCATALOG DATABASE, 244
UNCATALOG DCS DATABASE, 273
UNCATALOG NODE, 262
UPDATE DATABASE CONFIGURATION, 228
UPDATE DATABASE MANAGER CONFIGURATION, 207
UPDATE RECOVERY HISTORY FILE, 405
ABOUT THE AUTHOR
Roger Sanders is an Educational Multimedia Assets Specialist with SAS inSchool, a division of SAS Institute, Inc. focusing on school technologies. He has been designing and programming software applications for the IBM Personal Computer for more than 15 years and specializes in system programming in C, C++, and 80x86 Assembly Language. He has written several computer magazine articles, and he is the author of The Developer's Handbook to DB2 for Common Servers, ODBC 3.5 Developer's Guide, and DB2 Universal Database Application Programming Interface Developer's Guide. His background in database application design and development is extensive. It includes experience with DB2 Universal Database, DB2 for Common Servers, DB2 for MVS, INGRES, dBASE, and Microsoft ACCESS.